BackgroundCheck.run
Search For

Lu L Luo, 68501 Moorpark Way UNIT 126, Mountain View, CA 94041

Lu Luo Phones & Addresses

Mountain View, CA   

Campbell, CA   

26096 Dougherty Ct, Carmel, CA 93923    831-626-1368

Carmel by the Sea, CA   

1013 Miller St, San Jose, CA 95129    408-252-6673

Cupertino, CA   

Martinsburg, WV   

Carmel-by-the-Sea, CA

Monterey, CA   

Johnstown, PA   

Mentions for Lu L Luo

Lu Luo resumes & CV records

Resumes


Casualty Treaty Underwriter At Swiss Re

Position:
Graduate at Swiss Re
Location:
Beijing, China
Industry:
Financial Services
Work:
Swiss Re since Sep 2011
Graduate
Education:
Vanderbilt University 2008 - 2010
Master of Arts (M.A.), Economics
University of International Business and Economics 2004 - 2008
Bachelor of Arts (B.A.), International Trade and Economics

Lu Luo

Location:
United States
Education:
Vanderbilt University 2008 - 2010
Master, Economics

Lu Luo

Location:
United States

Lu Luo

Location:
United States

Publications & IP owners

Us Patents

Apparatus And Method For Detecting Proximate Devices

US Patent:
2013030, Nov 21, 2013
Filed:
May 21, 2012
Appl. No.:
13/476693
Inventors:
Vidya Raghavan Setlur - Portola Valley CA, US
Lu Luo - Sunnyvale CA, US
David Alexander Dearman - San Bruno CA, US
Hawk Yin Pang - San Jose CA, US
Raja Bose - Mountain View CA, US
Vivek Vishal Shrivastava - San Francisco CA, US
Assignee:
NOKIA CORPORATION - Espoo
International Classification:
H04B 7/24
US Classification:
455 412
Abstract:
An apparatus, method, and computer program product are described that provide for a user to share content with other users who are proximate to his or her device in a simple and intuitive manner. In some embodiments, a “wave” gesture is used to identify users of devices that are nearby to the source user's device and with whom the source user may communicate, such as to share content. Upon receiving a first orientation input, a scanning mode may be initiated during which one or more devices proximate the apparatus are determined. A second orientation input that is different from the first orientation input may then be received, and the scanning mode may be terminated in response. As a result, communication with at least one selected device of the one or more devices determined to be proximate the apparatus may be facilitated.
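
As a rough illustration of the flow the abstract describes (start scanning on one orientation input, stop on a different one, then communicate with a discovered device), here is a minimal Python sketch. The class, method names, and the assumed radio interface are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch of the scan-on-gesture flow described in the abstract.
# The radio object is assumed to expose start_scan/stop_scan/connect.

class ProximitySharing:
    def __init__(self, radio):
        self.radio = radio
        self.scanning = False
        self.last_orientation = None
        self.nearby = []

    def on_orientation_input(self, orientation):
        """First orientation input starts scanning; a different one stops it."""
        if not self.scanning:
            self.last_orientation = orientation
            self.scanning = True
            self.radio.start_scan()                 # begin discovering proximate devices
        elif orientation != self.last_orientation:
            self.nearby = self.radio.stop_scan()    # devices found during the scan
            self.scanning = False

    def share_with(self, device, content):
        """Facilitate communication with a selected proximate device."""
        if device in self.nearby:
            self.radio.connect(device).send(content)
```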

System And Method For Controlling Operational Modes For Xr Devices For Performance Optimization

US Patent:
2022038, Dec 1, 2022
Filed:
May 23, 2022
Appl. No.:
17/750960
Inventors:
- Suwon-si, KR
Lu Luo - Cupertino CA, US
International Classification:
G06F 1/329
G06F 1/3212
Abstract:
A method includes obtaining a request for one of multiple operational modes from an application installed on an extended reality (XR) device or an XR runtime/renderer of the XR device. The method also includes selecting a first mode of the operational modes, based at least partly on a real-time system performance of the XR device. The method also includes publishing the selected first mode to the XR runtime/renderer or the application. The method also includes performing a task related to at least one of image rendering or computer vision calculations for the application, using an algorithm associated with the selected first mode.
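
A minimal sketch of the mode-selection idea in this abstract follows. The mode names, thresholds, and function names are assumptions for illustration only; the patent does not specify them.

```python
# Illustrative only: pick an operational mode from the application's request
# plus real-time system performance, then publish it to subscribers
# (e.g. the XR runtime/renderer and the application).

MODES = ["high_accuracy", "balanced", "power_saving"]

def select_mode(requested_mode, frame_time_ms, battery_pct):
    """Select a mode based on the request and current device performance."""
    if frame_time_ms > 20 or battery_pct < 15:
        return "power_saving"      # device is struggling: override the request
    if frame_time_ms > 12:
        return "balanced"
    return requested_mode if requested_mode in MODES else "balanced"

def publish_mode(mode, subscribers):
    """Publish the selected mode to all registered callbacks."""
    for callback in subscribers:
        callback(mode)
```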

Intelligence-Based Editing And Curating Of Images

US Patent:
2022020, Jun 30, 2022
Filed:
Mar 17, 2022
Appl. No.:
17/697741
Inventors:
- Suwon-si, KR
Euisuk Chung - Cupertino CA, US
Yingen Xiong - Mountain View CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G06T 5/00
G06T 5/20
G06K 9/62
G06V 20/10
Abstract:
In one embodiment, a method includes accessing a plurality of image frames captured by one or more cameras, classifying one or more first objects detected in one or more first image frames of the plurality of image frames as undesirable, applying a pixel filtering to the one or more first image frames to replace one or more first pixel sets associated with the one or more first objects with pixels from one or more second image frames of the plurality of image frames to generate a final image frame, and providing the final image frame for display.
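
The core pixel-filtering step the abstract describes can be illustrated with a short NumPy sketch: pixels covered by a mask of the undesirable object are replaced with pixels from another frame. The function name and the assumption that the frames are already aligned are mine, not the patent's.

```python
import numpy as np

def replace_undesirable(working_frame, other_frame, undesirable_mask):
    """working_frame, other_frame: HxWx3 arrays; undesirable_mask: HxW bool array.

    Pixels flagged as undesirable in the working frame are filled in with
    the corresponding pixels from the other frame.
    """
    final = working_frame.copy()
    final[undesirable_mask] = other_frame[undesirable_mask]
    return final
```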

Depth Map Re-Projection On User Electronic Devices

US Patent:
2021032, Oct 21, 2021
Filed:
Jun 30, 2021
Appl. No.:
17/363678
Inventors:
- Suwon-si, KR
Yingen Xiong - Mountain View CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G06T 19/00
G06T 15/20
G02B 27/00
H04N 13/128
G06T 7/73
Abstract:
A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.
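
A loose sketch of the re-warping idea follows: when the image stream is interrupted, feature points carrying position and per-frame motion are used to shift content of the last rendered frame forward. It is illustrative only, omits the depth and spatiotemporal handling the abstract mentions, and all names are assumptions.

```python
import numpy as np

def rewarp(last_frame, feature_points):
    """Predict the next frame by moving feature-point pixels along their motion.

    feature_points: list of dicts with integer 'pos' (y, x) and 'velocity' (dy, dx).
    """
    predicted = last_frame.copy()
    h, w = last_frame.shape[:2]
    for fp in feature_points:
        y, x = int(fp["pos"][0]), int(fp["pos"][1])
        dy, dx = fp["velocity"]
        ny, nx = int(y + dy), int(x + dx)
        if 0 <= ny < h and 0 <= nx < w:
            predicted[ny, nx] = last_frame[y, x]   # carry the feature's pixel forward
    return predicted
```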

System And Method For Reduced Communication Load Through Lossless Data Reduction

US Patent:
2021031, Oct 7, 2021
Filed:
Aug 11, 2020
Appl. No.:
16/990779
Inventors:
- Suwon-si, KR
Yingen Xiong - Mountain View CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G02B 27/01
G06T 11/00
G06T 19/20
G06F 3/01
Abstract:
A method includes obtaining, from a memory of an electronic device connected to a head mounted display (HMD), a first reference frame, wherein the first reference frame comprises a first set of pixels associated with a first time. The method includes rendering, at the electronic device, a source image as a new frame, wherein the new frame includes a second set of pixels associated with a display to be provided by the HMD at a second time, and generating, by the electronic device, a differential frame, wherein the differential frame is based on a difference operation between pixels of the new frame and pixels of the first reference frame to identify pixels unique to the new frame. Still further, the method includes sending the differential frame to the HMD, and storing the new frame in the memory of the electronic device as a second reference frame.
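
The differential-frame operation described here amounts to keeping only the pixels that changed relative to a stored reference frame. A minimal NumPy sketch, with hypothetical function names, might look like this:

```python
import numpy as np

def make_differential_frame(reference_frame, new_frame):
    """Return (mask, values): where the new frame differs and the new pixel values."""
    mask = np.any(new_frame != reference_frame, axis=-1)   # HxW bool for HxWx3 frames
    return mask, new_frame[mask]

def apply_differential_frame(reference_frame, mask, values):
    """Reconstruct the new frame on the HMD from the reference frame and the diff."""
    frame = reference_frame.copy()
    frame[mask] = values
    return frame
```

Because the mask and changed-pixel values fully determine the difference, applying them to the same reference frame reconstructs the new frame exactly, which is what makes the data reduction lossless.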

Depth Map Re-Projection On User Electronic Devices

US Patent:
2021027, Sep 2, 2021
Filed:
Jul 29, 2020
Appl. No.:
16/942627
Inventors:
- Suwon-si, KR
Yingen Xiong - Mountain View CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G06T 19/00
G02B 27/00
G06T 7/73
H04N 13/128
G06T 15/20
Abstract:
A method includes rendering, on displays of an extended reality (XR) display device, a first sequence of image frames based on image data received from an external electronic device associated with the XR display device. The method further includes detecting an interruption to the image data received from the external electronic device, and accessing a plurality of feature points from a depth map corresponding to the first sequence of image frames. The plurality of feature points includes movement and position information of one or more objects within the first sequence of image frames. The method further includes performing a re-warping to at least partially re-render the one or more objects based at least in part on the plurality of feature points and spatiotemporal data, and rendering a second sequence of image frames corresponding to the partial re-rendering of the one or more objects.

Intelligence-Based Editing And Curating Of Images

US Patent:
2021006, Mar 4, 2021
Filed:
Jul 20, 2020
Appl. No.:
16/933860
Inventors:
- Suwon-si, KR
Euisuk Chung - Cupertino CA, US
Yingen Xiong - Mountain View CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G06T 5/00
G06K 9/00
G06K 9/62
G06T 5/20
Abstract:
A method implemented by a client device includes accessing a plurality of image frames captured by one or more cameras of the client device and generating a working image frame based at least in part on one or more of the plurality of image frames. The method further includes classifying one or more first objects detected in the working image frame based at least in part on a determined desirability of the one or more first objects. The one or more first objects are determined to be undesirable. The method further includes applying a pixel filtering process to the working image frame to replace one or more first pixel sets associated with the first objects with pixels from one or more image frames of the plurality of image frames to generate a final image frame, and displaying the final image frame on a display of the client device.

Emotion-Aware Reactive Interface

US Patent:
2019025, Aug 15, 2019
Filed:
Nov 8, 2018
Appl. No.:
16/184601
Inventors:
- Suwon-si, KR
Abhijit Bendale - Sunnyvale CA, US
Zhihan Ying - Hangzhou, CN
Simon Gibbs - San Jose CA, US
Lu Luo - Sunnyvale CA, US
International Classification:
G06F 9/451
H04M 1/725
G06K 9/00
G06F 3/0481
Abstract:
A computer-implemented method of providing an emotion-aware reactive interface in an electronic device includes receiving an image of a user as an input and identifying a multi-modal non-verbal cue in the image. The method further includes interpreting the multi-modal non-verbal cue to determine a categorization and outputting a reactive interface event determined based on the categorization.
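
The abstract's pipeline (image in, non-verbal cue identified, cue categorized, reactive interface event out) can be sketched in a few lines. The detector, categorizer, and event map below are stand-ins, not components named in the patent.

```python
# Hypothetical sketch of the emotion-aware reactive interface pipeline.

def reactive_interface_event(image, cue_detector, categorizer, event_map):
    cue = cue_detector(image)        # e.g. facial expression and head-pose features
    category = categorizer(cue)      # e.g. "happy", "confused", "neutral"
    return event_map.get(category)   # interface event to emit, if any
```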

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.