BackgroundCheck.run

Lifeng Liu, 54 Patricia Rd, Sudbury, MA 01776

Lifeng Liu Phones & Addresses

54 Patricia Rd, Sudbury, MA 01776   

19 Union St, Arlington, MA 02474    781-646-1790

Acton, MA   

Boston, MA   

Brookline, MA   

Work

Position: Professional/Technical

Education

Degree: High school graduate or higher

Mentions for Lifeng Liu

Lifeng Liu resumes & CV records

Resumes


Architect

Location:
Arlington, MA
Industry:
Computer Software
Work:
Futurewei Technologies
Architect
Cognex Corporation Oct 1, 2005 - Dec 2016
Software Engineer
Center For Neurological Imaging Brigham and Women’s Hospital May 2003 - Oct 2005
Computer Scientist
Mdol 2000 - 2002
Senior Software Engineer
Education:
Boston University 1986 - 2001
Boston University 1996 - 2001
Tsinghua University 1987 - 1996
Masters, Bachelors, Bachelor of Engineering, Engineering
Skills:
Algorithms, Image Processing, Software Engineering, Software Development, Computer Vision, Machine Learning, Pattern Recognition, C++, Java

Publications & IP owners

Us Patents

System And Method For Finding Correspondence Between Cameras In A Three-Dimensional Vision System

US Patent:
8600192, Dec 3, 2013
Filed:
Dec 8, 2010
Appl. No.:
12/962918
Inventors:
Lifeng Liu - Sudbury MA, US
Aaron S. Wallack - Natick MA, US
Cyril C. Marrion - Acton MA, US
Assignee:
Cognex Corporation - Natick MA
International Classification:
G06K 9/36
G06K 9/00
US Classification:
382285, 382154
Abstract:
This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies (perspective or non-perspective), based on their trained object features to generate a set of 3D image features and thereby determine a 3D pose of the object. In this manner the speed and accuracy of the overall pose determination process is improved. The non-perspective lens can be a telecentric lens.
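The combination step this abstract describes, matching 2D object features across calibrated cameras and lifting them into a set of 3D features, can be illustrated with a small triangulation sketch. This is a minimal sketch under assumptions: two already-calibrated cameras with known 3x4 projection matrices and pre-matched feature points. The placeholder values and the use of OpenCV's triangulatePoints are illustrative only, not the patented method (which notably admits non-perspective/telecentric lenses, where a standard perspective projection matrix does not apply).

import numpy as np
import cv2

# Hypothetical 3x4 projection matrices (intrinsics times extrinsics)
# for two calibrated camera assemblies; placeholder values only.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # camera 2, offset baseline

# Matched 2D object features found in each camera's image (pixels),
# shaped 2xN as cv2.triangulatePoints expects.
pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T
pts2 = np.array([[ 95.0, 150.0], [114.0, 160.0]]).T

# Lift the matched 2D features to a set of 3D features (homogeneous output).
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
pts3d = (pts4d[:3] / pts4d[3]).T   # Nx3 Euclidean points

# A 3D pose of the object would then be fit to pts3d against the trained model.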

System And Method For Three-Dimensional Alignment Of Objects Using Machine Vision

US Patent:
2010016, Jul 1, 2010
Filed:
Dec 29, 2008
Appl. No.:
12/345130
Inventors:
Cyril C. Marrion - Acton MA, US
Nigel J. Foster - Natick MA, US
Lifeng Liu - Arlington MA, US
David Y. Li - West Roxbury MA, US
Guruprasad Shivaram - Chestnut Hill MA, US
Aaron S. Wallack - Natick MA, US
Xiangyun Ye - Framingham MA, US
Assignee:
COGNEX CORPORATION - Natick MA
International Classification:
G06K 9/00
US Classification:
382154
Abstract:
This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature in a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further, more refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
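As a rough illustration of the pipeline this abstract walks through (edge enhancement on rectified images, stereo matching, a 3D point cloud, and extraction of higher-level geometric shapes such as line segments), here is a hedged OpenCV sketch. The file names, matcher parameters, and the choice of semi-global block matching and probabilistic Hough lines are all assumptions standing in for the patented processes.

import numpy as np
import cv2

# Rectified left/right images from one stereo head (assumed input files).
left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Preprocess to enhance edge features (stand-in for the patent's step).
left_e = cv2.Canny(left, 50, 150)
right_e = cv2.Canny(right, 50, 150)

# Stereo matching: locate a feature in one image, find it in the other.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Q is the 4x4 reprojection matrix from stereo calibration (placeholder here).
Q = np.eye(4, dtype=np.float32)
cloud = cv2.reprojectImageTo3D(disparity, Q)   # dense 3D points (the point cloud)

# Reduce the 3D data by extracting higher-level geometric shapes (HLGS):
# here, 2D line segments on the edge image, which would then be lifted to 3D.
segments = cv2.HoughLinesP(left_e, 1, np.pi / 180, threshold=80,
                           minLineLength=30, maxLineGap=5)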

System And Method For Robust Calibration Between A Machine Vision System And A Robot

US Patent:
2011028, Nov 17, 2011
Filed:
May 14, 2010
Appl. No.:
12/780119
Inventors:
Aaron S. Wallack - Natick MA, US
Lifeng Liu - Arlington MA, US
Xiangyun Ye - Framingham MA, US
International Classification:
G06T 7/00
US Classification:
382153, 901/14
Abstract:
A system and method for robustly calibrating a vision system and a robot is provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, allowing a machine vision/robot control system to accurately identify the location of objects of interest in robot base coordinates.
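At the heart of calibrating cameras into a robot base coordinate system is a hand-eye calibration, solving AX = XB over paired robot and camera motions. Below is a minimal sketch using OpenCV's generic cv2.calibrateHandEye solver on synthetic, self-consistent poses; it recovers a known camera-to-gripper transform and is not the patent's specific robust formulation.

import numpy as np
import cv2

rng = np.random.default_rng(0)

def rand_pose():
    # Random small rotation (via Rodrigues) and translation.
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    return R, rng.uniform(-0.1, 0.1, (3, 1))

# Ground-truth camera-to-gripper transform X (what calibration recovers).
R_x, t_x = rand_pose()

# Synthetic, self-consistent capture: gripper poses in the base frame and
# matching target poses in the camera frame (target held fixed in the base).
R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(5):
    R_g, t_g = rand_pose()
    R_g2b.append(R_g); t_g2b.append(t_g)
    R_c = R_x.T @ R_g.T                             # target2cam rotation
    t_c = R_x.T @ (R_g.T @ (-t_g)) - R_x.T @ t_x    # target2cam translation
    R_t2c.append(R_c); t_t2c.append(t_c)

# Solve for the camera-to-gripper transform; chaining it with the robot's
# reported pose places camera observations in robot base coordinates.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
assert np.allclose(R_est, R_x, atol=1e-5)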

System And Method For Training A Model In A Plurality Of Non-Perspective Cameras And Determining 3D Pose Of An Object At Runtime With The Same

US Patent:
2012014, Jun 14, 2012
Filed:
Dec 8, 2010
Appl. No.:
12/963007
Inventors:
Lifeng Liu - Sudbury MA, US
Aaron S. Wallack - Natick MA, US
Assignee:
COGNEX CORPORATION - Natick MA
International Classification:
H04N 13/02
US Classification:
348 50, 348E13074, 348E13001
Abstract:
This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime. Each of the camera assemblies includes a non-perspective lens that acquires a respective non-perspective image for use in the process. The searched object features in one of the acquired non-perspective images can be used to define the expected location of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed based upon at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can be refined by searching within the expected location of those images. This approach can be used in training, to generate the training model, and at runtime, operating on acquired images of runtime objects. The non-perspective cameras can employ telecentric lenses.
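The affine-transform step described here, predicting where a feature found in the first non-perspective image should appear in a second camera's image and then refining with a local search, can be sketched in a few lines of numpy. The 2x3 affine matrix below is a placeholder; per the abstract, it would be computed from at least a subset of each camera's intrinsics and extrinsics (the non-perspective imaging is what makes a simple affine relation workable).

import numpy as np

# Placeholder 2x3 affine map from camera-1 pixel coordinates to camera-2
# pixel coordinates; in practice derived from both cameras' parameters.
A = np.array([[ 0.98, 0.02,  5.0],
              [-0.02, 0.98, -3.0]])

def expected_location(pt_cam1):
    """Predict where a camera-1 feature should appear in camera 2."""
    x, y = pt_cam1
    return A @ np.array([x, y, 1.0])

feature_cam1 = (412.0, 305.0)          # feature found in the first image
cx, cy = expected_location(feature_cam1)

# Refine by searching only a small window around the prediction,
# rather than the whole second image.
search_window = (cx - 8, cx + 8, cy - 8, cy + 8)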

System And Method For Multi-Task Lifelong Learning On Personal Device With Improved User Experience

US Patent:
2023005, Feb 16, 2023
Filed:
Sep 19, 2022
Appl. No.:
17/947937
Inventors:
- Shenzhen, CN
Lifeng Liu - Sudbury MA, US
Jian Li - Waltham MA, US
Assignee:
HUAWEI TECHNOLOGIES CO., LTD. - Shenzhen
International Classification:
G06N 20/20
G06K 9/62
G06N 5/02
G06F 11/34
Abstract:
This disclosure relates to recommendations made to users based on learned behavior patterns. User behavior data is collected and grouped according to labels. The grouped user behavior data is labeled and used to train a machine learning model based on features and tasks associated with the classification. User behavior is then predicted by applying the trained machine learning model to the collected user behavior data, and a task is recommended to the user.
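As a loose illustration of the collect, label, train, predict loop described above, here is a hedged scikit-learn sketch. The feature layout, the random-forest model, and the task names are invented for illustration and are not the patented multi-task lifelong-learning method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical user-behavior features: [hour_of_day, battery_pct, is_weekend].
X = np.array([[8, 90, 0], [22, 15, 0], [9, 85, 1], [23, 10, 1]])
# Labels grouped from observed behavior (the tasks the user performed).
y = np.array(["open_news", "enable_power_save", "open_news", "enable_power_save"])

# Train a model on the labeled, grouped behavior data.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the user's next behavior and recommend the matching task.
current_context = np.array([[21, 12, 0]])
recommended_task = model.predict(current_context)[0]
print("Recommend:", recommended_task)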

System And Method For Tying Together Machine Vision Coordinate Spaces In A Guided Assembly Environment

US Patent:
2020006, Feb 27, 2020
Filed:
May 13, 2019
Appl. No.:
16/410672
Inventors:
- Natick MA, US
Lifeng Liu - Arlington MA, US
Tuotuo Li - Newton MA, US
International Classification:
G06T 7/80
B25J 9/16
G06T 7/73
G06T 7/33
G06K 9/46
H04N 5/247
Abstract:
This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
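Tying two coordinate spaces together with shared workpiece features amounts to estimating the rigid transform that maps the features as measured at location 1 onto the same features at location 2. A minimal Kabsch/SVD sketch follows; the feature coordinates are placeholders, and accumulating features over multiple workpiece poses (as the abstract suggests) simply appends more rows before the solve.

import numpy as np

def rigid_transform(p, q):
    """Least-squares rotation R and translation t with q ~= p @ R.T + t."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, qc - R @ pc

# Same workpiece features measured in each location's coordinate space
# (placeholder 2D values; accumulate more rows from multiple poses).
loc1 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
loc2 = np.array([[2.0, 1.0], [12.0, 1.0], [12.0, 6.0], [2.0, 6.0]])

R, t = rigid_transform(loc1, loc2)
# R, t now map location-1 coordinates into location-2 coordinates.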

System And Method For Robust Calibration Between A Machine Vision System And A Robot

US Patent:
2019008, Mar 21, 2019
Filed:
Nov 15, 2018
Appl. No.:
16/192233
Inventors:
- NATICK MA, US
Lifeng Liu - Arlington MA, US
Xiangyun Ye - Framingham MA, US
International Classification:
B25J 9/16
G06T 7/80
Abstract:
A system and method for robustly calibrating a vision system and a robot is provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, allowing a machine vision/robot control system to accurately identify the location of objects of interest in robot base coordinates.

Integrated System For Detection Of Driver Condition

US Patent:
2019001, Jan 17, 2019
Filed:
Jul 12, 2017
Appl. No.:
15/647748
Inventors:
- Plano TX, US
Lifeng Liu - Sudbury MA, US
Xiaotian Yin - Belmont MA, US
Jun Zhang - Cambridge MA, US
Jian Li - Austin TX, US
International Classification:
G06K 9/62
G06K 9/00
G05D 1/00
B60W 40/08
B60W 50/14
Abstract:
Methods, apparatus, and systems are provided for integrated driver expression recognition and vehicle interior environment classification to detect driver condition for safety. A method includes obtaining an image of a driver of a vehicle and an image of the interior environment of the vehicle. Using a machine learning method, the images are processed to classify the condition of the driver and of the interior environment of the vehicle. The machine learning method includes a general convolutional neural network (CNN) and a CNN with adaptive filters. The adaptive filters are determined based on the influence of the filters. The classification results are combined and compared with predetermined thresholds to determine whether a decision can be made based on existing information. Additional information is requested by self-motivated learning if a decision cannot be made, and safety is determined based on the combined classification results. A warning is provided to the driver based on the safety determination.
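The decision logic sketched in this abstract (combine the driver-condition and interior-environment classification results, compare against thresholds, and either warn, pass, or request more information) can be written as a short Python sketch; the class names, scores, weights, and thresholds below are invented placeholders, not values from the patent.

# Softmax-style confidence scores from the two (hypothetical) classifiers.
driver_scores = {"alert": 0.20, "drowsy": 0.70, "distracted": 0.10}
interior_scores = {"normal": 0.30, "smoke": 0.05, "phone_in_hand": 0.65}

UNSAFE_DRIVER = {"drowsy", "distracted"}
UNSAFE_INTERIOR = {"smoke", "phone_in_hand"}

DECISION_THRESHOLD = 0.60   # minimum combined confidence to act
COMBINE_WEIGHT = 0.5        # relative weight of the two classifiers

def assess(driver_scores, interior_scores):
    # Combined unsafe evidence from both classification results.
    unsafe = (COMBINE_WEIGHT * sum(driver_scores[c] for c in UNSAFE_DRIVER)
              + (1 - COMBINE_WEIGHT) * sum(interior_scores[c] for c in UNSAFE_INTERIOR))
    safe = 1.0 - unsafe
    if unsafe >= DECISION_THRESHOLD:
        return "warn_driver"           # safety warning to the driver
    if safe >= DECISION_THRESHOLD:
        return "safe"
    return "request_more_information"  # the self-motivated learning path

print(assess(driver_scores, interior_scores))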

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.