Su Ying Chen, 49, San Jose, CA

Su Chen Phones & Addresses

San Jose, CA   

Vestal, NY   

Mountain View, CA   

Rancho Cucamonga, CA   

Sunnyvale, CA   

Redlands, CA   

Santa Clara, CA   

Mentions for Su Ying Chen

Career records & work history

Medicine Doctors

Su H Chen, Palo Alto CA - MFT

Specialties:
Marriage & Family Therapy
Address:
430 Sherman Ave Suite 203, Palo Alto, CA 94306
Languages:
English

Su Hsiu Chen, Palo Alto CA

Specialties:
Psychologist
Address:
430 Sherman Ave, Palo Alto, CA 94306

Su Chen resumes & CV records

Resumes

Software Engineer

Location:
Buffalo, NY
Industry:
Internet
Work:
Amazon Lab126
Software Developer
Nexon M May 2017 - Feb 2018
Senior Software Engineer
Gaea Mobile Jan 2017 - May 2017
Software Engineer
Kabam Aug 2014 - Dec 2016
Software Engineer
Sun West Mortgage Company, Inc. Aug 2012 - Jul 2014
Software Engineer
Arkansas State University Aug 2011 - May 2012
Research Assistant
Protechsoft Technologies May 2011 - Aug 2011
Research Assistant
Heartlyteamgo Information System Engineering Feb 2010 - May 2010
Image Processing Algorithm Engineer
National University of Defense Technology Aug 2008 - Dec 2009
Research Assistant
Institute of Cyber Education Technology In Bupt China Feb 2008 - Jun 2008
Graduation Project
Amazon Feb 2008 - Jun 2008
Software Engineer
Education:
Arkansas State University 2010 - 2012
Masters, Computer Science
Beijing University of Posts and Telecommunications 2004 - 2009
Bachelors, Telecommunications Engineering
Skills:
C, Algorithms, SQL, Linux, JavaScript, Computer Science, Java, CUDA, HTML, Perl, C++, Unix, MATLAB, Distributed Systems, MySQL, XML, Databases, Bash, Software Engineering, Unix Shell Scripting, vi, Pthreads, High Performance Computing, Parallel Computing, PHP, Memcached, Couchbase, Redis, New Relic, Objective-C, iOS Development, Python, Amazon Redshift, Amazon S3, AWS, PostgreSQL, Docker, REST, ETL, Airflow, Grafana, Kibana
Interests:
Children
Education
Environment
Science and Technology
Arts and Culture
Health
Languages:
Mandarin
English

Su Chen

Su Chen

Su Chen

Manager

Work:
Chinamobiletech
Manager

Su Qiong Chen

Su Chen

Skills:
Biotechnology, Ems

Su Chin Chen

Publications & IP owners

Us Patents

Distributed Computing Document Recognition And Processing

US Patent:
6742161, May 25, 2004
Filed:
Mar 7, 2000
Appl. No.:
09/520892
Inventors:
Barnaby James - Los Gatos CA
Su Chen - San Jose CA
Assignee:
ScanSoft, Inc. - Peabody MA
International Classification:
G06F 17/21
US Classification:
715500, 715530
Abstract:
The present invention is a system and method for performing document recognition and processing in a distributed computing environment. The invention uses applications which are remotely located from one or more users and may be accessed via a network. One or more users utilize terminals including computers, facsimile machines, and/or scanners to transmit documents to be processed to a network or a network server which in turn transmits the documents to various computer software applications which process the documents at a network processing location. Once the documents have been processed, the processed documents are transmitted to the users according to one or more preferences associated with a user identification and/or authentication which may be determined by either a network server or an application server. Users utilizing a computer terminal make use of various data transfer programs capable of transferring document data over a network to an application server at a remote location and receiving processed document data via a network.
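The workflow described in this abstract (a user submits a document over a network, a server authenticates the user, routes the document to a recognition application, and returns the result according to stored preferences) can be illustrated with a minimal in-memory sketch. Everything below, including `route_document`, `USER_PREFS`, and the fake `ocr_app`, is a hypothetical illustration and not taken from the patent.

```python
# Minimal in-memory sketch of the routing flow described in the abstract.
# The recognition "application" is faked; in the patented system it would be
# a remote document-recognition service reached over a network.

USER_PREFS = {
    "user-42": {"output_format": "text", "deliver_via": "email"},  # hypothetical preferences
}

def ocr_app(document_bytes: bytes) -> str:
    """Stand-in for a remote document-recognition application."""
    return document_bytes.decode("utf-8", errors="replace").upper()

def route_document(user_id: str, document_bytes: bytes) -> dict:
    """Network-server role: authenticate the user, dispatch the document to the
    application, and package the result according to the user's stored preferences."""
    prefs = USER_PREFS.get(user_id)
    if prefs is None:
        raise PermissionError(f"unknown user: {user_id}")
    recognized = ocr_app(document_bytes)
    return {"user": user_id, "deliver_via": prefs["deliver_via"], "result": recognized}

print(route_document("user-42", b"scanned page text"))
```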

Method And Apparatus Of Data Compression For Computer Networks

US Patent:
7558290, Jul 7, 2009
Filed:
Dec 16, 2005
Appl. No.:
11/303651
Inventors:
Antonio Nucci - Burlingame CA, US
Su Chen - Somerset NJ, US
Assignee:
Narus, Inc. - Sunnyvale CA
International Classification:
H04J 3/18
US Classification:
370477
Abstract:
An important component of network monitoring is to collect traffic data which is a bottleneck due to large data size. We introduce a new table compression method called “Group Compression” to address this problem. This method uses a small training set to learn the relationship among columns and group them; the result is a “compression plan”. Based on this plan, each group is compressed separately. This method can reduce the compressed size to 60%-70% of the IP flow logs compressed by GZIP.
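As a rough illustration of the group-then-compress idea (not the patented algorithm itself), the sketch below learns a grouping plan from a small training slice by greedily merging columns whose joint compressed size beats compressing them apart, then compresses each group of the full table separately. The column names, the greedy heuristic, and the use of zlib as a stand-in for GZIP are assumptions for illustration.

```python
import zlib

def gz_size(rows):
    """Compressed size of a list of strings under zlib (stand-in for GZIP)."""
    return len(zlib.compress("\n".join(rows).encode()))

def plan_groups(columns, train_rows=100):
    """Learn a 'compression plan': greedily merge column groups whose joint
    compressed size on a small training slice beats compressing them apart."""
    groups = [[name] for name in columns]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                a, b = groups[i], groups[j]
                sample = lambda cols: [",".join(columns[c][k] for c in cols)
                                       for k in range(train_rows)]
                if gz_size(sample(a + b)) < gz_size(sample(a)) + gz_size(sample(b)):
                    groups[i] = a + b
                    del groups[j]
                    merged = True
                    break
            if merged:
                break
    return groups

def compress_by_plan(columns, groups):
    """Compress each column group of the full table separately."""
    n = len(next(iter(columns.values())))
    return {tuple(g): zlib.compress(
                "\n".join(",".join(columns[c][k] for c in g) for k in range(n)).encode())
            for g in groups}

# Hypothetical flow-log columns: protocol and destination port often co-vary,
# so the plan may group them, while source IPs stay in their own group.
table = {
    "proto": ["tcp", "tcp", "udp"] * 400,
    "dport": ["443", "80", "53"] * 400,
    "src":   [f"10.0.0.{i % 7}" for i in range(1200)],
}
plan = plan_groups(table)
blobs = compress_by_plan(table, plan)
print(plan, {k: len(v) for k, v in blobs.items()})
```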

System And Method For Network Data Compression

US Patent:
8516157, Aug 20, 2013
Filed:
Apr 20, 2011
Appl. No.:
13/091090
Inventors:
Antonio Nucci - San Jose CA, US
Su Chen - Somerset NJ, US
Assignee:
Narus, Inc. - Sunnyvale CA
International Classification:
G06F 15/16
H04J 3/18
US Classification:
709247, 709246, 370477
Abstract:
The present invention relates to a method of compressing data in a network, the data comprising a plurality of packets each having a header and a payload, the header comprising a plurality of header fields, the method comprising generating a classification tree based on at least a portion of the plurality of header fields, determining an inter-packet compression plan based on the classification tree, and performing inter-packet compression in real time for each payload of at least a first portion of the plurality of packets, the inter-packet compression being performed according to at least a portion of the inter-packet compression plan.
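The pipeline in this abstract (classify packets by header fields, then compress each payload against earlier payloads of the same class) can be sketched as follows. Here the classification tree is reduced to a simple header-key lookup and inter-packet compression is approximated with one shared zlib stream per class; both are simplifications for illustration, not the patented method.

```python
import zlib
from collections import defaultdict

def classify(header: dict) -> tuple:
    """Degenerate 'classification tree': split packets on protocol, then port."""
    return (header.get("proto"), header.get("dport"))

class ClassCompressor:
    """One zlib stream per packet class, so each payload is compressed against
    the payloads that preceded it in the same class (inter-packet compression)."""
    def __init__(self):
        self.streams = defaultdict(zlib.compressobj)

    def add(self, header: dict, payload: bytes) -> bytes:
        c = self.streams[classify(header)]
        return c.compress(payload) + c.flush(zlib.Z_SYNC_FLUSH)

packets = [
    ({"proto": "tcp", "dport": 80}, b"GET /index.html HTTP/1.1\r\nHost: a\r\n\r\n"),
    ({"proto": "tcp", "dport": 80}, b"GET /index.html HTTP/1.1\r\nHost: b\r\n\r\n"),
    ({"proto": "udp", "dport": 53}, b"\x12\x34 example.com A?"),
]
cc = ClassCompressor()
for hdr, body in packets:
    # the second HTTP packet compresses far smaller because it shares a stream
    # (and therefore a history window) with the first one
    print(classify(hdr), len(body), "->", len(cc.add(hdr, body)))
```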

System And Method For Network Data Compression

US Patent:
8046496, Oct 25, 2011
Filed:
Dec 12, 2007
Appl. No.:
11/955259
Inventors:
Antonio Nucci - San Jose CA, US
Su Chen - Sunnyvale CA, US
Assignee:
Narus, Inc. - Sunnyvale CA
International Classification:
G06F 15/16
H04J 3/18
US Classification:
709247, 709246, 370477
Abstract:
The present invention relates to a method of compressing data in a network, the data comprising a plurality of packets each having a header and a payload, the header comprising a plurality of header fields, the method comprising generating a classification tree based on at least a portion of the plurality of header fields, determining an inter-packet compression plan based on the classification tree, and performing inter-packet compression in real time for each payload of at least a first portion of the plurality of packets, the inter-packet compression being performed according to at least a portion of the inter-packet compression plan.

Generating Deep Harmonized Digital Images

US Patent:
2022029, Sep 15, 2022
Filed:
Mar 12, 2021
Appl. No.:
17/200338
Inventors:
- San Jose CA, US
Yifan Jiang - Austin TX, US
Yilin Wang - San Jose CA, US
Jianming Zhang - Campbell CA, US
Kalyan Sunkavalli - San Jose CA, US
Sarah Kong - Cupertino CA, US
Su Chen - San Jose CA, US
Sohrab Amirghodsi - Seattle WA, US
Zhe Lin - Fremont CA, US
International Classification:
G06T 5/50
G06T 7/194
G06T 11/60
G06T 11/00
G06N 3/04
G06N 3/08
Abstract:
The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating harmonized digital images utilizing a self-supervised image harmonization neural network. In particular, the disclosed systems can implement, and learn parameters for, a self-supervised image harmonization neural network to extract content from one digital image (disentangled from its appearance) and appearance from another digital image (disentangled from its content). For example, the disclosed systems can utilize a dual data augmentation method to generate diverse triplets for parameter learning (including input digital images, reference digital images, and pseudo ground truth digital images), via cropping a digital image with perturbations using three-dimensional color lookup tables (“LUTs”). Additionally, the disclosed systems can utilize the self-supervised image harmonization neural network to generate harmonized digital images that depict content from one digital image having the appearance of another digital image.
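A rough sketch of the triplet-generation step this abstract describes: crop a region, perturb its appearance (here with a simple per-channel color transform standing in for the 3D LUTs mentioned), and keep the unperturbed crop as pseudo ground truth. Shapes, the perturbation, and the function names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_appearance(patch: np.ndarray) -> np.ndarray:
    """Stand-in for a 3D color LUT: random per-channel gain and offset."""
    gain = rng.uniform(0.6, 1.4, size=(1, 1, 3))
    offset = rng.uniform(-0.1, 0.1, size=(1, 1, 3))
    return np.clip(patch * gain + offset, 0.0, 1.0)

def make_triplet(image: np.ndarray, crop: int = 64):
    """Build (input, reference, pseudo ground truth) for self-supervised training:
    the input is a crop whose appearance has been perturbed, the reference is the
    surrounding image, and the pseudo ground truth is the original crop."""
    h, w, _ = image.shape
    y = rng.integers(0, h - crop)
    x = rng.integers(0, w - crop)
    gt_patch = image[y:y + crop, x:x + crop].copy()
    composite = image.copy()
    composite[y:y + crop, x:x + crop] = perturb_appearance(gt_patch)
    return composite, image, gt_patch

img = rng.random((256, 256, 3))      # hypothetical RGB image in [0, 1]
inp, ref, gt = make_triplet(img)
print(inp.shape, ref.shape, gt.shape)
```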

Generating Depth Images Utilizing A Machine-Learning Model Built From Mixed Digital Image Sources And Multiple Loss Function Sets

US Patent:
2022028, Sep 8, 2022
Filed:
Feb 26, 2021
Appl. No.:
17/186436
Inventors:
- San Jose CA, US
Jianming Zhang - Campbell CA, US
Oliver Wang - Seattle WA, US
Simon Niklaus - San Jose CA, US
Mai Long - Portland OR, US
Su Chen - San Jose CA, US
International Classification:
G06T 7/593
G06T 7/30
G06T 7/13
G06T 7/143
G01S 17/894
G01S 17/42
Abstract:
This disclosure describes one or more implementations of a depth prediction system that generates accurate depth images from single input digital images. In one or more implementations, the depth prediction system enforces different sets of loss functions across mixed data sources to generate a multi-branch architecture depth prediction model. For instance, in one or more implementations, the depth prediction model utilizes different data sources having different granularities of ground truth depth data to robustly train a depth prediction model. Further, given the different ground truth depth data granularities from the different data sources, the depth prediction model enforces different combinations of loss functions including an image-level normalized regression loss function and/or a pair-wise normal loss among other loss functions.
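One loss named in the abstract, an image-level normalized regression loss, can be read as normalizing both the predicted and ground-truth depth maps per image before taking an L1 difference, so that data sources with different depth scales contribute comparable gradients. The median/mean-absolute-deviation normalization below is an illustrative choice, not the patent's exact formulation.

```python
import numpy as np

def normalize_depth(d: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Per-image normalization: subtract the median and divide by the mean
    absolute deviation, computed over valid ground-truth pixels only."""
    vals = d[valid]
    med = np.median(vals)
    mad = np.mean(np.abs(vals - med)) + 1e-6
    return (d - med) / mad

def image_level_normalized_l1(pred, gt, valid) -> float:
    """Scale- and shift-tolerant regression loss across mixed data sources."""
    p = normalize_depth(pred, valid)
    g = normalize_depth(gt, valid)
    return float(np.mean(np.abs(p[valid] - g[valid])))

pred = np.random.rand(240, 320) * 5.0   # hypothetical predicted depth
gt = pred * 2.0 + 0.3                   # same structure, different scale/shift
valid = gt > 0                          # mask of pixels with ground truth
print(image_level_normalized_l1(pred, gt, valid))  # near 0: scale/shift ignored
```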

Reconstructing Three-Dimensional Scenes Portrayed In Digital Images Utilizing Point Cloud Machine-Learning Models

US Patent:
2022027, Sep 1, 2022
Filed:
Feb 26, 2021
Appl. No.:
17/186522
Inventors:
- San Jose CA, US
Jianming Zhang - Campbell CA, US
Oliver Wang - Seattle WA, US
Simon Niklaus - San Jose CA, US
Mai Long - Portland OR, US
Su Chen - San Jose CA, US
International Classification:
G06T 17/00
G06K 9/00
G06N 3/04
G06T 7/80
Abstract:
This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as include a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
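The reconstruction step at the end of this abstract (turning a depth map plus recovered intrinsics into a 3D point cloud) reduces to a standard pinhole unprojection, sketched below. The focal length, depth shift, and principal point here are placeholder values; in the described system the first two would come from the focal length and depth 3D point cloud networks.

```python
import numpy as np

def unproject(depth: np.ndarray, focal: float, shift: float = 0.0) -> np.ndarray:
    """Pinhole unprojection of a depth map into an (H*W, 3) point cloud.
    `shift` stands in for the recovered depth shift and `focal` for the
    recovered focal length (both hypothetical values here)."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0                 # assume principal point at image center
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth + shift
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.ones((120, 160)) * 2.0             # flat scene at 2 units, hypothetical
cloud = unproject(depth, focal=150.0, shift=0.1)
print(cloud.shape, cloud[:2])
```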

Utilizing A Segmentation Neural Network To Process Initial Object Segmentations And Object User Indicators Within A Digital Image To Generate Improved Object Segmentations

US Patent:
2022019, Jun 23, 2022
Filed:
Dec 18, 2020
Appl. No.:
17/126986
Inventors:
- San Jose CA, US
Su Chen - San Jose CA, US
Shuo Yang - Bloomington IN, US
International Classification:
G06T 7/11
G06T 7/143
G06T 7/136
G06T 7/162
G06T 7/90
G06N 3/08
G06N 3/04
Abstract:
The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a deep neural network to process object user indicators and an initial object segmentation from a digital image to efficiently and flexibly generate accurate object segmentations. In particular, the disclosed systems can determine an initial object segmentation for the digital image (e.g., utilizing an object segmentation model or interactive selection processes). In addition, the disclosed systems can identify an object user indicator for correcting the initial object segmentation and generate a distance map reflecting distances between pixels of the digital image and the object user indicator. The disclosed systems can generate an image-interaction-segmentation triplet by combining the digital image, the initial object segmentation, and the distance map. By processing the image-interaction-segmentation triplet utilizing the segmentation neural network, the disclosed systems can provide an updated object segmentation for display to a client device.
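The input construction this abstract describes (digital image, initial segmentation, and a distance map around the user's correction click, stacked into one tensor) can be sketched as below. The single-click assumption and the normalization of the distance map are illustrative choices, not the patent's specification.

```python
import numpy as np

def click_distance_map(shape, click_yx) -> np.ndarray:
    """Euclidean distance from every pixel to the object user indicator (a click),
    normalized to [0, 1] so it can be stacked with the image channels."""
    h, w = shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    d = np.sqrt((yy - click_yx[0]) ** 2 + (xx - click_yx[1]) ** 2)
    return d / d.max()

def build_triplet_input(image, init_mask, click_yx) -> np.ndarray:
    """Stack the image (H, W, 3), the initial segmentation (H, W), and the distance
    map (H, W) into the (H, W, 5) image-interaction-segmentation triplet that the
    segmentation network would consume."""
    dist = click_distance_map(init_mask.shape, click_yx)
    return np.concatenate([image, init_mask[..., None], dist[..., None]], axis=-1)

img = np.random.rand(128, 128, 3)                       # hypothetical RGB image
mask = np.zeros((128, 128)); mask[32:96, 32:96] = 1.0   # initial object segmentation
net_input = build_triplet_input(img, mask, click_yx=(40, 100))  # correction click
print(net_input.shape)  # (128, 128, 5)
```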
