BackgroundCheck.run

Hui Hu Phones & Addresses

8325 Vietor Ave, Elmhurst, NY 11373 · 718-507-6281

8330 Vietor Ave, Elmhurst, NY 11373 · 718-507-6281

Flushing, NY   

San Francisco, CA   

Mountain View, CA   

Brattleboro, VT   

Mentions for Hui Jui Hu

Hui Hu Resumes & CV Records

Resumes


Writer

Location: San Francisco Bay Area
Industry: Motion Pictures and Film

Operator

Location: Berkeley, CA
Industry: Financial Services
Work:
M&M Tour, Operator
Chengdu Style Restaurant, Manager and Owner
Bcssa (Aug 2012 - Jun 2013), Event Manager
Sichuan Earthquake (May 2008 - Jun 2008), Volunteer
Education:
University of California, Berkeley (2012 - 2014), Bachelor's, Mathematics, Statistics
East Los Angeles County Community College (2010 - 2012), Associate's, Mathematics
Skills: Statistics, MATLAB, Data Analysis, Microsoft Excel, Mandarin, Mathematical Modeling, R, Microsoft Office, SQL, Finance, Financial Analysis
Languages: English, Mandarin

Hui Hu

Location: San Francisco, CA
Work: University of Wyoming, Student
Education: University of Wyoming

Hui Ming Hu

Location: Flushing, NY
Industry: Cosmetics
Work: Nail & Spa O LLC (2011 - 2012), Staff

Hui Z Hu


Hui Z Hu


Hui Hu

Publications & IP Owners

US Patents

Few-Shot Language Model Training And Implementation

US Patent: 2021037, Dec 2, 2021
Filed: Jun 17, 2021
Appl. No.: 17/350917
Inventors:
- Kansas City MO, US
Hui Peng Hu - Oakland CA, US
International Classification: G06F 40/30; G06N 3/08; G06F 40/263
Abstract:
A technique making use of a few-shot model to determine whether a query text content belongs to a same language as a small set of examples, or alternatively provide a next member in the same language to the small set of examples. The related few-shot model makes use of convolutional models that are trained in a “learning-to-learn” fashion such that the models know how to evaluate few-shots that belong to the same language. The term “language” in this usage is broader than spoken languages (e.g., English, Spanish, German, etc.). “Language” refers to a category, or data domain, of expression through characters. Belonging to a given language is not specifically based on what the language is, but the customs or traits expressed in that language.
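The abstract describes a prototypical few-shot setup: encode each support example with a convolutional model, pool the support set, and compare a query embedding against the pooled result. Below is a minimal illustration of that pattern, assuming PyTorch and a character-level convolutional encoder; the patent record names no framework, and every identifier and threshold here is a hypothetical stand-in, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharCNNEncoder(nn.Module):
    """Character-level convolutional encoder: string -> fixed-size vector."""
    def __init__(self, vocab_size=128, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, char_ids):                  # char_ids: (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)  # -> (batch, dim, seq_len)
        x = F.relu(self.conv(x))
        return x.max(dim=2).values                # max-pool over positions

def encode(texts, encoder, max_len=32):
    """Clip characters to 7-bit codes and run the encoder (illustrative only)."""
    ids = torch.zeros(len(texts), max_len, dtype=torch.long)
    for i, text in enumerate(texts):
        for j, ch in enumerate(text[:max_len]):
            ids[i, j] = min(ord(ch), 127)
    return encoder(ids)

def same_language(support_texts, query_text, encoder, threshold=0.8):
    """True if the query embedding sits close to the support set's prototype."""
    prototype = encode(support_texts, encoder).mean(dim=0, keepdim=True)
    query = encode([query_text], encoder)
    return F.cosine_similarity(query, prototype).item() >= threshold

# e.g.: does "07/04/2019" belong to the same data domain as two known dates?
# same_language(["03/14/2021", "12/25/2020"], "07/04/2019", CharCNNEncoder())
```

The encoder above is untrained and shown only for shape; in the "learning-to-learn" regime the abstract describes, it would be meta-trained over many support/query episodes so that the similarity threshold becomes meaningful.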

Analyzing Content Of Digital Images

US Patent: 2021004, Feb 18, 2021
Filed: Aug 2, 2019
Appl. No.: 16/530778
Inventors:
- Oakland CA, US
Yoriyasu Yano - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
Kuang Chen - Oakland CA, US
International Classification: G06K 9/46; G06F 16/583; G06K 9/00
Abstract:
Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.
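As a rough illustration of the pipeline this abstract outlines (multiple independent keypoint extractors, a visual-word count vector, and a match-score comparison against candidate templates), here is a hedged sketch assuming OpenCV and scikit-learn. The extractor choices and helper names are assumptions for illustration, not taken from the patent.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_feature_sets(img):
    """Two independent keypoint extractors, per the multi-extractor scheme."""
    extractors = (cv2.ORB_create(), cv2.AKAZE_create())
    return [ext.detectAndCompute(img, None)[1] for ext in extractors]

def visual_word_counts(descriptors, vocab: KMeans):
    """Quantize descriptors against a pre-trained visual vocabulary and count
    occurrences, yielding the bag-of-visual-words vector used for the query."""
    words = vocab.predict(descriptors.astype(np.float32))
    return np.bincount(words, minlength=vocab.n_clusters)

def best_template(query_desc, candidate_descs):
    """Among candidate templates returned by the image query, pick the one
    whose descriptors match the selected object best (highest match count)."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    scores = [len(matcher.match(query_desc, cand)) for cand in candidate_descs]
    return int(np.argmax(scores))
```

ORB and AKAZE both produce binary descriptors, which is why Hamming-distance matching is used here; any pair of distinct extractors would fit the multi-extractor idea equally well.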

Few-Shot Language Model Training And Implementation

US Patent: 2020036, Nov 19, 2020
Filed: May 15, 2019
Appl. No.: 16/413159
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification: G06F 17/27; G06N 3/08
Abstract:
A technique making use of a few-shot model to determine whether a query text content belongs to a same language as a small set of examples, or alternatively provide a next member in the same language to the small set of examples. The related few-shot model makes use of convolutional models that are trained in a “learning-to-learn” fashion such that the models know how to evaluate few-shots that belong to the same language. The term “language” in this usage is broader than spoken languages (e.g., English, Spanish, German, etc.). “Language” refers to a category, or data domain, of expression through characters. Belonging to a given language is not specifically based on what the language is, but the customs or traits expressed in that language.

Interactively Predicting Fields In A Form

US Patent: 2019022, Jul 18, 2019
Filed: Jan 16, 2019
Appl. No.: 16/249561
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification: G06F 17/24; G06F 3/0481; G06F 16/583
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents a boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that he wants to digitize the field. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.
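The interaction loop this abstract describes (hover, predict a labeled bounding box, let the user adjust it, then persist the result) can be sketched in a few lines. This is an illustrative reconstruction under assumed inputs (OCR tokens with pixel boxes); the nearest-token heuristic and all names below are hypothetical, not the patented predictor.

```python
from dataclasses import dataclass

@dataclass
class Token:
    """One OCR token on the form image (top-left corner plus size, in pixels)."""
    text: str
    x: float
    y: float
    w: float
    h: float

@dataclass
class FieldPrediction:
    label: str
    box: tuple  # (x, y, w, h)

def predict_field(tokens, cursor_x, cursor_y):
    """Take the label from the text token nearest the cursor, and propose a
    bounding box in the blank region immediately to its right (a stand-in
    heuristic; the patent record does not specify the prediction method)."""
    nearest = min(tokens, key=lambda t: (t.x - cursor_x) ** 2 + (t.y - cursor_y) ** 2)
    box = (nearest.x + nearest.w + 5, nearest.y, nearest.w * 3, nearest.h)
    return FieldPrediction(label=nearest.text.rstrip(":"), box=box)

def finalize(pred, adjusted_box=None, renamed=None):
    """Fold in the user's interactive edits before writing to the database."""
    return FieldPrediction(label=renamed or pred.label,
                           box=adjusted_box or pred.box)

# e.g.: finalize(predict_field([Token("Name:", 40, 100, 60, 14)], 120, 105))
```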

Identifying Versions Of A Form

US Patent: 2019019, Jun 27, 2019
Filed: Dec 21, 2018
Appl. No.: 16/230812
Inventors:
- Oakland CA, US
Michail Iliadis - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification: G06K 9/00; G06K 9/62
Abstract:
Disclosed are a method and apparatus for determining a given variation of a form used by a filled-in instance of that type of form from amongst a number of form templates. The given instance is aligned to each of the variants or form templates. The result of the alignment includes a series of key points that did not match up well ("bad" key points). The bad key points are taken from the form templates. Then, a set of pixel patches from around each of the bad key points of the form templates are extracted. The pixel patches are individually compared to corresponding pixel patches of the instance. The comparison generates a match score. The form template having the greatest match score is the correct form template.
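The scoring step this abstract describes (compare pixel patches around each template's poorly matched keypoints with the corresponding patches in the instance, then keep the template with the greatest score) might look like the following sketch. Alignment and keypoint detection are assumed to happen elsewhere, and all helper names are hypothetical.

```python
import numpy as np

def patch(img, x, y, r=8):
    """Square pixel patch around (x, y); assumes keypoints lie at least
    r pixels inside the image so corresponding patches have equal size."""
    return img[y - r:y + r, x - r:x + r].astype(np.float32)

def patch_similarity(a, b):
    """Normalized correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def template_score(instance_img, template_img, bad_keypoints):
    """Mean patch similarity over a template's poorly matched keypoints."""
    sims = [patch_similarity(patch(template_img, x, y), patch(instance_img, x, y))
            for x, y in bad_keypoints]
    return sum(sims) / len(sims) if sims else 0.0

def identify_version(instance_img, templates):
    """templates: list of (aligned_template_img, bad_keypoints) pairs.
    Returns the index of the form template with the greatest match score."""
    return max(range(len(templates)),
               key=lambda i: template_score(instance_img, *templates[i]))
```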

Interactively Predicting Fields In A Form

US Patent: 2018014, May 24, 2018
Filed: Jan 19, 2018
Appl. No.: 15/875969
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification: G06F 17/24; G06F 3/0481; G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents a boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that he wants to digitize the field. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.

Analyzing Content Of Digital Images

US Patent: 2017025, Sep 7, 2017
Filed: Apr 10, 2017
Appl. No.: 15/483291
Inventors:
- Oakland CA, US
Yoriyasu Yano - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
Kuang Chen - Oakland CA, US
International Classification: G06K 9/46; G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.

Interactively Predicting Fields In A Form

US Patent: 2017004, Feb 16, 2017
Filed: Aug 12, 2015
Appl. No.: 14/824493
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification: G06F 17/24; G06F 3/0481; G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents a boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that he wants to digitize the field. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.