BackgroundCheck.run

Hui H Hu, 581041 Vista Oak, San Jose, CA 95132

Hui Hu Phones & Addresses

2314 Four Seasons Ct, San Jose, CA 95131   

Las Vegas, NV   

Milpitas, CA   

Concord, CA   

Flushing, NY   

Fremont, CA   

Mentions for Hui H Hu


Publications & IP owners

Us Patents

Few-Shot Language Model Training And Implementation

US Patent:
2021037, Dec 2, 2021
Filed:
Jun 17, 2021
Appl. No.:
17/350917
Inventors:
- Kansas City MO, US
Hui Peng Hu - Oakland CA, US
International Classification:
G06F 40/30
G06N 3/08
G06F 40/263
Abstract:
A technique that uses a few-shot model to determine whether a query text belongs to the same language as a small set of examples, or alternatively to provide a next member of that language for the set. The few-shot model uses convolutional models trained in a “learning-to-learn” fashion, so that the models know how to evaluate few-shot examples that belong to the same language. The term “language” in this usage is broader than spoken languages (e.g., English, Spanish, German): “language” refers to a category, or data domain, of expression through characters. Belonging to a given language is determined not by what the language is, but by the customs or traits expressed in that language.
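As an illustrative sketch only: the patent describes convolutional models trained in a learning-to-learn fashion, but the core idea of scoring whether a query matches the "language" of a few-shot support set can be shown with a much simpler stand-in. Here character-bigram counts replace the learned embedding, and cosine similarity replaces the trained comparison; all names and the toy data are hypothetical.

```python
from collections import Counter
import math

def char_ngrams(text, n=2):
    """Character bigram counts: a crude stand-in for a learned embedding."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def same_language_score(support, query, n=2):
    """Score how well `query` fits the 'language' of the few-shot support set."""
    support_vec = Counter()
    for example in support:
        support_vec.update(char_ngrams(example, n))
    return cosine(support_vec, char_ngrams(query, n))

# The support set defines a "language" of date strings; a date-like query
# should score higher than unrelated text.
dates = ["2021-06-17", "2019-05-15", "2018-12-21"]
print(same_language_score(dates, "2020-08-02") > same_language_score(dates, "hello world"))  # True
```

In the patented approach the embedding would be produced by a trained convolutional model rather than raw n-gram counts, but the membership decision has the same shape: embed the support set, embed the query, compare.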

Analyzing Content Of Digital Images

US Patent:
2021004, Feb 18, 2021
Filed:
Aug 2, 2019
Appl. No.:
16/530778
Inventors:
- Oakland CA, US
Yoriyasu Yano - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
Kuang Chen - Oakland CA, US
International Classification:
G06K 9/46
G06F 16/583
G06K 9/00
Abstract:
Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.
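The candidate-retrieval step above (query by visual word count vector to find likely template objects) can be sketched in miniature. This is not the patented pipeline: real visual words come from keypoint descriptors produced by multiple feature extractors, whereas the histograms, template names, and similarity choice below are all hypothetical.

```python
import math
from collections import Counter

# Hypothetical "visual word" histograms standing in for extracted keypoint features.
TEMPLATES = {
    "invoice_v1": Counter({"corner": 4, "line": 10, "logo": 1}),
    "receipt_v2": Counter({"corner": 4, "line": 3, "barcode": 1}),
}

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def query_templates(word_counts, top_k=2):
    """Rank candidate templates by similarity of visual word count vectors."""
    scored = sorted(TEMPLATES.items(),
                    key=lambda kv: cosine(word_counts, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# A query image whose histogram is close to invoice_v1 ranks it first.
print(query_templates(Counter({"corner": 4, "line": 9, "logo": 1})))
```

In the described system this retrieval only narrows the field; when several candidates survive, a separate matching algorithm compares the selected object against each candidate to pick the one it is actually an instance of.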

Few-Shot Language Model Training And Implementation

US Patent:
2020036, Nov 19, 2020
Filed:
May 15, 2019
Appl. No.:
16/413159
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification:
G06F 17/27
G06N 3/08
Abstract:
A technique that uses a few-shot model to determine whether a query text belongs to the same language as a small set of examples, or alternatively to provide a next member of that language for the set. The few-shot model uses convolutional models trained in a “learning-to-learn” fashion, so that the models know how to evaluate few-shot examples that belong to the same language. The term “language” in this usage is broader than spoken languages (e.g., English, Spanish, German): “language” refers to a category, or data domain, of expression through characters. Belonging to a given language is determined not by what the language is, but by the customs or traits expressed in that language.

Interactively Predicting Fields In A Form

US Patent:
2019022, Jul 18, 2019
Filed:
Jan 16, 2019
Appl. No.:
16/249561
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification:
G06F 17/24
G06F 3/0481
G06F 16/583
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents the boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that they want to digitize it. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.
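The cursor-driven prediction step can be sketched as a nearest-candidate lookup. This is a toy stand-in, not the patented method: the candidate fields, their boxes, and the nearest-center rule below are assumptions for illustration (a real system would detect candidates from the form image and its text).

```python
from dataclasses import dataclass

@dataclass
class Field:
    label: str
    box: tuple  # (x0, y0, x1, y1) in image coordinates

# Hypothetical pre-detected candidate fields for one form image.
CANDIDATES = [
    Field("Name", (50, 40, 250, 60)),
    Field("Date", (300, 40, 420, 60)),
]

def predict_field(cursor_xy, candidates=CANDIDATES):
    """Return the candidate field whose box center is nearest the cursor."""
    cx, cy = cursor_xy

    def sq_dist(f):
        fx = (f.box[0] + f.box[2]) / 2
        fy = (f.box[1] + f.box[3]) / 2
        return (fx - cx) ** 2 + (fy - cy) ** 2

    return min(candidates, key=sq_dist)

# Hovering near the left field predicts "Name" with its bounding box.
print(predict_field((60, 50)).label)  # Name
```

The interactive loop in the abstract then lets the user accept, resize, or relabel the predicted box before the final coordinates and label are written to a database.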

Identifying Versions Of A Form

US Patent:
2019019, Jun 27, 2019
Filed:
Dec 21, 2018
Appl. No.:
16/230812
Inventors:
- Oakland CA, US
Michail Iliadis - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification:
G06K 9/00
G06K 9/62
Abstract:
Disclosed are a method and apparatus for determining, from among a number of form templates, which variation of a form a filled-in instance uses. The instance is aligned to each of the variants or form templates. The result of each alignment includes a series of key points that did not match up well (“bad” key points). The bad key points are taken from the form templates, and a set of pixel patches from around each of the bad key points is extracted. The pixel patches are individually compared to the corresponding pixel patches of the instance, and each comparison generates a match score. The form template having the greatest match score is the correct form template.
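The final scoring step (compare patches around bad key points, pick the template with the best total match) can be sketched with toy data. This is illustrative only: real patches are 2-D pixel arrays cut from aligned images, while the 1-D lists, template names, and pixel-equality score below are assumptions.

```python
def patch_score(patch_a, patch_b):
    """Similarity of two pixel patches: fraction of exactly matching pixels."""
    matches = sum(1 for a, b in zip(patch_a, patch_b) if a == b)
    return matches / len(patch_a)

def best_template(instance_patches, templates):
    """Pick the template whose patches around its 'bad' key points best
    match the corresponding patches of the filled-in instance."""
    def total_score(name):
        return sum(patch_score(p, q)
                   for p, q in zip(templates[name], instance_patches))
    return max(templates, key=total_score)

# Hypothetical 1-D 'patches' around bad key points (real ones are 2-D).
templates = {
    "v1": [[0, 0, 1, 1], [1, 0, 0, 1]],
    "v2": [[1, 1, 1, 1], [0, 0, 0, 0]],
}
instance = [[0, 0, 1, 1], [1, 0, 0, 0]]
print(best_template(instance, templates))  # v1
```

The design intuition from the abstract: alignment already handles the parts of the form that all variants share, so only the poorly matching key points carry version information, and comparing patches there is enough to tell the variants apart.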

Interactively Predicting Fields In A Form

US Patent:
2018014, May 24, 2018
Filed:
Jan 19, 2018
Appl. No.:
15/875969
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification:
G06F 17/24
G06F 3/0481
G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents the boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that they want to digitize it. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.

Analyzing Content Of Digital Images

US Patent:
2017025, Sep 7, 2017
Filed:
Apr 10, 2017
Appl. No.:
15/483291
Inventors:
- Oakland CA, US
Yoriyasu Yano - Oakland CA, US
Hui Peng Hu - Berkeley CA, US
Kuang Chen - Oakland CA, US
International Classification:
G06K 9/46
G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.

Interactively Predicting Fields In A Form

US Patent:
2017004, Feb 16, 2017
Filed:
Aug 12, 2015
Appl. No.:
14/824493
Inventors:
- Oakland CA, US
Hui Peng Hu - Berkeley CA, US
International Classification:
G06F 17/24
G06F 3/0481
G06F 17/30
Abstract:
Methods, apparatuses, and embodiments related to interactively predicting fields in a form. A computer system receives an image of a form. A user moves a cursor to a first field of the form, and the computer system automatically displays a predicted location of the field, including a bounding box that represents the boundary of the field. The computer system further predicts the field name/label based on text in the document. The user clicks on the field to indicate that they want to digitize it. When needed, the user interactively modifies the size of the bounding box that represents the extent of the field and changes the name/label of the field. Once finalized, the user can cause the field information (e.g., the bounding box coordinates, the bounding box location, the name/label of the field, etc.) to be written to a database.

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.