BackgroundCheck.run

Yang Jiao, 4630 61st St, New York, NY 10023

Yang Jiao Phones & Addresses

30 61st St, New York, NY 10023    212-929-2827

1965 Broadway, Apt 20A, New York, NY 10023

60 23rd St, New York, NY 10010    212-929-2827

Bantam, CT   

Scarsdale, NY   

Berkeley, CA   

Pleasanton, CA   

Fremont, CA   

Sunnyvale, CA   

Mentions for Yang Jiao

Yang Jiao resumes & CV records



Yang Jiao - Edison, NJ

Work:
Center of Alcohol Studies, Rutgers University 2010 to 2000
Graduate Research Assistant
Education:
The State University of New Jersey - New Brunswick, NJ 2011 to 2014
Ph.D. in Statistics
The State University of New Jersey - New Brunswick, NJ 2009 to 2011
M.S. in Statistics
East China Normal University 2005 to 2009
B.S. in Mathematics

Yang Jiao - West New York, NJ

Work:
Stevens Institute of Technology 2009 to 2000
Research Assistant
Responsive Magneto-Micelles for Drug Delivery Mar 2014 to May 2014
Stevens Institute of Technology 2010 to 2011
Research Assistant, Stevens Institute of Technology, NJ
Beijing University of Chemical Technology 2007 to 2009
Research Assistant
Education:
Stevens Institute of Technology - Hoboken, NJ Aug 2010
Ph.D. in Chemical Engineering
Beijing University of Chemical Technology Sep 2006 to Jun 2009
M.S. in Materials Science
Beijing University of Chemical Technology Sep 2002 to Jun 2006
B.S. in Polymer Science and Engineering

Yang Jiao - New York, NY

Work:
PULSE ADVISORY May 2013 to 2000
Business Analyst - Capital Raising Group
Diane Terman Public Relations - New York, NY Oct 2012 to May 2013
Public Relations Intern/ Assistant to the President
MERCEDES-BENZ, LTD Jun 2010 to Aug 2010
Business Operations Intern - Finance and Controlling
Education:
ZHEJIANG NORMAL UNIVERSITY Jun 2012
BA in Chinese Literature & Education
NEW YORK UNIVERSITY - New York, NY Apr 2000
M.S. in Public Relations & Corporate Communication
Skills:
Financial statement analysis; financial modeling using discounted cash flow; social media analytics.

Yang Jiao - Newark, NJ

Work:
CHINA REINSURANCE CORPORATION, ASSET MANAGEMENT COMPANY 2009 to 2010
Portfolio Management Intern
CITIC SECURITIES CO., LTD 2008 to 2009
Risk Management Intern
GOVERNMENT OFFICE OF FENGDU COUNTY 2008 to 2009
County Government Officer Intern
Education:
THE STATE UNIVERSITY OF NEW JERSEY - Newark, NJ Jan 2010 to Jan 2012
Master of Quantitative Finance in Business
SOUTHWESTERN UNIVERSITY OF FINANCE AND ECONOMICS - Chengdu Jan 2006 to Jan 2010
Bachelor of Economics in Finance
Skills:
C++, MATLAB, Python, After Effects

Publications & IP owners

Us Patents

Seek-And-Scan Probe Memory Devices With Nanostructures For Improved Bit Size And Resistance Contrast When Reading And Writing To Phase-Change Media

US Patent:
2009000, Jan 1, 2009
Filed:
Jun 29, 2007
Appl. No.:
11/824382
Inventors:
Nathan Franklin - San Mateo CA, US
Qing Ma - San Jose CA, US
Valluri R. Rao - Saratoga CA, US
Mike Brown - Phoenix AZ, US
Yang Jiao - Sunnyvale CA, US
International Classification:
H01L 47/00
US Classification:
257 2, 257E47001
Abstract:
A seek-and-scan probe memory device comprising a patterned capping layer over a phase-change media, where the patterned capping layer defines the bit locations on the phase-change media. The patterned capping layer may be formed from self-assembled structures. In other embodiments, nanostructures are formed on the bottom electrode below the phase-change media to focus an applied electric field from the probe, so as to increase bit density and contrast. The nanostructures may be a regular or random array of nanostructures, formed by using a self-assembling material. The nanostructures may be conductive or non-conductive. Other embodiments are described and claimed.

Efficient And More Advanced Implementation Of Ring-Allreduce Algorithm For Distributed Parallel Deep Learning

US Patent:
2023008, Mar 23, 2023
Filed:
Nov 28, 2022
Appl. No.:
18/059368
Inventors:
- George Town, KY
Yang JIAO - San Mateo CA, US
International Classification:
G06F 9/52
G06N 20/00
G06F 9/48
Abstract:
The present disclosure provides a method for syncing data of a computing task across a plurality of groups of computing nodes, each group comprising a set of computing nodes A-D, a set of intra-group interconnects that communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C, and a set of inter-group interconnects that communicatively couple a computing node A of a first group of the plurality of groups with a computing node A of a second group neighboring the first group, a computing node B of the first group with a computing node B of the second group, a computing node C of the first group with a computing node C of the second group, and a computing node D of the first group with a computing node D of the second group, the method comprising: syncing across a first dimension of computing nodes using a first set of ring connections, wherein the first set of ring connections are formed using inter-group and intra-group interconnects that communicatively couple the computing nodes along the first dimension; and broadcasting synced data across a second dimension of computing nodes using a second ring connection.
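The ring-allreduce pattern the abstract refers to can be illustrated with a short serial simulation: each vector is split into one chunk per node, a reduce-scatter pass circulates and sums chunks around the ring, and an all-gather pass broadcasts the reduced chunks back. This is a minimal NumPy sketch of the generic algorithm, not the patented multi-group implementation; the function name and chunk indexing are illustrative assumptions.

```python
import numpy as np

def ring_allreduce(vectors):
    # Serial simulation of ring-allreduce over n "nodes": every node ends
    # up holding the element-wise sum of all nodes' vectors, exchanging
    # only one chunk per neighbor per step.
    n = len(vectors)
    parts = [np.array_split(v.astype(float).copy(), n) for v in vectors]
    # Reduce-scatter: after n-1 steps node r fully owns reduced chunk (r+1) % n.
    for t in range(n - 1):
        for rank in range(n):
            src = (rank - 1) % n          # upstream ring neighbor
            idx = (rank - 1 - t) % n      # chunk received at this step
            parts[rank][idx] = parts[rank][idx] + parts[src][idx]
    # All-gather: circulate each fully reduced chunk around the ring.
    for t in range(n - 1):
        for rank in range(n):
            src = (rank - 1) % n
            idx = (rank - t) % n
            parts[rank][idx] = parts[src][idx].copy()
    return [np.concatenate(p) for p in parts]
```

Each node sends and receives a different chunk index at every step, which is why the in-place serial loop reproduces the parallel exchange faithfully.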

Multi-Size Convolutional Layer Background

US Patent:
2021035, Nov 18, 2021
Filed:
May 12, 2020
Appl. No.:
16/872979
Inventors:
- George Town, KY
Chao CHENG - San Mateo CA, US
Yang JIAO - San Mateo CA, US
International Classification:
G06N 3/04
G06F 17/18
G06F 17/15
Abstract:
Systems and methods for improved convolutional layers for neural networks are disclosed. An improved convolutional layer can obtain at least two input feature maps of differing channel sizes. The improved convolutional layer can generate an output feature map for each one of the at least two input feature maps. Each input feature map can be applied to a convolutional sub-layer to generate an intermediate feature map. For each intermediate feature map, versions of the remaining intermediate feature maps can be resized to match the channel size of the intermediate feature map. For each intermediate feature map, an output feature map can be generated by combining the intermediate feature map and the corresponding resized versions of the remaining intermediate feature maps.
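The combination step the abstract describes can be sketched as follows: each intermediate feature map keeps its own channel size, the other maps are resized to match it, and the results are summed. This is a toy NumPy sketch under stated assumptions, not the patented layer: the convolutional sub-layers are omitted, and nearest-neighbor channel resizing stands in for whatever learned resizing the layer would use.

```python
import numpy as np

def resize_channels(x, c_out):
    # Nearest-neighbor resize along the channel axis (axis 0) of a
    # (C, H, W) feature map; a stand-in for a learned resizing op.
    c_in = x.shape[0]
    idx = np.arange(c_out) * c_in // c_out
    return x[idx]

def multi_size_combine(intermediates):
    # For each intermediate feature map, resize the remaining maps to its
    # channel size and sum them in, mirroring the abstract's final step.
    outputs = []
    for i, fm in enumerate(intermediates):
        out = fm.copy()
        for j, other in enumerate(intermediates):
            if j != i:
                out += resize_channels(other, fm.shape[0])
        outputs.append(out)
    return outputs
```

Note that the output list has one feature map per input, each at its original channel size but carrying information from every branch.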

Contribution Incrementality Machine Learning Models

US Patent:
2021032, Oct 21, 2021
Filed:
Dec 5, 2019
Appl. No.:
17/278395
Inventors:
- Mountain View CA, US
Ali Nasiri Amini - Redwood City CA, US
Jing Wang - Mountain View CA, US
Mert Dikmen - Belmont CA, US
Amy Richardson - Santa Cruz CA, US
Dinah Shender - Sunnyvale CA, US
Junji Takagi - Sunnyvale CA, US
Sen Li - Mountain View CA, US
Ruoyi Jiang - Sunnyvale CA, US
Yang Jiao - San Mateo CA, US
Yang Zhang - Sunnyvale CA, US
Zhuo Zhang - Santa Clara CA, US
International Classification:
G06F 11/34
G06N 20/00
Abstract:
Methods, systems, and computer programs encoded on a computer storage medium, for training and using machine learning models are disclosed. Methods include creating a model that represents relationships between user attributes, content exposures, and performance levels for a target action using organic exposure data specifying one or more organic exposures experienced by a particular user over a specified time prior to performance of a target action by the particular user and third party exposure data specifying third party exposures of a specified type of digital component to the particular user over the specified time period. Using the model, an incremental performance level attributable to each of the third party exposures at an action time when the target action was performed by the particular user is determined. Transmission criteria for at least some digital components to which the particular user was exposed are modified based on the incremental performance.

Efficient Inter-Chip Interconnect Topology For Distributed Parallel Deep Learning

US Patent:
2021024, Aug 5, 2021
Filed:
Jan 30, 2020
Appl. No.:
16/777683
Inventors:
- George Town, KY
Yang JIAO - San Mateo CA, US
International Classification:
G06F 9/50
G06N 3/08
G06N 3/063
Abstract:
The present disclosure provides a system comprising: a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprising: a set of computing nodes A-D, and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group.
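The interconnect structure in this abstract can be checked with a small graph construction: within each group, nodes A-D are wired A-B, A-C, D-B, D-C (a 4-cycle), and same-letter nodes of neighboring groups are linked. A minimal sketch, assuming for illustration that the groups themselves are arranged on a ring (the claim only requires neighboring groups):

```python
def build_topology(num_groups):
    # Edge set for the claimed topology: nodes are (group, letter) pairs.
    letters = "ABCD"
    intra = [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C")]
    edges = set()
    for g in range(num_groups):
        # Intra-group interconnects: A-B, A-C, D-B, D-C.
        for u, v in intra:
            edges.add(frozenset({(g, u), (g, v)}))
        # Inter-group interconnects to the next group on the ring,
        # linking same-letter nodes.
        nxt = (g + 1) % num_groups
        for letter in letters:
            edges.add(frozenset({(g, letter), (nxt, letter)}))
    return edges
```

With at least three groups on the ring, every node has degree 4 (two intra-group links plus one inter-group link to each neighboring group), which is what makes the ring connections of the related allreduce claims possible.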

Hyper-Square Implementation Of Tree Allreduce Algorithm For Distributed Parallel Deep Learning

US Patent:
2021024, Aug 5, 2021
Filed:
Jan 30, 2020
Appl. No.:
16/777731
Inventors:
- George Town, KY
Yang JIAO - San Mateo CA, US
International Classification:
G06N 3/063
G06N 3/04
G06N 3/08
Abstract:
The present disclosure provides a method for syncing data of a computing task across a plurality of groups of computing nodes. Each group includes a set of computing nodes A-D, a set of intra-group interconnects that communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C, and a set of inter-group interconnects that communicatively couple each of computing nodes A-D with corresponding computing nodes A-D in each of a plurality of neighboring groups. The method comprises syncing data at a computing node of the plurality of groups of computing nodes using inter-group interconnects and intra-group interconnects along four different directions relative to the node; and broadcasting synced data from the node to the plurality of groups of computing nodes using inter-group interconnects and intra-group interconnects along four different directions relative to the node.
