BackgroundCheck.run

Jeffrey Chao, 3733235 Transit Ave, Union City, CA 94587

Jeffrey Chao Phones & Addresses

Union City, CA   

Troy, NY   

Mentions for Jeffrey Chao

Career records & work history

Medicine Doctors

Jeffrey Chao

Specialties:
Psychiatry
Work:
Natalia E Fudim MD
4147 Adams Ave, San Diego, CA 92116
619-281-1932 (phone), 619-281-1947 (fax)
Education:
Medical School
Thomas Jefferson University, Jefferson Medical College
Graduated: 1994
Procedures:
Psychiatric Diagnosis or Evaluation, Psychiatric Therapeutic Procedures
Conditions:
Anxiety Phobic Disorders, Attention Deficit Disorder (ADD), Depressive Disorders, Obsessive-Compulsive Disorder (OCD), Post Traumatic Stress Disorder (PTSD), Anxiety Dissociative and Somatoform Disorders, Bipolar Disorder, Bulimia Nervosa, Dementia, Eating Disorders, Schizophrenia
Languages:
English
Description:
Dr. Chao graduated from the Thomas Jefferson University, Jefferson Medical College in 1994. He works in San Diego, CA and specializes in Psychiatry.

Jeffrey C. Chao

Specialties:
General Surgery
Work:
Kaiser Permanente Medical Group, Kaiser Permanente Riverside Surgery
10800 Magnolia Ave BLDG 1 FL 5, Riverside, CA 92505
951-353-2000 (phone), 951-353-3906 (fax)
Education:
Medical School
University of Louisville School of Medicine
Graduated: 2001
Procedures:
Endoscopic Retrograde Cholangiopancreatography (ERCP), Laparoscopic Gallbladder Removal
Conditions:
Abdominal Hernia, Breast Disorders, Cholelithiasis or Cholecystitis, Malignant Neoplasm of Female Breast, Inguinal Hernia
Languages:
English, Spanish
Description:
Dr. Chao graduated from the University of Louisville School of Medicine in 2001. He works in Riverside, CA and specializes in General Surgery. Dr. Chao is affiliated with Kaiser Permanente Riverside Medical Center.

Jeffrey C Chao


Jeffrey Chunya Chao

Specialties:
Surgery
Education:
University of Louisville (2001)

Jeffrey Chao

Specialties:
Psychiatry
Education:
Thomas Jefferson University (1994)

Jeffrey Chao resumes & CV records

Resumes


Jeffrey Chao

Publications & IP owners

US Patents

Stream Processing Task Deployment Using Precompiled Libraries

US Patent:
2019025, Aug 15, 2019
Filed:
Apr 26, 2019
Appl. No.:
16/396522
Inventors:
- San Francisco CA, US
Jeffrey Chao - San Francisco CA, US
International Classification:
G06F 9/48
G06F 9/455
G06F 9/445
G06F 8/71
G06F 9/50
Abstract:
The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need of accompanying application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques that take anywhere between 5 to 15 minutes for deployment.
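The deployment idea in this abstract can be illustrated with a minimal, hypothetical sketch (all names here are invented, not from the patent): workers are initialized once with precompiled dependency libraries, so a deploy ships only the application code, which is linked against the local libraries at runtime instead of being bundled into a "fat jar".

```python
PRELOADED_LIBS = {
    # Installed at worker startup, shared by every application deployed later.
    "json-parser": "<precompiled json-parser 1.4>",
    "http-client": "<precompiled http-client 2.0>",
}

def deploy(app_code, requires):
    """Ship only the app code; resolve each dependency from the local cache."""
    missing = [lib for lib in requires if lib not in PRELOADED_LIBS]
    if missing:
        raise RuntimeError("worker not initialized with: %s" % missing)
    # "Linking" is modeled as pairing the code with local library handles.
    return {"code": app_code,
            "linked": {lib: PRELOADED_LIBS[lib] for lib in requires}}

app = deploy("def handle(event): ...", ["json-parser"])
```

Because nothing but the (small) application code crosses the wire, a deploy of this shape can complete far faster than one that ships every dependency along with it.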

Managing Resource Allocation In A Stream Processing Framework

US Patent:
2019016, May 30, 2019
Filed:
Nov 26, 2018
Appl. No.:
16/200365
Inventors:
- San Francisco CA, US
Jeffrey CHAO - San Francisco CA, US
International Classification:
G06F 9/50
Abstract:
The technology disclosed herein relates to method, system, and computer program product (computer-readable storage device) embodiments for managing resource allocation in a stream processing framework. An embodiment operates by configuring an allocation of a task sequence and machine resources to a container, partitioning a data stream into a plurality of batches arranged for parallel processing by the container via the machine resources allocated to the container, and running the task sequence, running at least one batch of the plurality of batches. Some embodiments may also include changing the allocation responsive to a determination of an increase in data volume, and may further include changing the allocation to a previous state of the allocation, responsive to a determination of a decrease in data volume. Additionally, time-based throughput of the data stream may be monitored for a given worker node configured to run a batch of the plurality of batches.
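The embodiment above can be sketched in a few lines of hypothetical code (invented names, not the patent's implementation): a stream is partitioned into batches for parallel processing, the container's allocation grows when data volume increases, and it can be reverted to its previous state when volume drops.

```python
def partition(stream, batch_size):
    """Split a data stream into fixed-size batches for parallel processing."""
    return [stream[i:i + batch_size] for i in range(0, len(stream), batch_size)]

class Container:
    def __init__(self, workers):
        self.workers = workers   # machine resources currently allocated
        self._history = []       # previous allocations, kept for reverting

    def scale_to(self, workers):
        """Change the allocation, e.g. in response to rising data volume."""
        self._history.append(self.workers)
        self.workers = workers

    def revert(self):
        """Return to the previous allocation when data volume decreases."""
        self.workers = self._history.pop()
```

A container scaled from 2 to 8 workers for a traffic spike can later `revert()` to exactly its prior state, which is the "previous state of the allocation" behavior the abstract describes.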

Providing Strong Ordering In Multi-Stage Streaming Processing

US Patent:
2019015, May 23, 2019
Filed:
Jan 28, 2019
Appl. No.:
16/259745
Inventors:
- San Francisco CA, US
Jeffrey Chao - Campbell CA, US
International Classification:
G06F 9/48
Abstract:
The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid. Also included is determining, for a current batch-stage, batch-units pending dispatch, in response to receiving the current batch-stage information; identifying physical threads that processed batch-units for a previous batch-stage on which the current batch-stage depends and have registered pending tasks for the current batch-stage; and dispatching the batch-units for the current batch-stage to the identified physical threads subsequent to complete processing of the batch-units for the previous batch-stage.
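The dispatch rule this abstract describes can be modeled with a toy sketch (all names invented): batch-units of the current stage are sent to the same physical threads that processed them in the previous stage, and only after that previous stage has completed.

```python
def dispatch_stage(units, num_threads, prev_assignment=None, prev_done=True):
    """Return {batch_unit: thread_index} for one stage of a batch."""
    if prev_assignment is None:
        # First stage: spread batch-units across threads round-robin.
        return {u: i % num_threads for i, u in enumerate(units)}
    if not prev_done:
        raise RuntimeError("previous stage still processing; cannot dispatch")
    # Later stages: reuse the thread that handled each unit previously,
    # which is what preserves ordering across stages.
    return {u: prev_assignment[u] for u in units}

stage1 = dispatch_stage(["u0", "u1", "u2"], num_threads=2)
stage2 = dispatch_stage(["u0", "u1", "u2"], num_threads=2,
                        prev_assignment=stage1)
```

Pinning each unit to its stage-1 thread, and refusing to dispatch until that stage finishes, is the strong-ordering guarantee in miniature.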

Recovery Strategy For A Stream Processing System

US Patent:
2018030, Oct 25, 2018
Filed:
Apr 16, 2018
Appl. No.:
15/954014
Inventors:
- San Francisco CA, US
Jeffrey Chao - San Francisco CA, US
Assignee:
salesforce.com, inc. - San Francisco CA
International Classification:
G06F 11/14
G06F 11/20
Abstract:
The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.
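The over-count problem the abstract identifies is easy to reproduce in a few hypothetical lines: if counter updates from the original play survive a failure, and the batch is replayed with an input that is not a superset of the original, naively re-applying the updates inflates the counts.

```python
from collections import Counter

counts = Counter()

def play(batch):
    counts.update(batch)   # counter updates persisted before the failure

play(["a", "b", "b"])      # original play, then a processing failure
play(["a", "b"])           # replay with a modified (smaller) batch
# "a" now counts 2 and "b" counts 3, even though no single correct
# processing of either batch would produce those totals.
```

This is the modified-batch-size failure mode: correctness of replay-based recovery depends on the replayed input matching (or at least containing) the original.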

Maintaining Throughput Of A Stream Processing Framework While Increasing Processing Load

US Patent:
2018025, Sep 6, 2018
Filed:
May 7, 2018
Appl. No.:
15/973230
Inventors:
- San Francisco CA, US
Jeffrey Chao - San Francisco CA, US
International Classification:
G06F 9/50
G06F 3/06
Abstract:
The technology disclosed relates to maintaining throughput of a stream processing framework while increasing processing load. In particular, it relates to defining a container over at least one worker node that has a plurality of workers, with one worker utilizing a whole core within a worker node, and queuing data from one or more incoming near real-time (NRT) data streams in multiple pipelines that run in the container and have connections to at least one common resource external to the container. It further relates to concurrently executing the pipelines at a number of workers as batches, and limiting simultaneous connections to the common resource to the number of workers by providing a shared connection to a set of batches running on a same worker regardless of the pipelines to which the batches in the set belong.
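The connection-sharing rule in this abstract can be sketched minimally (invented names): every batch running on a given worker receives that worker's single connection to the external resource, regardless of which pipeline the batch belongs to, so simultaneous connections never exceed the number of workers.

```python
class Worker:
    def __init__(self):
        self._conn = None

    def connection(self):
        # Lazily open one connection per worker and hand it to every batch
        # scheduled here, whatever pipeline the batch came from.
        if self._conn is None:
            self._conn = object()   # stand-in for a real external connection
        return self._conn

workers = [Worker() for _ in range(3)]
# Two batches from different pipelines scheduled on the same worker
# share one connection:
conn_a = workers[0].connection()
conn_b = workers[0].connection()
```

With 3 workers, at most 3 connections to the common resource are ever open, no matter how many pipelines or batches are in flight.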

Compact Task Deployment For Stream Processing Systems

US Patent:
2018007, Mar 15, 2018
Filed:
Sep 14, 2016
Appl. No.:
15/265817
Inventors:
- San Francisco CA, US
Jeffrey CHAO - San Francisco CA, US
Assignee:
salesforce.com, inc. - San Francisco CA
International Classification:
G06F 9/48
G06F 9/50
G06F 9/455
G06F 9/44
Abstract:
The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need of accompanying application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques that take anywhere between 5 to 15 minutes for deployment.

Maintaining Throughput Of A Stream Processing Framework While Increasing Processing Load

US Patent:
2017008, Mar 23, 2017
Filed:
Dec 31, 2015
Appl. No.:
14/986401
Inventors:
- San Francisco CA, US
Jeffrey Chao - San Francisco CA, US
Assignee:
salesforce.com, inc. - San Francisco CA
International Classification:
G06F 9/50
G06F 3/06
Abstract:
The technology disclosed relates to maintaining throughput of a stream processing framework while increasing processing load. In particular, it relates to defining a container over at least one worker node that has a plurality of workers, with one worker utilizing a whole core within a worker node, and queuing data from one or more incoming near real-time (NRT) data streams in multiple pipelines that run in the container and have connections to at least one common resource external to the container. It further relates to concurrently executing the pipelines at a number of workers as batches, and limiting simultaneous connections to the common resource to the number of workers by providing a shared connection to a set of batches running on a same worker regardless of the pipelines to which the batches in the set belong.

Managing Processing Of Long Tail Task Sequences In A Stream Processing Framework

US Patent:
2017008, Mar 23, 2017
Filed:
Dec 31, 2015
Appl. No.:
14/986419
Inventors:
- San Francisco CA, US
Jeffrey Chao - San Francisco CA, US
Assignee:
salesforce.com, inc. - San Francisco CA
International Classification:
G06F 9/50
G06F 17/30
Abstract:
The technology disclosed relates to managing processing of long tail task sequences in a stream processing framework. In particular, it relates to operating a computing grid that includes a plurality of physical threads which processes data from one or more near real-time (NRT) data streams for multiple task sequences, and queuing data from the NRT data streams as batches in multiple pipelines using a grid-coordinator that controls dispatch of the batches to the physical threads. The method also includes assigning a priority-level to each of the pipelines using a grid-scheduler, wherein the grid-scheduler initiates execution of a first number of batches from a first pipeline before execution of a second number of batches from a second pipeline, responsive to respective priority levels of the first and second pipelines.
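The grid-scheduler behavior described above reduces to a simple rule, sketched here with invented names: batches from a higher-priority pipeline are initiated before batches from a lower-priority one.

```python
def schedule(pipelines):
    """pipelines: {name: (priority, [batches])}; higher priority runs first."""
    order = []
    for name, (_prio, batches) in sorted(
            pipelines.items(), key=lambda kv: -kv[1][0]):
        order.extend((name, batch) for batch in batches)
    return order

run = schedule({"long_tail": (1, ["t1"]), "urgent": (9, ["u1", "u2"])})
```

A long-tail task sequence with low priority still runs to completion, but only after the scheduler has initiated the batches of every higher-priority pipeline, which is the dispatch order the abstract claims.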

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.