BackgroundCheck.run

Ning Hai Lin, 64
Port Orchard, WA

Ning Lin Phones & Addresses

Port Orchard, WA   

Mill Creek, WA   

Renton, WA   

PO Box 1171, Port Orchard, WA 98366   

Mentions for Ning Hai Lin

Career records & work history

Medicine Doctors

Ning Lin

Specialties:
Ophthalmology
Work:
Sante Community Physicians, Eye & Vision Of Central California
2325 W Cleveland Ave STE 103, Madera, CA 93637
559-674-4700 (phone)
Education:
Medical School
Sun Yat Sen Univ of Med Sci, Guangzhou, China (242 21 Pr 1/71)
Graduated: 1982
Procedures:
Corneal Surgery, Destruction of Lesion of Retina and Choroid, Eyeglass Fitting, Lens and Cataract Procedures, Ophthalmological Exam
Conditions:
Acute Conjunctivitis, Cataract, Diabetic Retinopathy, Glaucoma, Keratitis, Macular Degeneration, Primary Angle-Closure Glaucoma
Languages:
English, Spanish
Description:
Dr. Lin graduated from the Sun Yat Sen Univ of Med Sci, Guangzhou, China (242 21 Pr 1/71) in 1982. He works in Madera, CA and specializes in Ophthalmology. Dr. Lin is affiliated with Saint Agnes Medical Center.

Ning Lin

Specialties:
Surgery, Neurological
Work:
NewYork-Presbyterian/Queens Medical Group, Neurological Surgery
5620 Main St STE 300, Flushing, NY 11355
718-670-1837 (phone), 718-661-7186 (fax)
Languages:
English, Spanish
Description:
Dr. Lin works in Flushing, NY and specializes in Neurological Surgery.

Ning Lin resumes & CV records

Resumes


Ning Lin

Work:
Kk
Dd

Dd At Kk

Position:
dd at kk
Location:
United States
Industry:
Consumer Services
Work:
kk
dd

Publications & IP owners

Us Patents

Proactive Load Balancing

US Patent:
8073952, Dec 6, 2011
Filed:
Apr 22, 2009
Appl. No.:
12/427774
Inventors:
Won Suk Yoo - Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Himanshu Patel - Redmond WA, US
Ning Lin - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 15/173
US Classification:
709/226, 709/227, 710/36
Abstract:
A load balancing system is described herein that proactively balances client requests among multiple destination servers using information about anticipated loads or events on each destination server to inform the load balancing decision. The system detects one or more upcoming events that will affect the performance and/or capacity for handling requests of a destination server. Upon detecting the event, the system informs the load balancer to drain connections around the time of the event. Next, the event occurs on the destination server, and the system detects when the event is complete. In response, the system informs the load balancer to restore connections to the destination server. In this way, the system is able to redirect clients to other available destination servers before the tasks occur. Thus, the load balancing system provides more efficient routing of client requests and improves responsiveness.
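The core idea in the abstract is simple: when a destination server has a known upcoming event, drain it ahead of time and restore it afterward. The sketch below illustrates that flow; the class, method names, and server labels are assumptions for illustration only, not an API described in the patent.

```python
import random

class ProactiveLoadBalancer:
    """Minimal sketch of proactive draining around scheduled server events."""

    def __init__(self, servers):
        self.servers = set(servers)
        self.draining = set()          # servers with an upcoming event

    def event_scheduled(self, server):
        # An upcoming event (e.g., cache flush, scheduled maintenance) will
        # reduce this server's capacity: stop sending it new requests.
        self.draining.add(server)

    def event_completed(self, server):
        # The event finished; restore the server to the rotation.
        self.draining.discard(server)

    def route(self, request):
        available = list(self.servers - self.draining)
        if not available:
            raise RuntimeError("no destination servers available")
        return random.choice(available)

lb = ProactiveLoadBalancer(["app1", "app2", "app3"])
lb.event_scheduled("app2")      # drain ahead of the event
print(lb.route("GET /"))        # never routes to app2
lb.event_completed("app2")      # restore after the event
```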

Network Caching For Multiple Contemporaneous Requests

US Patent:
8046432, Oct 25, 2011
Filed:
Apr 17, 2009
Appl. No.:
12/425395
Inventors:
Won Suk Yoo - Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Himanshu Patel - Redmond WA, US
John A. Bocharov - Seattle WA, US
Ning Lin - Redmond WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 15/16
G06F 15/167
US Classification:
709/217, 709/213
Abstract:
A live caching system is described herein that reduces the burden on origin servers for serving live content. In response to receiving a first request that results in a cache miss, the system forwards the first request to the next tier while “holding” other requests for the same content. If the system receives a second request while the first request is pending, the system will recognize that a similar request is outstanding and hold the second request by not forwarding the request to the origin server. After the response to the first request arrives from the next tier, the system shares the response with other held requests. Thus, the live caching system allows a content provider to prepare for very large events by adding more cache hardware and building out a cache server network rather than by increasing the capacity of the origin server.
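The "hold duplicate requests" behavior described above is essentially request coalescing: only the first cache miss goes to the next tier, and later requests for the same content wait for that response. Here is a minimal thread-based sketch under that reading; the class and the fetch_from_origin callback are placeholders, not interfaces from the patent.

```python
import threading

class CoalescingCache:
    """Sketch: forward the first miss, hold duplicates, share the response."""

    def __init__(self, fetch_from_origin):
        self.fetch_from_origin = fetch_from_origin
        self.cache = {}
        self.pending = {}              # url -> Event for in-flight fetches
        self.lock = threading.Lock()

    def get(self, url):
        with self.lock:
            if url in self.cache:                 # cache hit
                return self.cache[url]
            event = self.pending.get(url)
            if event is None:                     # first miss: we fetch
                event = threading.Event()
                self.pending[url] = event
                owner = True
            else:                                 # duplicate: hold it
                owner = False
        if owner:
            body = self.fetch_from_origin(url)    # single request to origin
            with self.lock:
                self.cache[url] = body
                del self.pending[url]
            event.set()                           # release held requests
            return body
        event.wait()                              # wait for the owner's fetch
        return self.cache[url]

cache = CoalescingCache(lambda url: b"payload for " + url.encode())
print(cache.get("http://origin.example.com/live/segment1.ts"))
```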

Byte Range Caching

US Patent:
2010031, Dec 16, 2010
Filed:
Jun 16, 2009
Appl. No.:
12/485090
Inventors:
Won Suk Yoo - Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Himanshu Patel - Redmond WA, US
Ning Lin - Redmond WA, US
Chittaranjan Pattekar - Bothell WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 15/16
G06F 12/08
US Classification:
709/219, 711/118, 711/E12.017
Abstract:
A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.
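The chunking logic in the abstract reduces to two steps: find which fixed-size chunks a requested byte range overlaps, then fetch only the chunks that are not already cached. A small sketch of that arithmetic follows; the chunk size, function names, and the fetch_chunk callback are assumptions, not values from the patent.

```python
CHUNK_SIZE = 256 * 1024   # illustrative chunk size; not specified by the patent

def overlapped_chunks(start, end, chunk_size=CHUNK_SIZE):
    """Indexes of the fixed-size chunks covered by byte range [start, end]."""
    return range(start // chunk_size, end // chunk_size + 1)

def serve_range(url, start, end, cache, fetch_chunk):
    """Serve bytes [start, end] of url, fetching only chunks not yet cached.

    cache maps (url, chunk_index) -> bytes; fetch_chunk stands in for a
    byte-range request to the origin server for one chunk.
    """
    parts = []
    for i in overlapped_chunks(start, end):
        key = (url, i)
        if key not in cache:
            cache[key] = fetch_chunk(url, i * CHUNK_SIZE,
                                     (i + 1) * CHUNK_SIZE - 1)
        parts.append(cache[key])
    whole = b"".join(parts)
    first_chunk_start = (start // CHUNK_SIZE) * CHUNK_SIZE
    return whole[start - first_chunk_start : end - first_chunk_start + 1]

cache = {}
fake_origin = lambda url, s, e: bytes(e - s + 1)    # stand-in origin fetch
data = serve_range("http://origin.example.com/big.iso", 300_000, 600_000,
                   cache, fake_origin)
print(len(data))    # 300001 bytes, assembled from two cached chunks
```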

Low Latency Cacheable Media Streaming

US Patent:
2011008, Apr 7, 2011
Filed:
Nov 3, 2009
Appl. No.:
12/611133
Inventors:
John A. Bocharov - Seattle WA, US
Krishna Prakash Duggaraju - Renton WA, US
Lin Liu - Sammamish WA, US
Jack E. Freelander - Monroe WA, US
Ning Lin - Redmond WA, US
Anirban Roy - Kirkland WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
H04N 11/04
G06F 15/16
US Classification:
375/240.01, 709/231, 375/E07.001
Abstract:
A low latency streaming system provides a stateless protocol between a client and server with reduced latency. The server embeds incremental information in media fragments that eliminates the usage of a typical control channel. In addition, the server provides uniform media fragment responses to media fragment requests, thereby allowing existing Internet cache infrastructure to cache streaming media data. Each fragment has a distinguished Uniform Resource Locator (URL) that allows the fragment to be identified and cached by both Internet cache servers and the client's browser cache. The system reduces latency using various techniques, such as sending fragments that contain less than a full group of pictures (GOP), encoding media without dependencies on subsequent frames, and by allowing clients to request subsequent frames with only information about previous frames.
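The key cacheability point in the abstract is that each media fragment gets its own distinguished URL, so ordinary HTTP caches can store it and clients can request the next fragment without a stateful control channel. The sketch below uses a URL template in the style of Smooth Streaming purely for illustration; the exact template is an assumption, not taken from the patent.

```python
def fragment_url(base_url, bitrate, start_time):
    """Build a per-fragment, individually cacheable URL (illustrative template)."""
    return f"{base_url}/QualityLevels({bitrate})/Fragments(video={start_time})"

# A client that knows the timestamp of the previous fragment can request the
# next one directly; Internet caches and the browser cache store each
# response under its own URL.
print(fragment_url("http://example.com/live.isml", 1_500_000, 123456789))
```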

Selective Content Pre-Caching

US Patent:
2011013, Jun 2, 2011
Filed:
Nov 30, 2009
Appl. No.:
12/626957
Inventors:
Won Suk Yoo - Redmond WA, US
Venkat Raman Don - Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Ning Lin - Redmond WA, US
Chittaranjan Pattekar - Bothell WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 15/16
US Classification:
709/237, 709/230, 709/223
Abstract:
A selective pre-caching system reduces the amount of content cached at cache proxies by limiting the cached content to the content that a particular cache proxy is responsible for caching. This can substantially reduce the content stored on each cache proxy and reduce the amount of resources consumed for pre-caching in preparation for a particular event. The cache proxy receives a list of content items and an indication of the topology of the cache network. The cache proxy uses the received topology to determine which content items in the received list it is responsible for caching. The cache proxy then retrieves those content items so that they are available in the cache before client requests are received.
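One simple way to realize "each proxy only pre-caches what it is responsible for" is to map each content item to a proxy deterministically. The sketch below uses hashing for that assignment; the hashing scheme, proxy names, and URLs are assumptions for illustration, since the patent describes the responsibility split rather than a specific assignment function.

```python
import hashlib

def responsible_proxy(item_url, proxies):
    """Map a content item to exactly one proxy in the cache topology."""
    digest = hashlib.md5(item_url.encode()).hexdigest()
    return proxies[int(digest, 16) % len(proxies)]

def items_to_precache(all_items, proxies, my_name):
    # Keep only the items this proxy is responsible for, so they can be
    # fetched ahead of the event and be cached before clients arrive.
    return [u for u in all_items if responsible_proxy(u, proxies) == my_name]

proxies = ["edge-1", "edge-2", "edge-3"]
items = [f"http://origin.example.com/video/{i}.mp4" for i in range(10)]
print(items_to_precache(items, proxies, "edge-2"))
```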

Intelligent Caching For Requests With Query Strings

US Patent:
2011013, Jun 9, 2011
Filed:
Dec 3, 2009
Appl. No.:
12/629904
Inventors:
Won Suk Yoo - Redmond WA, US
Venkat Raman Don - Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Ning Lin - Redmond WA, US
Chittaranjan Pattekar - Bothell WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
G06F 17/30
G06F 12/08
US Classification:
707/713, 707/765, 711/118, 707/E17.017, 707/E17.032, 707/E17.115, 711/E12.017
Abstract:
An intelligent caching system is described herein that intelligently consolidates the name-value pairs in content requests containing query strings so that only substantially non-redundant responses are cached, thereby saving cache proxy resources. The intelligent caching system determines which name-value pairs in the query string can affect the redundancy of the content response and which name-value pairs can be ignored. The intelligent caching system organically builds the list of relevant name-value pairs by relying on a custom response header or other indication from the content server. Thus, the intelligent caching system results in fewer requests to the content server as well as fewer objects in the cache.
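In practice this means the cache key is built only from the query-string parameters that actually change the response, so requests that differ only in irrelevant parameters collapse to one cached object. The sketch below assumes the set of relevant parameters has already been learned (for example from a custom response header, as the abstract suggests); the sample path and parameter names are hypothetical.

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

# Per-path set of query parameters the content server indicated actually
# affect the response; the entries here are illustrative assumptions.
relevant_params = {"/products": {"id", "lang"}}

def cache_key(url):
    """Cache key that ignores query parameters that don't change the response."""
    parts = urlsplit(url)
    keep = relevant_params.get(parts.path)
    pairs = parse_qsl(parts.query, keep_blank_values=True)
    if keep is not None:
        pairs = [(k, v) for k, v in pairs if k in keep]
    return parts.path + "?" + urlencode(sorted(pairs))

# Both requests collapse to the same cache key, so only one copy is stored.
print(cache_key("http://shop.example.com/products?id=7&lang=en&sessionid=abc"))
print(cache_key("http://shop.example.com/products?lang=en&id=7&sessionid=xyz"))
```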

Byte Range Caching

US Patent:
2018016, Jun 7, 2018
Filed:
Oct 11, 2017
Appl. No.:
15/730301
Inventors:
- Redmond WA, US
Anil K. Ruia - Issaquah WA, US
Himanshu Patel - Redmond WA, US
Ning Lin - Redmond WA, US
Chittaranjan Pattekar - Bothell WA, US
International Classification:
H04N 21/643
H04N 21/61
H04N 21/231
H04L 29/08
H04L 29/06
Abstract:
A caching system segments content into multiple, individually cacheable chunks cached by a cache server that caches partial content and serves byte range requests with low latency and fewer duplicate requests to an origin server. The system receives a request from a client for a byte range of a content resource. The system determines the chunks overlapped by the specified byte range and sends a byte range request to the origin server for the overlapped chunks not already stored in a cache. The system stores the bytes of received responses as chunks in the cache and responds to the received request using the chunks stored in the cache. The system serves subsequent requests that overlap with previously requested ranges of bytes from the already retrieved chunks in the cache and makes requests to the origin server only for those chunks that a client has not previously requested.

Overwriting Existing Media Content With Viewer-Specific Advertisements

US Patent:
2014024, Aug 28, 2014
Filed:
Feb 22, 2013
Appl. No.:
13/774661
Inventors:
- Redmond WA, US
Ning Lin - Redmond WA, US
Pu Su - Redmond WA, US
Vishal Sood - Bothell WA, US
Assignee:
Microsoft Corporation - Redmond WA
International Classification:
H04N 21/81
US Classification:
725/32
Abstract:
Embodiments are directed to pacing on-demand linear advertisement entries to appear as being live entries, to generating a sequential segment map from a parallel playlist and to consolidating linear ad and main content portions into a single linear chunklist. In one embodiment, a computer system receives video content updates for a portion of live video programming, and generates a parallel playlist with parallel playlist entries that identify a presentation that is to be played. The computer system then generates a sequential segment map from the parallel playlist that identifies which parallel playlist entry is to be played, monitors a live position for new media, determines that an on-demand linear advertisement is to be played at the live position, and appends on-demand linear advertisement chunks to a chunklist to replace the main content chunks and play the on-demand linear advertisement entries in a pseudo-live format as if they were live.
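The chunklist manipulation described above amounts to cutting the main-content chunklist at the live position and appending the on-demand ad chunks so the player encounters them as if they were live. A rough sketch of that splice follows; the chunk descriptor fields and timing model are placeholders, not the format used by the system in the patent.

```python
def splice_ad(chunklist, live_position, ad_chunks):
    """Replace main-content chunks at the live position with ad chunks."""
    # Keep everything before the ad break, then append the on-demand ad
    # chunks so they play back-to-back from the live position onward.
    spliced = [c for c in chunklist if c["start"] < live_position]
    t = live_position
    for ad in ad_chunks:
        spliced.append({"url": ad["url"], "start": t, "duration": ad["duration"]})
        t += ad["duration"]
    return spliced

main = [{"url": f"main/{i}.ts", "start": i * 2, "duration": 2} for i in range(5)]
ads = [{"url": "ad/0.ts", "duration": 2}, {"url": "ad/1.ts", "duration": 2}]
print(splice_ad(main, live_position=6, ad_chunks=ads))
```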
