
Dongwook W Suh, 58, Springfield, VA

Dongwook Suh Phones & Addresses

Burke, VA   

18877 Westview Dr, Saratoga, CA 95070    669-342-7247

4658 Englewood Dr, San Jose, CA 95129   

37195 Creekside Ter, Fremont, CA 94536   

Philadelphia, PA   

Santa Clara, CA   

Gainesville, FL   

Social networks

Dongwook W Suh

Linkedin

Work

Company: Suh, Dongwook; Address: 4658 Englewood Dr, San Jose, CA 95129; Phone: 408-255-2269; Position: Executive; Industries: Eating Places

Mentions for Dongwook W Suh

Dongwook Suh resumes & CV records

Resumes


Dongwook Suh

Publications & IP owners

US Patents

Memory Architecture Having Multiple Partial Wordline Drivers And Contacted And Feed-Through Bitlines

US Patent:
7697364, Apr 13, 2010
Filed:
Dec 1, 2005
Appl. No.:
11/291219
Inventors:
Raymond Jit-Hung Sung - Sunnyvale CA, US
Dongwook Suh - Saratoga CA, US
Daniel Rodriguez - Hayward CA, US
Assignee:
Broadcom Corporation - Irvine CA
International Classification:
G11C 8/00
US Classification:
365/230.06, 365/230.03, 365/230.02
Abstract:
Various embodiments are disclosed relating to a memory circuit architecture. In an example embodiment, the memory array is divided into an upper half and a lower half, splitting the cache ways between the two halves; this arrangement may accommodate a change to a new memory size or cell aspect ratio, whether migrating between process nodes or staying within the same process generation, while retaining at least a portion of the periphery circuitry. The wordline may be split between the two array halves, with each half driven by a half wordline driver. In another embodiment, two sets of bitlines may be provided for each column: a contacted set of bitlines and a feed-through set of bitlines.
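The split-array organization the abstract describes can be pictured with a small software model. The sketch below is only an illustration of the idea (half of the cache ways per array half, each row activated within a half by its own half wordline driver); the array size, way count, class names, and merge behavior are assumptions for clarity, not the patented circuit.

```python
# Toy model of a memory array split into upper/lower halves (illustrative only).

NUM_ROWS = 8   # wordlines per half (assumed)
NUM_WAYS = 4   # cache ways, split two per half (assumed)

class HalfArray:
    """One half of the memory array, driven by its own half wordline driver."""
    def __init__(self, ways):
        self.ways = list(ways)
        self.cells = {(row, way): 0 for row in range(NUM_ROWS) for way in self.ways}

    def drive_wordline(self, row):
        # The half wordline driver activates the row only within this half.
        return [self.cells[(row, way)] for way in self.ways]

class SplitArray:
    """Array divided into an upper and a lower half; ways split between them."""
    def __init__(self):
        self.upper = HalfArray(ways=range(0, NUM_WAYS // 2))
        self.lower = HalfArray(ways=range(NUM_WAYS // 2, NUM_WAYS))

    def read_row(self, row):
        # The logical wordline is split: each half driver fires independently,
        # and the column periphery gathers the outputs of both halves.
        return self.upper.drive_wordline(row) + self.lower.drive_wordline(row)

if __name__ == "__main__":
    array = SplitArray()
    print(array.read_row(3))   # data from every way of row 3, across both halves
```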

Memory Architecture Having Multiple Partial Wordline Drivers And Contacted And Feed-Through Bitlines

US Patent:
8009506, Aug 30, 2011
Filed:
Mar 24, 2010
Appl. No.:
12/730873
Inventors:
Raymond J. Sung - Sunnyvale CA, US
Dongwook Suh - Saratoga CA, US
Daniel O. Rodriguez - Hayward CA, US
Assignee:
Broadcom Corporation - Irvine CA
International Classification:
G11C 8/00
US Classification:
365/230.06, 365/230.03, 365/230.02, 365/203
Abstract:
Various embodiments are disclosed relating to a memory circuit architecture. In an example embodiment, the memory array is divided into an upper half and a lower half, splitting the cache ways between the two halves; this arrangement may accommodate a change to a new memory size or cell aspect ratio, whether migrating between process nodes or staying within the same process generation, while retaining at least a portion of the periphery circuitry. The wordline may be split between the two array halves, with each half driven by a half wordline driver. In another embodiment, two sets of bitlines may be provided for each column: a contacted set of bitlines and a feed-through set of bitlines.

Memory Architecture Having Multiple Partial Wordline Drivers And Contacted And Feed-Through Bitlines

US Patent:
8477556, Jul 2, 2013
Filed:
Jul 26, 2011
Appl. No.:
13/191107
Inventors:
Raymond J. Sung - Sunnyvale CA, US
Dongwook Suh - Saratoga CA, US
Daniel O. Rodriguez - Hayward CA, US
Assignee:
Broadcom Corporation - Irvine CA
International Classification:
G11C 8/00
US Classification:
365/230.06, 365/230.03, 365/230.02
Abstract:
Various embodiments are disclosed relating to a memory circuit architecture. In an example embodiment, the memory array is divided into an upper half and a lower half, splitting the cache ways between the two halves; this arrangement may accommodate a change to a new memory size or cell aspect ratio, whether migrating between process nodes or staying within the same process generation, while retaining at least a portion of the periphery circuitry. The wordline may be split between the two array halves, with each half driven by a half wordline driver. In another embodiment, two sets of bitlines may be provided for each column: a contacted set of bitlines and a feed-through set of bitlines.

Neural Network Hardware Accelerator Architectures And Operating Method Thereof

US Patent:
2018007, Mar 15, 2018
Filed:
Aug 11, 2017
Appl. No.:
15/675358
Inventors:
- Gyeonggi-do, KR
Dongwook SUH - Saratoga CA, US
International Classification:
G06N 3/04
G06F 7/02
G11C 13/00
Abstract:
A memory-centric neural network system and its operating method include: a processing unit; semiconductor memory devices coupled to the processing unit and containing instructions executed by the processing unit; weight matrices, including a positive weight matrix and a negative weight matrix, constructed from rows and columns of memory cells, where the inputs of the memory cells in a given row are connected to one of the axons and the outputs of the memory cells in a given column are connected to one of the neurons; timestamp registers recording the timestamps of the axons and neurons; and a lookup table of adjusting values indexed by those timestamps. The processing unit updates the weight matrices according to the adjusting values.
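The timestamp-and-lookup-table update the abstract describes resembles a spike-timing-dependent weight adjustment. The following sketch is a hypothetical software model of that mechanism only; the matrix sizes, the lookup-table values, and the sign convention of the update rule are assumptions, not the filed design.

```python
import numpy as np

# Hypothetical model of a timestamp-driven weight update (not the filed design).
N_AXONS, N_NEURONS = 4, 3

w_pos = np.zeros((N_AXONS, N_NEURONS))      # positive weight matrix
w_neg = np.zeros((N_AXONS, N_NEURONS))      # negative weight matrix

axon_ts = np.zeros(N_AXONS, dtype=int)      # timestamp registers for axons
neuron_ts = np.zeros(N_NEURONS, dtype=int)  # timestamp registers for neurons

# Lookup table of adjusting values indexed by |axon_ts - neuron_ts| (assumed values).
LUT = {0: 0.50, 1: 0.25, 2: 0.10, 3: 0.05}

def on_neuron_fire(neuron, now):
    """When a neuron fires, adjust its column using the timestamp lookup table."""
    neuron_ts[neuron] = now
    for axon in range(N_AXONS):
        dt = abs(int(axon_ts[axon]) - now)
        adjust = LUT.get(dt, 0.0)
        if axon_ts[axon] <= now:    # axon fired first: strengthen (assumed rule)
            w_pos[axon, neuron] += adjust
        else:                       # neuron fired first: weaken
            w_neg[axon, neuron] += adjust

axon_ts[:] = [1, 2, 5, 7]
on_neuron_fire(neuron=0, now=6)
print(w_pos[:, 0], w_neg[:, 0])
```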

Neural Network Hardware Accelerator Architectures And Operating Method Thereof

US Patent:
2018007, Mar 15, 2018
Filed:
Aug 11, 2017
Appl. No.:
15/675390
Inventors:
- Gyeonggi-do, KR
Dongwook SUH - Saratoga CA, US
International Classification:
G06N 3/06
G06N 3/04
Abstract:
A memory-centric neural network system and its operating method include: a processing unit; semiconductor memory devices coupled to the processing unit and containing instructions executed by the processing unit; a weight matrix constructed from rows and columns of memory cells, where the inputs of the memory cells in a given row are connected to one of the axons and the outputs of the memory cells in a given column are connected to one of the neurons; timestamp registers recording the timestamps of the axons and neurons; and a lookup table of adjusting values indexed by those timestamps. The processing unit updates the weight matrix according to the adjusting values.

Mechanism Enabling The Use Of Slow Memory To Achieve Byte Addressability And Near-DRAM Performance With Page Remapping Scheme

US Patent:
2017020, Jul 20, 2017
Filed:
Dec 6, 2016
Appl. No.:
15/370858
Inventors:
- Gyeonggi-do, KR
Dongwook SUH - Saratoga CA, US
International Classification:
G06F 3/06
Abstract:
Memory systems may include: a memory storage comprising a dynamic random access memory (DRAM) portion, a non-volatile memory (NVM) portion, and a virtual memory (VM); a software page remapping kernel driver (SPRKD) suitable for intercepting a memory management command that accesses a virtual address location of the VM and remapping that virtual address location from a physical address in the NVM portion to a physical address in the DRAM portion; and a controller suitable for executing the memory management command by accessing the physical address of the DRAM portion to which the virtual address location has been remapped.
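The remapping the abstract describes can be sketched as an intercept that moves a virtual page's backing from NVM to DRAM before the access proceeds. The code below is a hypothetical user-space model of that flow; the page size, table layout, and function names are assumptions and are not drawn from the filing.

```python
# Hypothetical model of the SPRKD remapping flow described in the abstract.
PAGE_SIZE = 4096   # assumed

nvm = {}           # physical NVM pages: frame -> bytes
dram = {}          # physical DRAM pages: frame -> bytes
page_table = {}    # virtual page number -> ("nvm" | "dram", frame)

def sprkd_intercept(virtual_addr):
    """Intercept an access: if the page is backed by NVM, remap it to DRAM first."""
    vpn = virtual_addr // PAGE_SIZE
    kind, frame = page_table[vpn]
    if kind == "nvm":
        data = nvm.pop(frame)
        new_frame = len(dram)          # trivial DRAM frame allocation (assumed)
        dram[new_frame] = data
        page_table[vpn] = ("dram", new_frame)
    return page_table[vpn]

def controller_read(virtual_addr):
    """Controller executes the access against the (possibly remapped) DRAM page."""
    kind, frame = sprkd_intercept(virtual_addr)
    offset = virtual_addr % PAGE_SIZE
    return (dram if kind == "dram" else nvm)[frame][offset]

# Example: a page initially backed by NVM is remapped to DRAM on first access.
nvm[0] = bytes(range(256)) * 16
page_table[5] = ("nvm", 0)
print(controller_read(5 * PAGE_SIZE + 42))   # 42, now served from DRAM
```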

Techniques With OS- And Application-Transparent Memory Compression

US Patent:
2017020, Jul 20, 2017
Filed:
Dec 6, 2016
Appl. No.:
15/370890
Inventors:
- Gyeonggi-do, KR
Dongwook SUH - Saratoga CA, US
International Classification:
G06F 12/12
G11C 16/26
G06F 3/06
G11C 16/08
G06F 12/0891
G06F 12/0815
G11C 16/10
G11C 16/16
Abstract:
Memory systems may include: a memory storage comprising a fast memory portion and a slow memory portion; a software page remapping kernel driver (SPRKD) suitable for intercepting a memory management command generated by an operating system, compressing data to be written from the fast memory portion to the slow memory portion and/or decompressing data to be written from the slow memory portion to the fast memory portion before the command executes, and transferring the compressed data to the slow memory portion or the decompressed data to the fast memory portion; and a controller suitable for executing the memory management command after the SPRKD has performed the transfer, so that the compression or decompression of data is transparent to the operating system.
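The OS-transparent compression path can be sketched in the same style: a hypothetical driver compresses a page on its way to slow memory, decompresses it on the way back, and only then lets the original command run. Everything below, including the names, the use of zlib, and the page size, is an assumption for illustration, not the filed mechanism.

```python
import zlib

PAGE_SIZE = 4096   # assumed
fast_mem = {}      # fast memory portion: page number -> raw bytes
slow_mem = {}      # slow memory portion: page number -> compressed bytes

def sprkd_write_to_slow(page_no):
    """Driver step: compress a page before it moves from fast to slow memory."""
    slow_mem[page_no] = zlib.compress(fast_mem.pop(page_no))

def sprkd_read_to_fast(page_no):
    """Driver step: decompress a page before it moves from slow to fast memory."""
    fast_mem[page_no] = zlib.decompress(slow_mem.pop(page_no))

def os_evict(page_no):
    # The OS only requests an eviction; compression happens transparently first.
    sprkd_write_to_slow(page_no)

def os_fault_in(page_no):
    # The OS only requests the page back; decompression happens transparently first.
    sprkd_read_to_fast(page_no)

fast_mem[7] = b"A" * PAGE_SIZE
os_evict(7)
print(len(slow_mem[7]))        # far smaller than 4096 for this compressible page
os_fault_in(7)
assert fast_mem[7] == b"A" * PAGE_SIZE
```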

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.