BackgroundCheck.run
Search For

Brian E Roark, 2215 NW Mill Pond Rd, Portland, OR 97229

Brian Roark Phones & Addresses

2215 Mill Pond Rd, Portland, OR 97229    503-297-6333   

Beaverton, OR   

Oceanside, OR   

18 Myrtle Ave, Madison, NJ 07940    973-410-1992   

Upland, CA   

Baltimore, MD   

Morristown, NJ   

Deland, FL   

Providence, RI   

Mentions for Brian E Roark

Career records & work history

License Records

Brian Roark

Licenses:
License #: E099663 - Active
Category: Emergency medical services
Issued Date: Aug 7, 2013
Expiration Date: Jun 30, 2017
Type: San Diego County EMS Agency

Brian Roark resumes & CV records

Resumes


Research Scientist

Location:
Portland, OR
Industry:
Research
Work:
Google - Research Scientist
OHSU | Oregon Health & Science University, Jul 2004 - Jul 2008 - Associate Professor
Education:
University of California, Berkeley
Skills:
Natural Language Processing, Machine Learning, Computer Science, Pattern Recognition, Information Retrieval, Information Extraction, Translational Research

Brian Roark

Publications & IP owners

US Patents

System And Method For Using Meta-Data Dependent Language Modeling For Automatic Speech Recognition

US Patent:
7752046, Jul 6, 2010
Filed:
Oct 29, 2004
Appl. No.:
10/976378
Inventors:
Michiel A. E. Bacchiani - Summit NJ, US
Brian E. Roark - Morristown NJ, US
Assignee:
AT&T Intellectual Property II, L.P. - New York NY
International Classification:
G10L 15/06
US Classification:
704/245
Abstract:
Disclosed are systems and methods for providing a spoken dialog system using meta-data to build language models to improve speech processing. Meta-data is generally defined as data outside received speech; for example, meta-data may be a customer profile having a name, address and purchase history of a caller to a spoken dialog system. The method comprises building tree clusters from meta-data and estimating a language model using the built tree clusters. The language model may be used by various modules in the spoken dialog system, such as the automatic speech recognition module and/or the dialog management module. Building the tree clusters from the meta-data may involve generating projections from the meta-data and further may comprise computing counts as a result of unigram tree clustering and then building both unigram trees and higher-order trees from the meta-data as well as computing node distances within the built trees that are used for estimating the language model.
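
The method in this abstract can be pictured with a small sketch: meta-data records are clustered into a tree, and a unigram language model for a leaf cluster is estimated by mixing word counts gathered along the path to the root, weighted by node distance. The splitting rule, decay weights, and toy data below are invented for illustration, not the patented procedure.

# Hedged sketch of meta-data-dependent language modeling: cluster meta-data
# records into a tree, then estimate a unigram model for a leaf by mixing
# word counts gathered along the root-to-leaf path, down-weighting nodes
# farther from the leaf. Clustering rule, decay, and data are illustrative.
from collections import Counter

class Node:
    def __init__(self, records, children=None):
        self.records = records          # (meta_data, transcript_words) pairs
        self.children = children or []
        self.counts = Counter(w for _, words in records for w in words)

def build_tree(records, min_leaf=2):
    """Recursively split records on a toy meta-data key to form a cluster tree."""
    node = Node(records)
    if len(records) <= min_leaf:
        return node
    # Illustrative split: partition on whether the 'region' field sorts before "M".
    left = [r for r in records if r[0]["region"] < "M"]
    right = [r for r in records if r[0]["region"] >= "M"]
    if left and right:
        node.children = [build_tree(left, min_leaf), build_tree(right, min_leaf)]
    return node

def path_to_leaf(root, record):
    """Return the root-to-leaf path of clusters containing this record."""
    path = [root]
    while path[-1].children:
        nxt = next((c for c in path[-1].children if record in c.records), None)
        if nxt is None:
            break
        path.append(nxt)
    return path

def unigram_prob(word, path, decay=0.5):
    """Mix counts along the path, down-weighting nodes farther from the leaf."""
    num = den = 0.0
    for dist, node in enumerate(reversed(path)):   # leaf has distance 0
        weight = decay ** dist
        total = sum(node.counts.values())
        if total:
            num += weight * node.counts[word]
            den += weight * total
    return num / den if den else 0.0

if __name__ == "__main__":
    data = [
        ({"region": "Atlanta"}, "check my account balance".split()),
        ({"region": "Boston"}, "account balance please".split()),
        ({"region": "Newark"}, "pay my bill".split()),
        ({"region": "Portland"}, "bill payment history".split()),
    ]
    tree = build_tree(data)
    path = path_to_leaf(tree, data[0])
    print(round(unigram_prob("account", path), 3))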

System And Method For Using Meta-Data Dependent Language Modeling For Automatic Speech Recognition

US Patent:
8069043, Nov 29, 2011
Filed:
Jun 3, 2010
Appl. No.:
12/793181
Inventors:
Brian E. Roark - Morristown NJ, US
Assignee:
AT&T Intellectual Property II, L.P. - Atlanta GA
International Classification:
G10L 15/06
US Classification:
704/245
Abstract:
Disclosed are systems and methods for providing a spoken dialog system using meta-data to build language models to improve speech processing. Meta-data is generally defined as data outside received speech; for example, meta-data may be a customer profile having a name, address and purchase history of a caller to a spoken dialog system. The method comprises building tree clusters from meta-data and estimating a language model using the built tree clusters. The language model may be used by various modules in the spoken dialog system, such as the automatic speech recognition module and/or the dialog management module. Building the tree clusters from the meta-data may involve generating projections from the meta-data and further may comprise computing counts as a result of unigram tree clustering and then building both unigram trees and higher-order trees from the meta-data as well as computing node distances within the built trees that are used for estimating the language model.

System And Method Of Using Meta-Data In Speech Processing

US Patent:
7996224, Aug 9, 2011
Filed:
Oct 29, 2004
Appl. No.:
10/977030
Inventors:
Sameer Raj Maskey - New York NY, US
Brian E. Roark - Morristown NJ, US
Richard William Sproat - Urbana IL, US
Assignee:
AT&T Intellectual Property II, L.P. - Atlanta GA
International Classification:
G10L 15/04
US Classification:
704/254
Abstract:
Systems and methods relate to generating a language model for use in, for example, a spoken dialog system or some other application. The method comprises building a class-based language model, generating at least one sequence network and replacing class labels in the class-based language model with the at least one sequence network. In this manner, placeholders or tokens associated with classes can be inserted into the models at training time and word/phone networks can be built based on meta-data information at test time. Finally, the placeholder token can be replaced with the word/phone networks at run time to improve recognition of difficult words such as proper names.
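
A minimal sketch of the idea in this abstract, with invented training sentences and contacts: a language model is trained over sentences that contain a class placeholder, and at run time the placeholder is expanded with alternatives drawn from meta-data such as a contact list. The bigram scoring and add-alpha smoothing below are stand-ins for whatever model a real system would use.

# Hedged illustration of a class-based language model whose class labels are
# replaced at run time by a small "sequence network" built from meta-data.
# Training data, contacts, and smoothing are illustrative only.
import math
from collections import Counter

TRAIN = [
    "call <NAME> at home",
    "send a message to <NAME>",
    "call <NAME> now",
]

def bigram_counts(sentences):
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def score(sentence, uni, bi, alpha=0.1):
    """Add-alpha smoothed bigram log-probability of a token sequence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(uni)
    return sum(
        math.log((bi[(a, b)] + alpha) / (uni[a] + alpha * vocab))
        for a, b in zip(toks, toks[1:])
    )

def expand_class(template, class_label, network):
    """Replace the class placeholder with each path in the sequence network."""
    return [template.replace(class_label, phrase) for phrase in network]

if __name__ == "__main__":
    uni, bi = bigram_counts(TRAIN)
    # Meta-data available at test time: the user's contacts.
    name_network = ["brian roark", "maryam garrett", "richard sproat"]
    hypotheses = expand_class("call <NAME> now", "<NAME>", name_network)
    for h in hypotheses:
        # Map the expanded name back to its class token for scoring; a real
        # system would compose weighted word/phone networks instead.
        class_level = h
        for phrase in name_network:
            class_level = class_level.replace(phrase, "<NAME>")
        print(h, round(score(class_level, uni, bi), 2))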

Rapid Serial Presentation Communication Systems And Methods

US Patent:
2010028, Nov 4, 2010
Filed:
Jan 12, 2009
Appl. No.:
12/812401
Inventors:
Deniz Erdogmus - Beaverton OR, US
Brian Roark - Portland OR, US
Jan Van Santen - Lake Oswego OR, US
Michael Pavel - Portland OR, US
International Classification:
A61B 5/0482
US Classification:
600/545
Abstract:
Embodiments of the disclosed technology provide reliable and fast communication by a human through a direct brain interface that detects the intent of the user. An embodiment of the disclosed technology comprises a system and method in which at least one sequence of a plurality of stimuli is presented to an individual (using appropriate sensory modalities), and the time course of at least one measurable response to the sequence(s) is used to select at least one stimulus from the sequence(s). In an embodiment, the sequence(s) may be dynamically altered based on previously selected stimuli and/or on estimated probability distributions over the stimuli. In an embodiment, such dynamic alteration may be based on predictive models of appropriate sequence generation mechanisms, such as an adaptive or static sequence model.
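
The selection loop this abstract describes can be sketched as Bayesian updating: a prior over candidate stimuli (for example from a language model) is combined with evidence scores computed from the measured response, and the resulting posterior drives both the selection decision and the make-up of the next sequence. The numbers and threshold below are purely illustrative.

# Hedged sketch of stimulus selection: fuse a prior over candidate symbols
# with evidence scores derived from the measured response to each presented
# stimulus, then either select a symbol or reorder the next sequence.
# Prior, likelihoods, and threshold are invented for illustration.

def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def update_posterior(prior, evidence):
    """Bayes rule: posterior is proportional to prior times response likelihood."""
    return normalize({s: prior[s] * evidence.get(s, 1e-6) for s in prior})

def next_sequence(posterior, length=3):
    """Present the currently most probable symbols first (dynamic alteration)."""
    return sorted(posterior, key=posterior.get, reverse=True)[:length]

if __name__ == "__main__":
    # Prior over the next symbol, e.g. from a language model given typed text.
    prior = normalize({"A": 4, "B": 2, "C": 1, "D": 1})
    # Illustrative classifier scores for the evoked response to each stimulus
    # in the presented sequence (higher = response looked more target-like).
    evidence = {"A": 0.2, "B": 0.9, "C": 0.1}
    posterior = update_posterior(prior, evidence)
    best = max(posterior, key=posterior.get)
    if posterior[best] > 0.5:
        print("select:", best)
    else:
        print("present next sequence:", next_sequence(posterior))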

Transliteration For Speech Recognition Training And Scoring

US Patent:
2020019, Jun 18, 2020
Filed:
Dec 12, 2019
Appl. No.:
16/712492
Inventors:
- Mountain View CA, US
Min Ma - New York NY, US
Pedro J. Moreno Mengibar - Jersey City NJ, US
Jesse Emond - New York NY, US
Brian E. Roark - Portland OR, US
International Classification:
G10L 15/19
G10L 15/06
G10L 15/22
G10L 15/16
Abstract:
Methods, systems, and apparatus, including computer programs stored on a computer-readable storage medium, for transliteration for speech recognition training and scoring. In some implementations, language examples are accessed, some of which include words in a first script and words in one or more other scripts. At least portions of some of the language examples are transliterated to the first script to generate a training data set. A language model is generated based on occurrences of the different sequences of words in the training data set in the first script. The language model is used to perform speech recognition for an utterance.
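
A toy version of the training-data step, assuming a hand-written word-level transliteration table rather than the learned transliteration a production system would use: mixed-script examples are mapped into the first script, and word-sequence counts are then collected from the single-script result.

# Hedged sketch of preparing a single-script training set: transliterate
# out-of-script words into the first script, then count word sequences for
# a language model. The tiny transliteration table is invented here.
from collections import Counter

# Illustrative word-level transliterations (Devanagari -> Latin script).
TRANSLIT = {"नमस्ते": "namaste", "भारत": "bharat"}

def to_first_script(sentence):
    return " ".join(TRANSLIT.get(w, w) for w in sentence.split())

def bigram_counts(sentences):
    counts = Counter()
    for s in sentences:
        toks = s.split()
        counts.update(zip(toks, toks[1:]))
    return counts

if __name__ == "__main__":
    examples = [
        "say नमस्ते to everyone",
        "say namaste to everyone",      # already in the first script
        "flights to भारत today",
    ]
    training = [to_first_script(s) for s in examples]
    print(training)
    print(bigram_counts(training).most_common(3))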

Generating Output For Presentation In Response To User Interface Input, Where The Input And/Or The Output Include Chatspeak

US Patent:
2019022, Jul 18, 2019
Filed:
Mar 21, 2019
Appl. No.:
16/360752
Inventors:
- Mountain View CA, US
Bryan Horling - Belmont MA, US
Maryam Garrett - Cambridge MA, US
Brian Roark - Portland OR, US
Richard Sproat - Hamilton NJ, US
International Classification:
G06F 17/28
G06F 16/31
H04L 12/58
G06F 17/27
G06F 17/22
Abstract:
Some implementations are directed to translating chatspeak to a normalized form, where the chatspeak is included in natural language input formulated by a user via a user interface input device of a computing device—such as input provided by the user to an automated assistant. The normalized form of the chatspeak may be utilized by the automated assistant in determining reply content that is responsive to the natural language input, and that reply content may be presented to the user via one or more user interface output devices of the computing device of the user. Some implementations are additionally and/or alternatively directed to providing, for presentation to a user, natural language output that includes chatspeak in lieu of a normalized form of the chatspeak, based at least in part on a “chatspeak measure” that is determined based on past usage of chatspeak by the user and/or by additional users.
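
One way to picture the flow in this abstract, with an invented lexicon and threshold: chatspeak in the input is normalized before the assistant interprets it, a per-user chatspeak measure is computed from past messages, and that measure decides whether the reply is rendered in chatspeak or left in normalized form.

# Hedged sketch: normalize chatspeak in the user's input, track a per-user
# "chatspeak measure" from past usage, and choose whether to render the
# reply in chatspeak. The lexicon and threshold are illustrative only.

CHATSPEAK = {"u": "you", "r": "are", "thx": "thanks", "brb": "be right back", "2": "to"}
REVERSE = {v: k for k, v in CHATSPEAK.items()}

def normalize(text):
    return " ".join(CHATSPEAK.get(w, w) for w in text.lower().split())

def chatspeak_measure(history):
    """Fraction of past messages containing at least one chatspeak token."""
    if not history:
        return 0.0
    hits = sum(any(w in CHATSPEAK for w in msg.lower().split()) for msg in history)
    return hits / len(history)

def render_reply(reply, measure, threshold=0.5):
    """Echo chatspeak back to heavy chatspeak users; otherwise stay normalized."""
    if measure >= threshold:
        return " ".join(REVERSE.get(w, w) for w in reply.split())
    return reply

if __name__ == "__main__":
    history = ["thx u r the best", "where r u", "send it 2 me"]
    user_input = "r u free tomorrow"
    normalized = normalize(user_input)          # "are you free tomorrow"
    measure = chatspeak_measure(history)        # 1.0 for this history
    reply = "you are welcome"                   # stand-in for assistant output
    print(normalized, "| reply:", render_reply(reply, measure))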

Generating Output For Presentation In Response To User Interface Input, Where The Input And/Or The Output Include Chatspeak

US Patent:
2017033, Nov 23, 2017
Filed:
May 17, 2016
Appl. No.:
15/157293
Inventors:
- Mountain View CA, US
Bryan Horling - Belmont MA, US
Maryam Garrett - Cambridge MA, US
Brian Roark - Portland OR, US
Richard Sproat - Hamilton NJ, US
International Classification:
G06F 17/28
G06F 17/22
G06F 17/30
H04L 12/58
Abstract:
Some implementations are directed to translating chatspeak to a normalized form, where the chatspeak is included in natural language input formulated by a user via a user interface input device of a computing device—such as input provided by the user to an automated assistant. The normalized form of the chatspeak may be utilized by the automated assistant in determining reply content that is responsive to the natural language input, and that reply content may be presented to the user via one or more user interface output devices of the computing device of the user. Some implementations are additionally and/or alternatively directed to providing, for presentation to a user, natural language output that includes chatspeak in lieu of a normalized form of the chatspeak, based at least in part on a “chatspeak measure” that is determined based on past usage of chatspeak by the user and/or by additional users.

Enhanced Maximum Entropy Models

US Patent:
2015026, Sep 24, 2015
Filed:
Mar 24, 2015
Appl. No.:
14/667518
Inventors:
- Mountain View CA, US
Brian E. Roark - Portland OR, US
International Classification:
G10L 15/18
G10L 15/26
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to enhanced maximum entropy models. In some implementations, data indicating a candidate transcription for an utterance and a particular context for the utterance are received. A maximum entropy language model is obtained. Feature values are determined for n-gram features and backoff features of the maximum entropy language model. The feature values are input to the maximum entropy language model, and an output is received from the maximum entropy language model. A transcription for the utterance is selected from among a plurality of candidate transcriptions based on the output from the maximum entropy language model. The selected transcription is provided to a client device.
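
The scoring step can be sketched as a log-linear model whose features are n-grams plus backoff features that fire when a higher-order n-gram is absent; the candidate transcription with the highest model probability is selected. The feature weights below are invented, where a trained maximum entropy model would estimate them from data.

# Hedged sketch of re-scoring candidate transcriptions with a log-linear
# (maximum entropy) language model using n-gram and backoff features.
# All feature weights here are invented for illustration.
import math

# Illustrative weights: bigram features, unigram features, and a
# penalty-style backoff feature keyed by the history word.
WEIGHTS = {
    ("bigram", "recognize", "speech"): 1.2,
    ("bigram", "wreck", "a"): 0.3,
    ("unigram", "speech"): 0.4,
    ("unigram", "beach"): 0.1,
    ("backoff", "recognize"): -0.5,
    ("backoff", "nice"): -0.5,
}

def features(tokens):
    """Bigram features where the bigram is known; otherwise a backoff feature
    for the history word plus a unigram feature for the current word."""
    feats = []
    for prev, word in zip(tokens, tokens[1:]):
        if ("bigram", prev, word) in WEIGHTS:
            feats.append(("bigram", prev, word))
        else:
            feats.append(("backoff", prev))       # no bigram seen: back off
            feats.append(("unigram", word))
    return feats

def score(tokens):
    return sum(WEIGHTS.get(f, 0.0) for f in features(tokens))

def best_transcription(candidates):
    # Softmax over candidate scores, then pick the most probable candidate.
    scores = {c: score(c.split()) for c in candidates}
    z = sum(math.exp(s) for s in scores.values())
    probs = {c: math.exp(s) / z for c, s in scores.items()}
    return max(probs, key=probs.get), probs

if __name__ == "__main__":
    candidates = ["recognize speech", "wreck a nice beach"]
    best, probs = best_transcription(candidates)
    print(best, {c: round(p, 3) for c, p in probs.items()})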

ISBN (Books and Publications)

Computational Approaches To Morphology And Syntax

Author:
Brian Roark
ISBN #:
0199274770

Computational Approaches To Morphology And Syntax

Author:
Brian Roark
ISBN #:
0199274789
