BackgroundCheck.run

Thomas E Bagby, 46, Russian River, CA

Thomas Bagby Phones & Addresses

Monte Rio, CA   

2030 3rd St APT 14, San Francisco, CA 94107

Seattle, WA   

Los Angeles, CA   

Springfield, VA   

Santa Monica, CA   

Mentions for Thomas E Bagby

Thomas Bagby resumes & CV records

Resumes

Thomas Bagby

Location:
San Francisco, CA
Industry:
Internet
Work:
Kosmix
Skills:
Scalability, Software Development, Distributed Systems, Software Engineering, Linux, Python, Algorithms, Hadoop, Mobile Applications, Cloud Computing

Thomas Bagby

Location:
United States

Thomas Bagby

Location:
United States

Publications & IP owners

Us Patents

Variational Embedding Capacity In Expressive End-To-End Speech Synthesis

US Patent:
2020037, Nov 26, 2020
Filed:
May 20, 2020
Appl. No.:
16/879714
Inventors:
- Mountain View CA, US
Daisy Stanton - Mountain View CA, US
Russell John Wyatt Skerry-Ryan - Mountain View CA, US
David Teh-hwa Kao - San Francisco CA, US
Thomas Edward Bagby - San Francisco CA, US
Sean Matthew Shannon - Mountain View CA, US
Assignee:
Google LLC - Mountain View CA
International Classification:
G10L 13/047
G10L 13/10
Abstract:
A method for estimating an embedding capacity includes receiving, at a deterministic reference encoder, a reference audio signal, and determining a reference embedding corresponding to the reference audio signal, the reference embedding having a corresponding embedding dimensionality. The method also includes measuring a first reconstruction loss as a function of the corresponding embedding dimensionality of the reference embedding and obtaining a variational embedding from a variational posterior. The variational embedding has a corresponding embedding dimensionality and a specified capacity. The method also includes measuring a second reconstruction loss as a function of the corresponding embedding dimensionality of the variational embedding and estimating a capacity of the reference embedding by comparing the first measured reconstruction loss for the reference embedding relative to the second measured reconstruction loss for the variational embedding having the specified capacity.
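
The following is a minimal, hypothetical Python sketch of the comparison step this abstract describes, not the patented implementation: the helper name `estimate_capacity` and the toy loss/capacity numbers are assumptions for illustration. It simply interpolates the variational capacity at which the reconstruction loss matches that of the deterministic reference embedding.

```python
# Hypothetical sketch (not the patented method): estimate the capacity of a
# deterministic reference embedding by comparing its reconstruction loss to
# variational embeddings measured at known, specified capacities.
import numpy as np

def estimate_capacity(reference_loss: float,
                      variational_capacities: np.ndarray,
                      variational_losses: np.ndarray) -> float:
    """Interpolate the capacity (in nats) at which a variational embedding
    reaches the same reconstruction loss as the deterministic reference
    embedding. Assumes loss decreases monotonically with capacity."""
    order = np.argsort(variational_capacities)
    caps = variational_capacities[order]
    losses = variational_losses[order]
    # np.interp needs increasing x-values, so interpolate over the
    # reversed (increasing-loss) axis.
    return float(np.interp(reference_loss, losses[::-1], caps[::-1]))

if __name__ == "__main__":
    # Toy numbers: reconstruction losses measured at capacities of 10..100 nats.
    caps = np.array([10.0, 25.0, 50.0, 100.0])
    losses = np.array([0.80, 0.55, 0.35, 0.20])
    ref_loss = 0.40  # loss measured for the deterministic reference embedding
    print(f"estimated reference capacity ~ {estimate_capacity(ref_loss, caps, losses):.1f} nats")
```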

Complex Evolution Recurrent Neural Networks

US Patent:
2020011, Apr 9, 2020
Filed:
Dec 11, 2019
Appl. No.:
16/710005
Inventors:
- Mountain View CA, US
Thomas E. Bagby - San Francisco CA, US
Russell John Wyatt Skerry-Ryan - Mountain View CA, US
International Classification:
G10L 15/16
G10H 1/00
G10L 15/02
G10L 19/02
G06N 3/02
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the utterance is provided.
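
Below is a minimal, hypothetical Python sketch of the cascaded transition this abstract describes, under assumed operator choices (a diagonal phase rotation and an orthonormal DFT as the unitary operators, a diagonal scaling as the non-unitary operator); the function names and toy dimensions are illustrative, not taken from the patent.

```python
# Hypothetical sketch (not the patented architecture): a recurrent state update
# whose transition is a cascade of complex-valued unitary operators followed by
# a non-unitary diagonal scaling.
import numpy as np

def transition(h: np.ndarray, phases: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """One step of the cascaded transition applied to the complex state h."""
    h = np.exp(1j * phases) * h      # unitary: diagonal phase rotation
    h = np.fft.fft(h, norm="ortho")  # unitary: orthonormal DFT
    h = scales * h                   # non-unitary: diagonal scaling
    return h

def run_rnn(inputs: np.ndarray, phases: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Fold a sequence of real-valued feature vectors into the complex state."""
    h = np.zeros(inputs.shape[1], dtype=np.complex128)
    outputs = []
    for x_t in inputs:
        h = transition(h + x_t, phases, scales)
        outputs.append(np.abs(h))    # emit magnitudes as the output vector
    return np.stack(outputs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 5, 8  # toy sequence length and feature dimension
    feats = rng.standard_normal((T, D))
    out = run_rnn(feats,
                  phases=rng.uniform(0, 2 * np.pi, D),
                  scales=rng.uniform(0.5, 1.0, D))
    print(out.shape)  # (5, 8)
```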

Complex Evolution Recurrent Neural Networks

US Patent:
2019015, May 23, 2019
Filed:
Jan 18, 2019
Appl. No.:
16/251430
Inventors:
- Mountain View CA, US
Thomas E. Bagby - San Francisco CA, US
Russell John Wyatt Skerry-Ryan - Mountain View CA, US
International Classification:
G10L 15/16
G10L 19/02
G10L 15/02
G10H 1/00
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech recognition using complex evolution recurrent neural networks. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A first vector sequence comprising audio features determined from the audio data is generated. A second vector sequence is generated, as output of a first recurrent neural network in response to receiving the first vector sequence as input, where the first recurrent neural network has a transition matrix that implements a cascade of linear operators comprising (i) first linear operators that are complex-valued and unitary, and (ii) one or more second linear operators that are non-unitary. An output vector sequence of a second recurrent neural network is generated. A transcription for the utterance is generated based on the output vector sequence generated by the second recurrent neural network. The transcription for the utterance is provided.

ISBN (Books and Publications)

Memphis Firefighters Case: Impact of Supreme Court's Stotts Decision on Affirmative Action

Author:
Thomas Bagby
ISBN #:
0916559017

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.