BackgroundCheck.run

Eric T Bax, 314 N Sunnyside Ave, Sierra Madre, CA 91024

Eric Bax Phones & Addresses

314 N Sunnyside Ave, Sierra Madre, CA 91024    626-325-3325   

1267 Michigan Ave, Pasadena, CA 91104   

2241 Santa Rosa Ave, Altadena, CA 91001   

1754 Sandra Dr, Columbia, SC 29209   

Richmond, VA   

Long Beach, CA   

Los Angeles, CA   

Mentions for Eric T Bax

Eric Bax resumes & CV records

Resumes

Director, Marketplace Design

Location:
Pasadena, CA
Industry:
Computer Software
Work:
Dimagi Nov 2006 - Sep 2007
Consultant
Yahoo Nov 2006 - Sep 2007
Director, Marketplace Design
Google Jun 2006 - Nov 2006
Member of Technical Staff
Applied Minds Mar 2005 - Jun 2006
Senior Scientist
Ispheres Jan 2000 - Feb 2005
Vice President of Research and Development
University of Richmond Aug 1998 - Dec 1999
Assistant Professor
Idealab Jun 1998 - Aug 1998
Scientist
Education:
Caltech Jun 1993 - 1998
Doctorates, Doctor of Philosophy, Mathematics, Computer Science
Furman University 1986 - 1990
Bachelors, Bachelor of Science, Mathematics
Lower Richland Hs
Skills:
Machine Learning, Algorithms, Pattern Recognition, Distributed Systems, Computer Science, Software Development, Big Data, Data Mining, Python, C, Software Engineering, Artificial Intelligence, Statistics, Optimization, Natural Language Processing, Programming, Perl, Information Retrieval, Optimizations, Java, Text Mining

Publications & IP owners

Us Patents

Validation Of Nearest Neighbor Classifiers

US Patent:
6732083, May 4, 2004
Filed:
Feb 21, 2001
Appl. No.:
09/790124
Inventors:
Eric T Bax - Pasadena CA, 91116-6543
International Classification:
G06F 15/18
US Classification:
706/12, 706/20, 711/173
Abstract:
A computer-based system computes a probabilistic bound on the error probability of a nearest neighbor classifier as follows. A subset of the examples in the classifier is used to form a reduced classifier. The error frequency of the reduced classifier on the remaining examples is computed as a baseline estimate of the error probability for the original classifier. Additionally, subsets of the examples outside the reduced classifier are combined with the reduced classifier and applied to the remaining examples in order to estimate the difference in error probability for the reduced classifier and error probability for the original classifier.
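The validation scheme in this abstract can be sketched in a few lines. The sketch below is purely illustrative, not the patented procedure: the 1-D inputs, the function names, and the random split are all assumptions made for a minimal example. It holds out part of the classifier's own examples, forms a reduced classifier from the rest, and uses the reduced classifier's error frequency on the held-out examples as the baseline estimate.

```python
import random

def nn_predict(examples, x):
    # 1-nearest-neighbor: return the label of the closest stored example
    return min(examples, key=lambda e: abs(e[0] - x))[1]

def baseline_error_estimate(examples, n_holdout, seed=0):
    # Split the classifier's examples into a reduced classifier plus held-out examples.
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    reduced, held_out = shuffled[:-n_holdout], shuffled[-n_holdout:]
    # The reduced classifier's error frequency on the held-out examples is the
    # baseline estimate of the full classifier's error probability.
    errors = sum(1 for x, y in held_out if nn_predict(reduced, x) != y)
    return errors / n_holdout
```

The second step of the abstract (estimating the gap between reduced and full classifier) would repeat this with subsets of the held-out examples added back in.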

Using Validation By Inference To Select A Hypothesis Function

US Patent:
6850873, Feb 1, 2005
Filed:
Dec 18, 2000
Appl. No.:
09/740188
Inventors:
Eric T Bax - Pasadena CA, US
International Classification:
G06F 17/10
US Classification:
703/2, 700/51, 700/91
Abstract:
Given a set of basis functions, a set of example inputs, and a set of uniform error bounds for the basis functions over the example inputs, a quadratic program is formed. The quadratic program is solved, producing a solution vector and a solution value. A hypothesis function is formed through fusion by using the solution vector to weight the outputs of the basis functions. The hypothesis function is a function with minimum error bound among the functions formed by convex combination of basis function outputs. The solution value is an error bound for the hypothesis function. The error bound is logically implied by the uniform error bounds over the basis functions rather than uniform error bounds over the entire class of functions formed by convex combination of basis function outputs.
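The fusion step can be illustrated as a small constrained optimization over convex weights. This is a sketch only: the patent's quadratic program is built from the basis-function error bounds, whereas the stand-in objective below minimizes empirical squared error; the function name and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fuse(basis_outputs, targets):
    # basis_outputs: (n_examples, n_basis) matrix of basis function outputs.
    # Solve a small quadratic program over convex weights w (w >= 0, sum w = 1).
    # Illustrative stand-in objective: empirical squared error of the combination.
    n = basis_outputs.shape[1]
    obj = lambda w: np.sum((basis_outputs @ w - targets) ** 2)
    res = minimize(obj, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x  # weight vector defining the fused hypothesis function
```

The fused hypothesis is then `basis_outputs @ w` for new inputs; in the patent, the QP's solution value doubles as the hypothesis function's error bound.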

Distributed Data Store With An Orderstamp To Ensure Progress

US Patent:
7590635, Sep 15, 2009
Filed:
Dec 14, 2005
Appl. No.:
11/300950
Inventors:
W. Daniel Hillis - Encino CA, US
Eric Bax - Altadena CA, US
Mathias L. Kolehmainen - Los Angeles CA, US
Assignee:
Applied Minds, Inc. - Glendale CA
International Classification:
G06F 7/00
G06F 17/30
US Classification:
707/10, 707/4
Abstract:
A distributed data store labels operations with globally unique identifiers that contain approximate timestamps. The labels are used to address causes of inconsistency in the distributed data store while ensuring progress. A first mode, which stores the latest label for each entry, is useful if re-inserts and deletes are rare. Another mode, which stores a history of labels for each entry, can be used if there are many re-inserts and deletes. A further mode, which stores a history of labels for queries, can report updates to query answers as inserts and deletes settle across the distributed data store.
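The "latest label per entry" mode from this abstract can be sketched with last-writer-wins semantics. This is a minimal single-process illustration, not the patented system; the class and method names are assumptions, and "orderstamp" here is simply a tuple ordered by approximate time with unique tie-breakers.

```python
import time, itertools

class OrderstampStore:
    # First mode from the abstract: keep only the latest label per entry,
    # suitable when re-inserts and deletes are rare. A node id plus a local
    # counter keeps labels globally unique even when clocks only roughly agree.
    def __init__(self, node_id):
        self.node_id = node_id
        self.seq = itertools.count()
        self.entries = {}  # key -> (label, value, deleted flag)

    def _label(self):
        # Approximate timestamp first, so labels sort by rough time,
        # with (node_id, counter) breaking ties deterministically.
        return (time.time(), self.node_id, next(self.seq))

    def apply(self, key, value, deleted=False, label=None):
        label = label or self._label()
        current = self.entries.get(key)
        if current is None or label > current[0]:   # last writer wins
            self.entries[key] = (label, value, deleted)
        return label

    def get(self, key):
        entry = self.entries.get(key)
        return None if entry is None or entry[2] else entry[1]
```

A replica applying operations out of order converges to the same state, because a stale label never overwrites a newer one; the history-keeping modes would retain the losing labels instead of discarding them.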

Time Series Monitoring System

US Patent:
7599913, Oct 6, 2009
Filed:
Feb 9, 2004
Appl. No.:
10/775744
Inventors:
Joseph Greg Billock - Altadena CA, US
Ian Douglas Swett - Pasadena CA, US
Eric Theodore Bax - Altadena CA, US
Assignee:
Avaya, Inc. - Basking Ridge NJ
International Classification:
G06F 17/30
US Classification:
707/3, 707/5, 707/101, 707/104.1
Abstract:
A time series monitoring system, implemented in software, executes persistent queries on multiple input time series, handling high data throughput with low response time. The system supports dynamic management of time series, of windows in time series, and of persistent queries. Also, the system can use historical values in present windows to help populate inserted windows.
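A persistent query over a windowed time series can be illustrated very simply. This sketch assumes a single in-memory series and re-evaluates the query on every insert; the real system described above handles many series, dynamic window management, and high throughput, none of which is modeled here.

```python
from collections import deque

class PersistentQuery:
    # A persistent query over a sliding window of the most recent values,
    # re-evaluated on each insert so its answer stays current (illustrative).
    def __init__(self, window_size, fn):
        self.window = deque(maxlen=window_size)  # oldest values fall off
        self.fn = fn                             # the query, e.g. max or sum
        self.answer = None

    def insert(self, value):
        self.window.append(value)
        self.answer = self.fn(list(self.window))
        return self.answer
```

For example, `PersistentQuery(3, max)` tracks the maximum of the last three observations as new values arrive.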

Finite-State Machine Augmented For Multiple Evaluations Of Text

US Patent:
7672965, Mar 2, 2010
Filed:
Feb 9, 2004
Appl. No.:
10/775745
Inventors:
Eric Theodore Bax - Altadena CA, US
Assignee:
Avaya, Inc. - Basking Ridge NJ
International Classification:
G06F 17/00
US Classification:
707/102, 707/3, 716/1, 326/121, 381/61
Abstract:
A process performs multiple evaluations of text simultaneously. There are multiple counters, each with pattern-amount pairs. The pattern-amount pairs are accumulated into a single finite-state machine, with each state having a list of (counter, value) pairs instead of a single value. While the finite-state machine is applied to text, a score for each counter is accumulated by summing values for the counter from value lists of visited states. With one state transition per character, evaluating text using one finite-state machine for multiple counters is more efficient than using separate finite-state machines for counters or patterns.

Technique For Extracting Data From Structured Documents

US Patent:
7689906, Mar 30, 2010
Filed:
Dec 1, 2000
Appl. No.:
09/728689
Inventors:
Eric T. Bax - Pasadena CA, US
Charless C. Fowlkes - Bozeman MT, US
Louis Cisnero, Jr. - Jourdanton TX, US
Assignee:
AVAYA, Inc. - Basking Ridge NJ
International Classification:
G06F 17/21
US Classification:
715237, 715234
Abstract:
The present invention discloses a technique for extracting data from a file. In accordance with the present invention, a request to extract one or more data records from the file is received. The data records within the file are identified, without using prior knowledge of a structure of the file. The data records are then extracted.

Validation Of Function Approximation By Fusion

US Patent:
7778803, Aug 17, 2010
Filed:
Sep 28, 2000
Appl. No.:
09/677334
Inventors:
Eric T Bax - Pasadena CA, US
International Classification:
G06F 17/10
US Classification:
703/2, 703/6, 706/12, 706/20, 707/100
Abstract:
An error bound that indicates how well a hypothesis function approximates an unknown target function over a set of out-of-sample examples is computed from known error bounds for basis functions, as follows. An optimization problem is formed in which basis function error bounds imply constraints on feasible outputs of the target function over out-of-sample inputs. The optimization problem is solved to find an upper bound on the differences between the hypothesis function outputs and feasible target function outputs. This upper bound is an error bound for the hypothesis function.

Bounding Error Rate Of A Classifier Based On Worst Likely Assignment

US Patent:
7899766, Mar 1, 2011
Filed:
Feb 7, 2008
Appl. No.:
12/069129
Inventors:
Eric Theodore Bax - Altadena CA, US
Augusto Daniel Callejas - Pasadena CA, US
International Classification:
G06F 15/18
G06F 11/07
US Classification:
706/20, 706/25
Abstract:
Given a set of training examples—with known inputs and outputs—and a set of working examples—with known inputs but unknown outputs—train a classifier on the training examples. For each possible assignment of outputs to the working examples, determine whether assigning the outputs to the working examples results in a training and working set that are likely to have resulted from the same distribution. If so, then add the assignment to a likely set of assignments. For each assignment in the likely set, compute the error of the trained classifier on the assignment. Use the maximum of these errors as a probably approximately correct error bound for the classifier.
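The procedure in this abstract can be sketched by brute force on a tiny problem. The sketch is illustrative only: the 1-NN classifier, 1-D inputs, and especially the "likely assignment" test (a simple label-frequency tolerance standing in for the patent's same-distribution test) are all assumptions for the example.

```python
from itertools import product

def nn_classify(train, x):
    # 1-NN classifier trained on the labeled training examples
    return min(train, key=lambda e: abs(e[0] - x))[1]

def worst_likely_error_bound(train, working_inputs, tol=0.3):
    # Enumerate every assignment of binary labels to the working inputs and
    # keep the "likely" ones: here, assignments whose label frequency is
    # within tol of the training-set frequency (an illustrative stand-in for
    # the patent's test of whether training and working sets look like draws
    # from one distribution). The bound is the classifier's maximum error
    # over the likely assignments.
    train_freq = sum(y for _, y in train) / len(train)
    preds = [nn_classify(train, x) for x in working_inputs]
    worst = 0.0
    for assignment in product([0, 1], repeat=len(working_inputs)):
        freq = sum(assignment) / len(assignment)
        if abs(freq - train_freq) <= tol:  # assignment deemed "likely"
            err = sum(p != y for p, y in zip(preds, assignment)) / len(assignment)
            worst = max(worst, err)
    return worst
```

Because the maximum is taken over every likely assignment, the result is a probably approximately correct bound: if the true labels form a likely assignment, the classifier's true working-set error cannot exceed it.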

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.