BackgroundCheck.run

Heather A Ames, 45, Billerica, MA

Heather Ames Phones & Addresses

Billerica, MA   

9 Thatcher St, Medford, MA 02155    781-393-8155

Revere, MA   

9 Thatcher St, Medford, MA 02155   

Mentions for Heather A Ames

Career records & work history

License Records

Heather D Ames

Licenses:
License #: 36395 - Expired
Category: Nursing Support
Issued Date: Apr 10, 1997
Effective Date: Feb 5, 2003
Type: Nurse Aide

Heather Ames resumes & CV records

Resumes


Co-Director of CELEST Technology Outreach at Boston University

Location:
Greater Boston Area
Industry:
Computer Software

Publications & IP owners

US Patents

Graphic Processor Based Accelerator System And Method

US Patent:
2008011, May 22, 2008
Filed:
Sep 24, 2007
Appl. No.:
11/860254
Inventors:
Anatoli Gorchetchnikov - Belmont MA, US
Heather Marie Ames - South Boston MA, US
Massimiliano Versace - South Boston MA, US
Fabrizio Santini - Jamaica Plain MA, US
Assignee:
Neurala LLC - Boston MA
International Classification:
G06T 1/20
G06F 15/76
US Classification:
345503, 345522, 345536
Abstract:
An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPU), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.
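
For illustration only, the overlap described in the last sentence of this abstract (the previous step's results stream back to the CPU while the current step is computed) can be sketched in host-side Python. Everything below is an assumption made for the sketch: numpy stands in for the GPU kernel and a worker thread stands in for the controller-managed bus transfer; none of the names come from the patent.

# Minimal host-side sketch (not the patented implementation): while the
# "device" computes time step t, the result of step t-1 is copied back to
# host memory in parallel.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def gpu_step(state):
    # stand-in for the kernel launched on the expansion card
    return np.tanh(state @ state.T)

def copy_back(result, host_log):
    # stand-in for the DMA transfer over the PCI-class local bus
    host_log.append(np.array(result, copy=True))

def run_simulation(state, steps):
    host_log = []
    with ThreadPoolExecutor(max_workers=1) as dma:
        for _ in range(steps):
            pending = dma.submit(copy_back, state, host_log)  # ship step t-1 result back
            state = gpu_step(state)                           # compute step t meanwhile
            pending.result()                                  # transfer finished before next step
    host_log.append(state)                                    # final step's result
    return host_log

results = run_simulation(np.random.rand(8, 8), steps=5)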

Systems And Methods To Enable Continual, Memory-Bounded Learning In Artificial Intelligence And Deep Learning Continuously Operating Applications Across Networked Compute Edges

US Patent:
2018033, Nov 15, 2018
Filed:
May 9, 2018
Appl. No.:
15/975280
Inventors:
- Boston MA, US
Santiago OLIVERA - Brookline MA, US
Jeremy WURBS - Worcester MA, US
Heather Marie AMES - Milton MA, US
Massimiliano VERSACE - Milton MA, US
International Classification:
G06N 3/08
G06N 3/04
G06K 9/62
Abstract:
Lifelong Deep Neural Network (L-DNN) technology revolutionizes Deep Learning by enabling fast, post-deployment learning without extensive training, heavy computing resources, or massive data storage. It uses a representation-rich, DNN-based subsystem (Module A) with a fast-learning subsystem (Module B) to learn new features quickly without forgetting previously learned features. Compared to a conventional DNN, L-DNN uses much less data to build robust networks, dramatically shorter training time, and learning on-device instead of on servers. It can add new knowledge without re-training or storing data. As a result, an edge device with L-DNN can learn continuously after deployment, eliminating massive costs in data collection and annotation, memory and data storage, and compute power. This fast, local, on-device learning can be used for security, supply chain monitoring, disaster and emergency response, and drone-based inspection of infrastructure and properties, among other applications.
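
As a rough illustration of the Module A / Module B split the abstract describes, the sketch below pairs a frozen feature extractor with a fast-learning prototype head that adds classes instantly and keeps no raw data. The random-projection backbone and the running-mean prototype rule are assumptions made for this sketch, not Neurala's L-DNN internals.

# Hedged, highly simplified sketch: Module A provides fixed features,
# Module B learns new classes on the fly without retraining or stored examples.
import numpy as np

class ModuleA:
    """Fixed, representation-rich feature extractor (stands in for a pretrained DNN)."""
    def __init__(self, in_dim, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((in_dim, feat_dim))
    def features(self, x):
        return np.maximum(x @ self.W, 0.0)  # frozen weights, ReLU features

class ModuleB:
    """Fast-learning head: one running-mean prototype per class, updated instantly."""
    def __init__(self):
        self.prototypes, self.counts = {}, {}
    def learn(self, feat, label):
        if label not in self.prototypes:
            self.prototypes[label] = np.zeros_like(feat)
            self.counts[label] = 0
        self.counts[label] += 1
        # incremental mean: no stored data, no gradient retraining
        self.prototypes[label] += (feat - self.prototypes[label]) / self.counts[label]
    def predict(self, feat):
        return min(self.prototypes, key=lambda c: np.linalg.norm(feat - self.prototypes[c]))

backbone, head = ModuleA(in_dim=64, feat_dim=32), ModuleB()
x_new = np.random.rand(64)
head.learn(backbone.features(x_new), "new_object")   # one-shot, post-deployment learning
print(head.predict(backbone.features(x_new)))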

Apparatuses, Methods And Systems For Defining Hardware-Agnostic Brains For Autonomous Robots

US Patent:
2017007, Mar 16, 2017
Filed:
Nov 4, 2016
Appl. No.:
15/343673
Inventors:
- Boston MA, US
Roger Matus - Waltham MA, US
Alexandrea Defreitas - Warwick RI, US
John Michael Amadeo - Mont Vernon NH, US
Tim Seemann - Brookline MA, US
Ethan Marsh - Allston MA, US
Heather Marie Ames - Boston MA, US
Anatoli GORCHETCHNIKOV - Newton MA, US
International Classification:
G06N 3/00
G06N 3/08
G06N 3/04
Abstract:
Conventionally, robots are either programmed to complete tasks using a programming language (either text or graphical), shown what to do for repetitive tasks, or operated remotely by a user. The present technology replaces or augments conventional robot programming and control by enabling a user to define a hardware-agnostic brain that uses Artificial Intelligence (AI) systems, machine vision systems, and neural networks to control a robot based on sensory input acquired by the robot's sensors. The interface for defining the brain allows the user to create behaviors from combinations of sensor stimuli and robot actions, or responses, and to group these behaviors to form brains. An Application Program Interface (API) underneath the interface translates the behaviors' inputs and outputs into API calls and commands specific to particular robots. This allows the user to port brains among different types of robots without knowing the specifics of each robot's commands.
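
The behavior-grouping pattern in this abstract can be illustrated with a small, hypothetical sketch: behaviors pair a sensor predicate with a hardware-agnostic action, a brain groups behaviors, and a per-robot adapter translates actions into robot-specific commands. All class and command names below are invented for illustration and are not the patented API.

# Illustrative sketch of hardware-agnostic behaviors grouped into a "brain",
# with a per-robot adapter doing the robot-specific translation.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Behavior:
    stimulus: Callable[[dict], bool]   # predicate over sensor readings
    action: str                        # hardware-agnostic action name

class Brain:
    def __init__(self, behaviors: List[Behavior]):
        self.behaviors = behaviors
    def decide(self, sensors: dict) -> List[str]:
        return [b.action for b in self.behaviors if b.stimulus(sensors)]

class RobotAdapter:
    """Translates abstract actions into commands for one concrete robot."""
    def __init__(self, command_map: Dict[str, str]):
        self.command_map = command_map
    def execute(self, actions: List[str]):
        for a in actions:
            print("sending:", self.command_map[a])   # stand-in for a real robot API call

brain = Brain([
    Behavior(lambda s: s["obstacle_cm"] < 20, "stop"),
    Behavior(lambda s: s["obstacle_cm"] >= 20, "move_forward"),
])
rover = RobotAdapter({"stop": "MOTOR 0 0", "move_forward": "MOTOR 50 50"})
rover.execute(brain.decide({"obstacle_cm": 12}))   # the same brain can drive any mapped robot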

Graphic Processor Based Accelerator System And Method

US Patent:
2014019, Jul 10, 2014
Filed:
Jan 3, 2014
Appl. No.:
14/147015
Inventors:
- Boston MA, US
Heather Marie Ames - South Boston MA, US
Massimiliano Versace - South Boston MA, US
Fabrizio Santini - Jamaica Plain MA, US
Assignee:
Neurala Inc. - Boston MA
International Classification:
G06T 1/60
US Classification:
345531
Abstract:
An accelerator system is implemented on an expansion card comprising a printed circuit board having (a) one or more graphics processing units (GPU), (b) two or more associated memory banks (logically or physically partitioned), (c) a specialized controller, and (d) a local bus providing signal coupling compatible with the PCI industry standards (this includes but is not limited to PCI-Express, PCI-X, USB 2.0, or functionally similar technologies). The controller handles most of the primitive operations needed to set up and control GPU computation. As a result, the computer's central processing unit (CPU) is freed from this function and is dedicated to other tasks. In this case a few controls (simulation start and stop signals from the CPU and the simulation completion signal back to CPU), GPU programs and input/output data are the information exchanged between CPU and the expansion card. Moreover, since on every time step of the simulation the results from the previous time step are used but not changed, the results are preferably transferred back to CPU in parallel with the computation.

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.