BackgroundCheck.run
Bo Morgan, 43, Redwood City, CA

Bo Morgan Phones & Addresses

Emerald Hills, CA   

Palo Alto, CA   

218 Putnam Ave, Cambridge, MA 02139    617-492-4577    617-864-6132   

220 Putnam Ave, Cambridge, MA 02139    617-864-6132   

513 Putnam Ave, Cambridge, MA 02139   

San Diego, CA   

Boston, MA   

Fallbrook, CA   

Mentions for Bo Morgan

Bo Morgan resumes & CV records

Resumes

Bo Morgan

Location:
Palo Alto, CA
Industry:
Research
Work:
Aibrain, Inc.
Massachusetts Institute of Technology (MIT)
Education:
Fallbrook Union High School
Massachusetts Institute of Technology
Skills:
Machine Learning, Artificial Intelligence, Python, Pattern Recognition, Algorithms, Natural Language Processing, MATLAB, Linux, Java, Human Computer Interaction, Computer Science, Cognitive Science, Sensors, Lisp, Funk, Computer Vision, Neural Networks, Image Processing, Research, Software Engineering, OS X, Programming, C/C++, Cognitive Architectures, Computational Metacognition, Layered Control Systems, Signal Processing, Windows, Programming Language Design, Bass Guitar, OpenCV, Software Development, Robotics, C++, Assembly Language, Flash, Machine Language, Cognitive Neuroscience, Unity3D, Android, LaTeX, Start-Ups, C
Languages:
English
Bo Morgan

Location:
San Diego, California
Industry:
Research
Skills:
C/C++, Lisp, MATLAB, Python, Java, Funk, Linux, Windows, OS X, Algorithms, Artificial Intelligence, Cognitive Architectures, Computational Metacognition, Programming Language Design, Machine Learning, Pattern Recognition, Layered Control Systems, Assembly Language, Flash, Bass Guitar, Machine Language, Human Computer Interaction
Awards:
Bank of America Research Fellowship
France Telecom Research Fellowship
Wayne (Todd) Matson Artificial Intelligence Scholarship
Frederick S. Fenning Scholarship
Intel Science Talent Search, Semi-finalist
New Zealand National Science Fair, U.S. Representative Guest
California State Science Fair, Computer Science Division Second Place
Greater San Diego Science and Engineering Fair, Senior Division Winner

Publications & IP owners

US Patents

Methods And Systems For Composing And Executing A Scene

US Patent:
2023008, Mar 23, 2023
Filed:
Jun 29, 2022
Appl. No.:
17/853557
Inventors:
- Cupertino CA, US
Daniel L. Kovacs - Santa Clara CA, US
Shaun D. Budhram - Los Gatos CA, US
Edward Ahn - San Francisco CA, US
Behrooz Mahasseni - San Jose CA, US
Aashi Manglik - Sunnyvale CA, US
Payal Jotwani - Santa Clara CA, US
Mu Qiao - Campbell CA, US
Bo Morgan - Emerald Hills CA, US
Noah Gamboa - San Francisco CA, US
Michael J. Gutensohn - San Francisco CA, US
Dan Feng - Santa Clara CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
International Classification:
G06T 19/00
Abstract:
In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes scanning a first physical environment to detect a first physical object in the first physical environment and a second physical object in the first physical environment, wherein the first physical object meets at least one first object criterion and the second physical object meets at least one second object criterion. The method includes displaying, in association with the first physical environment, a virtual object moving along a first path from the first physical object to the second physical object. The method includes scanning a second physical environment to detect a third physical object in the second physical environment and a fourth physical object in the second physical environment, wherein the third physical object meets the at least one first object criterion and the fourth physical object meets the at least one second object criterion. The method includes displaying, in association with the second physical environment, the virtual object moving along a second path from the third physical object to the fourth physical object, wherein the second path is different than the first path.

Directing A Virtual Agent Based On Eye Behavior Of A User

US Patent:
2023002, Jan 26, 2023
Filed:
Jun 24, 2022
Appl. No.:
17/848818
Inventors:
- Cupertino CA, US
Dan Feng - Santa Clara CA, US
Bo Morgan - Emerald Hills CA, US
Mark E. Drummond - Palo Alto CA, US
International Classification:
G06F 3/01
G02B 27/01
Abstract:
According to various implementations, a method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a virtual agent that is associated with a first viewing frustum. The first viewing frustum includes a user avatar associated with a user, and the user avatar includes a visual representation of one or more eyes. The method includes, while displaying the virtual agent associated with the first viewing frustum, obtaining eye tracking data that is indicative of eye behavior associated with an eye of the user, updating the visual representation of one or more eyes based on the eye behavior, and directing the virtual agent to perform an action based on the updating and scene information associated with the electronic device.

Generating A Semantic Construction Of A Physical Setting

US Patent:
2021040, Dec 30, 2021
Filed:
Sep 14, 2021
Appl. No.:
17/475004
Inventors:
- Cupertino CA, US
Bo Morgan - Emerald Hills CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
International Classification:
G06T 17/00
G06K 9/20
G06K 7/14
G06F 16/53
Abstract:
In some implementations, a method includes obtaining environmental data corresponding to a physical environment. In some implementations, the method includes determining, based on the environmental data, a bounding surface of the physical environment. In some implementations, the method includes detecting a physical element located within the physical environment based on the environmental data. In some implementations, the method includes determining a semantic label for the physical element based on at least a portion of the environmental data corresponding to the physical element. In some implementations, the method includes generating a semantic construction of the physical environment based on the environmental data. In some implementations, the semantic construction of the physical environment includes a representation of the bounding surface, a representation of the physical element and the semantic label for the physical element.

Responding To Representations Of Physical Elements

US Patent:
2021039, Dec 23, 2021
Filed:
Sep 2, 2021
Appl. No.:
17/465334
Inventors:
- Cupertino CA, US
Bo Morgan - Emerald Hills CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
International Classification:
G06T 11/00
G06T 13/40
G06T 13/80
Abstract:
In some implementations, a method includes obtaining, by a virtual intelligent agent (VIA), a perceptual property vector (PPV) for a graphical representation of a physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes instantiating a graphical representation of the VIA in a graphical environment that includes the graphical representation of the physical element and an affordance that is associated with the graphical representation of the physical element. In some implementations, the method includes generating, by the VIA, an action for the graphical representation of the VIA based on the PPV. In some implementations, the method includes displaying a manipulation of the affordance by the graphical representation of the VIA in order to effectuate the action generated by the VIA.

Perceptual Property Vector For An Object

US Patent:
2021039, Dec 23, 2021
Filed:
Sep 2, 2021
Appl. No.:
17/465320
Inventors:
- Cupertino CA, US
Bo Morgan - Emerald Hills CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
International Classification:
G06T 19/00
G06T 15/04
G06T 15/08
G06K 9/00
Abstract:
In some implementations, a method includes obtaining a semantic construction of a physical environment. In some implementations, the semantic construction of the physical environment includes a representation of a physical element and a semantic label for the physical element. In some implementations, the method includes obtaining a graphical representation of the physical element. In some implementations, the method includes synthesizing a perceptual property vector (PPV) for the graphical representation of the physical element based on the semantic label for the physical element. In some implementations, the PPV includes one or more perceptual characteristic values characterizing the graphical representation of the physical element. In some implementations, the method includes compositing an affordance in association with the graphical representation of the physical element. In some implementations, the affordance allows interaction with the graphical representation of the physical element in accordance with the perceptual characteristic values included in the PPV.

Generating Content Based On State Information

US Patent:
2021039, Dec 23, 2021
Filed:
Sep 2, 2021
Appl. No.:
17/465342
Inventors:
- Cupertino CA, US
Bo Morgan - Emerald Hills CA, US
International Classification:
G06T 19/00
G06T 15/00
G06K 9/62
G06K 9/00
Abstract:
A method includes determining a first portion of state information that is accessible to a first agent instantiated in an environment. The method includes determining a second portion of the state information that is accessible to a second agent instantiated in the environment. The method includes generating a first set of actions for a representation of the first agent based on the first portion of the state information to satisfy a first objective of the first agent. The method includes generating a second set of actions for a representation of the second agent based on the second portion of the state information to satisfy a second objective of the second agent. The method includes modifying the representations of the first and second agents based on the first and second set of actions.

Training A Model With Human-Intuitive Inputs

US Patent:
2021037, Dec 2, 2021
Filed:
Aug 9, 2021
Appl. No.:
17/397839
Inventors:
- Cupertino CA, US
Peter Meier - Los Gatos CA, US
Bo Morgan - Emerald Hills CA, US
Cameron J. Dunn - Los Angeles CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
International Classification:
G06N 20/00
G06K 9/62
G06K 9/00
G10L 15/22
G10L 15/26
Abstract:
In one implementation, a method of generating environment states is performed by a device including one or more processors and non-transitory memory. The method includes displaying an environment including an asset associated with a neural network model and having a plurality of asset states. The method includes receiving a user input indicative of a training request. The method includes selecting, based on the user input, a training focus indicating one or more of the plurality of asset states. The method includes generating a set of training data including a plurality of training instances weighted according to the training focus. The method includes training the neural network model on the set of training data.

Generating Directives For Objective-Effectuators

US Patent:
2021027, Sep 2, 2021
Filed:
May 20, 2021
Appl. No.:
17/325454
Inventors:
- Cupertino CA, US
Siva Chandra Mouli Sivapurapu - Santa Clara CA, US
Bo Morgan - Emerald Hills CA, US
International Classification:
G06T 19/20
G06T 19/00
Abstract:
A method includes generating, in coordination with an emergent content engine, a first objective for a first objective-effectuator and a second objective for a second objective-effectuator instantiated in a computer-generated reality (CGR) environment. The first and second objectives are associated with a mutual plan. The method includes generating, based on characteristic values associated with the first and second objective-effectuators, a first directive for the first objective-effectuator and a second directive for the second objective-effectuator. The first directive limits actions generated by the first objective-effectuator over a first set of time frames associated with the first objective, and the second directive limits actions generated by the second objective-effectuator over a second set of time frames associated with the second objective. The method includes displaying manipulations of CGR representations of the first and second objective-effectuators in the CGR environment in accordance with the first and second directives.

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.