BackgroundCheck.run
Search for: Joseph J Verbeke, 3420 Crestline Dr UNIT 9, San Francisco, CA 94131

Joseph Verbeke Phones & Addresses

San Francisco, CA   

Golden, CO   

Castle Rock, CO   

St Petersburg, FL   

Littleton, CO   

Mountain View, CA   

Mentions for Joseph J Verbeke

Joseph Verbeke resumes & CV records

Resumes

User Experience Design And Prototyping Engineer, Future Experience

Location:
San Francisco, CA
Industry:
Computer Software
Work:
Harman International
User Experience Design and Prototyping Engineer, Future Experience
Signal-To-Noise Media Labs Aug 2013 - Feb 2016
Interactive Artist
Harman International May 2015 - Aug 2015
Future Experience Intern - Autostereoscopic and Autostereogram HUD Prototyping
Harman International Dec 2014 - Jan 2015
Future Experience Intern - Gesture Controlled Surround Sound
Harman International May 2014 - Aug 2014
Future Experience Intern - Auditory Augmented Reality Prototyper
Education:
University of Colorado Denver 2012 - 2015
Bachelor of Science, Music, Recording Arts
University of Colorado Boulder 2009 - 2010
Skills:
Rapid Prototyping, User Experience, C++, Unity, Max/MSP, Microcontrollers, Sound Design, Java, Sound FX Editing, Audio Engineering, Audio Editing, Pro Tools, Sound Mixing, Sound Editing, Audio Post Production, Audio Processing, C#, MIDI, Live Sound, Mastering, JavaScript, AngularJS, Objective-C, Projection Mapping

Owner And Regional Developer Of A Growing Neighborhood-Based Self-Serve Frozen Yogurt Franchise

Location:
1217 East Porter St, Albion, MI 49224
Industry:
Food Production
Work:
Zoyo Neighborhood Yogurt
Owner and Regional Developer of A Growing Neighborhood-Based Self-Serve Frozen Yogurt Franchise
Education:
University of Michigan 2008 - 2009
Master of Music, Music Education
Albion College 1997 - 2001
Bachelor of Arts, Liberal Arts
Lapeer East High School 1997
Skills:
Music Education, Marketing, Band, Team Building, Microsoft Office, Teaching, Leadership, Sales, Small Business, Public Speaking, Small Business Development, Social Media, Curriculum Development, Curriculum Design, Higher Education, Event Planning, Franchising, Small Business Marketing, Customer Retention, Catering, Customer Service, Strategic Planning, Social Media Marketing, Management, Entrepreneurship, Team Management, LinkedIn Marketing, Long Term Customer Relationships
Interests:
Small Business
Blogging
Food Entrepreneurship
Investing
Educational Partnerships
Education
Employee Training
Arts and Culture
Franchising
Social Media Marketing
Community Building
Direct Mail Marketing

Publications & IP owners

US Patents

Autonomous Vehicle Interaction System

US Patent:
2022025, Aug 11, 2022
Filed:
Aug 9, 2019
Appl. No.:
17/625304
Inventors:
- Stamford CT, US
Joseph VERBEKE - San Francisco CA, US
Stefan MARTI - Oakland CA, US
International Classification:
B60W 60/00
G10L 15/22
G08G 1/00
B60Q 1/24
G06Q 50/30
Abstract:
A system for interacting with an autonomous vehicle includes a sensor included in the autonomous vehicle and configured to generate sensor data corresponding to a projected hailing area; a projection system included in the autonomous vehicle and configured to generate the projected hailing area on a surface proximate the autonomous vehicle; and a processor included in the autonomous vehicle and configured to execute instructions to: analyze the sensor data to detect a person within the projected hailing area; and, in response to detecting the person within the projected hailing area, cause an acknowledgment indicator to be outputted.
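The abstract describes a simple detect-and-acknowledge loop. Below is a minimal sketch of that loop, assuming the sensor reports pedestrian positions on the ground plane and modeling the projected hailing area as a circle; all names, shapes, and thresholds are illustrative and not taken from the patent.

```python
from dataclasses import dataclass
from math import hypot


@dataclass
class HailingArea:
    # Projected hailing zone, modeled here as a circle on the ground plane (illustrative).
    center_x: float
    center_y: float
    radius: float

    def contains(self, x: float, y: float) -> bool:
        return hypot(x - self.center_x, y - self.center_y) <= self.radius


def acknowledge() -> None:
    # Stand-in for the acknowledgment indicator (e.g., a light pattern or chime).
    print("Acknowledgment indicator on")


def process_frame(area: HailingArea, detections: list) -> bool:
    """Return True (and trigger an acknowledgment) if any detected person is inside the area."""
    for x, y in detections:
        if area.contains(x, y):
            acknowledge()
            return True
    return False


# Example: one pedestrian inside a 1.5 m hailing circle projected beside the vehicle.
area = HailingArea(center_x=2.0, center_y=0.0, radius=1.5)
process_frame(area, [(2.5, 0.4), (6.0, 3.0)])
```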

Techniques For Detecting And Processing Domain-Specific Terminology

US Patent:
2022024, Aug 4, 2022
Filed:
Feb 1, 2021
Appl. No.:
17/164030
Inventors:
- Stamford CT, US
Evgeny BURMISTROV - Saratoga CA, US
Joseph VERBEKE - San Francisco CA, US
Priya SESHADRI - San Francisco CA, US
International Classification:
G10L 25/78
G10L 15/08
G10L 21/0208
Abstract:
Various embodiments set forth systems and techniques for explaining domain-specific terms detected in a media content stream. The techniques include detecting a speech portion included in an audio signal; determining that the speech portion comprises a domain-specific term; determining an explanatory phrase associated with the domain-specific term; and integrating the explanatory phrase associated with the domain-specific term into playback of the audio signal.
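As a rough illustration of the detect-explain-integrate flow in this abstract, here is a sketch that assumes the speech portion has already been transcribed to text and that domain-specific terms come from a fixed glossary; both assumptions are mine, not from the filing.

```python
import re

# Hypothetical glossary of domain-specific terms and their explanatory phrases.
GLOSSARY = {
    "APR": "annual percentage rate, the yearly cost of borrowing",
    "torque vectoring": "varying drive force per wheel to improve handling",
}


def explain_terms(transcript_segment: str) -> str:
    """Append an explanatory phrase after each recognized domain-specific term."""
    out = transcript_segment
    for term, explanation in GLOSSARY.items():
        if re.search(rf"\b{re.escape(term)}\b", transcript_segment, flags=re.IGNORECASE):
            out += f" [{term}: {explanation}]"
    return out


print(explain_terms("The loan carries a 6.2 percent APR."))
```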

Auditory Augmented Reality Using Selective Noise Cancellation

US Patent:
2021040, Dec 23, 2021
Filed:
Jun 19, 2020
Appl. No.:
16/907063
Inventors:
- Stamford CT, US
Joseph VERBEKE - San Francisco CA, US
International Classification:
H04R 1/10
G10K 11/178
Abstract:
Various embodiments include a computer-implemented method comprising receiving an input signal representing an ambient auditory environment of a user, generating, from the input signal, a set of ambient audio signals that includes a first component signal and a second component signal, generating, based on the first component signal, a first inverse signal that is a polar inverse of the first component signal, removing the first component signal from the set of ambient audio signals, generating a first composite signal that includes at least the first inverse signal and the second component signal, and driving an audio output device to produce soundwaves based on the first composite signal.
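The core of this abstract is selective cancellation: negate (polar-invert) the component to be removed and mix the inverse with the components to keep. A toy NumPy sketch of that composition step, using synthetic signals; the signal names and the two-component split are illustrative only.

```python
import numpy as np


def selective_cancellation(components: list, cancel_index: int) -> np.ndarray:
    """Cancel one ambient component by mixing in its polar inverse,
    while passing the remaining components through unchanged."""
    inverse = -components[cancel_index]  # polar inverse of the selected component
    passthrough = [c for i, c in enumerate(components) if i != cancel_index]
    return inverse + sum(passthrough)    # composite signal driving the audio output device


# Example with two synthetic ambient components (e.g., traffic noise and a voice).
t = np.linspace(0, 1, 8000, endpoint=False)
traffic = 0.5 * np.sin(2 * np.pi * 120 * t)
voice = 0.2 * np.sin(2 * np.pi * 440 * t)
composite = selective_cancellation([traffic, voice], cancel_index=0)
```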

Affective-Cognitive Load Based Digital Assistant

US Patent:
2021030, Oct 7, 2021
Filed:
Apr 2, 2020
Appl. No.:
16/839056
Inventors:
- Stamford CT, US
Sven KRATZ - Saratoga CA, US
Joseph VERBEKE - San Francisco CA, US
Priya SESHADRI - Stamford CT, US
Evgeny BURMISTROV - Saratoga CA, US
Neeka MANSOURIAN - El Dorado Hills CA, US
Stefan MARTI - Oakland CA, US
International Classification:
B60W 60/00
G06F 3/01
G06K 9/00
B60W 40/08
Abstract:
Embodiments of the present disclosure set forth a computer-implemented method comprising receiving, from at least one sensor, sensor data associated with an environment, computing, based on the sensor data, a cognitive load associated with a user within the environment, computing, based on the sensor data, an affective load associated with an emotional state of the user, determining, based on both the cognitive load and the affective load, an affective-cognitive load, determining, based on the affective-cognitive load, a user readiness state associated with the user, and causing one or more actions to occur based on the user readiness state.
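A minimal sketch of the load-combination step described above, assuming both loads are already normalized to the 0..1 range and combined with a weighted sum; the weights, thresholds, and state names are placeholders, not values from the patent.

```python
def readiness_state(cognitive_load: float, affective_load: float,
                    w_cog: float = 0.6, w_aff: float = 0.4) -> str:
    """Combine cognitive and affective load into an affective-cognitive load
    and map it to a coarse user readiness state (illustrative thresholds)."""
    acl = w_cog * cognitive_load + w_aff * affective_load
    if acl < 0.3:
        return "ready"       # e.g., digital assistant may proactively engage
    if acl < 0.7:
        return "cautious"    # e.g., defer non-urgent notifications
    return "overloaded"      # e.g., suppress interaction, simplify output


print(readiness_state(cognitive_load=0.8, affective_load=0.5))
```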

Automatically Estimating Skill Levels And Confidence Levels Of Drivers

US Patent:
2021030, Sep 30, 2021
Filed:
Mar 8, 2021
Appl. No.:
17/195493
Inventors:
- Stamford CT, US
Joseph VERBEKE - San Francisco CA, US
Sven KRATZ - Saratoga CA, US
International Classification:
B60W 40/08
B60W 50/12
B60W 50/16
Abstract:
In various embodiments, a driver sensing subsystem computes a characterization of a driver based on physiological attribute(s) of the driver that are measured as the driver operates a vehicle. Subsequently, a driver assessment application uses a confidence level model to estimate a confidence level of the driver based on the characterization of the driver. The driver assessment application then causes driver assistance application(s) to modify at least one functionality of the vehicle based on the confidence level. Advantageously, by enabling the driver assistance application(s) to take into account the confidence level of the driver, the driver assessment application can improve driving safety relative to conventional techniques for implementing driver assistance applications that disregard the confidence levels of drivers.
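To make the pipeline concrete, here is a toy sketch of the characterize-estimate-adjust chain, with hand-picked physiological features and thresholds standing in for the learned confidence level model the abstract refers to; nothing below comes from the patent itself.

```python
def estimate_confidence(heart_rate_bpm: float, grip_force_n: float) -> float:
    """Map a toy physiological characterization to a 0..1 confidence level.
    A real system would use a trained confidence level model instead."""
    stress = 0.5 * max(0.0, min(1.0, (heart_rate_bpm - 60) / 60)) \
           + 0.5 * max(0.0, min(1.0, grip_force_n / 200))
    return 1.0 - stress


def adjust_assistance(confidence: float) -> dict:
    """Modify driver-assistance functionality based on the estimated confidence."""
    return {
        "lane_keep_assist": confidence < 0.6,
        "follow_distance_s": 2.0 if confidence >= 0.6 else 3.0,
    }


print(adjust_assistance(estimate_confidence(heart_rate_bpm=95, grip_force_n=150)))
```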

Techniques For Separating Driving Emotion From Media Induced Emotion In A Driver Monitoring System

US Patent:
2021028, Sep 16, 2021
Filed:
Mar 16, 2020
Appl. No.:
16/820533
Inventors:
- Stamford CT, US
Joseph VERBEKE - San Francisco CA, US
International Classification:
G10L 25/63
G06K 9/00
G10L 25/81
Abstract:
One or more embodiments include an emotion analysis system for computing and analyzing the emotional state of a user. The emotion analysis system acquires, via at least one sensor, sensor data associated with the user. The emotion analysis system determines, based on the sensor data, an emotional state associated with the user. The emotion analysis system determines a first component of the emotional state that corresponds to media content being accessed by the user. The emotion analysis system applies a first function to the emotional state to remove the first component from the emotional state.
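A compact illustration of the decomposition step, assuming the emotional state is represented as a valence/arousal vector and that plain vector subtraction stands in for the "first function" that removes the media-induced component; the representation and values are mine, not from the filing.

```python
import numpy as np


def driving_emotion(measured_state: np.ndarray, media_component: np.ndarray) -> np.ndarray:
    """Remove the media-induced component from the measured emotional state.
    Subtraction is a simple stand-in for the removal function described in the abstract."""
    return measured_state - media_component


measured = np.array([0.6, 0.8])    # overall valence/arousal estimated from sensors
from_media = np.array([0.4, 0.3])  # component attributed to the song being played
print(driving_emotion(measured, from_media))  # residual attributed to driving itself
```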

Automatic Reference Finding In Audiovisual Scenes

US Patent:
2020023, Jul 23, 2020
Filed:
Jan 22, 2019
Appl. No.:
16/254523
Inventors:
- Stamford CT, US
Sven KRATZ - Mountain View CA, US
Joseph VERBEKE - San Francisco CA, US
Stefan MARTI - Oakland CA, US
International Classification:
G06F 16/483
G06K 9/00
G06F 16/43
G06F 16/438
G06F 16/48
Abstract:
Embodiments of the present disclosure set forth a computer-implemented method for identifying an object within an environment comprising receiving, via at least one sensor, first sensor data associated with an environment, storing, in a memory, the first sensor data in association with a first scene, and in response to receiving a user request for information associated with the environment, selecting, based on the user request, the first scene, accessing, via the memory, the first sensor data associated with the first scene, analyzing the first sensor data to identify a first object included in the first scene, and causing information associated with the first object to be output via at least one output device.
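A minimal sketch of the store-then-query pattern described above; it assumes object recognition results are stored with each scene, whereas the abstract defers analysis of the raw sensor data until a user request arrives. The class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Scene:
    timestamp: float
    # Simplified stand-in for raw sensor data: objects already recognized in the frame.
    objects: Dict[str, str] = field(default_factory=dict)


class SceneMemory:
    """Store recent scenes and answer after-the-fact questions about them."""

    def __init__(self) -> None:
        self.scenes: List[Scene] = []

    def store(self, scene: Scene) -> None:
        self.scenes.append(scene)

    def query(self, label: str) -> Optional[str]:
        # Select the most recent stored scene containing the requested object
        # and return the information associated with it.
        for scene in reversed(self.scenes):
            if label in scene.objects:
                return scene.objects[label]
        return None


memory = SceneMemory()
memory.store(Scene(timestamp=10.0, objects={"billboard": "Concert at the arena, Friday 8 pm"}))
print(memory.query("billboard"))  # e.g., answering "What did that billboard say?"
```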

Mapping Virtual Sound Sources To Physical Speakers In Extended Reality Applications

US Patent:
2020023, Jul 23, 2020
Filed:
Jan 22, 2019
Appl. No.:
16/254527
Inventors:
- Stamford CT, US
Adam BOULANGER - Palo Alto CA, US
Joseph VERBEKE - San Francisco CA, US
Stefan MARTI - Oakland CA, US
International Classification:
H04S 7/00
H04R 5/02
H04R 5/04
H04S 3/00
Abstract:
One or more embodiments include an audio processing system for generating an audio scene for an extended reality (XR) environment. The audio processing system determines that a first virtual sound source associated with the XR environment affects a sound in the audio scene. The audio processing system generates a sound component associated with the first virtual sound source based on a contribution of the first virtual sound source to the audio scene. The audio processing system maps the sound component to a first loudspeaker included in a plurality of loudspeakers. The audio processing system outputs at least a first portion of the sound component for playback on the first loudspeaker.
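As a rough sketch of the mapping step, here is a nearest-speaker assignment for a virtual source position; a production system would more likely use amplitude panning across several loudspeakers, and all positions below are made up for illustration.

```python
import math


def map_source_to_speaker(source_pos, speaker_positions):
    """Pick the index of the physical loudspeaker closest to the virtual sound source."""
    return min(range(len(speaker_positions)),
               key=lambda i: math.dist(source_pos, speaker_positions[i]))


# Hypothetical speaker layout and virtual source, positions in meters (x, y, z).
speakers = [(-1.0, 1.0, 0.0), (1.0, 1.0, 0.0), (0.0, -1.5, 0.0)]
virtual_bird = (0.8, 1.2, 0.4)
idx = map_source_to_speaker(virtual_bird, speakers)
print(f"Route bird-chirp sound component to speaker {idx}")
```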

NOTICE: You may not use BackgroundCheck or the information it provides to make decisions about employment, credit, housing or any other purpose that would require Fair Credit Reporting Act (FCRA) compliance. BackgroundCheck is not a Consumer Reporting Agency (CRA) as defined by the FCRA and does not provide consumer reports.