Context-Aware Hearing System

December 12, 2014 - 2:15pm
Mechanical Engineering Research Laboratory (MERL), Room 203

Free and open to the public

The ever-growing number of amplification parameters available in modern digital hearing aids has brought a corresponding increase in the complexity of the fitting procedure and in the difficulty of prescribing the best settings. Trainable hearing aids have been described in the literature but have not gained widespread use. Including a smartphone as part of the hearing system has the potential to revolutionize the way users interact with their hearing aids, empowering individuals to participate in improving the way they hear and leveraging the positive motivational effects of co-creation. Furthermore, the sensing, memory, and processing capabilities of a smartphone enable context awareness: an additional layer of processing that takes into account factors such as the user's sound environment, location, sound level, and time of day.
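As a rough, hypothetical illustration (not part of the talk abstract), the contextual factors listed above could be represented on the phone as a simple record; all field and method names below are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class ListeningContext:
    """Snapshot of the context in which the user adjusts their hearing aids."""
    sound_environment: str   # e.g. "quiet", "speech-in-noise", "music"
    latitude: float          # location from the phone's GPS
    longitude: float
    sound_level_db: float    # measured sound level, dB SPL
    timestamp: datetime      # used to derive time of day

    def time_of_day(self) -> str:
        # Coarse bucket that a context-aware system could use as a feature.
        hour = self.timestamp.hour
        if hour < 12:
            return "morning"
        if hour < 18:
            return "afternoon"
        return "evening"
```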

The overarching research question is:

Can people train their hearing aids, using a smartphone in real-world situations, to provide settings preferred over existing hearing aid automatic settings?

We investigated this question through the design, implementation, and experimental validation of Awear, a novel, wearable, context-aware hearing system comprising a smartphone, hearing aids, and a body-worn gateway. To make the problem tractable, the space of settings was limited to certain modes (microphone directionality, noise reduction) and programs (general, music, and party). The encouraging results obtained in a user study with 16 participants are a first step toward demonstrating how smartphones might make hearing aid settings more personalized and lead to increased user satisfaction.
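A minimal sketch of that restricted setting space, assuming two microphone-directionality modes, an on/off noise-reduction mode, and the three named programs (the exact option values are assumptions, not taken from the abstract):

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product


class Directionality(Enum):
    OMNIDIRECTIONAL = "omni"
    DIRECTIONAL = "directional"


class NoiseReduction(Enum):
    OFF = "off"
    ON = "on"


class Program(Enum):
    GENERAL = "general"
    MUSIC = "music"
    PARTY = "party"


@dataclass(frozen=True)
class HearingAidSetting:
    directionality: Directionality
    noise_reduction: NoiseReduction
    program: Program


# Under these assumptions the full space is small (2 x 2 x 3 = 12 settings),
# which is what makes training from real-world user feedback tractable.
ALL_SETTINGS = [
    HearingAidSetting(d, n, p)
    for d, n, p in product(Directionality, NoiseReduction, Program)
]
```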

Gabriel Aldaz
Department of Mechanical Engineering
Advisor: Larry Leifer

Contact Email: zamfir@stanford.edu