Research

Imagine having a sound mixing board for real life: You could turn up the voice of a conversation partner, silence the drone of an airplane engine, or remove echoes from the public address system in a train station.

Our research group studies assistive and augmentative listening technologies that change the way humans experience the world through sound. Using advanced sensing and processing systems, not only can we make listening easier for people with hearing loss, but we can also give everyone superhuman hearing.

Assistive and Augmentative Listening Applications

Hearing aids

Hearing aids work poorly in noisy environments like crowded restaurants, where users need help most. We believe that the key to improving performance is connectivity: Future hearing devices will be able to cooperate with other devices in the room and with each other to deliver better listening experiences.

Assistive listening systems

Assistive listening systems, like wireless microphones and headsets, can offer dramatic benefits but are often overlooked by users, clinicians, and researchers. Our team is reimagining assistive listening technology as an effortless, ambient accessibility resource integrated into other devices and the built environment.

Extended reality

Immersive technologies, like augmented and virtual reality, promise to transform the ways that humans interact with computers, their environments, and one another. Audio signal processing plays an important role in extended-reality experiences, helping listeners to feel immersed in a virtual soundscape.

Sensory augmentation

Sensory augmentation systems provide superhuman perception, allowing users to hear a quiet sound from across a room or communicate naturally from across the country. Sensory augmentation aims to enhance, not replace, human abilities; it lies at the intersection of artificial intelligence and human intelligence.

Research Areas

There are two key challenges in designing advanced listening systems:

  1. Build technological systems that can hear better than our unaided ears, for example by using dozens or hundreds of microphones spread around a room.
  2. Process that superhuman sound information into a form that’s useful for humans. For example, we can make it sound like we’re hearing everything using our own ears, but with some sounds turned up and other sounds turned down.
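
The "mixing board" idea in step 2 can be sketched in a few lines: once sounds have been captured and separated, apply a per-source gain and sum the results. The `remix` helper and toy signals below are purely illustrative, not part of any deployed system.

```python
import numpy as np

def remix(sources, gains_db):
    """Re-mix separated source signals with per-source gains given in dB."""
    gains = 10.0 ** (np.asarray(gains_db) / 20.0)
    return np.sum(gains[:, None] * np.asarray(sources), axis=0)

# Toy example: two 1-second "sources" at a 16 kHz sampling rate
fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)   # stand-in for a conversation partner
noise = np.random.randn(fs) * 0.1     # stand-in for background noise
out = remix([voice, noise], gains_db=[+6.0, -20.0])  # voice up, noise down
```

In a real listening device, the hard part is the separation itself; the remix stage is simple once clean source estimates exist.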

Within those broad challenges, there are numerous technical problems for researchers to solve. Here are some of the topics we are exploring.

Listening devices must process audio in real time with imperceptible delay and distortion, constraints that do not arise in communication and machine listening applications.

  • Low-delay algorithms and architectures
  • Wearable device microphones and other sensors
  • Dynamic range compression in noisy environments
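
To illustrate the dynamic range compression mentioned above, here is a textbook feedforward compressor: track a smoothed signal envelope and reduce gain when it exceeds a threshold. This is a generic sketch with placeholder parameters, not a hearing-device algorithm.

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=4.0,
             attack_ms=5.0, release_ms=50.0):
    """Feedforward dynamic range compressor: attenuate the signal when its
    smoothed envelope rises above the threshold. Textbook sketch only."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(np.asarray(x, dtype=float))
    for n, s in enumerate(x):
        mag = abs(s)
        a = a_att if mag > env else a_rel       # fast attack, slow release
        env = a * env + (1.0 - a) * mag
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        gain_db = -(1.0 - 1.0 / ratio) * over_db if over_db > 0 else 0.0
        y[n] = s * 10.0 ** (gain_db / 20.0)
    return y
```

The attack and release constants hint at the delay problem: smoothing the envelope trades responsiveness against audible pumping, and every choice interacts with background noise.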

A key challenge in listening technology is to simulate, preserve, and/or manipulate the spatial cues that humans use to localize sounds around them.

  • Preserving realistic binaural cues in assistive/augmentative processing
  • Rich spatial sound field capture and manipulation
  • Data- and physics-driven acoustic modeling for complex environments
  • Robustness to motion, especially of human users
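
One of the binaural cues above, the interaural time difference (ITD), can be estimated from the lag of the peak cross-correlation between the two ear signals. The function name and sign convention below are my own, for illustration:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference from the lag of the peak
    cross-correlation. Positive ITD means the left-ear signal lags,
    i.e., the source is closer to the right ear."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    return lag / fs                           # seconds

# Toy example: the same noise burst, delayed by 8 samples at the left ear
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
left = np.concatenate([np.zeros(8), x])
right = np.concatenate([x, np.zeros(8)])
itd = estimate_itd(left, right, fs=16000)  # ≈ 8 / 16000 s, i.e. 0.5 ms
```

Preserving cues like this through assistive processing is what keeps an amplified voice sounding like it comes from the talker rather than from the device.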

Microphone arrays have long been used to capture sound from a distance. Modern transducer and processing technologies are enabling arrays with dozens or hundreds of microphones.

  • Large microphone arrays in standalone devices, wearables, and infrastructure
  • Scalable algorithms for high-resolution beamforming and source separation
  • Novel paradigms for distance- and area-based spatial filtering (e.g. “bubbleforming”)
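
The classical baseline behind these beamforming topics is the delay-and-sum beamformer: time-align each microphone toward a chosen look direction, then average. A minimal far-field sketch, assuming whole-sample delays (real systems use fractional-delay filters):

```python
import numpy as np

def delay_and_sum(mic_signals, mic_positions, look_dir, fs, c=343.0):
    """Delay-and-sum beamformer for a far-field (plane-wave) source.
    mic_positions: (M, 3) array in meters; look_dir: vector toward the
    source. Whole-sample delays only, for simplicity."""
    pos = np.asarray(mic_positions, dtype=float)
    u = np.asarray(look_dir, dtype=float)
    u = u / np.linalg.norm(u)
    # A mic farther along u is closer to the source and hears the wave
    # earlier; delay it by that head start to align all channels.
    delays = pos @ u / c
    delays -= delays.min()
    out = np.zeros(len(mic_signals[0]))
    for x, d in zip(mic_signals, delays):
        out += np.roll(x, int(round(d * fs)))
    return out / len(mic_signals)
```

Sounds from the look direction add coherently while sounds from elsewhere are averaged away; with dozens or hundreds of microphones, that averaging becomes a sharp spatial filter.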

As consumer, professional, and medical audio devices become more ubiquitous and better connected, we can realize large-scale networks of dozens of fixed, mobile, wearable, and infrastructure devices that deliver truly superhuman hearing capabilities.

  • Cooperative algorithms to combine signals from heterogeneous devices
  • Sensor fusion techniques for beamforming and source separation
  • Robustness to network bandwidth and latency constraints
  • Robustness to motion and synchronization uncertainty

Rigorous research in audio signal processing, especially spatial audio, requires large data sets recorded in real rooms under repeatable conditions. We are developing an open-source robotic acoustic research system to enable automated experiments with realistic but controlled motion.

  • Low-noise robotic systems to manipulate playback and recording devices
  • Software platform for synchronous robot motion and sound playback/recording