AUDIO: Modeling and Interaction
Projects in the Audio theme explore using technology to model, retrieve, recommend, and interact with audio and speech content.
The Connected Experiences lab is addressing this understudied field in several areas.
First, we're working to understand a relatively new medium: podcasts. We're conducting user studies to better understand how people use podcasts, and developing deep learning-based labeling algorithms that predict attributes of podcast content for use in recommendations.
Second, we're working to develop better audio-only interfaces. This group of projects explores how people search and manage content using smart speakers, and invents novel assistive interfaces that help users with low vision interact with content.
One example of an audio-only interface, which also falls under the visual track, uses 3D printing to create objects with computer-readable labels. Combined with sensing technologies and real-time feedback, these models help people with low vision learn new concepts without relying on visual or tactile graphics. You can read more about the project at its website.