We’re developing novel methods to enable people with different abilities to perform daily tasks more effectively with augmented reality. For example, in one project we developed an augmented reality application that identifies specified products on a grocery store shelf and provides visual cues to quickly direct a user’s attention to those products. We’re also exploring augmented reality applications in other domains, such as wayfinding.
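The core loop of such an application is to locate a target product in the camera image and then anchor a visual cue at that location. The sketch below illustrates only the localization step, using naive template matching on a synthetic grayscale "shelf" image; a real AR pipeline would use a trained detector on live camera frames, and all images and names here are illustrative assumptions.

```python
import numpy as np

def locate(shelf, template):
    """Return (row, col) of the best template match (lowest sum of squared differences)."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(shelf.shape[0] - th + 1):
        for c in range(shelf.shape[1] - tw + 1):
            ssd = np.sum((shelf[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Synthetic data: a 40x40 "shelf" with a distinctive 8x8 "product" pasted in.
rng = np.random.default_rng(1)
shelf = rng.random((40, 40))
product = rng.random((8, 8)) + 2.0   # bright patch, easy to distinguish
shelf[12:20, 25:33] = product

row, col = locate(shelf, product)
print(row, col)  # the position where a highlight cue would be overlaid
```

In a deployed system, the returned position would drive the rendering of an attention-directing overlay (e.g. a highlight box) registered to the physical shelf.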
We look at how data from mobile devices and sensors can enable a socio-technical infrastructure that provides awareness, trust, and meaningful connections between physically co-located individuals. Such infrastructure will empower people to build stronger connections and communicate more effectively in their local communities, with long-term impact on participation and democracy.
The goal of this project is to advance our understanding of the psychological mechanisms behind people’s attention, as reflected through their interactions with digital content. In particular, we focus on the context of actions that people take online, without any experimental intervention, and examine how that context affects behavior. We draw on theories from a wide range of fields to address questions about individuals’ attention to content, their expectations of attention from others, and the value of receiving that attention. To that end, we harness machine learning methods, as well as language and statistical modeling, to analyze signals of human attention as they occur naturally outside of lab settings.
Humans have all kinds of preferences for things we can’t describe formally. We all have our favourite music, food, writers, and TV shows. Sometimes we can’t articulate quite why we like something — we just do. That means it’s difficult to teach machines about intuitive notions like food taste or music similarity.
Our work is about how to reach into people’s minds and pull out their intuitive notion of similarity, such as food taste. To do this, we first build a deep learning system that can classify different foods. We then augment this visual machine expertise with the human expertise of thousands of crowd workers. The final result is a learned food embedding that captures which foods taste similar. Now we can teach machines that broccoli tastes more similar to carrots than to cake, even though a human can’t always articulate why.
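A common way to fold crowd judgments into an embedding is through triplets of the form "A tastes more like B than like C": the embedding is adjusted so the anchor moves toward the positive item and away from the negative one. The sketch below trains a tiny 2-D embedding with a triplet hinge loss; the food names, triplets, and hyperparameters are illustrative, not the project’s actual data or model.

```python
import numpy as np

foods = ["broccoli", "carrots", "cake", "ice_cream", "spinach"]
idx = {f: i for i, f in enumerate(foods)}

# Crowd judgments as (anchor, positive, negative) triplets:
# "anchor tastes more like positive than like negative".
triplets = [
    ("broccoli", "carrots", "cake"),
    ("broccoli", "spinach", "ice_cream"),
    ("cake", "ice_cream", "carrots"),
    ("carrots", "spinach", "cake"),
]

rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(len(foods), 2))  # one 2-D vector per food
margin, lr = 1.0, 0.05

for _ in range(500):  # SGD on the triplet hinge loss
    for a, p, n in triplets:
        ea, ep, en = E[idx[a]], E[idx[p]], E[idx[n]]
        d_ap = ea - ep               # anchor-positive difference
        d_an = ea - en               # anchor-negative difference
        loss = np.dot(d_ap, d_ap) - np.dot(d_an, d_an) + margin
        if loss > 0:                 # only violated triplets produce gradients
            E[idx[a]] -= lr * 2 * (d_ap - d_an)
            E[idx[p]] += lr * 2 * d_ap
            E[idx[n]] -= lr * 2 * d_an

def dist(a, b):
    return float(np.linalg.norm(E[idx[a]] - E[idx[b]]))

print(dist("broccoli", "carrots") < dist("broccoli", "cake"))
```

After training, distances in the learned space respect the crowd’s judgments, so broccoli ends up closer to carrots than to cake.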
We introduce a new user-centric recommendation model, called Immersive Recommendation, that incorporates cross-platform and diverse personal digital traces into recommendations. Our recent work includes (1) creative content recommendation with unstructured application usage traces, (2) food and restaurant recommendation with food photos, and (3) news and events recommendation with personal text data.
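One way to picture the cross-platform fusion idea is to project each trace source into a shared feature space, combine them into a single user profile, and rank candidate items against it. The sketch below does this with a weighted sum and cosine similarity; the 3-D topic space, platform weights, and item names are illustrative assumptions, not the Immersive Recommendation model itself.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-D topic space: [food, tech, arts]. Each trace source yields
# one feature vector for the same user.
traces = {
    "app_usage":   np.array([0.1, 0.8, 0.1]),
    "food_photos": np.array([0.9, 0.0, 0.1]),
    "text_posts":  np.array([0.3, 0.3, 0.4]),
}
weights = {"app_usage": 0.3, "food_photos": 0.5, "text_posts": 0.2}

# Fuse the per-platform traces into one profile vector.
profile = sum(weights[k] * v for k, v in traces.items())

# Rank candidate items by similarity to the fused profile.
items = {
    "ramen_place": np.array([1.0, 0.0, 0.1]),
    "gadget_blog": np.array([0.0, 1.0, 0.0]),
    "art_exhibit": np.array([0.1, 0.0, 1.0]),
}
ranked = sorted(items, key=lambda k: cosine(profile, items[k]), reverse=True)
print(ranked[0])
```

The weighting step is where the "immersive" fusion happens: richer or more personal trace sources can be given more influence over the final profile.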
3D printing technology is becoming mainstream and offers a potential alternative to tactile graphics. However, current 3D-printed graphics can convey only limited information through their shapes and textures. We are developing tools that enable users to interact with 3D prints using gestures.