VISUAL: Mixed & Augmented Reality

We’re developing novel methods to enable people of all abilities to perform daily tasks more effectively with augmented reality.

In one project, we developed an augmented reality application that identifies specific products on a grocery store shelf and provides visual cues to quickly direct a user’s attention to those products. We’re exploring augmented reality applications in other domains as well, such as wayfinding.

ARLane is a HoloLens proof-of-concept application, built by the Lab and The Foundry @ Cornell Tech to run in Aol’s Area 51 Lab at its New York City headquarters. Building on the technology described above, ARLane aims to enrich people’s in-store experience: as they gaze at an item, ARLane displays extra information on a virtual card that can be flipped to show related products.
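The gaze-triggered card described above follows a common mixed-reality interaction pattern: a dwell timer that fires once the user’s gaze has rested on a single item long enough. The sketch below illustrates that pattern only; the `DwellSelector` class, its method names, and the one-second threshold are illustrative assumptions, not ARLane’s actual implementation.

```python
# Hypothetical sketch of gaze-dwell selection, the pattern by which a
# gazed-at item can trigger an information card. All names and
# thresholds here are illustrative assumptions.

class DwellSelector:
    """Fires a selection once gaze has rested on one target long enough."""

    def __init__(self, dwell_seconds=1.0):
        self.dwell_seconds = dwell_seconds
        self.current_target = None
        self.gaze_start = None

    def update(self, target, now):
        """Feed the currently gazed-at target (or None) once per frame.

        Returns the target when the dwell threshold is crossed,
        otherwise None.
        """
        if target != self.current_target:
            # Gaze moved to a different target: restart the dwell timer.
            self.current_target = target
            self.gaze_start = now
            return None
        if target is not None and now - self.gaze_start >= self.dwell_seconds:
            self.gaze_start = float("inf")  # fire at most once per dwell
            return target
        return None
```

In use, the application would call `update()` every frame with whatever item the gaze ray currently hits; when the call returns an item, the app shows that item’s virtual card. A production HoloLens app would implement the same logic in C# on top of the platform’s gaze input APIs.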

Another project aims to link distributed teams in a way that builds mutual awareness while respecting privacy. The Visualink Portal is a remote telepresence system that can be installed in offices, labs, or common areas to connect groups working toward a common goal. Whereas previous visual awareness systems are difficult to integrate into existing spaces and raise privacy concerns among users, the Visualink Portal uses modular components and existing surfaces, along with techniques that help assuage those concerns.

Yet another project, also found in the audio track, uses 3D printing to create objects with computer-readable labels. Combined with sensing technologies and real-time feedback, the models help people with low vision learn new concepts without relying on visual or tactile graphics. You can read more about the project at its website.

Visual Theme Publications

DejaVu: A System for Journalists to Collaboratively Address Visual Misinformation

Hana Matatov, Adina Bechhofer, Lora Aroyo, Ofra Amir, and Mor Naaman

Presented at the Computation + Journalism Symposium, February 2019, Miami, FL.


Understanding Image Quality and Trust in Peer-to-Peer Marketplaces

Xiao Ma, Lina Mezghani, Kimberly Wilber, Hui Hong, Robinson Piramuthu, Mor Naaman, and Serge Belongie

Proceedings of WACV 2019


Visualink: A Minimal and Nonintrusive System for Distributed Awareness

Benedetta Piantella, Doron Tal, and Mor Naaman

Demonstration at the 2018 ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), Jersey City, NJ.


The Effect of Computer-Generated Descriptions on Photo-Sharing Experiences of People with Visual Impairments

Yuhang Zhao, Shaomei Wu, Lindsay Reynolds, and Shiri Azenkot

Proceedings of the 2018 ACM Conference on Computer-Supported Cooperative Work and Social Computing.


Designing Smartglasses Applications for People with Low Vision

Shiri Azenkot and Yuhang Zhao

ACM SIGACCESS Accessibility and Computing (2017), p. 19.


Designing and Evaluating Livefonts [video]

Danielle Bragg, Shiri Azenkot, Kevin Larson, Ann Bessemans, and Adam Tauman Kalai

Proceedings of the 30th Annual Symposium on User Interface Software and Technology (2017).


Understanding Low Vision People's Perception of Commercial Augmented Reality Glasses

Yuhang Zhao, Michele Hu, Shafeka Hashash, and Shiri Azenkot

Proceedings of CHI 2017.


ARLane: A Mixed Reality Shopping Experience

Adrian Vatchinsky, Arnaud Sahuguet, Chumeng Xu, Delia Sage Casa, Juliana Kleist-Mendez, Shiri Azenkot, and Stephen Lang

Demo installed at Aol's Area 51 Lab in New York City. 2017.


CueSee: Exploring Visual Cues for People with Low Vision to Facilitate a Visual Search Task

Yuhang Zhao, Sarit Szpiro, Jonathan Knighten, and Shiri Azenkot

Proceedings of UbiComp 2016.


ForeSee: A Customizable Head-Mounted Vision Enhancement System for People with Low Vision

Yuhang Zhao, Sarit Szpiro, and Shiri Azenkot

Proceedings of ASSETS 2015.


© 2019 The Connected Experiences Lab.