Creating Gaze Annotations in Head Mounted Displays

Diako Mardanbeigi, Pernilla Qvarfordt

    Research output: Article in proceedings · Research · peer-review

    Abstract

    To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found that gaze annotations add precision and expressiveness compared to annotations of the image as a whole.
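
    As a rough illustration of the workflow the abstract describes (capture an image, record a gaze point, transcribe speech, bundle the result for sharing), a gaze annotation could be represented as a small record. The sketch below is an assumption, not code from the paper; the names GazeAnnotation and create_annotation, the normalized-coordinate convention, and the fields are all hypothetical.

        # Minimal sketch of a gaze-annotation record (illustrative only, not from the paper).
        from dataclasses import dataclass, field
        from datetime import datetime, timezone


        @dataclass
        class GazeAnnotation:
            """One annotation: a gaze point in a captured image plus a spoken note."""
            image_path: str     # image captured with the HMD camera
            gaze_x: float       # gaze location, normalized image coordinates (0..1)
            gaze_y: float
            transcript: str     # speech-recognized verbal description
            author: str
            created_at: datetime = field(
                default_factory=lambda: datetime.now(timezone.utc))


        def create_annotation(image_path: str, gaze_point: tuple[float, float],
                              transcript: str, author: str) -> GazeAnnotation:
            """Bundle a captured image, the recorded gaze point, and the transcript."""
            x, y = gaze_point
            if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
                raise ValueError("gaze point must be in normalized image coordinates")
            return GazeAnnotation(image_path, x, y, transcript, author)


        if __name__ == "__main__":
            # Example: annotate an object near the image center with a spoken note.
            note = create_annotation("capture_001.jpg", (0.48, 0.55),
                                     "Check this valve for leaks", "user_a")
            print(note)

    In such a scheme, sharing an annotation would amount to transmitting this record alongside the image; the receiving HMD could then render the marker at (gaze_x, gaze_y) and show the transcript.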
    Original language: English
    Title of host publication: ISWC '15 Proceedings of the 2015 ACM International Symposium on Wearable Computers
    Publisher: Association for Computing Machinery
    Publication date: 2015
    Pages: 161-162
    ISBN (Print): 978-1-4503-3578-2
    DOIs
    Publication status: Published - 2015

    Keywords

    • Gaze annotations
    • Head mounted displays
    • Spatial communication
    • Speech recognition
    • Multimodal interaction
