Investigating Representation of Text and Audio in Educational VR Using Learning Outcomes and EEG

Sarune Baceviciute, Aske Mottelson, Thomas Terkildsen, Guido Makransky

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.
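The EEG comparison reported in the abstract rests on band-power estimates in the theta and alpha ranges. As an illustration only, and not the authors' actual analysis pipeline, the Python sketch below shows one common way to estimate per-condition theta (4–8 Hz) and alpha (8–12 Hz) power using Welch's method; the sampling rate, condition names, and synthetic recordings are assumptions introduced for this example.

```python
# Minimal illustrative sketch, NOT the authors' analysis pipeline:
# estimating theta (4-8 Hz) and alpha (8-12 Hz) band power per condition
# with Welch's method. The sampling rate, condition names, and synthetic
# recordings below are assumptions made for this example only.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz


def band_power(signal, fs, low, high):
    """Power of `signal` in the [low, high] Hz band (Welch PSD, integrated)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band


# Synthetic stand-in for per-participant EEG traces in each condition.
rng = np.random.default_rng(0)
eeg_by_condition = {
    cond: [rng.standard_normal(FS * 60) for _ in range(5)]  # five 60 s traces
    for cond in ("text_overlay", "virtual_book", "audio")
}

for condition, recordings in eeg_by_condition.items():
    theta = [band_power(x, FS, 4, 8) for x in recordings]
    alpha = [band_power(x, FS, 8, 12) for x in recordings]
    print(f"{condition}: theta={np.mean(theta):.3f}, alpha={np.mean(alpha):.3f}")
```

In practice, per-participant band powers like these would feed into the between-condition statistical comparison described in the abstract; the band boundaries used here are conventional definitions, not values taken from the paper.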
Original language: Undefined/Unknown
Title of host publication: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery
Publication date: 2020
Pages: 1–13
ISBN (Print): 9781450367080
Publication status: Published - 2020
Externally published: Yes
