Abstract
This paper introduces a novel multimodal corpus consisting of 12 video recordings of Zoom meetings held in English by an international group of researchers from September 2021 to March 2023. The meetings have an average duration of about 40 minutes each, for a total of 8 hours. The number of participants varies from 5 to 9 per meeting. The participants’ speech was transcribed automatically using WhisperX, while visual coordinates of several keypoints of the participants’ head, shoulders and wrists were extracted using OpenPose. The audio-visual recordings will be distributed together with the orthographic transcription as well as the visual coordinates. In the paper we describe how the corpus was collected, transcribed and enriched with the visual coordinates; we give descriptive statistics concerning both the speech transcription and the visual keypoint values; and we present and discuss visualisations of these values. Finally, we carry out a short preliminary analysis of the role of feedback in the meetings, and show how visualising the coordinates extracted via OpenPose can be used to see how gestural behaviour supports the use of feedback words during the interaction.
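The paper itself does not include extraction code, but the keypoint data described above follows OpenPose's standard JSON output, in which each person's 2D pose is a flat list of `(x, y, confidence)` triples indexed by the BODY_25 skeleton. As a minimal sketch (the keypoint names and the helper function here are illustrative, not taken from the corpus distribution), reading the head, shoulder and wrist coordinates from one frame could look like this:

```python
import json

# Standard OpenPose BODY_25 indices for the keypoints the corpus tracks
# (head, shoulders, wrists). These names are illustrative labels, not
# identifiers from the corpus itself.
KEYPOINTS = {"nose": 0, "neck": 1, "r_shoulder": 2, "r_wrist": 4,
             "l_shoulder": 5, "l_wrist": 7}

def extract_keypoints(frame_json):
    """Return a list (one entry per detected person) of
    {name: (x, y, confidence)} dicts for the tracked keypoints."""
    people = []
    for person in frame_json.get("people", []):
        # pose_keypoints_2d is a flat list: [x0, y0, c0, x1, y1, c1, ...]
        flat = person["pose_keypoints_2d"]
        coords = {name: tuple(flat[3 * i:3 * i + 3])
                  for name, i in KEYPOINTS.items()}
        people.append(coords)
    return people

# Minimal synthetic frame in OpenPose's JSON output shape
# (25 keypoints x 3 values = 75 numbers per person).
sample = json.loads(json.dumps(
    {"people": [{"pose_keypoints_2d": [float(v) for v in range(75)]}]}))
print(extract_keypoints(sample)[0]["r_wrist"])  # → (12.0, 13.0, 14.0)
```

Per-frame dicts like these can then be stacked over time to plot wrist or head trajectories, which is the kind of visualisation the abstract links to feedback behaviour.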
Original language | English |
---|---|
Title | Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) |
Number of pages | 11 |
Publisher | ELRA and ICCL |
Publication date | May 2024 |
Pages | 11890–11900 |
Status | Published - May 2024 |
Published externally | Yes |
Event | The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation - Lingotto Conference Centre, Torino, Italy. Duration: 20 May 2024 → 25 May 2024 |
Conference
Conference | The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation |
---|---|
Location | Lingotto Conference Centre |
Country/Territory | Italy |
City | Torino |
Period | 20/05/2024 → 25/05/2024 |