Real-Time Conversational Gaze Synthesis for Avatars
Canales, Ryan; Jain, Eakta; Jörg, Sophie (2023): Real-Time Conversational Gaze Synthesis for Avatars, in: Julien Pettré, Barbara Solenthaler, Rachel McDonnell, et al. (Eds.), MIG '23: Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games, New York: Association for Computing Machinery, pp. 1–7, doi: 10.1145/3623264.3624446.
Faculty/Chair:
Author:
Canales, Ryan
Jain, Eakta
Jörg, Sophie
Title of the compilation:
MIG '23: Proceedings of the 16th ACM SIGGRAPH Conference on Motion, Interaction and Games
Editors:
Pettré, Julien
Solenthaler, Barbara
McDonnell, Rachel
Peters, Christopher
Conference:
MIG '23: The 16th ACM SIGGRAPH Conference on Motion, Interaction and Games, November 15–17, 2023, Rennes
Publisher Information:
New York: Association for Computing Machinery
Year of publication:
2023
Article number:
17
Pages:
1–7
ISBN:
979-8-4007-0393-5
Language:
English
Abstract:
Eye movement plays an important role in face-to-face communication. In this work, we present a deep learning approach for synthesizing the eye movements of avatars in two-party conversations and evaluate viewer perception of different types of eye motion. We aim to synthesize believable gaze behavior based on head motion and audio features, as these would typically be available in virtual reality applications. To this end, we captured the head motion, eye motion, and audio of several two-party conversations and trained an RNN-based model to predict where an avatar looks in a two-person conversational scenario. We evaluated our approach with a user study on the perceived quality of the eye animation and compared our method with other eye animation methods. While our model was not rated highest, the model and the user study led to a series of insights on model features, viewer perception, and study design, which we present.
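Note: the abstract describes the approach only at a high level (an RNN mapping head motion and audio to gaze), and the paper entry includes no implementation. Purely as an illustrative sketch of that kind of model, the PyTorch code below shows a GRU that consumes per-frame head-motion and audio features and emits a per-frame gaze prediction, carrying a recurrent state across frames as a real-time application would. All names, layer sizes, feature dimensions, and the gaze parameterization (two eye-rotation angles) are assumptions made here for illustration, not the authors' published architecture.

```python
# Hypothetical sketch (NOT the authors' published model): a GRU that maps
# per-frame head-motion and audio features to a per-frame gaze prediction.
import torch
import torch.nn as nn


class GazeRNN(nn.Module):
    def __init__(self, head_dim=6, audio_dim=26, hidden_dim=128, gaze_dim=2):
        # head_dim:  e.g. head rotation + angular velocity per frame (assumed)
        # audio_dim: e.g. MFCC-style audio features per frame (assumed)
        # gaze_dim:  e.g. horizontal/vertical eye-rotation angles (assumed)
        super().__init__()
        self.rnn = nn.GRU(head_dim + audio_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, gaze_dim)

    def forward(self, head_motion, audio_features, state=None):
        # head_motion:    (batch, frames, head_dim)
        # audio_features: (batch, frames, audio_dim)
        x = torch.cat([head_motion, audio_features], dim=-1)
        hidden, state = self.rnn(x, state)   # hidden: (batch, frames, hidden_dim)
        gaze = self.out(hidden)              # per-frame gaze prediction
        return gaze, state                   # returned state enables streaming


# Streaming use, one frame at a time, as a real-time VR client might run it:
model = GazeRNN()
state = None
frame_head = torch.randn(1, 1, 6)    # one frame of head-motion features
frame_audio = torch.randn(1, 1, 26)  # one frame of audio features
gaze, state = model(frame_head, frame_audio, state)
```

Keeping the GRU state between calls is what makes frame-by-frame (real-time) inference possible; an offline variant would simply pass the whole sequence in one forward call.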
Keywords:
Virtual reality
Perception
Animation
Type:
Conference object
Activation date:
February 6, 2025
Permalink
https://fis.uni-bamberg.de/handle/uniba/106268