Real-Time Conversational Gaze Synthesis for Avatars
Canales, Ryan; Jain, Eakta; Jörg, Sophie (2025): Real-Time Conversational Gaze Synthesis for Avatars, in: Bamberg: Otto-Friedrich-Universität, pp. 1–7.
Faculty/Chair:
Author:
Canales, Ryan; Jain, Eakta; Jörg, Sophie
Publisher Information:
Bamberg: Otto-Friedrich-Universität
Year of publication:
2025
Pages:
1–7
Year of first publication:
2023
Language:
English
Abstract:
Eye movement plays an important role in face-to-face communication. In this work, we present a deep learning approach for synthesizing the eye movements of avatars in two-party conversations and evaluate viewer perception of different types of eye motions. We aim to synthesize believable gaze behavior based on head motions and audio features, as these would typically be available in virtual reality applications. To this end, we captured the head motion, eye motion, and audio of several two-party conversations and trained an RNN-based model to predict where an avatar looks in a two-person conversational scenario. We evaluated our approach with a user study on the perceived quality of the eye animation and compared our method with other eye animation methods. While our model was not rated highest, our model and our user study led to a series of insights on model features, viewer perception, and study design, which we present.
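The abstract does not specify the network layout or feature dimensions. As a rough illustration only, the sketch below shows one plausible formulation in PyTorch of an RNN-based gaze predictor of the kind described: per-frame head-motion and audio features for both conversation partners go into a recurrent layer, which emits a per-frame gaze direction. All names and sizes here (GazeRNN, a 6D head-rotation encoding, 13 MFCC audio coefficients, a 2D yaw/pitch output) are hypothetical, not the authors' published model.

    # Minimal sketch, NOT the paper's implementation; dimensions are assumptions.
    import torch
    import torch.nn as nn

    class GazeRNN(nn.Module):
        def __init__(self, head_dim=6, audio_dim=13, hidden_dim=128, gaze_dim=2):
            super().__init__()
            # Head-motion + audio features for both parties, concatenated per frame.
            in_dim = 2 * (head_dim + audio_dim)
            self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, gaze_dim)

        def forward(self, features, state=None):
            # features: (batch, frames, in_dim) -> per-frame gaze angles.
            hidden, state = self.rnn(features, state)
            return self.out(hidden), state

    model = GazeRNN()
    frames = torch.randn(1, 90, 2 * (6 + 13))  # e.g. 3 s of input at 30 fps
    gaze, _ = model(frames)                    # (1, 90, 2) yaw/pitch per frame

Carrying the GRU state across calls, as the forward signature allows, is what would make such a model usable frame-by-frame in a real-time setting rather than only on whole recorded sequences.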
Keywords:
Virtual reality; Perception; Animation
Type:
Conference object
Activation date:
November 24, 2025
Permalink:
https://fis.uni-bamberg.de/handle/uniba/110683