GENEA Workshop 2022

Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

Official ICMI 2022 Workshop – November 7-11 (Hybrid)

The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2022 aims to bring together researchers who use different methods for non-verbal-behaviour generation and evaluation, and to stimulate discussion on how to improve both the generation methods and the evaluation of the results. We invite all interested researchers to submit a paper related to their work in the area and to participate in the workshop. This is the third installment of the GENEA Workshop; for more information about the 2021 installment, please see the GENEA Workshop 2021 page.

Important dates

July 26, 2022: Abstract deadline
July 29, 2022: Submission deadline
August 29, 2022: Camera-ready deadline (extended from August 19, 2022)
November 7-11, 2022: Workshop (Hybrid)

Call for papers

GENEA 2022 is the third GENEA workshop and an official workshop of ACM ICMI'22. It will be hybrid, taking place both in Bangalore, India, and online. Accepted workshop submissions will be included in the adjunct ACM ICMI proceedings.

Generating non-verbal behaviours, such as gesticulation, facial expressions and gaze, is of great importance for natural interaction with embodied agents such as virtual agents and social robots. At present, behaviour generation is typically powered by rule-based systems, data-driven approaches, and their hybrids. For evaluation, both objective and subjective methods exist, but their application and validity are frequently a point of contention.

This workshop asks, “What will be the behaviour-generation methods of the future? And how can we evaluate these methods using meaningful objective and subjective metrics?” The aim of the workshop is to bring together researchers working on the generation and evaluation of non-verbal behaviours for embodied agents to discuss the future of this field. To kickstart these discussions, we invite all interested researchers to submit a paper for presentation at the workshop.

Paper topics include (but are not limited to) the following:

  • Automated synthesis of facial expressions, gestures, and gaze movements
  • Audio- and music-driven nonverbal behaviour synthesis
  • Closed-loop nonverbal behaviour generation (from perception to action)
  • Nonverbal behaviour synthesis in two-party and group interactions
  • Emotion-driven and stylistic nonverbal behaviour synthesis
  • New datasets related to nonverbal behaviour
  • Believable nonverbal behaviour synthesis using motion-capture and 4D scan data
  • Multi-modal nonverbal behaviour synthesis
  • Interactive/autonomous nonverbal behaviour generation
  • Subjective and objective evaluation methods for nonverbal behaviour synthesis
  • Guidelines for nonverbal behaviours in human-agent interaction

Submission guidelines

Please format submissions for double-blind review according to the ACM conference format.

We will accept long (8 pages) and short (4 pages) paper submissions, both in the double-column ACM conference format. Pages containing only references do not count toward the page limit for either paper type. Submissions should be made in PDF format through OpenReview.

To encourage authors to make their work reproducible and reward the effort that this requires, we have introduced the GENEA Reproducibility Award.

Reproducibility Award

Reproducibility is a cornerstone of the scientific method, and its absence is a serious issue in contemporary research that we want to address at our workshop. To encourage authors to make their papers reproducible, and to reward the effort that reproducibility requires, we are introducing the GENEA Workshop Reproducibility Award. All short and long papers presented at the GENEA Workshop are eligible for this award. Please note that it is the camera-ready version of the paper that will be evaluated for the award.

The award goes to the paper with the greatest degree of reproducibility. The assessment criteria include:
  • ease of reproduction (ideal: everything just works; any code is well documented and we can run it)
  • extent (ideal: all results can be verified)
  • data accessibility (ideal: all data used is publicly available)

Invited speakers

Carlos T. Ishi (RIKEN and ATR)

Carlos T. Ishi received his PhD in engineering from The University of Tokyo, Japan. He joined ATR Intelligent Robotics and Communication Labs in 2005 and has been group leader of the Dept. of Sound Environment Intelligence since 2013. He joined the Guardian Robot Project, RIKEN, in 2020. His research topics include the analysis and processing of dialogue speech and non-verbal behaviours for human-robot interaction.

Judith Holler (Radboud University)

Judith Holler is an Associate Professor and PI at the Donders Institute for Brain, Cognition, & Behaviour (Radboud University) and leader of the research group Communication in Social Interaction at the Max Planck Institute for Psycholinguistics. She has been a Marie Curie Fellow and currently holds an ERC Consolidator grant. Her work focuses on the interplay of speech and visual bodily signals from the hands, head, face, and eye gaze in communicating meaning in interaction. In her scientific approach, she combines analyses of natural language corpora with experimental testing, drawing on methods from a wide range of fields, including gesture studies, linguistics, psycholinguistics, and neuroscience. In her most recent projects, she also combines these methods with cutting-edge tools and techniques, such as virtual reality, mobile eye-tracking, and dual EEG, to further our insights into multimodal communication and coordination in social interaction.

Organising committee

The main contact address of the workshop is:

Workshop organisers

Pieter Wolfert
IDLab, Ghent University - imec

Taras Kucherenko
Electronic Arts (EA)

Gustav Eje Henter
KTH Royal Institute of Technology

Zerrin Yumak
Utrecht University
The Netherlands

Youngwoo Yoon
South Korea

Carla Viegas
Carnegie Mellon University
United States of America