GENEA Workshop 2021

Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2021 aims to bring together researchers who use different methods for non-verbal-behaviour generation and evaluation, and to stimulate discussion on how to improve both the generation methods and the evaluation of their results. We invite all interested researchers to submit a paper related to their work in the area and to participate in the workshop. This is the second installment of the GENEA Workshop; for more information about the 2020 installment, please go here.

The workshop is an official workshop of ACM ICMI’21.


Important dates

Timeline for regular workshop submissions:

12th June, 2021: Abstract deadline
15th June, 2021: Submission deadline
15th July, 2021: Notification of acceptance
3rd August, 2021: Deadline for camera-ready papers
18th-22nd October, 2021: Workshop

Call for papers

Overview

Generating nonverbal behaviours, such as gesticulation, facial expressions and gaze, is of great importance for natural interaction with embodied agents such as virtual agents and social robots. At present, behaviour generation is typically powered by rule-based systems, data-driven approaches, and their hybrids. For evaluation, both objective and subjective methods exist, but their application and validity are frequently a point of contention.

This workshop asks “What will be the behaviour-generation methods of the future? And how can we evaluate these methods using meaningful objective and subjective metrics?” The aim of the workshop is to bring together researchers working on the generation and evaluation of nonverbal behaviours for embodied agents to discuss the future of this field. To kickstart these discussions, we invite all interested researchers to submit a paper for presentation at the workshop.

GENEA 2021 is the second GENEA workshop and an official workshop of ACM ICMI’21, which will take place either in Montreal, Canada, or online. Accepted submissions will be included in the adjunct ACM ICMI proceedings.

Paper topics include (but are not limited to) the following:

  • Automated synthesis of facial expressions, gestures, and gaze movements
  • Audio- and music-driven nonverbal behaviour synthesis
  • Closed-loop nonverbal behaviour generation (from perception to action)
  • Nonverbal behaviour synthesis in two-party and group interactions
  • Emotion-driven and stylistic nonverbal behaviour synthesis
  • New datasets related to nonverbal behaviour
  • Believable nonverbal behaviour synthesis using motion-capture and 4D scan data
  • Multi-modal nonverbal behaviour synthesis
  • Interactive/autonomous nonverbal behaviour generation
  • Subjective and objective evaluation methods for nonverbal behaviour synthesis
  • Guidelines for nonverbal behaviours in human-agent interaction
For papers specifically on the topic of healthcare, whether for generating or understanding nonverbal behaviours, consider submitting to the workshop on Socially-Informed AI for Healthcare, also taking place at ICMI’21. The website of that workshop can be found at: social-ai-for-healthcare.github.io

Submission guidelines

Please format submissions for double-blind review according to the ACM conference format.

We will accept long (8-page) and short (4-page) paper submissions, along with posters (3-page papers), all in the double-column ACM conference format. Pages containing only references do not count toward the page limit for any of the paper types. Submissions should be made in PDF format through OpenReview.



Organising committee

The main contact address of the workshop is: genea-contact@googlegroups.com.

Workshop organisers

Taras Kucherenko
KTH Royal Institute of Technology
Sweden

Zerrin Yumak
Utrecht University
The Netherlands

Gustav Eje Henter
KTH Royal Institute of Technology
Sweden

Pieter Wolfert
IDLab, Ghent University - imec
Belgium

Youngwoo Yoon
ETRI & KAIST
South Korea

Patrik Jonell
KTH Royal Institute of Technology
Sweden