GENEA Challenge 2023

Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

The GENEA Challenge 2023 on speech-driven gesture generation aims to bring together researchers who use different methods for non-verbal-behaviour generation and evaluation, and to stimulate discussion on how to improve both the generation methods and the evaluation of the results.

This will be the third installment of the GENEA Challenge. You can read more about the previous GENEA Challenges here.

This challenge is supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation, with an in-kind contribution from SEED, the R&D department of Electronic Arts (EA).

Important dates

April 1, 2023
Registration opens
May 2, 2023
Training dataset released to challenge participants
June 7, 2023
Test inputs released to participants
June 14, 2023
Deadline for participants to submit generated motion
July 3, 2023
Release of crowdsourced evaluation results to participants
July 14, 2023
Deadline for participants to submit system-description papers
August 4, 2023
Paper notification
August 11, 2023
Deadline for camera-ready papers
October 9 - 13, 2023
Challenge presentations at ICMI 2023

Call for participation

The state of the art in co-speech gesture generation is difficult to assess, since every research group tends to use its own data, embodiment, and evaluation methodology. To better understand and compare methods for gesture generation and evaluation, we are continuing the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge, wherein different gesture-generation approaches are evaluated side by side in a large user study. The 2023 challenge is a Multimodal Grand Challenge for ICMI 2023 and a follow-up to the previous editions of the GENEA Challenge.

We invite researchers in academia and industry working on any form of corpus-based generation of gesticulation and non-verbal behaviour to submit entries to the challenge, whether their method is rule-based or driven by machine learning. Participants are provided with a large, common dataset of speech (audio plus aligned text transcriptions) and 3D motion for developing their systems, and then use these systems to generate motion for given test inputs. The generated motion clips are rendered onto a common virtual agent and evaluated for aspects such as motion quality and appropriateness in a large-scale crowdsourced user study.

The results of the challenge will be presented in hybrid format at the 4th GENEA Workshop at ICMI 2023, together with individual papers describing each participating system. All accepted challenge papers will be published in the main ACM ICMI 2023 proceedings.


Challenge registration is open!

The rules of the GENEA Challenge 2023 can be found in this document. Please read them before you proceed to the registration.

Once you have read the rules, please use this sign-up form to register your team.

Reproducibility Award

Reproducibility is a cornerstone of the scientific method. Lack of reproducibility is a serious issue in contemporary research, which we want to address at our workshop. To encourage authors to make their papers reproducible, and to reward the effort that reproducibility requires, we are introducing the GENEA Workshop Reproducibility Award. All short and long papers presented at the GENEA Workshop will be eligible for this award. Please note that it is the camera-ready version of the paper that will be evaluated for the award.

The award goes to the paper with the greatest degree of reproducibility. The assessment criteria include:
  • ease of reproduction (ideal: it just works; if there is code, it is well documented and we can run it)
  • extent (ideal: all results can be verified)
  • data accessibility (ideal: all data used is publicly available)

Organising committee

The main contact address of the workshop is:

Challenge organisers

Taras Kucherenko
Electronic Arts (EA)

Youngwoo Yoon
ETRI, South Korea

Rajmund Nagy
KTH Royal Institute of Technology

Jieyeon Woo
Sorbonne University

Gustav Eje Henter
KTH Royal Institute of Technology