GENEA Challenge 2023

Generation and Evaluation of Non-verbal Behaviour for Embodied Agents

★ Latest news ★

The GENEA team is preparing an online leaderboard for benchmarking gesture-generation models with human evaluation. This project is the evolution of the GENEA Challenge, so stay tuned!
If you would like to stay updated on major developments in the leaderboard project, sign up with your e-mail address at this link!

Challenge results now available! Results and materials from the Challenge can be found on the main GENEA Challenge 2023 results page.


The GENEA Challenge 2023 on speech-driven gesture generation aims to bring together researchers who use different methods for non-verbal-behaviour generation and evaluation, and to stimulate discussion on how to improve both the generation methods and the evaluation of the results.

This will be the third installment of the GENEA Challenge. You can read more about the previous GENEA Challenges here.

This challenge is supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation, with an in-kind contribution from SEED, the R&D department of Electronic Arts (EA).



Important dates

April 1, 2023
Registration opens
May 2, 2023
Training dataset released to challenge participants
June 7, 2023
Test inputs released to participants
June 14, 2023
Deadline for participants to submit generated motion
July 3, 2023
Release of crowdsourced evaluation results to participants
July 14, 2023
Deadline for participants to submit system-description papers
August 4, 2023
Paper notification
August 11, 2023
Deadline for camera-ready papers
October 9, 2023
Challenge presentations at ICMI 2023

GENEA Challenge programme

All times are in Paris local time (UTC+2)

The Challenge presentations will take place at ICMI in Paris on October 9th.

13:35 - 13:55
Opening statement and a presentation of the GENEA Challenge 2023 by the organisers
13:55 - 14:05
[video presentation] Co-Speech Gesture Generation via Audio and Text Feature Engineering by Geunmo Kim, Jaewoong Yoo, Hyedong Jung [OpenReview]
14:05 - 14:15
[video presentation] DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models by Weiyu Zhao, Liangxiao Hu, Shengping Zhang [OpenReview]
14:15 - 14:25
[video presentation] The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation by Vladislav Korzun, Anna Beloborodova, Arkady Ilin [OpenReview]
14:25 - 14:40
The KCL-SAIR team's entry to the GENEA Challenge 2023: Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker by Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Celiktutan [OpenReview]
14:40 - 14:55
Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment by Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, Shuwu Zhang [OpenReview]
14:55 - 15:20
Break with coffee
15:20 - 15:35
The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 by Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai [OpenReview]
15:35 - 15:50
Gesture Generation with Diffusion Models Aided by Speech Activity Information by Rodolfo Luis Tonoli, Leonardo Boulitreau de Menezes Martins Marques, Lucas Hideki Ueda, Paula Paro Dornhofer Costa [OpenReview]
15:50 - 16:05
Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation by Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow [OpenReview]
16:05 - 16:20
The KU-ISPL entry to the GENEA Challenge 2023: A Diffusion Model for Co-speech Gesture Generation by Gwantae Kim, Yuanming Li, Hanseok Ko [OpenReview]
16:20 - 16:45
Break without coffee
16:45 - 17:00
FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation by Leon Harz, Hendric Voß, Stefan Kopp [OpenReview]
17:00 - 17:15
The UEA Digital Humans entry to the GENEA Challenge 2023 by Jonathan Windle, Iain Matthews, Ben Milner, Sarah Taylor [OpenReview]
17:15 - 17:25
[video presentation] Discrete Diffusion for Co-Speech Gesture Synthesis by Ankur Chemburkar, Shuhong Lu, Andrew Feng [OpenReview]
17:25 - 17:30
Closing remarks
17:30
End of event

Challenge papers

The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation

Vladislav Korzun, Anna Beloborodova, Arkady Ilin [OpenReview]


Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment

Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, Shuwu Zhang [OpenReview]


Diffusion-based co-speech gesture generation using joint text and audio representation

Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow [OpenReview]


The UEA Digital Humans entry to the GENEA Challenge 2023

Jonathan Windle, Iain Matthews, Ben Milner, Sarah Taylor [OpenReview]


FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation

Leon Harz, Hendric Voß, Stefan Kopp [OpenReview]


The DiffuseStyleGesture+ entry to the GENEA Challenge 2023

Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai [OpenReview]


Discrete Diffusion for Co-Speech Gesture Synthesis

Ankur Chemburkar, Shuhong Lu, Andrew Feng [OpenReview]


The KCL-SAIR team's entry to the GENEA Challenge 2023: Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker

Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Celiktutan [OpenReview]


Gesture Generation with Diffusion Models Aided by Speech Activity Information

Rodolfo Luis Tonoli, Leonardo Boulitreau de Menezes Martins Marques, Lucas Hideki Ueda, Paula Paro Dornhofer Costa [OpenReview]


Co-Speech Gesture Generation via Audio and Text Feature Engineering

Geunmo Kim, Jaewoong Yoo, Hyedong Jung [OpenReview]


DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models

Weiyu Zhao, Liangxiao Hu, Shengping Zhang [OpenReview]


The KU-ISPL entry to the GENEA Challenge 2023: A Diffusion Model for Co-speech Gesture Generation

Gwantae Kim, Yuanming Li, Hanseok Ko [OpenReview]


Call for participation

The state of the art in co-speech gesture generation is difficult to assess, since every research group tends to use its own data, embodiment, and evaluation methodology. To better understand and compare methods for gesture generation and evaluation, we are continuing the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge, wherein different gesture-generation approaches are evaluated side by side in a large user study. The 2023 challenge is a Multimodal Grand Challenge for ICMI 2023 and follows the previous editions of the GENEA Challenge, arranged in 2020 and 2022.

We invite researchers in academia and industry working on any form of corpus-based generation of gesticulation and non-verbal behaviour to submit entries to the challenge, whether their method is rule-based or driven by machine learning. Participants are provided with a large, common dataset of speech (audio plus aligned text transcriptions) and 3D motion to develop their systems, and then use these systems to generate motion for given test inputs. The generated motion clips are rendered onto a common virtual agent and evaluated for aspects such as motion quality and appropriateness in a large-scale crowdsourced user study.

The results of the challenge will be presented in a hybrid format at the 4th GENEA Workshop at ICMI 2023, together with individual papers describing each participating system. All accepted challenge papers will be published in the main ACM ICMI 2023 proceedings.



Reproducibility Award

Reproducibility is a cornerstone of the scientific method. Lack of reproducibility is a serious issue in contemporary research that we want to address at our workshop. To encourage authors to make their papers reproducible, and to reward the effort that reproducibility requires, we are introducing the GENEA Workshop Reproducibility Award. All short and long papers presented at the GENEA Workshop are eligible for this award. Please note that it is the camera-ready version of the paper that will be evaluated for the award.

The award goes to the paper with the greatest degree of reproducibility. The assessment criteria include:
  • ease of reproduction (ideal: everything just works; if code is provided, it is well documented and we can run it)
  • extent (ideal: all results can be verified)
  • data accessibility (ideal: all data used is publicly available)
This year's award goes to: The DiffuseStyleGesture+ entry to the GENEA Challenge 2023
by Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai.


Organising committee

The main contact address of the workshop is: genea-challenge@googlegroups.com.

Challenge organisers

Taras Kucherenko
Electronic Arts (EA)
Sweden

Youngwoo Yoon
ETRI
South Korea

Rajmund Nagy
KTH Royal Institute of Technology
Sweden

Jieyeon Woo
Sorbonne University
France

Gustav Eje Henter
KTH Royal Institute of Technology
Sweden