Who are we?

Our international team includes leading experts on the evaluation of gesture-generation models, as well as seasoned model developers and research engineers with experience deploying gesture-generation models in practice. Together, we are developing the leaderboard and its associated tooling, and we will be responsible for managing and funding year-round crowdsourced human evaluations.

Our cumulative experience covers all major aspects of gesture-generation research, including:

  • crowdsourced evaluation: e.g., organising the GENEA Challenges in 2020–2024 (the leading large-scale human evaluation efforts in gesture generation to date!)
  • data collection: e.g., TED Gesture dataset
  • model development: e.g., Gesticulator (ICMI 2020 Best Paper), Gesture Generation from Trimodal Context (SIGGRAPH Asia 2020), StyleGestures (EUROGRAPHICS 2020 Honourable Mention), Listen, Denoise, Action! (SIGGRAPH 2023), AQ-GT (ICMI 2023 Best Paper)
  • visualisation tooling: e.g., Blender, Maya, and Unreal Engine development

Organizers

Youngwoo Yoon

Principal Researcher

ETRI, South Korea

Taras Kucherenko

Research Scientist

Electronic Arts - SEED, Sweden

Gustav Eje Henter

Assistant Professor

KTH Royal Institute of Technology

Head of Research, motorica.ai, Sweden

Rajmund Nagy

Doctoral Student

KTH Royal Institute of Technology

Sweden

Hendric Voß

Doctoral Student

Bielefeld University, Germany

Thanh Hoang-Minh

MSc Student

VNUHCM - University of Science, Vietnam

Teodor Nikolov

Research Engineer

motorica.ai, Sweden

Mihail Tsakov

Unreal Engine Developer

Liahim, Netherlands

Scientific advisors

While the organising team handles day-to-day operations, we are fortunate to be advised on key strategic decisions and the leaderboard methodology by three leading experts in nonverbal communication, visual perception, human-agent interaction, and motion capture:

Rachel McDonnell

Trinity College Dublin, Ireland

Michael Neff

University of California, Davis, USA

Stefan Kopp

Bielefeld University, Germany