Sign languages are spatio-temporal languages and constitute a key form of communication for Deaf communities. Recent progress in fine-grained gesture and action classification, machine translation and image captioning points to the possibility of automatic sign language understanding becoming a reality. The study of isolated sign recognition has a rich history in the computer vision community, stretching back over thirty years. Thanks to the recent availability of larger datasets, researchers are now focusing on continuous sign language recognition, sentence alignment to continuous signing, and sign language translation. Advances in generative networks are also enabling progress on sign language production, where written language is converted into sign language video.

The "Sign Language Recognition, Translation & Production" (SLRTP) Workshop brings together researchers working on different aspects of vision-based sign language research (including body posture, hands and face) and sign language linguists. The focus of this workshop is to broaden participation in sign language research from the computer vision community. We hope to identify important future research directions and to cultivate collaborations. The workshop will consist of invited talks and a challenge with three tracks: individual sign recognition; English sentence to sign sequence alignment; and sign spotting.

Workshop languages/accessibility: The languages of this workshop are English, British Sign Language (BSL), and International Sign (IS). Interpretation between BSL/English and IS/English will be provided, as will English subtitles, for all pre-recorded and live Q&A sessions. If you have questions about this, please contact us.


See ECCV22_SLRTP_Challenge.pdf for challenge descriptions, instructions, terms and conditions.

Note: Participants are encouraged to request access to the dataset(s) used for the challenge as soon as possible, since it may take several days to obtain permission to download.

The challenge has three tracks. The first track is (1) sign recognition from co-articulated signing over a large number of classes: the task is to classify individual signs in continuous signing sequences, given their approximate temporal extent. This should encourage discussion on how best to (i) exploit complementary signals across different modalities and articulators, (ii) model temporal information, and (iii) account for long-tailed class distributions.
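The long-tailed aspect in (iii) is one reason plain top-1 accuracy can mislead: frequent signs dominate the score. A common alternative is per-class (balanced) accuracy, sketched below. Note this is only an illustrative sketch; the official metrics are those defined in the challenge description PDF.

```python
from collections import defaultdict

def balanced_accuracy(predictions, labels):
    """Mean of per-class accuracies: each sign class contributes equally,
    so rare (tail) classes are not drowned out by frequent ones."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label in zip(predictions, labels):
        total[label] += 1
        if pred == label:
            correct[label] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# The frequent class "hello" is predicted perfectly, the rare class
# "help" not at all: plain accuracy is 2/3, balanced accuracy is 0.5.
print(balanced_accuracy(["hello", "hello", "thanks"],
                        ["hello", "hello", "help"]))  # 0.5
```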

The second track is (2) alignment of spoken language sentences to continuous signing: the task is to determine the temporal extent of a signing sequence, given its English translation. This is a key step in automatically constructing a parallel corpus for sign language translation, and should encourage discussion on how best to model video and text jointly.
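Alignment quality between a predicted temporal extent and the ground-truth one is commonly scored with temporal intersection-over-union (IoU). A minimal sketch follows; the official challenge metric is specified in the challenge PDF, so treat this as an assumption-laden illustration:

```python
def temporal_iou(pred, gt):
    """IoU of two temporal segments, each given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# 2 s of overlap over an 8 s union.
print(temporal_iou((2.0, 6.0), (4.0, 10.0)))  # 0.25
```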

The final track is (3) sign spotting: here the task is to identify whether and when a sign is performed in a given window of continuous signing. Sign spotting has a range of applications, including indexing of signing content to enable efficient search and “intelligent fast-forward” to topics of interest, automatic sign language dataset construction, and “wake-word” recognition for signers.
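A spotting prediction can be thought of as a (sign, time) pair, judged correct if the time falls inside an annotated window for that sign in the sequence. The sketch below illustrates this; the function name, annotation format and tolerance are illustrative assumptions, not the official evaluation protocol.

```python
def is_correct_spotting(pred_sign, pred_time, annotations, tolerance=0.0):
    """annotations: list of (sign, start, end) windows, in seconds, for one
    continuous-signing sequence. A prediction is correct if its timestamp
    lands inside a window for the predicted sign (within a tolerance)."""
    return any(
        sign == pred_sign and start - tolerance <= pred_time <= end + tolerance
        for sign, start, end in annotations
    )

windows = [("book", 1.2, 1.9), ("read", 3.0, 3.6)]
print(is_correct_spotting("book", 1.5, windows))  # True
print(is_correct_spotting("read", 1.5, windows))  # False
```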

Teams that submit their results to the challenges will also be required to submit a description of their systems. At the workshop, we will invite presentations from the challenge winners.


    Challenge development phase begins: August 5, 2022
    Challenge test phase begins: September 12, 2022
    Challenge closes: October 7, 2022
    Winners announced: October 10, 2022
    Winners' 10-min pre-recorded videos due: October 17, 2022
    Workshop date: October 24, 2022


Melissa Malzkuhn


Motion Light Lab

Mark Wheatley


Executive Director
European Union of the Deaf

Sarah Ebling


Senior researcher
University of Zurich

Adam Munder


Founder of OmniBridge

Tentative Schedule

Date: Monday 24th October

Time: 14:00-18:00 GMT+3 (Israel Time); 12:00-16:00 (London Time)

The workshop is fully virtual; the schedule mixes pre-recorded videos and live interaction.

Access to the virtual platform will be available to ECCV'22 attendees registered with a workshop pass.

  • 1400

               Opening Remarks

  • 1410

               Challenges Discussion

  • 1435

               Invited talk by Sarah Ebling:
               Developing Sign Language Technologies for the Users: Insights from an NLP Perspective

  • 1505

               Invited talk by Mark Wheatley:
               Co-creation in machine translation projects: the role of deaf organisations

  • 1530

                 Comments on Sign Language Data by Bencie Woll

  • 1535

                 Coffee Break

  • 1550

               Invited talk by Melissa Malzkuhn:
               Signing Avatars: Fluency, Comprehension, Acceptance

  • 1620

               Invited talk by Adam Munder:
               Enabling Inclusive Communication Between Deaf and Hearing with OmniBridge AI Translation

  • 1650

               Closing Remarks


Liliane Momeni


PhD Student
University of Oxford

Gul Varol


Assistant Professor
École des Ponts ParisTech

Samuel Albanie


Assistant Professor
University of Cambridge

Hannah Bull


PhD Student
University of Paris-Saclay

Prajwal KR


PhD Student
University of Oxford

Ben Saunders


PhD Student
University of Surrey

Necati Cihan Camgoz


Research Fellow
University of Surrey

Richard Bowden


University of Surrey

Andrew Zisserman


University of Oxford

Bencie Woll



Latest news

  • Aug 5

    Challenges begin.

  • Aug 2

    The tentative schedule is announced. More updates coming soon.

  • April 7

    Workshop website is up! SLRTP'22 will be held as a virtual event in conjunction with ECCV'22 as part of the Sign Language Understanding Workshop. See the previous SLRTP'20 edition at