
Who Gave AI Sign Language Approval?

Deaf people spent centuries building language, education, careers, and equal communication. AI companies should not be allowed to reduce that to a one-way system and call it progress.


This is the question I keep returning to: who gave AI sign language approval?


That question matters because AI sign language is not just a technical experiment. It touches language, identity, culture, education, employment, and human dignity. It affects Deaf children and adults whose lives have been shaped by long struggles for recognition, access, and full participation in society.


When people talk about AI in sign language, they often speak as if technology will automatically improve communication. But that assumption hides a deeper problem. It risks treating human signers as secondary — as raw material to be captured, copied, anonymised, or replaced — rather than as people whose language and lived experience deserve protection.


A long history should not be reduced to a one-way system


I have been thinking about the history of sign language since the 1600s and the generations of Deaf people who fought for education, careers, recognition, and the right to communicate without unnecessary barriers.


That history was not built by machines.


It was built by Deaf people themselves — through language, community, resilience, and achievement.


That is exactly why AI companies should not be allowed to reduce history, language, and human communication to a one-way system and then call it progress.


Real sign language is not one-way communication.


Digital poster in dark blue tones with bold white and gold lettering. It shows two people facing each other in conversation, with one person visibly signing. The poster explains that sign language is not one-way communication and includes facial expression, body, rhythm, grammar, context, and culture.

It is living communication.

It is a shared understanding.

It includes face, body, rhythm, grammar, space, context, and cultural nuance.


AI should not be allowed to erase that complexity and then present itself as the solution.


Why standards matter more than hype


As a freelancer, and after important meetings in the USA and Europe, I have become more convinced that this field does not simply need more discussion, networking, or marketing.


It needs authority.


There are already many organisations and associations in this space, and many of them do valuable work. But what is still missing is a body with real power — something closer to a College of Sign Language Standards for AI.


By that, I mean a body with:


  • professional standards

  • ethical oversight

  • linguistic review

  • Deaf leadership

  • safeguarding responsibilities

  • legal accountability

  • the power to censure or stop companies that fail


Without that kind of structure, AI companies can too easily rely on self-declared “readiness,” polished demos, and weak oversight.


Digital poster in dark blue tones with bold white and gold lettering. At the centre is a podium with a shield logo and the words “College of Sign Language Standards.” The poster says that without serious standards there are serious risks, listing wrong sign context, incomplete videos, privacy and consent issues, and false claims that AI is ready.

Why sign language AI is different from voice AI


Voice AI and sign language AI are not the same.


With voice-to-text or voice-to-audio systems, users can often remain anonymous, and the outputs are easier to measure in narrow technical terms.


But sign language AI carries different and more serious risks.


A signed video is not just data. It carries identity, language, culture, and personal style.


That means the risks go beyond simple technical error.


The concerns I keep seeing include:


  • wrong sign context

  • cut or incomplete videos

  • demos used as proof instead of real evidence

  • limited real-world testing

  • claims that systems are “ready” without proper validation

  • no clear review by qualified linguists, Deaf experts, or universities


Privacy, authorship, and consent


There is also a serious privacy issue.


Even when a person’s name is hidden, they may still be identifiable through their face, hands, arms, body shift, movement, and signing style.


I have seen cases where AI signing appeared to come from the same human signer because the face, body shift, and nuanced movement patterns were still recognisable.


These are not minor details. They can reflect identity, authorship, and lived language.


Some firms may try to avoid obvious recognition by framing or capturing the signer from head to waist, including the arms and body shift, and then using that movement to create a digital signing avatar with the same human patterns.


That raises serious questions:


Who gave consent?

Who owns the movement?

Who is protected?

Who is being copied?

Who benefits?


These are not optional questions. They are central ethical questions.


Digital poster in dark blue tones with bold white and gold lettering. It shows a blurred person signing, with a digital-style overlay suggesting scanning or tracking. The poster warns that AI sign language could exploit, copy, or misuse signed language and asks who verifies the systems and who protects Deaf communities.

LLMs and RAG do not solve the problem


Another major concern is the use of large language models (LLMs) and retrieval-augmented generation (RAG) in sign language AI.


These systems may sound sophisticated, but they do not solve the core linguistic and cultural problems involved.


They can still produce false confidence, wrong context, inaccurate meaning, and convincing outputs built on poor-quality source material.


If the source data is weak, incomplete, biased, badly translated, or not properly reviewed by Deaf experts and linguists, then the system can simply scale those errors.


RAG does not guarantee truth.

LLMs do not guarantee understanding.


And in sign language, understanding is not only verbal or textual. Meaning is carried through facial expression, body shift, timing, grammar, space, and cultural nuance.


That is why these systems should never be treated as proof that AI sign language is safe, accurate, or ready.


Data retention and system risk


Another concern is temporary data storage.


If data remains on servers or in the cloud for up to 30 minutes, other connected systems may be able to extract words or phrases from speech and convert them into sign language outputs, for example in public settings such as transport stations.


Even short-term retention can create risks around privacy, control, access, and misuse.


When sign language data can reveal identity visually, these risks must be taken seriously.


The real question


So the question remains:


Who gave AI sign language approval?

Who verifies its standards?

Who checks its linguistic accuracy?

Who confirms it is ethical?

Who makes sure it is safe for Deaf communities?

And if it fails, who has the authority to stop it?


At the moment, those answers are too unclear.


That is the problem.


What needs to happen next

If we do not yet have a true College of Sign Language Standards for AI, then we need to build one.


Not another logo.

Not another panel discussion.

Not another claim of “innovation.”


A real standard-setting body.

A real safeguarding body.

A real authority.


Deaf people have spent centuries building language, education, careers, and equal communication opportunities.


AI companies should not be allowed to reduce that to a one-way system and call it progress.


Tim Scannell
