🔥 AI in Sign Language: Why Are We Ignoring What Deaf People Actually Asked For?
- Tim Scannell
- Nov 27
- 3 min read

A black-and-white linocut-style illustration shows a Deaf woman signing toward a simplified figure representing AI. The woman forms the “I-love-you / communication” handshape while her other hand rests open in front of her. She is drawn with realistic detail and a calm, focused expression. Opposite her, the AI figure is abstract and faceless, with a round head, glowing lines like rays around it, and the large letters “AI” printed on its chest. The contrast highlights the human signer’s depth and identity versus the simplified, symbolic technology. The artwork suggests communication, connection, and the need for AI to respect and understand real sign language.
Let’s be clear:
**I am NOT against AI.
I am against AI being marketed dishonestly.**
For years, the Deaf community’s number one request has been:
🧏‍♀️ “Build AI that understands sign language first → and converts it into text or audio.”
Real communication. Real independence. Real equality.
But instead of building sign-first technology, the AI industry keeps focusing on:
❌ Text → avatar “signing”
❌ Generative video based on English text
❌ Tools labelled “real-time” even though they are NOT live
Why?
Because it’s easier. Because it demos beautifully. Because it looks inclusive. Because it sells.
But “easy to build” is not the same as accessible.
✋ **And here’s the deeper issue: Text → Sign tools are everywhere.
But where is Sign → Text?
Sign → Audio?
Sign → Sign (cross-language)?**
AI companies keep producing text → sign avatars for instructions and scripted content. But Deaf people also need tools that support our side of the communication, not just the hearing side.
The technology missing — and truly needed — is:
✔ Sign → Text
✔ Sign → Audio
✔ Sign → Sign (cross-language) with voice output
A tool that lets Deaf people sign naturally, checks the meaning, and expresses it in spoken English for human staff.
Not to replace interpreters, but to double-check meaning, prevent misunderstandings, and support everyday communication safely.
It’s not “parroting.”
It’s a human-safety confirmation tool (BSL legal language). A real bridge between Deaf consumers and human staff.
This is what real accessibility looks like.
🔍 The technology we actually need IS possible
Emerging research, including models such as SignGemma, shows:
✔ AI can recognise sign language from video
✔ Accuracy is improving
✔ Sign → Text is becoming feasible
✔ Sign → Audio is no longer science fiction
This is what Deaf people asked for. This is where equality and independence live.
So why isn’t the industry prioritising it?
⚠️ Meanwhile, companies still promote “real-time translation” that is NOT real-time
No current AI can:
❌ interpret live speech
❌ understand tannoy announcements
❌ support emergencies
❌ handle live conversation
❌ produce accurate sign-language grammar on the spot
So why use the phrase “real-time translation”?
Who benefits from this wording — and who gets left behind?
✋ Are we repeating 1880, digitally?
In 1880, the Milan Conference banned sign language, with no Deaf people present.
Hearing authorities decided the future of Deaf communication without us.
Today, some AI companies are designing the “future of sign language technology” the same way:
**Without Deaf leadership.
Without Deaf input. Without Deaf ownership.**
This is becoming a Digital Milan.
And we cannot let history repeat itself.
🧏‍♀️ Sign-first AI would allow:
✔ Real Deaf communication
✔ Equality at work
✔ Safer healthcare
✔ Emergency access
✔ Conversations with human society
✔ Access in real environments, not just scripted ones
✔ Independence without relying on English
This is true inclusion, not the illusion of inclusion.
📣 A message to AI teams, investors, policymakers, transport operators, and decision-makers:
1️⃣ Why invest in the easy technology instead of the right technology?
2️⃣ Why ignore sign → text/audio when Deaf people clearly asked for it?
3️⃣ Why avoid sign → sign (cross-language) with voice, which would help real interaction?
4️⃣ Why market text → avatar tools as “real-time” when they are NOT live?
5️⃣ Where are the safeguarding standards for public use?
6️⃣ When will Deaf people lead decisions about tools designed for Deaf people?
**Nothing about us, without us.
Nothing for show — only solutions with substance. Tell the truth. Build the future the right way.**
Human-first.
Sign-first.
Truth-first.


