What Today’s Sign-Language AI Actually Does
- Tim Scannell
I reviewed current sign-language AI tools to understand what they can actually do, what they cannot, and where they create risks for Deaf users and organisations.
The main issue
A wide range of AI tools are being described as "sign-language translation", even though they do very different things. This creates confusion, unrealistic expectations, and poor accessibility decisions.
Three types of sign-language AI tools
Across products, demos, and research projects, three categories appear again and again:
1️⃣ Sign or motion recognition AI that detects hand or body movements.
2️⃣ Caption workflow tools that help humans create subtitles faster, but do not translate sign language.
3️⃣ Full sign-language translation AI: systems that claim to understand and translate sign language in both directions.
These three are not the same, even though they are often discussed as if they are.
What works today
Sign and gesture recognition works well and can be built quickly. These systems:
- Can be created in hours or days
- Can be used in real products
- Detect movement and patterns
However, they do not understand sign-language meaning.
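To show why this category is quick to build, here is a minimal sketch of gesture recognition as pattern matching. It assumes a hand-tracking library (such as MediaPipe) has already produced landmark coordinates for each frame, and the stored examples and labels are hypothetical. It matches shapes to labels; it has no model of sign-language grammar or meaning.

```python
import numpy as np

# Minimal sketch: gesture recognition as nearest-neighbour pattern matching.
# Assumes a hand-tracking library (e.g. MediaPipe) has already produced
# (x, y, z) hand landmarks per frame; the example poses and labels are hypothetical.

def flatten(landmarks):
    """Turn a list of (x, y, z) landmark tuples into one feature vector."""
    return np.asarray(landmarks, dtype=float).ravel()

class GestureRecognizer:
    """Matches an incoming hand pose against stored example poses."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, label)

    def add_example(self, landmarks, label):
        self.examples.append((flatten(landmarks), label))

    def predict(self, landmarks):
        query = flatten(landmarks)
        # Pick the label of the closest stored pose (Euclidean distance).
        distances = [np.linalg.norm(query - vec) for vec, _ in self.examples]
        return self.examples[int(np.argmin(distances))][1]

# The recogniser detects shapes, not meaning: it returns whichever stored
# label is geometrically closest, with no model of sign-language grammar.
```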

[Infographic: "What Today's Sign-Language AI Actually Does". Summary: detection works, caption tools help, translation is not ready.]
Some caption tools use sign detection only to help with timing. A human still writes and edits the captions. These tools do not translate sign language.
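As a rough illustration of that kind of timing help, the sketch below turns per-frame motion scores into empty caption slots; the thresholds are made up for this example, and the caption text is still written by a person.

```python
# Illustrative sketch: using motion detection only to propose caption timings.
# Thresholds and the per-frame "motion score" are hypothetical; a human still
# writes and edits every caption.

def propose_caption_slots(motion_scores, fps=25.0, threshold=0.2, min_frames=5):
    """Turn per-frame motion scores into (start_sec, end_sec) caption slots."""
    slots, start = [], None
    for i, score in enumerate(motion_scores):
        if score >= threshold and start is None:
            start = i                      # motion begins: open a slot
        elif score < threshold and start is not None:
            if i - start >= min_frames:    # ignore very short blips
                slots.append((start / fps, i / fps))
            start = None
    if start is not None and len(motion_scores) - start >= min_frames:
        slots.append((start / fps, len(motion_scores) / fps))
    # Each slot is an empty caption cue awaiting human-written text.
    return [{"start": s, "end": e, "text": ""} for s, e in slots]
```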
What looks impressive but is limited
Some AI systems can generate realistic sign-language videos from text or speech. These systems:
- Look very polished
- Use pre-made sign assets
- Are difficult to customise
- Do not understand signed input
They show signs — but they do not understand language.
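A minimal sketch of one common design, assumed here for illustration: map text to a gloss sequence and concatenate pre-recorded sign clips. The gloss dictionary and file names are hypothetical. Note that nothing in this pipeline can read signed input.

```python
# Illustrative sketch of avatar/video "signing" built from pre-made assets.
# The gloss dictionary and clip filenames are hypothetical; real systems are
# more sophisticated, but the core limitation is the same: output is assembled
# from canned signs, and nothing here can read signed input.

GLOSS_CLIPS = {
    "HELLO": "assets/hello.mp4",
    "WELCOME": "assets/welcome.mp4",
    "THANK-YOU": "assets/thank_you.mp4",
}

def text_to_gloss(text):
    """Naive word-to-gloss mapping. Sign languages have their own grammar,
    so word-by-word substitution is not translation."""
    return [word.upper() for word in text.split() if word.upper() in GLOSS_CLIPS]

def build_playlist(text):
    """Return the ordered list of pre-recorded clips to concatenate."""
    return [GLOSS_CLIPS[g] for g in text_to_gloss(text)]

print(build_playlist("hello welcome"))   # ['assets/hello.mp4', 'assets/welcome.mp4']
```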
AR glasses and captions
Smart glasses can show real-time captions from speech, identify speakers, and reduce background noise. This is helpful technology — but it is speech-to-text, not sign-language AI.
Research projects
Large projects like SignGPT are research programmes, not finished products. They aim to translate between sign language and written or spoken language, but this work is still in early stages and will take years.
A critical due diligence point
When organisations fund or adopt sign-language AI, it is essential to understand exactly what is being proposed. In particular, clarify whether a grant or project is aiming for:
- One-way translation (e.g. text or speech → signed output), or
- Bi-directional translation (signed input ↔ text or speech)
These are very different technical challenges, with very different timelines, risks, and accessibility implications.
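One way to make that gap concrete during due diligence is to list the components each scope implies. The sketch below is hypothetical and illustrative, not a product specification; the extra components under the bi-directional scope are where most of the unsolved research sits.

```python
# Hypothetical sketch of the scoping question: the components each kind of
# project implies. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ProjectScope:
    name: str
    components: list = field(default_factory=list)

ONE_WAY = ProjectScope(
    "One-way (text/speech -> signed output)",
    ["text or speech input", "translation/gloss model", "sign generation (avatar or video)"],
)

BI_DIRECTIONAL = ProjectScope(
    "Bi-directional (signed input <-> text/speech)",
    ONE_WAY.components + [
        "continuous sign recognition from video",  # far harder than isolated gestures
        "sign-language understanding (grammar, use of space, non-manual features)",
    ],
)

for scope in (ONE_WAY, BI_DIRECTIONAL):
    print(scope.name, "requires:", ", ".join(scope.components))
```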
People still matter
Hearing people can and should learn sign language, rather than relying on technology as a shortcut. Deaf people are not just “users” of these systems — they are language experts and should be the ones teaching, guiding, and shaping AI, when AI is used at all. Without Deaf leadership, sign-language AI risks encoding errors, bias, and misunderstandings at scale.
The biggest risk
The biggest risk today is how these tools are described, not the technology itself.
Calling detection tools, caption helpers, or video generators “sign-language translation”:
- Misleads Deaf users
- Confuses organisations buying the technology
- Creates false expectations
- Raises ethical and accountability concerns
The bottom line
✔ Sign and motion detection works
✔ Caption tools help humans work faster
✖ Full sign-language translation is not ready yet
Being honest about these differences is essential for trust, accessibility, and Deaf leadership.
