After SLxAI: Sign Language AI Needs Clarity Before Confidence

Recent public discussions after SLxAI have shown that Sign Language AI is no longer a quiet technical topic. It is now a public conversation involving Deaf communities, researchers, companies, universities, broadcasters, accessibility professionals, and AI developers.


I did not attend SLxAI myself, so this is not a review of the conference. My reflection is based on public posts, comments, media coverage, and discussions shared by people who attended or responded afterwards.


That public conversation is still important.


It demonstrates that Sign Language AI is advancing rapidly, but it also reveals that the field still lacks a shared understanding.


People are using the same words to describe very different technologies. “AI avatar,” “real-time translation,” “video generation,” “human-centred design,” and “Deaf-led” are appearing more often, but they are not always being used with the same meaning.


Activists advocate for Deaf-centered AI leadership, emphasising that sign language access is a right, not a privilege.

That matters, because language shapes trust.


This blog is not written to reject innovation. I welcome progress when it supports Deaf people, sign language access, education, inclusion, and communication. But after reading recent SLxAI-related posts, public demonstrations, media coverage, and community responses, I believe the next stage of the conversation must be more precise.


The issue now is not only whether the technology works


In my previous blogs, I have written about Deaf-led authority, approval, accountability, safeguarding, and independent ethical oversight. Those points remain important.


But the recent SLxAI public conversation adds another question:


Do people actually understand what is being shown to them?


That question matters for Deaf users, hearing audiences, funders, public services, schools, universities, broadcasters, and buyers.


  • A system may look impressive on stage.

  • A video may appear smooth on social media.

  • A demo may attract attention.

  • A conference presentation may sound exciting.


But none of that automatically proves that the system is linguistically accurate, culturally safe, legally accountable, or ready for public use.


“AI avatar” is becoming too broad


One concern raised in recent LinkedIn discussions is that the term “AI avatar” is being used too loosely.


This is important.


  • An AI avatar is not the same as video-to-video generation.

  • A human signer with a generated skin overlay is not the same as a fully generated signing system.

  • Text-to-sign output is not the same as real-time interpretation.

  • A recorded or prepared signing output is not the same as live sign language translation.

  • A demo is not the same as a reliable public service.


These differences are not minor technical details. They affect procurement, safeguarding, public trust, accessibility claims, and Deaf community confidence.


If an organisation presents a video-to-video system as an AI avatar, or describes prepared output as real-time translation, audiences may misunderstand what the technology actually does. That creates risk.


Protesters advocate for clear communication, emphasising the importance of accessible language and real-time interpretation over AI avatars.

Clarity protects everyone


Clear terminology is not anti-innovation. It protects good innovation.


  • It helps Deaf communities understand what is being offered.

  • It helps buyers know what they are purchasing.

  • It helps public bodies avoid weak accessibility decisions.

  • It helps researchers explain the limits of their work.

  • It helps companies avoid misleading claims.

  • It helps the whole field build trust.


Every organisation working with Sign Language AI should be able to explain, plainly, whether its system is:

  • text-to-sign

  • speech-to-sign

  • sign-to-text

  • sign-to-speech

  • video-to-video generation

  • avatar-based

  • human-supported

  • fully automated

  • live

  • pre-recorded

  • prepared

  • experimental

  • or ready for public service delivery


If an organisation cannot give that explanation plainly, it should not market the system as though the answer were already clear.


Deaf-centred evidence must be visible

Many posts and presentations now use words such as “Deaf-led,” “human-centred,” “inclusive,” and “community feedback.”


Those words are welcome, but they need evidence.


  • Who was involved?

  • How many Deaf people gave feedback?

  • Were they native signers?

  • Which sign language or sign languages were used?

  • Were regional, cultural, age, gender, and language differences considered?

  • Was feedback paid?

  • Did the feedback change the product?

  • Who has authority to stop deployment if the output is not good enough?


Without answers, “community feedback” can become a marketing phrase rather than a safeguard.


Sign language is not only visual output


A visually smooth video is not enough.


Sign language includes facial expression, eye gaze, body movement, spatial grammar, timing, rhythm, context, culture, and lived experience. A system can look impressive to a hearing audience while still being unclear, unnatural, or wrong to native signers.


That is why native sign language users must not be added at the end of the process. They must be involved from the beginning, with real authority.


  • Not as decoration.

  • Not as data.

  • Not as last-minute testers.

  • Not only as public relations support.


As leaders, evaluators, designers, trainers, reviewers, and decision-makers.


One-off access is not the same as long-term accountability


Some projects create important visibility for sign language access. For example, sign language appearing in media, film, events, or public demonstrations can raise awareness and show what may be possible.


But real accessibility is not one-off visibility.


Real accessibility requires long-term quality, maintenance, governance, cost transparency, user support, interpreters, Deaf professionals, legal accountability, and proper standards.


AI must not become a reason to reduce Deaf services, weaken interpreter provision, avoid hiring Deaf professionals, or treat sign language as a cheaper substitute for human expertise.


Technology should strengthen Deaf access, not compete with Deaf people for resources.


What the public discussion after SLxAI shows

The most useful outcome from the public discussion after SLxAI may be that it has revealed where the field is still unclear.


People want answers.


They want to know what is real, what is generated, what is human-supported, what is experimental, what is validated, and what is ready for use.


They want to know whether Deaf leadership is real or symbolic.


They want to know whether companies are being honest about limitations.


They want to know whether AI is serving sign language or reshaping it without proper Deaf authority.


That is a worthwhile conversation.


My position

I remain open to innovation. I believe Sign Language AI may have a future if it is built carefully, honestly, and with Deaf people at the centre.


But the next stage must be more transparent.


  • We need clear terminology.

  • We need visible Deaf-centred evidence.

  • We need native signer involvement.

  • We need independent evaluation.

  • We need public accountability.

  • We need legal and procurement standards.

  • We need honesty about what each system can and cannot do.


The future of Sign Language AI should not be decided by hype, demos, or marketing language.


It should be shaped by Deaf leadership, sign language expertise, human rights, and trust.


A passionate advocate addresses a crowd, emphasising the importance of investing in the Deaf community. Signs highlight key messages: "Nothing about us without us," "Deaf leadership, stronger communities," and "Our language, our future." The campaign calls for leadership, training, evidence, and accountability in building sign language AI centered around Deaf people.

My view remains simple: Invest in Deaf people first. Then talk about AI.


Related discussions and references

Recent LinkedIn posts and public coverage are worth reading because they show how quickly the Sign Language AI conversation is moving after SLxAI. Together, they raise important questions about Deaf leadership, AI avatars, video-to-video generation, cultural validation, real-time translation claims, accessibility, and governance.


I am sharing these links not because every post says the same thing, but because the wider discussion shows why the field now needs clearer language, stronger evidence, and more visible Deaf-centred accountability.


  1. ALANGU GmbH posts: https://www.linkedin.com/company/alangugmbh/posts/?feedView=all

  2. Tim Scannell: https://www.linkedin.com/posts/tim-scannell_signlanguage-deaf-bsl-share-7450090526696955904-8TZd/

  3. Jisu Lee: https://www.linkedin.com/posts/jisuleein_batoners-signlanguage-accessibility-share-7450385002573344768-KfFz/

  4. Bryan Leeper: https://www.linkedin.com/posts/bryanleeper_the-conversations-at-slxai-shouldnt-end-share-7450338627705831424-OXs3

  5. NBC Boston: https://www.nbcboston.com/video/on-air/as-seen-on/ai-tool-translates-asl-in-real-time/3935023/

  6. Nature article: https://www.nature.com/articles/s41598-026-43478-9

  7. Craig Radford: https://www.linkedin.com/posts/craigjradford_i-just-attended-the-slxai-conference-in-boston-share-7451123086969729024-EKbv

  8. Ryan Campbell: https://www.linkedin.com/posts/ryanhaitcampbell_reflecting-on-the-slxai-2026-conference-in-share-7451354312158633985-U985

  9. World Federation of the Deaf: https://www.linkedin.com/posts/wfdeaf_the-wfd-president-recently-delivered-a-presentation-activity-7452335345792200705-VsV5

  10. Jeffrey Shaul: https://www.linkedin.com/posts/jeffrey-shaul_last-week-i-attended-slxai-in-boston-i-ugcPost-7452349196764049408-z3sT

  11. Chris Holloway: https://www.linkedin.com/posts/chris-j-holloway_accessibility-disney-signlanguage-activity-7452457346527817728-6Rk2

  12. Josh Pennise: https://www.linkedin.com/posts/joshpennise_languageservices-accessibility-ai-activity-7452730193556955136-UANG

  13. Craig Radford / 360 Direct Access: https://www.linkedin.com/posts/craig-radford-ceo-of-360-direct-access-ugcPost-7451988853000679424-dKHo

  14. Higgsfield: https://higgsfield.ai/

  15. Kling AI: https://kling.ai/


Disclaimer: This blog is my personal opinion and reflection, written in good faith from publicly available posts, comments, media coverage, and discussions. It is not intended to make legal claims, accusations, or defamatory statements about any individual, organisation, conference, company, or project. I aim to encourage clearer language, transparency, accountability, and Deaf-centred governance in Sign Language AI.