AI Can Follow a Speaker. Human Interpreters Follow the Discussion.
- Tim Scannell
Artificial intelligence is developing fast, but live interpreting shows exactly where its limits still are.
For prepared speeches, AI-generated sign language can look impressive. It performs best when the language is structured, the content is predictable, and the system operates on clean input.
But conferences are rarely that simple.
Live panels are unpredictable. Speakers interrupt each other. They react in the moment. They change direction. They overlap. They leave thoughts unfinished. They use humour, tone, emphasis, and audience interaction to shape meaning as they go.
And that is exactly why human interpreters remain essential.
At a recent London tech event, organisers used an AI sign language interpreter for an all-day conference. That may have worked for the prepared content. But real accessibility cannot depend on a tool that performs well only in ideal conditions.

The real test is the live discussion.
Can AI instantly process multiple speakers, rapid context changes, unfinished thoughts, humour, emphasis, and audience interaction, then turn all of that into clear, natural, accessible sign language in real time?
Not reliably.
A human interpreter does far more than convert words from one language into another. They manage meaning, pace, context, and access. They read the room. They prioritise what matters. They adapt instantly when communication becomes messy, fast, or unpredictable.
AI can be a useful tool. It may support access in some settings. But it is not yet a replacement for skilled human interpreters in live environments where meaning moves faster than words.
AI can follow a speaker.
Human interpreters follow the discussion.