Blog


The Impact - Safety, the Unexpected, and Why One Avatar Is Not Enough
Image description: Illustration of a busy public transport environment during an emergency. Deaf and hearing passengers, staff, and a driver communicate using multiple methods: sign language, written text, mobile alerts, smartwatches with haptic warnings, and face-to-face interaction. A sign language avatar appears on a screen, but the focus is on real people communicating directly. The image emphasises that accessibility is not tokenistic or limited to one avatar, but requires …
Tim Scannell
3 days ago · 2 min read


The Clarification: What “Real-Time Generative AI” Actually Means
“Real-time generative AI” is often misunderstood. Real-time = fast response. Generative = creates output. AI = automated decision-making software.
Put together, real-time generative AI means:
➡️ The system can generate content immediately when triggered.
What it does not automatically mean:
❌ It understands language
❌ It interprets meaning or intent
❌ It handles live, free-form communication
❌ It performs true translation
❌ It manages unexpected situations
Speed ≠ understanding …
Tim Scannell
4 days ago · 1 min read


Generative AI for Sign-Language Avatars Is Not Enough
When AI meets sign language, avatars are often the first thing we see. They’re visual, impressive, and easy to demonstrate. But if AI support for Deaf communities stops at avatars, we’ve misunderstood the real need. An avatar is a delivery layer, not intelligence.
Image description: Infographic showing that sign language AI is more than avatars, highlighting the need for two-way communication, understanding context, and working everywhere to support real accessibility.
True AI assistance for sign language …
Tim Scannell
5 days ago · 1 min read





