The Evolution of Interaction: From Early HCI to AI and Sign Language

Between 1999 and 2003, when I was at university, one of my subjects was Human–Computer Interaction (HCI). At that time, many people were still avoiding the Internet. Today, smartphones and smart devices give almost everyone access to apps and platforms across the Internet — the shift has been significant.

We are now experiencing a similar moment with AI and sign language. There is increasing experimentation with signing via Meta platforms, AI-assisted accessibility tools, live captions within the Deaf ecosystem, and emerging concepts such as Artificial Signed Intelligence (ASI), Hand Gesture Interaction (HGI), Sign Language Translation Machines (SLTM), and Machine Learning (ML).


However, there is still considerable confusion and limitation. Many current systems do not reach the equivalent of BSL Level 2 or Level 3, and some rely heavily on English, whether through social media, captions, subtitles, or audio-to-sign solutions. This often produces Sign Supported English (SSE) rather than natural British Sign Language.


It is important to recognise that under an established qualification board, candidates who do not use correct BSL grammatical order or appropriate facial expression automatically fail at BSL Level 2 and Level 3. This highlights that non-manual features and sign order are essential elements of BSL, not optional additions.


Human interpreters and translators undergo formal certification, achieve full qualifications, register professionally, carry ID badges, and complete continuing professional development (CPD). They are trained to manage context, subject range, role shift, multi-channel signs, repair strategies, and overlap, areas that AI currently does not fully understand or apply.


That said, AI is increasingly filling gaps in Video Relay Services, particularly across social media. One clear advantage of AI is its ability to operate 24/7, 365 days a year, without fatigue.


There have been highly skilled, fully qualified human translators and interpreters since 1985, whereas AI in this space only began emerging around 2022 and remains largely in a testing phase.


What would be valuable now is evidence-based research from universities, focusing on:


  • accuracy and reliability

  • response effectiveness

  • two-way conversational capability

  • linguistic and cultural validity


At present, there are still concerns and biases around AI — much like the scepticism that surrounded the Internet in 1999. As before, time, research, and responsible development will determine how this technology evolves.
