
SignGPT, starting points, and the risk of a “Digital Milan”


Many people think SignGPT is the same as ChatGPT, just with sign language added. This is where confusion begins.


ChatGPT starts with:

  • text

  • or speech

This makes sense because ChatGPT is built for people who write or speak first.

Because of this, many people assume SignGPT should work the same way:

  • start from text or audio

  • then generate sign language


But this assumption creates a serious misunderstanding.

For Deaf people:

  • Sign language is the first language

  • Meaning starts in sign, not speech or writing

  • Sign language is not a translation layer


When a system called “SignGPT” does not start with sign language, people become confused:

  • Is sign language the core, or just the output?

  • Who is the system really built for?


This confusion matters — because the starting point equals power.


History reminds us why this is important.

In 1880, the Milan Congress voted to ban sign language from Deaf education. That decision caused long-term harm by pushing sign language aside.


Today, with Generative AI and LLMs, there is a modern risk I think of as a “Digital Milan”: not by banning sign language, but by ignoring it at the foundation.


When AI always starts from speech or text:

  • Education barriers can return

  • Communication barriers are repeated digitally

  • Sign-first people are excluded again, quietly

This is not about blame or intent. It is about design choices and their consequences.


There are two valid human starting points:

  • Speak/write first

  • Sign first

ChatGPT already supports the first.

SignGPT should support the second.


Clear language and clear design matter. Otherwise, we repeat old patterns in new technology — and call it innovation.
