Sign language translating devices are cool. But are they useful?
Over the past several decades, researchers have regularly developed devices meant to translate American Sign Language (ASL) to English, in hopes of easing communication between people who are deaf and hard of hearing and the hearing world. Many of these technologies use gloves to capture the motion of signing, which can be bulky and awkward.
Now, a group of researchers at Michigan State University (MSU) has developed a glove-less device the size of a tube of Chapstick that they hope will improve ASL-English translation.
The technology, called DeepASL, uses a camera device to capture hand motions, then feeds the data through a deep learning algorithm, which matches it to signs of ASL. Unlike many previous devices, DeepASL can translate whole sentences rather than single words, and it doesn't require users to pause between signs.
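Continuous, pause-free recognition of this kind is typically handled by sequence models that emit a best-guess label (or a "blank") for every camera frame and then collapse the stream into a sentence. The exact model DeepASL uses isn't described here, so the following is only a rough illustrative sketch of that collapsing step, with entirely hypothetical data:

```python
# Sketch of the "collapse" step common in continuous sign recognition
# (e.g., CTC-style greedy decoding). Hypothetical data; not DeepASL's code.

BLANK = "-"  # a frame where the model sees no distinct sign

def collapse(frame_labels):
    """Merge consecutive repeats, then drop blank frames."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# Per-frame guesses from a hypothetical recognizer watching a signer:
frames = ["-", "I", "I", "-", "WANT", "WANT", "-", "-", "EAT", "EAT", "-"]
print(collapse(frames))  # ['I', 'WANT', 'EAT']
```

Because repeats are merged and blanks discarded, the signer never has to pause to mark where one sign ends and the next begins.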
"This is a truly non-intrusive technology," says Mi Zhang. He is a professor of electrical and computer engineering. He led the research.
Zhang and his team hope DeepASL can help people who are deaf and hard of hearing by serving as a real-time translator. It could be especially useful in emergency situations, Zhang says. The device, which can be used with a phone, tablet or computer, could also help teach ASL.
Since more than 90 percent of deaf children are born to parents who are hearing, there is a large community of adults who need to learn ASL quickly. DeepASL could serve as a digital tutor, giving feedback on whether learners are signing correctly.
Zhang has applied for a patent and hopes to have a device on the market within a year. Because it's based on affordable technology, it could be more widely accessible than previous efforts.
But Christian Vogler, a professor of communication studies at Gallaudet University, a university for people who are deaf or hard of hearing, is skeptical of devices designed to translate ASL, and his skepticism is shared by many in the Deaf community.
Devices generally do not truly 'translate' ASL, but merely recognize hand signs and turn them into an English word per sign, Vogler says. This means key grammatical information is lost: whether a phrase is a question, a negation, a relative clause and so forth.
While DeepASL does translate full sentences, some features of ASL grammar go beyond hand signs: facial expressions are often used as modifiers, an eyebrow raise can turn a phrase into a question, and body positioning can indicate when the ASL user is quoting someone else.
So far, "none of the systems have been even remotely useful to people who sign," Vogler says. He adds that researchers often seem to have "very little contact with the [Deaf and hard of hearing] community and very little idea of their real needs."
Zhang's team did not test the device on people who were deaf and hard of hearing, but on students in a sign language translation program. Zhang emphasizes that DeepASL is designed to enable only basic communication at this point, and that it is just a starting place. He says his team hopes to extend DeepASL's capabilities in the future to capture facial expressions as well.
"That will be the next significant milestone for us to reach," he says.
Vogler says it's a positive that the MSU technology is using deep learning methods, which have had success with spoken language. But, despite not requiring a glove, the device likely has the same pitfalls as any previous system, since it doesn't capture face and body movements.
Vogler thinks researchers should move away from the idea that sign language recognition devices can really meet in-person communication needs.
"We have many options for facilitating in-person communication. And until we have something that actually respects the linguistic properties of signed languages and the actual communication behaviors of signers, these efforts will go nowhere near supplanting or replacing them," he says.
"Instead, people need to work with actual community members, and with people who understand the complexities of signed languages."
Vogler says it would be useful for sign language recognition technology like MSU's to work with voice interfaces like Alexa. The growth of these interfaces is an accessibility challenge for people who are deaf and hard of hearing, he says, much as the internet, a largely visual medium, has presented challenges for people who are blind over the years.
"We presently do not have an effective and efficient way to interact with these voice interfaces if we are unable to, or do not want to, use our voice," he says. "Sign language recognition is a perfect match for this situation, and one that actually could end up being useful and getting used."