They’re seated just a few feet apart in the classroom, but every question the student asks travels 300 miles before it reaches the ears of the teacher. The answer makes the same round-trip journey. Over the course of a five-minute conversation, the words exchanged between the pair will travel a distance equivalent to crossing their home state of Utah from east to west a dozen times. Despite how complex that may sound, it’s a very efficient method of communication.
The student in question has a severe hearing impairment and can communicate only through sign language. The teacher, however, doesn’t understand sign language. So, between the two sits a remote interpreter. Located more than 150 miles away, the interpreter joins the classroom conversation via a video conferencing connection on a laptop.
This isn’t a prediction of a future scenario. It’s happening today. It’s an example of how video conferencing for hearing impaired students is expanding learning opportunities, especially in rural areas.
Video Conferencing for Hearing Impaired Students
The exchange above was described in a recent post from Utah news outlet KSL. The student at the center of the article attends Piute High School in rural Utah. Under sponsorship from the Utah Schools for the Deaf and the Blind, she receives assistance from an interpreter located at Orem Junior High School. The interpreter can hear what is happening in class through the laptop connection and can speak directly to the teacher through earbuds linked to a live three-way video conferencing call.
Thanks to this digital dexterity, teacher and interpreter can hold an audio conversation at the same time as interpreter and student hold a visual one.
The connection also extends beyond the classroom and outside of school hours. The student can call on the interpreter for assistance over a similar arrangement using a mobile tablet video call.
None of the technology involved in the exchange is any more expensive or complex than the webcam, laptop, and video conferencing platform you would need to hold a basic Skype conversation with a friend. The accessibility of that technology has led to video being used as a translation aid in several different forms, with more applications yet to be explored.
Video Translation and the Police
The low cost of video conferencing hardware and software has allowed the technology to evolve in pursuit of almost any novel idea: you can get an HD-quality webcam for less than $100, and services such as Skype and Zoom let you make a video call for free. As a result, a fledgling industry has sprung up providing online sign language instruction and interpretation. And because every smartphone can host a video call, sign language speakers can take these digital assistants with them into the outside world.
That idea has inspired the New York Police Department to begin equipping its officers with mobile sign language interpretation devices. Since April 2017, the NYPD has been issuing tablets to officers on patrol that can instantly place an officer in a video call with a sign language interpreter, so they can better communicate with the hearing impaired people in their community.
Again, there’s no technological difference between the tablet the NYPD uses and the one you buy online from Amazon. What is changing is our familiarity with such devices and our understanding of where they can be used beyond the standard social or workplace interactions.
Video Conferencing Sign Language Translation
The primary purpose of video conferencing is to bring people into face-to-face proximity. In this case, that means families with historic and economic ties to rural areas don’t have to relocate to give their children with special needs access to expert services. Once they can be hosted on a smartphone, video interpreting links could be used across a range of daily interactions: trips to the doctor, bank, or even supermarket, for instance.
There’s also the possibility that video conferencing alone could provide a translation service. Skype, one of the most accessible and basic video vendors available, already has a live audio-to-text translation feature. Its service is far from perfect, but it can reproduce the spoken word with enough speed to be useful in a real-time conversation. The hearing-impaired student could potentially do away with the cost of a human interpreter by using a live Skype link in class. Both teacher and student would need screens on their desks, but the teacher’s words could be translated to text for the student automatically, and the student could interact through live messaging.
There’s even the possibility that sign language gestures could be translated digitally. Tactile gloves being tested by Canadian researchers to convey touch over video calls could perhaps be repurposed to interpret complicated hand movements in the same way they currently pick up pressure points on the skin. And then there’s the facial recognition technology that can read, interpret, and identify human faces. Couldn’t this visual technology be programmed to detect hand movements and derive meaning from a stored database of answers?
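The lookup imagined in that last question could, in its simplest form, work as a nearest-neighbor match between a captured gesture and a stored database of known signs. The sketch below is purely illustrative: the sign names, feature vectors, and distance threshold are all invented placeholders, and a real system would extract its features from glove sensors or camera-based hand tracking rather than hard-code them.

```python
import math

# Hypothetical stored database mapping known signs to feature vectors.
# In practice these would come from sensor or camera data, not constants.
SIGN_DATABASE = {
    "hello":     [0.9, 0.1, 0.3, 0.7],
    "thank you": [0.2, 0.8, 0.6, 0.1],
    "yes":       [0.5, 0.5, 0.9, 0.2],
}

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def interpret_gesture(features, database=SIGN_DATABASE, threshold=0.5):
    """Return the closest stored sign, or None if nothing is close enough."""
    best_sign, best_dist = None, float("inf")
    for sign, stored in database.items():
        dist = euclidean(features, stored)
        if dist < best_dist:
            best_sign, best_dist = sign, dist
    return best_sign if best_dist <= threshold else None

# A captured gesture close to "hello" is recognized; a noisy one is rejected.
print(interpret_gesture([0.85, 0.15, 0.25, 0.75]))  # -> hello
print(interpret_gesture([0.0, 0.0, 0.0, 0.0]))      # -> None
```

The threshold matters: without it, every gesture, however garbled, would be forced onto the nearest sign, so rejecting low-confidence matches is what keeps a system like this usable in conversation.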
Current video conferencing technology can provide the hearing impaired with a live link to instant, mobile translation that helps them communicate more easily and smoothly with the world around them. In the future, they may need nothing more than Skype and a smartphone to make the connection themselves.