
Usability Issues for Deaf Users

Vena Chitturi

This post was previously on the Pathfinder Software site. Pathfinder Software changed its name to Orthogonal in 2016.

Working on a project involving deaf users made me curious to learn more about this group, the language issues they face, and how those issues should shape the way we design software and online experiences for them. Digging into the matter, I learned a few interesting things.

To begin with, a common perception is that deafness is a disability. But perhaps it’s more useful to understand deafness in a different way. In many cases, Sign Language is a deaf user’s first language, so English, or any other spoken language, is like a foreign language to them. This is an easy detail to forget, and it means that these deaf users actually have some things in common with hearing users who learned English as a second language.

However, the differences are many. Because native sign languages have no written form, deaf users don’t have the same experience with language that hearing users do. Native sign language is very visual, and signers depend heavily on gestures and facial expressions to convey meaning and emphasis. In addition, Sign Language’s grammar and syntax rules differ from those of English and other spoken languages, and nuances such as slang or wordplay are very difficult for a deaf user to pick up on. So it’s more useful to think of deaf users as communicating in a different language than to see them as disabled (especially when looking into accessibility).

To help bridge the divide between written language and signed language, people rely a great deal on captioning and subtitling. These are two ways that information can be made more accessible for deaf users in the online world as well. Captioning, according to the Open & Closed Project, is the “transcription of speech and important sound effects for the benefit of deaf and hard-of-hearing viewers and others.” For deaf users who were once hearing, or for the hard of hearing, written English is their primary language, so captioning works quite well for them. But for other deaf users, because the information coming through captioning is in their second language, and because their primary language is so heavily tied to gestures and facial expression, it is difficult for them to take the full meaning from what they read. Captioning, while important, isn’t sufficient on its own.

That’s where subtitling comes in. The Open & Closed Project defines subtitling as a “written translation of the dialogue.” Subtitling for deaf users allows the content to be translated into words that are more common and understandable to them. This can be a huge help to deaf users whose vocabulary is rooted in Sign Language, not in written or spoken English. It’s important to note that when translating from one written language to another (i.e., foreign-language translation), subtitling can be quite effective. For deaf users, however, the difficulty is that you are trying to translate a written language (English) for a user who communicates in a non-written language (Sign Language), yet you are still using the written word to do so. And remember how Sign Language’s grammar and syntax differ from English’s? Unfortunately, in subtitling, the grammar and syntax are still those of English.
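On the web, both captioning and subtitling are typically delivered as text tracks attached to a video. As a minimal sketch only (the element id "player" and the WebVTT file names are hypothetical placeholders, not anything from this article), the standard HTML5 track element lets a page offer both kinds side by side, wired up here in TypeScript:

```ts
// A rough sketch: the "#player" id and the .vtt file names are assumptions
// made for illustration.
function addTextTracks(): void {
  const video = document.querySelector<HTMLVideoElement>("#player");
  if (!video) return;

  // Captions: speech plus important sound effects, per the Open & Closed
  // Project's definition.
  const captions = document.createElement("track");
  captions.kind = "captions";
  captions.src = "captions-en.vtt";
  captions.srclang = "en";
  captions.label = "English captions";

  // Subtitles: the dialogue rewritten in plainer, more common wording for
  // users whose first language is Sign Language.
  const subtitles = document.createElement("track");
  subtitles.kind = "subtitles";
  subtitles.src = "subtitles-plain-en.vtt";
  subtitles.srclang = "en";
  subtitles.label = "Plain-language subtitles";

  video.append(captions, subtitles);
}

addTextTracks();
```

Offering both tracks lets the user pick the form that matches their language background, rather than forcing everyone through verbatim captions.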

As someone focused on user experience and usability, I found that these issues gave me a new appreciation for designing accessible software and websites. The challenges haven’t been completely solved yet, but designers are continually working to make the experiences of deaf users better.

For example, using a very direct, journalistic style (for subtitles as well as for textual copy on a website) can help deaf users understand the content better, because it mimics sign language’s very direct style of communication: signers tend to state points clearly before expanding on them. Writing in an active voice and avoiding slang and jargon as much as possible also helps deaf users grasp the meaning. Other methods include laying out the text so that there are fewer words per line, using headings, and listing content in bullets.
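The layout part of that advice can even be partly enforced in code. Here is a minimal sketch in TypeScript, assuming a hypothetical content container with the class "article-body"; the specific numbers are illustrative values of mine, not guidance from the article:

```ts
// A rough sketch; the selector and the numeric values are assumptions made
// for illustration.
function applyReadableLayout(selector: string): void {
  const body = document.querySelector<HTMLElement>(selector);
  if (!body) return;

  // Fewer words per line: cap the text measure at roughly 60 characters.
  body.style.maxWidth = "60ch";

  // Generous line spacing makes each line easier to pick out visually.
  body.style.lineHeight = "1.6";
}

applyReadableLayout(".article-body");
```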

Designers can also use images to give context and greater meaning to content. Joseph Dolson wrote an interesting article about how to make content, including video and audio, more accessible for deaf users. He suggests that strong, relevant graphics can carry the narrative, meaning, and message better than text alone. He gives an example of how an image of a broken glass can convey much more feeling, and more of an experience, than simply stating in text that a glass broke.

The great thing is that most of these solutions have the potential to improve a site or piece of software well beyond its deaf users (for example, some of them are great principles to maintain for visually impaired users). Captioning itself is a good example of how designing for deaf users has benefited users at large: many elderly viewers use it, people waiting in airport lounges or other noisy locations use it, and those trying to learn a new language have found it a useful aid. So taking accessibility into consideration doesn’t make sites and software accessible to just a few; it makes them better for everyone, because many of these solutions align directly with good usability in general.
