Usability Issues for Deaf Users

Working on a project involving deaf users made me curious to learn more about this group, the language issues they face, and how those issues can shape the way we design software and online experiences for them. Digging into the matter, I learned a few interesting things.
To begin with, a common perception is that deafness is a disability. But perhaps it's more useful to understand deafness in a different way. In many cases, Sign Language is a deaf user's first language, so English, or any other spoken language, is like a foreign language to them. This is an easy detail to forget, and it means that these deaf users actually have some things in common with hearing users who learned English as a second language.
However, the differences are many. Because native sign languages have no written form, deaf users don't have the same experience with written language that hearing users do. Native sign language is highly visual, and signers depend heavily on gestures and facial expressions to convey meaning and emphasis. In addition, Sign Language's grammar and syntax rules are different from that of English and other spoken languages. Nuances in language such as slang or a play on words are also very difficult for a deaf user to pick up on. So it's more useful to think of deaf users as communicating in a different language than to see them as disabled (especially when looking into accessibility).
To help bridge the divide between written language and signed language, people rely a great deal on captioning and subtitling. These are also two ways that information can be made more accessible for deaf users online. Captioning, according to the Open & Closed Project, is the "transcription of speech and important sound effects for the benefit of deaf and hard-of-hearing viewers and others." For deaf users who were once hearing, and for the hard-of-hearing, written English, for example, is their primary language, so captioning works well for them. But for other deaf users, the information coming through captioning is in their second language, and because their primary language is so heavily tied to gesture and facial expression, it is difficult for them to get the full meaning from what they are reading. Captioning, while important, isn't sufficient on its own to help these deaf users.
That's where subtitling comes in. The Open & Closed Project defines subtitling as a "written translation of dialogue." Subtitling for deaf users allows the content to be translated into words that are more common and understandable to them, which can be a huge help to users whose vocabulary is rooted in Sign Language rather than in written/spoken English. It's important to note that when translating from one written language to another (i.e. foreign-language translation), subtitling can be quite effective. For deaf users, however, the difficulty with subtitling is that you are trying to translate a written language (English) for a user who communicates in a non-written language (Sign Language), yet you are still using the written word to do so. And remember how Sign Language's grammar/syntax are different from that of English? Unfortunately, in subtitling, the grammar/syntax is still based in English.
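To make the caption/subtitle distinction concrete on the web, here's a minimal sketch (my own, not from the Open & Closed Project) of how a page could offer both a verbatim caption track and a plain-language subtitle track on an HTML5 video using WebVTT text tracks. The file names, labels, and the plain-language rewrite itself are placeholders I've assumed.

```typescript
// A minimal sketch: attach a verbatim caption track and a plain-language
// subtitle track to an HTML5 video. The .vtt file names and labels are
// assumed placeholders, not files from any real project.
function addAccessibleTracks(video: HTMLVideoElement): void {
  // Captions: transcription of speech plus important sound effects.
  const captions = document.createElement("track");
  captions.kind = "captions";
  captions.srclang = "en";
  captions.label = "English (captions)";
  captions.src = "captions-en.vtt";

  // Subtitles: the same dialogue rewritten in plainer, more direct language.
  const subtitles = document.createElement("track");
  subtitles.kind = "subtitles";
  subtitles.srclang = "en";
  subtitles.label = "English (plain language)";
  subtitles.src = "subtitles-plain-en.vtt";

  video.append(captions, subtitles);
}
```

Offering both tracks lets the viewer choose: the verbatim record stays available, while the plainer rewrite is there for readers who find it easier to follow.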
As someone focused on user experience and usability, I found that these issues gave me a new appreciation for designing accessible software and websites. These challenges haven't been completely solved yet, but designers are continually working to find ways to make the experiences of deaf users better.
For example, using a very direct, journalistic style (for subtitles as well as for the textual copy on a website) can help deaf users understand content better, because it mimics sign language's very direct style of communication: signers tend to state a point clearly before expanding on it. Writing in an active voice and staying away from slang and jargon as much as possible also helps the deaf user grasp the meaning. Other methods include laying out the text so that there are fewer words per line, using headings, and listing content in bullets.
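As a rough, hypothetical illustration of that layout advice, the sketch below builds a short section with a narrow measure (fewer words per line), a heading, and bulleted points. The copy, the 60-character measure, and the function name are my own assumptions, not something taken from any of the sources mentioned here.

```typescript
// A hypothetical sketch of the layout advice above: a narrow measure (fewer
// words per line), a heading, and bulleted points instead of a long paragraph.
function renderPlainLanguageSection(host: HTMLElement): void {
  const section = document.createElement("section");
  section.style.maxWidth = "60ch"; // keeps each line short and scannable

  const heading = document.createElement("h2");
  heading.textContent = "Returning an item"; // direct, active heading

  const list = document.createElement("ul");
  for (const point of [
    "Pack the item in its original box.",
    "Print the return label.",
    "Drop the box off at any post office.",
  ]) {
    const item = document.createElement("li");
    item.textContent = point; // short, active-voice sentences, no jargon
    list.append(item);
  }

  section.append(heading, list);
  host.append(section);
}
```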
Designers can also use images to give context and greater meaning to content. Joseph Dolson wrote an interesting article about how to make content, including video and audio, more accessible for deaf users. He suggests that relying heavily on using interesting graphics can help get the narrative, meaning and message across better than text alone. He gives an example of how an image of a broken glass can convey much more feeling and more of an experience than just textually indicating that a glass broke.
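Here's one small, hypothetical way a page could pair that kind of image with the plain textual statement it reinforces, using an alt attribute as the text fallback. The file name, alt text, and caption are made-up placeholders rather than anything from Dolson's article.

```typescript
// A hypothetical sketch: pair an evocative image with the plain statement it
// reinforces. The file name, alt text, and caption are made-up placeholders.
function addIllustratedPoint(container: HTMLElement): void {
  const figure = document.createElement("figure");

  const img = document.createElement("img");
  img.src = "broken-glass.jpg"; // image that carries the feeling of the event
  img.alt = "A drinking glass shattered on a tile floor"; // text fallback for non-visual users

  const caption = document.createElement("figcaption");
  caption.textContent = "The glass broke."; // the plain statement the image reinforces

  figure.append(img, caption);
  container.append(figure);
}
```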
The great thing is that most of these solutions have the potential to improve the site or software for more than just deaf users (for example, some of them are great principles to maintain for visually impaired users). Captioning itself is a good example of how designing for deaf users has benefitted users at large: many elderly people use it, those waiting in airport lounges or other noisy locations often use it, and those trying to learn a new language have found it a useful aid. So taking accessibility into consideration doesn't make sites and software accessible to just a few; it makes them better for everyone, because many of the solutions are directly aligned with good usability in general.

Comments

I really don’t think there is any evidence that signing deaf people read and understand captioning worse than oral or late-deafened people. It may actually be true for some people, but I don’t know of any such evidence. (Not from the last 20 years, at least.) Hence it does not follow that subtitling sign-language videos into a written language works worse for that group, especially since they’re the ones who understand the sign language in the first place. (Those subtitles are for nonspeakers or speakers of other sign languages.)
Of course English-language subtitles will have English “grammar/syntax.” You seem to be suggesting we can subtitle in sign language; we can’t.
You should be more careful in your claim that sign languages, of which there are actually many, share no “grammar and syntax” with spoken languages. Grammar and syntax are synonyms. The underlying grammar of some sign languages bears comparison with some spoken languages (e.g., ASL and Chinese; ASL and Ga).
I think you were trying to say that a sign language from place X is not a reëncoding in manual gestures of the spoken language of place X. (Except of course when it is, as with Signing Exact English, which isn’t a “language.”)
The BBC usability test from circa 2002 did indeed show that BSL users preferred plain-language sites. This is not a reason never to write in language other than plain.
In response to the comment, obviously one can’t subtitle in sign language. And I don’t believe the author ever suggested subtitling a sign language video for the benefit of a deaf user! That would be absurd.
The point, I believe, is a simple but important one: in making sites and software more user-friendly for the deaf person, a designer must stand in the shoes of the deaf person. Too often, in my experience, software is designed from the perspective of what the designer thinks is good for the user. This post challenges the designer to get out of his comfort zone and explore.
Also, I don’t believe the author said that sign language shares no grammar and syntax with spoken languages. I believe the author said only that “Sign Language’s grammar and syntax rules are different from that of English and other spoken languages.” It can be different and still share certain grammar and syntax rules. Two very different things.
Plus, we have to be careful to recognize that there are significant differences between a person born without hearing and a person who, after having learned written and spoken English, then loses hearing. The author seems to focus on persons born deaf and on developing tactics and means to make software more accessible for the born-deaf crowd.
As a final point, grammar and syntax are different things. Grammar is the framework of a language. It is a study or science that has two parts: morphology (the forms of words) and syntax (the combination of words into sentences). Morphology studies verbs, nouns, adjectives etc. Syntax deals with their functions in sentences – subjects, objects, attributes etc.
Just to clarify, to say “that relying heavily on using interesting graphics can help get the narrative, meaning and message across better than text alone” is not really an accurate description of what I said in my Practical eCommerce article. The key word which bothers me here is “relying” — in fact, no article should ever “rely” on visuals to convey meaning; the visuals should be made available in order to assist the reader in perceiving the intent of the text and any sound effects or important elements which may be lacking in a direct-to-text transcription.
[I know that I’m nitpicking, to a certain degree, but the statement “rely heavily on images” grates on me, since a crucial part of accessible design is making the attempt to never rely on any media format.]
Best,
Joe
Vena,
How many deaf people actually use sign language? Less than 1% (one percent!):
http://openandclosed.org/docs/ALA265/
Also, over 95% of deaf and hard-of-hearing people come from hearing families, so many of them are usually raised oral, and some use signed English.
Do not exaggerate by claiming that all those who were born deaf cannot read English. I have been PROFOUNDLY deaf since early childhood, yet I am fluent in several spoken, written, and signed languages. I know many people who were born deaf but have excellent writing and reading skills in addition to being native ASL users. Many of them know several languages, too.
Written language is a bridge between deaf and hearing people because not all hearing people can sign and not all deaf people can talk or lipread. Only with written language will you have more success in the mainstream world.
It is deaf education that is the problem. Any native sign user can learn to read and write a spoken language if taught properly.