
Assistive Technologies

Software and hardware to navigate the web

Corina Murg | Published on November 2, 2024

Introduction to assistive technologies

Not everyone can use their sight and a mouse to access digital content. Users with certain disabilities need other tools to navigate the web. For example, a blind person might rely on a keyboard paired with screen reader and speech synthesizer software. Together, these hardware and software tools are called assistive technologies.

Here's an overview of some of the most used assistive technologies:

The Keyboard

Using a mouse can be challenging for people with limited hand mobility. Keyboards, whether modern or traditional, are essential for these users.

With the Tab key, users navigate through interactive elements like links and buttons. Pressing the Enter key activates these elements, and for buttons, the Space bar works as well. The arrow keys are used to navigate through menus or a group of buttons, and there are shortcuts for opening, closing, or switching between apps and tabs, to name just a few.

It's important to note that the keyboard alone cannot guarantee access to all the elements a user might need to interact with. We'll cover this in more detail in the next posts, but let's remember that the underlying code of the website must be structured in a way that allows keyboard navigation. This is true for other assistive technologies as well. They cannot do their job unless the website is built with accessibility in mind.
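As a quick illustration of why the underlying code matters (the class name and handler here are made up for the example), a native element like `button` is keyboard-focusable and activatable by default, while a clickable `div` is invisible to keyboard users:

```html
<!-- Keyboard-accessible by default: reachable with Tab,
     activated with Enter or Space -->
<button type="button">Open menu</button>

<!-- NOT keyboard-accessible: a div is skipped by Tab and
     ignores Enter and Space, even if it looks like a button -->
<div class="menu-button" onclick="openMenu()">Open menu</div>
```

A native button gives focus handling and activation for free; recreating that behavior on a `div` requires extra attributes and scripting, which is one reason accessibility guidance favors native HTML elements.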

Screen readers

A screen reader is software typically used by people who are blind or have vision impairments. It can also benefit users with no vision impairments. For example, a person who prefers auditory learning can use a screen reader to listen to the content instead of reading it.

When it renders a page, the browser builds an accessibility tree containing the nodes relevant for accessibility. The screen reader accesses this tree through a set of platform APIs and then conveys the information to the user through speech or braille output.
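As a rough sketch (simplified here; real accessibility trees carry much more detail), markup like the following is exposed to the screen reader as a tree of nodes, each with a role and an accessible name:

```html
<!-- Source markup -->
<nav>
  <a href="/about">About</a>
</nav>
<button>Search</button>

<!-- Simplified accessibility tree the browser might expose:

  navigation
    link "About"
  button "Search"
-->
```

The screen reader announces the role and name of each node ("About, link", "Search, button"), which is how the user knows what the element is and what it does.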

Screen readers paired with a keyboard and a speech synthesizer

If the user has no hearing impairments, they will navigate through the UI elements with the keyboard, while the content on the screen is announced through speech synthesizer software.

The most popular screen readers are:

  • JAWS (Job Access With Speech) for Windows

  • VoiceOver included in macOS and iOS devices

  • NVDA (NonVisual Desktop Access), an open-source option for Windows

  • ChromeVox for Chrome OS

Screen readers paired with a braille display

For users with both vision and hearing impairments, the screen reader can be used with a braille display. In this case, the screen reader translates the content on the screen into a format that can be read on a display fitted with braille cells. As the input from the screen reader changes, the cells change dynamically to convey the new information to the user.

Laptop fitted with a braille display

Photo credit: Elizabeth Woolner on Unsplash.

Voice recognition software

This technology helps people with mobility issues, as well as those who need hands-free interaction with the web. It allows users to control their devices through voice commands. For example, to activate a navigation button, the user can say "Click navigation".

Dragon NaturallySpeaking is a popular example for Windows, while macOS offers Dictation. Windows users can also use Voice access, the built-in Windows 11 tool.
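Voice recognition tools generally match a spoken command against a control's visible label or accessible name, so a control with no name cannot be targeted by voice. A sketch (exact matching behavior varies by tool):

```html
<!-- "Click navigation" can match this button because its
     accessible name is "Navigation" -->
<button aria-label="Navigation">☰</button>

<!-- An icon-only button with no text, no label, and empty
     alt text has no accessible name, so there is nothing
     for a voice command to match -->
<button><img src="menu.svg" alt=""></button>
```

This is another example of assistive technologies depending on the site being built with accessibility in mind.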

Screen magnifiers

Screen magnifiers are designed to enlarge the content on the screen, making it more readable for users with low vision. These tools often include additional features such as inverted colors and speech output.

Web page with Windows screen magnifier set at 200%

Keyboard/mouse alternatives

These technologies simulate the functionality of the keyboard or the mouse for people with more severe physical disabilities. For example, a sip-and-puff device allows a person to send signals to the computer by inhaling or exhaling into a wand, while an eye-tracking system allows mouse control through eye movements.

Video: Watch Zack Collie, a quadriplegic young man and gamer, explaining how he uses sip-and-puff technology to play video games.

What next?

Given a web page, how does a screen reader know which information to share with the user? How does voice recognition software know which commands to execute? To answer these questions, we need to learn about a key concept: the accessibility tree.