
VoxLens: Adding one line of code can make some interactive visualizations accessible to screen-reader users



June 1, 2022

A screen reader with a refreshable Braille display. Elizabeth Woolner/Unsplash

Interactive visualizations have changed the way we understand our lives. For example, they can showcase the number of coronavirus infections in each state.

But these graphics often are not accessible to people who use screen readers, software programs that scan the contents of a computer screen and make the contents available via a synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity.

University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that, with one additional line of code, allows people to interact with visualizations. VoxLens users can gain a high-level summary of the information described in a graph, listen to a graph translated into sound or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value.

The team presented this project May 3 at CHI 2022 in New Orleans.

"If I'm looking at a graph, I can pull out whatever information I am interested in, maybe it's the overall trend or maybe it's the maximum," said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want."

Screen readers can inform users about the text on a screen because it's what researchers call one-dimensional information.

"There is a start and an end of a sentence and everything else comes in between," said co-senior author Jacob O. Wobbrock, UW professor in the Information School. "But as soon as you move things into two-dimensional spaces, such as visualizations, there's no clear start and finish. It's just not structured in the same way, which means there's no obvious entry point or sequencing for screen readers."

The team started the project by working with five screen-reader users with partial or complete blindness to figure out how a potential tool could work.

"In the field of accessibility, it's really important to follow the principle of 'nothing about us without us,'" Sharif said. "We're not going to build something and then see how it works. We're going to build it taking users' feedback into account. We want to build what they need."

To implement VoxLens, visualization designers only need to add a single line of code.

"We didn't want people to jump from one visualization to another and experience inconsistent information," Sharif said. "We made VoxLens a public library, which means that you're going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest."
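The release doesn't show what that one line looks like, so here is a minimal sketch of a VoxLens integration for a D3 bar chart. The `voxlens(library, element, data, options)` call shape and the option names are illustrative assumptions, not the library's documented API, and the chart itself is assumed to be rendered elsewhere into a `#chart` element.

```js
// Hypothetical sketch of wiring VoxLens to an existing D3 chart.
// The call signature and option names below are assumptions for
// illustration; consult the VoxLens documentation for the real API.
import voxlens from 'voxlens';

// The data behind a bar chart that D3 has already rendered.
const data = [
  { state: 'WA', cases: 12000 },
  { state: 'OR', cases: 8000 },
  { state: 'CA', cases: 45000 },
];

// The DOM element containing the rendered visualization.
const container = document.querySelector('#chart');

// The single added line: VoxLens attaches to the chart so
// screen-reader users can request a spoken summary, a sonified
// rendering of the data, or answers to voice commands such as
// "What is the mean?" or "What is the minimum?"
voxlens('d3', container, data, {
  x: 'state', // assumed option: key of the independent variable
  y: 'cases', // assumed option: key of the dependent variable
  title: 'COVID-19 cases by state', // assumed option: spoken chart title
});
```

Because the plugin, rather than each designer, generates the summary and sonification, every chart that adds this call exposes the same consistent interface to screen-reader users.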

The researchers evaluated VoxLens by recruiting 22 screen-reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.

Participants learned how to use VoxLens and then completed nine tasks (one of which is shown here), each of which involved answering questions about a visualization. Each task was divided into three pages: page 1 (a) presented the question a participant would be answering, page 2 (b) displayed the question and the visualization, and page 3 (c) showed the question with four multiple-choice responses. Sharif et al./CHI 2022

Compared to participants in a previous study who did not have access to this tool, VoxLens users completed the tasks with 122% higher accuracy and 36% less interaction time.

"We want people to interact with a graph as much as they want, but we also don't want them to spend an hour trying to find what the maximum is," Sharif said. "In our study, interaction time refers to how long it takes to extract information, and that's why reducing it is a good thing."

The team also interviewed six participants about their experiences.

"We wanted to make sure that these accuracy and interaction-time numbers we saw were reflected in how the participants were feeling about VoxLens," Sharif said. "We got really positive feedback. Someone told us they've been trying to access visualizations for the past 12 years and this was the first time they were able to do so easily."

Right now, VoxLens works only for visualizations created using JavaScript libraries such as D3, chart.js or Google Sheets, but the team is working on expanding to other popular visualization platforms. The researchers also acknowledged that the voice-recognition system can be frustrating to use.

"This work is part of a much larger agenda for us: removing bias in design," said co-senior author Katharina Reinecke, UW associate professor in the Allen School. "When we build technology, we tend to think of people who are like us and who have the same abilities as we do. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it and people are left out. It's really important that we start thinking more about how to make technology useful for everybody."

Additional co-authors on this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.

For more information, contact Sharif at asharif@cs.washington.edu, Wobbrock at wobbrock@uw.edu and Reinecke at reinecke@cs.washington.edu.
