UW News

Four AI-generated images show different interpretations of a doll-sized "crocheted lavender husky wearing ski goggles," including two pictured outdoors and one against a white background.
Seven researchers at the University of Washington tested AI tools' utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems. These AI-generated images helped one researcher with aphantasia (an inability to visualize) interpret imagery from books and visualize concept sketches of crafts, yet other images perpetuated ableist biases. Photo: University of Washington/Midjourney. AI-GENERATED IMAGE

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. These tools could summarize content, compose messages or describe images. Yet the degree of this potential is an open question, since, in addition to regularly producing inaccurate information and flawed reasoning, these tools can perpetuate ableist biases.

This year, seven researchers at the University of Washington conducted a three-month autoethnographic study, drawing on their own experiences as people with and without disabilities, to test AI tools' utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

"When technology changes rapidly, there's always a risk that disabled people get left behind," said senior author Jennifer Mankoff, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "I'm a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn't, we thought we had a unique opportunity to tell a story and learn about this."

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, "Mia," who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave "completely incorrect answers." In one case, the tool was both inaccurate and ableist, changing a paper's argument to sound like researchers should talk to caregivers instead of to chronically ill people. "Mia" was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the "most insidious" problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, "Mia" used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

Mankoff, who's spoken publicly about having Lyme disease, contributed to this account. "Using AI for this task still required work, but it lessened the cognitive load. By switching from a 'generation' task to a 'verification' task, I was able to avoid some of the accessibility issues I was facing," Mankoff said.
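The "generation to verification" shift can be made concrete: rather than trusting model-produced references outright, a lightweight script can flag outputs that break an expected pattern so a human only reviews the suspects. The sketch below is purely illustrative and not from the study; the reference convention and function name are invented for the example.

```python
import re

# Hypothetical convention: each formatted reference should end with a
# four-digit year in parentheses followed by a period, e.g. "(2023)."
REF_PATTERN = re.compile(r"\(\d{4}\)\.$")

def flag_suspect_references(refs):
    """Return references that fail the expected format, for human review."""
    return [r for r in refs if not REF_PATTERN.search(r.strip())]

model_output = [
    "Author, A. A Study of Chatbots. (2023).",
    "Author, B. Another Study. (203).",  # malformed year slipped in
]
print(flag_suspect_references(model_output))  # flags only the second entry
```

A check like this cannot prove a reference is correct, but it turns a slow generation-and-proofreading task into a quicker scan of flagged items, which is the kind of cognitive-load reduction Mankoff describes.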

The results of the researchers' other tests were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages "robotic," yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn't apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of "people with a variety of disabilities looking happy but not at a party," the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

"I was surprised at just how dramatically the results and outcomes varied, depending on the task," said lead author Kate Glazko, a UW doctoral student in the Allen School. "In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting (can you make it this way?) the results didn't achieve what the authors wanted."

The researchers note that more work is needed to develop solutions to the problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave "Mia" and when "Jay," who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it "didn't make any sense at all." The frequency of AI-caused errors, Mankoff said, "makes research into accessible validation especially important."

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

"Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place," Glazko said. "For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites."
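One small example of the kind of default Glazko describes is alt text on images in generated HTML. The sketch below (a minimal illustration, not tooling from the study; the class and function names are invented) scans markup for images that lack an alt attribute entirely, which is one rule automated checks can catch reliably:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of <img> tags that have no alt attribute at all.

    An empty alt="" is left alone: it is the standard way to mark an
    image as decorative, so only a missing attribute is flagged.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_dict = dict(attrs)
            if "alt" not in attr_dict:
                self.missing_alt.append(attr_dict.get("src", "<no src>"))

def find_images_missing_alt(html: str) -> list[str]:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

snippet = (
    '<p><img src="chart.png">'
    '<img src="husky.png" alt="A crocheted lavender husky wearing ski goggles"></p>'
)
print(find_images_missing_alt(snippet))  # ['chart.png']
```

A check like this only covers one narrow rule; as the study's authors found, stating such rules is far easier for AI tools than applying them consistently, which is why machine-verifiable defaults matter.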

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the UW Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.

For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.