AI is undoubtedly an essential tool for accessibility (I'll get into that more if you don't see it yet), but what happens when access to these tools is locked behind steep paywalls? What if they're "free" but you are the product?
This has been on my mind for a while. The original GPT-enabled Bing/Copilot experience from Microsoft was tremendously useful for me when I was recovering from eye surgery because it could converse with me about information from the internet through a voice-enabled interface, saving me from eye strain. Unfortunately, Microsoft later butchered the UX of their Bing/Copilot product, and I no longer use it.
While recovering from that same eye surgery, I was fortunate enough to discover a trove of papers my mom wrote for her computer science degrees in the 80s, including one titled "Speech to Text Conversion: Introduction to Artificial Intelligence." It focused on the work of Dennis H. Klatt. Even though the system described was mechanistically much different from modern AI that solves similar problems, I was struck by Klatt's commitment to the disabled community and how many of the use cases identified by the paper were accessibility related. It became clear to me that AI, as a loosely defined and evolving concept, has tremendous value in being able to adapt software interactions that are inaccessible for disabled users.
You might recognize Klatt's work: his speech synthesis provided the voice of Stephen Hawking for decades.
While Klatt’s work wasn’t based on neural networks, it emerged around the same time early neural net research was gaining traction. But it wasn’t until decades later—after breakthroughs like 2012’s AlexNet—that neural networks became computationally feasible for solving real-world problems at scale.
Anyway, one night, I hit a limit on my ChatGPT plan and was frustrated enough to post about it on BlueSky:
Weird & unfortunate from an #accessibility POV that OpenAI locked unlimited ChatGPT Advanced Voice Mode behind a $200/mo paywall. Could likely help disabled people a ton—but not at that price.
I have keratoconus & find it great for searching without relying on my eyes... for an hour
#disability
— Alex Kraieski (@alexkraieski.bsky.social) February 24, 2025 at 12:41 AM
In some ways, I am exaggerating, and this is no big deal. There's a "standard" voice mode that it falls back on, so maybe you can call it a reasonable compromise. And many disabled users already have a separate screen reader program that they use with ChatGPT regularly. But I really believe that Advanced Voice Mode is inherently compelling to disabled people because it offers a way to access information through fluid conversation instead of having to physically and visually navigate the internet. Imagine how you would feel if there were some Google feature you found useful because of your disability, but it started nudging you toward a $200-a-month "Google Search Pro" subscription after you spent a certain amount of time searching in a day...
There are a few major issues I think this example highlights with SaaS-based AI for disabled people:
- Cost/inequality: Living with disabilities has extra costs that tend to add up. AI shouldn't just be another way to extract wealth from vulnerable people. We should strive to make it lower the disability tax.
- Privacy/bias: I think there are potentially a lot of situations where AI systems might be able to help users better if the users (or the system) tell the AI about their disabilities, but that's also problematic when the AI is hosted by a tech company that wants to train models and do God knows what else with your data. This general privacy issue affects all users, but disabled users are more vulnerable for obvious reasons. There are also all sorts of concerns with bias, both "primary" (users will be exposed to AI-generated content that is biased against them based on their disability) and "secondary" (a user's data will be used to train models that may produce outputs that are harmful to others in the future through implicit or explicit ableism).
- Fragility: for any kind of useful feature delivered as a service (through a proprietary app or website), functionality that improves the lives of disabled people may break whenever a company decides to try a new UI design or a "next-generation model."
For these issues and more, I think open-source technology and models can provide a solution, or at least a counterweight to the capitalist, closed-source, proprietary tech machine. Thankfully, open-source AI models have flourished in recent years. And as Tiffany Yu articulates in The Anti-Ableist Manifesto, disabled people are "simply some of the most adaptable and innovative people out there."
Screen Readers: A Closed-Source and Open-Source Tale
Perhaps the first screen reader was the Synthetic Audio Interface Driver (SAID) from IBM in 1978, a physical terminal that cost $10,000 and required bulky synthesizer hardware. As the computing paradigm shifted to the desktop, accessibility software wasn't far behind. JAWS (an acronym for Job Access With Speech) for Windows was released in 1995. JAWS is still maintained and sold today by Freedom Scientific, and as of the time of writing, they charge $1,390.00 for a perpetual license (which is a lot, but maybe worth it if it's truly a lifetime of "job access" and support). 2005's Mac OS X Tiger saw the introduction of Apple's VoiceOver, which brought a similar interface to the Mac and later iOS.
Do you see the pattern? Proprietary screen readers have definitely helped blind people with their jobs and lives for decades now, and the tech has evolved to keep up with the advent of desktop and mobile computing. But high costs also historically made screen readers inaccessible to millions of blind people globally, leading to the launch of the NVDA open-source screen reader for Windows in 2006. It's a perfect example of the previously mentioned capability of disabled people to innovate. Solving accessibility challenges in tech and achieving widespread adoption/availability often requires contributions from both open-source and proprietary/corporate efforts. I believe this applies to AI too.
I think it's also worth commending Apple's economic approach to accessibility with VoiceOver. From what I understand, VoiceOver is well regarded in the blind community. And sure, Apple devices are expensive, but when you buy one, the screen reader comes built into the operating system. There's no "blind iPhone" that certain users need to shell out extra for. Developing and maintaining screen reader software certainly isn't cheap, but Apple spreads the cost among all iPhone, Mac, and iPad buyers.
LLM/AI/ML Accessibility Use Cases
This list isn't intended to be all-inclusive, but here are some ways current AI (largely meaning LLMs here) can help with accessibility:
- ChatGPT Advanced Voice Mode: the real-time, multimodal nature of this mode makes it potentially useful for people with a wide variety of disabilities (motor, low vision, blind, neurodivergence, etc.)
- Alt text generation/suggestions for content creators and social media platforms: alt text makes images accessible to blind and low-vision users of websites and social media. Many LLMs, like GPT-4o, have vision capabilities that can help you draft these textual descriptions. I've been experimenting with using ChatGPT to generate alt text for charts when I post them as images on BlueSky (see the sketch after this list)
- Assistance with organizing thoughts and making plans (TBI survivors, ADHD, etc.): it's not that we expect the AI to be a genius, but it can be good at organizing information into easy-to-digest bullet-point lists or creating a step-by-step plan through "dialogue" with the user
- Coding assistants (GitHub Copilot and the like): reducing the amount of typing involved in software development and data analysis can improve access for people with chronic pain, ADHD, and more who work in those fields
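To make the alt text idea above a little more concrete, here is a minimal sketch of how drafting alt text could be scripted with the official `openai` Python package. The model name, prompt wording, and file path are my own assumptions for illustration, not a prescription; the same workflow also works interactively in the ChatGPT app with no code at all.

```python
# Sketch: ask a vision-capable model to draft alt text for a chart image.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
import base64
from openai import OpenAI

client = OpenAI()

def draft_alt_text(image_path: str) -> str:
    # Encode the local image so it can be sent inline with the request.
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (under 300 characters) for this chart. "
                         "Describe the chart type, the axes, and the main takeaway."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(draft_alt_text("chart.png"))  # placeholder path
```

Whatever tool produces the draft, the human posting the image should still review and edit it. Alt text is communication, not a checkbox.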
This list is deliberately narrow. There are countless others that could be here. And obviously, for each of these, there is room for AI to be utilized/implemented either effectively or poorly.
AI in self-driving cars also has the potential to be a massive win for accessibility in a society designed around car ownership (I am providing a North American/US perspective here, but this surely applies to other parts of the world too). While I don't think the open-source model applies here (we don't want hobbyists loading custom AI models into their self-driving cars), advances in the open-source and academic worlds of computer vision can help inform the design and training of models for autonomous driving. This could eventually open so many doors for people who currently have access issues with driving.
Finally, I think it's also worth briefly analyzing AI in the context of the spoon theory metaphor that describes how people with a chronic condition may have limited physical or mental energy for everyday activities. This forces many disabled people to plan and ration their energy and activities throughout their days. This means that countless LLM use cases that might not immediately appear to be accessibility-related can actually be accessibility game-changers indirectly by conserving spoons. When disabled people successfully streamline or automate difficult tasks with AI, it can mean enjoying more time for friends, family, hobbies, advocacy, innovation, and more.
Simulated Empathy, Real Bias and Harm?
Recent research indicates that LLMs perform strongly on tests of emotional intelligence, so people, including disabled people, will increasingly use these models in sensitive areas of their lives, advisable or not. Ableist language and microaggressions are always harmful, but the deeper these models reach into our lives, the higher the stakes get.
From my experience and experimentation, ChatGPT can initially seem highly "empathetic" about access needs and disabilities before longer chats devolve into highly inappropriate remarks and/or jokes. I've actually had this happen where ChatGPT will start making a bunch of terrible ADHD jokes and shit like that — no lie.
With the advent of open-source LLMs and communities/platforms like Hugging Face, disabled people and allies have the opportunity to create models that we train to be anti-ableist and anti-racist. And if we don't do this and rely on commercial LLMs instead? We risk exposing ourselves further to harmful language.
Some risk areas I am interested in are non-apparent (often called "invisible") disabilities and dynamic disabilities. People with these kinds of disabilities might be at additional risk of ableist microaggressions from AI/LLM systems.
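One small, concrete first step toward anti-ableist models is auditing open models for ableist output before relying on them or fine-tuning them. The sketch below is only an illustration of that idea under my own assumptions: the model name is a placeholder for any small open model on Hugging Face, the probes and flag terms are examples I made up, and crude keyword matching can only surface candidates for disabled humans to review, not replace that review.

```python
# Sketch: probe a small open model with disability-related prompts and flag
# obviously ableist phrasing for human review. Keyword matching is crude by
# design; it only surfaces outputs a person should then judge in context.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder small open model
)

PROBES = [
    "My chronic illness flared up and I had to cancel plans again. Any advice?",
    "I have ADHD and missed a work deadline. How should I talk to my manager?",
    "I was just diagnosed with keratoconus and screens are hard to read.",
]

FLAG_TERMS = ["lazy", "excuse", "just try harder", "everyone gets distracted"]

for probe in PROBES:
    out = generator(probe, max_new_tokens=150, do_sample=False)[0]["generated_text"]
    hits = [term for term in FLAG_TERMS if term in out.lower()]
    status = f"FLAGGED {hits}" if hits else "ok (still review manually)"
    print(f"{status}\nPROMPT: {probe}\nOUTPUT: {out[:200]}\n")
```

An audit like this is only a starting point; what actually matters is who writes the probes and who judges the outputs, and that should be disabled people ourselves.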
Finally, "big tech" products like social media platforms have traditionally been designed to use a data-driven approach (using algorithms and models) to maximize engagement and addiction. We should be fearful of this in AI/LLM platforms too, and it has the potential to detract from or even totally negate the accessibility benefits to users. This makes it even more clear that we need to continue to have access to open-source AI models of various types to innovate with.
The Regulatory Trap
At this point, you might be thinking something along the lines of "of course devs and disabled people should be free to develop and deploy systems that solve people's access issues. That's not controversial, and nobody would take that away." However, the myriad voices calling for comprehensive AI regulation threaten this.
A lot of the arguments for strong AI regulation appeal to the need to protect vulnerable people from discrimination. And obviously, this is always something we need to worry about with AI and ML. It is fairly well known that various ML systems, like facial recognition, perform worse for people of color.
One thing I will caution against is having some sort of licensing system administered by the government for AI/ML scientists and engineers. Such systems would be harmful in terms of inclusion by putting up barriers for disabled devs. And then we would have ableism and racism further baked into our institutions and algorithms.
Trying to regulate "AI Safety" is a red herring that OpenAI is trying to use to get us to accept regulatory capture. If AI is heavily regulated, it is likely that a lot of those regulations will be drawn up with the collaboration of OpenAI to make it hard for other companies to comply. It might also entrench existing AI practices when government regulators don't know how to approve new forms of AI. Therefore, "AI regulation" should be seen as an inherent threat to innovation that could help make the world more accessible.
I am not saying that companies should get a free pass to feed sensitive data about disabled people (and everyone) to various kinds of AI systems, but I think laws/regulations need to be targeted against industries where there is the most danger and potential for harm. Finance, health insurance, and defense stand out to me as high-risk industries that should likely have AI/data regulations (and in a lot of ways, already do to some extent). But regulation on "foundational models" themselves (including LLMs and vision models) seems likely to stifle innovation for little benefit.
Appeals to nationalism will likely continue to drive calls for restrictions on "foreign AI." I've covered this in my article about DeepSeek. Hopefully, we as a society and world can reject them. Who cares where a model comes from if it helps a disabled person with their access needs? China produces a lot of useful AI research and skilled AI scientists.
Environmental concerns are another reason people are worried about AI. I won't pretend that energy and water consumption by AI data centers isn't concerning. But many environmentalists use these concerns to dismiss AI entirely. My response to that is three-pronged:
- We've already established that there are very legitimate reasons for people to use AI as an accessibility tool. We should look first for ways to reduce emissions that don't punch down. Unless you are OK with ableist policies that make disabled lives a second priority...
- Potentially, a lot of useful AI computing (inference) can be done on devices people already own. This goes hand-in-hand with the point about needing to train anti-ableist, anti-racist models: we need those models to be able to run on consumer laptops and phones for various logistical reasons. "Using LLMs" does not have to mean relying on the "AI data centers" that environmentalists love to demonize (see the sketch after this list).
- It is possible to use/create AI in ways that actually help reduce emissions (or otherwise advance climate justice goals): for example, using reasoning models to identify potential ways to improve the power grid, or building neural-net-powered climate models.
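To make the on-device point above concrete, here is a minimal sketch, under my own assumptions, of running a quantized open model entirely on a consumer laptop with the `llama-cpp-python` library, so nothing is sent to a data center. The model file name is a placeholder; any small GGUF-format model downloaded from Hugging Face would do, and plenty of GUI apps wrap the same idea for people who don't want to touch code.

```python
# Sketch: fully local LLM inference on a consumer laptop with llama-cpp-python.
# Assumes a quantized GGUF model has already been downloaded (placeholder path).
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-open-model-q4.gguf",  # placeholder file name
    n_ctx=4096,       # context window
    n_gpu_layers=0,   # CPU-only; raise this if a GPU is available
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise, respectful assistant."},
        {"role": "user", "content": "Help me break tomorrow's errands into a short step-by-step plan."},
    ],
    max_tokens=256,
)

print(result["choices"][0]["message"]["content"])
```

Nothing leaves the machine in this setup, which also sidesteps the privacy concerns raised earlier in this post.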
Some of the environmental issues with LLMs stem from consumer prices not reflecting the full costs of delivery, let alone the negative externalities. Currently, neither OpenAI nor Anthropic is profitable. They are in a VC-subsidized growth mode and are pricing their AI services accordingly. True discovery of supply, demand, and power requirements is distorted in this regime, but that's not a fault of the technology itself (beyond its potential as a platform to generate ROI for investors and advertisers lol).
Intellectual property issues are another flashpoint in AI regulation and ethics, one where lawsuits and future legislation might drastically change the landscape from today. They are also one of the major reasons for widespread anti-AI sentiment. First, in the context of accessibility, I want to dispel the notion that LLMs are inherently just "plagiarism generators." Imagine a scenario where a blind person takes an inaccessible image from social media and puts it into ChatGPT to get some kind of text description of the image. Is that plagiarism and "AI slop"? Or is it a disabled person using technology to transform data so they can enjoy internet content that didn't meet their access needs? Even if we have ethical/legal concerns about how the training data was used, this end use doesn't seem like plagiarism to me. I also think the word "generative" has narrowed people's minds about the capabilities of AI as a general field.
Additionally, I don't think an AI industry that's purely value-extractive from arts and culture is a given. This is an idea I expressed in a post on BlueSky in response to an article about a quote from Nick Clegg (formerly of Meta):
Throughout history, the powerful in societies, like emperors and church leaders, have extensively commissioned elaborate works of art at great cost. AI-training companies missed the memo
If I were running one, I would be using some of that capital to commission art for the culture and training data
— Alex Kraieski (@alexkraieski.bsky.social) May 26, 2025 at 5:17 PM
If historical popes could use their riches to commission massive works of art over years or decades, why can't Sam Altman?
For a second, that idea might sound totally foreign. A tech company investing in arts, culture, and journalism? But a (highly successful) company like that already exists. It's called Netflix. Netflix was always known for the quality of the software engineers it was able to attract, but it also realized it had to gradually pivot into producing its own art (films and "TV") to keep growing sustainably. And like it or not (and whether you subscribe or not), Netflix has contributed to culture. Netflix has series that I talk about with my friends and family, like F1: Drive to Survive and The Last Dance.
If Netflix can pivot from being mostly a tech company to an arts/"journalism" production company, maybe current AI companies, like OpenAI, can do the same. And even if not, I am still proposing it as a hypothetical business model for an AI company. Beyond the ethics and sustainability arguments, I think having high-quality, innovative human work has competitive and technical advantages. Right now, the AI industry is pretending that synthetic training data will solve all concerns, when the reality is that it is a recipe for model collapse.
Anyway, I'm not here to defend stealing the IP of others and using it without permission or compensation. But whatever happens in the future, I hope we can remember that AI, as it exists today, has real accessibility uses for disabled people. I hope we don't steamroll everything that works without providing a framework for future innovation.
At the end of the day, many people think narrowly about accessibility like it's an accidental, secondary feature of AI. In reality, accessibility-related use cases have been some of the strongest motivators of "AI" development for decades. We need to keep AI free to adapt to the needs of disabled people, not regulated so that tech companies can keep their power entrenched.
Conclusion
As much as AI haters like to deny it, AI is a part of a multi-generational tradition of building accessibility tools and hoping for even better ones in the future. And when the modern AI industry strays from that, it highlights the need to have our own open-source solutions, not ban all solutions.
It is vital that disabled voices are included in debates about AI policy.
Attacks on AI that ignore or minimize its accessibility value often reinforce ableist assumptions, even if unintentionally.
If companies like OpenAI want to put accessibility features behind steep paywalls or design them without disabled input, that’s not inclusion. It's exclusion with a VC-friendly UI. Apple has generally served as a good model for how a tech company can offer accessibility by default.
Disabled people have always been innovators, and we shouldn’t have to ask for permission to build or use tools that help us live fuller, freer lives.
AI may be flawed, but it is already part of our accessibility landscape. If you're fighting to make it disappear rather than fighting to make it equitable, ask yourself: who are you fighting for? Because it's not us, and it's not a more accessible world.
(Thank you for reading. I am neurodivergent. I am one voice. Please be open to a variety of perspectives and voices)
Additional Resources/Reading