AI for All: Why disability inclusion is vital to the future of artificial intelligence.
Disabled people are embracing artificial intelligence (AI) in many parts of their lives, from education and employment to communication and navigation.
Many AI tools weren't made specifically with disabled people in mind, or as assistive technology. But disabled people often use them to make the world more accessible and increase autonomy.
For example, image recognition can describe a photo or an object to someone with a visual impairment. Smart home solutions can introduce automation to everyday routines. Natural language processing can help people understand documents by summarising them, or help them communicate by suggesting responses to emails.
These tools are starting to address some of the barriers that disabled people face, especially in the digital landscape. In theory, they can create a much fairer world, particularly as the technology continues to develop and becomes easier to access.
But for that to happen, AI must be built on a foundation of fairness.
How are inaccessibility and inequality built into AI from the outset?
Data from disabled people’s experiences and usage is often treated as an outlier, so it’s left out entirely and not used to train AI systems.
For example, people with speech conditions have found that most voice-activated technology doesn’t work for them. This is because the technology wasn’t trained on voices like theirs, as they aren’t considered a 'normal' scenario.
And people with facial differences can find that facial-recognition technology doesn’t work for them either, for the same reason.
If the data being used to train AI excludes disabled people, the result can't be inclusive.
AI bias is the underlying prejudice in the data used to create algorithms. Algorithms themselves don’t create biases, but they can sustain prejudices and inequalities that already exist in society. We've already seen too many examples of AI systems that demonstrate racial, gender and age-based bias.
These biases can enter at any point of the development process, whether through developer bias, a lack of data, or organisational practices.
Case study
Amazon experienced this in 2015, when they realised their hiring algorithm was biased against women. The algorithm had been trained on applications submitted over the previous decade, and far fewer of these were from women. As a result, it was inadvertently trained to favour applications from men.
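The mechanism behind this kind of bias can be illustrated with a deliberately simplified sketch (the data and scoring rule below are hypothetical, not Amazon's actual system): when one group is heavily under-represented in the training history, a model that learns from past outcomes inherits that skew.

```python
from collections import Counter

# Toy historical data: (group, hired) pairs.
# Group B has far fewer past applications than group A.
history = (
    [("A", True)] * 80 + [("A", False)] * 120 +
    [("B", True)] * 5 + [("B", False)] * 15
)

hired_counts = Counter(group for group, hired in history if hired)
total_counts = Counter(group for group, _ in history)

# Naive "model": score each group by its past hire rate.
# Sparse, skewed data for group B produces a lower score,
# so new applicants from group B are systematically down-ranked.
scores = {g: hired_counts[g] / total_counts[g] for g in total_counts}
print(scores)  # {'A': 0.4, 'B': 0.25}
```

The point of the sketch is that no line of this code is "prejudiced"; the unfairness comes entirely from the historical data the scores are computed from.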
Considerations for removing bias from AI.
Creating unbiased algorithms is difficult, because it means using unbiased training data and relying on unbiased human involvement. But there are ways to minimise the risk.
And tackling bias cannot be an afterthought, as accessibility often is.
In the end, it's quite a simple concept to keep in mind. If AI is only trained on data from an ableist and inaccessible society, how can we expect a fair result?
We can’t.
That’s why disabled people need to be involved from beginning to end. We need to be involved in research, development, accountability, and testing, and, certainly, in discussions around AI fairness and ethics.
And we must make sure this happens before we fully integrate AI into critical tasks, such as making appointments, accessing education, and applying for jobs. We risk excluding disabled people, among other groups, if these technologies can't be used fairly by everyone.
Developing AI with disabled people in mind.
Some AI tools were created with disabled people in mind from the very beginning. And we can learn a lot from them.
Be My AI, from Be My Eyes and powered by GPT-4, is described as a 'virtual volunteer'. It's currently being beta tested by people with visual impairments worldwide.
It's quick and simple to use, and reduces the need for human volunteers. You simply open the app, take a picture, and you'll be provided with a detailed description of the image. You can then ask questions about the image to help build further context and clarity. People are already using it to choose outfits, organise their homes, and identify objects.
But importantly, it doesn't entirely replace human support. Human volunteers are still available to verify the AI results, or to approach directly. It gives disabled people a choice and the autonomy to do what works best for them. And that ability to choose makes it more accessible.
How is Scope using AI?
We are still at the beginning of our journey with AI, and we're taking time to learn. But we’re making some exciting progress.
Overall, we’re enthusiastic about the potential positive impact AI could have not just in our own work but in the lives of disabled people.
And we’ll continue to share updates as we take this journey.
We believe in a world that is accessible and free of barriers. And while technology can’t solve everything, it can certainly go a long way if used considerately.