Where accessibility and AI meet: changing lives a few lines of code at a time

Rory Preddy, Senior Content Engineer at Microsoft South Africa

Anyone who has experienced what it is like to be stripped of one of the five senses – sight, taste, touch, hearing or smell – will know how much harder it can be to navigate and make sense of the world. People with other accessibility challenges will most likely tell you the same. Certain day-to-day activities can take longer to complete without tools to make them more accessible – reading, for example, if you’re visually impaired.

Making the world more accessible is a major focus for modern organisations and governments. Our own government recognises the importance of levelling the playing field for people of all capabilities and abilities by marking Disability Rights Awareness Month each year – and one of this year’s sub-themes is ‘Persons with disabilities as equal players in building inclusive economies’.

Accessibility is a key part of achieving this. It is increasingly a business imperative, and we are beginning to see some incredible developments in AI-powered cognitive products and services that show it is possible to change lives a few lines of code at a time.

As a case in point, research has shown that sight is the sense humans rely on most because it forms the cornerstone of human learning, cognition and perception: between 80 and 85 percent of these activities take place through our visual recognition and intelligence system. So it’s not difficult to imagine how challenging it must be to navigate a world without 20/20 vision.

I got glasses myself a few months ago, and between them and a collection of clever tools I’ve been using, my eyes have been opened to just how much cognitive services and products can change and improve everyday life.

This understanding of the essential role cognitive tools play in opening up accessibility has long been the driving force behind Microsoft’s AI Cognitive Services team’s work to create and develop AI tools and solutions that make a tangible difference. These include apps and tools like Azure Florence, Seeing AI, Azure Image Captioning and Immersive Reader. I have seen, worked with and benefitted from these technologies, and I have also witnessed how they are helping other people make sense of the world around them.

Azure Florence, one of Microsoft’s most recent cognitive projects, focuses on developing the best computer vision technologies and algorithms to detect and translate information across multiple channels, such as vision and language, helping people perceive the world around them more accurately and easily.

Equipping AI with human capabilities to help make sense of the world

Seeing AI, for example, helps people with vision impairments by converting visual information into an audio narrative. It uses the device’s camera to identify people and objects and then describes them to the person. The app began as a beta project and has since been refined to the point where some of its capabilities can even surpass human performance.
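
To make the idea concrete, here is a minimal sketch (not Seeing AI’s actual implementation) of the same kind of pipeline using the publicly available Azure Cognitive Services SDKs for Python: the Computer Vision service produces a plain-language caption for a photo, and the Speech service reads that caption aloud. The endpoint, keys and file name are placeholders.

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials
import azure.cognitiveservices.speech as speechsdk

# Placeholder resource details - substitute your own Azure resources.
VISION_ENDPOINT = "https://<your-vision-resource>.cognitiveservices.azure.com/"
VISION_KEY = "<computer-vision-key>"
SPEECH_KEY = "<speech-key>"
SPEECH_REGION = "<speech-region>"

vision = ComputerVisionClient(VISION_ENDPOINT, CognitiveServicesCredentials(VISION_KEY))

# Ask the Computer Vision service to describe a photo (here a local file
# standing in for a camera frame).
with open("camera_frame.jpg", "rb") as image:
    analysis = vision.describe_image_in_stream(image)

caption = analysis.captions[0].text if analysis.captions else "No description available."
print(caption)

# Read the description aloud, turning the visual scene into an audio narrative.
speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async(caption).get()
```

The describe operation returns a short, confidence-scored sentence, which the Speech SDK then plays through the device’s default speaker.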

The same is true of our Azure image captioning bot, which not only analyses images through facial recognition, but also recognises emotions and describes the picture in natural, human language.
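
To give a sense of the code involved, here is a hedged sketch of the face-and-emotion side of such a bot using the Azure Face client library for Python (the caption itself could come from the describe call shown above). The resource details and image URL are placeholders, and emotion recognition has since been retired from the Face API under Microsoft’s Responsible AI Standard, so this illustrates the capability as described here rather than the current service surface.

```python
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

# Placeholder resource details - substitute your own Face resource.
FACE_ENDPOINT = "https://<your-face-resource>.cognitiveservices.azure.com/"
FACE_KEY = "<face-key>"

face_client = FaceClient(FACE_ENDPOINT, CognitiveServicesCredentials(FACE_KEY))

# Detect faces in a picture and request the emotion attribute for each one.
faces = face_client.face.detect_with_url(
    url="https://example.com/group-photo.jpg",  # hypothetical image URL
    return_face_attributes=[FaceAttributeType.emotion],
)

for face in faces:
    emotions = face.face_attributes.emotion
    # Compare a few of the returned scores and report the strongest one.
    scores = {
        "anger": emotions.anger,
        "happiness": emotions.happiness,
        "neutral": emotions.neutral,
        "sadness": emotions.sadness,
        "surprise": emotions.surprise,
    }
    strongest = max(scores, key=scores.get)
    rect = face.face_rectangle
    print(f"Face at ({rect.left}, {rect.top}) looks mostly {strongest}")
```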

Anyone who works with AI knows that describing an image as accurately and naturally as a person can is the ultimate goal, so to be able to do that and help customers and developers improve accessibility in their own services – while also assisting people who are blind or have low vision – is a proud moment.

Immersive Reader also assists with a range of accessibility issues by improving readability. It was designed to support students with dyslexia and dysgraphia in the classroom, but it can help anyone who wants to make reading on their device easier. Its features include reading text aloud, changing text options such as size and font, identifying parts of speech and syllables, setting line focus – which can also help people with Attention Deficit Hyperactivity Disorder (ADHD) – and translating the text into another language.
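
For developers, wiring Immersive Reader into an application is also only a few lines of code. Below is a minimal back-end sketch, assuming an Azure AD application registered against an Immersive Reader resource (all identifiers are placeholders): it requests the access token that the page’s JavaScript then passes, together with the resource subdomain and the content to read, to the Immersive Reader SDK’s launchAsync call.

```python
import requests

# Placeholder Azure AD details for an Immersive Reader resource.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
SUBDOMAIN = "<immersive-reader-resource-subdomain>"

# Request an Azure AD token scoped to Cognitive Services. The front end hands
# this token and the subdomain to the Immersive Reader JavaScript SDK, which
# opens the reading view over the supplied text.
response = requests.post(
    f"https://login.windows.net/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://cognitiveservices.azure.com/",
    },
)
response.raise_for_status()
token = response.json()["access_token"]
print("Token acquired for Immersive Reader subdomain:", SUBDOMAIN)
```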

Although just a small cross-section of what is being done to improve accessibility, these examples show that accessibility tools are becoming ever more accurate and user-friendly, truly helping people experience and make sense of the world around them and levelling the playing field for people with accessibility challenges.  

The time is now for people with accessibility challenges: with more companies making the move to remote and hybrid working models, we are no longer in offices that often act as barriers themselves. We are now, more than ever, part of the digital revolution – and can begin to enjoy a more accessible and inclusive world through the power of AI.

 

