How can tech improve the quality of life for people with disabilities?

  • AI can describe photos for Instagram users with visual impairment
  • Google released a new app that describes objects to blind people
  • A Japanese company is developing smart glasses that convert words into voice
  • Dot Watch is a smartwatch for the visually impaired that displays information in braille
  • Google released two new apps aimed at the hearing impaired
  • Researchers from Columbia University have developed a system that converts thoughts into speech

The World Health Organization estimates that more than a billion people worldwide live with some form of disability. This figure accounts for approximately 15 per cent of the world’s population and is set to rise even further in the future, driven partly by factors such as the ageing of the population and the growing number of people suffering from chronic health conditions. Disability can have a detrimental impact on a person’s quality of life, with somewhere between 110 and 190 million people over the age of 15 estimated to have significant difficulties in functioning.

With so many people affected by disabilities, it comes as no surprise that a number of companies, including tech giants such as Google, Facebook, Apple, and Microsoft, are increasingly using artificial intelligence, computer vision, and voice recognition technology to develop tools aimed at people who are blind, deaf, have motor impairment, or some other form of disability.

While tech companies often cite their desire to be more inclusive as the main motivation for developing these tools, there are certainly other factors at play as well. Companies are constantly striving to increase their profits and expand their customer base, which is difficult to achieve if they overlook such a large portion of the population by releasing products these users can’t access. The recent rise of voice-activated speakers and the increased use of captioning on websites and social media have done a great deal to improve access to some internet services for people with disabilities, but there’s still a lot of work to be done before they can enjoy everything able-bodied people do.

AI can describe photos for Instagram users with visual impairment

Instagram is a perfect example of the issues people with disabilities face on the internet. The beloved social media platform boasts more than a billion monthly active users, but being visual in nature, it’s not exactly geared towards people with visual impairment. Instagram wants to change that, though, and recently announced a new feature that should make the app more accessible to the 253 million people around the world affected by moderate to severe visual impairment or blindness.

Visually impaired people usually rely on screen readers to browse internet content. A screen reader is a piece of software that describes what’s on the screen, reading out custom image descriptions provided in the form of alternative text, also known as ‘alt text’. Problems arise when this alternative text isn’t available, which unfortunately happens quite often. But Instagram has found a way to solve this issue with the help of artificial intelligence.
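
To make the alt-text mechanism concrete, here’s a minimal Python sketch of what a screen-reader-style tool does when it encounters images on a page. The URL and fallback string are illustrative placeholders, and requests and BeautifulSoup are simply convenient stand-ins, not what any real screen reader uses internally.

```python
# A minimal sketch of how a screen-reader-style tool might extract
# alt text from a web page. Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def describe_images(url: str) -> list[str]:
    """Return a spoken-style description for every image on the page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    descriptions = []
    for img in soup.find_all("img"):
        alt = (img.get("alt") or "").strip()
        # This is exactly the gap the article describes: when no alt
        # text is provided, the screen reader has nothing to read.
        descriptions.append(alt if alt else "Image with no description")
    return descriptions

print(describe_images("https://example.com"))  # illustrative URL
```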

From now on, as users scroll through their feed, the Explore page, or another user’s profile, Instagram will use object recognition technology to automatically identify what’s in a photo and generate a description that screen readers can read aloud when no alt text has been provided. Furthermore, all users will be able to add their own custom descriptions to each photo they upload by going into the photo’s advanced settings. These custom descriptions won’t be displayed to other users and will only be visible to screen readers, creating a smoother Instagram experience for people with visual impairment.
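
The automatic fallback can be sketched in the same spirit: when no alt text exists, run the photo through an image classifier and turn the top prediction into a one-line description. The snippet below uses a pretrained torchvision model as a stand-in; Instagram’s actual model is unpublished, so this is only an illustration of the idea.

```python
# A rough sketch of the fallback idea: classify the image and phrase the
# top prediction as alt text. Requires: pip install torch torchvision pillow
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the preset transforms for this model

def auto_alt_text(path: str) -> str:
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    label = weights.meta["categories"][int(logits.argmax())]
    # Hedged phrasing, mirroring how auto-generated descriptions are worded.
    return f"Photo that may contain: {label}"

print(auto_alt_text("photo.jpg"))  # e.g. "Photo that may contain: golden retriever"
```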

Google released a new app that describes objects to blind people

Google recently released an app that could assist people with visual impairment well beyond Instagram. The app, called Lookout, uses AI to identify and describe objects in the user’s surroundings. Based on the same underlying technology as another Google app, Google Lens, it allows users to learn more about the scene or objects around them by simply pointing their phone at them.

The app features three different modes: Explore, Shopping, and Quick Read. The Explore mode allows users to navigate a new space by providing constant updates about objects they encounter as they move through it. The Shopping mode helps them scan barcodes or read currency, while the Quick Read mode reads out any text the camera is pointed at. So, regardless of the situation they find themselves in, Lookout will provide visually impaired people with an unprecedented degree of independence. All they have to do is launch the app and keep their phone pointed forward.
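
Google hasn’t published Lookout’s internals, but the Quick Read flow can be approximated with off-the-shelf components: optical character recognition followed by text-to-speech. Here is a minimal sketch, using pytesseract and pyttsx3 purely as illustrative stand-ins:

```python
# A minimal "Quick Read"-style sketch: OCR an image and read it aloud.
# Requires: pip install pytesseract pyttsx3 pillow, plus the Tesseract binary.
import pytesseract
import pyttsx3
from PIL import Image

def quick_read(image_path: str) -> None:
    # Extract whatever text the camera captured.
    text = pytesseract.image_to_string(Image.open(image_path))
    if text.strip():
        # Speak it through the device's text-to-speech engine.
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

quick_read("sign.jpg")  # illustrative input image
```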

As promising as it sounds, the app does have some drawbacks. The biggest one is that it’s currently only available to owners of Google Pixel devices in the United States. Another is that it only speaks English, though Google promises to address both of these limitations in the near future. The company also warns that the app may not always be accurate at this stage: it makes its best guess at whatever the user points it at, and there’s always a chance it will guess wrong. Even so, the Lookout app could have a major impact on the quality of life of people with visual impairment.

A Japanese company is developing smart glasses that convert words into voice

The Oton Glass is a pair of smart glasses designed to help people with visual impairment read written text by converting it into sound. The glasses are equipped with two small cameras and an earpiece, all attached to the frame. One camera faces inwards and tracks the user’s eye movement to detect blinks, while the other captures text. To use the glasses, the wearer looks at a piece of text they want to read and blinks, after which the outward-facing camera takes a photo of the text and uploads it to a dedicated Raspberry Pi-based cloud system. The photo is then analysed using optical character recognition technology and converted into sound, which is played to the user through the earpiece.

If the system is unable to decipher the text, the photo is forwarded to a remote worker who deciphers it manually. The idea has drawn some criticism from people who claim it’s no better than Google Translate, which already offers similar capabilities. However, unlike Google’s app, the Oton Glass doesn’t require the user to take out their phone, launch an app, and scan the text. All it takes is a simple blink.
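
Put together, the blink-to-speech flow described above amounts to a capture-upload-synthesise loop with a human fallback. The sketch below illustrates that flow; both endpoint URLs are hypothetical placeholders, as Oton Glass’s actual cloud API is not public.

```python
# A conceptual sketch of the blink-triggered reading flow. The endpoints
# are hypothetical placeholders, not Oton Glass's real API.
import requests

OCR_ENDPOINT = "https://example.com/ocr"         # hypothetical cloud OCR service
FALLBACK_ENDPOINT = "https://example.com/human"  # hypothetical remote-worker queue

def read_text(image_bytes: bytes) -> str:
    """Send a captured photo for OCR; fall back to a human if it fails."""
    resp = requests.post(OCR_ENDPOINT, files={"image": image_bytes}, timeout=10)
    text = resp.json().get("text", "")
    if not text:
        # Undecipherable: hand the photo to a remote worker, as the
        # article describes.
        requests.post(FALLBACK_ENDPOINT, files={"image": image_bytes}, timeout=10)
        return "Text sent for human review"
    return text  # passed on to text-to-speech for the earpiece
```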

Dot Watch is a smartwatch for the visually impaired that displays information in braille

Developed by the South Korean company Dot, the Dot Watch is the world’s first smartwatch aimed at people with visual impairment. The watch features a minimalist design, with a large circular face and a dynamic braille display that renders information in real time. Designed to help the visually impaired lead more independent lives, its primary function is to display the date and time. However, when paired with a smartphone through the Dot Watch app, it can also relay calls and text messages, weather notifications, turn-by-turn navigation, and social media alerts.

The watch is made from silver aluminium and uses four electromagnetic actuators to display information in the form of tactile dots. The dots rise and fall automatically to spell out letters and numbers in braille, but unlike other braille displays that read one line at a time, the Dot Watch features a touch-sensitive active display that switches to the next braille formation as soon as the user takes their finger off the final dot in a sentence. Users can also switch between messages manually, either by tapping the face of the watch or by using the side buttons and dial. Unfortunately, the software currently supports only English and Korean braille, but the company hopes to add support for other languages in the near future, including Japanese, German, French, Spanish, Arabic, Italian, Chinese, and Hindi.
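
The braille half of the problem is well standardised and easy to illustrate. The toy sketch below maps letters to 6-dot cell patterns like the ones the watch’s actuators raise and lower, using Unicode braille, where each raised dot sets one bit of the code point. It is, of course, an illustration of braille encoding, not Dot’s firmware.

```python
# Map text to 6-dot braille cells. Dot numbering (1-6) follows standard
# braille; Unicode braille starts at U+2800, one bit per raised dot.
BRAILLE_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5),  # remaining letters follow the same pattern scheme
}

def to_braille(word: str) -> str:
    cells = []
    for ch in word.lower():
        dots = BRAILLE_DOTS.get(ch, ())
        # Each raised dot sets one bit of the Unicode braille code point.
        cells.append(chr(0x2800 + sum(1 << (d - 1) for d in dots)))
    return "".join(cells)

print(to_braille("bad"))  # ⠃⠁⠙
```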

Google released two new apps aimed at the hearing impaired

While there are all kinds of products out there aimed at the visually impaired, the same can’t be said for people with impaired hearing. It’s estimated that more than 466 million people worldwide have hearing loss, and since not everyone knows sign language, social communication can often be very difficult for them. That’s why Google recently announced the launch of two new Android apps designed to help people who are deaf or hard of hearing.

As the name suggests, Live Transcribe automatically transcribes what the people around the user are saying in real time. The app also features a loudness indicator that informs the user about the level of ambient noise in the room and lets them know when they need to bring the microphone closer to the speaker or speak louder themselves. If the environment is too loud for the app to work, users can bring up a keyboard and type their message instead. The app also offers haptic feedback, vibrating the phone to let the user know they’re being spoken to. It supports more than 70 languages and dialects, allowing users to choose a primary and a secondary language and quickly switch between them with a single tap when necessary. Google is also working on enabling the app to automatically detect the language being spoken, which would make it even more convenient. However, since it relies on the Google Cloud Speech API, the app requires an internet connection to work, which could limit its usefulness.
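
Live Transcribe itself is a polished product, but the transcription step it builds on, the Google Cloud Speech API, can be exercised in a few lines. Here is a minimal sketch for a short pre-recorded clip; the app uses the streaming variant of the same API, and the file name and language choice here are illustrative.

```python
# Minimal Google Cloud Speech API transcription of a short WAV clip.
# Requires: pip install google-cloud-speech, plus Cloud credentials.
from google.cloud import speech

client = speech.SpeechClient()

with open("clip.wav", "rb") as f:  # illustrative 16 kHz mono LINEAR16 clip
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",  # the primary language the user selected
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```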

Sound Amplifier, on the other hand, acts as a sort of hearing aid, helping users who are hard of hearing better understand conversations in loud environments with just a pair of headphones. The app features a number of sliders that allow users to reduce environmental sounds while amplifying the voices of those around them. Unlike Live Transcribe, Sound Amplifier works without an internet connection, with all of the sound processing performed locally.
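
Sound Amplifier’s processing is proprietary, but the basic idea, boosting the frequency band where speech lives while attenuating the rest, can be illustrated with a simple band-pass filter. The snippet below is a crude conceptual analogue, not Google’s algorithm, and the file names are placeholders.

```python
# A crude illustration of speech-band amplification with a band-pass
# filter. Requires: pip install scipy numpy
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("room.wav")  # illustrative input recording
audio = audio.astype(np.float64)
if audio.ndim > 1:                      # mix stereo down to mono
    audio = audio.mean(axis=1)

# Keep roughly the 300-3400 Hz band, where most speech energy lives.
sos = butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
speech_band = sosfilt(sos, audio)

# Mix the boosted speech band with the attenuated original; the two
# gains play the role of the app's sliders.
enhanced = 2.0 * speech_band + 0.3 * audio
enhanced = np.clip(enhanced, -32768, 32767).astype(np.int16)
wavfile.write("enhanced.wav", rate, enhanced)
```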

Researchers from Columbia University have developed a system that converts thoughts into speech

Another innovation that could have a life-changing impact on people with limited or no ability to speak was recently developed by neuroengineers at Columbia University, who created a system capable of translating thoughts into intelligible, recognisable speech. Using artificial intelligence and speech synthesisers, the system monitors a person’s brain activity and reproduces the words they hear with incredible clarity. “Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” says Nima Mesgarani, a principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, adding that “we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”

The system is built around a vocoder, a computer algorithm capable of synthesising human speech, which the researchers trained to interpret brain activity by feeding it data on neural patterns in the brains of five epilepsy patients. The patients, who were already scheduled for brain surgery, agreed to have electrodes attached to the surface of their brains. They were first asked to listen to sentences spoken by other people while the researchers recorded their brain activity and used it to train the vocoder. The researchers then asked the patients to listen to someone recite the numbers zero to nine, recorded their brain signals, and instructed the vocoder to translate those signals into sound. The sounds generated by the vocoder were analysed and cleaned up by neural networks, resulting in a robotic-sounding voice that listeners could understand and repeat 75 per cent of the time.
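
The full Columbia system combines deep networks with a trained vocoder, but its supervised core, learning a map from recorded brain activity to audio features, can be caricatured in a few lines. The sketch below fits a linear decoder on synthetic data standing in for the electrode recordings; it is a heavily simplified conceptual analogue, not the published model.

```python
# Conceptual analogue of brain-to-speech decoding: fit a linear map from
# neural activity frames to spectrogram frames, then reconstruct speech
# features from held-out brain data. All data here is synthetic.
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_spec_bins = 5000, 128, 32

neural = rng.standard_normal((n_frames, n_electrodes))  # stand-in recordings
true_map = rng.standard_normal((n_electrodes, n_spec_bins))
spectrogram = neural @ true_map + 0.1 * rng.standard_normal((n_frames, n_spec_bins))

# Train on frames from sentences the patients listened to...
decoder = Ridge(alpha=1.0).fit(neural[:4000], spectrogram[:4000])
# ...then reconstruct spectrogram frames from unseen brain signals.
decoded = decoder.predict(neural[4000:])
# A real system would hand `decoded` to a vocoder to synthesise audio.
print("reconstruction correlation:",
      round(float(np.corrcoef(decoded.ravel(), spectrogram[4000:].ravel())[0, 1]), 3))
```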

The only drawback is that the system can currently only translate thoughts that originate as a person listens to someone else speak. However, the researchers hope they’ll one day be able to train the system to do the same when there’s no listening involved, when the person is merely imagining the words. “This would be a game changer,” says Mesgarani. “It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”

Companies around the world are increasingly using technologies such as artificial intelligence, computer vision, and voice recognition to develop tools that assist people with disabilities. While they tend to present these products as the fruit of a desire to be more inclusive, it’s clear that altruism isn’t their only motivation. Since people with disabilities account for 15 per cent of the population, developing smart devices they can actually use can only help companies increase their revenue. Regardless of the underlying motivation, this is a welcome trend that could go a long way towards improving the quality of life for people who are blind, deaf, or have some other form of disability.

This article first appeared on the Richard van Hooijdonk blog at https://richardvanhooijdonk.com/blog/en/how-can-tech-improve-the-quality-of-life-for-people-with-disabilities/
