These days, robot design is less about the hardware and more about developing algorithms, and those algorithms are becoming a powerful tool in healthcare technology, positively impacting the lives of people with disabilities.
From precise technology to help blind children see again to mind-controlled mechanical limbs and wearable technology, machines are improving the quality of life and health outcomes for people. However, it’s not just a one-way relationship. People with disabilities are contributing to research and design for countless technology advances.
Autonomous vehicles, industrial robots and robotic vacuum cleaners all rely on computer vision, a long-standing challenge in the scientific community. According to Engadget, computers can now match or exceed human sight, but they still require intense training to understand the images they see. A robot isn’t just a high-resolution camera: to truly have “sight,” a robot, like a human, needs to understand the pictures it sees. High-resolution images a robot can’t make sense of are useless.
MIT professor Dr. Pawan Sinha developed a new machine learning technique that was inspired by his work with blind children in India. Sinha started Project Prakash, a charity that provides eye surgery to children with treatable conditions such as congenital cataracts.
Restoring sight to children with cataracts is more complicated than treating older patients because of brain development. Healthy babies are born with blurry vision, which gradually improves as they age. Fuzzy vision sounds like a problem, but it is actually crucial to babies’ cognitive development.
Sinha’s theory is that blurry vision helps babies focus on the big picture without getting overwhelmed by small details while their minds are busy developing. He applied this concept to artificial intelligence by integrating blurry imagery into an artificial intelligence system to help it gradually understand high-resolution images.
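This coarse-to-fine idea resembles what machine learning researchers call curriculum learning: start training on heavily blurred images, then feed the model progressively sharper ones. The sketch below is illustrative only; the box-blur and the stage schedule are assumptions, not Sinha's actual method.

```python
import numpy as np

def blur(images, strength):
    # Simple box blur: average each pixel with its four neighbors,
    # repeated `strength` times (strength 0 returns an unblurred copy).
    out = images.copy()
    for _ in range(strength):
        out = (out
               + np.roll(out, 1, axis=-1) + np.roll(out, -1, axis=-1)
               + np.roll(out, 1, axis=-2) + np.roll(out, -1, axis=-2)) / 5.0
    return out

def curriculum(images, stages=(8, 4, 2, 0)):
    """Yield progressively sharper versions of a batch, coarsest first,
    so a model sees the big picture before the fine details."""
    for strength in stages:
        yield blur(images, strength)

# Demo: a toy random "image" batch sharpened across four stages.
batch = np.random.rand(1, 8, 8)
stages = list(curriculum(batch))
```

In a real training loop, each stage would be used for some number of epochs before moving to the next, sharper one.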
Machine learning helps computers analyze data and learn what to do, so that they don’t have to be programmed to perform each task. This branch of artificial intelligence is helping give people with disabilities more independence.
“Machine learning is giving people like me that need accommodation in some situations the same independence as others,” Liat Kaver, a product manager at YouTube who is deaf, told MIT Technology Review.
The latest machine learning algorithms can understand images, sounds and language. This software is being used to improve healthcare technology for people with disabilities like deafness or autism. Closed captioning, for example, helps convey speech and sounds on TV shows and videos. YouTube uses speech-to-text software to automatically transcribe videos, and it takes this a step further with algorithms that indicate additional sounds, such as applause, laughter and music.
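A caption track like this has two ingredients: timed transcript segments from speech recognition, and timed non-speech sound events from an audio classifier. Here is a minimal, hypothetical sketch of how the two streams might be merged into one caption sequence; the function and data are invented for illustration and do not reflect YouTube's actual pipeline.

```python
def merge_captions(speech, events):
    """Interleave transcript segments with non-speech sound events.

    speech: list of (start_seconds, text) tuples from speech-to-text.
    events: list of (start_seconds, label) tuples from a sound classifier.
    Non-speech events are rendered in brackets, e.g. [applause].
    """
    track = [(t, text) for t, text in speech]
    track += [(t, f"[{label}]") for t, label in events]
    # Sort everything by start time to produce one caption sequence.
    return [text for _, text in sorted(track)]

captions = merge_captions(
    speech=[(0.0, "Welcome, everyone."), (5.2, "Let's begin.")],
    events=[(3.1, "applause"), (9.0, "music")],
)
# → ["Welcome, everyone.", "[applause]", "Let's begin.", "[music]"]
```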
Image Recognition Software
Accessibility used to mean providing ramps, wide doors and other physical accommodations. Today, accessibility also includes digital spaces.
People with disabilities face challenges in their everyday lives that give them a different perspective on new technology. According to MIT Technology Review, Austin Lubetkin developed a navigation app for people with autism. Lubetkin used image-recognition technology from startup Clarifai to prototype an app that provides directions in the form of landmarks.
Lubetkin has autism spectrum disorder, and he was inspired to create the app because of his personal experience with struggling to navigate, a common challenge in the autism community. He found it difficult to interpret the text and abstract images from conventional apps, so he used the image recognition software to add landmarks to navigation software.
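Conceptually, the app replaces abstract turns ("in 400 feet, turn left") with visual anchors a rider can recognize. A toy sketch of that pairing is below; the route steps and landmark labels are invented, and a real app would obtain the labels from an image-recognition API such as Clarifai rather than hard-coding them.

```python
def landmark_directions(steps):
    """Turn (action, landmark_label) pairs into landmark-based
    instructions, e.g. 'Turn left at the red barn'.

    steps: list of (action, landmark) tuples, where the landmark
    label would come from image recognition on street-level imagery.
    """
    return [f"{action} at the {landmark}" for action, landmark in steps]

route = landmark_directions([
    ("Turn left", "red barn"),
    ("Turn right", "gas station"),
    ("Stop", "blue mailbox"),
])
# → ["Turn left at the red barn", "Turn right at the gas station",
#    "Stop at the blue mailbox"]
```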
“Honestly, AI can level the playing field,” Lubetkin said.
Disability Robot Design
A group of roboticists at Georgia Tech is researching ways to make robot control more seamless and accessible by developing new interfaces that allow people to control complex robots with only a single-button mouse, according to IEEE Spectrum.
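One common assistive pattern for single-button control is "scanning": the interface cycles through available commands on a timer, and a single click selects whichever command is highlighted. The sketch below illustrates that general pattern only; it is not a description of the Georgia Tech interface, which is considerably more sophisticated.

```python
class SingleSwitchMenu:
    """Minimal single-switch scanning menu: a timer advances the
    highlight through the commands, and one button press selects."""

    def __init__(self, commands):
        self.commands = commands
        self.index = 0  # currently highlighted command

    def advance(self):
        # Called periodically: move the highlight to the next command.
        self.index = (self.index + 1) % len(self.commands)
        return self.commands[self.index]

    def click(self):
        # The user's single button press selects the highlighted command.
        return self.commands[self.index]

# Hypothetical task list inspired by the tasks mentioned above.
menu = SingleSwitchMenu(["scratch head", "wipe mouth", "stop"])
menu.advance()           # highlight moves to "wipe mouth"
selected = menu.click()  # one click selects it
```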
The researchers are working with Henry Evans, who is almost completely paralyzed and unable to speak due to a stroke. Through the use of a robot, he is able to do everyday tasks, like scratching his head or wiping his mouth. The robot’s aid allows him to remain comfortable in bed without asking a human for help.
According to IEEE Spectrum, the roboticists hadn’t anticipated this result. Aiding Evans was the robot’s most successful task, in terms of both performance and user satisfaction, since “the deployed research system provided a clear, consistent benefit to the user and reduced the need for caregiver assistance during these times.”
A Mutually Beneficial Relationship
Prosthetic limbs, robotic exoskeletons, assistant robots and brain-to-machine interfaces are incredible tools that can help people with disabilities in many different ways. While these assistive technologies are improving, people with disabilities are also helping scientists understand more precisely how to teach machines to learn, which in turn improves both robot design and their own quality of life.