New technology might help blind people 'see' the world with sonar

Neuroscientist Amir Amedi, wearing a device that translates images into music. (Amir Amedi Lab)

Bats, dolphins, and even some whales all use sonar to determine the location of objects around them — by sending out sound waves and listening to how they bounce back. This allows these animals to do all sorts of amazing things, like hunt in total darkness.
And it turns out that humans can use sonar, too (and not just in submarines). Some blind people are capable of using tongue clicks to "see" their surroundings. They make a sharp sound with their tongue and listen carefully to how the sound reflects off the objects around them.
So, more recently, researchers have been trying to push this capability much, much further. New technologies — lasers, cameras, and earphones — can give people even greater sonar capacity. More radically still, some researchers have now developed software that essentially translates the world into music, which can help blind people avoid obstacles, recognize facial expressions, and even read letters.
The basic idea of sonar is to send out a sound and then time how long it takes for the sound to bounce back. That can give a sense of how far away various objects are. Submarines do this, as do animals (in animals, it's usually called "echolocation").
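To make the arithmetic concrete, here's a minimal sketch of that calculation (it's an illustration, not code from any of the systems described in this article); the approximate speeds of sound are the only values it assumes:

```python
# Minimal sketch of the sonar/echolocation arithmetic: distance is inferred
# from the round-trip time of a sound pulse.

SPEED_OF_SOUND_AIR = 343.0     # meters per second, roughly, in air at room temperature
SPEED_OF_SOUND_WATER = 1482.0  # meters per second, roughly (the submarine case)

def distance_from_echo(round_trip_seconds: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Distance to the reflecting object, given the time between emitting a
    pulse and hearing its echo. The sound travels out and back, so divide by two."""
    return speed * round_trip_seconds / 2.0

# An echo that returns after 10 milliseconds in air means the object is about 1.7 meters away.
print(distance_from_echo(0.010))                       # ~1.7 m
print(distance_from_echo(0.5, SPEED_OF_SOUND_WATER))   # ~370 m, submarine-style
```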
And a few humans have mastered the trick. Take Daniel Kish, who has been blind since childhood and can echolocate by clicking his tongue. Using this technique, he says that he can see objects fairly far away, as long as they're at least the size of a softball. (Kish is president of the non-profit World Access for the Blind foundation, which among other things promotes the teaching of echolocation.)
And human echolocation has also attracted the attention of academic researchers. One group in Spain determined in 2010 that tongue clicking was more successful than snapping or clapping. And in 2011, a study led by David Whitney of the University of California at Berkeley found that six blind echolocators with at least 10,000 hours of echolocation practice had a spatial precision that was "comparable to that found in the visual periphery of sighted individuals."
New technology could enhance human sonar further
In 2009, a research collaboration including a group at the Polytechnic University of Valencia, Spain, unveiled a helmet (part of the CASBLiP project) that takes real-time images of the world, distills essential information out of them, combines it with depth data from a laser range-finder, and presents that information as audio cues through headphones. This is essentially an enhanced version of human sonar — one augmented by technology.
Similar projects have popped up elsewhere. The SmartCane is a combination cane and ultrasound system that sells for approximately $50 in India. And Tacit is a similar, open-source project from inventor Steve Hoefer, which translates distance information into haptic feedback — vibration on the user's hand.
Some of the most impressive research in this arena, meanwhile, comes from the laboratory of Amir Amedi, a neuroscientist at The Hebrew University of Jerusalem. His group has been studying how software that translates images into noises can help people navigate their world.
In a 2012 paper in PLOS ONE, they showed that blind people can use Peter Meijer's program The vOICe to read letters and even recognize facial expressions after only tens of hours of training.
And in later work, they showed that participants' brains recruited what had previously been thought of as visual areas when doing these tasks. "The input is arising through the ears, but then being delivered into the visual system," says Amedi, which suggests that these are task-related brain areas, not vision-specialized brain areas.
In 2014, Amedi's group demonstrated their own program, called EyeMusic, which essentially translates images into short musical pieces. How does it work? The software scans an image from left to right over the duration of the musical piece. The higher a pixel sits in the image, the higher the pitch that plays, and different colors are represented by different musical instruments.
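As a rough illustration of that mapping (and not the actual EyeMusic or vOICe code), here's a hypothetical sketch that assumes a small grayscale image as input and leaves out the color-to-instrument step:

```python
import numpy as np  # assumed dependency, just to treat the image as a 2-D array

# Hypothetical image-to-music sketch: scan columns left to right (time),
# and map each bright pixel's row to a pitch (higher in the image = higher pitch).

PENTATONIC_HZ = [261.6, 293.7, 329.6, 392.0, 440.0]  # one octave of a pentatonic scale

def image_to_notes(image: np.ndarray, threshold: float = 0.5):
    """Return (time_step, frequency_hz) events for bright pixels.

    `image` is a 2-D array with values in [0, 1]; row 0 is the top of the image."""
    height, width = image.shape
    events = []
    for col in range(width):          # left to right, over the duration of the piece
        for row in range(height):
            if image[row, col] >= threshold:
                # Rows near the top (small `row`) get the highest pitches.
                pitch_index = int((height - 1 - row) / height * len(PENTATONIC_HZ))
                pitch_index = min(pitch_index, len(PENTATONIC_HZ) - 1)
                events.append((col, PENTATONIC_HZ[pitch_index]))
    return events

# A tiny 5x5 image with a diagonal line from top-left to bottom-right
# comes out as a descending scale.
print(image_to_notes(np.eye(5)))
```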
EyeMusic is currently available as an app from the iTunes store, if you'd like to check it out. And The vOICe is available for Windows and Android.
And also in 2014, Amedi introduced the EyeCane, a small, handheld device that uses two narrow infrared beams to detect nearby obstacles and translate them to either sound or vibration — depending on the user's preference. It was intuitive enough to require almost no training, and people could use it to detect an open door about 15 feet away.
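As a loose illustration (not the actual EyeCane, SmartCane, or Tacit firmware), here's a hypothetical sketch of the kind of distance-to-feedback mapping such a device might use; the 5-meter detection range is an assumption made for the example:

```python
from typing import Optional

# Hypothetical distance-to-feedback mapping: nearer obstacles produce
# faster beeping (sound mode) or stronger vibration (haptic mode).

MAX_RANGE_M = 5.0  # assumed maximum detection range for this sketch

def beep_interval(distance_m: float) -> Optional[float]:
    """Seconds between beeps in sound mode; None means nothing is in range."""
    if distance_m >= MAX_RANGE_M:
        return None
    # Shrinks from about 1 second at the edge of the range to 0.05 s right at the device.
    return max(0.05, distance_m / MAX_RANGE_M)

def vibration_strength(distance_m: float) -> float:
    """Vibration intensity in haptic mode, from 0 (nothing nearby) to 1 (touching)."""
    return max(0.0, 1.0 - distance_m / MAX_RANGE_M)

print(beep_interval(4.5))       # 0.9 -> slow beeps, the obstacle is far away
print(vibration_strength(0.5))  # 0.9 -> strong vibration, the obstacle is close
```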
Going further still: Using sound to see the world in ultraviolet
Neil Harbisson (Luis Ortiz/LatinContent/Getty Images)
This technology won't just benefit the blind. Amedi says his lab is also exploring ways to use it to help people "see" through walls (by sensing infrared).
And artist Neil Harbisson has already used similar technology to give himself essentially superhuman capabilities.
Harbisson was born completely color-blind, with only grayscale vision. But he now has a sensor implanted in his skull that detects the colors of nearby objects and translates them into different musical notes produced by a chip in his head. That helps him "see" the colors of the world around him.
But the color sensor also picks up things that no human can naturally see — light in the ultraviolet and infrared ranges. So this means, for example, that Harbisson can sense the invisible infrared signal of a TV remote or motion detector.
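Harbisson's actual color-to-sound mapping is his own; as a loose illustration of the general idea, here's a hypothetical sketch that maps a color's hue onto one octave of sound (the base note and the hue-to-pitch rule are assumptions, not his device's values):

```python
import colorsys  # standard library: converts RGB values to hue

# Hypothetical color-to-sound mapping: the hue (position on the color wheel)
# is spread across one octave above an assumed base note.

BASE_HZ = 220.0  # assumed base note (A3); one full turn of the hue wheel = one octave

def color_to_frequency(r: float, g: float, b: float) -> float:
    """Map an RGB color (components in 0..1) to a frequency in hertz."""
    hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return BASE_HZ * (2.0 ** hue)

print(color_to_frequency(1.0, 0.0, 0.0))  # pure red  -> 220.0 Hz
print(color_to_frequency(0.0, 0.0, 1.0))  # pure blue -> ~349 Hz
```

A sensor that also reported wavelengths outside the visible range, as Harbisson's does, could extend the same kind of mapping beyond that octave.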
Here's his incredible TED talk from 2012:
So these technologies might not just help blind people "see" the world. One day, they could end up allowing everyone else to perceive things they've never been able to sense before.
Further reading: For more on the long history of human echolocation, check out Daniel Engber's piece in Slate.
Correction: This story previously stated that Amedi's group demonstrated reading and other tasks with EyeMusic software in 2012. These studies actually used The vOICe software. The story has been corrected and updated with more information about these two different software programs.