Scientists have been experimenting with ways to translate the visual world into sound in order to help people who are blind. Here's an example of how you might "read" an image through music:
Amir Amedi, who created the EyeMusic program above, is a neuroscientist at The Hebrew University of Jerusalem and one of several researchers working on software like this.
These programs usually scan an image from left to right, translating each pixel into a corresponding sound. In this one, the higher a pixel sits in the image, the higher the pitch of the note that represents it. EyeMusic also uses different instruments to represent different colors.
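The general scanning idea described above can be sketched in a few lines of code. This is an illustrative toy, not EyeMusic's actual implementation: the frequency range, column timing, and loudness mapping here are all assumptions chosen for clarity, and color-to-instrument mapping is left out.

```python
# Toy sketch of a left-to-right image-to-sound scan, in the spirit of
# programs like EyeMusic and The vOICe. All parameters (frequency range,
# column duration, loudness mapping) are illustrative assumptions.

def image_to_notes(image, min_freq=220.0, max_freq=880.0, col_duration=0.1):
    """Scan a grayscale image column by column, left to right.

    `image` is a list of rows (top row first); each value is brightness 0-255.
    Returns (start_time, frequency, loudness) triples: higher rows map to
    higher pitches, and brighter pixels play louder.
    """
    height = len(image)
    notes = []
    for col in range(len(image[0])):
        start = col * col_duration  # time advances as the scan moves right
        for row in range(height):
            brightness = image[row][col]
            if brightness == 0:
                continue  # dark pixel: silence at this position
            # Row 0 is the top of the image, so it gets the highest pitch.
            frac = (height - 1 - row) / (height - 1) if height > 1 else 0.0
            freq = min_freq + frac * (max_freq - min_freq)
            notes.append((start, freq, brightness / 255.0))
    return notes

# A diagonal line from bottom-left to top-right becomes a rising scale.
diagonal = [
    [0,   0,   0,   255],
    [0,   0,   255, 0],
    [0,   255, 0,   0],
    [255, 0,   0,   0],
]
for t, f, vol in image_to_notes(diagonal):
    print(f"t={t:.1f}s  {f:.0f} Hz  vol={vol:.2f}")
```

Run on the diagonal-line image, this prints a sequence of notes rising from 220 Hz to 880 Hz, which is exactly the "bottom-left to top-right shape sounds like an ascending melody" behavior the listening examples demonstrate.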
In a 2012 paper in PLOS ONE, Amedi and colleagues showed that blind people could use a different program — one by Peter Meijer called The vOICe — to read letters and even recognize facial expressions. And that was after only 10 hours of training.
Here's what The vOICe sounds like in action:
These programs are already available on phones and computers, and the eventual goal is to integrate them with a real-time video device to help people navigate the real world.