Image recognition technology is pretty powerful stuff. It lets Facebook auto-tag photos of your friends, Google Images show you photos that look similar to one you already have, and self-driving cars spot pedestrians in time to avoid them. It also, crucially, lets Google generate totally whacked-out, trippy masterpieces like this:
Alexander Mordvintsev, Christopher Olah, and Mike Tyka, three software engineers at Google, explain on the company's research blog that the above image comes from flipping around an "artificial neural network" — a kind of artificial intelligence system that mirrors the structure of biological nervous systems — that does image recognition. Rather than taking in an image and trying to see what objects are contained within it, the flipped neural net takes an image and adjusts it, bit by bit, until an object it already knows about emerges. The photo above was the result of feeding random noise to a neural net trained to recognize "places."
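The trick at the core of this is surprisingly simple to sketch: instead of adjusting the network's weights to fit an image (training), you hold the weights fixed and adjust the image itself, step by step, so that the score for a chosen class goes up. The toy Python below illustrates the idea with a tiny made-up linear "classifier" — the sizes, names, and the linear model are all my assumptions for illustration, not Google's actual code, which uses deep convolutional networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed, already-"trained" toy classifier: 3 classes over an 8-pixel "image".
# (Invented stand-in for a real image-recognition net.)
W = rng.normal(size=(3, 8))
b = np.zeros(3)

def class_score(x, target):
    """How strongly the classifier thinks x looks like the target class."""
    return W[target] @ x + b[target]

def ascend(x, target, steps=50, lr=0.1):
    """Gradient ascent on the *input*: nudge the image toward the class."""
    for _ in range(steps):
        grad = W[target]       # d(score)/dx for this linear toy model
        x = x + lr * grad
    return x

x0 = rng.normal(size=8)        # start from random noise, as in the article
x1 = ascend(x0, target=1)
print(class_score(x1, 1) > class_score(x0, 1))  # the "banana-ness" rose
```

In a real network the gradient comes from backpropagation through many layers rather than a single weight vector, but the loop is the same: score the image, compute how each pixel should change to raise the score, take a small step, repeat.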
They can also find more specific things. This is what happened when they asked the neural net to find a banana in a bunch of meaningless pixels:
Pretty cool, huh? Here are a few more photos generated using the same place-recognizing net as the top image:
You can also send representational images through nets that have been trained to recognize different sorts of images, thus creating a new image that combines them both. For example, here's an image of a knight as interpreted by a neural net trained to recognize animals. The end result is a knight that's been morphed into a chimera of several dogs:
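The knight-into-dogs effect comes from a variation on the same loop: instead of pushing the image toward one target class, you amplify whatever features the net already weakly detects in it, so faint dog-like patterns get exaggerated. Here's a hedged toy sketch of that step — again with an invented one-layer "feature detector" standing in for a deep net:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))       # 16 toy feature detectors over 8 "pixels"

def features(x):
    """ReLU activations: which features the toy net sees in the image."""
    return np.maximum(W @ x, 0.0)

def dream_step(x, lr=0.05):
    """Boost whatever is already active: gradient of 0.5*||features||^2."""
    a = features(x)
    grad = W.T @ a                  # only active features contribute
    return x + lr * grad

x = rng.normal(size=8)              # stands in for a real photo of a knight
before = np.linalg.norm(features(x))
for _ in range(20):
    x = dream_step(x)
print(np.linalg.norm(features(x)) > before)  # activations got amplified
```

Because the step reinforces features already present, the starting image steers the result — which is why a knight fed through an animal net turns into a chimera of dogs rather than a random dog from scratch.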
"Flipping" image recognition neural nets is important for understanding how, exactly, they're recognizing objects. If you want a neural net to recognize dumbbells, for example, you might train it on a bunch of images of people doing arm curls with dumbbells. Ideally, the net will just notice the weights. But sometimes, as in the case of one Google neural net, it picks up too much, and concludes that all dumbbells have to have muscly arms attached:
By inverting this neural net, Google learned that there was a big flaw in how it was identifying dumbbells, which the team could then fix to improve the net's recognition powers. As the Guardian's Alex Hern notes, this might give them reason to feed the net images of dumbbells sitting still on the ground, so that the net dissociates the concept of dumbbells from the arms of people holding them.
Here are a few more fun digital hallucinations that Google's neural nets have produced:
Thanks to Hern at the Guardian for the pointer.