Flipping Google's image recognition AI produces some amazingly trippy images

Dylan Matthews is a senior correspondent and head writer for Vox's Future Perfect section and has worked at Vox since 2014. He is particularly interested in global health and pandemic prevention, anti-poverty efforts, economic policy and theory, and conflicts about the right way to do philanthropy.

Image recognition technology is pretty powerful stuff. It lets Facebook auto-tag photos of your friends, Google Images show you photos that look similar to one you already have, and self-driving cars recognize pedestrians so they can avoid hitting them. It also, crucially, lets Google generate totally whacked-out, trippy masterpieces like this:

[Image: Mike Tyka / Google]

Alexander Mordvintsev, Christopher Olah, and Mike Tyka, three software engineers at Google, explain on the company's research blog that the above image comes from flipping around an "artificial neural network" (a kind of artificial intelligence system that mirrors the structure of biological nervous systems) that does image recognition. Rather than taking in an image and trying to see what objects are contained within it, the flipped neural net takes an image and tries to "see" it in such a way that an object it already knows about emerges. The photo above was the result of feeding random noise to a neural net trained to recognize "places."
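
To make the "flipping" concrete, here's a minimal sketch of the underlying idea. It's written in PyTorch, with torchvision's pretrained GoogLeNet standing in for Google's internal model; the class index, learning rate, and step count are all illustrative assumptions, not details from the blog post. The trick is to freeze the network's weights and run gradient ascent on the pixels themselves, nudging a random-noise image until the net's score for a chosen class climbs:

import torch
import torchvision.models as models

# Pretrained ImageNet classifier. GoogLeNet is an assumption based on the
# architecture Google's researchers describe, not their exact model.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the weights; only the image will change

target_class = 954  # "banana" in the standard ImageNet index (assumed mapping)
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from random noise

optimizer = torch.optim.Adam([img], lr=0.05)  # illustrative hyperparameters
for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, target_class]  # how "banana-like" the net finds the image
    (-score).backward()                  # ascend the score by descending its negative
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0, 1)                 # keep pixel values in a displayable range

Plain gradient ascent like this tends to produce noisy, static-like patterns; as the researchers note, you also need a prior toward natural-looking images (for instance, penalizing high-frequency noise) to get results as clean as the images in their post.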

The nets can also find more specific things. This is what happened when the engineers asked a neural net to find a banana in a bunch of meaningless pixels:

[Image: a banana teased out of random noise. Mike Tyka / Google]

Pretty cool, huh? Here are a few more photos generated using the same place-recognizing net as the top image:

[Images: four more scenes generated by the place-recognizing net. Mike Tyka / Google]

You can also send representational images through nets that have been trained to recognize different sorts of images, creating a new image that blends the original with what the net "sees" in it. For example, here's an image of a knight as interpreted by a neural net trained to recognize animals. The end result is a knight that's been morphed into a chimera of several dogs:

[Image: a knight morphed into a chimera of dogs. Mike Tyka / Google]
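
The same machinery, pointed at a real photo instead of noise, produces this effect. Here's a rough sketch, again in PyTorch with an assumed layer choice and step size; instead of maximizing one class's score, you amplify whatever an intermediate layer already responds to in the image:

import torch
import torchvision.models as models
from torchvision.io import read_image

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one mid-level layer via a forward hook.
# inception4c is an illustrative choice; different layers emphasize
# different kinds of features.
activations = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: activations.update(feat=output)
)

# "knight.jpg" is a hypothetical input file (assumed to be RGB).
img = read_image("knight.jpg").float().div(255).unsqueeze(0)
img.requires_grad_(True)

for step in range(100):
    model(img)
    loss = activations["feat"].pow(2).mean()  # how strongly the layer responds
    loss.backward()
    with torch.no_grad():
        # Normalized gradient ascent step on the pixels, then re-clamp.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

Because this particular net was trained on animals, the features it amplifies are things like eyes, fur, and snouts, which is why the knight dissolves into dogs.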

"Flipping" image recognition neural nets is an important for understanding how, exactly, they're recognizing objects. If you want a neural net to recognize dumbbells, for example, you might send it a bunch of images of people doing arm curls with dumbbells. Ideally, the nets will just notice the weights. But sometimes, as in the case of one Google neural net, they pick up too much, and think that all dumbbells have to have muscly arms attached:

[Image: the net's renderings of dumbbells, each with a muscly arm attached. Google]

By inverting this neural net, Google discovered a big flaw in how it was identifying dumbbells, which it could then fix to improve the net's recognition powers. As the Guardian's Alex Hern notes, a finding like this might give Google reason to feed the net images of dumbbells sitting on the ground, so that it dissociates the concept of a dumbbell from the arms of the people holding one.
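
(If you're experimenting with the class-visualization sketch above, pointing target_class at ImageNet's "dumbbell" class is one way to try this kind of diagnostic yourself.)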

Here are a few more fun digital hallucinations that Google's neural nets have produced:

[Image: a picture of a red tree, with animals and places recognized. Mike Tyka / Google]

[Image: a filtered Seattle skyline. Mike Tyka / Google]

[Image: a photo of white antelopes (addax), before and after filtering by a neural net that recognizes edges. Left: original photo by Zachi Evenor; right: processed by Günther Noack, software engineer.]

[Image: clouds, run through an animal-recognizing neural net. Google]

Thanks to Hern at the Guardian for the pointer.