
Scientists have invented a mind-reading machine. It doesn’t work all that well.

“We can take someone’s memory … and we can pull it out from their brains.” 


Neuroscientist Brice Kuhl recently told me something astounding. “We can take someone’s memory — which is typically something internal and private — and we can pull it out from their brains,” said Kuhl, who’s at the University of Oregon.

That sounds a lot like … mind reading. So I had to ask: Can you read minds?

“Some people use different definitions of mind reading, but certainly, that’s getting close,” he said.

Kuhl and his colleague Hongmi Lee recently published a paper in The Journal of Neuroscience with a conclusion straight out of science fiction: Using an MRI, some machine learning software, and a few hapless human guinea pigs, Kuhl and Lee created images directly from memories.

Pretty damn close to mind reading.

Here’s how something close to mind reading works

This is not how mind reading works.
Sergey Nivens / Shutterstock

First, Kuhl and Lee loaded participants (23 in total) into an MRI machine. The MRI’s magnets can detect subtle changes in blood flow. And in the brain, blood flow tracks neural activity.

Once the machine was on, the participants began to see images of hundreds of faces.

The first phase of the test is a training exercise. Not for the participant, but for an artificial intelligence program that’s hooked up to the MRI, reading the data in real time.

That AI program gets two sets of information. One is the patterns of brain activity from the participants. The other is a mathematical description of each face the participant is viewing. (Kuhl and Lee came up with 300 numbers for different physical aspects of a face — like skin color or emotional expression. Each photo was then assigned a code to describe its attributes.)

What the AI program does is try to make connections: How well do those bursts of brain activity correlate with those numbers?

As the AI program accrues this information, it grows smarter, or at least better at matching brain activity to faces.
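If you want to picture what that training step looks like in code, here’s a minimal sketch in Python. Everything in it is illustrative: the array sizes, the variable names, and the choice of a simple ridge regression are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sizes only; the paper's actual dimensions differ.
n_trials, n_voxels, n_features = 1000, 5000, 300

rng = np.random.default_rng(0)
# Stand-in data: one row of voxel activity per viewed face,
# paired with the 300-number code describing that face.
brain_activity = rng.standard_normal((n_trials, n_voxels))
face_codes = rng.standard_normal((n_trials, n_features))

# "Training" is fitting a map from brain activity to face codes.
# A regularized linear regression is one simple way to do it.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity, face_codes)
```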

The second phase of the test is where things get weird.

The participants, who are still in the MRI, are shown photos of brand new faces. The computer program can’t see these faces. But it can make some guesses.
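Conceptually, phase two feeds a never-before-seen brain scan through the trained model and turns the predicted 300 numbers back into pixels. Here is a self-contained sketch of that idea, with one extra assumption the article doesn’t spell out: that the 300-number face code comes from an eigenface-style PCA decomposition, which can be run in reverse to produce a (blurry) image. All names and sizes are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "face images": 1,000 tiny 32x32 grayscale pictures.
images = rng.standard_normal((1000, 32 * 32))

# Assumption: the 300-number code is a PCA decomposition of the image,
# so it can be inverted back into (blurry) pixels.
pca = PCA(n_components=300).fit(images)
face_codes = pca.transform(images)

# Stand-in voxel patterns, and the trained brain-to-code decoder.
brain_activity = rng.standard_normal((1000, 5000))
decoder = Ridge(alpha=1.0).fit(brain_activity, face_codes)

# Phase two: a scan the decoder has never seen before.
new_scan = rng.standard_normal((1, 5000))
guessed_code = decoder.predict(new_scan)            # 300 guessed numbers
guessed_face = pca.inverse_transform(guessed_code)  # a blurry 32x32 "face"
```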

Here are some examples. The top row shows the original faces. The next two rows show guesses based on the activity in two different regions of the brain. The ANG is the angular gyrus, which is activated when we remember something vividly. The OTC is the occipitotemporal cortex, which responds to visual inputs.

The first five columns show the most accurate reconstructions; the rightmost two show the least accurate.
The Journal of Neuroscience

I know what you’re thinking: These guesses are horrible!

Yes, they won’t be used for drafting sketches anytime soon.

But even these blurry pictures contain some useful information.

Kuhl and Lee showed these reconstructed images to a separate group of online survey respondents and asked simple questions like, “Is this male or female?” “Is this person happy or sad?” and “Is their skin color light or dark?” To a degree greater than chance, the responses checked out. These basic details of the faces can be gleaned from mind reading.
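“Greater than chance” is a statistical claim. As a rough illustration of the kind of check involved (the counts below are invented, not the study’s), a simple binomial test says whether a pile of yes/no answers beats coin-flipping:

```python
from scipy.stats import binomtest

# Invented numbers: say 600 of 1,000 male-or-female judgments about
# the reconstructions matched the original face. Chance would be 50%.
result = binomtest(k=600, n=1000, p=0.5, alternative="greater")
print(result.pvalue)  # a tiny p-value means "better than chance guessing"
```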

Scientists have actually done this type of mind reading before. (One cool example: In the past, they used a technique to reconstruct basic details of movie clips from brain activity.)

Okay, but could the program reconstruct a face purely from a memory?

In another trial, Kuhl and Lee showed participants two faces and asked them to hold one of them in memory. The faces were then taken off the screen. While the participants held the face in mind, the MRI scanned their brains. And then the computer tried to recreate the remembered face as an image.

The computer’s guesses got worse. Much worse.

For this trial, only the data from the ANG yielded significant results.

But it didn’t fail completely. “We compare the reconstruction to the two images, and we ask [the computer] just in terms of pixel values, does the reconstructed image look more similar to the one they were told to remember than the other one?” Kuhl explains. Around 54 percent of the time, the computer said the reconstruction was closer to the target, only modestly better than the 50 percent you’d expect from chance alone. It’s not a total breakthrough, but it’s an intriguing start that needs more work and more participants.
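That pixel-level comparison is easy to picture in code. Here’s a minimal sketch of the two-choice test Kuhl describes, assuming the images are plain grayscale arrays; the data below is invented, and only the 54 percent figure comes from the study.

```python
import numpy as np

def closer_to_target(reconstruction, target, distractor):
    """True if the reconstruction is nearer, in raw pixel values,
    to the remembered face than to the other face."""
    return (np.linalg.norm(reconstruction - target)
            < np.linalg.norm(reconstruction - distractor))

# Invented stand-ins: three 32x32 grayscale images.
rng = np.random.default_rng(0)
target = rng.random((32, 32))       # the face held in memory
distractor = rng.random((32, 32))   # the other face
reconstruction = 0.6 * target + 0.4 * rng.random((32, 32))  # a noisy guess

print(closer_to_target(reconstruction, target, distractor))
# In the study, this check favored the target about 54 percent of the time.
```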

But what’s the point? Why design a mind-reading program?

The end goal of the science is not mind reading. It’s to better understand how the brain works.

Typically, neuroscientists use brain scans to observe which structures of the brain “light up” when engaged in a particular mental task. But there’s only so much information to glean from scans alone. The fact that a part of the brain is active doesn’t tell researchers much about what, specifically, it’s doing.

The regions Kuhl and Lee targeted in the MRI have long been known to be involved in vivid memories. “Is that region representing details of what you saw — or just [lighting up] because you were just confident in the memory?” he says.

Kuhl and Lee’s results are evidence it’s the former. The AI was able to link the brain activity in those regions to the visual features of the faces. If those regions weren’t representing visual details, the AI couldn’t have made those connections.

A few more burning questions about mind reading

That’s cool and all. But I was still stuck on the wildness of the mind-reading aspect of the experiment: How much better can the machine get at reconstructing faces? Could we ever reconstruct a face perfectly?

“I don’t want to put a cap on it,” Kuhl told me. “We can do better.”

The reason is simple: it’s mostly a matter of training data. If participants could spend more time in the MRI training the AI, the AI would grow “smarter,” and the reconstructed images would become more recognizable.

“We’d really like to have someone in the scanner and see 10,000, 20,000 faces,” he says. “You’d have to be in there for a couple of days.” So that’s not entirely feasible, or ethical. And it’s difficult to bring the same subjects back for additional sessions: for one, it’s very expensive to operate an MRI machine, and for another, it’s very hard to position a participant’s head exactly as it was in a previous trial. Prediction works best when the testing is done in one very long session. (Kuhl himself said he wouldn’t volunteer for such a long trial.)

Is it in the realm of possibility to read a person’s mind without their permission?

“You need someone to play ball,” he said. “You can’t extract someone’s memory if they are not remembering it, and people most of the time are in control of their memories.”

Oh, well.

What about this: Could you use this technique to see what someone is dreaming about?

“One fMRI paper did try to decode the content of dreams,” Kuhl said. “It could decode — with whatever percent accuracy — that somebody was dreaming about a family member or something like that.” (That paper wasn’t reconstructing faces but was broadly predicting what categories of objects and people the participants were dreaming about.)

Whoa. This is all so cool.

“The stuff we’re doing now, if you asked people 20 years ago when fMRI was just getting going, if you asked them about this kind of stuff, they would have thought it was crazy,” he said.

Indeed.