When Dr. Eric Topol joined an experiment on using artificial intelligence to get personalized nutrition advice, he was hopeful.
For two weeks, Topol, a cardiologist at Scripps Research, dutifully tracked everything he ate, wore a sensor to monitor his blood-glucose levels, and even collected and mailed off a stool sample for an analysis of his gut microbiome.
The diet advice he got back stunned him: Eat bratwurst, nuts, danishes, strawberries, and cheesecake. Stay away from oatmeal, melon, whole-wheat fig bars, veggie burgers, and grapefruit.
“It was crazy stuff,” Topol told me. Bratwurst and cheesecake are foods Topol generally shuns because he considers them “unhealthy.” And strawberries can actually be dangerous for Topol: He’s had kidney stones and has to avoid foods, such as berries, that are high in calcium oxalate, a chemical that can turn into stones.
All in all, Topol discovered that most of the companies currently marketing personalized diets can’t actually deliver. It’s just one of the great insights in his new book about artificial intelligence, Deep Medicine.
AI for diet is one of the most hyped applications of the technology. But in the book Topol uncovers more promising opportunities for artificial intelligence to improve health — some of which surprised me.
He also challenges the most common narrative about AI in health: that radiologists will soon be replaced by machines. Instead of robots coming into medicine and further eroding what’s left of the doctor-patient relationship, Topol argues, AI may actually enhance it. I’ve boiled down three of Topol’s most surprising findings, after reading the book and talking with him.
1) AI for your eyes and colon
Diagnosing disease is a notoriously difficult task, and doctors don’t always get it right — which is why there’s been a lot of excitement around the idea that AI might make the task both easier and more precise.
But as the quest to create a medical tricorder — a portable device capable of diagnosing diseases in humans — continues, there’ve been serious developments in automating diagnostics, and even triage, in several pretty specific areas of medicine.
Take ophthalmology. The top cause of loss of vision in adults worldwide is diabetic retinopathy, a condition that affects about a third of people with diabetes in the US. Patients should be screened for the condition, but that doesn’t always happen, which can sometimes delay diagnosis and treatment — and lead to more vision loss.
Researchers at Google developed a deep learning algorithm that can automatically detect the condition with a great deal of accuracy, Topol found. According to one paper, the software had a sensitivity of 87 to 90 percent and a specificity of 98 percent for detecting diabetic retinopathy, which they defined as “moderate or worse diabetic retinopathy or referable macular edema by the majority decision of a panel of at least seven US board-certified ophthalmologists.”
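Sensitivity and specificity are simple ratios over a screening algorithm’s hits and misses. A minimal sketch with made-up counts (illustrative only, not figures from the Google study) shows how the two numbers are computed:

```python
# Sensitivity and specificity from screening results.
# These counts are hypothetical, chosen only to illustrate the arithmetic.
true_positives = 87    # diseased eyes the algorithm correctly flagged
false_negatives = 13   # diseased eyes the algorithm missed
true_negatives = 98    # healthy eyes the algorithm correctly cleared
false_positives = 2    # healthy eyes the algorithm wrongly flagged

# Sensitivity: share of truly diseased eyes the algorithm catches.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: share of truly healthy eyes the algorithm correctly clears.
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # prints "sensitivity = 87%"
print(f"specificity = {specificity:.0%}")  # prints "specificity = 98%"
```

High sensitivity matters most for a screening tool like this: a false negative means a patient with retinopathy goes untreated, while a false positive just triggers a follow-up exam.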
Doctors at Moorfields Eye Hospital in London took that work a step further. They trained an algorithm that could recommend the correct treatment approach for more than 50 eye diseases with 94 percent accuracy. “They compared that to eye specialists, and the machine didn’t miss one referral, but the eye doctors did,” Topol said. “The eye doctors were only in agreement about the referrals 65 percent of the time. So that’s the beginning of moving from narrow AI to triage.”
In another example, doctors in China used AI to detect polyps in the colon during a colonoscopy. In one arm of the randomized trial, the diagnosis was made by AI plus the gastroenterologist. In another arm, just the specialist made the diagnosis. The AI system significantly increased polyp detection (29 percent compared to 20 percent). And this was mainly because AI spotted what are known as “diminutive adenomas,” or tiny polyps — less than 5 mm in size — that are notoriously easy for doctors to miss.
“Machine vision is starting to improve,” Topol said. And while we’re far from having a hand-held machine that can diagnose any disease, these small steps will probably eventually lead there, he added.
2) Avatars to help anxiety and depression
When we talk about the impact of computers and the internet on our mental health, we often talk about the negative: that they can be alienating, isolating, anxiety-provoking. Yet Topol found good evidence of just the opposite: They can be comforting in some cases.
In one elegant experiment, researchers at USC tested whether people would be willing to reveal their innermost secrets to an avatar named Ellie as compared to another human. “The shocking result — it wasn’t even a contest,” said Topol. “People far more readily would tell an avatar their deepest secret.”
That experiment has since been replicated, and researchers are finding chat bots and avatars also seem to help people with symptoms of anxiety and depression. “It’s an interesting finding in the modern era,” said Topol. “I don’t think it would have been predicted. It’s like going to confession — you’re laying it out there and you feel a catharsis.”
So why is this so important? “Some think it’s a breakthrough. Others are skeptical it’ll help. But there’s such an absurd mismatch between what we need to support people’s mental health conditions and what’s available,” Topol said. “So if this does work — and it looks promising — this could be a vital step forward to helping [more] people.”
3) AI could free up time for doctors
As the average doctor appointment time has dwindled to a few minutes, so too has any intimacy or sense of connection that can develop between doctors and patients. Topol went into the book thinking AI — and bringing more machines into hospitals and clinics — might further dampen the human side of medicine.
But by the end of his research, he ended up seeing a big opportunity: “I realized that as you can augment human performance at both the clinician level and the patient level, at a scale that is unprecedented, you can make time grow.” And giving more time to doctors could, in theory, mean that intimacy can come back.
To “make time grow,” Topol said, AI can help with time-consuming tasks, like note-taking by voice. Notes can then be archived for patients to review — and a correction function could be built into the process so patients can flag any errors in their records. “These are all features that can enhance the humanistic encounter we’ve lost over time,” Topol said.
AI can also free up time for specialists to meet with patients. Topol argues in the book that instead of AI replacing radiologists — widely viewed as the medical specialists most at risk of becoming extinct — AI will enhance them.
“The average radiologist today reads between 50 and 100 films in a day. There’s a significant error rate and a third of radiologists at some point in their career get sued for malpractice,” he said.
Enter deep learning. “You then have an amazing ability to scale where a radiologist could read 10 times as many films or 100 times as many films. But is that what we want? Or do we want to use that capability [so radiologists] can start talking to patients, come out of the basement and review the results, sharing an expertise which they never otherwise get to.” So AI could liberate doctors in a tech-heavy specialty, like radiology, to help patients through a diagnosis — something that doesn’t happen now.
Two big hurdles
Topol is certainly an optimist about the power of AI to make things better — even about personalized diets. “Our health is not just absence of disease. It’s about the prevention of disease,” he told Vox. “And if we can use food as a medicine to help us prevent illness, that would be terrific. We’ll get there someday.”
But you might still be skeptical — that’s fair. The health care system has been abysmal at doing the very basics of incorporating new technology into medical practice, like digitizing medical records. And Topol makes clear in the book that many of these promising technologies, like avatars for mental health or AI for colonoscopies, need to be further validated and refined in clinical studies, and followed up with as they move beyond the study phase and into the real world.
To get there, there are also the privacy and data hurdles to contend with, which could make or break technologies like the avatar shrink. Machine learning is best when lots of data is fed into an algorithm — the more data, the better. “If we’re going to do deep learning and provide feedback, the only way it’ll work well is if we have all a person’s data: sensor data, genome data, microbiome data, [medical records]. It’s a long list.”
But “people don’t have their [personal] data today in this country,” Topol said. “They can’t get all their medical records for every time they’ve been to a doctor or hospital. We’d want each person to have all their data from when they’re still in their moms’ womb.”
Topol has some ideas for how to fix this too. US policymakers need to move in step with countries like Estonia, which found a way to give people full control of their personal data, including their medical data.
Empowering people with their data could also help with security. Our data right now is stored on massive servers and clouds. “The gurus say the best chance of data being secure and maintained privately is to store it in the smallest units possible,” Topol said. “It’ll help guide your health in the times ahead.”