Right now, many are wondering: “How long are people going to be dying of Covid-19, and when can businesses reopen?”
To answer those questions, scientists and policymakers have turned to a growing number of infectious disease forecast models. None of these models is perfect. Each uses different methods, makes different assumptions, grapples with uncertainty in different ways, and yields different results.
It’s hard to know — for policymakers, and for the public — which model to trust the most for decision-making. There also hasn’t been a lot of formal accountability at a time when the accuracy of models is so important and can mean lifting restrictions and possibly exposing more people to the virus.
“Right now, there’s nobody responsible for recording and archiving who said what,” Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security, says. That’s a problem, she says. Because to better predict and quell future outbreaks, we need to understand what is working — and what’s not — during the current one.
Some researchers are doing this independently. Recently, a group of statisticians from Australia and the US took a careful look at modeling forecasts of the Institute for Health Metrics and Evaluation (IHME), which operates out of the University of Washington. IHME models have been among the most commonly cited during the outbreak, influencing even the White House.
In a preprint paper (not yet peer-reviewed), the statisticians found that the model — and specifically how its predictions of the number of daily Covid-19 deaths matched with reality — was inconsistent. (The model, they found, decently predicts what’s happening across the United States as a whole. “What it doesn’t do well is predict most individual states,” Sally Cripps, a statistician at the University of Sydney who co-authored the analysis, told me.)
That’s a useful critique that could potentially help scientists make better models. But a one-time effort is not enough.
Rivers has a big idea to overhaul how infectious disease forecasts are done and evaluated in the United States. Just as the National Weather Service both studies the science of weather forecasting and produces forecasts, Rivers thinks there ought to be a national infectious disease forecasting center.
I recently called her to discuss the idea.
This interview has been edited for length and clarity.
Diagnose the problem for me. How is the current system for forecasting infectious diseases failing? And how could we do better?
Modeling plays a really important role in decision-making during outbreaks. We saw this in 2009, during the H1N1 pandemic; we saw it during Ebola in 2014. In this outbreak ... we see President Trump at the White House press conferences describing the modeling results. They’re informing decision-making.
But right now, most infectious disease modelers work in academia. They have their regular projects that they are doing day in and day out. When there is a public health emergency like the one we’re facing, they drop what they are doing and volunteer — usually for free — in order to support the public response.
We have some of the best infectious disease modelers in the world. I’m really proud of my community. But if we’re going to rely on them so heavily — and if they are going to be so important — we need to think more about how to make that a formalized capability, and not a volunteer effort.
I think it should be something like a national infectious disease forecasting center.
Does this not exist? Does the Centers for Disease Control and Prevention do this? Or some other arm of the Department of Health and Human Services?
There are several small groups within government that do this work.
There’s a small group in HHS, and there’s a small group in CDC, there’s a small group at NIH [the National Institutes of Health], but it’s not that many people. They are staffed to support their agencies during peacetime — if you’ll forgive the war terminology. When there is an infectious disease emergency, there’s just a raw personnel problem.
What about the National Weather Service do you think is worth emulating in a disease forecasting center?
There’s a lot of lessons from the National Weather Service. They have a whole research and development layer, which involves building new models and finding new ways to review data sources, and really pushing that envelope.
There’s also the layer that focuses on implementing those models and telling us what the weather is going to be.
And then there’s the local layer, when you turn on the TV and you see a local meteorologist interpreting for you what the National Weather Service has said.
Thinking through those different pillars — of research and development, translation, and applied public health — is really inspirational.
Does this need to be a permanent workforce, or something like an Army reserves?
It needs to be a permanent workforce. There is a lot more work that could be done during downtime in order to improve the capacity and improve the science, and a lot of those functions don’t sit well in academia.
Right now, there’s nobody responsible for really recording and archiving who said what: What did modelers say, and what happened? What prediction did they make today, how does it change tomorrow, how does it change the next day, and how does that relate to what actually happened? No one is doing that. That is something that a government function could do.
I report on hurricanes when they happen. And, yeah, you can go to the National Weather Service and see decades’ worth of hurricane predictions and compare them to the actual storm paths. There are even some reassuring charts showing that hurricane track forecasting models are getting much, much better over time.
Yeah, we’re not doing that [for disease models]. We have not been doing that work because there is nobody responsible for it.
It’s hard to get a grant to do things like archiving forecasts, and finding out which ones were right and which ones were wrong and why. That’s not the kind of thing that academics … you can’t build a career off of that.
Why is it hard to make a career in checking up on disease modeling? Couldn’t the government encourage more of this by just increasing the number of research grants in this area?
For things that are so fundamental, we usually make those government functions.
Is there a path forward in which academia could be structured and funded so it could better support those needs? Maybe. But if we’re taking the opportunity to think through what would be best and most effective, that would be a government function.
The reason we have accurate weather forecasts is because there was a federal agency responsible for improving the forecasts.
Also, again, on pushing the science forward: A lot of academic infectious disease modelers are focused on diseases that recur, like seasonal influenza or dengue or cholera. Those are excellent uses of their time. But there is not as much thinking specifically on these emerging infectious disease threats, like the one we’re facing now — again, because it’s hard to make a career out of something that doesn’t come around very often. A federal agency would be able to put more time and thinking into the very specific purpose of using modeling to support decision-making during crises.
What tools would an agency like this need? Weather forecasters have weather stations reporting conditions on the ground; they have the means to know what’s happening with air systems in the upper atmosphere. What tools will disease forecasters need to do this work better than it is being done now?
That’s exactly the kind of question that I think an agency should put a lot of time into answering. That’s the entire future of the field.
First of all, we need more detailed and timely data about who is getting sick. Right now, most modelers are having to rely on case counts over time. If you Google what’s happening with Covid today, you find a website with a little graph: That’s the exact same data that the modelers are using.
But if we can bring in more detailed, more patient-level data, we can add a lot of richness to our analyses.
I also think there should [be] more detailed information about human movement and human behavior: There are a lot of questions right now about “Are people social distancing?” … The fact is we don’t really know, because we don’t have the capacity to evaluate that.
But we could. Mobile phone data has been used in the past to understand population movement, but it has not been systematized. And then there are a lot of other ideas that haven’t been explored.
Is forecasting infectious diseases harder than forecasting the weather?
I am not a meteorologist. But it is harder, because the weather is governed by physics. Some of the transmission dynamics are as well.
What we don’t have a good handle on is human behavior and how that changes. We are always changing the outcome with the choices that we make, and there are ways we can build that into the models and do that better, but it definitely adds complexity.
Humans are hard creatures to understand. Might there always be limits on what we can predict when it comes to disease outbreaks?
Even just going through the practice of finding out what the limits are, and being able to build that into our communications, would be so helpful.
If we can add more detail about the ways we might be right and the ways we might be wrong, and add more richness to the variables we think are influencing the outcome, that would be really helpful.
Unlike the weather, with pandemics, we actually influence the outcome. ... People see these numbers, and they are motivated then to be more vigilant about staying home and doing all the things that change that outcome.