One of the more vexing aspects of climate change politics and policy is the longstanding gap between the models that project the physical effects of global warming and those that project the economic impacts. In a nutshell, even as the former deliver worse and worse news, especially about a temperature rise of 3 degrees Celsius or more, the latter remain placid.
The famous DICE model created by Yale’s William Nordhaus projects that a 6-degree rise in global average temperature — which the physical sciences characterize as an unlivable hellscape — would dent global GDP by only about 10 percent.
Projections of modest economic impacts from even the most severe climate change affect climate politics in a number of ways. For one thing, they inform policy goals like those President Obama offered in Paris, restraining their ambition. For another, they fuel the arguments of “lukewarmers,” those who say that the climate is warming but it’s not that big a problem. (Lukewarmism is the public stance of most Trump Cabinet members.)
Climate hawks have long had the strong instinct that it’s the economic models, not the physical-science models, that are missing something — that the current expert consensus about climate economic damages is far too sanguine — but they often lack the vocabulary to do any more than insist.
As it happens, that vocabulary exists. At this point, there is a fairly rich literature on the shortcomings of the climate-economic models upon which so much political weight rests. (Here’s an old post of mine from 2015 bashing them.)
Two recent papers help simplify and summarize that literature. They are addressed to different audiences (one the US, one the international community), but both stress the importance of improving these lagging models before the next round of policymaking. I’ll touch on the US-focused one first, the international one second.
US climate policy is moribund; it’s a good time to update models
The first paper is “Time to refine key climate policy models,” a commentary in Nature Climate Change by Alexander Barron, a former Environmental Protection Agency and congressional staffer (on the Waxman-Markey bill) who is now at Smith College.
He notes that a relatively small set of models — “a handful of computable general equilibrium (CGE) models, sector-specific models, or hybrids like the US Energy Information Administration (EIA)’s National Energy Modeling System (NEMS)” — tends to shape expectations and policymaking in the US. And there are reasons to believe those models systematically underestimate climate change’s economic impacts and overstate the costs of mitigating it.
Barron summarizes the areas where current modeling falls short.
Technology costs: When it comes to clean energy technology, economics and policy are moving quickly, and because of the vicissitudes of academic review, the cost data used in official models is often years old and well out of date. Plus, models assume learning curves that renewables have exceeded again and again. “Work to incorporate empirically supported learning rates and induced innovation is challenging but possible,” Barron writes, “and research suggests that including innovation can significantly lower required carbon prices for a given target.”
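To see how much an assumed learning rate matters, here is a minimal sketch of a Wright’s-law cost projection — a standard learning-curve formulation, not anything taken from Barron’s paper, and every number below is hypothetical:

```python
# Wright's law: each doubling of cumulative deployment cuts unit cost by a
# fixed fraction (the learning rate). All figures below are hypothetical.

def projected_cost(initial_cost, doublings, learning_rate):
    """Unit cost after `doublings` of cumulative deployment."""
    return initial_cost * (1 - learning_rate) ** doublings

initial = 100.0  # arbitrary starting cost, $/MWh
for doublings in range(7):
    assumed = projected_cost(initial, doublings, 0.10)  # rate a stale model might use
    faster = projected_cost(initial, doublings, 0.20)   # rate closer to solar's actual record
    print(f"{doublings} doublings: assumed ${assumed:6.1f}/MWh, faster ${faster:6.1f}/MWh")
```

After six doublings, the two assumptions differ by roughly a factor of two — one way a model can end up recommending a much higher carbon price than a target actually requires.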
Opportunities outside the electricity sector: As I’ve written recently, climate policy urgently needs to broaden its gaze from the power sector and start taking on other sectors. But models inhibit that. In models, the transportation sector and especially the industrial sector are resistant to carbon prices.
Research is needed to disaggregate those sectors into subsectors and find the places where policy can gain traction, and to explore the effects of electrification, widespread behavioral changes, cutting-edge technologies (like autonomous vehicles), and other things to which models remain largely blind.
Demand and energy efficiency: Though virtually all decarbonization scenarios involve massive amounts of energy efficiency, we remain unable to model it very well. It is often “forced” into models as an exogenous variable, but at a flat per-kWh cost that allows little differentiation between the different potentials of different subsectors. Data on efficiency is not sophisticated enough to allow models to intelligently allocate resources to it and within it.
“All models would benefit from sustained investment in improved efficiency cost estimates, more publicly available data, updated information on adoption behaviors, and careful examination of model responses,” Barron writes.
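The flat-cost problem is easy to illustrate. In the sketch below — subsector names, costs, and savings potentials are all invented — a model with subsector-level cost data can buy the cheapest savings first, while a single flat per-kWh cost cannot differentiate at all:

```python
# With differentiated costs, allocate an efficiency budget cheapest-first;
# with one flat $/kWh cost, the model cannot tell subsectors apart.
# All names, costs, and potentials below are made up for illustration.

cost_per_kwh = {                  # hypothetical cost of saved energy, $/kWh
    "lighting retrofits": 0.02,
    "HVAC upgrades": 0.04,
    "industrial motors": 0.06,
    "building envelopes": 0.10,
}
potential_kwh = {                 # hypothetical savings available, kWh
    "lighting retrofits": 100,
    "HVAC upgrades": 150,
    "industrial motors": 200,
    "building envelopes": 300,
}

budget = 15.0                     # dollars to spend on efficiency
saved = 0.0
for name, cost in sorted(cost_per_kwh.items(), key=lambda kv: kv[1]):
    spend = min(budget, cost * potential_kwh[name])  # buy cheapest savings first
    saved += spend / cost
    budget -= spend
    if budget <= 0:
        break

print(f"kWh saved with differentiated costs: {saved:.0f}")
print(f"kWh saved assuming a flat $0.06/kWh: {15.0 / 0.06:.0f}")
```

Same budget, very different answers — which is why per-subsector cost data changes how much efficiency a model will “buy.”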
Social benefits: Models often omit or undercount social benefits like health improvements and reductions in premature mortality from lower air pollution, reductions in disaster management costs, and the, uh, “use value” of a clean environment (hiking and stuff). “In fact,” Barron writes, “none of the modeling platforms historically used to analyze US climate policy produce direct estimates of the economy-wide reductions in air pollutants, let alone their economic impacts.”
(In late 2017, EPA’s Science Advisory Board released detailed recommendations for how to improve cost-benefit analyses along these lines. EPA Administrator Scott Pruitt is currently working to neuter the board and make cost-benefit analysis even worse.)
Uncertainty: To “reduce fixation on a single scenario,” Barron says, policymakers should be presented with a range of projections emphasizing how outcomes vary with assumptions and sensitivities. Modelers should strain to ensure that journalists and policymakers understand that models are not predictions and that outcomes depend entirely on our choices.
These are fairly technical problems, but solutions are within the grasp of the research community. US climate policy is likely on hold for (at least) four years, as Trump madness is worked out, but this is an area where real progress can be made in the meantime.
“Shortcomings in existing policy models represent barriers to climate policy that could be reduced with modest resources and limited political capital,” Barron writes. It would be nice for better models to be available when the US returns to sane climate policymaking.
The IPCC is working on its next big report and still using models that underestimate economic damages
The second paper, in Review of Environmental Economics and Policy, makes the same point — commonly used models are underestimating the economic impacts of climate change — in a slightly different way, to a different audience.
The audience in this case is the Intergovernmental Panel on Climate Change (IPCC), which is preparing to pull together its Sixth Assessment Report, to be released over 2021 and 2022. IPCC assessment reports are hugely influential in global policymaking.
The models typically used to estimate these effects are integrated assessment models (IAMs). They are “integrated” in that they couple economic and climate models: the economy produces emissions, which feed into the climate model, which produces physical effects, which are fed back into the economic model through a “damage function.” To weigh outcomes, IAMs use an “expected utility function” — that is, they sum up effects weighted by their probability of occurring.
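That feedback loop can be sketched in a few lines. This is a cartoon, not any real IAM; every parameter below is invented, with the quadratic damage coefficient chosen only to echo the roughly-10-percent-at-6-degrees figure mentioned earlier:

```python
# Cartoon IAM loop: economy -> emissions -> climate -> damages -> economy.
# Parameters are illustrative, not calibrated to DICE or any real model.

def step(output, temperature):
    emissions = 0.5 * output             # economy produces emissions
    temperature += 0.002 * emissions     # emissions warm the climate
    damages = 0.0028 * temperature ** 2  # quadratic "damage function"
    output *= 1.02 * (1 - damages)       # growth, trimmed by damages
    return output, temperature

output, temperature = 100.0, 1.0
for _ in range(30):
    output, temperature = step(output, temperature)
print(f"After 30 periods: output {output:.1f}, warming {temperature:.2f} C")
```

Even in this toy run, damages eventually outweigh the 2 percent growth rate — and everything hinges on the shape of that one damage line.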
The problems with IAMs are well-explored at this point (see this collection of papers from the National Academy of Sciences; or, see above), but the paper’s authors — Thomas Stoerk of the Environmental Defense Fund, Gernot Wagner of the Harvard University Center for the Environment, and Bob Ward of the ESRC Centre for Climate Change Economics and Policy and Grantham Research Institute at the London School of Economics and Political Science — focus on one in particular.
The expected utility function does not allow modelers to indicate their subjective confidence in various sources of input data. And many difficult-to-quantify effects are omitted entirely; “physical impacts are often not translated into monetary terms and they have largely been ignored by climate economists,” the authors write.
In other words, IAMs have the effect of undercounting risks and masking uncertainty, which is unfortunate since risks and uncertainties are the signal features of climate policymaking.
The heart of the critique is that IAMs do not properly account for “tipping points,” levels of atmospheric change “beyond which impacts accelerate, become unstoppable, or become irreversible.” We do not have a great understanding of tipping points or exactly when they might occur. But we know that they become more likely as temperature climbs and become almost certain as temperatures rise more than 4, 5, or 6 degrees.
Such high temperatures are not the most likely outcome, but they are, shall we say, less unlikely than one might like. The distribution of climate outcomes has a “fat tail”: even the far end of extreme possibilities, a 6-degree apocalypse, carries a probability of roughly 5 to 10 percent.
IAMs do not account well for fat-tail risks. Nor do they account for “ambiguity aversion,” the authors write, “a widely held preference to avoid uncertainty.” If people properly understood that risk, they probably wouldn’t like the idea of betting millions of lives, and possibly the future of the species, on avoiding an outcome with 5 to 10 percent probability.
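A back-of-the-envelope calculation shows why the fat tail matters so much. The probabilities and damage fractions below are invented purely for illustration:

```python
# Why the fat tail dominates: probability-weighted damages with and without
# a 5% chance of catastrophe. All probabilities and losses are invented.

damages = {2.0: 0.01, 3.0: 0.03, 6.0: 0.50}    # fraction of GDP lost per outcome

thin_tail = {2.0: 0.63, 3.0: 0.37}             # catastrophic tail ignored
fat_tail = {2.0: 0.60, 3.0: 0.35, 6.0: 0.05}   # 5% chance of a 6-degree outcome

def expected_damages(distribution):
    return sum(p * damages[t] for t, p in distribution.items())

print(f"thin tail: {expected_damages(thin_tail):.1%} of GDP")
print(f"fat tail:  {expected_damages(fat_tail):.1%} of GDP")
```

A 5 percent sliver of the distribution more than doubles expected damages — exactly the kind of risk an expected-utility average can bury.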
The most straightforward way to better integrate tipping points into IAMs would be to increase the steepness of the damage function. In a 2012 paper, famed Harvard climate economist Martin Weitzman “proposes a steeper damage function that relies on input from an expert panel that explicitly considered physical tipping points,” the authors write. “This damage function leads to a loss of global output of around 50 percent for a temperature increase of 6°C.”
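To illustrate what “steeper” means — this is not Weitzman’s actual functional form, just an invented calibration pinned to 50 percent loss at 6°C:

```python
# A DICE-style quadratic damage function versus a steeper one with a
# high-order term added. Coefficients are illustrative, not from any real
# model: the quadratic gives ~10% loss at 6 C; the extra term is solved
# so the steeper curve gives 50% loss at 6 C.

QUAD = 0.0028                         # quadratic coefficient (~10% at 6 C)
STEEP = (0.50 - QUAD * 6**2) / 6**6   # pins the steeper curve to 50% at 6 C

def damages_quadratic(t):
    return QUAD * t**2

def damages_steep(t):
    return QUAD * t**2 + STEEP * t**6

for t in (2, 3, 4, 5, 6):
    print(f"{t} C: quadratic {damages_quadratic(t):5.1%}, steep {damages_steep(t):5.1%}")
```

The two curves barely differ through 3 degrees and then diverge violently — which is why tipping-point-aware damage functions change the policy math so much.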
Subsequent research has supported the contention that proper consideration of tipping points raises both expected economic impacts and the optimal size of a carbon price.
The authors summarize their conclusions about the state of climate economics:
First, the expected utility framework fails to capture important dimensions of the climate decision problem. Second, when uncertainty is explicitly considered within the expected utility framework, estimates of the economic damages from climate change generally increase, often by as much as an order of magnitude. [my emphasis]
The authors urge the IPCC to look beyond the expected utility framework and explore other models of decision-making under uncertainty. “Instead of the technical expert calculus that is currently used,” the authors write, “decisions concerning optimal climate policy should ideally move to public debates about the ethical choices that underlie different decision frameworks.” (Amen to that.)
And they urge the IPCC to better account for tipping points, which will have the effect of raising economic impact estimates and reducing estimated policy costs.
Model talk is kind of boring, but models underlie everything
There’s a lot of technical mumbo-jumbo flying around in these conversations about models, so it’s important to step back and recall the point of all this.
Policymakers want to know how much climate change will hurt the economy. They want to know how much policies to fight climate change will cost. Models provide them with answers. Right now, models are (inaccurately) telling them that damage costs will be low and policy costs will be high.
Political mobilization on climate change is going to fight a headwind as long as policymakers are getting those answers from models.
We need models that negatively weigh uncertainty, properly account for tipping points, incorporate more robust and current technology cost data, better differentiate sectors outside electricity, rigorously price energy efficiency, and include the social and health benefits of decarbonization.
One, such models would be more accurate, better at their task of informing policymakers. And two, they would justify far more policy and investment to fight climate change than has been seen to date in the US or any other major economy. We shouldn’t let the blind spots and shortcomings of current models undermine political ambition.
Save the models, save the world.