Posted on December 27, 2022 by Amy Sinden
As we approach the two-year mark, there are plenty of unfinished items on the President’s to-do list. One in particular that tends to escape the limelight is of crucial importance to implementing Biden’s policy priorities, especially as we enter these next two years of divided government. That’s the promised overhaul of the decades-old system of centralized regulatory review, under which the White House Office of Information and Regulatory Affairs (OIRA) scrutinizes regulations before they’re issued to—among other things—make sure they pass a cost-benefit test.
Maybe the problem is that the good folks at OIRA are still waiting for a boss. But with Biden's nominee, former NYU Dean and ACOEL member Ricky Revesz, now (hopefully) poised for a floor vote in the Senate, that may soon be remedied.
When OIRA is finally able to turn to the task of "improving and modernizing" regulatory review, I hope it will heed the lesson evident in a number of recent EPA rulemakings: Many of the most important values we charge our regulatory agencies with protecting—things like saving lives or preventing neurological damage to kids from pollution exposure—are simply not reducible to dollars-and-cents terms. It's not that scientific understanding is entirely lacking; it's that the data often are not sufficiently granular to allow quantification.
In its proposed revision to the Risk Management Program rules aimed at preventing accidents at facilities handling hazardous chemicals, for example, EPA tallied up $76 million in annual costs but was unable to quantify any of the benefits. The problem, as the agency noted, was that "accident frequency and severity are difficult to predict." Nor could EPA project precisely how each provision of the proposed rule would affect the scope and magnitude of any given accident's impacts. EPA faced similar challenges in its proposed rule to list PFOA and PFOS as hazardous substances under CERCLA, where it was unable to quantify any benefits at all and only a small sliver of costs.
Generally, the Clean Air Act has been the one area in which EPA has had some success in quantifying benefits—regularly producing estimates in the billions or tens of billions of dollars that swamp cost estimates by wide margins. But here too, EPA faces challenges. These big benefits numbers have been almost entirely attributable to two pollutants—particulate matter (PM) and, to a lesser extent, ozone—which happen to be particularly amenable to epidemiological study. When it comes to the hazardous air pollutants (HAPs), the picture looks quite different. Like its predecessors, the Biden EPA has been entirely unable to quantify the benefits associated with reductions in HAPs, even when they're the target of the rule. (See examples here, here, and here.) Instead, the monetized benefits for these rules are entirely attributable to the salutary fact that the same control methods that reduce HAPs also happen to reduce PM and ozone.
While industry lawyers are crying foul [subscription required] and teeing up claims for future court challenges arguing that EPA’s failure to fully quantify costs and benefits renders these rules “arbitrary and capricious,” in my view, EPA and OIRA should be commended for being honest and transparent about what they do and don’t know. It’s not that these rules aren’t justifiable. It’s just that there’s a big difference between having the science to make a convincing qualitative case that something causes serious harm to public health and the environment and having the granular data necessary to quantify that effect.
These examples are emblematic of larger patterns. Empirical work (by myself and others) indicates that information gaps and uncertainties like these are pervasive in EPA rulemaking. In the vast majority of cases, they stymie quantification and preclude any meaningful calculation of net benefits.
In prior administrations (both Republican and Democratic), agencies have, despite these yawning data gaps, felt enormous pressure to monetize both sides of the equation and have hesitated to submit rules unless they could make their case on the numbers alone. This hyper-attention to dollars and cents has effectively imposed on agencies a burden of proof that is in many instances insurmountable, exerting a chilling effect on the implementation of regulatory safeguards.
The good news is that members of Congress were well aware of these pervasive data gaps when they passed our environmental statutes. In response, they came up with a lot of creative ways to make sure costs are kept in check and are not disproportionate to benefits, without requiring them to be directly weighed against each other. In this way, they avoided the need for agencies to precisely quantify and monetize regulatory benefits. In contexts in which significant benefits (or costs) can’t be quantified, these tools—feasibility analysis, cost-effectiveness analysis, qualitative cost-benefit analysis, and multi-factor qualitative balancing—can often provide a more useful framework for rational decision making.
While we can only read the tea leaves at this point, there is at least some reason to hope that Biden's OIRA is moving away from the hyper-formalistic version of cost-benefit analysis that prevailed in previous administrations. EPA's willingness to be forthright about its inability to quantify costs and benefits is a good sign, as is the way in which President Biden talked about regulation in his day-one memo—as something that "promote[s] the public interest [and is] vital for tackling national priorities."
I hope that OIRA’s overhaul of the regulatory review process will align the practice of analyzing costs and benefits with President Biden’s progressive vision. This would include reaffirming the primacy of federal agencies and their statutory mandates in regulatory decision-making and directing agencies to use the context-specific methods specified in their authorizing statutes for considering costs and benefits rather than applying an overly formalistic version of cost-benefit analysis as a one-size-fits-all tool.