
Managing Uncertainty in Benefit-Cost Analysis


Jared Clark

April 01, 2026


When federal agencies propose new regulations, they are required to estimate the economic consequences — the benefits and costs — of those rules before they take effect. In theory, this process is rigorous. In practice, it is riddled with uncertainty. And as a new analysis in The Regulatory Review argues, agencies must do a far better job of quantifying that uncertainty rather than obscuring it behind single-point estimates and optimistic projections.

The implications of this debate stretch well beyond the corridors of regulatory agencies. For businesses, investors, and organizations preparing for an AI-transformed policy environment, how regulators handle uncertainty in benefit-cost analysis (BCA) is not an abstract academic question. It is a live operational risk.


Why Benefit-Cost Analysis Has Always Been Contested

Benefit-cost analysis is the foundational tool of regulatory economics. Before a major rule is finalized, executive-branch agencies like the Environmental Protection Agency or the Department of Transportation must — under Executive Order 12866 and its successors — estimate whether the rule's projected benefits justify its projected costs. (Independent agencies such as the Federal Communications Commission are largely exempt from these review requirements.)

But the economics of regulation are genuinely hard to predict. Behavioral responses change. Technologies emerge or fail to emerge. Distributional effects are uneven and often invisible in aggregate statistics. And the further out you project, the wider the error bars grow.

A 2021 study by the George Washington University Regulatory Studies Center found that benefit estimates in major rulemakings routinely exceeded realized benefits by factors of two to five, while cost estimates were frequently understated by 30–50% in complex technology-dependent rules. This asymmetry — optimistic benefits, underestimated costs — creates a systematic bias that serves the regulatory agenda in the short term but undermines institutional credibility over time.

The result is a policy environment in which businesses can rarely trust the numbers in a regulatory impact analysis (RIA) at face value, and where litigation over the sufficiency of BCA methodology has become a standard tool for challenging final rules.


The Fresh Argument: Agencies Must Own the Uncertainty

Writing for The Regulatory Review, Dobkin (2026) makes a pointed argument: agencies must better quantify the uncertain economic effects of proposed regulations rather than papering over unknowns with false precision. The piece arrives at a moment of heightened scrutiny of administrative rulemaking, with courts more willing than at any point in recent memory to second-guess agency methodological choices.

The core critique is structural. When an agency presents a single estimated net benefit figure — say, "$4.2 billion in net annual benefits" — it signals a certainty that the underlying models cannot actually support. Regulatory economists have long known that such numbers are point estimates drawn from distributions that are often wide, sometimes multimodal, and occasionally non-convergent. Presenting only the point estimate hides the policy-relevant question: How confident should anyone be in this number?

The most defensible regulatory analyses present not just central estimates but full probability distributions of outcomes, including scenarios where net benefits are negative. This is not a counsel of paralysis — it is a counsel of honesty. A rule with a wide but mostly-positive benefit distribution is a different policy choice than a rule with a narrow, highly certain benefit distribution, even if both have the same central estimate.

Three methodological tools are increasingly considered best practice in high-quality BCA:

  • Sensitivity analysis: Varying individual assumptions (discount rate, elasticity estimates, population exposure) one at a time to test how robust conclusions are to each variable
  • Scenario analysis: Constructing discrete futures — an optimistic case, a central case, a pessimistic case — that reflect plausible combinations of uncertain variables
  • Monte Carlo simulation: Running thousands of randomized draws from input distributions simultaneously to generate a full output distribution for net benefits

The gap between agencies that use all three rigorously and those that use none is enormous — and consequential.
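The third tool is the most demanding but also the easiest to demystify. The sketch below runs a minimal Monte Carlo simulation of net benefits; every input distribution and figure is invented purely for illustration, not drawn from any actual RIA:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of simulation draws

# Hypothetical input distributions -- all figures are illustrative:
benefit_per_unit = rng.lognormal(mean=np.log(50), sigma=0.4, size=N)   # $ per affected unit
units_affected = rng.normal(loc=100_000, scale=15_000, size=N)         # affected units
total_costs = rng.triangular(left=2e6, mode=3.5e6, right=7e6, size=N)  # compliance costs, $

# Each draw is one internally consistent "world"; together they trace
# out the full distribution of net benefits, not just a point estimate.
net_benefits = benefit_per_unit * units_affected - total_costs

print(f"Mean net benefit:    ${net_benefits.mean() / 1e6:,.1f}M")
print(f"5th-95th percentile: ${np.percentile(net_benefits, 5) / 1e6:,.1f}M "
      f"to ${np.percentile(net_benefits, 95) / 1e6:,.1f}M")
print(f"P(net benefits < 0): {(net_benefits < 0).mean():.1%}")
```

A point estimate would report only the mean. The percentile range and the probability that net benefits come out negative are precisely the policy-relevant information that a single headline number suppresses.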


The AI Dimension: Why This Debate Is Accelerating Now

This conversation about BCA uncertainty is not happening in a vacuum. It is being supercharged by artificial intelligence.

AI-driven economic systems introduce a new category of regulatory challenge: the consequences of AI-related rules are often deeply uncertain, path-dependent, and subject to feedback loops that traditional economic models were not built to capture. When a regulator proposes a rule governing AI in hiring, AI-generated content, or autonomous vehicles, the benefit-cost calculus involves:

  • Adoption curves that are genuinely unknown (how quickly will firms comply vs. work around the rule?)
  • Competitive effects that may shift entire industries (will regulation lock in incumbents or enable new entrants?)
  • Second-order labor market effects that standard models underweight (what happens to downstream workers when a regulated platform changes its behavior?)

According to a 2024 NIST report, over 60% of AI-related risk scenarios evaluated in U.S. policy contexts lacked sufficient empirical data to support confident quantitative benefit-cost estimates. This is not a failure of effort — it reflects the genuine novelty of the technology. But it makes rigorous uncertainty quantification not just an intellectual virtue but a practical necessity.

Agencies that pretend to precision when evaluating AI regulations are not just misleading the public. They are setting themselves up for judicial reversal and eroding the administrative record that supports their authority to regulate at all.


What Businesses and Organizations Should Actually Do With This

For organizations navigating a regulatory environment shaped by uncertain BCA, the practical implications run in several directions.

1. Learn to Read Regulatory Impact Analyses Critically

Most businesses read an RIA the way they read a press release — for the headline number. That is a mistake. The methodological appendices, the uncertainty ranges in the technical supporting documents, and the treatment of distributional assumptions all contain information that the headline number suppresses.

If an agency's RIA relies heavily on a single point estimate without sensitivity ranges, that is itself a signal: the rule is analytically vulnerable and more likely to face legal challenge. A rule built on a robust Monte Carlo analysis with explicitly stated confidence intervals is harder to attack and more likely to survive.

| RIA Quality Signal | What It Indicates | Business Implication |
|---|---|---|
| Single-point net benefit estimate only | Methodologically thin, litigation-vulnerable | Higher regulatory uncertainty; plan for delays |
| Sensitivity analysis included | Moderate rigor; common in major rules | Understand which assumptions drive outcomes |
| Scenario analysis with optimistic/pessimistic cases | Higher rigor; shows agency's own uncertainty | Assess which scenario matches your operating environment |
| Monte Carlo or probabilistic simulation | Highest methodological standard | Most reliable basis for compliance planning |
| No uncertainty discussion at all | Legally and analytically weak | Expect legal challenge; delay compliance investments |

2. Engage in the Comment Process With Quantitative Depth

The notice-and-comment period is not just a formality. It is the primary mechanism by which affected parties can challenge the adequacy of an agency's BCA methodology. Comments that offer alternative estimates — grounded in firm-level data, sector-specific economic analysis, or methodological critiques of the agency's assumptions — are taken seriously by both agencies revising final rules and courts reviewing them.

Organizations that engage the rulemaking process with rigorous quantitative commentary are significantly more likely to see their concerns reflected in final rules than those that submit purely qualitative objections. This is not speculation — it is the consistent finding of administrative law scholars studying notice-and-comment effectiveness across decades of major rulemakings.

3. Build Scenario-Based Compliance Roadmaps

If agencies increasingly present multiple scenarios — optimistic, central, pessimistic — organizations should respond in kind. A compliance roadmap built on the assumption that the central-case rule takes effect on schedule is brittle. A compliance roadmap that accounts for the possibility that the rule is delayed by litigation, revised to reflect public comment, or withdrawn entirely is robust.

This means identifying which compliance investments are valuable across multiple regulatory scenarios (these should be prioritized) versus investments that only pay off in one specific outcome (these should be staged or deferred).
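That prioritization logic can be made concrete with a toy expected-value comparison. Everything below is hypothetical: the scenario probabilities and payoffs are invented solely to show the mechanics of weighing a robust investment against a scenario-specific one:

```python
# Hypothetical regulatory scenarios with subjective probabilities (sum to 1).
scenarios = {
    "rule takes effect on schedule": 0.45,
    "rule delayed by litigation": 0.35,
    "rule withdrawn or vacated": 0.20,
}

# Payoff of each compliance investment ($M) under each scenario -- invented figures.
investments = {
    "data-governance upgrade (useful in all scenarios)": {
        "rule takes effect on schedule": 8,
        "rule delayed by litigation": 6,
        "rule withdrawn or vacated": 5,
    },
    "rule-specific reporting system": {
        "rule takes effect on schedule": 12,
        "rule delayed by litigation": 2,
        "rule withdrawn or vacated": -3,
    },
}

# Compare investments on expected value and on worst-case payoff.
for name, payoffs in investments.items():
    expected = sum(p * payoffs[s] for s, p in scenarios.items())
    print(f"{name}: expected ${expected:.1f}M, worst case ${min(payoffs.values())}M")
```

In this toy example the robust investment wins on both expected value ($6.7M vs. $5.5M) and worst case — the pattern that justifies prioritizing investments that pay off across scenarios and staging the ones that depend on a single outcome.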

4. Track Methodological Developments in Regulatory Economics

The debate over BCA uncertainty is not static. The Biden administration's updates to OMB Circular A-4 — the foundational document governing federal regulatory analysis — in 2023 represented the most significant revision to regulatory methodology in two decades. Among other changes, it updated discount rate guidance and added explicit requirements for distributional analysis. The current administration has its own evolving posture toward regulatory economics. Organizations that track OMB guidance updates and major academic contributions to regulatory methodology — such as those published by The Regulatory Review — will have systematic advance warning of shifts in how benefit-cost analysis is conducted.
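The discount rate is a good illustration of how much a single Circular A-4 parameter can move a bottom line. The sketch below uses an invented rule yielding $100M per year over 30 years, valued at the 2023 revision's default 2 percent rate and at the older 3 and 7 percent benchmarks:

```python
def present_value(annual_benefit: float, years: int, rate: float) -> float:
    """PV of a constant annual benefit stream, discounted at end of each year."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical rule: $100M/year in benefits over a 30-year horizon.
for rate in (0.02, 0.03, 0.07):
    pv = present_value(100e6, 30, rate)
    print(f"Discount rate {rate:.0%}: PV = ${pv / 1e9:.2f}B")
# The same benefit stream is worth roughly $2.24B at 2% but only about $1.24B at 7%.
```

One parameter choice nearly halves the headline benefit figure, which is why sensitivity ranges on the discount rate are a minimum expectation in a credible RIA.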


The Judicial Wild Card

No discussion of BCA uncertainty is complete without acknowledging the courts. In the post-Loper Bright environment — the 2024 Supreme Court decision that overturned Chevron deference — courts are no longer obligated to defer to agency interpretations of ambiguous statutory language. This has cascading effects on regulatory methodology.

When courts apply their own independent judgment to whether an agency's BCA was "reasonable," the methodological rigor of the underlying analysis becomes a direct legal asset or liability. A rule supported by a transparent, uncertainty-aware analysis that explicitly addresses counterarguments is far more defensible than one built on a single optimistic estimate.

We are almost certainly entering a period in which BCA methodology itself becomes a primary battleground in regulatory litigation. Agencies know this. Organizations challenging or defending rules know this. And businesses trying to plan around regulatory timelines need to know it too.


The Bigger Picture: Uncertainty as an Institutional Fact

There is a deeper issue underneath all of this technical methodology. Regulatory benefit-cost analysis has always served two functions simultaneously: an epistemic function (what do we actually expect this rule to do?) and a political function (how do we justify this rule to the public, courts, and Congress?). When these two functions conflict, the political function has historically won. Estimates get rounded up. Uncertainties get suppressed. Distributions get replaced with convenient point estimates.

Dobkin's analysis and the broader trend it represents suggest that this approach is becoming increasingly untenable. Courts are more skeptical. Computational tools for rigorous uncertainty analysis are more accessible. Academic scrutiny of RIAs is more systematic. And in an era where AI-driven policy changes could reshape entire industries overnight, the stakes of getting the uncertainty wrong are simply too high.

The most durable regulatory frameworks of the coming decade will be those built not on false certainty, but on transparent acknowledgment of what we know, what we don't know, and how much the outcome depends on which unknowns resolve in which direction.

For organizations navigating this environment, that is both a challenge and an opportunity. Understanding regulatory uncertainty better than the average affected party is itself a form of competitive advantage.


Further Reading

For more on how artificial intelligence is reshaping the policy and institutional landscape, explore How AI Is Changing Policy and Governance and The Future of Regulatory Economics in an AI-Driven World on prepareforai.org.


Source referenced: Dobkin, "Managing Uncertainty in Benefit-Cost Analysis," The Regulatory Review, March 30, 2026. Available at: https://www.theregreview.org/2026/03/30/dobkin-managing-uncertainty-in-benefit-cost-analysis/


Last updated: 2026-04-01


Jared Clark

Founder, Prepare for AI

Jared Clark is the founder of Prepare for AI, a thought leadership platform exploring how AI transforms institutions, work, and society.