This is an excerpt from The Cost-Benefit Revolution by Cass Sunstein, which explains why policies should be based on careful consideration of their costs and benefits rather than on intuition, popular opinion, interest groups, and anecdotes.

Cost-Benefit Analysis as a Foreign Language 

Pages 37–38

A fanciful question: When public officials in English-speaking nations are trying to resolve a difficult problem, should they try to conduct their meetings in French? That would be crazy, of course. But cost-benefit analysis is essentially a foreign language, and it has the same effect identified in research on the foreign-language effect: it reduces people’s reliance on intuitive judgments that sometimes go wrong, especially in highly technical areas.

Imagine, for example, that because of some recent event—a railroad accident, an outbreak of food-borne illness—both ordinary citizens and public officials are quite alarmed, and they strongly favor an aggressive (and immediate) regulatory response. Imagine too that the benefits of any such response would be relatively low, because the incident is highly unlikely to be repeated, and that the costs of the response would be high. The language of cost-benefit analysis imposes a firm brake on an evidently unjustified initiative.

Or suppose that a 1 in 50,000 lifetime risk of, say, getting cancer in the workplace, faced by millions of workers, is not much on the government’s viewscreen because it seems to be a mere part of the social background, a fact of life. But suppose the risk could be eliminated at a relatively low cost and that it would save at least five hundred lives annually. Cost-benefit analysis would seem to require that risk to be eliminated, even if the public is not clamoring for it.
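
The arithmetic behind such figures is a simple expected-value calculation. As a rough sketch (the size of the exposed population is an assumption for illustration, not a figure from the text): if 25 million workers each face a 1 in 50,000 lifetime risk, the expected toll is

\[
25{,}000{,}000 \times \frac{1}{50{,}000} = 500 \text{ expected deaths.}
\]

Whether a total of that kind translates into hundreds of deaths per year depends on further assumptions about how long the exposure lasts and how quickly the exposed population turns over; the point of the calculation is only that a risk too small to notice individually can be large in the aggregate.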

Whatever the problem, cost-benefit analysis should also reduce and perhaps even eliminate the power of behavioral biases and heuristics (such as availability)—both of which strengthen or weaken the public demand for regulation. It forces officials to speak in a language that people do not ordinarily use and that they may even find uncongenial—but insofar as it helps correct biases, it is all the better for that. True, people can frame costs and benefits in certain ways to try to make one or another outcome seem better, but doing that is not exactly easy. If a regulation would cost $900 million annually and prevent thirty premature deaths, it is not clear how to present those facts to make them sound a lot different—and if someone tries, public officials will probably see right through their efforts.
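
A back-of-the-envelope calculation shows why those numbers resist spin. Using a value of a statistical life of roughly $10 million, a figure in the general range that US agencies have used (the precise number here is an assumption for illustration):

\[
\text{cost per death averted} = \frac{\$900 \text{ million}}{30} = \$30 \text{ million}, \qquad \text{monetized benefits} = 30 \times \$10 \text{ million} = \$300 \text{ million.}
\]

On those assumed figures, the regulation costs about three times what its benefits are worth, and no reframing of the same two inputs changes that ratio.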

Cost-benefit analysis is essentially a language, one that may defy ordinary intuitions and that must ultimately be evaluated in terms of its fit with what people, after a process of sustained reflection, really value. One of its virtues is that it weakens the likelihood that policymakers will make decisions on the basis of intuitive reactions that seem hard to resist and that cannot survive that reflection.
 
Work on the foreign-language effect seems to be about psychology, not politics or law. But it clarifies and fortifies the claim that in legislatures, bureaucracies, and courtrooms, as in ordinary life, we often do best to translate social problems into terms that lay bare the underlying variables and make them clear for all to see. In that sense, there is also a democratic argument for cost-benefit balancing.

N’est-ce pas?

Pages 140–145

When a product or activity creates a risk, even a small one, many people argue in favor of precautions and, in particular, in favor of the Precautionary Principle. The principle takes diverse forms, but the central idea is that regulators should take aggressive action to avoid environmental risks, even if they do not know that those risks will come to fruition, and indeed even if the likelihood of harm is very low. Suppose, for example, that there is some probability, even a small one, that genetic modification of food will produce serious environmental harm or some kind of catastrophe. For those who embrace the Precautionary Principle, it is important to take precautions against potentially serious hazards, simply because it is better to be safe than sorry. Especially if the worst-case scenario is very bad, strong precautions are entirely appropriate. Compare the medical situation, in which it is tempting and often sensible to say that even if there is only a small probability that a patient is facing a serious health risk, doctors should take precautions to ensure that those risks do not materialize.

In an illuminating account, the Precautionary Principle is understood as holding “that if an action or policy has a suspected risk of causing severe harm to the public domain (affecting general health or the environment globally), the action should not be taken in the absence of scientific near-certainty about its safety. Under these conditions, the burden of proof about absence of harm falls on those proposing an action, not those opposing it.” The Wingspread Declaration puts it more cautiously: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof.”

The influential 1992 Rio Declaration states, also with relative caution: “Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” In Europe, the Precautionary Principle has sometimes been understood in a still stronger way, suggesting that it is important to build “a margin of safety into all decision making.” This stronger version takes the form of a suggestion that when an activity, product, or situation might create risks, it is appropriate to take precautions against those risks, even if the probability of harm is very low.

Some of the central claims on behalf of the Precautionary Principle involve uncertainty, learning over time, irreversibility, and the need for epistemic humility on the part of scientists. With respect to uncertainty: any consensus might turn out to be wrong; today’s assurance might be tomorrow’s red alert. With respect to learning over time: in a decade, we are likely to have a lot more information than we have today; why should we allow practices to continue when we might learn that they are dangerous? With respect to irreversibility: some practices threaten to cause irreversible harm, and for that reason, we might want to pay something, possibly a great deal, to prevent that harm from occurring. For those who emphasize irreversibility, the general attitude in the face of uncertainty is “act, then learn,” as opposed to the tempting and often sensible alternative of “wait and learn.” With respect to epistemic humility: knowing that we do not know, we might want to take precautions, rather than to proceed on the assumption that we are omniscient.
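
The irreversibility argument can be made concrete with a stylized two-period comparison (all numbers assumed for illustration). Suppose that continuing a practice yields a benefit of B = 10 while we study it, but that with probability p = 0.1 it inflicts an irreversible harm of H = 200 that we will discover only afterward. “Wait and learn” collects the benefit but risks the harm; “act, then learn” forgoes the benefit in order to foreclose the harm:

\[
\text{wait and learn: } B - pH = 10 - 0.1 \times 200 = -10, \qquad \text{act, then learn: } 0.
\]

On these numbers precaution wins; cut the probability to p = 0.04 and waiting wins (10 − 0.04 × 200 = 2), which is one way of seeing that the choice turns on facts rather than on the principle alone.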

...

GMOs are often thought to trigger the Precautionary Principle, with special emphasis on the need for continued monitoring, residual uncertainty, and potentially irreversible or catastrophic environmental risks. This is no mere theoretical point. As one commentator explains, European “legislation that governed GMOs used a precautionary approach, and precaution was one basis for the de facto moratorium on authorizations of GM varieties.”

...

Let us bracket the most complicated questions here and simply note that in this light, a precautionary argument for labeling GM foods (or otherwise for regulating them) depends largely on answering questions of fact. Do such foods promise modest benefits, or instead large ones? With respect to harm, are we speaking of risk, uncertainty, or ignorance? The scientific consensus appears to be that we are speaking of risk, and that the underlying danger is very low. The consensus may or may not prove correct, but its correctness, however important, raises no interesting conceptual questions for our purposes. At the same time, it is true that those who favor a kind of epistemic humility, even toward a scientific consensus, will be drawn to a precautionary approach.

It should be added that if GM foods really do create a potentially catastrophic risk and if a sensible version of the Precautionary Principle is therefore triggered, GM labels are hardly an obvious response. In the abstract, they seem far too weak and modest; more aggressive regulation is justified. Indeed, GM labels might do no good at all. The counterargument is that they might be able to diminish the risk, in light of certain assumptions about the likely consumer response, and so might count as one reasonable step. I have raised a question about whether the science, and gaps in scientific understanding, justify invocation of precautionary thinking here, but if they do, labeling might be a justified if partial response.

On one view, the Precautionary Principle is not only or even fundamentally about irreversibility, catastrophe, and decision theory. It has an insistently democratic foundation. Its goal is to assert popular control over risks that concern the public. It is about values, not facts. If members of the public are concerned about GMOs, nuclear power, or nanotechnology, then the Precautionary Principle provides them with a space in which to assert those concerns. It ensures democratic legitimation of the process of risk regulation.

For those who embrace the Precautionary Principle on this ground, efforts to speak of costs and benefits will fall on deaf ears. For those who believe that, in this domain or others, scientists are in the grip of powerful private interests, and that the system is “rigged,” a precautionary approach will seem especially appealing—not least for democratic reasons. If the science is compromised and hence unreliable, it should hardly be decisive. For those who believe that popular concerns often turn out to be justified even if scientists discount them, the democratic justification for the Precautionary Principle might even turn out to be appealing on epistemic grounds.

No abstract argument can rule out the possibility that scientists are mistaken or that they have been compromised. It is true that a scientific consensus in favor of safety can be wrong; the same is the case for a scientific consensus in favor of danger. For those who favor the Precautionary Principle on democratic grounds—and believe that popular concerns about GM foods are a legitimate basis for invocation of the principle—the arguments offered here cannot be decisive. The only response is that some form of welfarism, embodied in the self-conscious efforts to catalog the human consequences of regulation, should not be trumped by baseless fear—and that cost-benefit analysis, understood as a form of applied welfarism, should not be abandoned merely because people are needlessly worried.

In numerous contexts, Congress requires or authorizes federal agencies to impose disclosure requirements. In all those contexts, executive agencies are required to catalog the benefits and costs of disclosure requirements and to demonstrate that the benefits justify the costs. As we have seen, agencies use four different approaches: estimates of willingness to pay for the relevant information; breakeven analysis; projection of end states, such as economic savings or health outcomes; and in some cases, a flat refusal to project benefits on the ground that quantification is not feasible.

Each of these approaches runs into strong objections. In principle, the right question involves willingness to pay, but in practice, agencies face formidable problems in trying to answer that question. If answers are unavailable, breakeven analysis is the very least that should be required, and it is sometimes the most that agencies can do. If it is accompanied by some account of potential outcomes, acknowledging uncertainties, breakeven analysis often will show that mandatory disclosure is justified on welfare grounds—and often that it is not.
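
Breakeven analysis can be stated as a simple formula. For a rule with annual cost C whose principal benefit is averting premature deaths, each valued at a statistical-life figure V, the rule breaks even at

\[
N^{*} = \frac{C}{V},
\]

the number of statistical deaths it must avert each year for benefits to equal costs. With assumed figures of C = $100 million and V = $10 million, N* = 10: if a disclosure requirement could plausibly save well more than ten lives annually, it passes; if it plainly could not, it fails; and if the answer is genuinely uncertain, the analysis has at least made explicit what would have to be true for the rule to be justified.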