Suppose a large asteroid is hurtling toward Earth and has a 5 percent chance of hitting us, causing $10 trillion worth of physical damage to the U.S. Should the president authorize a $700 billion mission to destroy the asteroid and stave off disaster? If you reason in purely statistical terms, the expected cost of failing to act (0.05 × $10,000 billion = $500 billion) is much less than the cost of acting.
But if the president spends the money to stop the asteroid, nobody will know whether it would indeed have hit the Earth, had he neglected to act. By contrast, if he does nothing, he has a 5 percent chance of going down in history as the president who knowingly failed to avoid catastrophe. Doesn’t the operation to destroy the asteroid suddenly look much more appealing? And, after all, the aerospace industry would be delighted to be paid to work on the mission. Perhaps because all of the experts would, directly or indirectly, benefit from the proposed mission, the public would start hearing that the chances of disaster are really 10 percent to 20 percent. With those odds, the $700 billion mission would make sense, both politically and statistically.
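The arithmetic behind both versions of the argument can be checked in a few lines. This is a minimal sketch using only the figures stated above; the function name and the break-even step are illustrative additions, not part of the original text:

```python
# Expected-cost check for the asteroid scenario, in billions of dollars.

def expected_cost_of_inaction(p_impact: float, damage: float) -> float:
    """Expected damage if the president does nothing."""
    return p_impact * damage

DAMAGE = 10_000        # $10 trillion in damage
MISSION_COST = 700     # cost of the mission

# At the stated 5% odds, inaction is statistically cheaper than acting.
print(expected_cost_of_inaction(0.05, DAMAGE))  # 500.0, less than 700

# Break-even probability: the impact odds at which the mission
# exactly pays for itself in expectation.
print(MISSION_COST / DAMAGE)                    # 0.07, i.e. 7 percent

# At the inflated 10% estimate, the mission now looks justified.
print(expected_cost_of_inaction(0.10, DAMAGE))  # 1000.0, more than 700
```

The break-even point is what makes the inflated estimates politically convenient: any reported probability above 7 percent flips the statistical verdict in favor of the mission.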
The circumstances that make policy makers succumb to the “too big to fail” doctrine are similar. An important difference, however, is that a Federal Reserve chairman’s resolve to bail out banks actually increases the likelihood of disaster, since the implicit promise to intervene has a perverse influence on the banks’ willingness to take risk.