"The Last Temptation of Risk
by Barry Eichengreen

04.28.2009

THE GREAT Credit Crisis has cast into doubt much of what we thought we knew about economics. We thought that monetary policy had tamed the business cycle. We thought that because changes in central-bank policies had delivered low and stable inflation, the volatility of the pre-1985 years had been consigned to the dustbin of history; those years had given way to the quaintly dubbed “Great Moderation.” We thought that financial institutions and markets had come to be self-regulating—that investors could be left largely if not wholly to their own devices. Above all we thought that we had learned how to prevent the kind of financial calamity that struck the world in 1929.

We now know that much of what we thought was true was not. The Great Moderation was an illusion. Monetary policies focusing on low inflation to the exclusion of other considerations (not least excesses in financial markets) can allow dangerous vulnerabilities to build up. Relying on institutional investors to self-regulate is the economic equivalent of letting children decide their own diets. As a result we are now in for an economic and financial downturn that will rival the Great Depression before it is over.

The question is how we could have been so misguided. One interpretation, understandably popular given our current plight, is that the basic economic theory informing the actions of central bankers and regulators was fatally flawed. The only course left is to throw it out and start over. But another view, considerably closer to the truth, is that the problem lay not so much with the poverty of the underlying theory as with selective reading of it—a selective reading shaped by the social milieu. That social milieu encouraged financial decision makers to cherry-pick the theories that supported excessive risk taking. It discouraged whistle-blowing, not just by risk-management officers in large financial institutions, but also by the economists whose scholarship provided intellectual justification for the financial institutions’ decisions. The consequence was that scholarship that warned of potential disaster was ignored. And the result was global economic calamity on a scale not seen for four generations.

SO WHERE were the intellectual agenda setters when the crisis was building? Why did they fail to see this train wreck coming? More than that, why did they consort actively with the financial sector in setting the stage for the collapse?

For economists in business schools the answer is straightforward. Business schools see themselves as suppliers of inputs to business. Just as General Motors provides its suppliers with specifications for the cold-rolled sheet it needs for fabricating auto bodies, J. P. Morgan makes clear the kind of financial engineers it requires, and business schools endeavor to provide them. In the wake of the 1987 stock-market crash, Morgan’s chairman, Dennis Weatherstone, started calling for a daily “4:15 Report” summarizing how much his firm would lose if tomorrow turned out to be a bad day. His counterparts at other firms then adopted the practice. Soon after, business schools jumped to supply graduates to write those reports. Value at Risk, as that number and the process for calculating it came to be known, quickly gained a place in the business-school curriculum.

The desire for up-to-date information on the risks of doing business was admirable. Less admirable was the belief that those risks could be reduced to a single number which could then be estimated on the basis of a set of mathematical equations fitted to a few data points. Much as former–GM CEO Alfred Sloan once sought to transform automobile production from a craft to an engineering problem, Weatherstone and his colleagues encouraged the belief that risk and return could be reduced to a set of equations specified by an MBA and solved by a machine.

Getting the machine to spit out a headline number for Value at Risk was straightforward. But deciding what to put into the model was another matter. The art of gauging Value at Risk required imagining the severity of the shocks to which the portfolio might be subjected. It required knowing what new variables to add in response to financial innovation and unfolding events. Doing this right required a thoughtful and creative practitioner. Value at Risk, like dynamite, can be a powerful tool when in the right hands. Placed in the wrong hands—well, you know.
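For concreteness, a minimal sketch of how such a headline number can be produced, using a simple historical-simulation approach, looks something like this; the return history, the portfolio size and the confidence level are invented for illustration rather than drawn from any actual 4:15 Report.

```python
# Minimal historical-simulation Value at Risk, for illustration only. The
# return history is randomly generated and the portfolio value is invented;
# the 99% one-day VaR is simply read off the empirical loss distribution.
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=500)   # pretend history of daily returns
portfolio_value = 100e6                              # hypothetical book, in dollars

losses = -daily_returns * portfolio_value            # dollar loss on each historical day
var_99 = np.percentile(losses, 99)                   # loss exceeded roughly 1 day in 100

print(f"One-day 99% VaR: ${var_99:,.0f}")
# The headline number falls out of one line; everything that matters is
# buried in the choice of the 500 days fed into it.
```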

These simple models should have been regarded as no more than starting points for serious thinking. Instead, those responsible for making key decisions, institutional investors and their regulators alike, took them literally. This reflected the seductive appeal of elegant theory. Reducing risk to a single number encouraged the belief that it could be mastered. It also made it easier to leave early for that weekend in the Hamptons.

Now, of course, we know that the gulf between assumption and reality was too wide to be bridged. These models were worse than unrealistic. They were weapons of economic mass destruction.

For some years those who relied on these artificial constructs were not caught out. Episodes of high volatility, like the 1987 stock-market crash, still loomed large in the data set to which the model was fit. They served to highlight the potential for big shocks and cautioned against aggressive investment strategies. Since financial innovation was gradual, models estimated on historical data remained reasonable representations of the balance of risks.

WITH TIME, however, memories of the 1987 crash faded. In the data used by the financial engineers, the crash became only one observation among many generated in the course of the Great Moderation. There were echoes, like the all-but-failure of the hedge fund Long-Term Capital Management in 1998. (Over four months the company lost $4.6 billion and had to be saved through a bailout orchestrated by the Federal Reserve Bank of New York.) But these warning signs were muffled by comparison. This encouraged the misplaced belief that the same central-bank policies that had reduced the volatility of inflation had magically, perhaps through transference, also reduced the volatility of financial markets. It encouraged the belief that mastery of the remaining risk made more aggressive investment strategies permissible. It made it possible, for example, to employ more leverage—to make use of more borrowed money—without putting more value at risk.
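The arithmetic behind that last step is worth spelling out. In a parametric model of the kind often used for such calculations, measured Value at Risk scales with volatility times position size, so data drawn from calmer years appear to make room for a bigger position at the same measured risk. A sketch, with invented figures and a zero-mean normal assumption:

```python
# Parametric (normal) Value at Risk scales with measured volatility times
# position size, so halving measured volatility appears to make room for a
# doubled position at the same dollar VaR. All figures are invented.
Z_99 = 2.326  # one-tailed 99% quantile of the standard normal distribution

def var_normal(position, daily_vol):
    """One-day 99% VaR of a position under a zero-mean normal model."""
    return Z_99 * daily_vol * position

volatile_era = var_normal(position=100e6, daily_vol=0.02)   # pre-Moderation data
calm_era_2x  = var_normal(position=200e6, daily_vol=0.01)   # calmer data, twice the position

print(f"{volatile_era:,.0f} vs {calm_era_2x:,.0f}")         # identical measured risk
```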

Meanwhile, deregulation was on the march. Memories of the 1930s disaster that had prompted the adoption of restrictions like the Glass-Steagall Act, which separated commercial and investment banking, faded with the passage of time. This tilted the political balance toward those who, for ideological reasons, favored permissive regulation. At the same time, financial institutions, in principle prohibited from pursuing certain lines of business, found ways around those restrictions, encouraging the view that strict regulation was futile. With the elimination of regulatory ceilings on the interest rates that could be paid to depositors, commercial banks had to compete for funding by offering higher rates, which in turn pressured them to adopt riskier lending and investment policies in order to pay the bill. With the entry of low-cost brokerages and the elimination of fixed commissions on stock trades, broker-dealers like Bear Stearns, which had previously earned a comfy living off such commissions, now felt compelled to enter riskier lines of business.

But where the accelerating pace of change should have prompted more caution, the routinization of risk management encouraged precisely the opposite. The idea that risk management had been reduced to a mere engineering problem seduced business in general, and financial businesses in particular, into believing that it was safe to use more leverage and to invest in more volatile assets.

Of course, risk officers could have pointed out that the models had been fit to data for a period of unprecedented low volatility. They could have pointed out that models designed to predict losses on securities backed by residential mortgages were estimated on data only for years when housing prices were rising and foreclosures were essentially unknown. They could have emphasized the high degree of uncertainty surrounding their estimates. But they knew on which side their bread was buttered. Senior management strongly preferred to take on additional risk, since if the dice came up seven they stood to receive megabonuses, whereas if they rolled snake eyes the worst they could expect was a golden parachute. If an investment strategy that promised high returns today threatened to jeopardize the viability of the enterprise tomorrow, then this was someone else’s problem. For a junior risk officer to warn the members of the investment committee that they were taking undue risk would have dimmed his chances of promotion. And so on up the ladder.

WHY CORPORATE risk officers did not sound the alarm bells is thus clear enough. But where were the business-school professors while these events were unfolding? Answer: they were writing textbooks about Value at Risk. (Truth in advertising requires me to acknowledge that the leading such book is by a professor at the University of California.) Business schools are rated by business publications and compete for students on the basis of their record of placing graduates. With banks hiring graduates educated in Value at Risk, business schools had an obvious incentive to supply the same.

But what of doctoral programs in economics (like the one in which I teach)? The top PhD-granting departments only rarely send their graduates to positions in banking or business—most go on to other universities. But their faculties do not object to the occasional high-paying consulting gig. They don’t mind serving as the entertainment at beachside and ski-slope retreats hosted by investment banks for their important clients.

Generous speaking fees were thus available to those prepared to drink the Kool-Aid. Not everyone indulged. But there was nonetheless a subconscious tendency to embrace the arguments of one’s more “successful” colleagues in a discipline where money, in this case earned through speaking engagements and consultancies, is the common denominator of success.

Those who predicted the housing slump eventually became famous, of course. Princeton University Press now takes out space ads in general-interest publications prominently displaying the sober visage of Yale University economics professor Robert Shiller, the maven of the housing crash. Not every academic scribbler can expect this kind of attention from his publisher. But such fame comes only after the fact. The more housing prices rose and the longer predictions of their decline looked to be wrong, the lonelier the intellectual nonconformists became. Sociologists may be more familiar than economists with the psychic costs of nonconformity. But because there is a strong external demand for economists’ services, they may experience even stronger economic incentives than their colleagues in other disciplines to conform to the industry-held view. They can thus incur even greater costs—economic and also psychic—from falling out of step.

WHY BELABOR these points? Because it was not that economic theory had nothing to say about the kinds of structural weaknesses and conflicts of interest that paved the way to our current catastrophe. In fact, large swaths of modern economic theory focus squarely on the kind of generic problems that created our current mess. The problem was not an inability to imagine that conflicts of interest, self-dealing and herd behavior could arise, but a peculiar failure to apply those insights to the real world.

Take for example agency theory, whose point of departure is the observation that shareholders find it difficult to monitor managers, who have an incentive to make decisions that translate into large end-of-current-year bonuses but not necessarily into the long-term health of the enterprise. Risk taking that produces handsome returns today but ends in bankruptcy tomorrow may be perfectly congenial to CEOs who receive generous bonuses and severance packages but not to shareholders who end up holding worthless paper. This work had long pointed to compensation practices in the financial sector as encouraging short-termism and excessive risk taking and heightening conflicts of interest. The failure to heed such warnings is all the more striking given that agency theory is hardly an obscure corner of economics. A Nobel Prize for work on this topic was awarded to Leonid Hurwicz, Eric Maskin and Roger Myerson in 2007. (So much for the idea that it is only the financial engineers who are recognized by the Nobel Committee.)
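The payoff asymmetry behind all this is easy to put in numbers. The sketch below uses invented payoffs, a stylized bonus rule and a stylized severance package rather than any actual compensation contract, but it captures the conflict the theory describes:

```python
# Agency-problem sketch with invented numbers. The manager is paid 10% of any
# gain plus a severance package if things go badly; losses never enter her pay.
# Shareholders bear the full change in firm value.

def manager_pay(p_up, gain, loss, bonus_rate=0.10, severance=2.0):
    """Expected compensation: a share of the upside, a parachute on the downside."""
    return p_up * bonus_rate * gain + (1 - p_up) * severance   # 'loss' never appears

def shareholder_value(p_up, gain, loss):
    """Expected change in firm value, which shareholders actually bear."""
    return p_up * gain - (1 - p_up) * loss

prudent  = dict(p_up=0.9, gain=50, loss=20)     # expected firm value +43
reckless = dict(p_up=0.5, gain=300, loss=400)   # expected firm value -50

for name, s in (("prudent", prudent), ("reckless", reckless)):
    print(f"{name:8s} manager expects {manager_pay(**s):5.1f}, "
          f"shareholders expect {shareholder_value(**s):6.1f}")
# The manager prefers the reckless strategy (16.0 vs. 4.7) precisely because
# its downside never touches her payoff.
```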

Then there is information economics. It is a fact of life that borrowers know more than lenders about their willingness and capacity to repay. Who could know better what motivation lurks in the mind of the borrower and what opportunities he truly possesses? Taking this observation as its starting point, research in information economics has long emphasized the existence of adverse selection in financial markets—when interest rates rise, only borrowers with high-risk projects offering some chance of generating the high returns needed to service and repay loans will be willing to borrow. Indeed, if higher interest rates mean riskier borrowers, there may be no interest rate high enough to compensate the lender for the risk that the borrower may default. In that case lending and borrowing may collapse.
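A small numerical sketch, with an invented pool of three borrowers, shows how the mechanism bites:

```python
# Adverse-selection sketch with an invented pool of three borrowers, each
# described by (probability of success, gross payoff if the project succeeds).
# Borrowers have limited liability: they repay the loan plus interest only if
# the project succeeds, and they apply only if borrowing has positive expected
# profit for them.
borrowers = [(0.98, 1.15), (0.85, 1.40), (0.55, 2.20)]

def expected_repayment(rate):
    """Average repayment per unit lent, over the borrowers who still apply at this rate."""
    applicants = [p for p, payoff in borrowers if payoff > 1 + rate]
    if not applicants:
        return 0.0
    return sum(p * (1 + rate) for p in applicants) / len(applicants)

for rate in (0.05, 0.10, 0.20, 0.45):
    print(f"rate {rate:.0%}: expected repayment per unit lent = {expected_repayment(rate):.3f}")
# Raising the rate past 15% drives out the safest borrower and lowers the
# lender's expected repayment. With these invented numbers no rate ever
# returns the full unit lent, which is the collapse-of-lending case above.
```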

These models also show how borrowers have an incentive to take on more risk when using other people’s money or if they expect to be bailed out when things go wrong. In the wake of recent financial rescues, the name for this problem, “moral hazard,” will be familiar to even the casual newspaper reader. Again this is hardly an obscure corner of economics: George Akerlof, Michael Spence and Joseph Stiglitz were awarded the Nobel Prize for their work on it in 2001. Here again the potential problems of an inadequately regulated financial system would have been quite clear had anyone bothered to look.

Finally there is behavioral economics and its applications, including behavioral finance. Behavioral economics focuses on how cognition, emotion, and other psychological and social factors affect economic and financial decision making. Behavioral economists depart from the simpleminded benchmark that all investors take optimal decisions on the basis of all available information. Instead they acknowledge that decision making is not easy. They acknowledge that many decisions are taken using rules of thumb, which are often formed on the basis of social convention. They analyze how, to pick an example not entirely at random, decision making can be affected by the psychic costs of nonconformity.

It is easy to see how this small step in the direction of realism can transform one’s view of financial markets. It can explain herd behavior, where everyone follows the crowd, giving rise to bubbles, panics and crashes. Economists have succeeded in building elegant mathematical models of decision making under these conditions and in showing how such behavior can give rise to extreme instability. It should not be a surprise that people like the aforementioned George Akerlof and Robert Shiller are among the leaders in this field.
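For a taste of how little machinery such instability requires, here is a toy simulation of an information cascade; the decision rule and the parameters are simplified assumptions, not any particular published model:

```python
# Toy information cascade in the spirit of the herding models described above;
# all parameters are invented. Each trader gets a noisy private signal about
# whether an asset is good, but also sees what everyone before her did. Once
# early actions lean clearly one way, later traders set aside their own signal
# and follow the crowd, so the whole market can pile onto one answer.
import random

def simulate_cascade(n_traders=100, asset_is_good=True, signal_accuracy=0.6, seed=7):
    random.seed(seed)
    actions = []                                   # True = buy, False = sell
    for _ in range(n_traders):
        signal = asset_is_good if random.random() < signal_accuracy else not asset_is_good
        buys = sum(actions)
        sells = len(actions) - buys
        if buys - sells >= 2:                      # crowd clearly buying: ignore own signal
            actions.append(True)
        elif sells - buys >= 2:                    # crowd clearly selling: ignore own signal
            actions.append(False)
        else:
            actions.append(signal)                 # otherwise act on the private signal
    return actions

actions = simulate_cascade()
print(f"{sum(actions)} of {len(actions)} traders end up buying")
# Rerun with different seeds: depending on the luck of the first few signals,
# the crowd locks onto buying or selling and stays there, right or wrong.
```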

Moreover, what is true of investors can also be true of regulators, for whom information is similarly costly to acquire and who will similarly be tempted to follow convention—even when that convention allows excessive risk taking by the regulated. Indeed, these theories suggest that the attitudes of regulators may be infected not merely by the practices and attitudes of their fellow regulators, but also by those of the regulated. Economists now even have a name for this particular version of the intellectual fox-in-the-henhouse syndrome: cognitive regulatory capture.

And what is true of investors and regulators, introspection suggests, can also be true of academics. When it is costly to acquire and assimilate information about how reality diverges from the assumptions underlying popular economic models, it will be tempting to ignore those divergences. When convention within the discipline is to assume efficient markets, there will be psychic costs if one attempts to buck the trend. Scholars, in other words, are no more immune than regulators to the problem of cognitive capture.

What got us into this mess, in other words, were not the limits of scholarly imagination. It was not the failure or inability of economists to model conflicts of interest, incentives to take excessive risk and information problems that can give rise to bubbles, panics and crises. It was not that economists failed to recognize the role of social and psychological factors in decision making or that they lacked the tools needed to draw out the implications. In fact, these observations and others had been imaginatively elaborated by contributors to the literatures on agency theory, information economics and behavioral finance. Rather, the problem was a partial and blinkered reading of that literature. The consumers of economic theory, not surprisingly, tended to pick and choose those elements of that rich literature that best supported their self-serving actions. Equally reprehensibly, the producers of that theory, benefiting in ways both pecuniary and psychic, showed disturbingly little tendency to object. It is in this light that we must understand how it was that the vast majority of the economics profession remained so blissfully silent and indeed unaware of the risk of financial disaster.

WITH THE pressure of social conformity being so powerful, are we economists doomed to repeat past mistakes? Will we forever follow the latest intellectual fad and fashion, swinging wildly—much like investors whose behavior we seek to model—from irrational exuberance to excessive despair about the operation of markets? Isn’t our outlook simply too erratic and advice therefore too unreliable to be trusted as a guide for policy?

Maybe so. But amid the pervading sense of gloom and doom, there is at least one reason for hope. The last ten years have seen a quiet revolution in the practice of economics. For years theorists held the intellectual high ground. With their mastery of sophisticated mathematics, they were the high-prestige members of the profession. The methods of empirical economists seeking to analyze real data were rudimentary by comparison. As recently as the 1970s, doing a statistical analysis meant entering data on punch cards, submitting them at the university computing center, going out for dinner and returning some hours later to see if the program had successfully run. (I speak from experience.) The typical empirical analysis in economics utilized a few dozen, or at most a few hundred, observations transcribed by hand. It is not surprising that the theoretically inclined looked down, fondly if a bit condescendingly, on their more empirically oriented colleagues or that the theorists ruled the intellectual roost.

But the IT revolution has altered the lay of the intellectual land. Now every graduate student has a laptop computer with more memory than that decades-old university computing center. And she knows what to do with it. Just as the typical twelve-year-old knows more than her parents about downloading data from the internet, graduate students in economics, unlike their instructors, find importing data from cyberspace second nature. They can grab data on grocery-store spending generated by the club cards issued by supermarket chains and combine it with information on temperature by zip code to see how the weather affects sales of beer. Their next step, of course, is to download securities prices from Bloomberg and see how blue skies and rain affect the behavior of financial markets. Finding that stock markets are more likely to rise on sunny days is not exactly reassuring for believers in the efficient-markets hypothesis.
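For a sense of what that workflow looks like in practice, here is a sketch in which the file names and the columns are hypothetical:

```python
# A sketch of the kind of exercise described above. The file names and column
# names are hypothetical; nothing here refers to a real data set.
import pandas as pd
import statsmodels.api as sm

sales = pd.read_csv("club_card_beer_sales.csv")    # columns: zip, week, beer_units
weather = pd.read_csv("temperature_by_zip.csv")    # columns: zip, week, avg_temp_f

merged = sales.merge(weather, on=["zip", "week"])  # match each store-week to its weather

X = sm.add_constant(merged["avg_temp_f"])          # regress beer sales on temperature
result = sm.OLS(merged["beer_units"], X).fit()
print(result.summary())
```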

The data sets used in empirical economics today are enormous, with observations running into the millions. Some of this work is admittedly self-indulgent, with researchers seeking to top one another in applying the largest data set to the smallest problem. But now it is on the empirical side where the capacity to do high-quality research is expanding most dramatically, be the topic beer sales or asset pricing. And, revealingly, it is now empirically oriented graduate students who are the hot property when top doctoral programs seek to hire new faculty.

Not surprisingly, the best students have responded. The top young economists are, increasingly, empirically oriented. They are concerned not with theoretical flights of fancy but with the facts on the ground. To the extent that their work is rooted concretely in observation of the real world, it is less likely to sway with the latest fad and fashion. Or so one hopes.

The late twentieth century was the heyday of deductive economics. Talented and facile theorists set the intellectual agenda. Their very facility enabled them to build models with virtually any implication, which meant that policy makers could pick and choose at their convenience. Theory turned out to be too malleable, in other words, to provide reliable guidance for policy.

In contrast, the twenty-first century will be the age of inductive economics, when empiricists hold sway and advice is grounded in concrete observation of markets and their inhabitants. Work in economics, including the abstract model building in which theorists engage, will be guided more powerfully by this real-world observation. It is about time.

Should this reassure us that we can avoid another crisis? Alas, there is no such certainty. The only way of being certain that one will not fall down the stairs is to not get out of bed. But at least economists, having observed the history of accidents, will no longer recommend removing the handrail.

Barry Eichengreen is the George C. Pardee and Helen N. Pardee Professor of Economics and Political Science at the University of California, Berkeley.