Monday, January 5, 2009

"The piece so badly misses the basics about VaR that it is hard to take it seriously, although many no doubt will"

Yves Smith attacks VAR:

"
Sunday, January 4, 2009

Woefully Misleading Piece on Value at Risk in New York Times

The New York Times Sunday Magazine has a long piece by Joe Nocera on value at risk models, which tries to assess how much they can be held accountable for risk management failures on Wall Street.

The piece so badly misses the basics about VaR that it is hard to take it seriously, although many no doubt will( I LIKED THE POST, SO LET'S SEE WHERE SHE'S GOING. ).

The article mentions that VaR models (along with a lot of other risk measurement tools, such as the Black-Scholes options pricing model) assume that asset prices follow a "normal" distribution, or the classical bell curve( IT DOESN'T SEEM LIKE ALL DO ). That sort of distribution is also known as Gaussian.

But it is well known that financial assets do not exhibit normal distributions. And NOWHERE, not once, does the article mention this fundamentally important fact.

The distributions of prices in financial markets are subject to both "skewness" and "kurtosis". Skewness means results are not symmetrical around the mean:



Stocks and bonds are subject to negative skewness (longer tails of negative outcomes) while commodities exhibit positive skewness (and that factor, in addition to their low correlation with financial asset returns, makes them a useful addition to a model portfolio).

Kurtosis is also known informally as "fat tails". That means that events far away from the mean are more likely to happen than a normal distribution( THEN IT'S NOT A NORMAL DISTRIBUTION ) would suggest. The first chart below is a normal distribution, the second, a so-called Cauchy distribution, which has fat tails:




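To see how much difference fat tails make, here's a quick sketch in Python (scipy's textbook normal and Cauchy distributions, with an arbitrary common scale, nothing fitted to market data) comparing how much probability each puts far out in the tail:

```python
# Sketch: tail probabilities of a normal vs. a Cauchy distribution.
# The scale (1.0) is arbitrary; only the relative tail weights matter.
from scipy.stats import norm, cauchy

for k in (2, 4, 6, 8):
    p_norm = norm.sf(k)      # P(X > k) under a standard normal
    p_cauchy = cauchy.sf(k)  # P(X > k) under a standard Cauchy
    print(f"beyond {k} scale units: normal {p_norm:.2e}, cauchy {p_cauchy:.2e}")
```

Under the normal curve a move beyond 8 scale units is essentially impossible (on the order of 10^-16), while the Cauchy puts roughly 4% of its probability out there. That gap is the whole argument in two numbers.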
Now when I say it is well known that trading markets do not exhibit Gaussian distributions, I mean it is REALLY well known. At around the time when the ideas of financial economists were being developed and taking hold (and key to their work was the idea that security prices were normally distributed), mathematician Benoit Mandelbrot learned that cotton had an unusually long price history (100 years of daily prices). Mandelbrot cut the data, and no matter what time period one used, the results were NOT normally distributed. His findings were initially pooh-poohed, but they have been confirmed repeatedly. Yet the math on which risk management and portfolio construction rests assumes a normal distribution!( I DIDN'T SEE THAT )

Let us turn the mike over to the Financial Times' John Dizard:
As is customary, the risk managers were well-prepared for the previous war. For 20 years numerate investors have been complaining about measurements of portfolio risk that use the Gaussian distribution, or bell curve. Every four or five years, they are told, their portfolios suffer from a once-in-50-years event. Something is off here.

Models based on the Gaussian distribution are a pretty good way of managing day-to-day trading positions since, from one day to the next, risks will tend to be normally distributed. Also, they give a simple, one-number measure of risk, which makes it easier for the traders' managers to make decisions.

The "tails risk" ....becomes significant over longer periods of time. Traders who maintain good liquidity and fast reaction times can handle tails risk( SO VAR IS USEFUL )....Everyone has known, or should have known, this for a long time. There are terabytes of professional journal articles on how to measure and deal with tails risk....

A once-in-10-years-comet-wiping-out-the-dinosaurs disaster is a problem for the investor, not the manager-mammal who collects his compensation annually, in cash, thank you. He has what they call a "résumé put", not a term you will find in offering memoranda, and nine years of bonuses....

All this makes life easy for the financial journalist, since once you've been through one cycle, you can just dust off your old commentary.

But Nocera makes NO mention, zero, zip, nada, of how the models misrepresent the nature of risk. He does use the expressions "kurtosis" and "fat tails" but does not explain what they mean. He merely tells us that VaR measures the risk of what happens 99% of the time, and what happens in that remaining 1% could be catastrophic. That in fact understates the flaws of VaR. The 99% measurement is inaccurate too.( SOME ARE )

Reliance on VaR and other tools based on the assumption of normal distributions leads to grotesque under-estimation of risk. As Paul De Grauwe, Leonardo Iania, and Pablo Rovira Kaltwasser pointed out in "How Abnormal Was the Stock Market in October 2008?":
We selected the six largest daily percentage changes in the Dow Jones Industrial Average during October, and asked the question of how frequent these changes occur assuming that, as is commonly done in finance models, these events are normally distributed. The results are truly astonishing. There were two daily changes of more than 10% during the month. With a standard deviation of daily changes of 1.032% (computed over the period 1971-2008) movements of such a magnitude can occur only once every 73 to 603 trillion billion years. Since our universe, according to most physicists, exists a mere 20 billion years we, finance theorists, would have had to wait for another trillion universes before one such change could be observed. Yet it happened twice during the same month. A truly miraculous event. The other four changes during the same month of October have a somewhat higher frequency, but surely we did not expect these to happen in our lifetimes.( IN ORDER FOR THEM TO SAY THIS, THE DATA MUST BE AVAILABLE )
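The arithmetic behind that claim is easy to reproduce in rough form. Here's a sketch using the quoted 1.032% standard deviation and roughly 252 trading days per year; the exact trillion-billion figures depend on the precise sizes of the October moves, so this only recovers the order of magnitude:

```python
# Sketch: expected waiting time for a 10% daily move if daily changes
# were normal with sd 1.032% (the figure quoted above). The 10% move
# and 252 trading days/year are round-number assumptions.
from scipy.stats import norm

sd = 1.032    # daily standard deviation, in percent
move = 10.0   # size of the daily change, in percent

p = norm.sf(move / sd)  # one-sided probability of a move that large
years = 1 / (p * 252)   # expected wait in years
print(f"P(move on a given day) = {p:.1e}; expected wait = {years:.1e} years")
```

That comes out to something on the order of 10^19 years, roughly a billion times the age of the universe, before even considering that it happened twice in one month.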

Thus, Nocera's failure to do even a basic job of explaining the fundamental flaws in the construct of VaR renders the article grossly misleading. Yes, he mentions that VaR models were often based on a mere two years of data. That alone is shocking but is treated in an offhand manner (as if it were OK because VaR was supposedly used only for short-term measurements; that just isn't true, and it is not how regulators use it, nor, per Dizard, investors). Indeed the piece argues that the problem with VaR was not looking at historical data over a sufficiently long period:
This was one of Alan Greenspan’s primary excuses when he made his mea culpa for the financial crisis before Congress a few months ago. After pointing out that a Nobel Prize had been awarded for work that led to some of the theories behind derivative pricing and risk management, he said: “The whole intellectual edifice, however, collapsed in the summer of last year because the data input into the risk-management models generally covered only the past two decades, a period of euphoria. Had instead the models been fitted more appropriately to historic periods of stress, capital requirements would have been much higher and the financial world would be in far better shape today, in my judgment.” Well, yes. That was also the point Taleb was making in his lecture when he referred to what he called future-blindness. People tend not to be able to anticipate a future they have never personally experienced.

Again, just plain wrong. Use of financial data series over long periods of time, as we said above, has repeatedly confirmed what Mandelbrot said: the risks are simply not normally distributed. More data will not fix this intrinsic failing.( I'M NOT SURE I UNDERSTAND THIS. NORMAL DISTRIBUTION WILL WORK MOST OF THE TIME FOR PREDICTION. IT SIMPLY IS LESS USEFUL THAN A MORE INCLUSIVE MEASURE. )

By neglecting to expose this basic issue, the piece comes off as duelling experts, and with the noisiest critic of VaR, Nassim Nicholas Taleb, dismissive and not prone to explanation, the defenders get far more air time and come off sounding far more reasonable.

It similarly does not occur to Nocera to question the "one size fits all" approach to VaR. The same normal distribution is assumed for all asset types, when as we noted earlier, different types of investments exhibit different types of skewness. The fact that VaR allows for comparisons across investment types via force-fitting gets nary a mention.

He also fails to plumb the idea that reducing as complicated a matter as risk management of internationally traded, multi-asset portfolios to a single metric is just plain dopey. No single construct can be adequate. Accordingly, large firms rely on multiple tools, although Nocera never mentions them. However, the group that does rely unduly on VaR as a proxy for risk is financial regulators( GOVERNMENT ALWAYS DOES THIS. CONSIDER HOW POVERTY IS MEASURED. GOVERNMENT NEEDS SIMPLE NUMBERS. ). I have been told that banks would rather make less use of VaR, but its popularity among central bankers and other overseers means that firms need to keep it as a central metric.

Similarly, false confidence in VaR has turned it into a crutch( THIS ISN'T THE FAULT OF THE MODEL ). Rather than developing the competence to better understand the issues and techniques involved in risk management and measurement (which would clearly require some staffers to have high-level math skills), regulators instead take false comfort in a single number that greatly understates the risk they should be most worried about: a major blow-up.

Even though some early readers have made positive noises about Nocera's recounting of the history of VaR, I see enough glitches to raise serious questions. For instance:
L.T.C.M.’s collapse would seem to make a pretty good case for Taleb’s theories. What brought the firm down was a black swan it never saw coming: the twin financial crises in Asia and Russia. Indeed, so sure were the firm’s partners that the market would revert to “normal” — which is what their model insisted would happen — that they continued to take on exposures that would destroy the firm as the crisis worsened, according to Roger Lowenstein’s account of the debacle, “When Genius Failed.” Oh, and another thing: among the risk models the firm relied on was VaR.

I am a big fan of Lowenstein's book, and this passage fails to represent it or the collapse of LTCM accurately. Lowenstein makes clear that after LTCM's initial, spectacular success, the firm started trading in markets where it lacked the data to do the sort of risk modeling that had been its hallmark. It was basically punting on a massive scale and thus deviating considerably from what had been its historical approach. In addition, the firm was taking very large positions in a lot of markets, yet was making NO allowance for liquidity risk( A CALLING RUN ) (not overall market liquidity, but more basic ongoing trading liquidity, that is, the size of its positions relative to normal trading volumes). In other words, there was no way it could exit most of its positions without having a price impact (both directly, via the scale of its selling, and indirectly, by traders realizing that the big kahuna LTCM wanted out and taking advantage of its need to unload). That is a Trading 101 sort of mistake, yet LTCM perpetrated it in breathtakingly cavalier fashion.( THAT'S TRUE )

Thus the point that Nocera asserts, that the LTCM debacle should have damaged VaR but didn't, reveals a lack of understanding of that episode. LTCM had managed to maintain the image of having sophisticated risk management up to the point of its failure, but it violated its own playbook and completely ignored position size versus normal trading liquidity. Anyone involved in the debacle and unwind (and the Fed and all the big Wall Street houses were) would not see the LTCM failure as related to VaR. There were bigger, far more immediate causes.

So Nocera, by failing to dig deeply enough, winds up defending a failed orthodoxy. I suspect we are going to see a lot of that sort of thing in 2009.
"

Here's a VAR chart:



"The 10% Value at Risk of a normally distributed portfolio"

It doesn't look Gaussian.

Here's another:

"An example of VaR: Consider a NYMEX heating oil futures position, whose value has the probability distribution shown above. When we say that the position has a three-day, 95 per cent VaR of $5,000,000 as shown above, we mean that we are 95 per cent confident that the position's value will not decrease by more than $5,000,000 over the next three days. However, there is a five per cent chance that losses may exceed $5,000,000, and in extreme scenarios they can be significantly larger."
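To show how such a statement is read off a distribution, here's a toy sketch (the P&L volatility below is a made-up number, not the heating oil position above):

```python
# Sketch: reading a 3-day 95% VaR off simulated P&L outcomes.
# The $3MM P&L standard deviation is a hypothetical illustration.
import numpy as np

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 3_000_000, size=100_000)  # simulated 3-day P&L, in $

var_95 = -np.percentile(pnl, 5)  # loss exceeded only 5% of the time
print(f"3-day 95% VaR: ${var_95:,.0f}")
```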

Here's a few more:

"Value at risk (VAR or sometimes VaR) has been called the "new science of risk management", but you do not need to be a scientist to use VAR. Here, in part 1 of this series, we look at the idea behind VAR and the three basic methods of calculating it. In Part 2, we apply these methods to calculating VAR for a single stock or investment.

The Idea behind VAR
The most popular and traditional measure of risk is volatility. The main problem with volatility, however, is that it does not care about the direction of an investment's movement: a stock can be volatile because it suddenly jumps higher. Of course, investors are not distressed by gains! (See The Limits and Uses of Volatility.)

For investors, risk is about the odds of losing money, and VAR is based on that common-sense fact. By assuming investors care about the odds of a really big loss, VAR answers the question, "What is my worst-case scenario?" or "How much could I lose in a really bad month?"

Now let's get specific. A VAR statistic has three components: a time period, a confidence level and a loss amount (or loss percentage). Keep these three parts in mind as we give some examples of variations of the question that VAR answers:
  • What is the most I can - with a 95% or 99% level of confidence - expect to lose in dollars over the next month?
  • What is the maximum percentage I can - with 95% or 99% confidence - expect to lose over the next year?
You can see how the "VAR question" has three elements: a relatively high level of confidence (typically either 95% or 99%), a time period (a day, a month or a year) and an estimate of investment loss (expressed either in dollar or percentage terms).

Methods of Calculating VAR
Institutional investors use VAR to evaluate portfolio risk, but in this introduction we will use it to evaluate the risk of a single index that trades like a stock: the Nasdaq 100 Index, which trades under the ticker QQQQ. The QQQQ is a very popular index of the largest non-financial stocks that trade on the Nasdaq exchange.

There are three methods of calculating VAR: the historical method, the variance-covariance method and the Monte Carlo simulation.

1. Historical Method
The historical method simply re-organizes actual historical returns, putting them in order from worst to best. It then assumes that history will repeat itself, from a risk perspective.

The QQQ started trading in Mar 1999, and if we calculate each daily return, we produce a rich data set of almost 1,400 points. Let's put them in a histogram that compares the frequency of return "buckets". For example, at the highest point of the histogram (the highest bar), there were more than 250 days when the daily return was between 0% and 1%. At the far right, you can barely see a tiny bar at 13%; it represents the one single day (in Jan 2000) within a period of five-plus years when the daily return for the QQQ was a stunning 12.4%!


Notice the red bars that compose the "left tail" of the histogram. These are the lowest 5% of daily returns (since the returns are ordered from left to right, the worst are always the "left tail"). The red bars run from daily losses of 4% to 8%. Because these are the worst 5% of all daily returns, we can say with 95% confidence that the worst daily loss will not exceed 4%. Put another way, we expect with 95% confidence that our gain will exceed -4%. That is VAR in a nutshell. Let's re-phrase the statistic into both percentage and dollar terms:
  • With 95% confidence, we expect that our worst daily loss will not exceed 4%.
  • If we invest $100, we are 95% confident that our worst daily loss will not exceed $4 ($100 x -4%).
You can see that VAR indeed allows for an outcome that is worse than a return of -4%. It does not express absolute certainty but instead makes a probabilistic estimate. If we want to increase our confidence, we need only "move to the left" on the same histogram, to where the first two red bars, at -8% and -7%, represent the worst 1% of daily returns:
  • With 99% confidence, we expect that the worst daily loss will not exceed 7%.
  • Or, if we invest $100, we are 99% confident that our worst daily loss will not exceed $7.
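Here's a minimal sketch of the historical method in Python. The return series below is random placeholder data standing in for the roughly 1,400 QQQQ daily returns; with the real series, the percentile cutoffs are exactly the red-bar boundaries described above:

```python
# Sketch: historical-method VaR. `daily_returns` is placeholder data;
# in practice it would be the actual QQQQ daily return series.
import numpy as np

rng = np.random.default_rng(1)
daily_returns = rng.normal(0.0, 0.0264, size=1400)  # stand-in series

var_95 = -np.percentile(daily_returns, 5)  # worst-5% cutoff
var_99 = -np.percentile(daily_returns, 1)  # worst-1% cutoff
print(f"95% daily VaR: {var_95:.1%}, 99% daily VaR: {var_99:.1%}")
```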
2. The Variance-Covariance Method
This method assumes that stock returns are normally distributed. In other words, it requires that we estimate only two factors - an expected (or average) return and a standard deviation - which allow us to plot a normal distribution curve. Here we plot the normal curve against the same actual return data:


The idea behind the variance-covariance method is similar to the idea behind the historical method - except that we use the familiar curve instead of actual data. The advantage of the normal curve is that we automatically know where the worst 5% and 1% lie on the curve. They are a function of our desired confidence and the standard deviation (σ):


The blue curve above is based on the actual daily standard deviation of the QQQ, which is 2.64%. The average daily return happened to be fairly close to zero, so we will assume an average return of zero for illustrative purposes. Here are the results of plugging the actual standard deviation into the formulas above:


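The variance-covariance numbers come straight from normal quantiles. A sketch using the 2.64% daily standard deviation quoted above and an assumed mean of zero:

```python
# Sketch: variance-covariance (parametric) VaR with mean assumed zero,
# using the 2.64% daily standard deviation quoted in the text.
from scipy.stats import norm

sd = 0.0264
var_95 = -norm.ppf(0.05) * sd  # about 1.65 standard deviations
var_99 = -norm.ppf(0.01) * sd  # about 2.33 standard deviations
print(f"95% daily VaR: {var_95:.2%}, 99% daily VaR: {var_99:.2%}")
```

This reproduces the familiar -1.65σ (about -4.3%) and -2.33σ (about -6.1%) cutoffs.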
3. Monte Carlo Simulation
The third method involves developing a model for future stock price returns and running multiple hypothetical trials through the model. A Monte Carlo simulation refers to any method that randomly generates trials, but by itself does not tell us anything about the underlying methodology.



For most users, a Monte Carlo simulation amounts to a "black box" generator of random outcomes. Without going into further details, we ran a Monte Carlo simulation on the QQQ based on its historical trading pattern. In our simulation, 100 trials were conducted. If we ran it again, we would get a different result--although it is highly likely that the differences would be narrow. Here is the result arranged into a histogram (please note that while the previous graphs have shown daily returns, this graph displays monthly returns):


To summarize, we ran 100 hypothetical trials of monthly returns for the QQQ. Among them, two outcomes were between -15% and -20%; and three were between -20% and -25%. That means the worst five outcomes (that is, the worst 5%) were less than -15%. The Monte Carlo simulation therefore leads to the following VAR-type conclusion: with 95% confidence, we do not expect to lose more than 15% during any given month.
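A minimal sketch of that exercise; the return model here (a plain normal draw with made-up mean and volatility) is purely illustrative, since the article does not disclose the model behind its simulation:

```python
# Sketch: Monte Carlo VaR from 100 hypothetical monthly returns.
# The normal(1%, 10%) return model is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
monthly_returns = rng.normal(0.01, 0.10, size=100)  # 100 trials

var_95 = -np.percentile(monthly_returns, 5)  # worst-5% cutoff
print(f"Monte Carlo 95% monthly VaR: {var_95:.1%}")
```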

Summary
Value at Risk (VAR) calculates the maximum loss expected (or worst case scenario) on an investment, over a given time period and given a specified degree of confidence. We looked at three methods commonly used to calculate VAR. But keep in mind that two of our methods calculated a daily VAR and the third method calculated monthly VAR. In Part 2 of this series we show you how to compare these different time horizons.

To read more on this subject, see Continuously Compound Interest.
by David Harper"

Here's another:

Value at Risk (VaR) is a mathematical approach for estimating the maximum potential loss of a given portfolio within some period of time with some likelihood of occurrence.

BI's VaR Calculator calculates the Value at Risk of the set of securities in the portfolio for paired trades (one long and the other short) typically made by relative-value hedge funds. Most traditional off-the-shelf VaR measurement tools (such as those from Bloomberg and Algorithmics) are unable to correctly determine VaR by linking two or more trades together. This is important for hedge funds; otherwise the misreported VaR is typically much larger than the true VaR, restricting the amount of active bets the hedge fund can make.
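The linking point is a correlation effect, and a toy sketch shows why it matters (the leg volatilities and correlation below are hypothetical, not the vendor's method):

```python
# Sketch: 99% VaR of a long/short pair vs. the two legs measured
# separately. Leg volatilities and correlation are hypothetical.
import numpy as np
from scipy.stats import norm

z = norm.ppf(0.99)                   # 99% one-sided quantile, ~2.33
sd_long, sd_short = 100_000, 95_000  # daily P&L sd of each leg, in $
rho = 0.9                            # correlation between the two legs

# Long one leg, short the other: correlated moves largely cancel.
sd_pair = np.sqrt(sd_long**2 + sd_short**2 - 2 * rho * sd_long * sd_short)

print(f"VaR, legs measured separately: ${z * (sd_long + sd_short):,.0f}")
print(f"VaR, trades linked as a pair:  ${z * sd_pair:,.0f}")
```

With the legs linked, the reported VaR drops to a fraction of the stand-alone figure, which is the vendor's point about freeing up room for active bets.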

The following financial instruments are covered in the calculation of VaR by our tool:

  • Equities, Equity ETFs
  • Corporate Bonds
  • Treasury Notes and Bonds
  • Interest Rate Swaps
  • Interest Rate Futures: Fed Funds Futures, Eurodollar futures
  • Treasury Bond Futures
  • Commodity Futures
  • Credit Default Swaps

The VaR Calculator is provided as a product kit that is customized to each client's (hedge fund or money manager's) trading environment. Two integrations will be performed on-site.

Upon installation, the tool will be integrated with the system or spreadsheet from which it can automatically get the firm's trade position data on a daily basis.

The tool will also be integrated with the vendor's (such as Bloomberg, or Factset) programmatic API that provides live market price data on the financial instruments held in the portfolio, so that automatic daily price retrievals of financial time series can be made.

Methodology

The tool computes the portfolio VaR using the Delta-Normal method. It expects to use market price data on the securities from which we first derive discrete daily returns. Risk is measured over a day's horizon at the 99% confidence level as shown in the figure above. The delta-normal method assumes that all asset returns are normally distributed. As the portfolio return is a linear combination of normal variables, it is also normally distributed.

Standardized risk factors are identified for the various instruments, and variances on them computed automatically from the market data retrieved from daily price quotations:

  • Equities and Futures: price-based return
  • Bonds, Rate Swaps and other interest-rate sensitive instruments: yield shift

The portfolio VaR computation consists of going back in time and computing variances and correlations for all risk factors. Portfolio risk is treated as generated by a combination of linear exposures to many factors that are assumed to be normally distributed, and by the forecast of the covariance matrix (which assumes historical extrapolation).
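Here's a minimal sketch of a delta-normal computation of that kind. The factor returns are random placeholders; a real installation would derive them from the daily price quotes pulled through the market-data API:

```python
# Sketch: delta-normal portfolio VaR. Factor returns are placeholders;
# in production they would come from daily market price quotations.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
factor_returns = rng.normal(0.0, 0.01, size=(500, 3))  # 500 days x 3 factors
exposures = np.array([1_000_000, -500_000, 250_000])   # $ exposure per factor

cov = np.cov(factor_returns, rowvar=False)      # factor covariance matrix
port_sd = np.sqrt(exposures @ cov @ exposures)  # 1-day portfolio P&L sd, $
var_99 = norm.ppf(0.99) * port_sd               # 99% one-day VaR
print(f"1-day 99% VaR: ${var_99:,.0f}")
```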

Key Benefits of BI's VaR Calculator

  • Accurately prices VaR of financial instruments when used in a paired arbitrage trade
  • Very low cost compared to other vendors such as Algorithmics and RiskMetrics
  • Adaptable and customizable to your firm's unique risk monitoring requirements"
My conclusion, looking at a couple of the more technical papers, is that some of these VAR systems look more sensible than others. However, just like Nocera, Yves Smith needs to do more technical explanation than pointing to a couple of charts. I wish her luck, as even the systems that make sense will be hard to assess without the actual records of their use. By the way, all the graphs do seem to have fat tails to my eyes. I don't believe that the models are to blame. As near as I can tell, VAR has some uses, as do CDOs and CDSs, which are all erroneously blamed, when in fact people are to blame.
