On Oct. 16, 1987, the Standard & Poor's 500 index plunged 5.25% - the largest one-day drop since 1962. The question is, what was the worst-case scenario that could have been predicted for the index for the following Monday?
As anyone old enough to remember knows, the S&P 500 index plummeted 20.4% on Oct. 19, shocking investors around the world. But a simple application of extreme value theory - a branch of mathematics traditionally used in engineering to calculate a structure's ability to withstand acute stress - shows the drop could have been worse. In fact, a 24% fall was possible, based on the annual series of largest one-day percentage drops in the index since 1960, according to a paper by Alexander J. McNeil, a Swiss Re research fellow at the Swiss Federal Institute of Technology in Zurich.
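McNeil's actual procedure isn't spelled out here; a minimal block-maxima sketch illustrates the idea. It fits a Gumbel distribution - the simplest extreme-value family - by the method of moments to a series of annual worst one-day drops (the figures below are made up for illustration, not the actual 1960-1987 data) and reads off the drop expected once in a given number of years:

```python
import math

# Hypothetical annual maxima: the worst one-day percentage drop each year.
# Illustrative values only, not the actual historical series.
annual_max_drops = [2.1, 3.0, 1.8, 2.5, 4.2, 2.9, 3.3, 2.0, 2.7,
                    3.8, 2.4, 5.1, 3.6, 2.2, 4.7, 3.1, 2.6, 3.9]

n = len(annual_max_drops)
mean = sum(annual_max_drops) / n
var = sum((x - mean) ** 2 for x in annual_max_drops) / (n - 1)

# Method-of-moments Gumbel fit: scale = std * sqrt(6) / pi,
# location = mean - (Euler-Mascheroni constant) * scale.
scale = math.sqrt(var) * math.sqrt(6) / math.pi
loc = mean - 0.5772156649 * scale

def return_level(years):
    """One-day drop expected to be exceeded about once every `years` years."""
    return loc - scale * math.log(-math.log(1 - 1 / years))

print(return_level(50))  # 50-year worst-case one-day drop, in percent
```

The key point of the exercise: the fitted tail extends beyond anything yet observed, which is how a worst case larger than the worst historical day can be quantified.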
Now, money managers are trying to find new ways to take into account the likelihood of an extreme event - such as an October 1987 crash or a 1998 currency crisis - occurring. The reality is that such events happen with far greater frequency than is suggested by the normal distribution shown on a standard bell curve.
In addition, some managers are rethinking the asset allocation puzzle, questioning whether the traditional mean-variance optimizer needs updating, in part because of its reliance on a normal distribution.
New ways of looking
In both asset allocation and portfolio construction, managers are trying to come up with new ways of building better models. They include:
* Deutsche Asset Management, which developed a "probabilistic efficient frontier" that incorporates asymmetric returns for various asset classes, allows plan sponsors to input their own return expectations and sets an optimal asset mix based on the investor's own investment goals;
* State Street Global Advisors, which adopted a new technique to help pick "rockets" - stocks with the greatest chance of drastically outperforming - and avoid "torpedoes," or stocks likely to bottom out; and
* Samuelson Portfolio Strategies LLC, which has developed a model that advises managers on when to make tactical shifts by updating volatility on a daily basis, instead of at the longer intervals now generally used.
Because many mathematical models, such as the mean-variance optimizer, traditionally have relied on a normal distribution, the probability of extreme events occurring has been underestimated. Under bell-shaped assumptions, roughly 95% of outcomes fall within two standard deviations of the mean, so events beyond that range are effectively assumed away.
The problem is those tails often are fatter than the normal bell curve suggests. And returns are not always symmetrically arrayed around the median.
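The size of that underestimate is easy to demonstrate. Assuming, for the sake of round numbers, 1% daily volatility, a 20% one-day drop is a 20-standard-deviation event; the sketch below compares its probability under the normal curve with the probability under a fat-tailed Student-t distribution with 3 degrees of freedom (a common stand-in for equity-return tails, chosen here for illustration):

```python
import math

def normal_tail(z):
    """P(Z < -z) for a standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def t3_tail(t):
    """P(T < -t) for a Student-t with 3 degrees of freedom (closed form)."""
    cdf = 0.5 + (1 / math.pi) * ((t / math.sqrt(3)) / (1 + t * t / 3)
                                 + math.atan(t / math.sqrt(3)))
    return 1 - cdf

# A 20% one-day drop is a 20-sigma event if daily volatility is 1%
# (an assumed, round-number figure).
z = 20.0
p_normal = normal_tail(z)
p_fat = t3_tail(z)

print(p_normal)  # astronomically small under the bell curve
print(p_fat)     # small, but many orders of magnitude larger
```

The normal curve puts the event so far out that it should never happen in the life of the universe; the fat-tailed distribution puts it in once-in-decades territory - much closer to the historical record.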
The probabilistic frontier
Those issues helped lead experts at Deutsche Asset Management in New York to produce its "probabilistic efficient frontier." The traditional mean-variance model - developed in 1959 by Nobel Prize winner Harry Markowitz - attempts to maximize the expected return of a portfolio at a fixed level of risk and assumes a purely symmetrical return distribution. Those simplified assumptions, hampered by the limitations of technology available at the time, produce "suboptimal" asset allocations, said Dean Barr, Deutsche's global chief investment officer.
While the mean-variance optimizer is designed to maximize returns, it also tends to push a pension fund toward higher risk levels by seeking the highest risk-adjusted return possible.
The probabilistic frontier, originally designed to maximize the chances of beating a given benchmark, recognizes that incremental returns shrink as the portfolio goes farther out on the risk spectrum. This non-linear relationship causes the probabilistic optimizer to rein in riskier allocations sooner, according to a Deutsche paper. In addition, the paper claims, a portfolio based on a probabilistic frontier typically is more robust than one based on the mean-variance optimizer, which tends to exaggerate estimation errors.
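Deutsche's optimizer is proprietary, but the intuition can be sketched with made-up frontier portfolios. Under a normal shorthand, the chance of beating a benchmark b is a function of (mu - b) / sigma, so when incremental return shrinks as risk rises, the probability objective peaks at lower risk than a standard mean-variance utility (mu - 0.5 * A * sigma^2) does. All numbers below, including the risk-aversion coefficient, are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Hypothetical efficient-frontier portfolios: expected return grows
# ever more slowly as risk rises (the non-linear relationship above).
frontier = [  # (sigma, mu) -- illustrative numbers only
    (0.05, 0.060),
    (0.10, 0.075),
    (0.15, 0.085),
    (0.20, 0.090),
    (0.25, 0.092),
]

benchmark = 0.05      # target return to beat
risk_aversion = 1.0   # assumed mean-variance risk-aversion coefficient

# Probabilistic objective: maximize the chance of beating the benchmark.
prob_pick = max(frontier, key=lambda p: norm_cdf((p[1] - benchmark) / p[0]))

# Classic mean-variance objective: maximize mu - 0.5 * A * sigma^2.
mv_pick = max(frontier, key=lambda p: p[1] - 0.5 * risk_aversion * p[0] ** 2)

print(prob_pick[0], mv_pick[0])  # probabilistic objective stops at lower risk
```

With these inputs the probability-based objective settles on the 10%-risk portfolio while the mean-variance utility pushes out to 15% - a toy version of the "rein in riskier allocations sooner" behavior the Deutsche paper describes.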
Three main benefits
The new methodology, which Deutsche makes available, free of charge, only to its larger, more sophisticated clients, has three principal benefits, Mr. Barr explained.
First, the probabilistic frontier can create optimal portfolios that include asset classes with asymmetric returns. Typically, those are alternative investments, such as private equity, real estate, absolute-return strategies, and futures and options.
Second, a pension executive can enter his or her own investment assumptions. For example, say the executive doesn't trust historical data showing the long-run return on equities is 17%. Instead, he might weight the historical return at 60% and an inflation-based estimate at 40%. The model then will create an optimal allocation based on those inputs.
Third, the asset allocation can be based on the pension executive's preferences.
But not everybody is persuaded by Deutsche's argument. Mr. Markowitz said it's not true that the mean-variance optimizer has to rely on a normal distribution.
What's more, the old methodology has proved to be a reliable tool. "If you know the mean and variance of a distribution, and you have any of these utility functions, then you can guess the expected utility fairly well if you have a probability distribution like historical returns on portfolios," he said. (He noted that if returns are outside a 30% loss and a 40% gain, the mean-variance optimizer breaks down.)
"So my challenge to these guys (is): Suppose you only knew the probability of outperformance, and the probability of shortfall; how good are you at guessing the expected utility of that distribution? It's not clear to me that this criteria will do any better than (the mean-variance) methodology would," he said.
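Markowitz's claim - that mean and variance alone recover expected utility well inside a moderate range of returns, and break down outside roughly a 30% loss to a 40% gain - can be checked with a small experiment. The sketch below uses log utility and its second-order (mean-variance) Taylor approximation; the return series is made up for illustration:

```python
import math

def exact_expected_utility(returns):
    """Average log utility over a sample of returns."""
    return sum(math.log(1 + r) for r in returns) / len(returns)

def mean_variance_approx(returns):
    """Second-order Taylor expansion of log utility around the mean:
    E[ln(1+R)] ~ ln(1+mu) - var / (2 * (1+mu)^2)."""
    n = len(returns)
    mu = sum(returns) / n
    var = sum((r - mu) ** 2 for r in returns) / n
    return math.log(1 + mu) - var / (2 * (1 + mu) ** 2)

moderate = [0.12, -0.05, 0.20, 0.07, -0.10, 0.15, 0.03, -0.02]  # inside the safe range
extreme = moderate + [-0.45, 0.90]  # outside Markowitz's -30%/+40% bound

err_moderate = abs(exact_expected_utility(moderate) - mean_variance_approx(moderate))
err_extreme = abs(exact_expected_utility(extreme) - mean_variance_approx(extreme))

print(err_moderate, err_extreme)  # approximation degrades with extreme returns
```

For the moderate sample the two agree to a few parts in a hundred thousand; adding the two extreme observations widens the gap by nearly two orders of magnitude, consistent with his caveat.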
Rockets vs. torpedoes
Meanwhile, State Street Global Advisors, Boston, is implementing a research tool that will help predict the best and worst performers during the upcoming quarter for its domestic, core, small-cap and growth portfolios. The analysis might be extended to the manager's European equity products and to currencies.
"The general theme of what we're trying to do here is to go beyond simple statistical issues and get to the issue of fat tails," said Tony Foley, managing director at SSgA's advanced research center.
SSgA researchers have attempted to identify the top and bottom 2.5% of stocks in a given universe as a way of enhancing performance. And with good reason: from Jan. 1, 1992, to March 31, 2000, stocks in the top 2.5% of the Russell 1000 index on average returned more than 58% per quarter, while the bottom 2.5% of that universe averaged less than -37%, according to a paper by Daniel Glickman, a principal at SSgA; A. Gregory DiRienzo, an assistant professor in the biostatistics department at Harvard University; and Richard J. Ochman, an SSgA associate.
The contrast is even starker for smaller-cap stocks: the top 2.5% of the Russell 2000 universe returned more than 85% per quarter, while the bottom 2.5% returned less than -49%, the paper said.
After selecting which stocks are most likely to be extreme performers, the firm applies technical and fundamental variables to sort out the likely winners - the rockets - from expected losers - the torpedoes. Historical daily volatility during the past three months is the most important factor, but the I/B/E/S long-term growth estimate, diluted earnings-to-price, and sales declines or earnings losses also play roles.
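SSgA's model is proprietary, but the two-step logic - flag likely extreme movers, then sort them into rockets and torpedoes with fundamental and technical variables - can be sketched with a toy composite score. The tickers, factor values and weights below are all hypothetical:

```python
# Toy factor-based sorting of extreme-performance candidates.
# Stock data and weights are hypothetical; SSgA's actual model is proprietary.
candidates = [
    # (ticker, 3-month daily volatility, long-term growth est., earnings/price)
    ("AAA", 0.045, 0.25, 0.02),
    ("BBB", 0.060, 0.05, -0.01),  # negative earnings yield: a torpedo trait
    ("CCC", 0.050, 0.30, 0.04),
    ("DDD", 0.055, 0.02, -0.03),
]

def score(stock):
    _, vol, growth, ep = stock
    # High recent volatility flags a likely extreme mover; growth and
    # earnings yield tilt the call toward rocket (high) or torpedo (low).
    return 2.0 * growth + 10.0 * ep - 5.0 * vol  # hypothetical weights

ranked = sorted(candidates, key=score, reverse=True)
rockets = [s[0] for s in ranked[:1]]     # best composite score
torpedoes = [s[0] for s in ranked[-1:]]  # worst composite score
print(rockets, torpedoes)
```

In a real implementation the weights would be estimated from history rather than asserted, but the ranking mechanics are the same.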
Updating volatility daily
Meanwhile, Paul Samuelson tackles the knotty problem of when to make short-term shifts into and out of stocks. If volatility increases but stock returns do not, his model advises managers to take some assets off the table, because they are not being rewarded for the extra risk being taken.
Mr. Samuelson - son of the Nobel Prize-winner of the same name - said many investors are prone to a fallacy: if they have a long investment horizon, they use long intervals to estimate risk.
Mr. Samuelson, managing partner of Samuelson Portfolio Strategies, Boston, said his model makes an estimate of tomorrow's volatility or range of returns. It employs the previous day's estimate of what the range was, revised by the volatility of yesterday's return.
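The article doesn't give Mr. Samuelson's formula, but the description - yesterday's estimate, revised by the volatility of yesterday's return - matches a standard exponentially weighted moving-average (EWMA) update, sketched here with an assumed decay factor:

```python
import math

def update_variance(prev_var, last_return, decay=0.94):
    """EWMA update: blend yesterday's variance estimate with yesterday's
    squared return. decay=0.94 is an assumed, conventional daily value,
    not a figure from Samuelson Portfolio Strategies."""
    return decay * prev_var + (1 - decay) * last_return ** 2

# Start from 1% daily volatility, then feed in a calm day and a 5% drop.
var = 0.01 ** 2
var_calm = update_variance(var, 0.002)
var_shock = update_variance(var_calm, -0.05)

print(math.sqrt(var_calm), math.sqrt(var_shock))  # vol jumps after the shock
```

Because the estimate reacts within a single day, a model like this can signal that risk has risen - and, if returns have not risen with it, that some assets should come off the table - far sooner than a volatility figure recomputed over months or years.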