The financial crisis exposed limitations in risk management tools and the way they predict risk, including value at risk (VaR), a popular measure for evaluating overall portfolio risk.
“There is a move away from just focusing on a single number like value at risk ... this idea that by just looking at one number you can know what the risk in your portfolio is,” Mr. Ceria said. “One number is too little. I don't care how sophisticated that number is ... one number is not enough.”
VaR could lead asset owners to take on more risk than they realized. VaR is backward looking, assumes markets will behave as they have in the past and favors assets with low risk and low correlations, leading investors to believe they “are in a safer position,” said Bruce I. Jacobs, principal, Jacobs Levy Equity Management Inc., Florham Park, N.J.
That confidence encourages investors to use more leverage than they otherwise might, taking on more risk while attracting others into the same assets and creating potential illiquidity.
But when “safe assets” turn into high-risk assets — such as in the mortgage-backed securities collapse — it can bring on a systemic crisis, an extreme event, Mr. Jacobs said.
Risk management tools such as VaR and credit-risk models, resting on faulty assumptions, underestimated risk in the financial crisis — or as Mr. Jacobs also put it, “overestimated predictability.”
“That's why many people were caught off guard,” said Shafiq K. Ebrahim, principal with AJO LP, Philadelphia. “It's always hard to model these things when you have to try to foresee what may happen in the future even though it hasn't happened in the past.”
So institutional investors have been looking to refine existing risk measurement tools and use newer ones, seeking a better framework for measuring risk.
Many risk management tools, including VaR, failed to capture non-linear events in the markets, Mr. Jacobs said.
VaR is relatively “simple in concept and easy to calculate” — attributes behind its widespread appeal, Mr. Jacobs said. But newer techniques require more complicated calculations, data and assumptions, he said.
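That simplicity can be seen in a minimal historical-simulation VaR sketch, shown here in Python. The return series is simulated for illustration, not real market data, and the parameters are arbitrary choices:

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """One-day historical VaR: the loss threshold exceeded in
    (1 - confidence) of past observations."""
    return -np.percentile(returns, 100 * (1 - confidence))

# Illustrative only: simulated daily returns, not real market data.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 1000)  # mean 0.05%, stdev 1% per day
var_95 = historical_var(returns, 0.95)
print(f"95% one-day VaR: {var_95:.2%} of portfolio value")
```

Note that the calculation uses only past returns — the backward-looking quality Mr. Jacobs describes is built into the method itself.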
Among them, network analysis “looks at how entities are connected to one another and how risk will propagate across these various connections,” said Mr. Jacobs.
Connections include counterparty risks in swaps and other complex investments, but extend further into the web of relationships among financial companies.
The challenge is getting the data to measure risk and connections across firms “because there is so much that is unknown,” he said.
He added: “Managing and containing risk at an individual firm level can allow systemic risk to rise because of the interconnections between entities. Controlling risk at the individual entity level might not necessarily prevent systemic risk.”
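The propagation idea Mr. Jacobs describes can be sketched with a toy threshold-contagion model. The firms, exposure amounts and capital levels below are entirely hypothetical, and real network analysis is far richer, but the mechanism — individually sound firms failing through their connections — is the one he points to:

```python
# Toy contagion model: firms hold exposures to one another; when a firm
# fails, its counterparties write down those exposures against capital,
# and any firm whose losses exceed its capital fails in turn.
# All names and numbers are hypothetical illustrations.

exposures = {              # exposures[a][b] = amount a loses if b fails
    "A": {"B": 40, "C": 10},
    "B": {"C": 30},
    "C": {"A": 20},
}
capital = {"A": 60, "B": 25, "C": 50}

def cascade(initial_failure):
    """Return the set of firms that fail after one firm's collapse."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for firm, holdings in exposures.items():
            if firm in failed:
                continue
            loss = sum(amt for cpty, amt in holdings.items() if cpty in failed)
            if loss > capital[firm]:   # losses wipe out capital
                failed.add(firm)
                changed = True
    return failed

print(cascade("C"))  # C's collapse hits B for 30 against 25 of capital
```

Here B survives any single counterparty loss in isolation but fails once C does, while A absorbs the combined hit — a small-scale version of why firm-by-firm risk control can miss systemic risk.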
Mr. Ceria added that asset owners “are trying to figure out how they can measure and hedge their counterparty risk more effectively. You may have the perfect security (achieving a hedging goal), but if the underlying counterparty goes under for whatever reason” you might not be able to hedge your risk or recover your entire investment.
Agent-behavior analysis, or agent-based analysis, models financial markets from the bottom up. It “means determining how the behavior of agents” such as investors, analysts or traders “impacts asset prices,” Mr. Jacobs said. It's designed to replicate the real world in order to simulate outcomes.
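A bare-bones sketch of the bottom-up approach: two hypothetical agent types — trend-following “chartists” and value-seeking “fundamentalists” — whose net demand moves a simulated price. Every parameter here is an illustrative assumption, not a calibrated model:

```python
import random

random.seed(1)

# Minimal agent-based market sketch (all parameters hypothetical):
# chartists buy when the price rose and sell when it fell;
# fundamentalists trade toward a fixed fair value.
FAIR_VALUE = 100.0
price, prev = 100.0, 100.0
history = []
for step in range(200):
    chartist_demand = 1.0 if price > prev else -1.0    # follow the trend
    fundamental_demand = 0.05 * (FAIR_VALUE - price)   # pull toward value
    noise = random.gauss(0, 0.5)                       # idiosyncratic trades
    prev = price
    price += 0.5 * (chartist_demand + fundamental_demand) + noise
    history.append(price)

print(f"final price: {price:.2f}, "
      f"range: {min(history):.2f} to {max(history):.2f}")
```

Even this toy version shows the design choice behind the technique: prices emerge from interacting behaviors rather than from an assumed statistical distribution.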
One challenge in developing risk tools is moving away from using normal distributions in modeling. Normal distribution “is a good first approximation,” AJO's Mr. Ebrahim said. “For the most part it captures the general distribution of stock returns. ... But it isn't ideal. We've known for a long time stock returns have fatter tails than the normal distribution.”
“There are other distributions one can use that have fatter tails,” which extend further out than a normal distribution and can produce unexpected losses at the negative end, Mr. Ebrahim said.
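The gap between the two assumptions is easy to demonstrate by simulation. The sketch below compares how often a 4-standard-deviation move occurs under a normal distribution versus a Student-t with 5 degrees of freedom — a common textbook example of a fatter-tailed alternative; the choice of 5 degrees of freedom is illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
normal_draws = rng.standard_normal(n)
# Student-t(5) rescaled to unit variance so the comparison is fair.
t_draws = rng.standard_t(df=5, size=n) * np.sqrt(3 / 5)

# Frequency of moves beyond 4 standard deviations under each distribution.
p_normal = np.mean(normal_draws > 4)
p_t = np.mean(t_draws > 4)
print(f"P(>4 sigma): normal ~ {p_normal:.1e}, Student-t(5) ~ {p_t:.1e}")
```

The fat-tailed distribution produces extreme moves orders of magnitude more often — the pattern long observed in actual stock returns.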
Extreme value theory aims to tackle the limitations of the normal distribution, although it has its weaknesses. It “tries to model the tails using different types of distributions” other than a normal distribution of returns, Mr. Ebrahim said. “That is something that is extremely hard to do because we don't have very many data points in the tails.”
That is because there have been relatively few extreme events reaching into the tails.
“We haven't seen in the past very many extreme observations,” Mr. Ebrahim said. “But I think some of these techniques are better able (at trying) to infer what those extreme events would look like.”
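One classic extreme value theory tool is the Hill estimator, which infers the heaviness of a tail from only the largest few observations — precisely the scarce tail data the quote refers to. This sketch fits it to simulated losses with a known Pareto tail; the sample, the true index of 3 and the cutoff of 200 tail points are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated losses with a Pareto tail of true index 3 (not market data).
losses = rng.pareto(a=3.0, size=10_000) + 1

def hill_tail_index(data, k=200):
    """Estimate the tail index from the k largest observations only."""
    top = np.sort(data)[-k:]         # order statistics in the tail
    logs = np.log(top / top[0])      # log excesses over the threshold
    return 1.0 / logs[1:].mean()     # reciprocal of the mean log excess

print(f"estimated tail index: {hill_tail_index(losses):.2f}  (true: 3)")
```

With only 200 of 10,000 observations in play, the estimate is noisy — a concrete illustration of why tail inference is, in Mr. Ebrahim's words, extremely hard.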
“Unfortunately, the normal distribution is very easy to use,” which is one of its most desirable properties, Mr. Ebrahim said. “That's why it's so commonly used.” Other types of distributions “are more complex to deal with.”
In addition, “some of the desirable properties we have with the normal distributions don't exist with some of these (other) distributions,” Mr. Ebrahim said. For example, with a normal distribution “we have a well-defined mean and variance. That doesn't exist with” some other types.
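The missing-moments problem is concrete: under a Cauchy distribution (a Student-t with 1 degree of freedom), the mean is undefined, so the sample average never settles down no matter how much data arrives. A small simulation, with arbitrary seed and sample sizes, makes the contrast with the normal distribution visible:

```python
import numpy as np

# The normal sample mean converges as data accumulate; the Cauchy
# sample mean does not, because the distribution has no defined mean.
rng = np.random.default_rng(3)
for n in (1_000, 100_000):
    normal_mean = rng.standard_normal(n).mean()
    cauchy_mean = rng.standard_cauchy(n).mean()
    print(f"n={n:>7}: normal mean {normal_mean:+.3f}, "
          f"cauchy mean {cauchy_mean:+.3f}")
```

Rerunning with different seeds shows the normal averages clustering ever more tightly near zero while the Cauchy averages jump around unpredictably — the practical cost of giving up the normal distribution's well-defined mean and variance.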