Posts Tagged ‘LTCM’

“Dating back to work on the random walk hypothesis by French economist Louis Bachelier (1870-1946), the efficient market hypothesis asserts that stock market prices are the best available estimates of the real value of shares since the market has taken account of all available information on an individual stock.”

Economy Professor

Now from The New York Times Magazine, which ran an interesting ten-page spread on risk valuation last Saturday:

VaR is often measured daily and rarely extends beyond a few weeks, and because it is a very short-term measure, it assumes that tomorrow will be more or less like today. Even what’s called “historical VaR” — a variation of standard VaR that measures potential portfolio risk a year or two out — only uses the previous few years as its benchmark. As the risk consultant Marc Groz puts it, “The years 2005-2006,” which were the culmination of the housing bubble, “aren’t a very good universe for predicting what happened in 2007-2008.”
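To make the criticism concrete, here is a minimal sketch of historical VaR: take a window of past daily returns and read off the loss at the chosen percentile. The returns below are entirely hypothetical, not real market data; the point is that a calm window, like 2005-2006, produces a reassuringly small number, and even tacking on one crash day barely moves the 99% figure, because VaR says nothing about how bad the tail gets beyond the cutoff.

```python
# Minimal sketch of 1-day historical VaR (hypothetical data).
# VaR at confidence c = the loss at the c-th percentile of observed losses.

def historical_var(daily_returns, confidence=0.99):
    """Return 1-day historical VaR as a positive loss fraction."""
    losses = sorted(-r for r in daily_returns)   # losses, ascending
    idx = int(confidence * len(losses)) - 1      # index of the quantile
    return losses[max(idx, 0)]

# A calm window: the worst day on record is a 0.2% loss.
calm = [0.002, -0.001, 0.003, -0.002, 0.001] * 100
print(historical_var(calm))

# Append a single 8% crash day: the 99% VaR barely changes,
# because VaR ignores the magnitude of losses beyond the cutoff.
print(historical_var(calm + [-0.08]))
```

The second print is the Groz point in miniature: the benchmark window, not the model's arithmetic, determines what the model can "see."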

How are these two phenomena related? The risk models the world of finance has relied upon for years are grounded in the Efficient Market Hypothesis: VaR models – models which price risk – would not function without the underlying assumption that assets are priced properly. We took comfort in this convenient theory that everything is priced appropriately, all the time.

The counter to this argument is not another theory, but a series of real-life outliers. If markets are efficient, how does George Soros continually compound his fortune by trading according to his boom/bust empiricism (like shorting the British pound)? Warren Buffett also acknowledges fault with this theory: “I would be a bum on the street with a tin cup if the markets were always efficient.”

Every theory has shortcomings, and the admitted shortcoming of the Efficient Market Hypothesis is one of “black swans”, or in economic terms, exogenous shocks. These are events which cannot be predicted, ones which are often described as lying outside a 99% confidence interval, and ones which continually disprove the efficiency of market pricing – especially during times of panic.

The second glaring shortcoming lies in the application of the EMH – in our perception. One assumption of the hypothesis, like a standard assumption in microeconomic theory, is that participants are completely rational (much like human calculators). This assumption has proven to hold in times of tranquility – like the times LTCM succeeded in making money – but in times of deception and opacity, many of us are hopelessly irrational.

To quote Michael Lewis’s most recent book, Panic!, on the pricing of Bear Stearns:

“If the market got the value of Bear Stearns so wrong, how can it possibly believe it knows even the approximate value of any Wall Street firm?” (P. 342)

The pillar of the EMH that comes tumbling down in times like today is “known information.” On the surface, who is to say that Bear’s balance sheet was all that bad? I’d like to believe we could extrapolate everything from the footnotes (never mind having everyone read them), but the population of people who called for the failure of Bear Stearns in 2006 lies far out in the tails of the “normal distribution.” Furthermore, here’s a real-life example: how could the Nasdaq be accurately priced at 1,400 in 1997, at 5,000 in 2000, and back at 1,400 in 2002?

All of this is to argue that the notion of efficient markets, which we have taken shelter in for much of our modern financial history, stands paralyzed in times of uncertainty. This would provide insight into why panic ensues at the very signal that we are “in the dark” about anything (it could also reflect our poor sentiment; the constant belief that Wall Street will disappoint us is becoming part of our psychology). It would also explain our dependence on models as a “crutch” – granting us what has turned out to be a false sense of security by quantifying risk with numerical values.

The fallout of the subprime mortgage crisis has uncovered many of the issues with deriving models from this hypothesis – the only problem is, we haven’t come up with anything better. VaR was used heavily in the late 1990s by none other than LTCM, but as the NYT article points out:

Firms viewed it as a human failure rather than a failure of risk modeling. The collapse only amplified the feeling on Wall Street that firms needed to be able to understand their risks for the entire firm. Only VaR could do that. (Page 7)

We then reverted to its use.

I will end with a phrase which has time and again been a sure way to see oneself proven wrong – Maybe this time it’s different.


Read Full Post »

Just found a cool site: www.arbitrageview.com

In addition to defining all of the different forms arbitrage can take, the site has a section listing all of the announced M&A deals awaiting approval. This is useful for a strategy called “risk arbitrage” – betting that announced acquisitions will actually close. You purchase the stock of the target company and short the acquirer (like Yahoo being targeted by Microsoft).

The name itself is quite appropriate, because should the deal fall through (as the Yahoo-Microsoft deal did), you lose money on both sides of the bet (Microsoft goes up and Yahoo goes down… the opposite of what the risk arbitrageur expected).
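The two-sided payoff described above can be sketched with a few lines of arithmetic. The prices below are made up for illustration – they are not Yahoo’s or Microsoft’s actual quotes – but they show how the position profits when the deal closes and loses on both legs when it breaks:

```python
# Sketch of a risk-arbitrage position's P&L (hypothetical prices,
# one share per leg): long the target, short the acquirer.

def risk_arb_pnl(target_entry, target_exit, acquirer_entry, acquirer_exit):
    long_leg = target_exit - target_entry       # profit if target rises
    short_leg = acquirer_entry - acquirer_exit  # profit if acquirer falls
    return long_leg + short_leg

# Deal closes: target climbs toward the offer, acquirer drifts down.
print(risk_arb_pnl(28.0, 31.0, 30.0, 29.0))   # both legs profit

# Deal breaks: target collapses, acquirer rebounds -- both legs lose.
print(risk_arb_pnl(28.0, 20.0, 30.0, 33.0))
```

The asymmetry is the whole story: the spread captured when a deal closes is small, while a broken deal moves both legs against you at once.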

One of the most prevalent misuses of arbitrage came out of the Long-Term Capital Management debacle. The members of the hedge fund had specialized in bond arbitrage: using tremendous leverage (often 30 to 1), they would make millions on very small moves in the fixed-income markets (by selling one country’s debt short and buying another’s). But once the rest of Wall Street got wind of their strategy, the already small margins were pinched by the increase in market participants.
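The leverage arithmetic is worth spelling out, because it cuts both ways. The numbers below are a hypothetical illustration, not LTCM’s actual book: at 30 to 1, a tiny favorable move on the positions becomes a handsome return on equity, and a modest adverse move wipes out most of it.

```python
# How 30-to-1 leverage magnifies small moves (hypothetical figures).
# Return on equity = leverage * return on the underlying positions.

def equity_return(leverage, position_return):
    return leverage * position_return

# A 10-basis-point convergence on the book -> a ~3% gain on equity.
print(equity_return(30, 0.001))

# A 2% adverse move on the book -> roughly -60% of equity.
print(equity_return(30, -0.02))
```

This is why "very small moves" were enough to make millions – and why the same math left no cushion once spreads widened instead of converging.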

This is what prompted the beginning of many dumb decisions they made over the next two to three years: they decided to apply their knowledge of market volatility from the world of bonds to the incongruous world of equities, trying to find irregular relationships between companies and then acting accordingly. Their stint with risk arbitrage ended badly, as you might imagine… here’s an excerpt from the book which explains it all, When Genius Failed (p. 146):

“Long-Term had dubiously invested in Ciena Corporation, a telecommunications company that was planning to merge with Tellabs Inc., and had continued to hold the stock even when it had crept to within 25 cents of the acquisition price. On that same day, August 21, the merger was postponed and Ciena stock plummeted $25.50 to 31.25 a share. Long-Term lost $150 million.”
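As a back-of-the-envelope check on the quoted figures – an inference on my part; the book does not state the share count – a $150 million loss on a $25.50 one-day drop implies a position on the order of six million shares:

```python
# Rough implied position size from the quoted Ciena numbers.
# This share count is inferred, not stated in When Genius Failed.
drop_per_share = 25.50          # quoted one-day drop, dollars
total_loss = 150_000_000        # quoted loss, dollars
implied_shares = total_loss / drop_per_share
print(round(implied_shares))    # on the order of 5.9 million shares
```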

That was one of their smaller losses in the grand scheme of things…

Even before the depths of this financial crisis unfolded, I had always believed this is one of the most important books one could read for a broader understanding of the sheer greed and irrationality that exist in financial markets – to understand why it can, and does, happen again and again, and how the culprits respond as it all goes wrong.

Read Full Post »