
Misunderstanding Risk

The New York Times published an article by Joe Nocera on the role of VaR (Value at Risk) financial models in the current financial crisis:

VaR isn’t one model but rather a group of related models that share a mathematical framework. In its most common form, it measures the boundaries of risk in a portfolio over short durations, assuming a “normal” market. For instance, if you have $50 million of weekly VaR, that means that over the course of the next week, there is a 99 percent chance that your portfolio won’t lose more than $50 million. That portfolio could consist of equities, bonds, derivatives or all of the above; one reason VaR became so popular is that it is the only commonly used risk measure that can be applied to just about any asset class. And it takes into account a head-spinning variety of variables, including diversification, leverage and volatility, that make up the kind of market risk that traders and firms face every day.

Another reason VaR is so appealing is that it can measure both individual risks — the amount of risk contained in a single trader’s portfolio, for instance — and firmwide risk, which it does by combining the VaRs of a given firm’s trading desks and coming up with a net number. Top executives usually know their firm’s daily VaR within minutes of the market’s close.
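To make the quoted definition concrete: in its most common parametric form, VaR is just a quantile of an assumed-normal loss distribution. Here is a minimal Python sketch (my own illustration with made-up numbers, not anything from the article):

    import random
    import statistics

    random.seed(42)

    # Hypothetical portfolio and a made-up weekly return history.
    portfolio_value = 1_000_000_000  # $1 billion
    weekly_returns = [random.gauss(0.001, 0.02) for _ in range(520)]

    mu = statistics.mean(weekly_returns)
    sigma = statistics.stdev(weekly_returns)

    # Parametric (Gaussian) 99% VaR: the loss exceeded only 1% of the time.
    # 2.326 is the 99th-percentile z-score of the standard normal.
    var_99 = portfolio_value * (2.326 * sigma - mu)

    print(f"One-week 99% VaR: ${var_99:,.0f}")
    # Reads as: "there is a 99 percent chance the portfolio won't lose
    # more than this next week" -- if, and only if, returns are normal.

That "if returns are normal" caveat is exactly where the article comes up short, as the critique below argues.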

As you might expect of a discussion of complicated statistical modeling in the mainstream press, the story oversimplifies and comes up a little short. Naked Capitalism does a good job picking the article apart in “Woefully Misleading Piece on Value at Risk in New York Times” – essentially, the article makes the classic mistake of assuming everything is normally distributed (the ludic fallacy):

Now when I say it is well known that trading markets do not exhibit Gaussian distributions, I mean it is REALLY well known. At around the time when the ideas of financial economists were being developed and taking hold (and key to their work was the idea that security prices were normally distributed), mathematician Benoit Mandelbrot learned that cotton had an unusually long price history (100 years of daily prices). Mandelbrot cut the data, and no matter what time period one used, the results were NOT normally distributed. His findings were initially pooh-poohed, but they have been confirmed repeatedly. Yet the math on which risk management and portfolio construction rests assumes a normal distribution!

It similarly does not occur to Nocera to question the “one size fits all” approach to VaR. The same normal distribution is assumed for all asset types, when as we noted earlier, different types of investments exhibit different types of skewness. The fact that VaR allows for comparisons across investment types via force-fitting gets nary a mention.

He also fails to plumb the idea that reducing as complicated a matter as risk management of internationally-traded multi-assets to a single metric is just plain dopey. No single construct can be adequate. Accordingly, large firms rely on multiple tools, although Nocera never mentions them. However, the group that does rely unduly on VaR as a proxy for risk is financial regulators. I have been told that banks would rather make less use of VaR, but its popularity among central bankers and other overseers means that firms need to keep it as a central metric.

Similarly, false confidence in VaR has meant that it has become a crutch. Rather than attempting to develop sufficient competence to enable them to have a better understanding of the issues and techniques involved in risk management and measurement (which would clearly require some staffers to have high-level math skills), regulators instead take false comfort in a single number that greatly understates the risk they should be most worried about, that of a major blow-up.
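The understatement is easy to demonstrate numerically. The sketch below (my own toy example, not from either article) draws returns from a fat-tailed Student-t distribution, a common stand-in for the non-Gaussian behavior Mandelbrot found, then fits a Gaussian 99% VaR to them and counts how badly it does:

    import math
    import random
    import statistics

    random.seed(0)

    def student_t(df):
        """Sample a Student-t variate as Z / sqrt(chi-square(df) / df)."""
        z = random.gauss(0, 1)
        chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
        return z / math.sqrt(chi2 / df)

    # Fat-tailed daily returns: Student-t with 3 degrees of freedom.
    returns = [0.01 * student_t(3) for _ in range(100_000)]

    sigma = statistics.stdev(returns)
    gaussian_var = 2.326 * sigma  # the 99% VaR *if* returns were normal

    breaches = sum(r < -gaussian_var for r in returns)
    print(f"Breaches expected at 99%: {len(returns) // 100}")
    print(f"Breaches observed:        {breaches}")
    print(f"Worst day: {min(returns):.1%} vs Gaussian VaR of {gaussian_var:.1%}")

On fat-tailed data the breach count comes out noticeably above the nominal 1%, and the worst single day dwarfs the VaR figure: exactly the major blow-up that the quote warns a single number conceals.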


Scott Adams on survivor bias

Still, there are plenty of civilian investors who have done well buying value stocks and holding for the long run. But wouldn’t you expect a wide distribution of luck in any gambling arena? If every investor picked stocks entirely randomly, you would still produce a good number of Warren Buffetts entirely by chance. And our brains are wired to assume those winners had the secret formula for investing.
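Adams's point is easy to check with a simulation (mine, not his, with made-up parameters): hand every investor a coin to flip against the market each year and count how many end up looking like gurus.

    import random

    random.seed(7)

    INVESTORS = 1_000_000  # a hypothetical population of stock pickers
    YEARS = 20

    # Each investor's year is a pure coin flip against the index.
    gurus = 0
    for _ in range(INVESTORS):
        wins = sum(random.random() < 0.5 for _ in range(YEARS))
        if wins >= 15:  # beat the market in 15+ years out of 20
            gurus += 1

    print(f"{gurus:,} of {INVESTORS:,} coin-flippers beat the market")
    print("in 15 or more of 20 years -- by luck alone")

Binomial math says about 2.1% of them qualify: roughly 20,000 accidental Warren Buffetts per million random pickers, and our brains will credit every one of them with a secret formula.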

Mr. Adams’ modest proposal for overhauling stock investing is here.

One of my least favorite parts of the tech business is the army of predictors of future Apple products. It’s a great game because anyone can play, from professional analyst firms to financial news reporters to random bloggers, and since there are so many people playing, the game is rife with survivor bias. So how do you play? Here are some tips for making “predictions” about future products:

  • The vaguer the better, and don’t forget to pepper your prognostications with weasel language like “sometime in the first half of 2009” or broad product categories like “netbooks”.
  • Don’t have a source for this info? Make one up. Or don’t – just hypothesize about how it “makes sense based on the market”.
  • Use other wild speculation as a primary source – it’s the wisdom of crowds!

Once Apple makes (or doesn’t make) an announcement, determine if you won. With so many people playing, just by random chance someone will “predict” what’s going on which will only make your future predictions that much more “valuable”. But what if you weren’t right?

  • It depends on what the meaning of is is: If you did things right you can fudge a vague prediction into the win column by talking about generalities or redefining your terms after the fact. Try squinting.
  • Blame the victim: they should have produced this product, so they’re the ones who got it wrong, or it just wasn’t ready in time. This is a science, man!
  • Better luck next time: Hey, nobody is right all the time but if you play for long enough, you can appear to be!

This really works for any kind of prediction and with the end of the year fast approaching, it’s time to make those 2009 predictions and weasel around those 2008 ones!
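The arithmetic behind “someone will be right” is simple and brutal. Give each specific prediction even a small chance of panning out, and a crowded field makes a hit nearly certain (the 2% below is a number I made up for illustration):

    # P(at least one of n independent pundits "calls it") = 1 - (1 - p)^n
    p = 0.02  # assumed chance any single specific prediction pans out
    for n in (10, 50, 200, 1000):
        print(f"{n:>4} pundits -> {1 - (1 - p) ** n:.1%} chance of a 'hit'")

With 200 pundits in the game, the odds of at least one lucky “hit” are over 98%, and the winner gets anointed an oracle for the next round.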

For more on this topic, check out Nassim Taleb on the Scandal of Prediction.

UPDATE: Brad Feld seems to feel similarly:

This has been one of my pet peeves for 20+ years.  For a while I managed to ignore them completely.  At some point I started getting asked for my predictions and succumbed to my ego for a few years and participated in the prediction folly.  At some point I realized that there was zero correlation between my predictions and reality and that by participating, I was merely helping perpetuate this silliness.

Good to not so great?

Over at Freakonomics, Steven Levitt looks into the book Good to Great, which analyzes 11 companies that transformed themselves and became “great” companies. It turns out the companies aren’t all doing that well:

Ironically, I began reading the book on the very same day that one of the eleven “good to great” companies, Fannie Mae, made the headlines of the business pages. It looks like Fannie Mae is going to need to be bailed out by the federal government. If you had bought Fannie Mae stock around the time Good to Great was published, you would have lost over 80 percent of your initial investment.

Another one of the “good to great” companies is Circuit City. You would have lost your shirt investing in Circuit City as well, which is also down 80 percent or more. Best Buy has cleaned Circuit City’s clock for the last seven or eight years.

Hmmm. While it’s true that you can learn from the mistakes and successes of those who came before us, figuring out what the salient lesson is (presuming there is one), untangling the roles of causality and luck, and generalizing to new situations make it a fool’s errand. Good to Great and other successmanship manuals pick and choose winners to fit the evidence and ignore the losers.

The subprime gray swan

Nassim Taleb has been getting a lot of press in the wake of the subprime mortgage fiasco – the general theme is that Taleb’s ideas are ushering in a new era of financial rationality in the markets. But it’s more likely this is just the fear phase of the usual greed-fear rollercoaster. From Fortune:

Most people seem to have been caught off-guard by the subprime crisis, yet such an event was not only predictable but also inevitable. It was a Black Swan, yes?

The Black Swan is a matter of perspective. A turkey is fed for 1,000 days – every day lulling it more and more into the feeling that the human feeders are acting in its best interest. Except that on the 1,001st day, the butcher shows up and there is a surprise. The surprise is for the turkey, not the butcher. Anyone who knows anything about the history of banking (or remembers the 1982 Latin American debt crisis or the 1990s savings and loan collapse) will tell you that the subprime crisis was so bound to happen. Banks are exposed to such blowups. Bankers have been the turkey, historically.

So I call these crises “gray swans.” I’ve been telling anyone willing to listen that banks have a tendency to sit on time bombs while convincing themselves that they are conservative and nonvolatile.
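The turkey story is really a point about naive induction: a model trained only on the turkey’s history becomes most confident exactly when it is most wrong. A toy version of the turkey’s reasoning (my own sketch, not Taleb’s):

    # The turkey estimates P(fed tomorrow) from its history using
    # Laplace's rule of succession: (days fed + 1) / (days observed + 2).
    for day in (10, 100, 500, 1000):
        p_fed = (day + 1) / (day + 2)
        print(f"Day {day:>4}: estimated P(fed tomorrow) = {p_fed:.3f}")
    # Day 1000 prints 0.999 -- peak confidence, the day before the butcher.

Nothing in the data distinguishes “the feeders love me” from “the feeders are fattening me up”; the observation that matters most is the one the sample cannot contain.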

Michael Lewis has an article about how the markets have begun to realize that the Black-Scholes model for risk doesn’t really work:

“No one believes the original assumptions anymore,” says John Seo, who co-manages Fermat Capital, a $2 billion-plus hedge fund that invests in catastrophe bonds—essentially bonds with put options that are triggered by such natural catastrophes as hurricanes and earthquakes. “It’s hard to believe that anyone—yes, including me—ever believed it. It’s like trying to replicate a fire-insurance policy by dynamically increasing or decreasing your coverage as fire conditions wax and wane. One day, bam, your house is on fire, and you call for more coverage?”

Does this mean they have to give back the Nobel Prize? I would have thought Black-Scholes was pretty universally questioned after the whole LTCM fiasco.
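For reference, here is the formula in question in a minimal sketch (standard textbook Black-Scholes, not anyone’s production model). The assumptions are right on the surface: a single constant volatility and lognormally distributed prices, which together rule out exactly the fat-tailed “house on fire” events Seo describes.

    import math

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        """European call price under Black-Scholes assumptions:
        lognormal prices, constant volatility, continuous hedging."""
        d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

    # Illustrative inputs only: $100 stock, $100 strike, 1 year, 5% rate, 20% vol.
    print(f"Call price: ${black_scholes_call(100, 100, 1.0, 0.05, 0.20):.2f}")

When the real world jumps instead of diffusing, the continuous-hedging story breaks down, which is why deep out-of-the-money crash protection is where the model fails worst.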

Airport Security Kabuki

Airline pilot Patrick Smith has a great post on the pointlessness of the current wave of airline security. It seems every time I fly, there is some new hoop to jump through, all in the name of safer travel. But does any of it really make air travel safer, or is it just designed to make people feel safer?

How we got to this point is an interesting study in reactionary politics, fear-mongering and a disconcerting willingness of the American public to accept almost anything in the name of “security.” Conned and frightened, our nation demands not actual security, but security spectacle. And although a reasonable percentage of passengers, along with most security experts, would concur such theater serves no useful purpose, there has been surprisingly little outrage. In that regard, maybe we’ve gotten exactly the system we deserve.

Unfortunately, a lot of legislation since 9/11 has been driven by fear and the need to do something to address the problem, to the point that efficacy seems not to have been the top priority. It’s easy to say that it’s mostly harmless and makes people feel safer, but security is always a set of tradeoffs: resources spent checking passenger shoes have to be weighed against whatever else those resources could have been used for. I’m still somewhat amazed that there has been no visible effort to adopt the kind of behavioral passenger screening that is common at airports in Israel. To be fair, the immediate addition of locking cockpit doors was a simple, effective, and common-sense move that would most likely prevent a similar event from occurring in the future.

Nassim Nicholas Taleb talked about how this type of legislation comes to pass – there is almost no incentive to pass effective legislation but every incentive to pass reactionary legislation:

Assume that a legislator with courage, influence, intellect, vision, and perseverance manages to enact a law that goes into universal effect and employment on September 10, 2001; it imposes the continuously locked bulletproof doors in every cockpit (at high costs to the struggling airlines)—just in case terrorists decide to use planes to attack the World Trade Center in New York City. I know this is lunacy, but it is just a thought experiment (I am aware that there may be no such thing as a legislator with intellect, courage, vision, and perseverance; this is the point of the thought experiment). The legislation is not a popular measure among the airline personnel, as it complicates their lives. But it would certainly have prevented 9/11.

The person who imposed locks on cockpit doors gets no statues in public squares, not so much as a quick mention of his contribution in his obituary. “Joe Smith, who helped avoid the disaster of 9/11, died of complications of liver disease.” Seeing how superfluous his measure was, and how it squandered resources, the public, with great help from airline pilots, might well boot him out of office. Vox clamantis in deserto. He will retire depressed, with a great sense of failure. He will die with the impression of having done nothing useful. I wish I could go attend his funeral, but, reader, I can’t find him. And yet, recognition can be quite a pump. Believe me, even those who genuinely claim that they do not believe in recognition, and that they separate labor from the fruits of labor, actually get a serotonin kick from it. See how the silent hero is rewarded: even his own hormonal system will conspire to offer no reward.

Now consider again the events of 9/11. In their aftermath, who got the recognition? Those you saw in the media, on television performing heroic acts, and those whom you saw trying to give you the impression that they were performing heroic acts. The latter category includes someone like the New York Stock Exchange chairman Richard Grasso, who “saved the stock exchange” and received a huge bonus for his contribution (the equivalent of several thousand average salaries). All he had to do was be there to ring the opening bell on television—the television that is the carrier of unfairness and a major cause of Black Swan blindness. Everybody knows that you need more prevention than treatment, but few reward acts of prevention.

Bruce Schneier has done a great job of pointing out a lot of the security theatre of the past few years, and has even interviewed Kip Hawley, the head of the TSA, on the subject:

BS: This feels so much like “cover your ass” security: you’re screening our shoes because everyone knows Richard Reid hid explosives in them, and you’ll be raked over the coals if that particular plot ever happens again. But there are literally thousands of possible plots.

So when does it end? The terrorists invented a particular tactic, and you’re defending against it. But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.

That’s the real lesson of the liquid bombers. Assuming you’re right and the explosive was real, it was an explosive that none of the security measures at the time would have detected. So why play this slow game of whittling down what people can bring onto airplanes? When do you say: “Enough. It’s not about the details of the tactic; it’s about the broad threat”?

KH: In late 2005, I made a big deal about focusing on Improvised Explosives Devices (IEDs) and not chasing all the things that could be used as weapons. Until the liquids plot this summer, we were defending our decision to let scissors and small tools back on planes and trying to add layers like behavior detection and document checking, so it is ironic that you ask this question—I am in vehement agreement with your premise. We’d rather focus on things that can do catastrophic harm (bombs!) and add layers to get people with hostile intent to highlight themselves. We have a responsibility, though, to address known continued active attack methods like shoes and liquids and, unfortunately, have to use our somewhat clunky process for now.