Good to not so great?

Over at Freakonomics, Steven Levitt looks into the book Good to Great – it analyses 11 companies that transformed themselves and became “great” companies. Turns out the companies aren’t all doing that well:

Ironically, I began reading the book on the very same day that one of the eleven “good to great” companies, Fannie Mae, made the headlines of the business pages. It looks like Fannie Mae is going to need to be bailed out by the federal government. If you had bought Fannie Mae stock around the time Good to Great was published, you would have lost over 80 percent of your initial investment.

Another one of the “good to great” companies is Circuit City. You would have lost your shirt investing in Circuit City as well, which is also down 80 percent or more. Best Buy has cleaned Circuit City’s clock for the last seven or eight years.

Hmmm. While it’s true that you can learn from the mistakes and successes of those who came before us, figuring out what the salient lesson is (presuming there is one), untangling causality from luck, and generalizing to new situations makes it a fool’s errand. Good to Great and other successmanship manuals pick and choose winners to fit the evidence and ignore the losers.

The subprime gray swan

Nassim Taleb has been getting a lot of press in the wake of the subprime mortgage fiasco – the general theme is that Taleb’s ideas are ushering in a new era of financial rationality in the markets. But it’s more likely this is just the fear part of the usual greed-fear rollercoaster. From Fortune:

Most people seem to have been caught off-guard by the subprime crisis, yet such an event was not only predictable but also inevitable. It was a Black Swan, yes?

The Black Swan is a matter of perspective. A turkey is fed for 1,000 days – every day lulling it more and more into the feeling that the human feeders are acting in its best interest. Except that on the 1,001st day, the butcher shows up and there is a surprise. The surprise is for the turkey, not the butcher. Anyone who knows anything about the history of banking (or remembers the 1982 Latin American debt crisis or the 1990s savings and loan collapse) will tell you that the subprime crisis was so bound to happen. Banks are exposed to such blowups. Bankers have been the turkey, historically.

So I call these crises “gray swans.” I’ve been telling anyone willing to listen that banks have a tendency to sit on time bombs while convincing themselves that they are conservative and nonvolatile.

Michael Lewis has an article about how the markets have begun to realize that the Black-Scholes model for risk doesn’t really work:

“No one believes the original assumptions anymore,” says John Seo, who co-manages Fermat Capital, a $2 billion-plus hedge fund that invests in catastrophe bonds—essentially bonds with put options that are triggered by such natural catastrophes as hurricanes and earthquakes. “It’s hard to believe that anyone—yes, including me—ever believed it. It’s like trying to replicate a fire-insurance policy by dynamically increasing or decreasing your coverage as fire conditions wax and wane. One day, bam, your house is on fire, and you call for more coverage?”

Does this mean they have to give back the Nobel Prize? I would have thought Black-Scholes was pretty universally questioned after the whole LTCM fiasco.
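For reference, here’s a minimal Python sketch of the Black-Scholes call-pricing formula (the inputs below are purely illustrative, not from any of the articles). The point is that the price falls out of exactly the assumptions Seo is questioning: lognormal returns with constant volatility and continuous, frictionless re-hedging.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, rate, volatility, years):
    """Black-Scholes price of a European call, under the model's own
    assumptions: lognormal returns, constant volatility, and continuous
    frictionless hedging -- the assumptions that break down in a crisis."""
    d1 = (log(spot / strike) + (rate + 0.5 * volatility ** 2) * years) / (volatility * sqrt(years))
    d2 = d1 - volatility * sqrt(years)
    N = NormalDist().cdf
    return spot * N(d1) - strike * exp(-rate * years) * N(d2)

# Illustrative example: at-the-money one-year call, 20% volatility, 5% rate.
print(black_scholes_call(spot=100, strike=100, rate=0.05, volatility=0.2, years=1.0))
```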

Airport Security Kabuki

Airline Pilot Patrick Smith has a great post on the pointlessness of the current wave of airline security. It seems every time I fly, there is some new hoop to jump through all in the name of safer travel. But does any of it really make air travel any safer or is it just designed to make people feel safer?

How we got to this point is an interesting study in reactionary politics, fear-mongering and a disconcerting willingness of the American public to accept almost anything in the name of “security.” Conned and frightened, our nation demands not actual security, but security spectacle. And although a reasonable percentage of passengers, along with most security experts, would concur such theater serves no useful purpose, there has been surprisingly little outrage. In that regard, maybe we’ve gotten exactly the system we deserve.

Unfortunately, a lot of legislation since 9/11 has been driven by fear and the need to be seen doing something about the problem, so efficacy has not really been the top priority. It’s easy to say that it’s mostly harmless and makes people feel safer, but security is always a set of tradeoffs. Resources spent checking passengers’ shoes have to be weighed against the opportunity cost of whatever else those resources could have been used for. I’m still somewhat amazed that there has been no visible effort to enact the kind of behavioral passenger screening at airports that is common in Israel. To be fair, the immediate addition of locking cockpit doors was a simple, effective and common-sense move that would most likely prevent a similar event from occurring in the future.

Nassim Nicholas Taleb talked about how this type of legislation comes to pass – there is almost no incentive to pass effective legislation but every incentive to pass reactionary legislation:

Assume that a legislator with courage, influence, intellect, vision, and perseverance manages to enact a law that goes into universal effect and employment on September 10, 2001; it imposes the continuously locked bulletproof doors in every cockpit (at high costs to the struggling airlines)—just in case terrorists decide to use planes to attack the World Trade Center in New York City. I know this is lunacy, but it is just a thought experiment (I am aware that there may be no such thing as a legislator with intellect, courage, vision, and perseverance; this is the point of the thought experiment). The legislation is not a popular measure among the airline personnel, as it complicates their lives. But it would certainly have prevented 9/11.

The person who imposed locks on cockpit doors gets no statues in public squares, not so much as a quick mention of his contribution in his obituary. “Joe Smith, who helped avoid the disaster of 9/11, died of complications of liver disease.” Seeing how superfluous his measure was, and how it squandered resources, the public, with great help from airline pilots, might well boot him out of office. Vox clamantis in deserto. He will retire depressed, with a great sense of failure. He will die with the impression of having done nothing useful. I wish I could go attend his funeral, but, reader, I can’t find him. And yet, recognition can be quite a pump. Believe me, even those who genuinely claim that they do not believe in recognition, and that they separate labor from the fruits of labor, actually get a serotonin kick from it. See how the silent hero is rewarded: even his own hormonal system will conspire to offer no reward.

Now consider again the events of 9/11. In their aftermath, who got the recognition? Those you saw in the media, on television performing heroic acts, and those whom you saw trying to give you the impression that they were performing heroic acts. The latter category includes someone like the New York Stock Exchange Chairman Richard Grasso, who “saved the stock exchange” and received a huge bonus for his contribution (the equivalent of several thousand average salaries). All he had to do was be there to ring the opening bell on television—the television that is the carrier of unfairness and a major cause of Black Swan blindness. Everybody knows that you need more prevention than treatment, but few reward acts of prevention.

Bruce Schneier has done a great job in pointing out a lot of the security theatre that has occurred in the past few years and has even interviewed the head of the TSA on the subject:

BS: This feels so much like “cover your ass” security: you’re screening our shoes because everyone knows Richard Reid hid explosives in them, and you’ll be raked over the coals if that particular plot ever happens again. But there are literally thousands of possible plots.

So when does it end? The terrorists invented a particular tactic, and you’re defending against it. But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.

That’s the real lesson of the liquid bombers. Assuming you’re right and the explosive was real, it was an explosive that none of the security measures at the time would have detected. So why play this slow game of whittling down what people can bring onto airplanes? When do you say: “Enough. It’s not about the details of the tactic; it’s about the broad threat”?

KH: In late 2005, I made a big deal about focusing on Improvised Explosives Devices (IEDs) and not chasing all the things that could be used as weapons. Until the liquids plot this summer, we were defending our decision to let scissors and small tools back on planes and trying to add layers like behavior detection and document checking, so it is ironic that you ask this question—I am in vehement agreement with your premise. We’d rather focus on things that can do catastrophic harm (bombs!) and add layers to get people with hostile intent to highlight themselves. We have a responsibility, though, to address known continued active attack methods like shoes and liquids and, unfortunately, have to use our somewhat clunky process for now.

The Hindsight Bias

Overcoming Bias has a good overview of the hindsight bias (which also figures prominently in Nassim Taleb’s The Black Swan) and how it has a (usually negative) effect on everyday life:

Viewing history through the lens of hindsight, we vastly underestimate the cost of effective safety precautions. In 1986, the Challenger exploded for reasons traced to an O-ring losing flexibility at low temperature. There were warning signs of a problem with the O-rings. But preventing the Challenger disaster would have required, not attending to the problem with the O-rings, but attending to every warning sign which seemed as severe as the O-ring problem, without benefit of hindsight. It could have been done, but it would have required a general policy much more expensive than just fixing the O-Rings.

NNT discussing The Black Swan on the Charlie Rose show (via Paul Kedrosky).

The New York Times wrote an article describing the travails of various Silicon Valley workers who are “single-digit millionaires” yet continue to work. My personal experience has been that people who continue to work when they don’t need to are doing it because they enjoy the work; the article, however, highlighted people who simply don’t think they are wealthy enough. A lot of this is a “keeping up with the Joneses” problem – these people are wealthy enough, just not as wealthy as their neighbors. At any rate, this highlights a missing aspect of the Times piece – did they go and find people who made enough money in Silicon Valley and decided to move someplace else and/or stop working? Arguably that isn’t the focus of the piece, but the people profiled are certainly not representative of all (and quite likely most) workers in Silicon Valley.

Peter Norvig talks about the different kinds of errors that occur in experimental design and in the interpretation of results. It’s fairly common for media coverage of experimental results to present them as facts without any of the caveats or critical analysis. For example, publication bias appears when the process of publishing experimental results is itself biased: if only positive results are considered interesting enough to publish, then a reported effect may simply be the lucky outlier among many unpublished negative results, and not statistically significant at all. From the article:

Here is my amazing claim: under the strictest of controls, I have been able, using my sheer force of will, to influence an electronic coin flip (implemented by a random number generator) to come up heads 25 times in a row. The odds against getting 25 heads in a row are 33 million to 1. You might have any number of objections: Is the random number generator partial to heads? No. Is it partial to long runs? No. Am I lying? No. Am I really psychic? No. Is there a trick? Yes. The trick is that I repeated the experiment 100 million times, and only told you about my best result. There were about 50 million times when I got zero heads in a row. At times I did seem lucky/psychic: it only took me 2.3 million tries to get 24 heads in a row, when the odds say it should take 16 million on average. But in the end, I seemed unlucky: I only got 25 in a row, not the expected 26.

Many experiments that claim to beat the odds do it using a version of my trick. And while my purpose was to intentionally deceive, others do it without any malicious intent. It happens at many levels: experimenters don’t complete an experiment if it seems to be going badly, or they fail to write it up for publication (the so-called “file drawer” effect, which has been investigated by many, including my former colleague Jeff Scargle in a very nice paper), or the journal rejects the paper. The whole system has a publication bias for positive results over negative results. So when a published paper proclaims “statistically, this could only happen by chance one in twenty times”, it is quite possible that similar experiments have been performed twenty times, but have not been published.
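Norvig’s trick is easy to reproduce. Here’s a rough Python sketch of it (my own, not Norvig’s, and scaled down to a million repetitions instead of 100 million so it runs in seconds): each “experiment” counts consecutive heads until the first tail, and only the single best run gets “published.”

```python
import random

def heads_before_first_tail(rng):
    """One 'experiment': count consecutive heads until the first tail.
    About half of all experiments score zero, just as in Norvig's setup."""
    run = 0
    while rng.random() < 0.5:   # heads
        run += 1
    return run

rng = random.Random(0)
n_experiments = 1_000_000       # scaled down from Norvig's 100 million

# "Publish" only the single best result and stay quiet about the rest.
best = max(heads_before_first_tail(rng) for _ in range(n_experiments))
print(f"Best run out of {n_experiments:,} experiments: {best} heads in a row")

# With a million tries the best run is typically around 20 heads (roughly
# log2 of the number of experiments) -- an outcome that would look like a
# one-in-a-million fluke if you only ever saw the published result.
```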

Literal Survivor Bias

I found this (probably apocryphal) anecdote demonstrating survivor bias which I think makes the point with more clarity than the classical example of mutual funds:

Every day in World War II, bombers would fly from England to drop bombs on targets in continental Europe. Some would never return, some others would make it home on a wing and a prayer, shot full of holes from German fighters and anti-aircraft guns.

In order to improve the odds of survival, the allies decided to improve the armor plating on their bombers. But since weight is important on airplanes, they only wanted to add armor to those places on the airplane where it would actually help. So the engineers began a research project to determine where on the aircraft it would be most useful to have additional armor.

They diligently measured the locations of bullet holes on damaged bombers, compiled the statistical data (today we would say they built a database, but back then it was done by hand), and discovered clear patterns: bombers were much more likely to have bullet holes in the wings, tail surfaces, and in the tail gunner’s position. Holes in the cockpit and fuel tanks were relatively rare.

So the engineers made their recommendation: add armor plating to the wings, tail surfaces, and tail gunner’s position, since those were the locations on the plane most likely to be hit by German fire.

Then a statistician looked at the data, and realized that the engineers had come to exactly the wrong conclusion: because the engineers could only examine bombers which made it home safely, the right thing to do was to put armor plating where the engineers didn’t find any bullet holes, over the fuel tanks and cockpit.
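The selection effect in the anecdote is easy to simulate. Below is a toy Python sketch with made-up hit and loss probabilities (the section names and numbers are purely illustrative): bullets land evenly across the airframe, but planes hit in the cockpit or fuel tanks rarely make it home, so the holes the engineers get to count are concentrated everywhere else.

```python
import random

# Illustrative sections and per-hit loss probabilities -- not historical data.
SECTIONS = ["wings", "tail", "tail gunner", "cockpit", "fuel tanks"]
LOSS_CHANCE_PER_HIT = {"wings": 0.05, "tail": 0.05, "tail gunner": 0.05,
                       "cockpit": 0.6, "fuel tanks": 0.6}

rng = random.Random(1)
observed_holes = {s: 0 for s in SECTIONS}

for _ in range(10_000):                          # sorties
    hits = [rng.choice(SECTIONS) for _ in range(rng.randint(0, 8))]
    survived = all(rng.random() > LOSS_CHANCE_PER_HIT[h] for h in hits)
    if survived:                                 # only returning planes get inspected
        for h in hits:
            observed_holes[h] += 1

print(observed_holes)
# Hits are spread evenly across sections, yet surviving planes show far fewer
# holes in the cockpit and fuel tanks -- precisely the sections the
# statistician said to armor.
```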

ITConversations has an interview with Nassim Nicholas Taleb about his new book The Black Swan (which I am sadly too swamped to read right now but will post about when I have). Taleb talks about how rare and unforeseen phenomena (which he calls “Black Swans”) shape much of the world in which we live.

ITConversations also has a presentation Taleb gave at PopTech 2005 which is well worth a listen.