Archive for the ‘biases’ Category

Interesting (and depressing!) post on the exponential mortality rate:

“Your probability of dying during a given year doubles every 8 years. For me, a 25-year-old American, the probability of dying during the next year is a fairly minuscule 0.03% — about 1 in 3,000. When I’m 33 it will be about 1 in 1,500, when I’m 42 it will be about 1 in 750, and so on. By the time I reach age 100 (and I do plan on it) the probability of living to 101 will only be about 50%. This is seriously fast growth — my mortality rate is increasing exponentially with age.”
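The claim is easy to check numerically. This is a minimal sketch assuming the quoted baseline (0.03% annual mortality at age 25) and a strict 8-year doubling time – illustrative arithmetic, not a real actuarial model:

```python
# "Mortality doubles every 8 years": annual probability of death grows
# exponentially from the quoted baseline of 0.03% at age 25.

def annual_mortality(age, q25=0.0003, doubling_years=8):
    """Probability of dying during the year starting at `age`."""
    return q25 * 2 ** ((age - 25) / doubling_years)

for age in (25, 33, 42):
    q = annual_mortality(age)
    print(f"age {age}: {q:.4%} chance of dying this year (about 1 in {round(1 / q):,})")
```

Running this reproduces the quote’s figures: roughly 1 in 3,000 at 25, 1 in 1,500 at 33, and 1 in 750 at 42.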


Read Full Post »

Interesting article on accidents in the aviation and medical industries where people notice deadly problems but fail to convey their seriousness to other team members, out of politeness or deference to a senior teammate. For example:

Korean Air Flight 801, almost the same exact situation as Air Florida. In trying to warn the captain of severe weather problems that would eventually lead to the deaths of 228 of the 254 people on board, the first officer says, “Don’t you think it rains more? In this area, here?” and “Captain, the weather radar has helped us a lot”…

Captain, the weather radar has helped us a lot?! What are these people doing? They’re hinting at the impending problem, in hopes that the guy who’s a little busy with the whole “flying an airplane” or “trying to bring 99 planes circling Kennedy airport in for a landing” thing is going to catch on, read their mind, and solve the problem for themselves.

The official term for this is “mitigated speech,” and Malcolm Gladwell provides a fascinating account of how it has affected the airline industry in his book Outliers. He defines it as “any attempt to downplay or sugarcoat the meaning of what is being said” and explains that “we mitigate when we’re being polite, or when we’re ashamed or embarrassed, or when we’re being deferential to authority.”

Read Full Post »

One of the big criticisms after 9/11 was that terrorist attacks, while horrifying, were a lot less likely to kill you than mundane occurrences like car accidents and smoking. In general, people fixate on newsworthy risks rather than the everyday risks in life, because you’re more likely to hear about the former and therefore more likely to dwell on them. Bruce Schneier has articulated this quite well:

I tell people that if it’s in the news, don’t worry about it. The very definition of “news” is “something that hardly ever happens.” It’s when something isn’t in the news, when it’s so common that it’s no longer news — car crashes, domestic violence — that you should start worrying.

Additionally, the very horrific nature of specific events makes us incorrectly assess the risk due to anchoring:

Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess probabilities. According to this heuristic, people start with an implicitly suggested reference point (the “anchor”) and make adjustments to it to reach their estimate.

An audience is first asked to write the last 2 digits of their social security number, and, second, to submit mock bids on items such as wine and chocolate. The half of the audience with higher two-digit numbers would submit bids that were between 60 percent and 120 percent higher than those of the other half, far higher than a chance outcome; the simple act of thinking of the first number strongly influences the second, even though there is no logical connection between them.

Lethal is an iPhone app designed to combat this irrationality – based on your current location, it gives information about the relative likelihood of different types of lethal events:

Want to know everything in your area which poses a threat? LETHAL uses auto-location to deliver information you need to be on your guard. Find out more about the dangers which could surround you — the hostile animals, the likelihood of crimes, the prevalence of disease, and the potential accidents and disasters.

Drawing from a proprietary database compiling information from government and academic statistics and research, LETHAL offers information on 650 locations in the US and Canada.

Read Full Post »

Survivor Bias in Log Cabins

Michael Graham Richard has a great post about survivor bias in frontier log cabins – essentially, all the ones you see are well made because only the well-made ones survived:

I have to chuckle whenever I read yet another description of American frontier log cabins as having been well crafted or sturdily or beautifully built. The much more likely truth is that 99% of frontier log cabins were horribly built—it’s just that all of those fell down. The few that have survived intact were the ones that were well made. That doesn’t mean all of them were.

He also makes the point that the classical music you hear today is good precisely because it is still around – history buries the mediocre.

Read Full Post »

Spolsky on Survivor Bias

Joel Spolsky (of Joel on Software fame) has a column “How Hard could it be? Startup Static” about survivor bias when trying to emulate successful companies:

The problem is that trying to copy one company’s model is a fool’s errand. It’s hard to figure out which part of the Starbucks formula made the business a smash hit while so many of its rivals failed. Starbucks’s success is the product of a combination of factors that came together in precisely the right way at precisely the right time. It’s nearly impossible to isolate which one was the most important. You would probably have to look at the hundreds of small coffee chains that didn’t make it big before you stood a chance of seeing what really distinguished Starbucks.

The survivorship bias in entrepreneurship was on my mind a few months ago. My company was putting together a conference in Boston, and I invited my friend Jessica Livingston to speak. Jessica is the co-founder of a small angel investment group called Y Combinator. Its model is to give a few thousand dollars to groups of two or three geeks to start tech companies. She has also written a book called Founders at Work, in which she interviews the founders of about 30 successful start-ups. When she asked me what she should speak about, I asked her to consider describing all the different ways a start-up can fail, rather than the usual stuff about lessons learned from people who succeeded.

“That would be boring,” she told me. “They all fail for the same reason: People just stop working on their business.” Um, yeah, well, sure, and most people die because their heart stops beating. But somehow dying in different ways is still interesting enough to support 40 hours a week of prime-time programming.

But the more I thought about it, the more I realized Jessica was onto something. Why do start-ups fail? As she pointed out, it’s usually a collapse of motivation — everyone wanders back to civilian life, and the start-up ends, not with a bang but a whimper.

I wrote on this topic earlier in “Beware advice from the successful” and “Good to Not so great”. Startups fail for lots of reasons; they also succeed for lots of reasons, and what worked for Starbucks (even if you could rigorously determine why) won’t really help you much.

People have a strong tendency to attribute everything they did right to skill and everything they did wrong to luck. In reality, it’s best to think of the world as a very large roulette table (or Russian roulette, depending on your predicament) – skill lets you place more bets on the table, but it’s no guarantee of success. Couple this bias with a lack of data and an inability to reproduce the experiment – people generally can’t rerun their lives to determine what would have happened had they made some choice differently. This is where survivor bias appears: by asking only the successful, you’re ignoring all the people who were unsuccessful.

Read Full Post »

Car Talk recently had a puzzler that involved another example of (literal) survivor bias:

It’s World War II, an RAF airfield north of London. A dimly lit Quonset hut filled with air crews just returned from bombing runs over Germany.

The meeting opens with the chaplain leading the men in prayer for their lost comrades. He is followed by the flight operations chief, who begins the debriefing by asking the airmen, “From what direction were you attacked by the German fighter planes?”

Without hesitation or dissent, the reply was, “From above and behind.”

As in our previous example (which also involved World War II pilots…), the problem is that they are only interviewing the survivors.

Read Full Post »

Misunderstanding Risk

The New York Times published an article on the role of VaR (Value at Risk) financial models in the current fiscal crisis:

VaR isn’t one model but rather a group of related models that share a mathematical framework. In its most common form, it measures the boundaries of risk in a portfolio over short durations, assuming a “normal” market. For instance, if you have $50 million of weekly VaR, that means that over the course of the next week, there is a 99 percent chance that your portfolio won’t lose more than $50 million. That portfolio could consist of equities, bonds, derivatives or all of the above; one reason VaR became so popular is that it is the only commonly used risk measure that can be applied to just about any asset class. And it takes into account a head-spinning variety of variables, including diversification, leverage and volatility, that make up the kind of market risk that traders and firms face every day.

Another reason VaR is so appealing is that it can measure both individual risks — the amount of risk contained in a single trader’s portfolio, for instance — and firmwide risk, which it does by combining the VaRs of a given firm’s trading desks and coming up with a net number. Top executives usually know their firm’s daily VaR within minutes of the market’s close.
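The quoted definition is easy to make concrete. Below is a minimal sketch of one common flavor, historical-simulation VaR; the `historical_var` function and the P&L series are my own synthetic illustration, not how any particular firm computes it (real desks would feed in actual trading history):

```python
import random

random.seed(0)

# Synthetic daily P&L history in dollars (roughly two years of trading days).
pnl_history = [random.gauss(0, 1_000_000) for _ in range(500)]

def historical_var(pnl, confidence=0.99):
    """Historical-simulation VaR: the loss level that past P&L stayed
    under `confidence` of the time. Implicitly assumes the sampled past
    is representative of the future."""
    losses = sorted(-x for x in pnl)                 # positive number = loss
    idx = min(int(len(losses) * confidence), len(losses) - 1)
    return losses[idx]

var_99 = historical_var(pnl_history)
print(f"1-day 99% VaR: ${var_99:,.0f}")
```

Reading off a quantile of past losses is all there is to it – which is also why the number is only as good as the assumption that the future resembles the sampled past.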

As you might expect of a discussion of complicated statistical modeling in the mainstream press, the story oversimplifies and comes up a little short. Naked Capitalism does a good job picking the article apart in “Woefully Misleading Piece on Value at Risk in New York Times” – essentially, the article makes the classic mistake of assuming everything is normally distributed (the ludic fallacy):

Now when I say it is well known that trading markets do not exhibit Gaussian distributions, I mean it is REALLY well known. At around the time when the ideas of financial economists were being developed and taking hold (and key to their work was the idea that security prices were normally distributed), mathematician Benoit Mandelbrot learned that cotton had an unusually long price history (100 years of daily prices). Mandelbrot cut the data, and no matter what time period one used, the results were NOT normally distributed. His findings were initially pooh-poohed, but they have been confirmed repeatedly. Yet the math on which risk management and portfolio construction rests assumes a normal distribution!

It similarly does not occur to Nocera to question the “one size fits all” approach to VaR. The same normal distribution is assumed for all asset types, when as we noted earlier, different types of investments exhibit different types of skewness. The fact that VaR allows for comparisons across investment types via force-fitting gets nary a mention.

He also fails to plumb the idea that reducing as complicated a matter as risk management of internationally-traded multi-assets to a single metric is just plain dopey. No single construct can be adequate. Accordingly, large firms rely on multiple tools, although Nocera never mentions them. However, the group that does rely unduly on VaR as a proxy for risk is financial regulators. I have been told that banks would rather make less use of VaR, but its popularity among central bankers and other overseers means that firms need to keep it as a central metric.

Similarly, false confidence in VaR has meant that it has become a crutch. Rather than attempting to develop sufficient competence to enable them to have a better understanding of the issues and techniques involved in risk management and measurement (which would clearly require some staffers to have high-level math skills), regulators instead take false comfort in a single number that greatly understates the risk they should be most worried about, that of a major blow-up.
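The fat-tail critique can be illustrated with a small simulation: calibrate a 99% VaR under a normal assumption, then count how often a heavy-tailed return series actually breaches it. The Student-t returns and every parameter below are synthetic choices of mine, purely for illustration:

```python
import math
import random
import statistics

random.seed(1)

def student_t(df=5):
    # Sample a heavy-tailed Student-t variate as normal / sqrt(chi-squared / df).
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

returns = [student_t() for _ in range(20_000)]

# Gaussian assumption: the 1% worst case sits 2.33 standard deviations
# below the mean.
mu = statistics.fmean(returns)
sigma = statistics.stdev(returns)
gaussian_var = mu - 2.33 * sigma

breach_rate = sum(r < gaussian_var for r in returns) / len(returns)
print(f"expected breach rate under normality: 1.00%, observed: {breach_rate:.2%}")
```

The observed breach rate comes out noticeably above the nominal 1%: the Gaussian-calibrated threshold understates exactly the blow-up risk the post says regulators should be most worried about.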

Read Full Post »
