I was surfing randomly yesterday and stumbled upon this article from the New Scientist. Enjoy reading! :)
From Meha,
The Black Swan..
Hahahaha
Life is unpredictable - get used to it
01 July 2006
NewScientist.com news service
Michael Bond
When did you first notice the way humans deal with unpredictable events?
I was brought up in Lebanon, where we always recreate memories, revise experiences and read more into them than necessary. During the war there, when I was 15, I wanted to be a philosopher. While I was hiding in basements I read William Shirer's Berlin Diary: The Journal of a Foreign Correspondent, 1934-1941. It made me realise three things about the people around me: that they were always predicting (wrongly) that the war was going to be "solved" soon; that they seemed confident about their estimates for the future even though crazy events were happening all the time; and that after the crazy events had happened, people acted as if they were predictable. I realised that you can find an infinity of narratives to fit your data.
Why do we all fall into this trap?
Our brains operate on autopilot most of the time. In a primitive environment, that was fine, since most random variables we encountered then were what I call "type one" randomness: things like the throw of a die, a person's height or weight, or the number of calories we consume on a particular day - the randomness disappears under averaging and is measurable and well-behaved. The variation in the calories we consume each day disappears when you add up the calories we consume over time; the throws of a die will average out. Today, however, most random variables we encounter are "type two" randomness: socioeconomic variables that can impact the total in a disproportionate way. I call these unpredictable ones "black swans".
Can you give an example?
Individual wealth is a good example. Bill Gates's contribution to the wealth of the world is so huge you cannot discount it. The First World War was "type two" randomness - no one could have predicted its magnitude the year before it happened. The same goes for the invention of the computer. There was not much "type two" randomness in the primitive environments in which we developed our intuitions. The few extreme events we did encounter were not repetitive enough for us to learn from them, and they were often so catastrophic that they terminated the populations exposed to them. Sadly, science has not been too interested in this type of randomness. Scientists do not want to tell you what you don't know, and it devalues the field of statistics, given that the statistics we use are based on "type one" randomness.
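(A quick aside from me, not part of the article: here's a minimal Python sketch of the two kinds of randomness. Dice throws stand in for "type one" - the sum concentrates around its average and no single throw matters. A heavy-tailed Pareto variable stands in for "type two" - one draw can account for a big slice of the whole total, the way a single Bill Gates moves the wealth of the world. The shape parameter 1.1 is just my choice to make the tail heavy.)

```python
import random

random.seed(42)
N = 100_000

# "Type one": dice throws. The average settles near 3.5 and the single
# largest throw is a negligible fraction of the total.
dice = [random.randint(1, 6) for _ in range(N)]
print(f"dice mean: {sum(dice) / N:.3f}")
print(f"biggest throw / total: {max(dice) / sum(dice):.6f}")

# "Type two": a Pareto variable with a heavy tail (shape alpha = 1.1).
# The sample mean barely converges and one extreme draw can dominate the sum.
draws = [random.paretovariate(1.1) for _ in range(N)]
print(f"pareto mean: {sum(draws) / N:.3f}")
print(f"biggest draw / total: {max(draws) / sum(draws):.3f}")
```

Run it a few times with different seeds: the dice numbers barely move, while the Pareto total can swing wildly from run to run because of a single extreme draw.)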
How bad are we at predicting "black swans"?
Our track record is quite dire. Look at the net, computers, lasers. The internet was designed as a military system, not for chat rooms. The person who first marketed computers didn't think he would sell more than five. The laser was designed by a physicist who had no idea how it might be used. You can't even forecast something that would affect us tomorrow - revolutions, wars, epidemics, political changes, economic variables.
But don't people make predictions based on history all the time?
As well as our ability to concoct empirically flawed narratives to explain past events, there are biases in history that we don't seem to be aware of and that make us overestimate the causal links between events: for example, when you see only the winners and not the losers. When you look at the fossil record, you see only the species that left a fossil. You cannot make a generalisation of all species just from fossils - you have to take into account the species that left none. History has a lot of hidden pockets. You can't take it any more seriously than a visit to a museum.
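(Another aside of mine, not from the article: a small sketch of the survivorship bias he's describing. The population and the survival threshold are made-up; the point is only that a sample of survivors overstates how the whole population did.)

```python
import random

random.seed(7)

# A population whose true average outcome is roughly zero.
population = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Suppose only the "winners" - those above some threshold - leave a
# trace: a fossil, a track record, a published result.
survivors = [x for x in population if x > 0.5]

true_mean = sum(population) / len(population)
observed_mean = sum(survivors) / len(survivors)
print(f"true population mean: {true_mean:+.2f}")     # close to 0
print(f"mean among survivors: {observed_mean:+.2f}") # roughly +1.1, badly biased
```

Generalising from the survivors alone, you'd conclude the population did far better than it actually did - which is exactly the trap with fossils, funds and histories.)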
Does science suffer from this?
Yes, because of the way research gets published. Research that yields no result does not make it into print. The problem is that a finding of absence and an absence of findings get mixed together. Science as a discipline is very sceptical, but scientists are not necessarily so. It is exactly like financial markets. Markets can be extremely rational while dominated by entirely irrational individuals. All of this leads to overconfidence.
What are the consequences of this overconfidence?
It's massively dangerous. Let me give you an example. I was invited to give a talk on prediction to civil servants at the Woodrow Wilson International Center for Scholars in Washington DC. I told them that their social security forecasts were dangerous to society. At the end of the session a gentleman from the audience wanted to speak to me privately. He showed me oil price forecasts by his department for 25 years ahead: in January 2004 they estimated it would be $27 a barrel; six months later they revised it to $74 a barrel. If you had to double your forecast six months after making it, wouldn't you realise something is wrong with your forecasting? So we're depending on flimsy government forecasts. And there's a track record to our prediction failure: think of someone making an inflation forecast in Germany in 1913. My point is that a forecast is irrelevant unless you have an error rate on it. But if forecasters did attach one, they would realise there was no point in forecasting, because their error rate would be so monstrous. It wouldn't be a problem if we knew our limits in prediction. People are naturally over-optimistic, which is good because it helps us get by in the world. It's fine for individuals and corporations, but it's very bad for societies and governments when policies are set by such wildly unfounded predictions.
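(One more aside of mine, with made-up numbers rather than the department's data: a toy sketch of what "a forecast is irrelevant unless you have an error rate on it" might look like in practice. You score past point forecasts against what actually happened, and the resulting error rate turns any new point forecast into an honest band.)

```python
# Hypothetical past forecasts vs realised prices, in $/barrel.
past_forecasts = [27, 30, 35, 40]
actual_prices  = [38, 55, 61, 74]

# Mean absolute percentage error of the past forecasts.
errors = [abs(f - a) / a for f, a in zip(past_forecasts, actual_prices)]
mape = sum(errors) / len(errors)
print(f"historical error rate: {mape:.0%}")  # ~41% with these numbers

# A new point forecast of $50 then really means a wide band, which is
# the honest way to report it.
new_forecast = 50
low, high = new_forecast * (1 - mape), new_forecast * (1 + mape)
print(f"forecast: ${new_forecast}, plausible band: ${low:.0f}-${high:.0f}")
```

With an error rate that large, the single number "$50" carries almost no information - which is his point about the 25-year oil forecasts.)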
So successful forecasting is just luck?
Yes, and you need a lot of luck to forecast things accurately. There are enough people predicting crazy events for one of them to get it right. After a crazy event, such as 9/11, you will always find someone who predicted it out of luck, but they'll think they predicted it by skill.
Should we change the way we view the world?
We cannot help being fooled by randomness. We're too impressionable. I was in London when the second terrorist attack happened and I automatically behaved like anyone else, ducking for safety. Then I realised that my biggest danger in London came from my jet lag and being used to traffic driving on the other side of the road. We should worry about preventable sources of death. I should worry more about how much sugar I put in my tea than whether I am going to be hit by terrorists. The key is not to try to stop being a fool, but to be aware of when it matters not to be a fool. If you can't do anything about a problem, it's a waste of time analysing it.
From issue 2558 of New Scientist magazine, 01 July 2006, page 50