
10th August 2011

There are probably as many reasons as there are people, but I want to look at two:

* **Overconfidence** (which I'll look at here), and

* **Belief in perfect hindsight** (which I'll look at in part 2).

An economist, on a BBC programme in September 2009, was asked about the failure of his and his colleagues' predictions about economics and the unreality of their complex equations.

He said that people just didn't understand equations (which is often true), said that there was a massive ignorance of mathematics (also true), and commented that people had claimed "Newton had too many equations".

Lots of scientists despair of the public understanding equations and maths.

Like Nobel Prize winner Eugene Fama, who, in a 2007 interview, declared that "the word 'bubble' drives me nuts" and went on to explain why we can trust the housing market.

Why was that confidence, apparently confidence in science, so misplaced?

It's true that Newton was ahead of his time; but Newton's time was over 300 years ago.

He believed that if you could measure everything accurately enough you could predict it, that the universe was like a massive snooker table, and that if you could measure the position and the forces acting on each ball, you could predict where everything would end up at some future time.

We know now that a lot of "Newtonian physics" is wrong.

It's often a good approximation, but tends to go wrong at the big, fast end (relativity) and the sub-atomic end (quantum mechanics).

Imagine you were going to predict the Newtonian snooker game, and wanted to measure the positions of all the balls.

Heisenberg's Uncertainty Principle says we can't know both the position and momentum – basically speed, mass and direction – of even one particle, so we can't possibly measure a whole table full of balls made up of millions of particles.

We might measure where they are now, but don't know which direction they are going in, or how fast.

That's another problem: we can measure their position "now". But what is "now"? In a hundredth of a second they will have moved, so "now" is within a hundredth of a second. But where are they after a thousandth of a second? We get an infinite regression, defining "now" as a smaller and smaller instant of time.

Suppose you decide to predict the weather (another non-linear, self-organising system on the edge of chaos, like a market). Imagine you discover that you need to measure, say, air pressure to 66 decimal places.

What happens if, in the 67th decimal place there is a slightly different figure? It's obviously a very small amount, so you approximate.

If you put the figures into a computer, ran the programme, then did it again with the same figures but got a totally different result, would you think that you'd got it wrong?

That's what meteorologist Edward Lorenz thought. He put the parameters into the MIT computer in the 1960s and thought he'd got it wrong, or that the computer had a bug.

But in fact the computer was right. It carried more digits internally than it printed: the printout showed rounded figures, while the calculation used the full values.
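The gap between what a machine stores and what it prints can be shown in a couple of lines. (The figure below is the one usually quoted from accounts of Lorenz's printout; Python is just used here to illustrate the idea, not to reproduce his machine.)

```python
# The machine works with the full stored value...
stored = 0.506127

# ...but the printout rounds it to three decimal places.
printed = round(stored, 3)
print(printed)            # 0.506

# Re-entering the printed figure is NOT the same as the stored one:
# the difference is tiny, but it is not zero.
difference = stored - printed
print(difference == 0)    # False
```

Anyone who typed the printed figures back in was starting the second run from subtly different numbers.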

Lorenz entered the "approximate" figures for the second run.

They seemed to be identical and to an extent they were.

But somewhere, in what seemed an insignificant position, a digit that the computer used but didn't display produced a significantly different end result – even in a computer free of life's other sources of uncertainty.

That is the "butterfly effect" of chaos.
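Lorenz's discovery – that a difference too small to print can overturn the whole forecast – can be sketched using the equations he later published. This is a toy Euler integration, not his original weather model; the step size, number of steps and size of the perturbation are all illustrative assumptions.

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps=3000):
    """Integrate forward from a starting point."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = run(1.0, 1.0, 1.0)          # the "true" starting figures
b = run(1.0, 1.000001, 1.0)     # one coordinate off by a millionth

# The two runs start indistinguishably close, yet end far apart.
separation = max(abs(p - q) for p, q in zip(a, b))
print(separation)
```

A difference in the sixth decimal place – exactly the kind a rounded printout hides – grows until the two "forecasts" bear no resemblance to each other.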

Chaos doesn't mean that events are random; on the contrary, they follow quite simple rules.

But they are extremely sensitive to initial conditions.

You can start with weather (or market) conditions that look exactly like previous conditions and measure hundreds of factors to thousands of decimal places of accuracy and they still seem identical.

But one apparently insignificant difference – a butterfly flapping its wings – in an apparently insignificant factor can make a huge difference. Perhaps there is no hurricane, or Sevenoaks becomes one oak; there is a rising market, or a crash.

Humans want to believe that we can control and predict. We have a great need for it; it makes us feel safe. Human evolution gave us that need, and we're not going to lose it any time soon.

We can't predict. But we desperately want to; we need to. We have experts, Nobel prizes, computers, formulae, maths – we're masters of the universe.

We must be able to predict the future – surely?

No, we can't. But we're overconfident that we can.

*Kim Stephenson is an occupational psychologist and trained financial adviser*.

His website Taming the Pound is aimed at helping people get control of their thinking about money, so they can use their money – and avoid their money using them.

**More from Kim on Mindful Money**

Why a great leader doesn't always make for a great business.

Financial regulation: Why bother at all?
