My previous post did not make it clear why climate models are so similar to complex financial models, which failed so spectacularly.
It is natural to think of climate models as physical simulations (which they are) and therefore similar to many proven applications, such as the aerodynamic models used to design aircraft. The difference is that the models used by engineers are proven by testing and use. You can test a model of aerodynamics against the results of wind tunnel tests and measurements taken on real aircraft.
Climate modellers have only one earth to test their models against. This is worse than many economic and finance models, which can be tested against multiple economies, markets or securities. Climate modelling is closer to financial modelling than to most engineering and science in that it is not possible to deliberately construct an experiment to test the model: one has to wait patiently for the world to produce the data to test against.
The solution is to back-test a model. The problem is that back-testing needs a lot of historical data. Accurate, deliberate measurement of temperature and other climate indicators is very recent compared to the pace of geological change.
Back-testing also needs to cover periods in which a full range of conditions occurred: a model of financial markets needs to cover multiple severe bubbles and crashes. Similarly, a model of climate change needs to be back-tested against multiple ice ages and warm periods.
Some of the data is available from tree rings, ice cores and similar sources, but, as far as I am aware, they are limited in the coverage they offer. In practice, as mentioned in my previous post, climate models appear to be tested against a few hundred years of data or less. This is for forecasts covering centuries, and for processes that take hundreds of thousands of years. By comparison, the banks' risk models (which only needed to make comparatively short-term forecasts) look positively well tested.
The other problem with back-testing complex models against limited data is that it is possible to construct many models that pass back-testing but contradict each other when it comes to forecasts. Which of them is correct? The biggest problem with limited data is that it is possible to keep tweaking a fundamentally wrong complex model until it fits all the available data.
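A toy sketch of this (not a real climate or financial model — the data and both models are invented for illustration): fit a simple two-parameter trend and a many-parameter model to the same short "historical" record. Both pass a naive back-test on the record itself, yet their long-range forecasts bear no resemblance to each other.

```python
import random

random.seed(1)

# A short "historical record": a slow linear trend plus noise.
xs = [i / 11 for i in range(12)]
ys = [x + random.gauss(0.0, 0.05) for x in xs]

def simple_model(x):
    """Straight-line least-squares fit: two free parameters."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return my + slope * (x - mx)

def complex_model(x):
    """Lagrange interpolation through every point: one free parameter
    per data point, so it reproduces the record exactly."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

# Both "pass back-testing": on the record they barely disagree...
in_sample_gap = max(abs(complex_model(x) - simple_model(x)) for x in xs)

# ...but extrapolated well beyond it, they disagree wildly.
forecast_gap = abs(complex_model(3.0) - simple_model(3.0))
```

The back-test cannot distinguish the two, because the limited record is consistent with both; only the extrapolation exposes the difference, and by then it is too late.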
Another similarity is that climate models consist of many mutually interacting factors that cannot be analysed and modelled separately because they affect each other. In a climate model this may mean ocean and air temperatures in different parts of the world, in a financial model it may mean profit and capital investment. The global climate is clearly more complex than the subjects of most financial models (companies and markets), and can be better compared to models of national economies (not something that I would believe long term forecasts about).
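A minimal sketch of why coupled factors cannot be modelled in isolation (the variable names and coefficients here are purely hypothetical, not taken from any real model):

```python
# Two coupled quantities, hypothetical stand-ins for, say, ocean and
# air temperature anomalies: each one's next value depends on the other.
def step(ocean, air):
    new_ocean = ocean + 0.05 * (air - ocean)  # ocean relaxes towards air
    new_air = air + 0.20 * (ocean - air)      # air relaxes towards ocean
    return new_ocean, new_air

# Perturb only the "air" variable and run both systems forward.
base = (0.0, 0.0)
perturbed = (0.0, 1.0)
for _ in range(10):
    base = step(*base)
    perturbed = step(*perturbed)

# The perturbation has propagated into the "ocean" variable, which was
# identical at the start: the two factors cannot be forecast separately.
ocean_shift = perturbed[0] - base[0]
```

A model of either variable on its own would miss this feedback entirely, which is why the factors have to be modelled (and validated) jointly.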
Another problem with complex models is that they are very sensitive to small tweaks: a very small change to the model, or to the data fed into it, can make a huge difference to the output. This makes the results uncertain, because it is difficult to be sure which of the finely balanced choices to make.
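Even a one-line nonlinear model shows this sensitivity. A sketch using the logistic map (a standard toy example of sensitive dependence, not a climate model): two runs whose starting values differ by one part in two million end up nowhere near each other.

```python
def run(x0, steps=80, r=3.9):
    """Iterate the logistic map, a toy nonlinear recurrence,
    and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = run(0.2)
b = run(0.2000001)  # starting value differs by 0.0000001

# Early on the trajectories are indistinguishable; after enough steps
# they are completely different.
early_gap = abs(a[1] - b[1])
late_gap = max(abs(p - q) for p, q in zip(a[40:], b[40:]))
```

If one line of arithmetic can behave like this, a model with thousands of interacting equations and finely balanced parameter choices gives the modeller enormous scope for small, invisible decisions that dominate the output.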
This also creates problems on the human side of modelling. The first is that a small mistake or a convenient simplification can drastically affect the results. The second is that models become biased towards the existing consensus.
A model that makes predictions very different from the consensus tends to be checked for mistakes far more carefully than a model that confirms the consensus view. This means that mistakes that cause a deviation from the consensus are far more likely to be found than those that wrongly confirm the consensus view.
Anyone who puts forward a model that gives very different results from everyone else's is taking a risk: you look foolish if your model is found to be faulty or if the predictions are incorrect. Sell-side analysts are notorious for running with the herd (when I was one I did not look at anyone else's forecasts, and I still think that is a good idea when possible). At least financial modelling offers you the possibility of financial rewards if you are right when the consensus is wrong. Climate models lack even this (inadequate) counter-incentive, as you will be long dead by the time your forecast is vindicated.
The forecasts made by climate models that matter most are those for hundreds, or even thousands, of years into the future. Climate change is also a very slow process: it takes many years to establish a trend, or the breaking of a trend. This means that it will take a long time for any mistakes in a model to become evident.
What is the solution? To rely on simple models (which are not so susceptible to tweaking to fit), to rely primarily on what the data shows has actually happened, and to treat complex models as unproven theories.