Validation Of A Climate Model Is Mandatory: The Invaluable Work of Dr. Vincent Gray

by Dr. Tim Ball on August 27, 2015

in Data, Statistics, Theory

Early Awareness

Vincent Gray, M.A., Ph.D., is one of the most effective critics of the Intergovernmental Panel on Climate Change (IPCC) through his NZ Climate Truth Newsletter and other publications. He prefaces his comments to the New Zealand Climate Science Coalition as follows.

As an Expert Reviewer for the Intergovernmental Panel on Climate Change for eighteen years, that is to say, from the very beginning, I have submitted thousands of comments to all of the Reports. My comments on the Fourth IPCC Report, all 1,898 of them, are to be found at IPCC (2007) and my opinions of the IPCC are in Gray (2008b).

His most recent publication, “The Global Warming Scam and the Climate Change Super Scam”, builds on his very effective first critique, “The Greenhouse Delusion: A Critique of ‘Climate Change 2001’”. We now know that the 2001 Report included the hockey stick and Phil Jones’s global temperature record, two items of evidence essential to the claim of human causes of global warming. In the summary of that book he notes,

  • There are huge uncertainties in the model outputs which are recognized and unmeasured. They are so large that adjustment of model parameters can give model results which fit almost any climate, including one with no warming, and one that cools.
  • No model has ever successfully predicted any future climate sequence. Despite this, future “projections” for as far ahead as several hundred years have been presented by the IPCC as plausible future trends, based on largely distorted “storylines”, combined with untested models.
  • The IPCC have provided a wealth of scientific information on the climate, but have not established a case that increases in carbon dioxide are causing any harmful effects.
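Gray’s point about parameter adjustment can be made concrete with a toy example. The sketch below is emphatically not a GCM; it is a minimal zero-dimensional energy-balance model with invented forcing numbers and a tunable aerosol offset, included only to show how freely chosen parameter values can turn one model structure into a warming, flat, or cooling climate.

```python
# Toy zero-dimensional energy-balance model: C * dT/dt = F(t) - lam * T.
# All numbers are invented for illustration; no real GCM is this simple.

def run(lam, aerosol_scale, years=100, heat_capacity=8.0):
    """Integrate a temperature anomaly under a GHG forcing ramp minus a
    tunable aerosol offset. lam is the feedback parameter (W/m2/K)."""
    T = 0.0
    for t in range(years):
        forcing = 0.04 * t - aerosol_scale * 0.03 * t   # net forcing, W/m2
        T += (forcing - lam * T) / heat_capacity        # one-year Euler step
    return T

# Same model structure, three parameter choices, three different climates.
for aerosol_scale, label in [(0.0, "warming"), (1.33, "no warming"), (2.5, "cooling")]:
    print(f"{label:>10}: final anomaly {run(1.0, aerosol_scale):+.2f} K")
```

With three settings of one invented parameter, the same equations reproduce three different climates, which is exactly the freedom Gray describes.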


On page 58 of the book, he identifies one of the most serious limitations of the computer models.

No computer model has ever been validated. An early draft of Climate Change 95 had a Chapter titled “Climate Models – Validation” as a response to my comment that no model has ever been validated. They changed the title to “Climate Model – Evaluation” and changed the word “validation” in the text to “evaluation”, doing no more than describing what might need to be done in order to validate a model.
Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.


What is Validation?

The traditional definition of validation involved running the model backward to recreate a known climate condition, a practice generally called “hindsight forecasting”. A major limitation is the time it takes a computer to recreate historic conditions. Steve McIntyre at Climate Audit illustrated the problem:

Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.
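The arithmetic is easy to check. Here is a minimal sketch in Python, using only the figures quoted above (25 simulated years per day of machine time, a 2-million-year target):

```python
# Back-of-envelope check of the machine-time figures quoted above.
sim_years = 2_000_000      # Pliocene-Pleistocene transition, ~2 million years
years_per_day = 25         # simulated years per day of machine time (Ammann)

machine_days = sim_years / years_per_day    # 80,000 days
machine_years = machine_days / 365.25       # ~219 years

print(f"{machine_days:,.0f} days of machine time, about {machine_years:.0f} years")
```

At that rate, even a single run over the roughly 120-year instrumental record costs about five days of machine time.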

Models are also unable to simulate current or historic conditions because we lack accurate knowledge and measurements. The IPCC concede this in Chapter 9 of the 2013 Report.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

Proper validation is “crucial” but severely limited because we don’t know what was happening historically. Modelers reduce the number of variables to work around limited computer capacity and the lack of data or knowledge of mechanisms.

However, as O’Keefe and Kueter explain:

As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results. Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.

One problem is that a variable considered inconsequential today may be crucial under different conditions. This problem occurred in soil science when certain minerals, called “trace minerals”, were considered of minor importance and omitted from soil fertility calculations. In the 1970s, the objective was increased yields through massive application of fertilizers. By the early 1980s, yields declined despite the added fertilizer. Apparently, the plants could not take up fertilizer minerals without certain trace minerals; in the case of wheat it was zinc, the catalyst for absorption of the major chemical fertilizers.

It is now a given in the climate debate that when anthropogenic global warming (AGW) advocates attack an issue or a person, the target is dealing in truth; the attack shows they know the truth and are deliberately deflecting from it for political objectives. Skeptical Science is a perfect example, and their attempt to justify validation of the models begins with an attack on Freeman Dyson’s observation that,

“[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.”

They use “reliability” instead of validation, and they use the term “hindcasting”, but in a different context:

“If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.”

They claim, using their system, that

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

And,

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened.
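The hindcast test they describe can be sketched in a few lines. The example below is a deliberately crude stand-in, assuming synthetic data and a trivial trend model in place of a GCM; the point is only the procedure: fit on one period, then score the model on data it never saw.

```python
import numpy as np

# Sketch of a hindcast test on synthetic data. A linear trend stands in
# for a GCM; the observational record here is invented for illustration.
rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
obs = 0.007 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

fit_period = years < 1980                    # "the past" the model is tuned on
coeffs = np.polyfit(years[fit_period], obs[fit_period], 1)
hindcast = np.polyval(coeffs, years[~fit_period])

rmse = np.sqrt(np.mean((hindcast - obs[~fit_period]) ** 2))
print(f"out-of-sample RMSE, 1980-2014: {rmse:.3f} K")
```

Agreement over the fit period proves nothing, because that is exactly where Dyson’s “fudge factors” operate; only the out-of-sample score tests the model.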

It is 25 years since the first IPCC model predictions (projections), and already the lie is exposed in Figure 1.


Figure 1. Source: John Christy, University of Alabama; presentation to the House Committee on Natural Resources, May 15, 2015.

Fudging To Assure Reliability Masquerading As Validation

Attempts at validation during the 120 years of the instrumental period also proved problematic, for the same reasons as the historical record. A major challenge was the cooling period from 1940 to 1980, because it coincided with the greatest increase in human production of CO2. This contradicted the most basic assumption of the AGW hypothesis, that a CO2 increase causes a temperature increase. Freeman Dyson described the practice, generally known as “tweaking” and discussed in several WUWT articles. It is the practice of covering up and making up evidence designed to maintain the lies that are the computer models.

They sought an explanation in keeping with their philosophy that any anomaly, or now a disruption, is by default due to humans. They tweaked the model with human-sourced sulfate, a particulate that blocks sunlight and produces cooling, and applied it until the model output matched the temperature curve. The problem was that after 1980 warming began again, yet sulfate levels continued. Everything they do suffers from T. H. Huxley’s truth: “The great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact.”
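What that tuning looks like in practice can be sketched with invented numbers. In the toy example below, a sulfate scale factor s is fitted by least squares so a GHG-only trend reproduces a 1940-1980 dip, and the same s is then applied after 1980, when warming resumed while sulfate continued.

```python
import numpy as np

# Toy illustration of "tweaking": tune a sulfate-cooling scale factor on
# 1940-1980, then keep it when warming resumes. All series are invented.
years = np.arange(1940, 2000)
obs = np.where(years < 1980,
               -0.005 * (years - 1940),            # cooling to 1980
               -0.200 + 0.015 * (years - 1980))    # warming after 1980
ghg = 0.010 * (years - 1940)                       # GHG-only model output, K
sulfate = 0.001 * (years - 1940)                   # sulfate loading index

tune = years < 1980                                # tuning period only
# Least-squares choice of s for: model = ghg - s * sulfate
s = np.sum((ghg[tune] - obs[tune]) * sulfate[tune]) / np.sum(sulfate[tune] ** 2)
model = ghg - s * sulfate

print(f"tuned s = {s:.1f}")
print(f"RMSE 1940-1979: {np.sqrt(np.mean((model[tune] - obs[tune]) ** 2)):.3f} K")
print(f"RMSE 1980-1999: {np.sqrt(np.mean((model[~tune] - obs[~tune]) ** 2)):.3f} K")
```

The tuned model matches the cooling period exactly, then diverges as soon as the relationship it was fitted to breaks down, which is the post-1980 problem described above.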

As Gray explained,

Instead of validation, and the traditional use of mathematical statistics, the models are “evaluated” purely from the opinion of those who have devised them. Such opinions are partisan and biased. They are also nothing more than guesses.

The 2013 IPCC Physical Science Basis Report Admits There Is No Validation

Chapter 9 of the 2013 IPCC Report is titled Evaluation of Climate Models. They claim some improvements in the evaluation, but it is still not validation.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

In many cases, the lack or insufficient quality of long-term observations, be it a specific variable, an important process, or a particular region (e.g., polar areas, the upper troposphere/lower stratosphere (UTLS), and the deep ocean), remains an impediment. In addition, owing to observational uncertainties and the presence of internal variability, the observational record against which models are assessed is ‘imperfect’. These limitations can be reduced, but not entirely eliminated, through the use of multiple independent observations of the same variable as well as the use of model ensembles.

The approach to model evaluation taken in the chapter reflects the need for climate models to represent the observed behaviour of past climate as a necessary condition to be considered a viable tool for future projections. This does not, however, provide an answer to the much more difficult question of determining how well a model must agree with observations before projections made with it can be deemed reliable. Since the AR4, there are a few examples of emergent constraints where observations are used to constrain multi-model ensemble projections. These examples, which are discussed further in Section 9.8.3, remain part of an area of active and as yet inconclusive research.
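The ensemble point in that quote is a standard statistical one; a minimal sketch with invented numbers shows how averaging independent runs damps internal variability:

```python
import numpy as np

# Averaging an ensemble of noisy model runs suppresses internal
# variability so the shared forced signal shows through. Invented data.
rng = np.random.default_rng(1)
signal = np.linspace(0.0, 0.6, 50)                  # common forced trend, K
runs = signal + rng.normal(0.0, 0.15, (20, 50))     # 20 runs with "weather"

single_error = np.mean(np.abs(runs[0] - signal))
ensemble_error = np.mean(np.abs(runs.mean(axis=0) - signal))
print(f"single run error: {single_error:.3f} K")
print(f"20-run ensemble-mean error: {ensemble_error:.3f} K")
```

Note what averaging does and does not do: it damps run-to-run variability, but any bias shared by every model survives the average, which is why the IPCC say these limitations can be “reduced, but not entirely eliminated”.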

Their Conclusion

Climate models of today are, in principle, better than their predecessors. However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.

These quotes are from the Physical Science Basis Report, which means the media and policymakers don’t read them. What they get is a small Box (2.1) on page 56 of the Summary for Policymakers (SPM). It is carefully worded to imply everything is better than it was in AR4. The opening sentence reads,

Improvements in climate models since the IPCC Fourth Assessment Report (AR4) are evident in simulations of continental-scale surface temperature, large-scale precipitation, the monsoon, Arctic sea ice, ocean heat content, some extreme events, the carbon cycle, atmospheric chemistry and aerosols, the effects of stratospheric ozone and the El Niño-Southern Oscillation.

The only thing they concede is that

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature. Confidence in the representation of processes involving clouds and aerosols remains low.

Ironically, these comments face the same challenge as validation: the reader doesn’t know the starting point. If your model doesn’t work, then “improved somewhat” is meaningless.

All of this confirms the validity of Dr. Gray’s comments that validation is mandatory for a climate model and that,


“No computer model has ever been validated.”

And

Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.