A number of the AGW crowd insist that all the evidence is on their side and that no skeptic can get anything published in a refereed journal, which, they say, proves that science is on their side.
Nice try, fellows.
If you go here, you can find this.
Hydrological Sciences Journal, August 2008.
On the Credibility of Climate Predictions
From a team of authors from Greece.
D. KOUTSOYIANNIS, A. EFSTRATIADIS, N. MAMASSIS & A. CHRISTOFIDES
Department of Water Resources, Faculty of Civil Engineering, National Technical University of Athens, Heroon Polytechneiou 5, GR-157 80 Zographou, Greece
Here from the Abstract:
Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.
This is a critical paper: critical not merely in the sense that it asks hard questions, but also in the sense that it carries weight, because it has passed peer review and is being published.
In other words, it can't be ignored by the AGW crowd: they can say that the writers of the paper have things wrong, but they're going to have to show how that is so.
The key point here: these model predictions are typically used without any assessment of their reliability. None.
Ouch.
What else do they say?
In all examined cases, GCMs generally reproduce the broad climatic behaviours at different geographical locations and the sequence of wet/dry or warm/cold periods at a monthly scale. Specifically, the correlation of modelled time series with historical ones is fair and the resulting coefficient of efficiency seems satisfactory. However, where tested, replacement of the modelled time series with a series of monthly averages (same for all years) resulted in higher efficiency.
This means that the GCMs do a good job at what they were originally supposed to do: reproduce the broad behaviour at a monthly scale. But with a slight problem: when the authors replaced the modelled time series with a plain series of monthly averages, the same twelve values for every year, that trivial benchmark scored a higher coefficient of efficiency against the observations than the model output did.
Ouch.
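To pin down what "coefficient of efficiency" means here, a minimal sketch in Python. The numbers are synthetic stand-ins for the station records and GCM output, purely for illustration; the function name and the made-up series are my own assumptions, not the paper's code:

import numpy as np

def efficiency(obs, sim):
    # Coefficient of efficiency (Nash-Sutcliffe): 1 is perfect, 0 means
    # "no better than predicting the observed mean", negative means worse.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
years = 100
seasonal = 10.0 * np.sin(np.arange(12) / 12.0 * 2.0 * np.pi)   # an annual cycle
obs = seasonal + rng.normal(0, 2, size=(years, 12))            # "observed" monthly series
gcm = seasonal + rng.normal(0, 2, size=(years, 12))            # "model" series: right seasonality,
                                                               # anomalies unrelated to the observations
climatology = np.tile(obs.mean(axis=0), (years, 1))            # the same 12 monthly averages every year

print(efficiency(obs.ravel(), gcm.ravel()))          # looks respectable, thanks to the seasonal cycle
print(efficiency(obs.ravel(), climatology.ravel()))  # higher still: the trivial benchmark wins

In this toy setup the fixed monthly pattern beats the "model" precisely because the model's year-to-year wiggles are unrelated to the observed ones, which is the pattern the paper reports.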
You know what this means?
That the model results are, generally speaking, a product of the models rather than of the data: a fixed pattern of monthly averages predicts the observations better than the model output does. This can only mean that the results of the models were programmed into the models themselves.
In other words, the data is irrelevant: you will always get the same results. Anyone remember the Hockey Stick, that lovely model that reported the same results when fed with ... noise?
At the annual and the climatic (30-year) scales, GCM interpolated series are irrelevant to reality. GCMs do not reproduce natural over-year fluctuations and, generally, underestimate the variance and the Hurst coefficient of the observed series. Even worse, when the GCM time series imply a Hurst coefficient greater than 0.5, this results from a monotonic trend, whereas in historical data the high values of the Hurst coefficient are a result of large-scale over-year fluctuations (i.e. successions of upward and downward "trends"). The huge negative values of coefficients of efficiency show that model predictions are much poorer than an elementary prediction based on the time average. This makes future climate projections at the examined locations not credible. ... However, the poor GCM performance in all eight locations examined in this study allows little hope, if any. An argument that the poor performance applies merely to the point basis of our comparison, whereas aggregation at large spatial scales would show that GCM outputs are credible, is an unproved conjecture and, in our opinion, a false one.
Double ouch: the model output, interpolated to the station locations and set against the historical records (the "GCM interpolated series"), is irrelevant to reality at the annual and 30-year scales.
The point about the Hurst coefficient greater than 0.5 again suggests results built into the models: where the model series show high Hurst values, they come from a monotonic trend rather than from the large-scale over-year fluctuations seen in the historical records, i.e. the models produce the desired upward trend regardless of the data.
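For readers who haven't met the Hurst coefficient, a rough sketch of one common estimator (the aggregated-variance method; the function and the synthetic series below are my own illustration, not the authors' procedure). It shows how a monotonic trend alone pushes the estimate above 0.5:

import numpy as np

def hurst_aggregated_variance(x):
    # The standard deviation of block means of size k scales roughly as
    # k**(H - 1); fit the slope on log-log axes and recover H.
    x = np.asarray(x, dtype=float)
    n = len(x)
    scales = np.unique(np.logspace(np.log10(2), np.log10(n // 10), 20).astype(int))
    stds = []
    for k in scales:
        m = n // k
        block_means = x[:m * k].reshape(m, k).mean(axis=1)
        stds.append(block_means.std(ddof=1))
    slope = np.polyfit(np.log(scales), np.log(stds), 1)[0]
    return slope + 1.0

rng = np.random.default_rng(0)
noise = rng.normal(size=1200)                   # no persistence, no trend
trended = noise + np.linspace(0.0, 3.0, 1200)   # the same noise plus a monotonic trend

print(hurst_aggregated_variance(noise))    # close to 0.5
print(hurst_aggregated_variance(trended))  # well above 0.5, driven by the trend alone

The quote's point is that the historical records earn their high Hurst values from long up-and-down fluctuations, whereas the GCM series get theirs the cheap way shown in the second line: from a monotonic trend.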
Such modelled results are the result of scientific fraud on a grand scale: we're not talking about a single GCM, but six of them: CGCM3-A2, PCM-20C3M, ECHAM5-20C3M, CGCM2-A2, HadCM3-A2 and ECHAM4-GG.
These are, from what I can tell, the core of the current generation of deterministic GCMs in use.
Ouch.
Let's condense the above: the models have a built-in trend, and a simple repetition of time averages gives better results than the models do.
Orchids to the authors, the peer reviewers and the publisher for their commitment to scientific research. This is how science should work. Onions galore to those who haven't bothered to check the reliability (and hence plausibility) of the models whose results have been used, ideologically, for years to try to force political and economic changes on the world economy.