May 03, 2008
10 Year Cooling Break From Global Warming Predicted

Many climate models show a steadily hotter century because of atmospheric carbon dioxide build-up. But Dr. Noel S. Keenlyside and colleagues at the Leibniz Institute of Marine Sciences in Kiel, Germany and at the Max Planck Institute for Meteorology in Hamburg predict in a paper in Nature that global temperatures might stay flat or decline in the next decade.

One of the first attempts to look ahead a decade, using computer simulations and measurements of ocean temperatures, predicts a slight cooling of Europe and North America, probably related to shifting currents and patterns in the oceans.

The team that generated the forecast, whose members come from two German ocean and climate research centers, acknowledged that it was a preliminary effort. But in a short paper published in the May 1 issue of the journal Nature, they said their modeling method was able to reasonably replicate climate patterns in those regions in recent decades, providing some confidence in their prediction for the next one.

This model might not be correct. But suppose it is. Then the climate models which predict warming due to CO2 emissions are inaccurate, at least for the next decade. These models might still be accurate in their longer term predictions. But we might not be able to tell based on what happens in the next decade.

If Keenlyside is correct then we'll see warming in 15 to 20 years.

This natural ocean cycle may partly explain why temperatures rose in the early years of the last century before beginning to cool in the 1940s.

"One message from our study is that in the short term, you can see changes in the global mean temperature that you might not expect given the reports of the Intergovernmental Panel on Climate Change (IPCC)," said Noel Keenlyside from the Leibniz Institute of Marine Sciences at Kiel University.

His group's projection diverges from other computer models only for about 15-20 years; after that, the curves come back together and temperatures rise.

On the bright side, in 15 to 20 years we'll have much better climate models and also better climate engineering technology.

Roger Pielke Jr. points to the basic unfalsifiability of current climate models. If a lack of warming doesn't falsify predictions then what does that say about how we should treat the predictions?

I am sure that this is an excellent paper by world class scientists. But when I look at the broader significance of the paper what I see is that there is in fact nothing that can be observed in the climate system that would be inconsistent with climate model predictions. If global cooling over the next few decades is consistent with model predictions, then so too is pretty much anything and everything under the sun.

This means that from a practical standpoint climate models are of no practical use beyond providing some intellectual authority in the promotional battle over global climate policy. I am sure that some model somewhere has foretold how the next 20 years will evolve (and please ask me in 20 years which one!). And if none get it right, it won't mean that any were actually wrong. If there is no future over the next few decades that models rule out, then anything is possible. And of course, no one needed a model to know that.

Global warming might well be a big real problem coming up on us. But science is about predicting behavior. If we can understand a system well enough in theory our model of that system ought to allow us to make accurate predictions about it. Well, climate models can't do that.

The development of climate models is a very worthwhile human endeavor. But we can't use climate models to decide what to do about global warming because those models are highly inaccurate and unverifiable.

Update: Some months back a highly accomplished planetary scientist of my acquaintance (whose aversion to politicized science debates is strong enough that he doesn't want to be quoted by name, unfortunately) explained to me that he sees science as the ability to predict. He thinks the sources of error remaining in existing climate models are so large that these models aren't predictive.

The models are going to get better. But unless you happen to have a Ph.D. in atmospheric physics (or, far better, a time machine) it is going to be hard to know when the models cross over into high accuracy. Even once the models get really good we won't know until some time has gone by so that we can see that they really predict. Even then their results will still need to be stated with qualifiers such as "assuming total solar radiation doesn't change much" unless we develop the ability to accurately forecast future output of the sun.

I see the climate change models as having problems similar to the Reagan era Star Wars (Strategic Defense Initiative or SDI) program when computer scientist David Parnas opined that there was no way to verify the correctness of the software that would control the anti-missile defense system. How to prove the correctness of the models? This is a very serious question. Verification and validation of software is hard. For climate models it is especially hard because the systems that the models seek to simulate are not sufficiently well understood, the systems have chaotic effects, and our computers do not have enough capacity. Plus, we can't know what correct outputs look like without using a time machine.

We end up needing to just decide that since CO2 causes more heat to be retained, higher CO2 means a substantial probability of warming which melts Antarctic and Greenland ice and floods lots of land. We ought to do something to be on the safe side, not because we can prove a future disaster but because there is an unknown yet probably substantial probability of one. That's a harder sell than the absolute certainty that you'll hear from the likes of Al Gore.

In a way the debate between elites in Europe and the United States on global warming is irrelevant. China has sailed past the United States in CO2 emissions and that gap is only going to grow larger in future years. The Chinese aren't going to restrain their CO2 emissions. They want economic growth. Even in Western Europe the voting publics have shown an aversion to severe sacrifice to cut back on CO2 emissions. 50 coal electric plants are coming on line in Europe in the next 5 years even as Germany maintains its commitment to phase out nuclear electric power.

Over the next five years, Italy will increase its reliance on coal to 33 percent from 14 percent. Power generated by Enel from coal will rise to 50 percent.

And Italy is not alone in its return to coal. Driven by rising demand, record high oil and natural gas prices, concerns over energy security and an aversion to nuclear energy, European countries are expected to put into operation about 50 coal-fired plants over the next five years, plants that will be in use for the next five decades.

In the United States, fewer new coal plants are likely to begin operations, in part because it is becoming harder to get regulatory permits and in part because nuclear power remains an alternative.

Before you cry out that Europeans do far more than ugly polluting Americans (who are more opposed to coal pollution than are Europeans, probably because Americans have higher living standards), keep in mind that European nations have still done far less than needed if we accept some of the more pessimistic IPCC forecasts and arguments about the resulting deleterious effects. Also, it looks to me as if they've reached the political limits of how much they'll sacrifice.

In a nutshell, people aren't going to sacrifice very much and the number of people who are consuming fossil fuels goes up every year and the average amount of fossil fuels consumed per person goes up every year. Asian industrialization swamps all other effects.

Given the lack of willingness to sacrifice and the difficulty of proving what CO2 build-up will do to temperatures, I think we need to approach the problem differently. Admit there is a risk of warming and resulting risks of flooding, crop failures, and other problems. But also admit that humans aren't going to inflict major sacrifices on themselves to do much about it. The big surge in Prius sales is coming mostly from high oil prices, not from concern about the Greenland ice mass. Though you can expect many Prius buyers to try to claim higher status due to their supposed environmental consciousness.

So what to do? Accelerate the development of technologies for getting energy from non-fossil fuels sources. Also, develop technologies for climate engineering.

Randall Parker, 2008 May 03 11:31 PM  Climate Trends


Comments
David Govett said at May 4, 2008 1:32 AM:

Climatologists should confine themselves to predicting past climates. The only accurate prediction I've heard from them is that the side of earth facing away from the sun will be cooler than the side facing the sun.
As for computer models of future climates, some will be right some of the time. Gimme my Nobel Prize.

aa2 said at May 4, 2008 1:37 AM:

Totally anecdotal, but in my city it is by far the coldest spring since I've lived here, which is 15 years. It's a city where people love gardens, but this year the plants that were planted to bloom in the late spring or summer froze to death during the long cold spells. Other plants like older trees have been weeks and weeks behind when they usually bloom. Farmers are so late at planting, they are looking at maybe missing the entire season because the ground is just too cold to plant the seeds.

Comically that hasn't stopped the provincial government from passing a 7 cents a liter gas tax increase to stop global warming this year.

Phil said at May 4, 2008 1:47 AM:

I think your interpretation is slightly askew.

Take a look at what James Annan says:

http://julesandjames.blogspot.com/2008/05/another-decadal-prediction.html

And William Connolley:

http://scienceblogs.com/stoat/2008/05/the_malign_nature_effect_again.php

Cheers,

Phil


Phil said at May 4, 2008 1:58 AM:

And as for the "unfalsifiability" brigade (RP Jr really is getting annoying these days), Michael Tobis has it well summed up:

http://initforthegold.blogspot.com/2008/05/falsifiability-question.html

Phil

Bob Badour said at May 4, 2008 6:38 AM:

As for Michael Tobis, I am amazed that one can get an advanced degree in a scientific subject and yet have so little insight into the scientific method.

"Really, though, I don't understand the question."

He seems totally oblivious to the fact that the computational models he creates are hypotheses in exactly the same way Newton's Laws of Mechanics are hypotheses. As hypotheses, we evaluate them on the closely related concepts of falsifiability and predictive value. We call Newton's Laws laws because they are falsifiable, highly predictive and empirically accurate (at non-relativistic speeds and super-quantum distances.)

Granted, Tobis' models have more terms and require a few more calculations, but as hypotheses we must necessarily evaluate them on the same qualities. If the models are not falsifiable and have no predictive value, science must reject them even if religion embraces them. Climate models thus far seem like just so much ornate handwaving.

Jake said at May 4, 2008 8:37 AM:

Unbelievable. Contrary to their models, we have had 10 years without warming. Now they say warming is going to be delayed another 20 years.

But we are still supposed to endure high taxes, high unemployment, and massive starvation just so the global warming industry can get their $2.5 billion a year in grants, speaking fees and contributions.

Randall Parker said at May 4, 2008 9:29 AM:

Phil,

See my update. How to verify and validate climate models? I think it can't be done. This is SDI all over again but with an even more complex system.

dantealiegri said at May 4, 2008 10:52 AM:

Randall:

Sadly,

I don't think the models will get more accurate for a long time, and the simple reason is that we don't have enough extremely high quality data from a long period.

While we have some decent temperature data going back 100 or so years, that's pretty sparse, and we don't have a good general method for doing something like the CPI does for economics. We don't have a super accurate way of base-lining that data.

While we have tree rings and ice, these too are very few data points for how much data we are extrapolating from them. I think the current issues show how fragile this whole setup is. I know firsthand how equations with small differences can produce vastly different answers, and that is in simulations trivial compared to this (just doing AI work in simulated environments and dealing with how they got "confused").

Truly, we won't have a good model until a model can predict fairly well (given all inputs it needs) for 10-15 years. I think something like that is standard in most computer products: build on all the data you have, and then test on data you didn't get to build with.
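That hold-out testing idea can be sketched in a few lines. (The synthetic series, the 70/30 split, and all the numbers here are made up purely for illustration; no real climate data is involved.) Fit a trend on the first 70 years of a toy temperature series, then score the fit on the 30 held-out years it never saw:

```python
import math

# Synthetic "temperature anomaly" series: a 0.02 deg/yr trend plus a
# small 60-year oscillation. Entirely made up for illustration.
years = list(range(100))
temps = [0.02 * t + 0.1 * math.sin(2 * math.pi * t / 60) for t in years]

split = 70  # fit on the first 70 years, hold out the last 30
train_t, train_y = years[:split], temps[:split]

# Ordinary least-squares fit of y = slope * t + intercept on the training years.
n = len(train_t)
mean_t = sum(train_t) / n
mean_y = sum(train_y) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(train_t, train_y)) / \
        sum((t - mean_t) ** 2 for t in train_t)
intercept = mean_y - slope * mean_t

# Score on the held-out years the fit never saw.
holdout_rmse = math.sqrt(sum(
    (temps[t] - (slope * t + intercept)) ** 2 for t in years[split:]
) / (100 - split))

print(f"fitted trend: {slope:.4f} deg/yr, holdout RMSE: {holdout_rmse:.3f}")
```

The fitted trend is close to the true 0.02 deg/yr, but the unmodeled oscillation shows up as out-of-sample error; that residual error on unseen data, not goodness of fit to history, is what tells you whether the model has any skill.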

averros said at May 4, 2008 1:13 PM:

Actually, the climate models cannot be accurate for fundamental reasons. Not now, not in the future. The climate is a chaotic system. A small perturbation in an initial state, or any parameter, can lead to huge divergence in predictions, and so could small inaccuracies in models and effects of computational techniques.

(Even really precise and predictable physical phenomena like orbits of planets and minor bodies are chaotic longer-term).

Improving data lets models improve short-term predictions (i.e. up to 10 days) - but the improved accuracy of initial data is totally swamped by artifacts of model simplifications (i.e. granularity and inability to take into account myriad factors, some of which (such as the influence of airborne bacteria on cloud nucleation, etc.) are not presently well understood) after repeated iterations. By 2100 the modern models have error accumulation equivalent to +/- 100 deg C.
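The sensitivity to initial conditions at issue here is easy to demonstrate with the classic Lorenz (1963) system, the standard toy model of atmospheric chaos. This sketch (the parameter values are the conventional textbook ones, not anything from a production climate model) nudges one coordinate by one part in 10^8 and watches the two trajectories separate:

```python
import math

def lorenz(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 equations."""
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = lorenz(*state)
    k2 = lorenz(*[s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = lorenz(*[s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = lorenz(*[s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

dt, steps = 0.01, 2000           # integrate out to t = 20
a = [1.0, 1.0, 20.0]             # arbitrary starting point
b = [1.0 + 1e-8, 1.0, 20.0]      # same point, nudged by one part in 10^8

max_sep = 0.0
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    max_sep = max(max_sep, math.dist(a, b))

print(f"initial separation 1e-08, max separation {max_sep:.3g}")
```

Two states identical to eight decimal places end up in visibly different parts of the attractor, which is exactly why point forecasts of such systems degrade so quickly.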

See also: http://skeptic.com/the_magazine/featured_articles/v14n01_climate_of_belief.html Note that Skeptic is definitely not a hangout of crackpots and science denialists.

The only honest way to assess our present state of knowledge about longer-term climate is "we don't know". But that means that no policy can be based on the (non-)predicted climate change. Poof go the political and academic careers built on the confident predictions of trouble to come - so no wonder the establishment has a vested interest in keeping rather mum on the validity of their science.

Fat Man said at May 4, 2008 2:44 PM:

Now that the scientific debate is over, let's cut off their funding.

Fat Man said at May 4, 2008 8:08 PM:

"In a nutshell, people aren't going to sacrifice very much"

They won't even sacrifice their ignorance and their illusions.

Znorist said at May 5, 2008 7:55 AM:

Phil is really, really smart, isn't he? He knows just the right people to follow. People who calm his fears and uncertainties and keep him strong in the faith. We all need to be reassured that these voices of cognitive dissonance can be safely ignored. Only believe.

Michael Tobis said at May 5, 2008 8:39 AM:

So little time, so many misunderstandings.

1) The unforced climate system is not known to be chaotic itself and certainly is not on the time scales at issue.

Weather is chaotic. Climate is the statistics of weather. In the usual examples of chaotic systems, the "weather" of the system is chaotic and the "climate" is stable.

Whether or not we can say anything about a white Christmas in Boston (weather) we can safely predict that Christmas will be colder than the 4th of July (climate).

2) "He seems totally oblivious to the fact that the computational models he creates are hypotheses in exactly the same way Newton's Laws of Mechanics are hypotheses."

The "scientific method" is often stated in absolute terms to be something by people who don't actually do science. It really varies much more than you'd suspect from freshman or high school pop science classes.

Anyway. It is certainly false that any computer model is a perfect representation of the climate system. No modeler makes any such claim. I can give you many reasons for that, chaos not among them. "Resolution" is most important. The models are far too coarse to capture the behaviors of the real system and will always be so.

So what are we trying to achieve if we don't believe our models stand a chance of being "correct hypotheses"?

Well, a wind tunnel model is not a perfect representation of a real airplane, either, but we can draw conclusions about the real airplane and test those. A "model", then, is not a hypothesis, but a tool, a construct upon which we can build hypotheses, such as 1) the model represents the real world adequately for purpose A 2) the model fails to represent the real world adequately for purpose B. These are the sorts of formal hypotheses associated with models, computational or otherwise.

3) "Now that the scientific debate is over, let's cut off their funding."

This is exactly the argument that proves we aren't saying this stuff for the money.

I contend that climate is intrinsically interesting and should not be funded less than it would have been had there been no controversy. I suspect, by now, that it is.

As we try to point out how big the problem is, people see us as environmentalists, as politicians and not as scientists. Especially those who disbelieve the point are inclined to cut off our funds. Even if you believe the point, if you believe what we are doing is advocacy and not science, you might be inclined to say, point made.

Self-interest very much says we should shut up. It's a moral obligation to do otherwise, not self-interest. Therefore arguments that we are in it for the money are crazy.

Climate is a crucial testbed for high performance, high complexity modeling. We have the physics, we have the data, and we have room for improvement. It's the cutting edge of science (certain types of computational biology stand to benefit directly from our methods, for instance) and should be among the most important of the sciences whether or not there is a policy problem.

mt

PS: Phil, thanks for the link.

philw1776 said at May 5, 2008 9:10 AM:

Tobis et al. impute a positive feedback factor for CO2. Depending upon the magnitude of that empirically assigned, not laws-of-physics-derived, factor, which varies according to different scientists' models, you then get different amounts of predicted warming over the coming decades from the models. As a global lukewarmist who understands that climate is dynamic and ever changing, I'm skeptical that a simple forecast ~doubling of CO2 percentage will have the effects predicted by the models. Earth's past climates have had many times current CO2 levels and, while considerably warmer on some (not all!) occasions, no Gorish runaway occurred.

Secondly, it's not just the unnamed planetary scientist who's skeptical of the climate models' capabilities but also notables like Freeman Dyson. I find it depressing that scientists on the not-PC side of AGW fear being besieged politically should they speak out scientifically in public.

Michael Tobis said at May 5, 2008 2:55 PM:

It's clear from what Dyson says that he has little idea what a climate model is and is not in close touch with any field relevant to climate change.

http://initforthegold.blogspot.com/search?q=dyson

IPCC (which is what I presume you mean by Gorish) does not say anything about a runaway.

If you put a positive feedback on the carbon side, things get much worse than IPCC says. (Not entirely implausible; such a mechanism must have been at work during the turnaround from the last glacial maximum; the much ballyhooed lead of temperature over CO2 is bad news, not good.)

Most IPCC scenarios show accelerating temperature increases because economists predict growing emissions.

Finally, the claim that "Earth's past climates have had many times current CO2 levels and while considerably warmer on some (not all!) occasions" is very much in dispute. Compelling evidence for a close link between CO2 and earth surface temperature on even very long time scales appears in Coupling of surface temperatures and atmospheric CO2 concentrations during the Palaeozoic era, Rosemarie E. Came, John M. Eiler, Ján Veizer, Karem Azmy, Uwe Brand & Christopher R. Weidman.

http://www.nature.com/nature/journal/v449/n7159/full/nature06085.html


Bob Badour said at May 5, 2008 7:39 PM:

"The 'scientific method' is often stated in absolute terms to be something by people who don't actually do science. It really varies much more than you'd suspect from freshman or high school pop science classes."

As someone with a degree in hard science who does science every day, it continues to amaze me that someone can get an advanced degree in a scientific subject and still fail to comprehend science and the scientific method at even a high school level.

Consider the piece of text: f=ma

The piece of text represents Newton's Second Law, which is an hypothesis regarding the relation among measures of force, mass and acceleration. Technically, we should write it using the symbol for "is proportional to" or as f=kma where the constant k depends solely on the chosen units of measure. Often, we choose units of measure where k=1.

The texts f=ma and f=kma are symbolic representations, which is to say we can manipulate them symbolically. In this manner, they are no different from any other computer program. Perhaps the latter is the debugged version of the former. We can join the relations with other relations to derive new programs. Given any three measures (boundary conditions of a sort) we can solve for the remaining measure. We can use the computational model of f=kma for integration, interpolation, differentiation. All kinds of things.
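To make the point concrete, the relation can be solved for whichever measure is left unspecified. A minimal sketch (the function name and interface here are illustrative only):

```python
def solve_fkma(f=None, k=None, m=None, a=None):
    """Solve f = k*m*a for whichever single argument is left as None."""
    missing = [name for name, v in
               (("f", f), ("k", k), ("m", m), ("a", a)) if v is None]
    if len(missing) != 1:
        raise ValueError("exactly one of f, k, m, a must be None")
    if f is None:
        return k * m * a
    if k is None:
        return f / (m * a)
    if m is None:
        return f / (k * a)
    return f / (k * m)

# With SI units we choose k = 1: a 2 kg mass accelerating at 3 m/s^2
# requires a 6 N force, and the same relation run backwards recovers
# the acceleration from the force and the mass.
print(solve_fkma(k=1.0, m=2.0, a=3.0))   # force
print(solve_fkma(f=6.0, k=1.0, m=2.0))   # acceleration
```

The same symbolic text serves as predictor, interpolator, or consistency check depending on which measure is treated as the unknown, which is the sense in which the hypothesis and the program are one and the same object.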

We evaluate the hypothesis (ie. what the text represents) on two very important qualities: falsifiability and predictive value. These are the qualities by which we determine whether our model or hypothesis adequately (ie. usefully) represents the real world. We have determined empirically that Newton's Second Law does not usefully represent the real world at quantum distances or at relativistic speeds. These would fall under purpose B where the computational model or hypothesis fails. Luckily, Newton's Second Law has a purpose A, which is everything except quantum distances or relativistic speeds, where it is falsifiable, highly predictive and empirically accurate.

In the sense that Tobis' computer model is a program text that embodies an hypothesis, it is no different from f=kma. Granted, Tobis' text will be more complex. It will have more terms etc. but it too is suitable for symbolic manipulation, which is what the compiler does with it. Feed it the appropriate boundary conditions and integrate it over the appropriate timespan, and we can use it to solve for things like surface temperature of the earth.

Just like any other hypothesis, we evaluate it on the qualities of falsifiability and predictive value. So, it spits out an answer. How do we know whether that answer is useful? If it is falsifiable and predictive, we can make a lot of predictions and see whether the predictions are right. If it is not, we cannot. If just about every imaginable outcome is within the expected error bounds, the computer model is useless as is the hypothesis it embodies.

His wild gesticulation about wind tunnels shows he cannot tell the difference between an hypothesis and a piece of experimental equipment. A wind tunnel is experimental equipment. It allows an experimenter to control and measure variables like wind speed, air temperature, air pressure, humidity to see how changing each of these variables in isolation affects a given system.

Presumably, he doesn't understand the difference between a physical model, like a scaled version of an airfoil or vehicle, which is a physical artifact suitable for making empirical observations, and a computer model, which is an ephemeral abstraction suitable for symbolic manipulation and representing an hypothesis.

Tobis' defense of his position is just so much handwaving. Unless his computer models are falsifiable, they are not even wrong.

In the end, I think perhaps I should have cited Date's Principle of Incoherence: It is very difficult to respond coherently to that which is incoherent.

philw1776 said at May 6, 2008 10:19 AM:

Past CO2 levels may be in dispute, but they SHOULD be. That is science. Here's one view showing some much higher CO2 levels in past eras.

http://www.geocraft.com/WVFossils/Carboniferous_climate.html

Today's Climate Models use empirical, NOT physical law calculated, positive feedback multipliers for CO2 induced temperature rise. Such 'fudge factors' should be in dispute, but the strident warmist political faction intimidates those who are skeptical about such methodology. If a Freeman Dyson is not free from ridicule or attack, may the gods help an obscure planetary scientist skeptic, particularly one without tenure. A climate model is a tool. It is pure hubris to insist that today's models accurately factor in all forcing functions. We know THAT much about climate? Hardly. Scientists and the public should be skeptical about a process that does not demonstrably model past climate accurately, uses empirical factors and is not falsifiable.

Just to be clear, I am not disputing that CO2, methane and water vapor are 'greenhouse' gasses, or that the Earth is warming*, just that the alarmist projections of algore and the political scientists writing the IPCC SUMMARY are not to be accepted without question.

* the graph I referenced shows (unfortunately) that the 'natural' temperature for this planet is several degrees C above today's and the 'natural' state is NO ice at the poles.

Kelly Parks said at May 6, 2008 1:14 PM:

Is it possible that rather than an upcoming 20 year cooling break from global warming, the last 20 years of warming has been a warming break from global cooling? That would mean Lowell Ponte (author of "The Cooling", 1976) was right after all.

averros said at May 9, 2008 9:47 PM:

> Weather is chaotic. Climate is the statistics of weather. In the usual examples of chaotic systems, the "weather" of the system is
> chaotic and the "climate" is stable.

Which is complete nonsense, for a very simple reason - you cannot have statistics over chaotic systems. They are not ergodic. Averaging over chaotic systems is simply mathematically invalid. Incidentally, there are other processes which do not allow statistics for a variety of reasons - the most widely known are self-similar traffic found (within a range of time scales) on the Internet and utility functions in economics.

Most people have a hard time grasping that there are situations when addition is an invalid operation and produces results about as meaningful as division by zero.

> Whether or not we can say anything about a white Christmas in Boston (weather) we can safely predict that Christmas will be colder
> than the 4th of July (climate).

Yep, that's because it is a sum of directly acting regular forcing of large magnitude (solar radiation) and lower-magnitude chaotic process, not because you can do meaningful averaged prediction of that process. Besides, we're talking not about prediction of Christmas temps next year, but about predicting Christmas temps 100 years from now - when that chaotic process can drift pretty far from its present range.

Michael Tobis said at May 10, 2008 7:58 PM:

Averros is confused. The Lorenz system is chaotic but has stable dynamics. It's not even stochastic, strictly speaking.

If you apply some additive noise, the system is chaotic and ergodic. Readers, possibly including Averros, may not know what that means. Ergodic means a long enough time sample is equivalent to an ensemble sample. Less formally, it means that the statistical properties of the system are constant on a long enough time scale.
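A toy example makes the definition concrete. For a stationary AR(1) process (a far simpler system than any climate model; the choice of example and parameters here is purely illustrative), the time average of x^2 along one long run matches the ensemble average of x^2 across many independent runs:

```python
import random

PHI = 0.5  # AR(1): x[t+1] = PHI * x[t] + unit Gaussian noise
rng = random.Random(42)

def step(x):
    return PHI * x + rng.gauss(0.0, 1.0)

# Time average: one long trajectory (after discarding a burn-in).
x, burn, n = 0.0, 1000, 100_000
for _ in range(burn):
    x = step(x)
time_avg = 0.0
for _ in range(n):
    x = step(x)
    time_avg += x * x
time_avg /= n

# Ensemble average: many short independent trajectories, sampled at the end.
runs, horizon = 5000, 50
ens_avg = 0.0
for _ in range(runs):
    y = 0.0
    for _ in range(horizon):
        y = step(y)
    ens_avg += y * y
ens_avg /= runs

# Both estimates approach the stationary second moment 1/(1 - PHI**2) = 4/3.
print(f"time average {time_avg:.3f}, ensemble average {ens_avg:.3f}")
```

Individual trajectories are unpredictable in detail, yet the two averages agree; that coexistence of unpredictable "weather" and stable "climate" statistics is exactly what ergodicity means here.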

I don't know about utility functions, but "self-similar traffic" surely has statistics like any other arrival process.

If Averros believes that in 100 years it may be likely that July is cooler than December in Boston, it follows for me that Averros believes that the atmosphere is not subject to the laws of physics.

PhilW's statement "Today's Climate Models use empirical, NOT physical law calculated, positive feedback multipliers for CO2 induced temperature rise." is incorrect as well, though not in as blatant a way. There are indeed empirically determined constants, but they are determined to get *observed* climate right. Then the CO2 sensitivity emerges from that. There's little variation between the resulting sensitivities and sensitivities obtained from direct measurement or from paleoclimate observations. CO2 has indeed varied considerably on the longest time scales, but the page he cites is very much out of date. There's a strong recent indication that temperatures on Phanerozoic time scales (the ones in his graph) do indeed respond directly to CO2 concentrations.

Came RE, Eiler JM, Veizer J, Azmy K, Brand U,Weidman CR (2007) Coupling of surface temperatures and atmospheric CO2 concentrations during the Palaeozoic era. Nature 449(7159): 198.

As for Bob Badour, a climate model is not a hypothesis, it is precisely a piece of lab equipment. If we could afford to play with real planets, well then, there wouldn't be a problem, would there? There is no lack of useful theory in the climate sciences, but the sum total of the knowledge has consequences which lead to equations with no closed form solution. Hence we resort to computational approximations, which have been enormously useful in the past.

It is very peculiar to have outsiders try to describe what one is doing and why with such adamance, especially when they get it wrong. Could you folks please at least have the kindness to state your suspicions as a question? Thanks.

Randall Parker said at May 10, 2008 8:40 PM:

Michael Tobis,

Your "piece of lab equipment" is producing predictions that we are supposed to treat as predictions coming from scientists. We normally think that scientists make predictions based on theories. Theories that do not accurately predict are not worth much scientifically.

But you can't state a pen-and-paper theory for why global warming is going to happen because the system in question is really, really complex. Hence the resort to computer models. So the computer models become part of the statement of a theory or hypothesis. If they aren't, then why should we treat them as science?

Michael Tobis said at September 4, 2008 7:59 AM:

"We normally think that scientists make predictions based on theories. Theories that do not accurately predict are not worth much scientifically."

I sympathize with the idea of "model code as embodiment of theory", but in practice these models are used as analogous systems. We work on making the analogy better, and whether or not we are still making progress is something I myself question. Nevertheless, past successes have been substantial.

At present, these analogous systems do confirm rather simple global average calculations that have existed for over a century indicating that a realistic accumulation of artificial greenhouse gases can substantially affect climate, directly through a pattern of warming the surface and cooling the upper atmosphere, and indirectly as a consequence of that change mediated by fluid dynamics, cloud physics, etc.

The analogous systems create a model of very high dimension using a relatively small number of free parameters which produce weather behavior and climate statistics in substantial agreement with observations. Accordingly they are the best we can do in understanding and prognosing the behavior of the whole system (conditional on human inputs).

If you reject this approach you are left with far less constrained back-of-the-envelope calculations, wherein the various feedbacks are not well constrained, and paleoclimate observations which do not provide perfect analogies. Consequently, a formal risk calculation comes out much worse than if you include the calculations from global models, and the rationale for restraint of emissions becomes far more urgent.

Randall Parker said at September 4, 2008 9:04 PM:

Michael Tobis,

The analogous systems agree with the simpler calculations. I'm not sure they confirm those calculations.

I do not reject model building. I also understand that the models are getting better. But I reject treating the outputs of the models as absolute proof. We are left with guesses about the probability that our models are producing outputs with errors that are not too big.

Earlier you commented:

Well, a wind tunnel model is not a perfect representation of a real airplane, either, but we can draw conclusions about the real airplane and test those.

But we can't run experiments on the real climate to test the accuracy of the climate models. Whereas we can build real planes and see how well the aircraft models hold up. That's an important difference. We'll only find out decades from now if some of the climate models are accurate.
