L'Ombre de l'Olivier

The Shadow of the Olive Tree

being the maunderings of an Englishman on the Côte d'Azur

01 December 2009 Blog Home : All December 2009 Posts : Permalink

Science, Computer Science and Climate Science

I (and the visitors and commenters at this blog and elsewhere) have had another week or so to look through the CRU leaked code so it's time for an update. I should note that I am making a VERY ARTIFICIAL (ha ha) decision to concentrate purely on the CRU temperature series code and ignore all the tree-ring stuff. However, with that said, quite a few of the more generic comments will certainly apply to that code too.

Firstly it is obvious that there is a serious disconnect between the worlds of academic science and (commercial) software development. No one from a software development background who has looked at this code has anything polite to say about it. One of my commenters wrote:

As for the code review, I've seen a ton of awful FORTRAN when I was designing death for bux at the missile factory. There is always a tension between The Scientists, who tend to be pathetic coders, and the programmers who don't understand the theory. And in many cases - ie, probably this one - academic environments don't have professional programmers.

In well-done scientific programming, you have developers write a framework and APIs and let the scientists implement The Magic Algorithm in an environment where they don't have to do things like file I/O, sorting, etc.

This indeed is the problem here. The CRU folks have, for various reasons, not outsourced much if any of the development to actual professional programmers and the result is therefore nasty.

In some respects the code itself is actually the lesser problem. The greater problem is the lack of process management tools - version control, archiving, etc. - which means that we have in fact no idea whether the code in the leak is the version used in the current HADCRU TS (v3) or one of the earlier v2.x editions. This is important because it would help us understand whether this code is the one where certain bugs identified by "Harry" have been fixed or not. Take the notorious overflow bug in HARRY chapter 17:

Inserted debug statements into anomdtb.f90, discovered that a sum-of-squared variable is becoming very, very negative! [...]

DataA val = 49920, OpTotSq=-1799984256.00

..so the data value is unbfeasibly large, but why does the sum-of-squares parameter OpTotSq go negative?!!

Probable answer: the high value is pushing beyond the single-precision default for Fortran reals?

Now the code for this is fairly easy to find* and when one looks at it one discovers that the bug has not been fixed. The code still says:

integer, pointer, dimension (:,:,:)        :: Data,DataA,DataB,DataC
          if (DataA(XAYear,XMonth,XAStn).NE.DataMissVal) then
            ! ... (sum-of-squares accumulation elided) ...
          end if

As we see from the declaration line, DataA is declared as a plain Integer. The default Fortran integer is 4 bytes, with a signed range of -2,147,483,648 to 2,147,483,647 - and 49920 squared is about 2.49×10^9, which is beyond that range. Hence the overflow problem when the value read in turns out to be 49920 and gets squared into the sum. I have done a quick check for **2 in other parts of the code and haven't found any other integers that get squared but there may well be some.
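To see the failure mode concretely, here is a small Python sketch (hypothetical - Python's own integers never overflow, so the `wrap32` helper just mimics 4-byte two's-complement wraparound) of what happens when 49920 is squared into a default Fortran integer:

```python
def wrap32(x):
    """Reduce x to a signed 32-bit two's-complement value, the way a
    default 4-byte Fortran integer would store it."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

val = 49920
print(val * val)          # 2492006400 -- the true square
print(wrap32(val * val))  # -1802960896 -- what a 4-byte integer actually holds
```

A single squared term already wraps negative; accumulated over many station values the sum-of-squares goes, as Harry put it, "very, very negative".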

Talking of bugs, I mentioned the "silently continue on error" 'feature' in my first post but my former boss John Graham-Cumming points out that the code actually has a bug in it:
; avoids a bug in IDL that throws out an occasional
; plot error in virtual window
if error_value ne 0 then begin

[...] The first bug appears to be in IDL itself. Sometimes the polyfill function will throw an error. This error is caught by the catch part and enters the little if there.

Inside the if there's a bug, it's the line i=i+1. This is adding 1 to the loop counter i whenever there's an error. This means that when an error occurs one set of data is not plotted (because the polyfill failed) and then another one is skipped because of the i=i+1.

Given the presence of two bugs in that code (one which was known about and ignored), I wonder how much other crud there is in the code.

To test that I was right about the bug I wrote a simple IDL program in IDL Workbench. Here's a screen shot of the (overly commented!) code and output. It should have output 100, 102, 103 but the bug caused it to skip 102.
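Since I can't embed the screenshot's IDL here, a Python sketch of the same loop logic (hypothetical values, with a simulated plotting failure standing in for the polyfill error) shows the effect:

```python
values = [100, 101, 102, 103]
plotted = []

i = 0
while i < len(values):
    try:
        if values[i] == 101:      # simulate polyfill throwing on this dataset
            raise RuntimeError("plot error")
        plotted.append(values[i])
    except RuntimeError:
        i = i + 1                 # the bug: an extra increment inside the handler
    i = i + 1                     # normal loop increment

print(plotted)  # [100, 103] -- 101 failed to plot, and 102 was silently skipped
```

One dataset is lost to the error itself, and the extra `i = i + 1` then throws away its innocent neighbour as well.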

Also, and this is a really small thing, the error_value=0 is not necessary because the catch resets the error_value.

In fact the whole error handling code could be reduced to the "goto" line. John ends his post with the question "BTW Does anyone know if these guys use source code management tools?" and I'm 99.99% sure the answer is that they do not use SCM tools.

This is a problem because, as another commenter of mine wrote:

Code this bad is the equivalent of witchcraft. There is essentially no empirical test to distinguish its output from nonsense. Sad to say, I've seen things like this before. Multi-author, non-software engineer-written codebases tend to have these sorts of hair-raising betises liberally sprinkled throughout (although this an extreme example - I wouldn't want to go into that code without a pump-action shotgun and a torch). Ian Harris certainly deserves our sympathy. Trying to hack your way through this utter balderdash must still have him sitting bolt upright in the middle of the night with a look of horror on his face.

The problem all this highlights is the difference between "Science" and "Computer Science".

When scientists submit papers to proper academic journals they are supposed to write up enough of their methodology that someone else can replicate their results. In theory at least. In practice much of modern science does not adhere to this, but in theory - as Feynman explained in "Cargo Cult Science" - what you should do is the following:

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can--if you know anything at all wrong, or possibly wrong--to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

The easiest way to explain this idea is to contrast it, for example, with advertising. Last night I heard that Wesson oil doesn't soak through food. Well, that's true. It's not dishonest; but the thing I'm talking about is not just a matter of not being dishonest, it's a matter of scientific integrity, which is another level. The fact that should be added to that advertising statement is that no oils soak through food, if operated at a certain temperature. If operated at another temperature, they all will-- including Wesson oil. So it's the implication which has been conveyed, not the fact, which is true, and the difference is what we have to deal with.

We've learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature's phenomena will agree or they'll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven't tried to be very careful in this kind of work. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.

For science that involves computer programs it seems blindingly obvious that the code must be included in the supplemental information. Even more, in cases like this where the data sources are mixed, it is vital that the actual raw data be available as well. And the excuses proffered by the CRU and its apologists (e.g. that the CRU record largely agrees with GISS, or a desire not to confuse non-scientists with detail) are neatly skewered in subsequent passages of the essay/speech.

Part of the problem, as Feynman noted further on, is that a trend has developed in science of not actually repeating experiments:

One of the students told me she wanted to do an experiment that went something like this--it had been found by others that under certain circumstances, X, rats did something, A. She was curious as to whether, if she changed the circumstances to Y, they would still do A. So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person--to do it under condition X to see if she could also get result A, and then change to Y and see if A changed. Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor. And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time. This was in about 1947 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.

Nowadays there's a certain danger of the same thing happening, even in the famous (?) field of physics. I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium. In order to compare his heavy hydrogen results to what might happen with light hydrogen he had to use data from someone else's experiment on light hydrogen, which was done on different apparatus.

This, I think, explains why the climate science "community" has such a hard time with people like Steve McIntyre. They simply are not used to people actually trying to replicate their results instead of taking them on trust. As a result they have never really thought about data and code archiving policies and all the other techniques that are required if someone is to replicate a complex piece of data analysis. They aren't helped by the rise of the internet and the power of modern computers. These days you can spend about €400 and get a netbook (and a terabyte external hard drive) with more processing power and more data storage than a mainframe of 25 years ago.

The photo to the left shows my latest toy - a Sharp Netwalker PC-Z1(B), which fits in a pocket and yet has 512MB of RAM and a 4GB flash hard drive. It cost me ¥39,000 (about €300) in Japan. It also has a USB socket into which you can plug an external hard disk at USB 2 speeds. A 1 terabyte hard drive cost me US$120 a few months back and it can be used (as illustrated) as storage for the Netwalker. That $120 disk could store all the raw files, the intermediate files and the final output of the HADCRU temperature series, and the CPU - despite being merely an ARM, not even an Intel Atom - is almost certainly more powerful than the workstations that versions 1 and 2 of the temperature series were developed on.

Furthermore the availability of broadband internet access (at c.10Mbit/s - the speed Ethernet LANs promised 20 years ago) means that it is easy to transfer data to anyone, not just those in academic research institutes. A current desktop PC with two or maybe four 64-bit cores, gigabytes of main memory and a high speed internet connection is likely to be able to process a century of climate data from thousands of stations in a few hours at most, and quite possibly the longest part of the process (once the code has been made to run) would be downloading the raw data to start it off.

This challenges climate scientists because it lets the amateur dilettante try to reproduce scientific results that he or she is interested in. Climate science interests a lot of the engineering geeky sorts because it is a politically important topic and one where it seems like we should be able to easily verify the model results that predict the imminent end of the world as we know it. A lot of us geeks are also involved in open source development, work in IT departments in businesses etc. and hence have a view of software development and a knowledge of how change management and other related tools help to reduce the inevitable bugs in code.

And this leads us to the expectations of computer scientists/programmers, and is why we get so upset when we are finally able to look at the CRU code. If the code were for a one-off then what we see is excusable; however we are now at version 3 of this code and it is rerun every month. This means the code should have moved from its hack-it-together prototype form to one with clear data structures, use of SQL databases, version control etc. A product will have a clear list of dependencies, a list of files and directories required (and ideally a config file where these things are placed so they aren't hard coded), test data and use cases to show how to get it working, and so on.
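As a minimal sketch of that last point - hypothetical paths, using Python's standard configparser - this is the kind of config file that would replace paths hard-coded as string literals throughout the source:

```python
import configparser

# Hypothetical example: all the paths collected in one editable file rather
# than scattered through the processing code.
cfg_text = """
[paths]
raw_data = /data/cru/raw
work_dir = /data/cru/work
output   = /data/cru/ts3
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)
raw_dir = cfg["paths"]["raw_data"]
print(raw_dir)  # /data/cru/raw
```

Change a directory layout and you edit one file, instead of hunting through thousands of lines of Fortran and IDL.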

*the link is to anomdtb.f90, which is mentioned quite a few times in the READ_ME.

03 December 2009

Fisking Nature and other AGW Alarmists

Nature magazine has come out swinging in defence of "climate science" and in the process displays attitudes that are more suited to Greenpeace than to a scientific journal. I'm going to fisk it to show just how much I find to disagree with it:

Stolen e-mails have revealed no scientific conspiracy, but do highlight ways in which climate researchers could be better supported in the face of public scrutiny.

This lede is actually pretty unobjectionable in what it says, but it is perhaps worth noting what it fails to say. The e-mails themselves may not show a major scientific conspiracy, but they do show evidence of malfeasance such as conspiring to delete data/e-mails when they are requested under FoI legislation, and nobbling journal editors.

The e-mail archives stolen last month from the Climatic Research Unit at the University of East Anglia (UEA), UK, have been greeted by the climate-change-denialist fringe as a propaganda windfall (see page 551). To these denialists, the scientists' scathing remarks about certain controversial palaeoclimate reconstructions qualify as the proverbial 'smoking gun': proof that mainstream climate researchers have systematically conspired to suppress evidence contradicting their doctrine that humans are warming the globe.

As various people have pointed out, the apparently deliberate conflation of climate change skepticism with neo-nazi holocaust denial by environmental activists is insulting to the memory of the six million plus dead, as well as being distinctly inaccurate. Not everyone who has read or commented on these e-mails is a "denialist". In fact I think the majority of informed comment has come from people who do not deny that the globe has warmed but rather are skeptical about the anthropogenic CO2 global warming hypothesis. Lumping together skeptical engineering and scientific folk with the lunatic fringe is not helpful; neither is pretending that skepticism does not exist on this issue.

What we "skeptics" consider to be far more of a smoking gun is the distinctly shoddy nature of the source code, which Nature doesn't deign to admit was leaked (stolen) too.

This paranoid interpretation would be laughable were it not for the fact that obstructionist politicians in the US Senate will probably use it next year as an excuse to stiffen their opposition to the country's much needed climate bill. Nothing in the e-mails undermines the scientific case that global warming is real — or that human activities are almost certainly the cause. That case is supported by multiple, robust lines of evidence, including several that are completely independent of the climate reconstructions debated in the e-mails.

This is correct: nothing in the e-mails undermines (or proves) the case that global warming is real - or that human activities are the cause. What does undermine the case is the aforementioned messy software. It may well be that the code produces the correct answer, but looking at it - and at HARRY_READ_ME and indeed the e-mails on the topic - does not convince. Neither do comparisons with GISS, which seems to be of equally low quality and comparison with which seems to be one of the "QA" tests in the code, along with checking that it gets similar answers to previous versions.

First, Earth's cryosphere is changing as one would expect in a warming climate. These changes include glacier retreat, thinning and areal reduction of Arctic sea ice, reductions in permafrost and accelerated loss of mass from the Greenland and Antarctic ice sheets. Second, the global sea level is rising. The rise is caused in part by water pouring in from melting glaciers and ice sheets, but also by thermal expansion as the oceans warm. Third, decades of biological data on blooming dates and the like suggest that spring is arriving earlier each year.

We skeptics or "lukewarmers" don't deny that the earth has warmed up over the last century or so. Nor do we deny that humans may have had some influence in the process. However we do question the degree of warming, whether it is "unprecedented" and whether humans really have caused the majority of it. Nothing that we have seen in these files convinces us, and a lot suggests that the scientists seem to be partly working back from the answer. It would also help if blithe comments about Antarctic ice melting etc. were actually backed up by the facts. As it happens it would seem that Antarctic ice as a whole has grown rather than shrunk in recent times, even though some parts near the Antarctic Peninsula have indeed seen ice loss.

Likewise "decades of biological data" show warming, but they do not show anything unprecedented as far as I can tell. No one is growing grain on Greenland, the Yamal Peninsula - to take a place of recent dendroclimatological interest - is not seeing the treeline move further north than it was in past warmer eras, and so on. Yes, the earth warmed over the 20th century. The earth also warmed over the 10th century and then cooled down again a few hundred years later.

Denialists often maintain that these changes are just a symptom of natural climate variability. But when climate modellers test this assertion by running their simulations with greenhouse gases such as carbon dioxide held fixed, the results bear little resemblance to the observed warming. The strong implication is that increased greenhouse-gas emissions have played an important part in recent warming, meaning that curbing the world's voracious appetite for carbon is essential (see pages 568 and 570).

It isn't just "denialists" who maintain that these changes are a part of natural climate variability. Even the hardest core "alarmist" will agree that in the past the climate has changed with no help from humans.

And the climate modelling assertion is laughable. If climate modellers had been able to predict the last 10 years of roughly flat temperatures then I'd have more sympathy with their inability to get warming without CO2 changes. The fact is that over the last 8-10 years the models have vastly overpredicted the warming, to the extent that most of them are rejectable at a 95% confidence level.

A fair reading of the e-mails reveals nothing to support the denialists' conspiracy theories. In one of the more controversial exchanges, UEA scientists sharply criticized the quality of two papers that question the uniqueness of recent global warming (S. McIntyre and R. McKitrick Energy Environ. 14, 751–771; 2003 and W. Soon and S. Baliunas Clim. Res. 23, 89–110; 2003) and vowed to keep at least the first paper out of the upcoming Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Whatever the e-mail authors may have said to one another in (supposed) privacy, however, what matters is how they acted. And the fact is that, in the end, neither they nor the IPCC suppressed anything: when the assessment report was published in 2007 it referenced and discussed both papers.

So the fact that a conspiracy failed makes discussing it all right? Moreover it seems to me that at the very least some of the IPCC/journal related e-mails show attempts to block, if not totally silence, publication of alternative viewpoints. More to the point, the whole Wahl/Ammann thread shows that they had no qualms about bending the rules one way but not the other.

If there are benefits to the e-mail theft, one is to highlight yet again the harassment that denialists inflict on some climate-change researchers, often in the form of endless, time-consuming demands for information under the US and UK Freedom of Information Acts. Governments and institutions need to provide tangible assistance for researchers facing such a burden.

Now here's where we come to a total disagreement of fact, not interpretation. The timeline that Willis Eschenbach posts shows that this group were proactively seeking to thwart FoI requests and other attempts to get them to show their code and data. This group of scientists have spent years deliberately obscuring their data sources and methods, and have relied on a lack of curiosity from peer reviewers and journal editors to get away with it. The fact that when they were finally called on this they still dragged their feet resulted in their inquirers seeking multiple different ways to get them to respond. Of course from these e-mails and documents it is also clear that one reason why they were so unwilling to let anyone see what they had done was that the internal details were a big mess! If governments and institutions insisted that researchers actually adhere to basic software quality standards, then there wouldn't be a problem with complying with FoI requests.

The e-mail theft also highlights how difficult it can be for climate researchers to follow the canons of scientific openness, which require them to make public the data on which they base their conclusions. This is best done via open online archives, such as the ones maintained by the IPCC (http://www.ipcc-data.org) and the US National Climatic Data Center (http://www.ncdc.noaa.gov/oa/ncdc.html).

Umm, I'm not at all sure that the researchers found it difficult. They just didn't do it, and they made no attempt to adopt methods that are standard in industry to help them do it. However that's water under the bridge now, and I agree that open archives are a good idea. Moreover it is key that the code also be archived so that it is possible to see how the raw data is processed to produce the results. Right now this is a mysterious black box process with no guarantee that the algorithm described in a publication is the one employed in the computer code that produces the results.

But for much crucial information the reality is very different. Researchers are barred from publicly releasing meteorological data from many countries owing to contractual restrictions. Moreover, in countries such as Germany, France and the United Kingdom, the national meteorological services will provide data sets only when researchers specifically request them, and only after a significant delay. The lack of standard formats can also make it hard to compare and integrate data from different sources. Every aspect of this situation needs to change: if the current episode does not spur meteorological services to improve researchers' ease of access, governments should force them to do so.

To put it bluntly, I don't believe the statement about the contractual restrictions. Last summer, while many of us Climate Audit readers filed relevant FoI requests with the UEA/CRU, we tried to figure out which nations would have restricted access to the raw historical meteorological data and for the most part concluded that there were essentially none. More to the point, as numerous folks pointed out then, if climate change is a global priority then surely national governments ought to be interested in getting people to show how bad it is, and thus would waive any confidentiality agreements from 20 years back. We skeptics were willing then (and I have no doubt still are) to contact each government in turn and get a document waiving confidentiality. For some reason the CRU not only refused to take us up on the offer, they also refused to tell us which countries they thought they had confidentiality agreements with.

The lack of standard formats is not an issue. The raw data just needs to be stored in some open location and a dozen or more programmers, myself likely one of them, will write homogenizing routines. Governments should not have to do anything except require their met offices to post historical data on a publicly available server.
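As a sketch of what I mean - with entirely hypothetical formats, since each met office's layout would differ - homogenizing is mostly just mapping each national layout onto one common record shape:

```python
# Two made-up national formats, both normalised to (station_id, year, month, temp_C).

def parse_fixed_width(line):
    # hypothetical fixed-width layout: cols 1-6 station, 7-10 year,
    # 11-13 month, 14-19 temperature
    return (line[0:6].strip(), int(line[6:10]), int(line[10:13]), float(line[13:19]))

def parse_csv(line):
    # hypothetical comma-separated layout with the same four fields
    sid, year, month, temp = line.split(",")
    return (sid.strip(), int(year), int(month), float(temp))

records = [
    parse_fixed_width("0123451990  1  -5.2"),
    parse_csv("012345,1990,2,-4.8"),
]
print(records)
# [('012345', 1990, 1, -5.2), ('012345', 1990, 2, -4.8)]
```

Once every source feeds the same record shape, the downstream gridding and averaging code never needs to know which country the data came from.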

The stolen e-mails have prompted queries about whether Nature will investigate some of the researchers' own papers. One e-mail talked of displaying the data using a 'trick' — slang for a clever (and legitimate) technique, but a word that denialists have used to accuse the researchers of fabricating their results. It is Nature's policy to investigate such matters if there are substantive reasons for concern, but nothing we have seen so far in the e-mails qualifies.

That's nice of Nature. I can think of a couple of papers where I might be a tad more curious. For example there is:

Jones, P.D., Groisman, P.Ya., Coughlan, M., Plummer, N., Wang, W-C. and Karl, T.R., 1990. “Assessment of urbanization effects in time series of surface air temperature over land.” Nature 347, 169-172.

which is, I believe, mentioned in this email. And some of the tree-ring papers where the "trick" was used to "hide the decline" might merit a little checking to see if the trick is mentioned. And whether the code to do it can be seen.

The UEA responded too slowly to the eruption of coverage in the media, but deserves credit for now being publicly supportive of the integrity of its scientists while also holding an independent investigation of its researchers' compliance with Britain's freedom of information requirements (see http://go.nature.com/zRBXRP).

The UEA seems to have let itself be guided by Jones & co. throughout the summer so that, instead of deciding then to invest a little in openness and transparency, it decided to help the CRU fort up. If it had opted for openness I suspect that there would have been far fewer FoI requests and almost certainly no leak.

In the end, what the UEA e-mails really show is that scientists are human beings — and that unrelenting opposition to their work can goad them to the limits of tolerance, and tempt them to act in ways that undermine scientific values. Yet it is precisely in such circumstances that researchers should strive to act and communicate professionally, and make their data and methods available to others, lest they provide their worst critics with ammunition. After all, the pressures the UEA e-mailers experienced may be nothing compared with what will emerge as the United States debates a climate bill next year, and denialists use every means at their disposal to undermine trust in scientists and science.

It seems to me that the scientists have largely been hoist by their own petard here. The vast majority of skeptics have written politely at first and only resorted to nasty words or FoI requests when the scientists have failed to respond helpfully. Moreover, by responding helpfully and honestly to the polite requests it would have been far simpler to justify ignoring the loons. However that wasn't what happened; instead the scientists have lumped all their critics together, which is why many of us skeptics no longer trust them. If they can't see the difference between someone asking how they reached an outcome and someone saying they are liars, then they are the ones with the problem.

And there is a fairly simple fix for the trust-in-science problem. It is called transparency and allowing external audit, precisely as people like Steve McIntyre have been suggesting. While I'm sure Nature, Science and the various scientists involved are going to deny it and not take him or anyone else up on our offers to help, I'm fairly sure that what will emerge from this is precisely the sort of open transparent system he's been campaigning for all along.

04 December 2009

20091204 - Friday Olive Tree Blogging

The harvest begins. So far we've picked about 35 kg of olives. We've delivered them to the mill to be added to the general stock and in return we'll get about 3.5 litres of oil back. Actually it takes about 8 kg of olives to make a litre of oil, but the mill keeps a portion as its price for milling them. I expect we'll pick some more olives tomorrow and end up delivering 55-60 kg of olives and thus getting some 5-6 litres of oil.

As always click on the image to see it enlarged, and don't forget to visit the olive tree blogging archives for further reminders of how nice olive trees are.

04 December 2009

Google Glimategate

There's been an oddity reported here and there (e.g. at Lucia's Rank Exploits) that Google isn't suggesting "climate gate" as a query when you type the first few letters in. It is also reported that arch rival Bing does.

This seems to be a partially true claim. At first I thought Google was doing terribly since it didn't suggest it at all; then I realized that I was by mistake at Google.fr (the dangers of living in France).
[Screenshot: google.fr does not suggest "climate gate"]
However, when I went to Google.com the suggestion did appear.
[Screenshot: google.com suggests "climate gate"]
On the other hand Bing is definitely keener to suggest climate gate to me. I only had to type in CL for the suggestion to pop up.
[Screenshot: Bing suggests "climate gate" very quickly]
I thought perhaps I should get a third opinion, so I went to Yahoo. It behaved (once I told it not to be French) like Google, but I had to type in a bit more.
[Screenshot: Yahoo is worse than Google for "climate gate"]
Well, this piqued my curiosity, so I wondered about "HARRY_READ_ME" as a search. Here Yahoo and Bing both did fairly well, though Bing was slightly better.
[Screenshots: Yahoo shows HARRY_READ_ME eventually too; Bing likes HARRY_READ_ME]
but Google, on the other hand, isn't interested in Harry at all, not even when you type the whole phrase in.
[Screenshot: Google doesn't suggest HARRY_READ_ME]
What does this mean? Probably nothing, other than that more people use Google and search for more things via it, but it is kind of odd. It also illustrates the same lack-of-transparency credibility problem as HADCRU and GISTEMP: since these recommendations are a black box, we have no idea how they come up with an answer and hence we may suspect nefarious conspiracy.

04 December 2009

A Skeptic not a Denier

It seems to me that there is a deliberate policy by some believers in (C)AGW* to lump all their critics into one box and label them as "Deniers". As I said yesterday,
the apparently deliberate conflation of climate change skepticism with neo-nazi holocaust denial by environmental activists is insulting to the memory of the six million plus dead, as well as being distinctly inaccurate. Not everyone who has read or commented on these [e-mails] is a "denialist". In fact I think the majority of informed comment has come from people who do not deny that the globe has warmed but rather are skeptical about the anthropogenic CO2 global warming hypothesis. Lumping together skeptical engineering and scientific folk with the lunatic fringe is not helpful; neither is pretending that skepticism does not exist on this issue.
I think it is fair to say that most of the leading bloggers making a noise about climate change are skeptics: Anthony Watts, Steve McIntyre, "Bishop Hill", The "Devil", Jeff Id and Lucia, to list the blogs I read frequently, are all loudly saying "show me", not "nah nah nah I can't hear you". I can't speak for them, but I too am a skeptic, a "lukewarmer" as it has been described. I think this passage from Roger "the Prat" Pielke Sr and others describes my position admirably:

“In addition to greenhouse gas emissions, other first-order human climate forcings are important to understanding the future behavior of Earth’s climate. These forcings are spatially heterogeneous and include the effect of aerosols on clouds and associated precipitation [e.g., Rosenfeld et al., 2008], the influence of aerosol deposition (e.g., black carbon (soot) [Flanner et al. 2007] and reactive nitrogen [Galloway et al., 2004]), and the role of changes in land use/land cover [e.g., Takata et al., 2009]. Among their effects is their role in altering atmospheric and ocean circulation features away from what they would be in the natural climate system [NRC, 2005]. As with CO2, the lengths of time that they affect the climate are estimated to be on multidecadal time scales and longer.

Therefore, the cost-benefit analyses regarding the mitigation of CO2 and other greenhouse gases need to be considered along with the other human climate forcings in a broader environmental context, as well as with respect to their role in the climate system.”

To put it another way, it is blindingly obvious that humans have some effect on the climate because we're turning what would otherwise be forests or steppes into fields, cities and (sometimes) deserts. We are also consuming terawatts of energy, which will eventually be radiated as heat; while the energy received from the sun is much greater (174 petawatts according to Wikipedia), our consumption is of a similar order of magnitude to the energy generated by the earth's interior. This must have some effect, and simple physics, not to mention various well documented Urban Heat Islands, suggests that it will result in the surface warming up slightly on average.
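
These orders of magnitude are easy to sanity-check. A quick back-of-envelope sketch follows; note that the ~17 TW figure for human primary power and the ~47 TW for geothermal heat flow are my own assumed round numbers for this era, not figures from the sources quoted here:

```python
# Back-of-envelope check of the energy figures discussed above.
# 174 PW of intercepted solar power is the Wikipedia figure; the
# ~17 TW human primary power and ~47 TW geothermal heat flow are
# assumed round numbers, not figures from the post.
solar_w = 174e15   # solar power intercepted by the Earth
human_w = 17e12    # human primary energy consumption (assumed)
geo_w = 47e12      # heat flow from the Earth's interior (assumed)

print(f"sun / humanity ratio: ~{solar_w / human_w:,.0f}")
print(f"humanity / geothermal ratio: {human_w / geo_w:.2f}")
```

The first ratio comes out around 10,000, and the second is a fraction of order one, which is all "similar order of magnitude" claims need.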

But this is about as far as I'm going. 174 petawatts is roughly 10,000 times more energy than humanity produces so absent some clear proof to the contrary I'm skeptical that humans are causing all or even most of the rise in temperatures over the last 50-100 years. The Pedant General in a guest post at the Devil's Kitchen put it very well and even had some nifty diagrams:

The various sorts of views on AGW
Let's start at the top, and bear with me.

Only if you can answer "yes" all the way down that chain can you get to Copenhagen.

Really you should read the lot because it puts both HADCRU and the tree-ring hockeysticks into context, but for this post that extract is all we need. The (C)AGW alarmists are all the way down at the bottom right with the Danish prostitutes. The outright deniers are in the pub on the top left. The rest of us are somewhere in the middle getting neither beer nor sex. Personally I'm somewhat schizophrenic in that I'm in both the "Find out" camp and the "Adapt" camp, because I think we really do need to find out whether (C)AGW is correct but I'm fairly sure that, no matter what we learn there, the best response is going to be to adapt to the changes rather than to try to stop them (although if AGW is proven then the geo-engineering ideas may well be a good "adapt" strategy).

A nice garden wall
However I suspect that if you put the global warming case to the majority of folks without chants of "ZOMG we're doomed!" in the background, they'd land in the skeptics' camp. Especially when they learn just how much lifestyle and income change they will have to embrace to satisfy the green zealots.

Since, thanks to Copenhagen and the various ETS etc. schemes being proposed by world leaders, the required sacrifice is becoming clear, it is surely no surprise that more and more people are saying the equivalent of "hang on a second, are you sure? Let me see your working." And that is why the CRU leak is so serious for the "alarmists": it exposes the fact that their working isn't quite as pristine as they want you to think.

This leads me to my "garden wall" metaphor. AGW is presented as being like the wall I've got pictured over on the left: nice spiffy paintwork, straight lines and so on. When you look at a wall like that you think it's well built, solid and so on. It is, basically, a wall you can rely on (and it has some nice green ivy to make it eco-friendly). Now the various exposés by Steve McIntyre regarding hockeysticks and the GISS temperature series do weaken your belief a little, so perhaps it's a bit like the wall to the right, still straight and nicely painted but with a whacking great crack in it.
Wall with a crack in it

Now you can look at this crack and think that it's a bit worrying. But on the other hand you then look at the rest of the wall and it's like the first wall photo (in fact attentive readers will note that there's a bit of overlap and that the two photos are cropped from the same wall) and decide that one crack is no big deal. Sure, the wall isn't quite as perfect as you'd prefer, but it's not a major problem.

Then there's the CRU leak and you discover how your nice solid wall with its straight lines and all has been built. And that's kind of like scraping off the plaster and paintwork and discovering this:
A wall with a decline that needs to be hidden
That is to say, a wall with a rather embarrassing dip in the middle that needs some concrete on top and plaster on the side to, err, "hide the decline".

It may well be that, just as this second wall is probably as good as the first when it comes to stopping curious passers-by peeking, the science behind AGW is going to turn out to be right but messy. But, to stretch the metaphor slightly, while it's OK knowing that a garden wall has been built a bit sloppily, it's not OK to have that same quality in the load-bearing wall of a four-storey building. The problem with (C)AGW is that its proponents have indeed built a massive great structure of carbon emissions trading, wind power etc. on top of the "human emissions of CO2 are unprecedented" foundation and, thanks to the CRU leak, it looks very much as if that foundation is about as sturdy as this second wall.


*(Catastrophic) Anthropogenic Global Warming

07 December 2009 Blog Home : All December 2009 Posts : Permalink

Prejudging a Book by its Cover

I don't buy many books in bookshops these days as most of the books I want are more readily available online and indeed I buy most of my reading material as ebooks anyway. However I found this description of book buyers literally judging books by their covers to be amusing and informative:

One thing that I've noticed happening more and more often in the store when people are browsing and chatting in front of the New Release Trade paperback shelf is that a customer will point at a specific book and say:

"Is this self-published?"


"Wow, there are a lot of self-published books here."

In fact, none of the books at which they're pointing are self-published.

I finally realized this weekend that the reason they're asking is because of the cover stock used on those specific trade paperbacks. If the trade paperback has a flat, glossy cover, they ignore the art, the type and the cover design. If it's flat gloss, with no foil, no embossing, no textures, they are now assuming that the book is self-published; they won't even pick it up.

The informative part is that it shows how "real" publishers might manage to find a new business model in a world where they have lost the distribution advantage. Essentially readers trust publishers to do the gatekeeping thing of producing only the good, well-written books and not the cruddy ones. And perhaps more specifically "good" as meaning "the ones this particular reader likes".

One can note that successful publishers such as Mills & Boon/Harlequin and Baen have very distinctive covers, and that both frequently see their covers mocked for being tacky etc. But that distinctiveness is definitely important because it allows the potential purchaser to concentrate on the books that are of interest to him/her and ignore the not-so-good ones.

We also note that for the most part small press and "self-published" works are considered to be of poorer quality than those printed by the bigger houses, and that readers are quite strict about filtering out the poor quality choices.

07 December 2009 Blog Home : All December 2009 Posts : Permalink

"Harry" gets a rewrite

The most important news on the CRU leak (aka Climategate) over the weekend was that the UK Met Office is not just opening up its data, it is also checking the processing and rewriting the code.

I strongly suspect that this only happened because of the leak, and that all the discussions about "HARRY_READ_ME" were influential in this decision. It will be very interesting to observe just how the process develops and whether they take on board the suggestions that many of us who've looked at the code have made about the choice of database and language.

The really good news is that we ought to be able to build some kind of third party hooks onto the database so as to identify the problem stations and so on. The one interesting thing I note is that we are not, as far as I can tell, seeing a release of the sea data, only the land data. I'm going to email them to verify whether this is the case because it would seem to be quite important for the global temperature.

08 December 2009 Blog Home : All December 2009 Posts : Permalink

Deconstructing Climategate

Climategate, as it now seems to be called, is possibly the first "-gate" scandal since the original Watergate to actually deserve the tag. But in many ways this has been a bit of a post-modern Watergate, with a lot of the labels not matching the actions of those labelled. Let's face it: the first "journalist" to really get it and complain about the lack of scientific method was that well known news-reporter comedian Jon Stewart.

Stewart, as in the case of the Acorn videos, seems able to elucidate the basic facts in a way that other journalists cannot. At least not on TV anyway.

Then yesterday Steve McIntyre was on CNN in a group discussion of the emails and we see the labels get even more confused.

In this case Steve comes across as the rational scientist while Michael Oppenheimer comes across as the frothing activist. Steve is, as he himself admits, merely an amateur climatologist, whereas Oppenheimer is an Ivy League professor and an editor of the IPCC reports on global warming. Given the difference you might expect Steve to be the one making the frothy denunciations of "tricks" and the blocking of access to data etc., while Oppenheimer provided the adult voice of reason explaining why all this was not really a problem. In fact Steve stuck, scientist-like, to the facts: the trick hid the decline/divergence in the tree-rings, and this is important because if there is a divergence now, how do we know whether or not there was one earlier? Oppenheimer, on the other hand, did the classic "reframing" trick and spoke about what he wanted to talk about instead of what the debate was (or should have been) about.

Interestingly, Oppenheimer mentioned the CRU temperature series, which for those following along at home without a crib sheet must have seemed like a total non-sequitur. Hopefully though it will make people do some research on the code and discover HARRY_READ_ME. In fact, listening to Oppenheimer, I think anyone slightly cynical about government and science, and as the Tea Parties have shown that's a lot of people, is going to have his BS meter go off repeatedly. For example, Oppenheimer said that China was going to reduce its CO2 emissions, which is only true in a Washington/Westminster sort of way in that what they have announced is a plan to reduce the rate of growth of their emissions. A few years ago he might have got away with it, but these days we're rather too well aware of what government "cuts" mean, and so I think he just comes across as a shifty liar.

This leads me on to the very excellent couple of posts by Lucia on how the folks at RealClimate are reframing the questions instead of answering them. As one of her commenters points out, what works OK verbally (or on TV) fails in print:

Reframing is a communication answer that works well in person. It's nothing new (although Lakoff did popularize it for people on the left), so when you go through media training the media specialists will teach you how to reframe. In a conversational situation the person asking the questions has the power. If you answer his question you live in his frame. There are several ways to "flip the script" (a pimp game). Most annoying is to answer a question with a question. Another is reframing. It works in person because most people are not quick enough to realize their question wasn't answered and most think it rude to say "answer my question".

Anyways, what works in person doesn't fare so well in text. Because one can see exactly what was said.

So flipping the script, reframing (run for the ice) are all old hat.

It seems to me that possibly one of the unanticipated consequences of the Internet is that the soundbite issue and verbal debating techniques may decrease in importance. It also occurs to me that Climate Scientists suffer from the same problem Chip Morningstar identified as facing the academic Lit Crit crowd nearly 20 years ago:

Contrast this situation with that of academia. Professors of Literature or History or Cultural Studies in their professional life find themselves communicating principally with other professors of Literature or History or Cultural Studies. They also, of course, communicate with students, but students don't really count. Graduate students are studying to be professors themselves and so are already part of the in-crowd. Undergraduate students rarely get a chance to close the feedback loop, especially at the so called "better schools" (I once spoke with a Harvard professor who told me that it is quite easy to get a Harvard undergraduate degree without ever once encountering a tenured member of the faculty inside a classroom; I don't know if this is actually true but it's a delightful piece of slander regardless). They publish in peer reviewed journals, which are not only edited by their peers but published for and mainly read by their peers (if they are read at all). Decisions about their career advancement, tenure, promotion, and so on are made by committees of their fellows. They are supervised by deans and other academic officials who themselves used to be professors of Literature or History or Cultural Studies. They rarely have any reason to talk to anybody but themselves -- occasionally a Professor of Literature will collaborate with a Professor of History, but in academic circles this sort of interdisciplinary work is still considered sufficiently daring and risqué as to be newsworthy.

What you have is rather like birds on the Galapagos islands -- an isolated population with unique selective pressures resulting in evolutionary divergence from the mainland population.

To update this to climate science you could perhaps replace history with paleontology or meteorology, and you would have to note that climate scientists do in fact talk to slightly more people because they also work with environmentalists. Unfortunately this doesn't help, because the environmentalists are generally not interested in knowing why or how, but just what. The result is that Climate Science has a communication problem not only with the general public but also with other scientists. Chip Morningstar's previous paragraph explains why statisticians and computer techies tend to be better at explaining the esoterica of their professions to outsiders:

The really telling factor that neither side of the debate seems to cotton to, however, is this: technical people like me work in a commercial environment. Every day I have to explain what I do to people who are different from me -- marketing people, technical writers, my boss, my investors, my customers -- none of whom belong to my profession or share my technical background or knowledge. As a consequence, I'm constantly forced to describe what I know in terms that other people can at least begin to understand. My success in my job depends to a large degree on my success in so communicating. At the very least, in order to remain employed I have to convince somebody else that what I'm doing is worth having them pay for it.

To get a good example of this, consider how Michael Mann explains (or rather, IMHO, fails to explain) his "hiding the decline" trick to CNN. This segment was apparently transmitted just before the Steve M & co. panel, and Steve M's explanation is far clearer even though he gives his live while Mann's came as part of an interview that was clearly cut and spliced, allowing him to be prepared for the question.

Oh, and there is one other problem that climate science faces with the Internet. Cynical Internetizens, unlike trusting environmental activists, tend to want to see the raw data and the intermediate steps. "Show us your working" is key, and climatologists clearly don't do this well; they react all huffily when people ask them to justify their choices of smoothing or statistical manipulation.

If you can't do that, and you don't even release the raw data, then we suspect a cover-up.

08 December 2009 Blog Home : All December 2009 Posts : Permalink

Darwin in the HADCRU Dataset

Over at WattsUpWithThat there is a fascinating post about the adjustments done to the raw data at Darwin, Australia.

I was curious to see whether the newly released HADCRU data also contained Darwin and, if so, how that version compared to the ones in that post. So I downloaded the data and unzipped it. Then I used John Graham-Cumming's handy google map (last page) to identify the station, checked that it was somewhere sensible - it is a rounding error away from Darwin Airport, which is promising - and did some data parsing.
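
For anyone wanting to repeat the exercise, the parsing step is simple enough to sketch. Note the caveats: the column layout below (a year followed by twelve monthly values in tenths of a degree, with -99 marking a missing month) is my assumption for illustration and should be checked against the Met Office's notes on the released files, and `annual_means` is a helper name I've made up:

```python
# Sketch of turning station rows into annual means. The format here
# (year + twelve monthly values in tenths of a degree C, -99 = missing)
# is an assumed layout for illustration only, not the documented one.
SAMPLE = """\
1941  292  291  289  267  247  228  224  238  265  288  294  296
1942  290  293  287  265  245  -99  222  240  263  286  292  295
"""

MISSING = -99

def annual_means(text):
    """Map year -> mean of the available monthly values, in degrees C."""
    means = {}
    for line in text.splitlines():
        fields = [int(x) for x in line.split()]
        year, months = fields[0], [m for m in fields[1:] if m != MISSING]
        if months:  # skip a year with no usable months at all
            means[year] = sum(months) / len(months) / 10.0  # tenths -> deg C
    return means

print(annual_means(SAMPLE))
```

Whether to average over the available months or drop incomplete years entirely is exactly the kind of choice that ought to be documented when producing a series like this.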

What I got is this graph:
HADCRU data for Darwin
which looks to me to be identical to the raw data in Willis' figure 7.
Darwin airport station GHCN - raw and adjusted
This is helpful as it tells us something about the level of "value add" in the HADCRU raw data - and the good news is that it looks like there isn't a great deal of adjustment - at least for GHCN stations - from the raw data.

I suspect that HADCRU and the Met Office are going to regret their data openness, because we'll be able to do all sorts of plotting with this and with the similar New Zealand data, and I expect that the graphs we get won't look anything like as warm as the ones they get.

28 December 2009 Blog Home : All December 2009 Posts : Permalink

Dark Ship Thieves - By Sarah A Hoyt

Darkship Thieves
Dark Ship Thieves is a fascinating story driven along by a wonderful and at times highly irritating heroine. In many ways this is a lighthearted "romp through the spaceways", but it isn't all surface gloss. There is, in fact, a solid skeleton underneath and plenty of meat on the bones, and the two combine to produce a highly attractive exterior, just the way that similar combinations make for pretty young ladies who cause even staid old men to drool like teenagers.

This metaphor is not quite as strained as it might be because the aforementioned wonderful but irritating heroine is clearly babelicious, with a remarkable propensity for losing her clothes. Indeed the book starts off with her fleeing nefarious pursuers clad only in a thigh-length silk nightie, and to the delight of heterosexual teenage boys everywhere this then gets ripped in about chapter 4.

But, just as our heroine is more than just a pretty face, this book is more than a fantasy for adolescent males. The book is set a few hundred years in the future, where humanity has made it into the solar system, but no further. Between now and then, though, humanity, and planet earth, have suffered quite a bit of pain and torment. By the time of the story, however, the planet is pretty peaceful and run by a number of despots - the "Good Men" - with abundant energy available via the genetically engineered powertrees which orbit the earth and turn sunlight into highly energetic fruit called powerpods. These powerpods are then harvested and brought to earth to be used to power civilization. Genetic engineering is one of the themes of this book: not so much the technical details but the ethics and the likely consequences of not thinking things through.

But genetic engineering is not the only serious theme to the book, there is also considerable thought given to a libertarian society, including the problems of one as well as the benefits, and somewhere in the backstory some thoughts on how creative sorts might avoid the high-tax statism that seems to be becoming the current global norm. These details are deftly woven into the story without infodumps or other clunkiness. The fact that there is this econopolitical detail means that the reader (well this one anyway) is willing to forgive the author for a certain amount of hand-waving regarding the various technologies in use. There's nothing wrong with the science particularly but the author uses the book to discuss the potential consequences of, say, genetically engineering humans rather than discussing how the GE takes place. A cynic might suggest that this is because the author isn't terribly technical - and might well be right - but if so then she is far from alone in that failing and it is not something that is critical to the book.

The key here is that Sarah Hoyt has created wonderful characters and a plausible plot with interesting twists and turns that engage the reader. Although the heroine, Athena Hera Sinistra, is somewhat reminiscent of a Heinlein heroine (e.g. Friday Baldwin), it seems to me that the closest fictional hero to Athena is Lois M Bujold's Miles Vorkosigan. Both characters have the same "full speed is the only speed" mentality and both clearly have the sort of magnetic personality that attracts followers. In fact there are a number of similarities including the fact that both were born and inhabit the very top echelons of their societies. One key difference is that Athena is not, however, handicapped by physical limitations the way Miles is.

Athena is not the only interesting character in the book. Her rescuer from her initial scantily dressed flight is rather less charismatic and slightly more law-abiding but just as interesting and the romantic tension between the two is quite gripping - especially since both seem determined to deny even to themselves that they would be even slightly interested in the other. I don't think it is too spoilerific to say that the two do finally admit their passion for each other and that this doesn't necessarily help matters. One of the weaknesses of previous Hoyt works is that villains and minor characters have tended to be rather cardboardy, with limited insights into their motivations. In this book there is more attention paid to both and that helps because their motivations and goals enrich the fabric of the tale and turn it into something more than a simple adventure story.

I am also pleased to say that this tale is complete in itself, it has a satisfying ending with most loose ends tied up so one is not left shouting "And then what?" when reaching the final page. However, just as with Bujold's Miles Vorkosigan books, you do want to see a sequel and discover what happens next to our intrepid heroine and her friends.

PS A full disclosure disclaimer: I am tuckerized in this book. I'm sure I would have liked the book anyway but it is possible this swayed me a bit.