The HARRY_READ_ME.txt file

Part 35h

So, now the luxury of a little experiment.. I merged the MCDW and CLIMAT 'spc' databases into
the existing one *separately*. Here were the results:

MCDW:
<BEGIN_QUOTE>
uealogin1[/cru/cruts/version_3_0/db/cld] ./newmergedb

WELCOME TO THE DATABASE UPDATER

Before we get started, an important question:
If you are merging an update - CLIMAT, MCDW, Australian - do
you want the quick and dirty approach? This will blindly match
on WMO codes alone, ignoring data/metadata checks, and making any
unmatched updates into new stations (metadata permitting)?

Enter 'B' for blind merging, or <ret>: B
Please enter the Master Database name: spc.0312221624.dtb
Please enter the Update Database name: spc.0711271420.dtb

Reading in both databases..
Master database stations: 2100
Update database stations: 1693

New master database: spc.0711271504.dtb

Update database stations: 1693
> Matched with Master stations: 867
(automatically: 867)
(by operator: 0)
> Added as new Master stations: 826
> Rejected: 0
<END_QUOTE>
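
(An aside, mostly for future-me: the 'blind' logic as the banner describes it boils down to
something like this - WMO code match or bust. Just my reading of the banner text, not
newmergedb's actual code; all names are made up.)

      subroutine blindmerge(nmast,mastwmo,nupd,updwmo,updok,
     *                      nmatch,nnew,nrej)
c     Match each update station to the master on WMO code alone.
c     Unmatched stations become new master stations if their
c     metadata is usable (updok), otherwise they are rejected.
      implicit none
      integer nmast,nupd,nmatch,nnew,nrej
      integer mastwmo(nmast),updwmo(nupd)
      logical updok(nupd)
      integer i,j
      logical found
      nmatch = 0
      nnew = 0
      nrej = 0
      do i = 1,nupd
        found = .false.
        do j = 1,nmast
          if (updwmo(i).eq.mastwmo(j)) found = .true.
        enddo
        if (found) then
          nmatch = nmatch + 1
        else if (updok(i)) then
          nnew = nnew + 1
        else
          nrej = nrej + 1
        endif
      enddo
      end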

CLIMAT:
<BEGIN_QUOTE>
Enter 'B' for blind merging, or <ret>: B
Please enter the Master Database name: spc.0312221624.dtb
Please enter the Update Database name: spc.0711271421.dtb

Reading in both databases..
Master database stations: 2100
Update database stations: 2020

98 reject(s) from update process 0711271505

New master database: spc.0711271505.dtb

Update database stations: 2020
> Matched with Master stations: 917
(automatically: 917)
(by operator: 0)
> Added as new Master stations: 1005
> Rejected: 98
Rejects file: spc.0711271421.dtb.rejected
<END_QUOTE>

So, as expected, a few of the CLIMAT stations couldn't be matched for metadata.. no worries.
What's interesting is that roughly the same proportion of stations matched existing ones in both
cases (867/1693 = 51% vs 917/2020 = 45%). Slightly better for MCDW though.

Now, as our updates only start in 2003, that means we've effectively lost between 826 and 1005 stations'
worth of pre-2003 data (the ones added as new). We can't be exact as we don't know the overlap between the
MCDW and the CLIMAT bulletins.. but we will have a better idea when I try the anomdtb experiment on the combined update.
First, add the CLIMAT update again, this time to the MCDW-updated database:

CLIMAT:
<BEGIN_QUOTE>
Enter 'B' for blind merging, or <ret>: B
Please enter the Master Database name: spc.0711271504.dtb
Please enter the Update Database name: spc.0711271421.dtb

Reading in both databases..
Master database stations: 2926
Update database stations: 2020

38 reject(s) from update process 0711271514

New master database: spc.0711271514.dtb

Update database stations: 2020
> Matched with Master stations: 1736
(automatically: 1736)
(by operator: 0)
> Added as new Master stations: 246
> Rejected: 38
Rejects file: spc.0711271421.dtb.rejected
<END_QUOTE>

Note several bits of good news! Firstly, rejects are down to 38 (60 having matched with MCDW stations).
That's not *that* good of course - those will be new and so 2003 onwards only. Similarly, (1005-246=)
759 CLIMAT bulletins matched MCDW ones; they will also be 2003 onwards only. In other words, there were
only (1736-759=) 977 updates to existing, pre-2003 stations. So.. yes I'm being sidetracked again.. I found and
downloaded ALL the MCDW bulletins, back to 1994!

<BEGIN_QUOTE>
uealogin1[/cru/cruts/version_3_0/incoming/MCDW] ./mcdw2cru

MCDW2CRU: Convert MCDW Bulletins to CRU Format

Enter the earliest MCDW file: ssm9409.fin
Enter the latest MCDW file (or <ret> for single files): ssm0708.fin

All Files Processed
tmp.0711271645.dtb: 2785 stations written *** SEE LATER RUNS ***
vap.0711271645.dtb: 2786 stations written *** SEE LATER RUNS ***
rdy.0711271645.dtb: 2781 stations written *** SEE LATER RUNS ***
pre.0711271645.dtb: 2791 stations written *** SEE LATER RUNS ***
sun.0711271645.dtb: 2184 stations written *** SEE LATER RUNS ***

Thanks for playing! Byeee!
<END_QUOTE>

Now I'm not planning to re-run all the previous parameters! Hell, they should have had the older data
in already! But for sun/cloud, this could help enormously. Here's the plan:

1. Merge the CLIMAT-sourced database into the new MCDW-sourced database.
2. Convert this modern sun hours database into a modern cloud percent database.
3. Add normals for 95-02.
4. Use the new program 'normshift.for' to calculate 95-02 normals from TS 2.10 CLD.
5. Calculate the difference between the TS 2.10 6190 normals and the above.
6. Modify the in-database normals (step 3) with the difference (step 5).
7. Carry on as before?

No.. this won't work. anomdtb.for calculates normals on the fly - it would have to know too much.

The next opportunity comes at the output from anomdtb - the normalised values in the *.txt files that
the IDL gridder reads. These are just files - one per month - with lists of coordinates and values, so
ideal to add normalised values to. Decided that this will be the process:

Modern SunH DB -> Hsh2cld.for -> Modern Cld% DB
Modern Cld% DB -> newprog.for -> 6190anomalies.txt

..meanwhile, as before..

Normal Cld% DB -> anomdtb.for -> 6190anomalies.txt

So we then just have to merge the two 6190 anomaly sets! Which could just be a concatenation.

Easy, then.. the only thing we need is the miraculous 'newprog.for'! With three days before delivery.

No, no, no - HANG ON. Let's not try and boil the ocean! How about:

1901-2002 Static, as published, leave well alone (or recalculate with better DTR).
2003-2006/7 Calc from modern SunH and use the suggested mods after gridding.

This is what was originally intended. But there will be problems:

1. MCDW only goes back to 2006, so what's the data density for 2003-2005? Should this also use synthetic
cloud from DTR? I guess yes.

2. No guarantee of continuity from 2002 to 2003. This could be the real stickler. Moving from one system
to the other - this is why it might be better to re-run 1901-2002 as well.

OKAY.. normshift.for now creates a gridded set of conversion data between whatever period you choose
and 1961-1990, such that it can be added to the gridded output of the process run with the 'false'
normalisation period.
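
(As I understand it, per cell x and month m, with hypothetical names:

  shift(x,m)        = clim_9502(x,m) - clim_6190(x,m)
  anom_vs_6190(x,m) = anom_vs_9502(x,m) + shift(x,m)

since value - clim_6190 = (value - clim_9502) + (clim_9502 - clim_6190). So the 'conversion
data' really is additive, not multiplicative.)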

So.. first, merge your bulletins:

Well FIRSTLY, you realise that your databases don't have normals lines, so you modify mcdw2cru.for and
climat2cru.for to optionally add them, then you re-run them on the bulletins, ending up with:

<BEGIN_QUOTE>
uealogin1[/cru/cruts/version_3_0/incoming/MCDW] ./mcdw2cru

MCDW2CRU: Convert MCDW Bulletins to CRU Format

Enter the earliest MCDW file: ssm9409.fin
Enter the latest MCDW file (or <ret> for single files): ssm0708.fin
Add a dummy normals line? (Y/N): Y

All Files Processed
tmp.0711272156.dtb: 2785 stations written
vap.0711272156.dtb: 2786 stations written
rdy.0711272156.dtb: 2781 stations written
pre.0711272156.dtb: 2791 stations written
sun.0711272156.dtb: 2184 stations written

Thanks for playing! Byeee!
<END_QUOTE>

<BEGIN_QUOTE>
uealogin1[/cru/cruts/version_3_0/incoming/CLIMAT] ./climat2cru

CLIMAT2CRU: Convert CLIMAT Bulletins to CRU Format

Enter the earliest CLIMAT file: climat_data_200301.txt
Enter the latest CLIMAT file (or <ret> for single file): climat_data_200707.txt
Add a dummy normals line? (Y/N): Y

All Files Processed
tmp.0711272219.dtb: 2881 stations written
vap.0711272219.dtb: 2870 stations written
rdy.0711272219.dtb: 2876 stations written
pre.0711272219.dtb: 2878 stations written
sun.0711272219.dtb: 2020 stations written
tmn.0711272219.dtb: 2800 stations written
tmx.0711272219.dtb: 2800 stations written

Thanks for playing! Byeee!
<END_QUOTE>

So.. NOW can I merge CLIMAT into MCDW?!

As expected, thank goodness:

<BEGIN_QUOTE>
uealogin1[/cru/cruts/version_3_0/incoming/merge_CLIMAT_into_MCDW] ./newmergedb

WELCOME TO THE DATABASE UPDATER

Before we get started, an important question:
If you are merging an update - CLIMAT, MCDW, Australian - do
you want the quick and dirty approach? This will blindly match
on WMO codes alone, ignoring data/metadata checks, and making any
unmatched updates into new stations (metadata permitting)?

Enter 'B' for blind merging, or <ret>: B
Please enter the Master Database name: sun.0711272156.dtb
Please enter the Update Database name: sun.0711272219.dtb

Reading in both databases..
Master database stations: 2184
Update database stations: 2020

Looking for WMO code matches..
28 reject(s) from update process 0711272225

Writing sun.0711272225.dtb

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

OUTPUT(S) WRITTEN

New master database: sun.0711272225.dtb

Update database stations: 2020
> Matched with Master stations: 1775
(automatically: 1775)
(by operator: 0)
> Added as new Master stations: 217
> Rejected: 28
Rejects file: sun.0711272219.dtb.rejected
<END_QUOTE>

Wahey! Lots of stations to play with!

So, next.. convert to cloud!

<BEGIN_QUOTE>
crua6[/cru/cruts/version_3_0/db/cld] ./Hsh2cld

Hsh2cld - Convert a Sun Hours database to a Cloud Percent one

Please enter the Sun Hours database: sun.0711272225.dtb
Data Factor detected: *1.000

Completed - 2401 stations converted.

Sun Percentage Database: spc.0711272230.dtb
Cloud Percentage Database: cld.0711272230.dtb
<END_QUOTE>
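
(For the record, 2401 is all of them: 2184 MCDW stations plus the 217 added from CLIMAT. And my
understanding of the conversion itself - check Hsh2cld.for for the real formula, the names here
are mine - is that sun hours become a percentage of possible daylight hours, with cloud percent
as the complement:

  spc(stn,m) = 100 * sunh(stn,m) / daylen(lat,m)
  cld(stn,m) = 100 - spc(stn,m)

which would also explain why it writes both an spc and a cld database.)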

So.. bated breath..

..and yay!

<BEGIN_QUOTE>
crua6[/cru/cruts/version_3_0/secondaries/cld] ./anomdtb

> ***** AnomDTB: converts .dtb to anom .txt for gridding *****

> Enter the suffix of the variable required:
.cld
> Select the .cts or .dtb file to load:
cld.0711272230.dtb

> Specify the start,end of the normals period:
1995,2002
> Specify the missing percentage permitted:
12.5
> Data required for a normal: 7
> Specify the no. of stdevs at which to reject data:
3
> Select outputs (1=.cts,2=.ann,3=.txt,4=.stn):
3
> Check for duplicate stns after anomalising? (0=no,>0=km range)
0
> Select the generic .txt file to save (yy.mm=auto):
cld.txt
> Select the first,last years AD to save:
1995,2007
> Operating...

/tmp_mnt/cru-auto/cruts/version_3_0/secondaries/cld/cld.0711272230.dts

> NORMALS           MEAN percent      STDEV percent
>   .dtb                0     0.0
>   .cts            83961    49.3      83961    49.3
> PROCESS        DECISION percent    %of-chk
>   no lat/lon         95     0.1        0.1
>   no normal       86174    50.6       50.7
>   out-of-range       28     0.0        0.0
>   accepted        83933    49.3
> Dumping years 1995-2007 to .txt files...
<END_QUOTE>

Well.. a 'qualified' yay.. only half got normals! But I don't like to raise the 'missing percentage'
limit to 25%: with only 8 values (1995-2002) to begin with, 12.5% already allows one missing year
(hence the 'Data required for a normal: 7' above), and 25% would mean accepting 'normals' built from just 6!!
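
The rule in code, roughly - a sketch of my understanding, not anomdtb's actual source; names
are made up:

      subroutine mknorm(vals,nyr,nreq,anorm,ok)
c     Build one monthly normal from nyr years of values (missing =
c     -9999), requiring at least nreq of them to be present. For the
c     run above: nyr=8 (1995-2002), nreq=7 (12.5% missing allowed).
      implicit none
      integer nyr,nreq
      real vals(nyr),anorm
      logical ok
      integer i,n
      real s
      s = 0.0
      n = 0
      do i = 1,nyr
        if (vals(i).gt.-9998.0) then
          s = s + vals(i)
          n = n + 1
        endif
      enddo
      ok = (n.ge.nreq)
      if (ok) then
        anorm = s/real(n)
      else
        anorm = -9999.0
      endif
      end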

The output files look OK.. between 400 and 600 values in each - not a lot really, but hey, better than
nowt. So onto the conversion data (must stop calling 'em factors - they're not multiplicative).

<BEGIN_QUOTE>
crua6[/cru/cruts/version_3_0/secondaries/cld] ./normshift

NORMSHIFT - Normals from any period

Please enter the source file: cru_ts_2_10.1901-2002.cld.grid
Enter the start year of this file: 1901
Enter the end year of this file: 2002
Enter the normal period start year: 1995
Enter the normal period end year: 2002
Enter the 3-character parameter: cld

Normals file will be: clim.9502.to.6190.grid.cld
<END_QUOTE>

So, erm.. now we need to create our synthetic cloud from DTR. Except that's the thing we CAN'T do, because
cal_cld_gts_tdm.pro needs those bloody coefficients (a.25.7190, etc) that went AWOL. Frustratingly, we
do have some of the outputs from the program (ie, a.25.01.7190.glo), but that's obviously no use.

So, erm. We need synthetic cloud for 2003-2007, or we won't have enough data to run with. And yes it's
taken me this long to realise that. Oh, bugger.

Had a detailed search around Mark New's old disk (still online thankfully). Found this:

<BEGIN_QUOTE>
crua6[/cru/mark1/markn/gts/cld/val] ls -l
total 7584
lrwxrwxrwx 1 f080 cru 25 Sep 12 2005 c1 -> /cru/u1/f080/isccp/c1_mon
-rw-r--r-- 1 f080 cru 1290 Mar 24 1998 cld_corr.j
-rw-r--r-- 1 f080 cru 938 Mar 17 1998 cld_scat.j
-rw-r----- 1 f080 cru 922584 Mar 24 1998 cru_hahn_corr.ps
-rw-r----- 1 f080 cru 922588 Mar 24 1998 cru_isccp_corr.ps
-rw-r----- 1 f080 cru 533 Mar 27 1998 cruobs_hahn_corr.j
-rw-r----- 1 f080 cru 868561 Mar 27 1998 cruobs_hahn_corr.ps
-rw-r--r-- 1 f080 cru 697 Mar 20 1998 dtr_corr.j
-rw-r----- 1 f080 cru 50 Mar 27 1998 foo
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1980
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1981
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1982
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1983
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1984
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1985
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1986
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1987
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1988
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1989
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1990
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1991
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1992
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1993
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1994
-rw-r----- 1 f080 cru 248832 Mar 27 1998 glo25.cld.1995
-rw-r----- 1 f080 cru 922592 Mar 24 1998 hahn_isccp_corr.ps
-rw-r----- 1 f080 cru 2378 Mar 24 1998 test.j
<END_QUOTE>

..which looks to me like the place where he calculated the coefficients. The *.j files are IDL 'Journal' files,
so can be run from within IDL. This was my first attempt:

<BEGIN_QUOTE>
IDL> .run cld_corr.j
% Compiled module: $MAIN$.
% Compiled module: RD25_GTS.
YEAR: 1981
% Compiled module: RDBIN.
% Compiled module: STRIP.
foo: Permission denied.
foo: Permission denied.
foo: Permission denied.
% OPENR: Error opening file. Unit: 99, File: /home/cru/f098/u1/hahn/hahn25.1981
No such file or directory
% Execution halted at: RDBIN 63 /cru/u2/f080/Idl/rdbin.pro
% RD25_GTS 11 /cru/u2/f080/Idl/rd25_gts.pro
% $MAIN$ 1 /tmp_mnt/cru-auto/mark1/f080/gts/cld/val/cld_corr.j
IDL>
<END_QUOTE>

I then had to chase around to find three sets of missing files.. to fulfil these five conditions:

if keyword_set(hgrid) eq 0 then rd25_gts,$
hgrid,'~/u1/hahn/hahn25.',1981,1991
if keyword_set(rgrid) eq 0 then rd25_gts,$
rgrid,'../glo_reg_25/glo.cld.',1981,1991
if keyword_set(hgrid2) eq 0 then rd25_gts,$
hgrid2,'~/u1/hahn/hahn25.',1983,1991
if keyword_set(igrid) eq 0 then rdisccp_gts,$
igrid,'c1/isccp.',1983,1991
if keyword_set(rgrid2) eq 0 then rd25_gts,$
rgrid2,'../glo_reg_25/glo.cld.',1983,1991

I managed to find the hahn25 files (on Mark's disk), and some likely-looking isccp files (also on Mark's disk).
But although there were plenty of files with 'glo', 'cld' and '25' in them, there were none matching the filename
construction above. However, as some of those were in the same directory - I'll take that chance!!

I did try, honestly. Very hard. I found all the files, and put them in directories. I made a local copy of the job
file, 'H_cld_corr.j', with the local directory refs in. Hell, I even precompiled the correct version of rdbin!

All for nothing, as usual. It runs quite happily, zipping through things, until:

% Compiled module: RDISCCP_GTS.
YEAR: 1983
% Compiled module: RDISCCP.
c1/isccp.83.07.72
c1/isccp.83.07.72.Z: No such file or directory
c1/isccp.83.08.72
c1/isccp.83.08.72.Z: No such file or directory
c1/isccp.83.09.72
c1/isccp.83.09.72.Z: No such file or directory
c1/isccp.83.10.72
c1/isccp.83.10.72.Z: No such file or directory
c1/isccp.83.11.72
c1/isccp.83.11.72.Z: No such file or directory
c1/isccp.83.12.72
c1/isccp.83.12.72.Z: No such file or directory
YEAR: 1984
c1/isccp.84.01.72
c1/isccp.84.01.72.Z: No such file or directory
(etc)

It isn't seeing the isccp files EVEN THOUGH THEY ARE THERE. Odd. If I create Z files it says they aren't compressed.

It ends with:

YEAR: 1991
yes
filesize= 248832
gridsize= 2.50000
% Compiled module: MARK_CORRELATE.
% Compiled module: CORRELATE.

I have no idea what it's actually done though. It doesn't appear to have produced anything.. ah:


IDL> help
% At $MAIN$ 1 /tmp_mnt/cru-auto/cruts/version_3_0/cloud_synthetics/H_cld_corr.j
CRU_HAHN_CORR FLOAT = Array[144, 72, 12]
CRU_ISCCP_CORR FLOAT = Array[144, 72, 12]
HGRID LONG = Array[144, 72, 12, 11]
HGRID2 LONG = Array[144, 72, 12, 9]
IGRID LONG = Array[144, 72, 12, 9]
ILAT INT = 72
ILON INT = 144
IM INT = 12
ISCCP_HAHN_CORR FLOAT = Array[144, 72, 12]
N LONG = Array[5225]
NN LONG = 5225
RGRID LONG = Array[144, 72, 12, 11]
RGRID2 LONG = Array[144, 72, 12, 9]
Compiled Procedures:
$MAIN$ DEFXYZ RD25_GTS RDBIN RDISCCP RDISCCP_GTS

Compiled Functions:
CORRELATE MARK_CORRELATE STRIP

IDL>

..so this is one of a set of tools *that you have to know how to use*. All the work's done in the IDL data space.

Well as we don't have any instructions, that's a complete waste of two-and-a-half days' time.

Let's forget about CLD and start worrying about NetCDF.


NETCDF

Well now, we have to make the data available in NetCDF and ASCII grid formats. At the moment, it might be best to
just post-process the final ASCII grids into NetCDF; though it would be more elegant to have mergegrids.for produce
both, as it has the data there anyway.. so I modified mergegrids.for into makegrids.for, with added NetCDF goodness.
As usual, lots of problems getting the syntax right..

************************************************************************
BADC Work.. at RAL, 3-5 December 2007

Finally got NetCDF & Fortran working on the chosen server here (damp.badc.rl.ac.uk). I am definitely not a
chamaeleonic life form when it comes to unfamiliar computer systems. Shame. The elusive command line compile
statement is:

gfortran -I/usr/local/netcdf-3.6.2/include/ -o fileout.o filein.f /usr/local/netcdf-3.6.2/lib/libnetcdf.a

Hunting for CDDs (correlation decay distances) I found a potential problem with binary DTR (used in the
construction of Frost Days, Vapour Pressure and, eventually, Cloud). It looks as though there was a mistyping
when the 2.5-degree binaries were constructed:

IDL> quick_interp_tdm2,1901,2006,'dtrbin/dtrbin',50,gs=2.5,dumpbin='dumpbin',pts_prefix='dtrtxt/dtr.'

That '50' (the CDD, in km) should have been.. 750! Oh bugger. Well, might as well see if regeneration works here. DTR/bin/2.5:

..er.. hang on while I try and get IDL to recognise a path.. meh. As usual I find this effectively
impossible, so have to issue manual .compile statements. The suite of progs required to compile for
quick_interp_tdm2.pro is:

glimit.pro
area_grid.pro
strip.pro
wrbin.pro

..and, of course, quick_interp_tdm2.pro. Actually, others need others.. so I wrote a
generic IDL script to load all of Satan's little helpers:

IDL> @../../programs/idl/loads4idl.j
% Compiled module: GLIMIT.
% Compiled module: AREA_GRID.
% Compiled module: STRIP.
% Compiled module: WRBIN.
% Compiled module: RDBIN.
% Compiled module: DEFXYZ.
% Compiled module: FRSCAL.
% Compiled module: DAYS.
% Compiled module: RNGE.
% Compiled module: SAVEGLO.
% Compiled module: SELECTMODEL.
% Compiled module: TVAP.
% Compiled module: ESAT.
IDL>

This is just because I still don't have IDL_PATH working, so having to issue each of the above as manual
compile statements (in that order) was getting tedious. n00b. [this now fixed - ed] Anyway, here's the
corrected binary DTR production:

IDL> quick_interp_tdm2,1901,2006,'dtrbin/dtrbin',750,gs=2.5,dumpbin='dumpbin',pts_prefix='dtrtxt/dtr.'
Defaults set
1901
% Compiled module: MAP_SET.
% Compiled module: CROSSP.
% Compiled module: MEAN.
% Compiled module: MOMENT.
% Compiled module: STDDEV.
grid 1901 non-zero 0.9415 1.8771 1.8417 cells= 5608
1902
grid 1902 non-zero 0.8608 1.8713 1.8752 cells= 5569
(etc)

And so to regenerate FRS:

<BEGIN_QUOTE>
IDL> frs_gts,dtr_prefix='dtrbin/dtrbin',tmp_prefix='tmpbin/tmpbin',1901,2006,outprefix='frssyn/frssyn'
IDL> quick_interp_tdm2,1901,2006,'frsgrid/frsgrid',750,gs=0.5,dumpglo='dumpglo', nostn=1,synth_prefix='frssyn/frssyn'
-bash-3.00$ ./glo2abs
Welcome! This is the GLO2ABS program.
I will create a set of absolute grids from
a set of anomaly grids (in .glo format), also
a gridded version of the climatology.
Enter the path and name of the normals file: clim.6190.lan.frs
Enter a name for the gridded climatology file: clim.6190.lan.frs.delme
Enter the path and stem of the .glo files: frsgrid/frs.
Enter the starting year: 1901
Enter the ending year: 2006
Enter the path (if any) for the output files: frsabs/
Now, CONCENTRATE. Addition or Percentage (A/P)? A
Do you wish to limit the output values? (Y/N): Y
1. Set minimum to zero
2. Set a single minimum and maximum
3. Set monthly minima and maxima (for wet/rd0)
4. Set all values >0, (ie, positive)
Choose: 3
Right, erm.. off I jolly well go!
frs.01.1901.glo
frs.02.1901.glo
(etc)
<END_QUOTE>

Now looking to get makegrids.for working.. managed to get the data to write by declaring REALs as
DOUBLE PRECISION - later realising I could/should have changed the NetCDF interface calls to REAL
instead! Ah well. Still tussling with the 'time' variable.. not clear how to handle observations.
Luckily, Mike S knew what the standard was:

<BEGIN_QUOTE>
>I need to define the time parameter in the NetCDF version of the CRUTS
>dataset. I suspect I need to use 'Gregorian' (which to all intents and
>porpoises is accurate although it reverts to Julian before xx/yy/1582) but
>I wondered if there was a convention (in CRU, or wider) for allocating
>standard timestamps to observations?

For the gridded temperature we use

short time(time) ;
time:units = "months since 1870-1-1" ;

Remember to start with zero!

Mike
<END_QUOTE>
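
Sanity check on those units, counting months from zero at Jan 1870: Jan 1901 is
(1901-1870)*12 = 372 and Dec 2006 is (2006-1870)*12 + 11 = 1643 - exactly the range that
turns up in the ncdump further down. Here's a toy version of the relevant bits (a sketch
with made-up names, not makegrids.for itself), which also shows the REAL/DOUBLE trap: the
nf_put_var_* flavour has to match the declared Fortran type.

      program timesketch
c     Toy sketch of the CRU TS time axis plus one float variable,
c     using the netCDF-3 F77 interface. Statuses left unchecked.
      implicit none
      include 'netcdf.inc'
      integer ncid,tdim,tvar,fvar,st,iy,im,k,dimids(1)
      integer styr,enyr
      parameter (styr=1901,enyr=1902)
      integer months((enyr-styr+1)*12)
      real    frs((enyr-styr+1)*12)
      st = nf_create('sketch.nc',NF_CLOBBER,ncid)
      st = nf_def_dim(ncid,'time',(enyr-styr+1)*12,tdim)
      dimids(1) = tdim
      st = nf_def_var(ncid,'time',NF_INT,1,dimids,tvar)
      st = nf_put_att_text(ncid,tvar,'units',21,
     *                     'months since 1870-1-1')
      st = nf_def_var(ncid,'frs',NF_FLOAT,1,dimids,fvar)
      st = nf_enddef(ncid)
c     'Remember to start with zero!' - Jan 1870 is month 0.
      k = 0
      do iy = styr,enyr
        do im = 0,11
          k = k + 1
          months(k) = (iy-1870)*12 + im
          frs(k) = -999.0
        enddo
      enddo
      st = nf_put_var_int(ncid,tvar,months)
c     frs is REAL, so it must be nf_put_var_real; passing a DOUBLE
c     PRECISION array here (or using nf_put_var_double with a REAL
c     array) is exactly the type mismatch I tripped over.
      st = nf_put_var_real(ncid,fvar,frs)
      st = nf_close(ncid)
      end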

And that seems to now be working! Here's the run with the compile statement included:

<BEGIN_QUOTE>
-bash-3.00$ gfortran -I/usr/local/netcdf-3.6.2/include/ -o makegrids ../../programs/fortran/makegrids.for /usr/local/netcdf-3.6.2/lib/libnetcdf.a
-bash-3.00$ ./makegrids
Welcome! This is the MAKEGRIDS program.
I will create decadal and full gridded files,
in both ASCII text and NetCDF formats, from
the output files of (eg) glo2abs.for.

Writing: cru_ts_3_00.1901.1910.frs.dat
cru_ts_3_00.1901.1910.frs.nc
Writing: cru_ts_3_00.1911.1920.frs.dat
cru_ts_3_00.1911.1920.frs.nc
Writing: cru_ts_3_00.1921.1930.frs.dat
cru_ts_3_00.1921.1930.frs.nc
Writing: cru_ts_3_00.1931.1940.frs.dat
cru_ts_3_00.1931.1940.frs.nc
Writing: cru_ts_3_00.1941.1950.frs.dat
cru_ts_3_00.1941.1950.frs.nc
Writing: cru_ts_3_00.1951.1960.frs.dat
cru_ts_3_00.1951.1960.frs.nc
Writing: cru_ts_3_00.1961.1970.frs.dat
cru_ts_3_00.1961.1970.frs.nc
Writing: cru_ts_3_00.1971.1980.frs.dat
cru_ts_3_00.1971.1980.frs.nc
Writing: cru_ts_3_00.1981.1990.frs.dat
cru_ts_3_00.1981.1990.frs.nc
Writing: cru_ts_3_00.1991.2000.frs.dat
cru_ts_3_00.1991.2000.frs.nc
Writing: cru_ts_3_00.2001.2006.frs.dat
cru_ts_3_00.2001.2006.frs.nc
-bash-3.00$
<END_QUOTE>

And here, for a combination of posterity and boredom, is a (curtailed) dump from ncdump:

<BEGIN_QUOTE>
-bash-3.00$ ncdump cru_ts_3_00.1901.2006.frs.nc |head -300
netcdf cru_ts_3_00.1901.2006.frs {
dimensions:
lon = 720 ;
lat = 360 ;
time = UNLIMITED ; // (1272 currently)
variables:
double lon(lon) ;
lon:long_name = "longitude" ;
lon:units = "degrees_east" ;
double lat(lat) ;
lat:long_name = "latitude" ;
lat:units = "degrees_north" ;
int time(time) ;
time:long_name = "time" ;
time:units = "months since 1870-1-1" ;
time:calendar = "standard" ;
double frs(time, lat, lon) ;
frs:long_name = "ground frost frequency" ;
frs:units = "days" ;
frs:scale_factor = 0.00999999977648258 ;
frs:correlation_decay_distance = 750. ;
frs:_FillValue = -9999. ;
frs:missing_value = -9999. ;

// global attributes:
:title = "CRU TS 3.00 Mean Temperature" ;
:institution = "BADC" ;
:contact = "BADC <[email protected]>" ;
data:

lon = -179.75, -179.25, -178.75, -178.25, -177.75, -177.25, -176.75,
-176.25, -175.75, -175.25,
(etc)
170.75, 171.25, 171.75, 172.25, 172.75,
173.25, 173.75, 174.25, 174.75, 175.25, 175.75, 176.25, 176.75, 177.25,
177.75, 178.25, 178.75, 179.25, 179.75 ;

lat = -89.75, -89.25, -88.75, -88.25, -87.75, -87.25, -86.75, -86.25,
-85.75, -85.25, -84.75,
(etc)
79.75, 80.25, 80.75, 81.25, 81.75,
82.25, 82.75, 83.25, 83.75, 84.25, 84.75, 85.25, 85.75, 86.25, 86.75,
87.25, 87.75, 88.25, 88.75, 89.25, 89.75 ;

time = 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385,
386, 387, 388, 389, 390,
(etc)
1620, 1621, 1622, 1623, 1624, 1625,
1626, 1627, 1628, 1629, 1630, 1631, 1632, 1633, 1634, 1635, 1636, 1637,
1638, 1639, 1640, 1641, 1642, 1643 ;

frs =
-999, -999, -999, -999, -999, -999, -999, -999, -999, -999, -999, -999,
(etc - probably some real data there somewhere)

-bash-3.00$
<END_QUOTE>

And VAP:
<BEGIN_QUOTE>
IDL> vap_gts_anom,dtr_prefix='dtrbin/dtrbin',tmp_prefix='tmpbin/tmpbin',1901,2006,outprefix='vapsyn/vapsyn.',dumpbin=1
% Compiled module: VAP_GTS_ANOM.
% Compiled module: RDBIN.
% Compiled module: STRIP.
% Compiled module: DEFXYZ.
Land,sea: 56016 68400
Calculating tmn normal
% Compiled module: TVAP.
Calculating synthetic vap normal
% Compiled module: ESAT.
Calculating synthetic anomalies
% Compiled module: MOMENT.
1901 vap (x,s2,<<,>>): 1.67770e-05 6.23626e-06 -0.160509 0.222689
1902 vap (x,s2,<<,>>): -0.000122533 3.46933e-05 -0.268891 0.0644855
(etc)
<END_QUOTE>

These numbers are different from the original runs - so that was a genuine mistyping. Eek, that's not
very promising, is it?

<BEGIN_QUOTE>
IDL> quick_interp_tdm2,1901,2006,'vapglo/vap.',1000,gs=0.5,dumpglo='dumpglo',synth_prefix='vapsyn/vapsyn.',pts_prefix='vaptxt/vap.'
% Compiled module: QUICK_INTERP_TDM2.
% Compiled module: GLIMIT.
Defaults set
1901
% Compiled module: RDBIN.
% Compiled module: STRIP.
% Compiled module: DEFXYZ.
% Compiled module: MAP_SET.
% Compiled module: CROSSP.
% Compiled module: SAVEGLO.
% Compiled module: SELECTMODEL.
1902
(etc)

-bash-3.00$ ./glo2abs
Welcome! This is the GLO2ABS program.
I will create a set of absolute grids from
a set of anomaly grids (in .glo format), also
a gridded version of the climatology.
Enter the path and name of the normals file: clim.6190.lan.vap
Enter a name for the gridded climatology file: clim.6190.lan.vap.delme
Enter the path and stem of the .glo files: vapglo/vap.
Enter the starting year: 1901
Enter the ending year: 2006
Enter the path (if any) for the output files: vapabs/
Now, CONCENTRATE. Addition or Percentage (A/P)? A
Do you wish to limit the output values? (Y/N): Y
1. Set minimum to zero
2. Set a single minimum and maximum
3. Set monthly minima and maxima (for wet/rd0)
4. Set all values >0, (ie, positive)
Choose: 4
Right, erm.. off I jolly well go!
vap.01.1901.glo
vap.02.1901.glo
(etc)

-bash-3.00$ ./makegrids
Welcome! This is the MAKEGRIDS program.
I will create decadal and full gridded files,
in both ASCII text and NetCDF formats, from
the output files of (eg) glo2abs.for.

Enter a gridfile with YYYY for year and MM for month: vapabs/vap.MM.YYYY.glo.abs
Enter Start Year: 1901
Enter Start Month: 01
Enter End Year: 2006
Enter End Month: 12

Please enter a sample OUTPUT filename, replacing
start year with SSSS and end year with EEEE, and
ending with '.dat', eg: cru_ts_3_00.SSSS.EEEE.tmp.dat : cru_ts_3_00.SSSS.EEEE.vap.dat

Now please enter the 3-ch parameter code: vap
Enter a generic title for this dataset, eg:
CRU TS 3.00 Mean Temperature : CRU TS 3.00 Vapour Pressure
Writing: cru_ts_3_00.1901.1910.vap.dat
cru_ts_3_00.1901.1910.vap.nc
Writing: cru_ts_3_00.1911.1920.vap.dat
cru_ts_3_00.1911.1920.vap.nc
Writing: cru_ts_3_00.1921.1930.vap.dat
cru_ts_3_00.1921.1930.vap.nc
Writing: cru_ts_3_00.1931.1940.vap.dat
cru_ts_3_00.1931.1940.vap.nc
Writing: cru_ts_3_00.1941.1950.vap.dat
cru_ts_3_00.1941.1950.vap.nc
Writing: cru_ts_3_00.1951.1960.vap.dat
cru_ts_3_00.1951.1960.vap.nc
Writing: cru_ts_3_00.1961.1970.vap.dat
cru_ts_3_00.1961.1970.vap.nc
Writing: cru_ts_3_00.1971.1980.vap.dat
cru_ts_3_00.1971.1980.vap.nc
Writing: cru_ts_3_00.1981.1990.vap.dat
cru_ts_3_00.1981.1990.vap.nc
Writing: cru_ts_3_00.1991.2000.vap.dat
cru_ts_3_00.1991.2000.vap.nc
Writing: cru_ts_3_00.2001.2006.vap.dat
cru_ts_3_00.2001.2006.vap.nc
-bash-3.00$
<END_QUOTE>

A quick look at the VAP NetCDF headers & data looked good. So - yay, that's the damage repaired, pity it
took over a day of the time at RAL. Not that I *had* to fix it now - but it was an opportunity to get the
process working in this environment.

Next problem - station counts. I had this working fine at CRU - here it's hanging indefinitely at
January 1957. Discovered - after 36 hours of fretting and debugging - that it's popping its clogs
on the South Polar base:

890090 -900 0 2853 AMUNDSEN-SCOTT ANTARCTICA 1957 2006 101957 -999.00

And what d'you know, when I debug it, it's as simple as being too close to the pole and not having
any loop restrictions in the East and West hunts for valid cells.. just looping forever! Added
a few simple conditionals and all seems to run.. but the outputs don't look right: the Jan 1957 station
counts have missing values for the polar regions.
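
The shape of the fix, for the record - a sketch of the kind of conditional I mean, not the
actual station-counts code: bound the east/west hunt to one full circuit of the grid, wrapping
at the dateline, so a row with no valid cells can't spin forever.

      integer function huntE(grid,nlon,nlat,ir,ic)
c     Hunt eastwards from column ic of row ir for the first valid
c     (non-missing) cell, wrapping at the grid edge. Gives up after
c     one full circuit and returns -1 - the loop restriction whose
c     absence let Amundsen-Scott hang the run.
      implicit none
      integer nlon,nlat,ir,ic,j,steps
      real grid(nlon,nlat)
      j = ic
      do steps = 1,nlon
        j = j + 1
        if (j.gt.nlon) j = 1
        if (grid(j,ir).gt.-9998.0) then
          huntE = j
          return
        endif
      enddo
      huntE = -1
      end

Which would presumably also explain the new symptom: where the whole hunt fails - i.e. right at
the pole - you now get a missing value instead of a hang, hence the holes in the Jan 1957 counts.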

Managed to get anomdtb compiled with gfortran, after altering a few lines (in anomdtb and its mods)
where Tim had shrugged off the surly bounds of strict F90.. it must be compiled in programs/fortran/
though, with the line (embedded in the anomdtb comments too):

gfortran -o anomdtb filenames.f90 time.f90 grimfiles.f90 crutsfiles.f90 loadperfiles.f90 \
         saveperfiles.f90 annfiles.f90 cetgeneral.f90 basicfun.f90 wmokey.f90 \
         gridops.f90 grid.f90 ctyfiles.f90 anomdtb.f90

As part of the modifications I removed the unused options - meaning that a .dts file is no longer
required (and, of course, neither is 'falsedts.for'). Ran it for the temperature database and got
apparently-identical anomaly files to the ones I generated in CRU :-))) Ran quick_interp_tdm2,
glo2abs and makegrids, ended up with grids very similar (though sadly not identical) to the
originals (could be IDL, could be compiler, could be system).
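
'Very similar' being, so far, an eyeball judgement - here's a throwaway checker to put a number
on it. Filenames are hypothetical, and it assumes the ASCII grids are whitespace-separated with
720 values per row (packed fixed-width output would need the real format spec):

      program gridiff
c     Report the largest absolute difference between two ASCII
c     grid files holding the same values in the same order.
      implicit none
      integer nlon
      parameter (nlon=720)
      real a(nlon),b(nlon),dmax
      integer i,nrow,ios1,ios2
      open(1,file='cru_ts_3_00.1901.1910.tmp.dat.cru')
      open(2,file='cru_ts_3_00.1901.1910.tmp.dat.badc')
      dmax = 0.0
      nrow = 0
10    read(1,*,iostat=ios1) (a(i),i=1,nlon)
      read(2,*,iostat=ios2) (b(i),i=1,nlon)
      if (ios1.ne.0 .or. ios2.ne.0) goto 20
      nrow = nrow + 1
      do i = 1,nlon
        if (abs(a(i)-b(i)).gt.dmax) dmax = abs(a(i)-b(i))
      enddo
      goto 10
20    write(*,*) nrow,' rows compared, max abs diff =',dmax
      end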

Scripting. Now this was always going to be the challenge, for a large suite of highly-interactive
programs in F77, F90 and IDL which didn't follow universal file naming conventions. So to start with,
I thought it might be 'fun' to compile an exhaustive/ing list of the commands needed to make a
complete update. Headings begin with *, notes are in brackets; a first stab at actually driving one
of the interactive ones is sketched after the list.

* Add MCDW Updates
mcdw2cru (interactive)
newmergedb (per parameter, interactive)
* Add CLIMAT Updates
climat2cru (interactive)
newmergedb (per parameter, interactive)
* Add BOM Updates
au2cru (unfinished, interactive, should do whole job)
* Regenerate DTR Database
tmnx2dtr (interactive)
* Produce Primary Parameters (TMP, TMN, TMX, DTR, PRE)
anomdtb (per parameter, interactive)
quick_interp_tdm2 (per parameter)
glo2abs (per parameter, interactive)
makegrids (per parameter, interactive)
* Prepare Binary Grids (TMP, DTR, PRE) for Synthetics
quick_interp_tdm2 (per parameter)
* Produce Secondary Parameter (FRS, uses TMP,DTR)
frs_gts_tdm
quick_interp_tdm2
glo2abs (interactive)
makegrids (interactive)
* Produce Secondary Parameter (VAP, uses TMP,DTR)
vap_gts_anom
anomdtb (interactive)
quick_interp_tdm2
glo2abs (interactive)
makegrids (interactive)
* Produce Secondary Parameter (WET/RD0, uses PRE)
rd0_gts_anom
anomdtb (interactive)
quick_interp_tdm2
glo2abs (interactive)
makegrids (interactive)


