[MITgcm-devel] lab_sea sensitivity between platforms/compilers

Martin Losch Martin.Losch at awi.de
Fri Jul 17 09:37:51 EDT 2009


Hi there,

while going through the EVP code again for obvious reasons, I was able to
isolate at least some of the sensitivity of the lab_sea experiment:

In the EVP code there is a line
             deltaCreg = SQRT(MAX(deltaC,SEAICE_EPS_SQ))
If you replace this line with
             deltaCreg = SQRT(deltaC)/(SQRT(deltaC)+SEAICE_EPS)
the number of matching digits (for cg2d) in the lab_sea/input experiment
increases from 5 to 12, between a
Linux csysl18 2.6.22.19-0.2-bigsmp #1 SMP 2008-12-18 10:17:03 +0100 i686 i686 i386; gfortran (gcc version 4.2.1)
and a
Darwin csysm15.dmawi.de 9.7.0 Darwin Kernel Version 9.7.0: Tue Mar 31 22:52:17 PDT 2009; gfortran (gcc version 4.3.1)
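
To make the comparison concrete, here is a small standalone sketch (not
MITgcm code) that simply tabulates the two expressions over a range of
deltaC; SEAICE_EPS = 1.D-10 is only an assumed illustrative value, not
necessarily what lab_sea uses:

      PROGRAM REG_COMPARE
C     Sketch only: tabulate the hard cutoff
C     SQRT(MAX(deltaC,SEAICE_EPS_SQ)) against the smooth form
C     SQRT(deltaC)/(SQRT(deltaC)+SEAICE_EPS) over many decades of
C     deltaC, to see where the MAX branch switches.  SEAICE_EPS is
C     set to an assumed illustrative value of 1.D-10.
      IMPLICIT NONE
      REAL*8 SEAICE_EPS, SEAICE_EPS_SQ, deltaC, hardReg, smoothReg
      INTEGER k
      SEAICE_EPS    = 1.D-10
      SEAICE_EPS_SQ = SEAICE_EPS*SEAICE_EPS
      DO k = -30, 0, 2
       deltaC    = 10.D0**k
       hardReg   = SQRT(MAX(deltaC,SEAICE_EPS_SQ))
       smoothReg = SQRT(deltaC)/(SQRT(deltaC)+SEAICE_EPS)
       WRITE(*,'(1P,3E16.8)') deltaC, hardReg, smoothReg
      ENDDO
      END

The MAX form switches branch at deltaC = SEAICE_EPS_SQ, whereas the
smooth form has no threshold at all.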

Both expressions give about the same results for large DeltaC, but I
would still like to find out what the correct replacement for
SQRT(MAX(deltaC,SEAICE_EPS_SQ)) would be. Here's what did *NOT* work
(see the note after the list):
             deltaCreg = max(sqrt(deltaC),seaice_eps)
             deltaCreg = sqrt(dmax1(deltaC,seaice_eps_sq))
             deltaCreg = SQRT(MAX(DBLE(deltaC),DBLE(SEAICE_EPS_SQ)))
             deltaCreg = SQRT(MAX(SEAICE_EPS_SQ,deltaC))
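
For what it's worth, all four of these are mathematically the same
function as the original cutoff: SQRT is monotone (so
MAX(SQRT(x),eps) = SQRT(MAX(x,eps**2))), MAX is symmetric in its
arguments, DMAX1 is just the REAL*8-specific name for MAX, and DBLE is
a no-op on arguments that are already REAL*8.  That is presumably why
none of them changes anything.  A quick standalone check (again with an
assumed SEAICE_EPS = 1.D-10) should print columns that agree to
rounding:

      PROGRAM REG_VARIANTS
C     Sketch only: compare the original cutoff (r0) with the four
C     rewrites that did not help (r1-r4).  SEAICE_EPS = 1.D-10 is an
C     assumed illustrative value.
      IMPLICIT NONE
      REAL*8 SEAICE_EPS, SEAICE_EPS_SQ, deltaC, r0, r1, r2, r3, r4
      INTEGER k
      SEAICE_EPS    = 1.D-10
      SEAICE_EPS_SQ = SEAICE_EPS*SEAICE_EPS
      DO k = -25, -15
       deltaC = 10.D0**k
       r0 = SQRT(MAX(deltaC,SEAICE_EPS_SQ))
       r1 = MAX(SQRT(deltaC),SEAICE_EPS)
       r2 = SQRT(DMAX1(deltaC,SEAICE_EPS_SQ))
       r3 = SQRT(MAX(DBLE(deltaC),DBLE(SEAICE_EPS_SQ)))
       r4 = SQRT(MAX(SEAICE_EPS_SQ,deltaC))
       WRITE(*,'(1P,6E13.5)') deltaC, r0, r1, r2, r3, r4
      ENDDO
      END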

Any idea what might be going on? If I cannot solve this problem, I would
suggest replacing this expression with the more stable one, also in
seaice_calc_viscosities (although that will change all results).

Martin


