[MITgcm-support] ifort optimization puzzle

Martin Losch mlosch at awi-bremerhaven.de
Tue Mar 8 02:47:38 EST 2005


Together with Michael Schodlok I have set up a 1D experiment: one column 
of water (nx=ny=1, overlap=2, nr=30, dz=30*10). We specify zero net 
surface heat flux (qnet = 0) but -50 W/m^2 shortwave heat flux (warming 
due to the sun), in order to drive KPP. This all worked very nicely and 
we were happy; the mean temperature was constant, as it should be (why 
didn't we stop there?). Then I suggested moving to a faster machine 
(from a $%^&* SUN to a nice linux-box with ifort). We used the standard 
ifort build-options file with paths to netcdf appended. But suddenly 
the system was losing heat at 1200 W/m^2. Turning off optimization (-O0) 
or using g77 gave the old results (with constant mean temperature). We 
clearly have an optimization problem here. How serious do you think 
this is (I admit that we are using a somewhat pathological 
configuration)?
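For what it's worth, here is a minimal illustration (in Python, not the model code) of the class of problem: floating-point addition is not associative, so an optimizer that reassociates or reorders sums at higher -O levels can legitimately change the result of an accumulation like a heat budget.

```python
# Floating-point addition is not associative: reordering a sum, which
# aggressive optimization is allowed to do, can change the result.
a, b, c = 1.0, 1e16, -1e16

left_to_right = (a + b) + c   # 1.0 is absorbed into 1e16, then cancelled
reassociated = a + (b + c)    # b and c cancel first, so 1.0 survives

print(left_to_right)  # 0.0
print(reassociated)   # 1.0
```

In a long model integration, many such reorderings can accumulate into a visible drift, which is why comparing -O0 against g77 is a sensible way to isolate the compiler.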


PS. In sea.mit.edu:/data4/mlosch/test1d.tgz I have a case similar to 
our original one, but with different initial conditions for T and S; 
here the mean temperature decreases by 0.5 degC in 10 days, 
corresponding to approximately 690 W/m^2 heat loss.
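As a quick sanity check on that number, a sketch of the conversion from temperature drift to surface heat flux, assuming MITgcm-like defaults (rho ~ 999.8 kg/m^3, cp = 3994 J/(kg K)) and a 300 m column (nr=30 levels of 10 m):

```python
# Convert the observed mean-temperature drift into an equivalent
# surface heat flux: Q = rho * cp * dT * H / dt.
rho = 999.8        # kg/m^3, reference density (assumed default)
cp = 3994.0        # J/(kg K), seawater heat capacity (assumed default)
depth = 30 * 10.0  # m, nr=30 levels of dz=10 m
dT = 0.5           # K, mean temperature drop over the run
dt = 10 * 86400.0  # s, 10 days

q = rho * cp * dT * depth / dt  # W/m^2 lost through the surface
print(round(q))  # ~693 W/m^2, consistent with the quoted ~690 W/m^2
```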
build_g77 has the Makefile for g77 (result in run00/mnc_0001)
build_o0 for ifort -O0 (result in run00/mnc_0002)
build_o1 for ifort -O1 (gives almost the same results as -O3; result 
also in run00)
data files etc. are in run00
I included the build option files I used, but I manipulated the 
makefiles afterwards to change the optimization level for ifort.
