[MITgcm-support] single v. multiple processor
Dimitris Menemenlis
menemenlis at sbcglobal.net
Tue Dec 18 09:44:00 EST 2007
Also:
1. Take a look at the horizontal pattern of the differences. If they occur mostly at tile edges, this can be an indication that your overlap regions, OLx & OLy, are too small.
2. Try increasing cg2dMaxIters and decreasing cg2dTargetResidual.
3. If you are using the lsr solver, try decreasing lsr_error. This parameter sets the accuracy of the solver at the tile edges.
Both 2 and 3 will decrease the differences between single- and multi-tile domains, but at the expense of higher computational cost. A sketch of where these parameters live follows below.
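For orientation only (a sketch, not taken from your setup, and the values are placeholders): the overlap widths are compile-time constants in SIZE.h, while the solver settings are runtime namelist parameters. The lsr_error parameter is assumed here to be LSR_ERROR in data.seaice of pkg/seaice, so please check the exact names against your code version.

  SIZE.h (compile-time; requires rebuilding):
     &           OLx =   4,
     &           OLy =   4,

  data, &PARM02 (runtime):
     cg2dMaxIters=1000,
     cg2dTargetResidual=1.E-13,

  data.seaice, &SEAICE_PARM01 (only if the LSR sea-ice solver is used):
     LSR_ERROR=1.E-12,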
Dimitris Menemenlis
cell: 818-625-6498
-----Original Message-----
From: Martin Losch <Martin.Losch at awi.de>
Subj: Re: [MITgcm-support] single v. multiple processor
Date: Tue Dec 18, 2007 6:00 am
Size: 3K
To: mitgcm-support at mitgcm.org
Hi Ian,
if you do just a few time steps (<10), then the difference between
1 CPU and >1 CPU should be at working precision (different summation
order in global sums being the main cause, as far as I know).
Depending on your system (nonlinearity), these small differences can
evolve into something more complicated and significant. 0.05 deg after
15 weeks is not so surprising (to me). It also gives you a lower bound
on the model error, basically a measure of the numerical round-off
error. Numerical modelling is not always "deterministic".
Check: if you have very similar results for a few time steps, then
you should be OK.
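As a minimal, self-contained sketch of what "different summation order"
means (plain Python, nothing MITgcm-specific; the numbers are arbitrary):

  # Floating-point addition is not associative, so accumulating a global
  # sum tile-by-tile (many CPUs) vs. in one sweep (one CPU) can change
  # the last bits of the result.
  a, b, c = 0.1, 0.2, 0.3
  print((a + b) + c)   # 0.6000000000000001
  print(a + (b + c))   # 0.6

The two results differ by about 1e-16, i.e. working precision; in a
nonlinear model such differences can grow over many time steps.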
Martin
On 18 Dec 2007, at 14:44, Ian G. Fenty wrote:
> Hello,
> Can somebody suggest any reasons why output from a single processor
> gcm compilation would differ from the output from a multiple
> processor compilation? I'm getting differences of order 0.05
> degrees after about 15 weeks. Here is my "data" file.
> &PARM01
> tRef= 24.0 , 23.0 , 22.0 , 21.0 , 20.0 ,
> 19.0 , 18.0 , 17.0 , 16.0 , 15.0 ,
> 14.0 , 13.0 , 12.0 , 11.0 , 10.0 ,
> 9.0 , 8.0 , 7.0 , 6.0, 5.0 ,
> 4.0 , 3.0 , 2.0 ,
> sRef= 35.65, 35.65 ,35.65, 35.65, 35.65,
> 35.65, 35.65 ,35.65, 35.65, 35.65,
> 35.65, 35.65 ,35.65, 35.65, 35.65,
> 35.65, 35.65 ,35.65, 35.65, 35.65,
> 35.65, 35.65 ,35.65,
> no_slip_sides=.false.,
> no_slip_bottom=.true.,
> viscAz =1.E-3,
> viscA4Grid = 0.10,
> viscAHGrid = 0.10,
> diffKhT=5.E1,
> diffKzT=1.E-5,
> diffKhS=5.E1 ,
> diffKzS=1.E-5,
> tempAdvScheme=30,
> saltAdvScheme=30,
> inADExact=.true.,
> StaggerTimeStep=.true.,
> allowFreezing=.false.,
> multiDimAdvection=.false.
> beta=1.E-11,
> tAlpha=2.E-4,
> sBeta =7.4E-4,
> gravity=9.81,
> gBaro=9.81,
> rigidLid=.FALSE.,
> implicitFreeSurface=.true.,
> eosType='JMD95Z',
> readBinaryPrec=32,
> writeBinaryPrec=32,
> tempStepping=.TRUE.,
> momStepping=.true.,
> saltStepping=.TRUE.,
> implicitDiffusion=.true.,
> implicitViscosity=.true.,
> globalFiles=.FALSE.,
> useSingleCpuIO=.true.,
> useCDscheme=.FALSE.,
> exactConserv=.true.,
>
> debugLevel=0,
> vectorInvariantMomentum=.TRUE.,
> useRealFreshWaterFlux = .false.,
> /
> &PARM02
> cg2dMaxIters=15000,
> cg2dTargetResidual=1.E-20,
> /
> &PARM03
> cAdjFreq = -1,
> nIter0 = 0,
> nTimeSteps = 8784,
> forcing_In_AB = .FALSE.,
> deltaTmom = 3600.0,
> deltaTtracer= 3600.0,
> deltaTClock = 3600.0,
> abEps=0.1,
> /
> &PARM04
> usingCartesianGrid=.FALSE.,
> usingSphericalPolarGrid=.FALSE.,
> usingCurvilinearGrid=.TRUE.,
> delZ=10.,10.,15.,20.,20.,25.,35.,50.,75.,100.,150.,200.,275.,350.,
>       415.,450.,500.,500.,500.,500.,500.,500.,500.,
> &
> /
> &PARM05
> bathyFile='ib_open_40.bin',
> hydrogSaltFile = 'salt_init.ecco_19920901T060000_19920817T000000',
> hydrogThetaFile = 'theta_init.ecco_19920901T060000_19920817T000000',
> /
>
>
_______________________________________________
MITgcm-support mailing list
MITgcm-support at mitgcm.org
http://mitgcm.org/mailman/listinfo/mitgcm-support