[MITgcm-support] Optimizing the 2D pressure solver
Van Thinh Nguyen
vtnguyen at moisie2.math.uwaterloo.ca
Mon Sep 24 11:56:56 EDT 2007
Hi all,
Regarding your discussion on scaling the code, I thought I should share
some information about what I did; you may be interested in this.
I ran scaling tests with a 4000*500*60 domain for an internal wave
simulation (free surface, non-hydrostatic) on shared- and distributed-memory
platforms, as follows:
shared:      Altix 4700, Itanium2 1.6 GHz CPUs, NUMA interconnect
distributed: HP cluster, 2x Opteron 2.6 GHz CPUs, Quadrics Elan4 interconnect
Here are some results, comparing the runtime per timestep (timestep = 2 s):
Number of CPUs:        64          80          128
shared             50 s/step   30 s/step   190 s/step
distributed        40 s/step   38 s/step    25 s/step
I used:
cg2dMaxIters=1000,
cg2dTargetResidual=1.E-13,
cg3dMaxIters=400,
cg3dTargetResidual=1.E-13
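For reference, those solver settings go in the PARM02 namelist of the
model's runtime input file "data"; a minimal sketch with only the entries
above (everything else omitted):

 &PARM02
 cg2dMaxIters=1000,
 cg2dTargetResidual=1.E-13,
 cg3dMaxIters=400,
 cg3dTargetResidual=1.E-13,
 &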
Matt is right: it doesn't make sense to use too many CPUs for a domain
that isn't large enough; the communication across the interior overlap
cells makes the code run worse.
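On Michael's question below: yes, the tile and overlap sizes are set in
code/SIZE.h. As an illustrative sketch only (not necessarily my actual
layout, and the overlap of 3 cells is an assumption), a 4000*500*60 grid
on 64 CPUs could be decomposed like this:

C     sNx, sNy : tile size;       OLx, OLy : overlap width
C     nSx, nSy : tiles/process;   nPx, nPy : processes per dimension
C     250*16 = 4000 points in x, 125*4 = 500 points in y, 16*4 = 64 CPUs
      INTEGER sNx, sNy, OLx, OLy, nSx, nSy, nPx, nPy, Nx, Ny, Nr
      PARAMETER (
     &           sNx =  250,
     &           sNy =  125,
     &           OLx =    3,
     &           OLy =    3,
     &           nSx =    1,
     &           nSy =    1,
     &           nPx =   16,
     &           nPy =    4,
     &           Nx  = sNx*nSx*nPx,
     &           Ny  = sNy*nSy*nPy,
     &           Nr  =   60)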
Van Thinh
----------------------------------------------------------
On Sat, 22 Sep 2007, Michael Schaferkotter wrote:
> greetings;
>
> i looked over the preceding threads for this topic and found no mention of tile sizes or overlap size.
> aren't these specified in code/SIZE.h?
>
> so how is the domain being decomposed?
>
> michael
>