[MITgcm-support] Optimizing the 2D pressure solver
Jinlun Zhang
zhang at apl.washington.edu
Tue Sep 25 13:32:42 EDT 2007
I wonder if the 2D pressure equation could be solved following the LSR
(line successive relaxation) approach used in the sea ice dynamics model.
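
For what it's worth, here is a minimal sketch of the idea: line successive
relaxation on a model 2D Poisson problem with a five-point stencil. The
stencil, the relaxation factor, and the names below are illustrative
assumptions only, not the sea ice model's actual discretization.

import numpy as np
from scipy.linalg import solve_banded

def lsr_sweep(p, rhs, dx, dy, omega=1.5):
    # One LSR sweep: each interior row is solved implicitly in x as a
    # tridiagonal system, with the y-neighbours taken from the latest
    # available values, then over-relaxed by omega.
    ny, nx = p.shape
    ax, ay = 1.0 / dx**2, 1.0 / dy**2
    for j in range(1, ny - 1):
        n = nx - 2
        ab = np.zeros((3, n))           # banded storage for the row's matrix
        ab[0, 1:] = ax                  # super-diagonal
        ab[1, :] = -2.0 * (ax + ay)     # main diagonal
        ab[2, :-1] = ax                 # sub-diagonal
        b = rhs[j, 1:-1] - ay * (p[j - 1, 1:-1] + p[j + 1, 1:-1])
        b[0] -= ax * p[j, 0]            # fold the x-boundary values into b
        b[-1] -= ax * p[j, -1]
        p_line = solve_banded((1, 1), ab, b)
        p[j, 1:-1] = (1.0 - omega) * p[j, 1:-1] + omega * p_line
    return p

# Tiny usage example: repeated sweeps on a 64 x 64 grid with p = 0 boundaries.
p, rhs = np.zeros((64, 64)), np.ones((64, 64))
for _ in range(200):
    p = lsr_sweep(p, rhs, dx=1.0, dy=1.0)

The appeal at large processor counts is that the sweep itself needs no
global sums (only the convergence check would), at the cost of a slower
convergence rate than preconditioned CG and the need to handle lines that
cross tile boundaries.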
Jinlun
Christopher L. Wolfe wrote:
>
> Hello modelers,
>
> I've found that a major barrier to scaling the MITgcm to large
> numbers (>1000) of processors is the 2D pressure solver (I run in
> hydrostatic mode), which consumes between 30% and 70% of the runtime,
> depending on the machine and the number of processors. The overhead
> of the pressure solver gets worse as the number of processors
> increases, which is not surprising given that the pressure inversion
> is the most nonlocal procedure in the time-stepping cycle.
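
As an aside, a back-of-envelope estimate of the reduction latency alone
shows why this gets worse with processor count; every number below is an
assumed, illustrative figure, not a measurement.

# Every number here is an assumed, illustrative value, not a measurement.
allreduce_latency = 50e-6          # s per global sum at ~1000 ranks (assumed)
sums_per_iter     = 3              # global sums per CG iteration (see below)
iters_per_step    = 300            # "hundreds of iterations per time step"
steps_per_day     = 86400 / 900    # assumed 900 s model time step

latency = allreduce_latency * sums_per_iter * iters_per_step * steps_per_day
print(f"~{latency:.1f} s of pure reduction latency per model day")
# ~4.3 s per model day before any halo exchange or arithmetic, and this
# fixed cost does not shrink as more processors are added.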
>
> The current version of the 2D conjugate gradient solver performs 2
> exchanges and 3 global sums per iteration, which adds up quickly
> since the solver runs hundreds of iterations per time step. It seems
> likely that communication overhead is dominating the time-to-solution
> of the pressure solver. Someone apparently had the idea to reduce
> communication overhead by including a flag (cg2dChkResFreq) that
> allows the CG residual to be checked every N iterations instead of
> every iteration. However, this flag in fact does nothing and the
> variable cg2dChkResFreq is not even referenced in cg2d.F.
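
To make that communication pattern concrete, here is a serial sketch of a
preconditioned CG iteration with the points where a parallel solver would
need a halo exchange or a global sum marked in comments. The operator A,
the preconditioner M_inv, and the deferred residual check are assumptions
about how such a solver is typically structured, not a transcription of
cg2d.F.

import numpy as np

def pcg(A, M_inv, b, x, tol, max_iters, check_freq=1):
    # A and M_inv are callables applying the operator and preconditioner.
    r = b - A(x)                      # operator apply: needs a halo exchange
    z = M_inv(r)
    p = z.copy()
    rz = np.dot(r, z)                 # global sum
    for k in range(max_iters):
        Ap = A(p)                     # halo exchange, once per iteration
        alpha = rz / np.dot(p, Ap)    # global sum
        x += alpha * p
        r -= alpha * Ap
        # Checking the residual norm only every check_freq iterations drops
        # one of the reductions on all the intervening iterations.
        if (k + 1) % check_freq == 0 and np.sqrt(np.dot(r, r)) < tol:  # global sum
            break
        z = M_inv(r)
        rz_new = np.dot(r, z)         # global sum (carried to the next pass)
        beta = rz_new / rz
        rz, p = rz_new, z + beta * p
    return x

In that form the natural savings at high processor counts are to defer the
norm check and, more aggressively, to rearrange the recurrences (as in the
pipelined or Chronopoulos-Gear CG variants) so that the remaining dot
products can share a single reduction.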
>
> Before I start mucking around with the internals of the CG solver, I
> was hoping that someone more knowledgeable than I might have some
> straightforward pointers on how to optimize the 2D pressure solver
> for large numbers of processors.
>
> For reference, my domain is 1792 x 448 x 20, and I have
>
> rigidLid=.FALSE.,
> implicitFreeSurface=.TRUE.,
>
> and my PARM02 block contains
>
> cg2dMaxIters=400,
> cg2dTargetResidual=1.5E-7,
>
> I'm currently running in an all-MPI mode. I haven't tried running in
> mixed OpenMP/MPI mode, but I'm willing to try if anyone thinks it'll
> improve the solver performance. I suspect the gains from going this
> route will be modest, since several of the machines I run on have
> only two processors per node.
>
> Thanks in advance for any advice,
> Christopher
>
> -----------------------------------------------------------
> Dr. Christopher L. Wolfe 858-534-4560
> Physical Oceanography Research Division OAR 357
> Scripps Institution of Oceanography, UCSD clwolfe at ucsd.edu
> -----------------------------------------------------------
>
> _______________________________________________
> MITgcm-support mailing list
> MITgcm-support at mitgcm.org
> http://mitgcm.org/mailman/listinfo/mitgcm-support
--
Jinlun Zhang
Polar Science Center, Applied Physics Laboratory
University of Washington, 1013 NE 40th St, Seattle, WA 98105-6698
Phone: (206)-543-5569; Fax: (206)-616-3142
zhang at apl.washington.edu
http://psc.apl.washington.edu/pscweb2002/Staff/zhang/zhang.html