[MITgcm-support] inefficient pressure solver

Constantinos Evangelinos ce107 at ocean.mit.edu
Wed Jul 15 00:19:02 EDT 2009


On Tuesday 14 July 2009 23:02:52 David Wang wrote:
> Hi Martin et al.,
>
> I switched on the flag GLOBAL_SUM_SEND_RECV in CPP_EEOPTIONS.h, and
> re-compiled the model (with ifort11, openmpi1.3.2, the opt file very
> similar to linux_amd64_ifort+mpi_beagle). It became worse and way
> slower. I actually didn't get the statistics because I set a pretty small
> walltime limit (15 min, enough for my previous 4-node test runs), and the
> job got killed. Is there anything else I can tweak?

GLOBAL_SUM_SEND_RECV is expected to be slower: it forgoes 
MPI_Reduce/MPI_Allreduce in favor of a sequence of Send/Recv calls that always 
enforces the same summation order (and can therefore produce bit-reproducible 
results).
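To see why a fixed summation order matters, here is a small Python sketch (the values are made up for illustration) showing that floating-point addition is not associative, so the combination order a reduction tree happens to use can change the last bits of a global sum, while a fixed rank-ordered accumulation always reproduces the same answer:

```python
# Floating-point addition is not associative: summing the same partial
# sums in a different order can give a (slightly) different result.
# Illustrative per-rank partial sums:
vals = [0.1, 0.2, 0.3, 1e16, -1e16, 0.4]

# Fixed left-to-right order, as enforced by a rank-ordered Send/Recv loop:
left_to_right = 0.0
for v in vals:
    left_to_right += v

# A different combination order, e.g. a binary reduction tree pairing
# neighboring ranks first:
tree = ((vals[0] + vals[1]) + (vals[2] + vals[3])) + (vals[4] + vals[5])

# The two orders disagree for these values, which is why an optimized
# MPI_Allreduce (whose internal order may vary with process count and
# placement) cannot guarantee bit-reproducible sums.
print(left_to_right == tree)
```

This is the trade-off behind the flag: reproducibility is bought by serializing the sum, which is why it runs slower.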

I suggest using 
mpiP http://mpip.sourceforge.net/ 
or 
ipm http://ipm-hpc.sourceforge.net/
to get a better idea of exactly where your slowdown comes from. They are not 
too difficult to set up (on both your cluster and on Ranger, if they are not 
already installed) and should allow you to make a comparison. 

Also, with openmpi have you been using task affinity?
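For the OpenMPI 1.3 series, binding ranks to cores can be requested via an MCA parameter; this is a sketch of a typical invocation (the rank count and executable name are placeholders, and newer Open MPI releases use `--bind-to core` instead):

```shell
# Pin each MPI rank to its own core so ranks do not migrate between
# cores/sockets mid-run (OpenMPI 1.3-era syntax).
mpirun -np 16 --mca mpi_paffinity_alone 1 ./mitgcmuv
```

Without affinity, rank migration between sockets can badly skew timings on multi-socket nodes.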

Constantinos
-- 
Dr. Constantinos Evangelinos
Department of Earth, Atmospheric and Planetary Sciences
Massachusetts Institute of Technology

More information about the MITgcm-support mailing list