[MITgcm-devel] SERIOUS performance problems with latest code

Dimitris Menemenlis menemenlis at sbcglobal.net
Tue Oct 31 10:51:12 EST 2006


Patrick, one thing that has changed is exf_interp.F: I have reintroduced
singleCPUIO, which had been inadvertently turned off when we tested the
threaded code on Columbia.

This fixes the file-not-found problem, and apparently it also speeds up the
code for the 216-processor cubed-sphere configuration:

Section (times in seconds)              Wall      User      System
## Parallel I/O
"EXF_GETFORCING [LOAD_FLDS_DRIVER]":    57.194    26.895    20.511
## singleCPUIO
"EXF_GETFORCING [LOAD_FLDS_DRIVER]":    12.722    11.203     0.326

But there is no guarantee that this will result in a speed-up for all
configurations or on all platforms.
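
If you want to compare the two I/O paths yourself, singleCPUIO is toggled
at run time through the useSingleCpuIO flag in the PARM01 namelist of the
"data" file; a minimal sketch, with all other run-time parameters elided:

 &PARM01
# ... other PARM01 parameters ...
 useSingleCpuIO=.TRUE.,
 &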

Another change, suggested by Baylor, wraps the ready-to-receive handshake
in a new CPP flag; two excerpts are shown below:

#ifndef DISABLE_MPI_READY_TO_RECEIVE
C            gathering process: tell process npe that we are
C            ready to receive its data
             CALL MPI_SEND (ready_to_receive, 1, MPI_INTEGER,
      &           npe, itag, MPI_COMM_MODEL, ierr)
#endif

#ifndef DISABLE_MPI_READY_TO_RECEIVE
C         sending process: wait for the ready-to-receive token
C         from idest before sending data
          CALL MPI_RECV (ready_to_receive, 1, MPI_INTEGER,
      &        idest, itag, MPI_COMM_MODEL, istatus, ierr)
#endif

This was done in all the gather* routines.  But the flag is off by default,
so it should not affect your performance, and so far tests on Columbia are
inconclusive.  Enabling it seems to increase memory requirements quite a bit
(presumably because, without the handshake, MPI buffers the sends eagerly),
which could affect other parts of the code through cache effects.
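
If you do want to experiment with it, the handshake is compiled out by
defining the flag before the gather* routines are preprocessed.  A minimal
sketch; whether you put the define in a header such as CPP_EEOPTIONS.h or
pass it as a compiler option is up to your build setup:

#define DISABLE_MPI_READY_TO_RECEIVE

or, equivalently, add -DDISABLE_MPI_READY_TO_RECEIVE to the CPP flags of
your build.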

One thing to keep in mind is that there is quite a bit of variation in
performance (I have seen up to a factor of 3 at times), depending on which
Columbia nodes and disk subsystems you get and on how heavily Columbia is
being used at the time of your test.

Dimitris


