[Mitgcm-support] Re: MPI with ifc broken ?
mitgcm-support at dev.mitgcm.org
Wed Jul 9 15:40:26 EDT 2003
Hi again,
> It seems that something is broken in the code
> for MPI with ifc (Linux, like the myrinet-3 cluster)
> between checkpoint48e (still working) and checkpoint48f:
> no output (all STDOUT + STDERR are empty) and the
> error message is (on 2 cpus):
Finally, it turns out to be related to the modifications made for
scatter_2d.F and gather_2d.F: the problem is in
ini_procs.F (eesup/src) and is due to the changes between revisions 1.14 and 1.15:
> C-- To speed-up mpi gather and scatter routines, myXGlobalLo
> C and myYGlobalLo from each process are transferred to
> C a common block array. This allows process 0 to know
> C the location of the domains controlled by each process.
> DO npe = 0, numberOfProcs-1
> CALL MPI_SEND (myXGlobalLo, 1, MPI_INTEGER,
> & npe, mpiMyId, MPI_COMM_MODEL, ierr)
> ENDDO
> DO npe = 0, numberOfProcs-1
> CALL MPI_RECV (itemp, 1, MPI_INTEGER,
> & npe, npe, MPI_COMM_MODEL, istatus, ierr)
> mpi_myXGlobalLo(npe+1) = itemp
> ENDDO
> DO npe = 0, numberOfProcs-1
> CALL MPI_SEND (myYGlobalLo, 1, MPI_INTEGER,
> & npe, mpiMyId, MPI_COMM_MODEL, ierr)
> ENDDO
> DO npe = 0, numberOfProcs-1
> CALL MPI_RECV (itemp, 1, MPI_INTEGER,
> & npe, npe, MPI_COMM_MODEL, istatus, ierr)
> mpi_myYGlobalLo(npe+1) = itemp
> ENDDO
When I comment out those lines, it works fine.
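This failure pattern is consistent with a deadlock: in the quoted loops, every process posts blocking MPI_SEND calls to all ranks before any matching MPI_RECV is posted, so completion depends entirely on the MPI implementation buffering the messages (which ifc/Myrinet MPI apparently does not for this case). The same all-to-all exchange can be done with a single collective, which has no such ordering hazard. A minimal sketch, reusing the variable names from the quoted code (mpi_myXGlobalLo, mpi_myYGlobalLo, numberOfProcs, MPI_COMM_MODEL) and assuming the receive arrays are dimensioned to numberOfProcs:

C     Sketch: replace the per-rank SEND/RECV loops with collectives.
C     Each rank contributes its own myXGlobalLo / myYGlobalLo;
C     MPI_ALLGATHER fills mpi_myXGlobalLo(1..numberOfProcs) and
C     mpi_myYGlobalLo(1..numberOfProcs) identically on every rank,
C     with no dependence on MPI send buffering.
      CALL MPI_ALLGATHER( myXGlobalLo,     1, MPI_INTEGER,
     &                    mpi_myXGlobalLo, 1, MPI_INTEGER,
     &                    MPI_COMM_MODEL, ierr )
      CALL MPI_ALLGATHER( myYGlobalLo,     1, MPI_INTEGER,
     &                    mpi_myYGlobalLo, 1, MPI_INTEGER,
     &                    MPI_COMM_MODELS, ierr )

(The last argument name above should of course be MPI_COMM_MODEL; this is only an illustration of the collective, not a tested patch against ini_procs.F.)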
Jean_Michel