[MITgcm-devel] archer CRAY_XC30
David Ferreira
dfer at mit.edu
Mon Apr 27 11:45:56 EDT 2015
Hi Martin,
I'm doing good! I'm almost fully British by now.
So, to answer your question:
the /home/n02/n02/dfer/linux_Archer_cray file is indeed the same as the one
checked in; I should fix this to avoid confusion.
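For the record, building against the checked-in file would be something like
the following from an experiment's build directory (the relative paths and
the -mods directory are only indicative):

  ../../../tools/genmake2 -mpi -mods ../code \
      -of ../../../tools/build_options/linux_ia64_cray_archer
  make depend
  make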
I did not tweak the Cray environment. In fact, I went for the bare minimum:
getting the testreports (normal + restart) running and producing decent
results. I failed to get mixed mode (MPI+OpenMP) running; whatever I tried,
I got an error in the thread synchronization (did you run into such an
issue?).
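Concretely, the testreport invocations would be along these lines (the
optfile path, launcher options and thread count are only indicative, not the
exact commands):

  cd verification
  # MPI tests with the Cray/Archer build options file
  ./testreport -mpi -of ../tools/build_options/linux_ia64_cray_archer \
      -command 'aprun -n TR_NPROC ./mitgcmuv'
  # multi-threaded (OpenMP) tests; the mixed-mode attempt is where the
  # thread-synchronization error showed up
  export OMP_NUM_THREADS=4
  ./testreport -mth -of ../tools/build_options/linux_ia64_cray_archer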
That said, I did not find the model to be slow; it seems comparable to, or
faster than, the NASA Pleiades.
Not very useful for you, sorry.
One reason I did not do much is that we got a small technology grant to
install and optimize the MITgcm and its adjoint on Archer (an engineer from
the Archer team will carry out the work). This includes improving the speed
of the forward model (general optimization and, for example, using the PETSc
library in place of cg2d).
I'm happy to pass along whatever good comes out of this testing.
cheers,
david
On 4/27/15 8:47 AM, Martin Losch wrote:
> Hi David,
>
> how are you doing?
>
> I am running the MITgcm on ECMWF’s Cray XC30, which is probably similar to Archer. I have a few scaling issues that may be related to environment variables, compile options, etc.
>
> From testing.html I saw that you don’t use the checked-in build options file for linux_ia64_cray_archer, but /home/n02/n02/dfer/linux_Archer_cray. Are there any significant differences between these files? What else do you do to tweak the Cray environment? Do you link with dmapp libraries, hugepages, etc.? I really don’t know what these things mean, but I am seeing very little improvement with these options (solve_for_pressure and seaice_dynamics, with their multiple calls to mpi_allreduce in global_sum or global_max, are the problem). I also use the “single reduction” cg2d version (defining ALLOW_SRCG and switching it on), which helps; mixing MPI with OpenMP reduces the overhead in solve_for_pressure a little (currently I use only 4 threads).
>
> What’s your experience?
>
> Martin
>
>
> _______________________________________________
> MITgcm-devel mailing list
> MITgcm-devel at mitgcm.org
> http://mitgcm.org/mailman/listinfo/mitgcm-devel
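A minimal sketch of the settings mentioned above (the single-reduction cg2d
and the OpenMP thread count), assuming the standard MITgcm file layout: at
compile time, in CPP_OPTIONS.h (or through the build options file),

  #define ALLOW_SRCG

at run time, in the "data" file, namelist PARM02,

  useSRCGSolver=.TRUE.,

and, for 4 OpenMP threads, in "eedata", namelist EEPARMS (with
OMP_NUM_THREADS=4 set in the job script),

  nTx=2,
  nTy=2,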