[MITgcm-support] problems with compile with MPI

Miroslaw Andrejczuk Andrejczuk at atm.ox.ac.uk
Wed Dec 18 07:54:23 EST 2013


Hi Lizhi,

It looks like the directory where the MPI library is installed is not in your path. Consider the following modifications to your
linux_amd64_pgf90+mpi_xd1 file:

FC='/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/mpif90'
CC='/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/mpicc'
LINK='/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/mpif90'

and when running the model, use the full path to mpirun: /dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/mpirun.

Or add /dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/ to your PATH variable.
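For the PATH route, a minimal sketch (the OpenMPI directory is the one quoted in this thread; substitute your own installation path):

```shell
# Prepend the OpenMPI bin directory to PATH so mpif90/mpicc/mpirun are found
MPI_BIN=/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin
export PATH="$MPI_BIN:$PATH"

# Sanity check: the directory should now be on PATH
echo "$PATH" | grep -q "$MPI_BIN" && echo "PATH updated"
```

Adding these lines to ~/.bashrc (or your shell's startup file) makes the change survive new logins.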

Mirek


________________________________________
From: Oliver Jahn [jahn at MIT.EDU]
Sent: Wednesday, December 18, 2013 12:40 PM
To: mitgcm-support at mitgcm.org
Subject: Re: [MITgcm-support] problems with compile with MPI

First, you should verify that you can compile a simple MPI program, like
the ones here:
http://www.dartmouth.edu/~rc/classes/intro_mpi/hello_world_ex.html
You may need to load a module first ("module load openmpi" or
similar; check "module avail"), or ask your system administrator how to do it.
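A self-contained way to run that check from the shell (a sketch; the file name hello_mpi.c is just illustrative, and the guard handles machines where the MPI wrappers are not yet on PATH):

```shell
# Write a minimal MPI "hello world" (same idea as the Dartmouth examples above)
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# Compile and run it only if the MPI wrappers are actually on PATH
if command -v mpicc >/dev/null 2>&1; then
    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 2 ./hello_mpi
else
    echo "mpicc not found: load the MPI module or fix PATH first"
fi
```

If this compile or run fails, the problem is in the MPI setup itself, not in MITgcm or genmake2.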

On 2013-12-17 20:49, 李志远 wrote:
> Thanks for your quick reply, Oliver.
>      I tried as you suggested: I cd'd into hs94.1x64x5/build, then typed
> "../../../tools/genmake2 -mods=../code -of=../../../tools/build_options/linux_amd64_pgf90+mpi_xd1"
> and it produced:
> getting OPTFILE information:
>
>     using OPTFILE="../../../tools/build_options/linux_amd64_pgf90+mpi_xd1"
>   getting AD_OPTFILE information:
>     using AD_OPTFILE="../../../tools/adjoint_options/adjoint_default"
>   check makedepend (local: 0, system: 0, 0)
>   Turning on MPI cpp macros
>
> ===  Checking system libraries  ===
>   Do we have the system() command using mpif90...  no
>   Do we have the fdate() command using mpif90...  no
>   Do we have the etime() command using mpif90...  no
>   Can we call simple C routines (here, "cloc()") using mpif90...  no
>   Can we unlimit the stack size using mpif90...  no
>   Can we register a signal handler using mpif90...  no
>   Can we use stat() through C calls...  no
>   Can we create NetCDF-enabled binaries...  no
>   Can we create LAPACK-enabled binaries...  no
>   Can we call FLUSH intrinsic subroutine...  no
>
> ===  Setting defaults  ===
>   Adding MODS directories: ../code
>   Making source files in eesupp from templates
>   Making source files in pkg/exch2 from templates
>   Making source files in pkg/regrid from templates
>
> ===  Determining package settings  ===
>   getting package dependency info from  ../../../pkg/pkg_depend
>   getting package groups info from      ../../../pkg/pkg_groups
>   checking list of packages to compile:
>     using PKG_LIST="../code/packages.conf"
>     before group expansion packages are: gfd shap_filt diagnostics mypackage mnc
>     replacing "gfd" with:  mom_common mom_fluxform mom_vecinv generic_advdiff debug mdsio rw monitor
>     after group expansion packages are:  mom_common mom_fluxform mom_vecinv generic_advdiff debug mdsio rw monitor shap_filt diagnostics mypackage mnc
>   applying DISABLE settings
>   applying ENABLE settings
>     packages are:  debug diagnostics generic_advdiff mdsio mnc mom_common mom_fluxform mom_vecinv monitor mypackage rw shap_filt
>
> *********************************************************************
> WARNING: the "mnc" package was enabled but tests failed to compile
>   NetCDF applications.  Please check that:
>   1) NetCDF is correctly installed for this compiler and
>   2) the LIBS variable (within the "optfile") specifies the correct
>        NetCDF library to link against.
>   Due to this failure, the "mnc" package is now DISABLED.
> *********************************************************************
>
>   applying package dependency rules
>     packages are: debug diagnostics generic_advdiff mdsio  mom_common mom_fluxform mom_vecinv monitor mypackage rw shap_filt
>   Adding STANDARDDIRS='eesupp model'
>   Searching for *OPTIONS.h files in order to warn about the presence
>     of "#define "-type statements that are no longer allowed:
>     found CPP_EEOPTIONS="../../../eesupp/inc/CPP_EEOPTIONS.h"
>     found CPP_OPTIONS="../../../model/inc/CPP_OPTIONS.h"
>   Creating the list of files for the adjoint compiler.
>
> ===  Creating the Makefile  ===
>   setting INCLUDES
>   Determining the list of source and include files
>   Writing makefile: Makefile
>   Add the source list for AD code generation
>   Making list of "exceptions" that need ".p" files
>   Making list of NOOPTFILES
>   Add rules for links
>   Adding makedepend marker
>
> ===  Done  ===
>
>
>
> I don't know where it went wrong; I could compile and run
> successfully without MPI before.
>
>
> Thank you very much!
>
> At 2013-12-18 01:00:02, mitgcm-support-request at mitgcm.org wrote:
>>Send MITgcm-support mailing list submissions to
>>      mitgcm-support at mitgcm.org
>>
>>To subscribe or unsubscribe via the World Wide Web, visit
>>      http://mitgcm.org/mailman/listinfo/mitgcm-support
>>or, via email, send a message with subject or body 'help' to
>>      mitgcm-support-request at mitgcm.org
>>
>>You can reach the person managing the list at
>>      mitgcm-support-owner at mitgcm.org
>>
>>When replying, please edit your Subject line so it is more specific
>>than "Re: Contents of MITgcm-support digest..."
>>
>>
>>Today's Topics:
>>
>>   1. Re: Parameters to parallelise a coupled run ? (Jean-Michel Campin)
>>   2. problems with compile with MPI (李志远)
>>   3. Re: problems with compile with MPI (MIT)
>>
>>
>>----------------------------------------------------------------------
>>
>>Message: 1
>>Date: Mon, 16 Dec 2013 12:47:46 -0500
>>From: Jean-Michel Campin <jmc at ocean.mit.edu>
>>To: mitgcm-support at mitgcm.org
>>Subject: Re: [MITgcm-support] Parameters to parallelise a coupled run
>>      ?
>>Message-ID: <20131216174746.GA8588 at ocean.mit.edu>
>>Content-Type: text/plain; charset=us-ascii
>>
>>Hi Alexandre,
>>
>>There was a problem in the coupler mapping interface (especially
>>regarding the run-off map that the coupler uses) that was
>>found recently and fixed (Nov 27, 2013 and Dec 02, 2013).
>>I would recommend updating your code (checkpoint64r contains
>>these fixes).
>>
>>However, this would have very little or no impact on
>>how fast the model runs on 1+48+48 procs.
>>
>>One (obvious) thing is that the default option in "run_cpl_test"
>>is to compile with "-devel" (very slow).
>>Otherwise, you may want to try smaller numbers of procs (e.g.,
>>1+12+12 and 1+24+24) to check where this scaling/speed degrades.
>>
>>Cheers,
>>Jean-Michel
>>
>>On Thu, Dec 12, 2013 at 10:53:35AM +0100, Alexandre Pohl wrote:
>>> Hi all,
>>>
>>> I asked you a few weeks ago about a way to merge NetCDF output files. I am still learning to use the MITgcm, hoping to soon apply it to paleoclimate modelling (the Ordovician glaciation).
>>>
>>> But I am now having problems parallelising the code. I use the cubed-sphere exch2 grid. I started from the "cpl_aim+ocn" verification case. It runs well with 6 faces, 6 tiles and 3 processes (1 for the ocean, 1 for the atmosphere and 1 for the coupler).
>>>
>>> To parallelise the code, I then divided each face of the cube into 8 tiles using the SIZE.h file:
>>>
>>>      &           sNx =  16,
>>>      &           sNy =   8,
>>>      &           OLx =   2,
>>>      &           OLy =   2,
>>>      &           nSx =   1,
>>>      &           nSy =   1,
>>>      &           nPx =  48,
>>>      &           nPy =   1,
>>>      &           Nx  = sNx*nSx*nPx,        > which gives 768 points in X
>>>      &           Ny  = sNy*nSy*nPy,        > which gives 8 points in Y
>>>      &           Nr  =  15)  [Nr=5 for the atmosphere]
>>>
>>> I used the "adjustment.cs-32x32x1" verification files as a template to write my own, except that I gave one process to each tile ("nSx=1" above), which makes 6*8=48 processes ("nPx=48") instead of a single process.
>>>
>>> I added "usingMPI=.TRUE.," to my eedata files and set "nTx=1".
>>>
>>> I am now wondering how many processes I have to use for the ocean and for the atmosphere. I use the "run_cpl_test" script to prepare, compile and execute the model. I tried "Npr=97; NpOc=48;", which means 48 processes for the ocean, 48 for the atmosphere and 1 for the coupler, but the model unfortunately does not run faster with these 97 processes than with only 3.
>>>
>>> Could you tell me if my parameters are wrong?
>>> Have I forgotten something?
>>>
>>> I would be very interested if somebody could describe a set of parameters that allows the model to be parallelised efficiently.
>>>
>>> Thank you for your help,
>>> Alexandre
>>>
>>> _______________________________________________
>>> MITgcm-support mailing list
>>> MITgcm-support at mitgcm.org
>>> http://mitgcm.org/mailman/listinfo/mitgcm-support
>>
>>
>>
>>------------------------------
>>
>>Message: 2
>>Date: Tue, 17 Dec 2013 21:48:36 +0800 (CST)
>>From: 李志远 <oceanlizy at 163.com>
>>To: MITgcm <mitgcm-support at mitgcm.org>
>>Subject: [MITgcm-support] problems with compile with MPI
>>Message-ID: <103a0553.f734.14300d0bbb2.Coremail.oceanlizy at 163.com>
>>Content-Type: text/plain; charset="gbk"
>>
>>From: lizhiyuan <oceanlizy at 163.com>
>>To: mitgcm-support at mitgcm.org
>>Subject: problems with compile with MPI
>>
>>Dear all,
>>      I want to use MPI to run my model, but I don't know how to compile it with MPI. My computer system is linux_amd64 with gfortran and OpenMPI. OpenMPI is installed at '/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/' and has five subdirectories: bin, etc, include, lib, share.
>>I found that my mpif90 is located at '/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/bin/mpif90'.
>>Should I just modify the file 'linux_amd64_pgf90+mpi_xd1' in tools/build_options as follows:
>>
>>
>>FC='mpif90'
>>CC='mpicc'
>>LINK='mpif90'
>>
>>
>>MPI='true'
>>INCLUDEDIRS='/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/include'
>>DEFINES='-DWORDLENGTH=4'
>>CPP='/usr/bin/cpp -P -traditional'
>>EXTENDED_SRC_FLAG='-Mextend'
>>
>>
>>INCLUDES='-I/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/include'
>>LIBS='-L/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90/lib'
>>
>>
>>and then cd to build and type "../../../tools/genmake2 -mods=../code -of=../../../tools/build_options/linux_amd64_pgf90+mpi_xd1=/dcfs2/program/mpi/openmpi/1.4.3/pgcc_pgf90"?
>>I am somewhat confused; has someone met the same problem? Can anyone help me?
>>I am looking forward to your quick reply.
>>Thanks so much!
>>
>>------------------------------
>>
>>Message: 3
>>Date: Tue, 17 Dec 2013 10:01:24 -0500
>>From: MIT <jahn at MIT.EDU>
>>To: mitgcm-support at mitgcm.org
>>Subject: Re: [MITgcm-support] problems with compile with MPI
>>Message-ID: <52B06744.4030609 at mit.edu>
>>Content-Type: text/plain; charset=UTF-8
>>
>>Dear Lizhi,
>>
>>the modifications you made look reasonable.  Why don't you give them a try
>>and report back how it went?  The last part of your genmake2 command
>>looks funny, though.  It should just be
>>
>>../../../tools/genmake2 -mods=../code
>>-of=../../../tools/build_options/linux_amd64_pgf90+mpi_xd1
>>
>>And make sure your SIZE.h (in code) has nPx or nPy > 1.  If you don't
>>know how to modify SIZE.h, have a look at one of the verification
>>experiments, like verification/tutorial_barotropic_gyre.  SIZE.h_mpi is
>>the version for MPI with 2 processors.
>>
>>Cheers,
>>Oliver
>>
>>
>>
>>
>>
>>------------------------------
>>
>>_______________________________________________
>>MITgcm-support mailing list
>>MITgcm-support at mitgcm.org
>>http://mitgcm.org/mailman/listinfo/mitgcm-support
>>
>>
>>End of MITgcm-support Digest, Vol 126, Issue 31
>>***********************************************

_______________________________________________
MITgcm-support mailing list
MITgcm-support at mitgcm.org
http://mitgcm.org/mailman/listinfo/mitgcm-support

