[MITgcm-support] Reading errors
Estanislao Gavilan Pascual-Ahuir
dramauh at hotmail.com
Thu Aug 22 22:37:21 EDT 2019
Hi Martin and Jean-Michel
I still have the error forrtl: severe (36): attempt to access non-existent record, unit 16, file /bblablabla/run/OBzonalV.bin. In addition, after doing make depend I saw the message "f90mkdepend: no source file found for module this". I am not sure if this is important.
I followed your advice. I started with a simulation in serial mode without obcs, and the model ran perfectly fine. Once I switched on obcs (i.e. useOBCS=.TRUE.), I got that error. Following the architecture of my cluster, I changed the linux_ia64_ifort file a little bit. By the way, do I need the last set of NETCDF_ROOT conditions and netcdf tests? I was thinking of removing them.
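A quick way to see how many records the model can actually find in that file is to compare its size with the size of one boundary slice. This is only a sketch in Python, assuming the default 32-bit binary I/O (readBinaryPrec=32) and the 43-point, 2-level grid from the namelists quoted further down:

import os

nx, nr = 43, 2                      # boundary points and vertical levels (from data, PARM04)
bytes_per_value = 4                 # readBinaryPrec = 32 is the MITgcm default
record_bytes = nx * nr * bytes_per_value

size = os.path.getsize("OBzonalV.bin")
n_records, leftover = divmod(size, record_bytes)
print("%d complete records of %dx%d values, %d leftover bytes"
      % (n_records, nx, nr, leftover))

If n_records is smaller than the record the model asks for (set by externForcingPeriod and externForcingCycle below), this is exactly the kind of situation in which Fortran reports a "non-existent record" error.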
Kind regards,
Estanis
#!/bin/bash
#
# Tested on uv100.awi.de (SGI UV 100, details:
# http://www.sgi.com/products/servers/uv/specs.html)
# a) For more speed, provided your data size does not exceed 2GB you can
# remove -fPIC which carries a performance penalty of 2-6%.
# b) You can replace -fPIC with '-mcmodel=medium -shared-intel' which may
# perform faster than -fPIC and still support data sizes over 2GB per
# process but all the libraries you link to must be compiled with
# -fPIC or -mcmodel=medium
# c) flags adjusted for ifort 12.1.0
FC=ifort
F90C=ifort
CC=icc
# requires that all static libraries are available:
#LINK='ifort -static'
LINK='ifort'
# for adjoint runs the default makedepend often cannot handle enough files
#MAKEDEPEND=tools_xmakedepend
DEFINES='-DWORDLENGTH=4'
CPP='cpp -traditional -P'
F90FIXEDFORMAT='-fixed -Tf'
EXTENDED_SRC_FLAG='-132'
#OMPFLAG='-openmp'
NOOPTFLAGS="-O0 -g -m64"
NOOPTFILES=''
MCMODEL='-fPIC'
# for large memory requirements uncomment this line
#MCMODEL='-mcmodel=medium -shared-intel'
FFLAGS="$FFLAGS -W0 -WB -convert big_endian -assume byterecl $MCMODEL"
#- might want to use '-r8' for fizhi pkg:
#FFLAGS="$FFLAGS -r8"
if test "x$IEEE" = x ; then #- with optimisation:
FOPTIM='-O3 -align'
# does not work when -static does not work
# FOPTIM='-fast -align'
# instead you can use
# FOPTIM='-O3 -ipo -align'
else
if test "x$DEVEL" = x ; then #- no optimisation + IEEE :
FOPTIM='-O0 -noalign -fp-model precise'
# -fltconsistency
else #- development/check options:
FOPTIM='-O0 -noalign -fp-model precise'
FOPTIM="$FOPTIM -g -check all -fpe0 -traceback -ftrapuv -fp-model except -warn all"
fi
fi
F90FLAGS=$FFLAGS
F90OPTIM=$FOPTIM
CFLAGS="-O0 -ip $MCMODEL"
INCLUDEDIRS=''
INCLUDES="-I. -I$NETCDF/include -I/WORK/app/netcdf/4.3.2/01-CF-14/include -I/usr/local/mpi3-dynamic/include"
LIBS="-L$NETCDF/lib -lnetcdff -lnetcdf -L/WORK/app/netcdf/4.3.2/01-CF-14/lib"
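# The blocks below overwrite the INCLUDES and LIBS set above whenever one of the
# NETCDF_* environment variables is set or a netCDF installation is found in a
# standard system location; otherwise the hard-coded paths above are kept.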
if [ "x$NETCDF_ROOT" != x ] ; then
INCLUDEDIRS="${NETCDF_ROOT}/include"
INCLUDES="-I${NETCDF_ROOT}/include"
LIBS="-L${NETCDF_ROOT}/lib"
elif [ "x$NETCDF_HOME" != x ]; then
INCLUDEDIRS="${NETCDF_HOME}/include"
INCLUDES="-I${NETCDF_HOME}/include"
LIBS="-L${NETCDF_HOME}/lib"
elif [ "x$NETCDF_INC" != x -a "x$NETCDF_LIB" != x ]; then
NETCDF_INC=`echo $NETCDF_INC | sed 's/-I//g'`
NETCDF_LIB=`echo $NETCDF_LIB | sed 's/-L//g'`
INCLUDEDIRS="${NETCDF_INC}"
INCLUDES="-I${NETCDF_INC}"
LIBS="-L${NETCDF_LIB}"
elif [ "x$NETCDF_INCDIR" != x -a "x$NETCDF_LIBDIR" != x ]; then
INCLUDEDIRS="${NETCDF_INCDIR}"
INCLUDES="-I${NETCDF_INCDIR}"
LIBS="-L${NETCDF_LIBDIR}"
elif test -d /usr/include/netcdf-3 ; then
INCLUDEDIRS='/usr/include/netcdf-3'
INCLUDES='-I/usr/include/netcdf-3'
LIBS='-L/usr/lib/netcdf-3 -L/usr/lib64/netcdf-3'
elif test -d /usr/include/netcdf ; then
INCLUDEDIRS='/usr/include/netcdf'
INCLUDES='-I/usr/include/netcdf'
elif test -d /usr/local/netcdf ; then
INCLUDEDIRS='/usr/include/netcdf/include'
INCLUDES='-I/usr/local/netcdf/include'
LIBS='-L/usr/local/netcdf/lib'
elif test -d /usr/local/include/netcdf.inc ; then
INCLUDEDIRS='/usr/local/include'
INCLUDES='-I/usr/local/include'
LIBS='-L/usr/local/lib64'
fi
if [ -n "$MPI_INC_DIR" -a "x$MPI" = xtrue ] ; then
LIBS="$LIBS -lmpi"
INCLUDES="$INCLUDES -I$MPI_INC_DIR"
INCLUDEDIRS="$INCLUDEDIRS $MPI_INC_DIR"
#- used for parallel (MPI) DIVA
MPIINCLUDEDIR="$MPI_INC_DIR"
fi
----------------------------------------------------------------------
Message: 1
Date: Thu, 22 Aug 2019 13:10:55 +0200
From: Martin Losch <Martin.Losch at awi.de>
To: MITgcm Support <mitgcm-support at mitgcm.org>
Subject: Re: [MITgcm-support] Reading errors
Message-ID: <325AC4E8-1648-4C6A-BFED-7722921C733E at awi.de>
Content-Type: text/plain; charset="utf-8"
Hi Estanis,
thanks for the details. This is what I would do:
- At the compile level, use a standard build options file. With an Intel compiler on a Linux system I would start with MITgcm/tools/build_options/linux_amd64_ifort or linux_ia64_ifort (depending on the output of uname -a; in fact, genmake2 is probably able to pick the correct file if you don't specify it). Since your domain is small, I would first try without MPI, i.e. like this:
${somepath}/tools/genmake2 -of ${somepath}/tools/build_options/linux_amd64_ifort -mods ../code
make CLEAN && make depend && make
- With this non-MPI configuration I would try to run the model. First with useOBCS=.FALSE. (just a few timesteps), and then with .TRUE.
- Once this works, you can recompile with MPI (if you really need it), like this:
${somepath}/tools/genmake2 -of ${somepath}/tools/build_options/linux_amd64_ifort -mods ../code -mpi
make CLEAN && make depend && make
(note that the extra flag "-mpi" is enough)
and check if you get the same result. For further help, record any error messages you get after each step.
Martin
PS. Some comments about your namelist below:
> On 22. Aug 2019, at 12:39, Estanislao Gavilan Pascual-Ahuir <dramauh at hotmail.com> wrote:
>
> Hi Martin,
>
> Before anything, thank you so much for your help. I will try to answer all your questions.
>
> what is the platform, the compiler?
> The platform is Linux 2.6.32-431.TH.x86_64 GNU/Linux, Red Hat Enterprise Linux Server release 6.5. I am using Intel compilers wrapped in MPI; the compiler version is 14.0.2.
> details of the configuration (content of code-directory and namelist files)
> I am running a simple simulation with open boundaries. I load the packages gfd, obcs, mnc and diagnostics via packages.conf. The frequency of the open boundary forcing is set in the data file. This is the data file:
> # Model parameters
> # Continuous equation parameters
> &PARM01
> tRef=23.,23.,
> sRef=35.,35.,
> selectCoriMap=4,
> viscAh=4.E2,
# With your grid choice (sphericalPolarGrid), the Coriolis parameter is computed, and these values are not used.
> f0=1.E-4,
> beta=1.E-11,
> rhoNil=1000.,
> gBaro=9.81,
> rigidLid=.FALSE.,
> implicitFreeSurface=.TRUE.,
> # momAdvection=.FALSE.,
> tempStepping=.FALSE.,
> saltStepping=.FALSE.,
> &
>
> # Elliptic solver parameters
> &PARM02
> cg2dTargetResidual=1.E-7,
> cg2dMaxIters=1000,
> &
>
> # Time stepping parameters
> &PARM03
> nIter0=0,
> nTimeSteps=100,
> deltaT=1200.0,
> pChkptFreq=31104000.0,
> chkptFreq=15552000.0,
> dumpFreq=15552000.0,
# this will give you monitor output every timestep (which is what you want while debugging); later I would set it to something like 20-50 * deltaT
> monitorFreq=1200.,
> monitorSelect=2,
> periodicExternalForcing=.TRUE.,
# this means that you will read data each time step. Is that what you want?
> externForcingPeriod= 1200.,
# with your choice of externForcingPeriod, this requires that you have externForcingCycle/externForcingPeriod = 12000000/1200 = 10000 records in the file(s)
> externForcingCycle = 12000000.,
> &
> # Gridding parameters
> &PARM04
> usingSphericalPolarGrid=.TRUE.,
# alternatively you can say dxSpacing = 1., dySpacing = 1.,
> delX=43*1.,
> delY=43*1.,
> xgOrigin=-21.,
> ygOrigin=-21.,
> delR=2*500.,
> &
>
> # Input datasets
> &PARM05
> bathyFile='bathy_cir.bin'
> meridWindFile=,
> &
>
> This is the data.obcs
>
> # Open-boundaries
> &OBCS_PARM01
> OBCSfixTopo=.FALSE.,
# if I understand the configuration correctly, you have a zonally reentrant channel with walls in the north and the south (python notation: bathy[0+2,:] = 0, and bathy[ny-1,:] = 0, except where you have the open boundaries)? You could actually save two grid rows (have 40 instead of 43 points in j-direction and set bathy[0,:]=0, bathy[ny,:]=0).
> OB_Ieast=0,
> OB_Iwest=0,
> OB_Jnorth(16:28)=13*41,
> OB_Jsouth(16:28)=13*3,
> useOBCSprescribe = .TRUE.,
# These files should be found if they are in the same directory where you run your model. According to your dimensions and time parameters, for 100 timesteps they should each contain 100 fields of dimension (nx,nz). For anything above 10000 timesteps, they should have 10000 fields (because after the 10000th record, the model starts from the beginning again, according to your externForcingCycle); see the sketch after this data.obcs listing.
> OBNvFile = 'OBzonalV.bin',
> OBSvFile = 'OBzonalV.bin',
> OBNuFile = 'OBmeridU.bin',
> OBSuFile = 'OBmeridU.bin',
# Same as before, this will give you a lot of output. You may want to comment out this line, because OBCS_monitorFreq = monitorFreq by default.
> OBCS_monitorFreq=1200.00,
> OBCS_monSelect = 1,
> &
>
> &OBCS_PARM02
> &
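For reference, here is a minimal sketch of how input files matching these namelists could be written so that the record count agrees with the forcing parameters. The array shapes, placeholder values, wall placement and big-endian single-precision format are assumptions read off the namelists above (43x43 grid, 2 levels, externForcingCycle/externForcingPeriod = 10000 records), not the preprocessing actually used here:

import numpy as np

nx, ny, nr = 43, 43, 2                 # horizontal grid and vertical levels (data, PARM04)
n_records = 12000000 // 1200           # externForcingCycle / externForcingPeriod = 10000

# Bathymetry: flat 1000 m deep channel with solid walls in the rows outside the
# open-boundary rows (OB_Jsouth at j=3, OB_Jnorth at j=41).
bathy = np.full((ny, nx), -1000.0)
bathy[:2, :] = 0.0                     # rows j=1,2 south of the southern boundary
bathy[-2:, :] = 0.0                    # rows j=42,43 north of the northern boundary
bathy.astype(">f4").tofile("bathy_cir.bin")            # big-endian single precision

# North/south OBCS files: one (nr, nx) slice per record and one record per
# externForcingPeriod, so the model never asks for a non-existent record.
obv = np.full((n_records, nr, nx), 0.1, dtype=">f4")   # placeholder meridional velocity
obv.tofile("OBzonalV.bin")
obu = np.zeros((n_records, nr, nx), dtype=">f4")       # placeholder zonal velocity
obu.tofile("OBmeridU.bin")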
> are you using the latest code (some of the flags in the build options file look very outdated)?
> Yes, it is the latest code (version MITgcm_c67k). About the flags in my build options file: I did not make my own; I used one that I found in our research group.
>
> Kind regards,
>
> Estanislao
>
------------------------------
Message: 2
Date: Thu, 22 Aug 2019 09:33:18 -0400
From: Jean-Michel Campin <jmc at mit.edu>
To: mitgcm-support at mitgcm.org
Subject: Re: [MITgcm-support] Reading errors
Message-ID: <20190822133318.GA13562 at ocean.mit.edu>
Content-Type: text/plain; charset=us-ascii
Hi Estanis,
Just a small adjustment:
the standard optfile for the Intel compiler (version 11 and newer) is:
linux_amd64_ifort11
in MITgcm/tools/build_options
The optfile "linux_amd64_ifort" is for older versions (10 and older).
However, if you are compiling with Intel MPI (a recent version of the compiler),
then you need to use: linux_amd64_ifort+impi
Cheers,
Jean-Michel
On Thu, Aug 22, 2019 at 01:10:55PM +0200, Martin Losch wrote:
> [quoted message trimmed; see Message 1 above]