[MITgcm-support] Reading errors

Kaveh Purkiani kavehpurkiani at googlemail.com
Sat Aug 24 03:01:13 EDT 2019


Hi Estanislao,

 Could you try the Matlab code once again with prec='real*8' and see if the
model crashes again?
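
A quick sanity check for either precision (a sketch; the dimensions
(nx,nz,nt) = (43,2,100) are taken from Martin's estimate further down the
thread) is to compare the file size on disk with what the model will try
to read:

# expected open-boundary file size per precision, assuming (43,2,100)
nx, nz, nt = 43, 2, 100
for label, nbytes in (('real*4', 4), ('real*8', 8)):
    print(label, nx * nz * nt * nbytes, 'bytes')
# real*4 -> 34400 bytes, real*8 -> 68800 bytes; whichever precision you
# write must match the model's readBinaryPrec (default: real*4).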

regards
Kaveh

On Sat 24. Aug 2019 at 03:09, Estanislao Gavilan Pascual-Ahuir <
dramauh at hotmail.com> wrote:

> Hi Martin,
>
> I am not sure that is the problem, because I used readbin.m to check
> the input files and the matrices were correct. This is the MATLAB code I
> used:
> % create N time slabs for testing
>  ieee='b';
>  prec='real*4';
> v0=0.0041;
> N=100;
> vvel=ones(13,2,N);
> uMerid=zeros(13,2,N);
> vZonal=zeros(13,2,N);  % preallocate
> for i=1:N
> vZonal(:,:,i)=v0*vvel(:,:,i)*i/N;
> end
> fid=fopen('OBmeridU.bin','w',ieee); fwrite(fid,uMerid,prec);fclose(fid);
> fid=fopen('OBzonalV.bin','w',ieee); fwrite(fid,vZonal,prec); fclose(fid);
>
> I have even tried to use your code, but the files do not change size.
> They are always 11 KB.
>
> % create N time slabs for testing
> ieee='b';   % needed if run in a fresh session
> prec='real*4';
> v0=0.0041;
> N=100;
> vvel=ones(13,2,N);
> uMerid=zeros(13,2,N);
> vZonal=zeros(13,2,N);  % preallocate
> for i=1:N
>  vZonal(:,:,i)=v0*vvel(:,:,i)*i/N;
> end
> fid=fopen('OBmeridU.bin','w',ieee);
> for i=1:N
> fwrite(fid,squeeze(uMerid(:,:,i)),prec);
> end
> fclose(fid);
> fid=fopen('OBzonalV.bin','w',ieee);
> for i=1:N
> fwrite(fid,squeeze(vZonal(:,:,i)),prec);
> end
> fclose(fid);
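>
> A side calculation (not in the original message) shows why the size never
> changes: both scripts write the same 13*2*100 elements; and 13 points per
> record, rather than the full grid width, is also what makes the file too
> short for the model:
>
> # both scripts write 13*2*100 float32 values -> identical file sizes
> print(13 * 2 * 100 * 4)    # 10400 bytes, the ~11 KB seen above
> # per Martin's estimate below, each record must span the full nx = 43
> # grid points: 43 * 2 * 100 * 4 = 34400 bytes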
>
> Kind regards,
>
> Estanislao
>
> ------------------------------
> *From:* MITgcm-support <mitgcm-support-bounces at mitgcm.org> on behalf of
> mitgcm-support-request at mitgcm.org <mitgcm-support-request at mitgcm.org>
> *Sent:* Friday, 23 August 2019 18:00
> *To:* mitgcm-support at mitgcm.org <mitgcm-support at mitgcm.org>
> *Subject:* MITgcm-support Digest, Vol 194, Issue 14
>
>
>
> Today's Topics:
>
>    1. Re: Reading errors (Martin Losch)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 23 Aug 2019 10:02:49 +0200
> From: Martin Losch <Martin.Losch at awi.de>
> To: MITgcm Support <mitgcm-support at mitgcm.org>
> Subject: Re: [MITgcm-support] Reading errors
> Message-ID: <1AB158B6-A07E-43EB-A3F6-6530BDE345C9 at awi.de>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Estanis,
>
> this most likely means that your OBzonalV.bin is "too short".
> Guessing from your parameters, you'll need fields with dimension (nt,nz,nx)
> = (100,2,43). In your data file you do not specify "readBinaryPrec", so
> it will default to "real*4" (single precision, "float32"), but check your
> stdout. In python this would mean writing a file like this:
>
> import numpy as np
> import sys
> nt=100
> nx=43
> nz=2
>
> # zeros here, but any values work; the dtype is what matters
> data = np.zeros((nt,nz,nx),dtype='float32')
> if sys.byteorder == 'little': data.byteswap(True)
> #
> fid = open('OBzonalV.bin',"wb")
> data.tofile(fid)
> fid.close()
> # switch back to machine format
> if sys.byteorder == 'little': data.byteswap(True)
>
> With this I get a file of this size:
>
> >>> du -h ./OBzonalV.bin
>  36K     ./OBzonalV.bin
>
> >>> ls -l OBzonalV.bin
> -rw-r--r--  1 mlosch  DMAWI\KLI04  34400 Aug 23 09:53 OBzonalV.bin
>
>
> # in matlab it would be similar, but note the different order of
> # dimensions:
> data = zeros(nx,nz,nt);
> accuracy = 'real*4';
> [fid, message] = fopen('OBzonalV.bin','w','ieee-be');
> % write one record per time slab (a single fwrite(fid,data,accuracy)
> % would produce the same bytes; use one or the other, not both):
> for k=1:nt
>   fwrite(fid,squeeze(data(:,:,k)),accuracy);
> end
> fclose(fid);
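>
> To verify either file afterwards (a sketch under the same dimension
> assumptions; reading as big-endian '>f4' avoids the manual byteswap):
>
> import numpy as np
> nt, nz, nx = 100, 2, 43
> check = np.fromfile('OBzonalV.bin', dtype='>f4')  # big-endian float32
> assert check.size == nt * nz * nx    # 8600 values = 34400 bytes
> check = check.reshape(nt, nz, nx)    # one (nz,nx) field per record
>
> (The two writers produce identical bytes: matlab writes each (nx,nz) slab
> column-major, x varying fastest, which matches numpy's C-order (nz,nx)
> layout within each record.)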
>
>
>
> About the NETCDF_ROOT and other INCLUDES: this file is meant to be as
> generic as possible, i.e. the code at the end of the build options file
> will look for paths in the usual places. In 95% of the cases this works. If
> you have a special installation, you may want to hard-wire the paths, as
> you have done; then you don't need the rest anymore. Alternatively you
> could set the environment variables NETCDF_ROOT and MPI_INC_DIR, e.g. like
> this
>
> export NETCDF_ROOT=/WORK/app/netcdf/4.3.2/01-CF-14/
> export MPI_INC_DIR=/usr/local/mpi3-dynamic/include
>
> ***before*** you run genmake2, then it should work, too.
>
> Martin
>
> > On 23. Aug 2019, at 04:37, Estanislao Gavilan Pascual-Ahuir <
> dramauh at hotmail.com> wrote:
> >
> >
> > Hi Martin and Jean-Michel,
> >   I still have the error "forrtl: severe (36): attempt to access
> non-existent record, unit 16, file /bblablabla/run/OBzonalV.bin". In
> addition, I saw this message after doing make depend: "f90mkdepend: no
> source file found for module this". I am not sure if this is important.
> >
> > I followed your advice. I started with a simulation in serial mode
> without obcs, and the model ran perfectly fine. Once I enabled obcs (i.e.
> useOBCS=.TRUE.), I got that error. Following the architecture of my
> cluster, I changed linux_ia64_ifort a little bit. By the way, do I need
> the last set of NETCDF_ROOT conditions and the netcdf test? I was thinking
> of removing them.
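> >
> > A first check for this error (a sketch; it assumes the record shape
> (nx,nz) = (43,2) at real*4 that Martin estimates above):
> >
> > import os
> > record_bytes = 43 * 2 * 4     # one (nx,nz) record at real*4
> > size = os.path.getsize('OBzonalV.bin')
> > print(size, 'bytes =', size / record_bytes, 'records')
> > # fewer (or a fractional number of) records than the model tries to
> > # read is exactly what triggers forrtl severe (36)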
> >
> > Kind regards,
> >
> > Estanis
> >
> > #!/bin/bash
> > #
> > # Tested on uv100.awi.de (SGI UV 100, details:
> > #
> http://www.sgi.com/products/servers/uv/specs.html)
> > # a) For more speed, provided your data size does not exceed 2GB you can
> > #    remove -fPIC which carries a performance penalty of 2-6%.
> > # b) You can replace -fPIC with '-mcmodel=medium -shared-intel' which may
> > #    perform faster than -fPIC and still support data sizes over 2GB per
> > #    process but all the libraries you link to must be compiled with
> > #    -fPIC or -mcmodel=medium
> > # c) flags adjusted for ifort 12.1.0
> >
> > FC=ifort
> > F90C=ifort
> > CC=icc
> > # requires that all static libraries are available:
> > #LINK='ifort -static'
> > LINK='ifort'
> > # for adjoint runs the default makedepend often cannot handle enough
> files
> > #MAKEDEPEND=tools_xmakedepend
> >
> > DEFINES='-DWORDLENGTH=4'
> > CPP='cpp -traditional -P'
> > F90FIXEDFORMAT='-fixed -Tf'
> > EXTENDED_SRC_FLAG='-132'
> > #OMPFLAG='-openmp'
> >
> > NOOPTFLAGS="-O0 -g -m64"
> > NOOPTFILES=''
> >
> > MCMODEL='-fPIC'
> > # for large memory requirements uncomment this line
> > #MCMODEL='-mcmodel=medium -shared-intel'
> >
> > FFLAGS="$FFLAGS -W0 -WB -convert big_endian -assume byterecl $MCMODEL"
> > #- might want to use '-r8' for fizhi pkg:
> > #FFLAGS="$FFLAGS -r8"
> >
> > if test "x$IEEE" = x ; then     #- with optimisation:
> >     FOPTIM='-O3 -align'
> > # does not work when -static does not work
> > #    FOPTIM='-fast -align'
> > # instead you can use
> > #    FOPTIM='-O3 -ipo -align'
> > else
> >   if test "x$DEVEL" = x ; then  #- no optimisation + IEEE :
> >     FOPTIM='-O0 -noalign -fp-model precise'
> >    # -fltconsistency
> >   else                          #- development/check options:
> >     FOPTIM='-O0 -noalign -fp-model precise'
> >     FOPTIM="$FOPTIM -g -check all -fpe0 -traceback -ftrapuv -fp-model
> except -warn all"
> >   fi
> > fi
> >
> > F90FLAGS=$FFLAGS
> > F90OPTIM=$FOPTIM
> > CFLAGS="-O0 -ip $MCMODEL"
> >
> > INCLUDEDIRS=''
> > INCLUDES='-I. -I$NETCDF/include
> -I/WORK/app/netcdf/4.3.2/01-CF-14/include -I/usr/local/mpi3-dynamic/include'
> > LIBS='-L$NETCDF/lib -lnetcdff -lnetcdf
> -L/WORK/app/netcdf/4.3.2/01-CF-14/lib'
> >
> > if [ "x$NETCDF_ROOT" != x ] ; then
> >    INCLUDEDIRS="${NETCDF_ROOT}/include"
> >     INCLUDES="-I${NETCDF_ROOT}/include"
> >     LIBS="-L${NETCDF_ROOT}/lib"
> > elif [ "x$NETCDF_HOME" != x ]; then
> >    INCLUDEDIRS="${NETCDF_HOME}/include"
> >     INCLUDES="-I${NETCDF_HOME}/include"
> >     LIBS="-L${NETCDF_HOME}/lib"
> > elif [ "x$NETCDF_INC" != x -a"x$NETCDF_LIB" != x ]; then
> >     NETCDF_INC=`echo $NETCDF_INC | sed 's/-I//g'`
> >     NETCDF_LIB=`echo $NETCDF_LIB | sed 's/-L//g'`
> >    INCLUDEDIRS="${NETCDF_INC}"
> >     INCLUDES="-I${NETCDF_INC}"
> >     LIBS="-L${NETCDF_LIB}"
> > elif [ "x$NETCDF_INCDIR" != x -a "x$NETCDF_LIBDIR" != x ]; then
> >    INCLUDEDIRS="${NETCDF_INCDIR}"
> >     INCLUDES="-I${NETCDF_INCDIR}"
> >     LIBS="-L${NETCDF_LIBDIR}"
> > elif test -d /usr/include/netcdf-3 ; then
> >    INCLUDEDIRS='/usr/include/netcdf-3'
> >    INCLUDES='-I/usr/include/netcdf-3'
> >    LIBS='-L/usr/lib/netcdf-3 -L/usr/lib64/netcdf-3'
> > elif test -d /usr/include/netcdf ; then
> >     INCLUDEDIRS='/usr/include/netcdf'
> >     INCLUDES='-I/usr/include/netcdf'
> > elif test -d /usr/local/netcdf ; then
> >     INCLUDEDIRS='/usr/local/netcdf/include'
> >     INCLUDES='-I/usr/local/netcdf/include'
> >     LIBS='-L/usr/local/netcdf/lib'
> > elif test -f /usr/local/include/netcdf.inc ; then
> >     INCLUDEDIRS='/usr/local/include'
> >     INCLUDES='-I/usr/local/include'
> >     LIBS='-L/usr/local/lib64'
> > fi
> >
> > if [ -n "$MPI_INC_DIR" -a "x$MPI" = xtrue ] ; then
> >     LIBS="$LIBS -lmpi"
> >     INCLUDES="$INCLUDES -I$MPI_INC_DIR"
> >     INCLUDEDIRS="$INCLUDEDIRS $MPI_INC_DIR"
> >     #- used for parallel (MPI) DIVA
> >     MPIINCLUDEDIR="$MPI_INC_DIR"
> > fi
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 22 Aug 2019 13:10:55 +0200
> > From: Martin Losch <Martin.Losch at awi.de>
> > To: MITgcm Support <mitgcm-support at mitgcm.org>
> > Subject: Re: [MITgcm-support] Reading errors
> > Message-ID: <325AC4E8-1648-4C6A-BFED-7722921C733E at awi.de>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hi Estanis,
> >
> > thanks for the details. This is what I would do:
> >
> > - At the compile level use a standard build options file, with an intel
> compiler on a linux system I would start with
> MITgcm/tools/build_options/linux_amd64_ifort, or linux_ia64_ifort
> (depending on the output of uname -a, in fact, genmake2 is probably able to
> pick the correct file if you don't specify it), and since your domain is
> small I would first try without MPI, i.e. like this:
> >
> > ${somepath}/tools/genmake2 -of
> ${somepath}/tools/build_options/linux_amd64_ifort -mods ../code
> > make CLEAN && make depend && make
> >
> > - With this non-MPI configuration I would try to run the model. First
> with useOBCS=.FALSE. (just a few timesteps), and then with .TRUE.
> >
> > - once this works, you can recompile with MPI (if you really need it),
> like this:
> >
> > ${somepath}/tools/genmake2 -of
> ${somepath}/tools/build_options/linux_amd64_ifort -mods ../code -mpi
> > make CLEAN && make depend && make
> > (note that the extra flag "-mpi" is enough)
> >
> > and check if you get the same. For further help, you should record the
> potential error messages after each step.
> >
> > Martin
> >
> > PS. Some comments about your namelist below:
> >
> > > On 22. Aug 2019, at 12:39, Estanislao Gavilan Pascual-Ahuir <
> dramauh at hotmail.com> wrote:
> > >
> > > Hi Martin,
> > >
> > > Before anything, thank you so much for your help. I will try to answer
> all your questions.
> > >
> > > what is the platform, the compiler?
> > > The platform is Linux 2.6.32-431.TH.x86_64 GNU/Linux, Red Hat
> Enterprise Linux Server release 6.5. I am using Intel compilers wrapped
> in MPI. The compiler version is 14.0.2.
> > > details of the configuration (content of code-directory and namelist
> files)
> > > I am running a simple simulation with open boundaries. I load the
> packages gfd, obcs, mnc and diagnostics using packages.conf. The frequency
> of the open-boundary forcing is stated in the data file. This is the data
> file:
> > > # Model parameters
> > > # Continuous equation parameters
> > >  &PARM01
> > >  tRef=23.,23.,
> > >  sRef=35.,35.,
> > >  selectCoriMap=4,
> > >  viscAh=4.E2,
> > # with your grid choice (sphericalPolarGrid), the Coriolis parameter is
> computed and these values are not used.
> > >  f0=1.E-4,
> > >  beta=1.E-11,
> > >  rhoNil=1000.,
> > >  gBaro=9.81,
> > >  rigidLid=.FALSE.,
> > >  implicitFreeSurface=.TRUE.,
> > > # momAdvection=.FALSE.,
> > >  tempStepping=.FALSE.,
> > >  saltStepping=.FALSE.,
> > >  &
> > >
> > > # Elliptic solver parameters
> > >  &PARM02
> > >  cg2dTargetResidual=1.E-7,
> > >  cg2dMaxIters=1000,
> > >  &
> > >
> > > # Time stepping parameters
> > >  &PARM03
> > >  nIter0=0,
> > >  nTimeSteps=100,
> > >  deltaT=1200.0,
> > >  pChkptFreq=31104000.0,
> > >  chkptFreq=15552000.0,
> > >  dumpFreq=15552000.0,
> > # this will give you monitor output every timestep (which is what you
> want while debugging); later I would set it to something like 20-50 * deltaT
> > >  monitorFreq=1200.,
> > >  monitorSelect=2,
> > >  periodicExternalForcing=.TRUE.,
> > # this means that you will read data each time step. Is that what you
> want?
> > >  externForcingPeriod= 1200.,
> > # with your choice of externForcingPeriod, this requires that you have
> 10000 (= externForcingCycle/externForcingPeriod) records in the file(s)
> > >  externForcingCycle = 12000000.,
> > >  &
> > > # Gridding parameters
> > >  &PARM04
> > >  usingSphericalPolarGrid=.TRUE.,
> > # alternatively you can say dxSpacing = 1., dySpacing = 1.,
> > >  delX=43*1.,
> > >  delY=43*1.,
> > >  xgOrigin=-21.,
> > >  ygOrigin=-21.,
> > >  delR=2*500.,
> > >  &
> > >
> > > # Input datasets
> > >  &PARM05
> > >  bathyFile='bathy_cir.bin'
> > >  meridWindFile=,
> > >  &
> > >
> > > This is the data.obcs
> > >
> > > # Open-boundaries
> > >  &OBCS_PARM01
> > >  OBCSfixTopo=.FALSE.,
> > # if I understand the configuration correctly, you have a zonally
> reentrant channel with walls in the north and the south (python notation:
> bathy[0+2,:] = 0, and bathy[ny-1,:] = 0, except where you have the open
> boundaries)? you could actually save two grid rows (have 40 instead of 43
> points in the j-direction and set bathy[0,:]=0, bathy[ny,:]=0)
> > >  OB_Ieast=0,
> > >  OB_Iwest=0,
> > >  OB_Jnorth(16:28)=13*41,
> > >  OB_Jsouth(16:28)=13*3,
> > >  useOBCSprescribe = .TRUE.,
> > # These files should be found if they are in the same directory where you
> run your model. According to your dimensions and time parameters, they
> should each contain, for a 100-timestep run, 100 fields of dimension
> (nx,nz). For anything above 10000 timesteps, they should have 10000 fields
> (because after the 10000th record the model starts from the beginning
> again, according to your externForcingCycle)
> > >  OBNvFile = 'OBzonalV.bin',
> > >  OBSvFile = 'OBzonalV.bin',
> > >  OBNuFile = 'OBmeridU.bin',
> > >  OBSuFile = 'OBmeridU.bin',
> > # same as before, this will give you a lot of output. You may want to
> comment out this line, because OBCS_monitorFreq = monitorFreq by default
> > >  OBCS_monitorFreq=1200.00,
> > >  OBCS_monSelect = 1,
> > >  &
> > >
> > >  &OBCS_PARM02
> > >  &
> > > are you using the latest code (some of the flags in the build options
> file look very outdated)?
> > > Yes, it is the latest code (version MITgcm_c67k). As for the flags, I
> did not write the build options file myself; I used one that I found in
> our research group.
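> > >
> > > Putting the pieces together (a sketch only: nx=43, nz=2, nt=100 and
> > > the boundary points i=16..28 come from the namelists and the size
> > > estimate above, not from a tested setup), the forcing files could be
> > > written with full-width records like this:
> > >
> > > import numpy as np
> > > nx, nz, nt = 43, 2, 100
> > > v0 = 0.0041
> > > data = np.zeros((nt, nz, nx), dtype='>f4')   # big-endian float32
> > > for n in range(nt):
> > >     data[n, :, 15:28] = v0 * (n + 1) / nt    # 1-based points 16..28
> > > data.tofile('OBzonalV.bin')                  # 43*2*100*4 = 34400 bytes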
> > >
> > > Kind regards,
> > >
> > > Estanislao
> > >
> >
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 22 Aug 2019 09:33:18 -0400
> > From: Jean-Michel Campin <jmc at mit.edu>
> > To: mitgcm-support at mitgcm.org
> > Subject: Re: [MITgcm-support] Reading errors
> > Message-ID: <20190822133318.GA13562 at ocean.mit.edu>
> > Content-Type: text/plain; charset=us-ascii
> >
> > Hi Estanis,
> >
> > Just a small adjustment:
> > the standard optfile for the intel compiler (version 11 and newer) is:
> >  linux_amd64_ifort11
> > in MITgcm/tools/build_options
> > The optfile "linux_amd64_ifort" is for older versions (10 and older).
> >
> > However, if you are compiling with intel MPI (recent version of the
> compiler),
> > then you need to use: linux_amd64_ifort+impi
> >
> > Cheers,
> > Jean-Michel
> >
> >
> >
> > ------------------------------
> >
> > End of MITgcm-support Digest, Vol 194, Issue 12
> > ***********************************************
>
>
>
> ------------------------------
>
> End of MITgcm-support Digest, Vol 194, Issue 14
> ***********************************************

