[MITgcm-support] Issue running cpl_aim+ocn test case

Christopher O'Reilly christopher.oreilly at physics.ox.ac.uk
Fri Jun 28 14:54:45 EDT 2019


Hi Jean-Michel,


Thanks for your reply.


I tested the MPI commands and found one that worked to run the multiple processes, so I now have it up and running.


Do you have a feel for the best settings to optimise the speed-up of the coupled setup, i.e. how many processes to use for the ocean and the atmosphere?
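
For example, would splitting the processes something like this be a reasonable starting point? (Just a guess on my part, and I assume the tiling in each SIZE.h would need to be adjusted to match the number of processes.)

  > mpirun -np 1 ./build_cpl/mitgcmuv : -np 4 ./build_ocn/mitgcmuv : -np 4 ./build_atm/mitgcmuv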


Thanks again,


Chris


________________________________
From: MITgcm-support <mitgcm-support-bounces at mitgcm.org> on behalf of Jean-Michel Campin <jmc at mit.edu>
Sent: 28 June 2019 18:48:08
To: mitgcm-support at mitgcm.org
Subject: Re: [MITgcm-support] Issue running cpl_aim+ocn test case

Hi Chris,

The numbers that are printed out (especially: MPI_GROUP_World= -2013265920)
do not look too good.

Here is what I would suggest:

1) provide more info regarding:
   a) platform/computer/OS
   b) which compiler are you using and which version
   c) which MPI are you using (+ version ?)
 This might help in figuring out what is wrong (see the example commands below, after item 2).

2) Did you try, using the same compiler and MPI (same "modules" if using modules)
  and the same optfile, to run a simple verification experiment with MPI?
  You can even try with testreport:
  > cd verification
  > ./testreport -MPI 4 -of {same optfile} -t global_ocean.cs32x15
  Just to make sure everything works well in the same environment but without
  coupling interfaces.
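
For item 1, something like the commands below would show the relevant versions (just an example; "module list" only applies if your system uses environment modules, and your Fortran compiler may be something other than gfortran):
  > uname -a
  > gfortran --version
  > mpif90 --version
  > mpirun --version
  > module list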

Cheers,
Jean-Michel

On Wed, Jun 26, 2019 at 04:22:21PM +0000, Christopher O'Reilly wrote:
> Hi,
>
>
> I am trying to get the "cpl_aim+ocn" verification case running.
>
>
> The following step runs and successfully produces the 3 executables (well it seems to anyway):
>
>
> ./run_cpl_test 1 -of $OPTFILE
>
>
> I then run steps 2 and 3, but on running step 3 I get the following output:
>
>
> execute 'mpirun -np 1 ./build_cpl/mitgcmuv : -np 1 ./build_ocn/mitgcmuv : -np 1 ./build_atm/mitgcmuv' :
>  MITCPLR_init1:            0  Coupler Bcast cbuf=Coupler                         x
>  MITCPLR_init1:            0  Coupler coupler=Coupler                                    0
>  MITCPLR_init1:            0  Coupler MPI_Comm_group MPI_GROUP_World= -2013265920  ierr=           0
>  MITCPLR_init1:            0  Coupler component num=           1  MPI_COMM=  1140850688  1140850688
> At line 15 of file mitcplr_initcomp.f
> Fortran runtime error: Actual string length is shorter than the declared one for dummy argument 'carg' (8/32)
> EESET_PARMS: Unable to open parameter file "eedata"
> EESET_PARMS: Unable to open parameter file "eedata"
> EESET_PARMS: Error reading parameter file "eedata"
> EESET_PARMS: Error reading parameter file "eedata"
> EEDIE: earlier error in multi-proc/thread setting
> PROGRAM MAIN: ends with fatal Error
> STOP ABNORMAL END: PROGRAM MAIN
> EEDIE: earlier error in multi-proc/thread setting
> PROGRAM MAIN: ends with fatal Error
> STOP ABNORMAL END: PROGRAM MAIN
> --------------------------------------------------------------------------
> mpirun noticed that the job aborted, but has no info as to the process
> that caused that situation.
> --------------------------------------------------------------------------
>
> I'm sure this is probably something obvious, but I'm not sure why it's not finding the "eedata" files (which seem to be located in the input_atm and input_ocn folders).
>
> Has anyone encountered this before?
>
> Kindest regards,
>
> Chris
>

_______________________________________________
MITgcm-support mailing list
MITgcm-support at mitgcm.org
http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support