[MITgcm-devel] JFNK

Jean-Michel Campin jmc at ocean.mit.edu
Wed Oct 17 18:21:17 EDT 2012


Hi Martin,

I think I fixed the problem with this:
> Modified Files:
>         seaice_readparms.F
> Log Message:
> fix previous modif (SOLV_MAX_ITERS was left to UNSET_I with SEAICE_ALLOW_JFNK undef)
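
(In other words, the point of that fix is to make sure SOLV_MAX_ITERS always
gets a default, whether or not SEAICE_ALLOW_JFNK is defined, so it is never
left at UNSET_I. The few lines below only sketch the idea; the value 1500 is
a placeholder, the actual default is whatever seaice_readparms.F sets.)

C     set the default outside any SEAICE_ALLOW_JFNK guard,
C     so SOLV_MAX_ITERS cannot be left at UNSET_I
      IF ( SOLV_MAX_ITERS .EQ. UNSET_I ) THEN
        SOLV_MAX_ITERS = 1500
      ENDIF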

I had to diff a couple of files, and I have a few questions regarding
1) CPP options in seaice_dynsolver.F:
  a) starting line 303:
#if (!defined ALLOW_AUTODIFF_TAMC && defined SEAICE_ALLOW_JFNK)
       IF ( SEAICEuseJFNK ) THEN
        CALL SEAICE_JFNK( myTime, myIter, myThid )
       ENDIF
#endif /* SEAICE_ALLOW_JFNK */

I would prefer just #ifdef SEAICE_ALLOW_JFNK, to avoid the problem of
missing pieces of code when building the adjoint (it happens more often than
one would imagine); or, if you want to make it very safe, you can add a STOP like this:
#ifdef SEAICE_ALLOW_JFNK
       IF ( SEAICEuseJFNK ) THEN
#ifdef ALLOW_AUTODIFF
         STOP
#else
         CALL ...
#endif
       ENDIF
#endif

  b) I don't understand the duplicated "defined(SEAICE_ALLOW_EVP)"
in lines 309 and 316:
#if defined(SEAICE_ALLOW_EVP) || defined(SEAICE_ALLOW_EVP) \

2) the printed message you changed in seaice_lsr.F:
do we expect ilcall to go beyond 9999? (in that case the other
print of ilcall would need to be changed too)
Or is I4 large enough? (in that case we could use I4 also
in the first printed message).
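
(As a side note, an I4 edit descriptor simply prints asterisks once the value
exceeds 9999; the tiny standalone example below is only for illustration and
is not code from seaice_lsr.F.)

      PROGRAM SHOW_I4
C     a value wider than the I4 field is printed as asterisks,
C     while a wider descriptor (here I6) still shows the number
      INTEGER ilcall
      ilcall = 10000
      WRITE(*,'(A,I4)') ' ilcall with I4:', ilcall
      WRITE(*,'(A,I6)') ' ilcall with I6:', ilcall
      END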

Cheers,
Jean-Michel

On Wed, Oct 17, 2012 at 06:15:58PM +0200, Martin Losch wrote:
> Hi Jean-Michel,
> 
> my check-ins hopefully do not affect the adjoint; the code is not even shown to TAF (in my adjoint testreport, lab_sea/build/seaice_dynsolver.f does not call seaice_jfnk, as it should not), because I do not think that it will ever be adjointable (maybe with some approximation).
> 
> Let me know if I can help. I can see recomputation problems, but in parts of the code that appear (superficially) unrelated to my modifications: dwnslp_apply, salt_plume_frac, gmredi_slope_limit, advect, seaice_lsr, exf_bulkformulae
> 
> The problem manifests itself in seaice_lsr.F, but just taking back my recent change (the printing of ilcall) does not change anything.
> 
> Martin
> 
> On Oct 17, 2012, at 4:41 PM, Jean-Michel Campin wrote:
> 
> > Hi Martin,
> > 
> > I have a few (at least 4) adjoint tests that started this morning
> > but do not finish when running lab_sea (but only the standard test;
> > the additional evp, noseaice and noseaicedyn tests are running fine).
> > 
> > Since it looks like you are the only one who checked in things to MITgcm
> > yesterday, it might well be related.
> > 
> > The 4 tests are the 3 adjoint tests run on the old aces cluster (32 bit,
> > with g77, ifort+mpi & open64) and the one adjoint test run on the new
> > aces cluster (64 bit, using pgi+mpi).
> > 
> > I will try to narrow down the problem first.
> > 
> > Cheers,
> > Jean-Michel
> > 
> > On Tue, Oct 16, 2012 at 11:37:05AM +0200, Martin Losch wrote:
> >> Hi there,
> >> 
> >> I checked in what I have, and I am working on a better parallel version of fgmres. In the meantime, the preconditioner is giving me a headache, so the entire system is not really in the best state ... 
> >> 
> >> M.
> >> 
> >> On Oct 15, 2012, at 5:18 PM, Jean-Michel Campin wrote:
> >> 
> >>> Hi Martin,
> >>> 
> >>> This is good news. Please check in the version you have.
> >>> We can figure out the multi-threaded issues later (if there are any).
> >>> I don't know much about alternative parallel algorithms, but maybe others do.
> >>> 
> >>> Cheers,
> >>> Jean-Michel
> >>> 
> >>> On Fri, Oct 12, 2012 at 04:43:37PM +0200, Martin Losch wrote:
> >>>> Hi Jean-Michel, and others
> >>>> 
> >>>> I have implemented a JFNK solver for the sea ice dynamics (following Lemieux et al., 2010, 2012). I have not run many tests yet, but I think that the serial version works.
> >>>> 
> >>>> Now I need to work on the parallel version and for that I need some help:
> >>>> 1. I have an MPI version running (almost) that will give you similar results for 1 or 2 CPUs, but I have not yet done the multithreading (I hope to have moved all the computation to myThid=1).
> >>>> 2. The JFNK requires an (F)GMRES implementation (with reverse communication). In the routine that I have, the orthogonalization of the Krylov subspace is done with a modified Gram-Schmidt method that requires many scalar products and thus global sums. This will cancel out any advantage that a JFNK might bring. The solution appears to be to use a different orthogonalization (I have seen Householder reflections mentioned), but I have absolutely no idea about these things. So far my search for available parallel FGMRES routines has not been successful.
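
(A side remark on the Gram-Schmidt point above: the cost in parallel comes
from the fact that every inner product against a basis vector needs its own
global reduction. The fragment below is only a sketch of one orthogonalization
step, with illustrative names, not the checked-in code.)

C     one modified Gram-Schmidt step inside (F)GMRES:
C     project the new Krylov vector w onto each basis vector v(:,i)
      DO i = 1, j
        hij = 0.
        DO k = 1, nLoc
          hij = hij + w(k)*v(k,i)
        ENDDO
C       one global sum per basis vector, i.e. j reductions per step
        _GLOBAL_SUM_RL( hij, myThid )
        DO k = 1, nLoc
          w(k) = w(k) - hij*v(k,i)
        ENDDO
      ENDDO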
> >>>> 
> >>>> In order to continue (especially with 1.), I would like to check the present version in, but since it is not fully functional, I will probably put a stop somewhere. Are you OK with that? Or should I wait until I find a good parallel FGMRES (maybe with your help)?
> >>>> 
> >>>> Martin