<div dir="ltr"><div dir="auto"><div dir="ltr"><div>Dear Martin, dear Senja<br></div><div><br></div><div>thank you very much for your thorough explanation and for the suggestions on the parameters and compilation options. <br>The tiles now are now smoothly connected! <br></div><div>I still have issues though, i.e. the simulation starts generating Nans and i don't understand whether it is something related to sea ice or not. I'll try to figure it out.<br></div><div>Thank you,<br></div><div><br></div><div>Enrico<br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Il giorno mar 5 nov 2019 alle ore 11:59 Martin Losch <<a href="mailto:Martin.Losch@awi.de" rel="noreferrer" target="_blank">Martin.Losch@awi.de</a>> ha scritto:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Enrico,<br>
<br>
I think you are the first to use the “Krylov solver” (SEAICEuseKrylov=.TRUE.,) with this model in a realistic configuration (other than tests that I did during the implementation). Note that we have very little experience with it. In fact, my experience is that it actually does very well in terms of parallelisation (in contrast to what you are seeing), but that using it slows down the simulation. If you want to continue with this solver, then you need to follow Senja’s advice and comment out<br>
SEAICE_Olx = 0,<br>
SEAICE_Oly = 0,<br>
in your data.seaice (in fact, the entire “old defaults” section should not be in a new run). That should fix the problem. Here’s an attempt at an explanation: the Krylov solver uses the LSR solver as a preconditioner (see Lemieux et al. 2009 and Lemieux et al. 2010 for an explanation), but there we never check convergence and let it run for only a few steps (you’ll find the default SEAICElinearIterMax = 10 in STDOUT.0000), and LSR_ERROR is never used (that’s why changing it doesn’t help). This makes the preconditioner solution very stripy, and this probably carries over to the actual iteration that is done within SEAICE_KRYLOV and SEAICE_FGMRES. Using the default values for SEAICE_OLx/y will make this problem smaller, so that the Krylov solver has a good chance of converging to the correct field. The nonlinear iteration will never converge with just SEAICEnonLinIterMax = 10; see STDOUT.0000, e.g. like this:<br>
<br>
>> grep %KRYLOV_MON STDOUT.0000 <br>
(PID.TID 0000.0001) %KRYLOV_MON: time step = 8<br>
(PID.TID 0000.0001) %KRYLOV_MON: Nb. of time steps = 8<br>
(PID.TID 0000.0001) %KRYLOV_MON: Nb. of Picard steps = 80<br>
(PID.TID 0000.0001) %KRYLOV_MON: Nb. of Krylov steps = 80<br>
(PID.TID 0000.0001) %KRYLOV_MON: Nb. of Picard failures = 8<br>
(PID.TID 0000.0001) %KRYLOV_MON: Nb. of Krylov failures = 0<br>
<br>
There are as many Picard failures as steps (while the Krylov solver for the linear subproblem converges). In the end this does not matter, because we normally don’t want to spend the time to fully solve the nonlinear problem.<br>
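<br>
In other words, if you stay with the Krylov solver, the relevant lines in data.seaice would be something like this (just a sketch of the two changes discussed above; everything else stays as you have it):<br>
<br>
 SEAICEuseKrylov = .TRUE.,<br>
# SEAICE_Olx = 0, ! commented out, so the default (OLx-2) is used<br>
# SEAICE_Oly = 0, ! same for OLy<br>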
I would use these CPP flags:<br>
# undef SEAICE_PRECOND_EXTRA_EXCHANGE<br>
# define SEAICE_DELTA_SMOOTHREG<br>
# define SEAICE_LSR_ZEBRA<br>
with the Krylov solver (the last one is a good idea in many cases).<br>
<br>
I would be very interested in what the solution looks like when you don’t set SEAICE_Olx/y to zero (use the default = Olx/Oly-2). If the “inter-tile discontinuities” remain, there’s a problem with the solver which I would need to debug.<br>
<br>
Alternatively you can use the default LSR solver (as most people do), with these parameters<br>
<br>
SEAICEuseKrylov=.FALSE., ! default<br>
#- seaice dynamics params:<br>
LSR_ERROR = 1.E-5,<br>
SEAICEnonLinIterMax=10,<br>
# SEAICE_Olx = 0,<br>
# SEAICE_Oly = 0,<br>
<br>
Both the Krylov and the LSR solver are really a Picard (or fixed-point) iteration, where the linearized subproblem is solved either with a Krylov method (FGMRES) or with LSR (line successive over-relaxation). The disadvantage of the LSR solver is that it is not very well parallelized, and I recommend at least LSR_ERROR=1e-5 or smaller (I am working on new defaults for this solver). The advantage is that it is simple and relatively fast. With this solver it also makes sense to use non-zero overlaps (although this is not essential). If sea ice is not the focus of your science question, then I recommend using this default solver (but not with the default value of LSR_ERROR).<br>
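<br>
Schematically (just a sketch of the idea; see the Lemieux papers for details), each nonlinear (Picard) step k solves a linearized momentum equation<br>
<br>
 A(u^{k-1}) u^{k} = b(u^{k-1}),<br>
<br>
where the matrix A contains the viscosities evaluated at the previous iterate u^{k-1}. That linear solve is what LSR or FGMRES does; SEAICEnonLinIterMax limits the number of outer steps k.<br>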
<br>
Martin<br>
<br>
PS: please consider removing the entire “old defaults” section in your data.seaice, unless you need to keep these parameters. Here are further comments on data.seaice (because you used an example from one of the tests, you carried over many parameters that were put there just to test some parts of the code); they are unrelated to your problem.<br>
<br>
#- seaice dynamics params<br>
LSR_ERROR = 1.E-12, ! way too small, just for testing<br>
# LSR_mixIniGuess=1 : compute free-drift residual; =2,4 mix into initial guess<br>
LSR_mixIniGuess = 1, ! doesn’t do anything but compute some free-drift residual, can be removed<br>
# add small diffKh to test diffusion with multi-dim advect.<br>
SEAICEdiffKhArea = 20., ! diffusion of sea ice does not make sense with a stable advection scheme, so use the default of 0.<br>
#- seaice thermodyn params:<br>
SEAICE_multDim = 1, ! here you can use 7 (see documentation)<br>
#- constant seawater freezing point: <br>
SEAICE_tempFrz0 = -1.96,<br>
SEAICE_dTempFrz_dS = 0., ! why do you want a constant freezing point of sea water??<br>
#- to reproduce old results with former #defined SEAICE_SOLVE4TEMP_LEGACY code<br>
useMaykutSatVapPoly = .TRUE., ! I would use the default, this is just for testing the old polynomial code<br>
postSolvTempIter = 0,<br>
! these albedos are very high and the result of a tuning experiment with specific forcing (An Nguyen et al. 2011); I would reconsider them in the context of your forcing<br>
SEAICE_dryIceAlb = 0.8756,<br>
SEAICE_wetIceAlb = 0.7856,<br>
SEAICE_drySnowAlb = 0.9656,<br>
SEAICE_wetSnowAlb = 0.8256,<br>
! default is 27.5e3, so not really very different<br>
SEAICE_strength = 2.6780e+04,<br>
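<br>
Taken together, the thermodynamic part of your data.seaice could be reduced to something like this (only a sketch; keep anything you tuned on purpose, and decide about the albedos in the context of your forcing):<br>
<br>
#- seaice thermodyn params:<br>
 SEAICE_multDim = 7,<br>
# constant-freezing-point, useMaykutSatVapPoly, postSolvTempIter and albedo entries removed, i.e. left at their defaults<br>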
<br>
Also there’s a lot of diffusion of tracer (1e3) and momentum (1e4) in your “data” for a grid spacing of 5 km. You can get away without any explicit tracer diffusion by using a positive (flux-limited) advection scheme, e.g.<br>
tempAdvScheme = 33,<br>
saltAdvScheme = 33,<br>
(or 77, 7)<br>
use these together with<br>
staggerTimeStep = .TRUE.,<br>
<br>
You can probably reduce your viscosity dramatically. I normally use the nondimensional form viscAhGrid, which needs to be smaller than 1, but order 0.01 is often good (that’s about viscAh = 416 m^2/s), or even the biharmonic version of this (viscA4Grid). There is also flow-dependent viscosity (e.g. Leith), see the documentation.<br>
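<br>
For example, the corresponding entries in “data” could then look something like this (only a sketch; the exact viscosity value depends on your setup, and viscA4Grid or a Leith scheme may serve you better):<br>
<br>
 tempAdvScheme = 33,<br>
 saltAdvScheme = 33,<br>
 staggerTimeStep = .TRUE.,<br>
 diffKhT = 0.,<br>
 diffKhS = 0.,<br>
 viscAhGrid = 0.01, ! instead of the large constant viscAh<br>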
<br>
> On 4. Nov 2019, at 15:00, Pochini, Enrico <<a href="mailto:epochini@inogs.it" rel="noreferrer" target="_blank">epochini@inogs.it</a>> wrote:<br>
> <br>
> Hi Martin,<br>
> <br>
> thank you for your answer. I used the data.seaice and SEAICE_OPTIONS.h from verification/lab_sea. I've tried to set the parameters as you said and I also tried with SEAICEuseKrylov=.TRUE., which automatically sets LSR_ERROR=1.E-12 and SEAICEnonLinIterMax=10. However I still have the issue.<br>
> I attach a link to the code and namelist which I used. In STDOUT it seems that the solver for the sea ice dynamics does not converge. <br>
> Eventually the model crashes...<br>
> <br>
> Enrico<br>
> <br>
> <a href="http://results72.nc" rel="noreferrer noreferrer" target="_blank">results72.nc</a><br>
> <br>
> <a href="http://results_seaice72.nc" rel="noreferrer noreferrer" target="_blank">results_seaice72.nc</a><br>
> <br>
> <a href="http://sysdiag_48.nc" rel="noreferrer noreferrer" target="_blank">sysdiag_48.nc</a><br>
> <br>
> SEAICE_OPTIONS.h<br>
> <br>
> SEAICE_SIZE.h<br>
> <br>
> data.seaice<br>
> <br>
> data<br>
> <br>
> STDOUT.0000<br>
> <br>
> job_err.err<br>
> <br>
> On Tue, 29 Oct 2019 at 17:17, Martin Losch <<a href="mailto:Martin.Losch@awi.de" rel="noreferrer" target="_blank">Martin.Losch@awi.de</a>> wrote:<br>
> Hi Enrico,<br>
> <br>
> this is most likely linked to the dynamics solver. For historical reasons, the default parameters are not very fortunate. Without knowing your “data.seaice” and “SEAICE_OPTIONS.h”, I am guessing that you can fix this problem by reducing the linear tolerance (by setting LSR_ERROR to values of 1.e-5 or smaller).<br>
> I also suggest increasing the number of nonlinear iterations to 10 (SEAICEnonLinIterMax, default is 2). These unfortunate defaults are part of an issue (<<a href="https://github.com/MITgcm/MITgcm/issues/171" rel="noreferrer noreferrer" target="_blank">https://github.com/MITgcm/MITgcm/issues/171</a>>) on github, but I never got around to changing them to something more sensible (and nobody seems to care).<br>
> <br>
> Alternatively you could use the EVP code, but then make sure that you use at least mEVP (again, the defaults are not great), see the documentation (<<a href="https://mitgcm.readthedocs.io/en/latest/phys_pkgs/seaice.html#more-stable-variants-of-elastic-viscous-plastic-dynamics-evp-mevp-and-aevp" rel="noreferrer noreferrer" target="_blank">https://mitgcm.readthedocs.io/en/latest/phys_pkgs/seaice.html#more-stable-variants-of-elastic-viscous-plastic-dynamics-evp-mevp-and-aevp</a>>)<br>
> <br>
> Martin<br>
> <br>
> > On 29. Oct 2019, at 14:42, Pochini, Enrico <<a href="mailto:epochini@inogs.it" rel="noreferrer" target="_blank">epochini@inogs.it</a>> wrote:<br>
> > <br>
> > Dear all,<br>
> > <br>
> > (I've attached the file as gdrive link..)<br>
> > I'm having issues with the SEAICE module, I guess. After I activated it, I started to observe discontinuities at tile edges where sea ice is present. The discontinuities grow in time. I see them in surface fields and in the shallower levels of 3D fields.<br>
> > What could cause this issue?<br>
> > <br>
> > Thanks,<br>
> > <br>
> > Enrico<br>
> > <a href="http://results24.nc" rel="noreferrer noreferrer" target="_blank">results24.nc</a><br>
> <br>
<br>
_______________________________________________<br>
MITgcm-support mailing list<br>
<a href="mailto:MITgcm-support@mitgcm.org" rel="noreferrer" target="_blank">MITgcm-support@mitgcm.org</a><br>
<a href="http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support" rel="noreferrer noreferrer" target="_blank">http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support</a><br>
</blockquote></div>