[MITgcm-devel] seaice dynamics in a 2-D set-up

Martin Losch Martin.Losch at awi.de
Tue Jun 23 05:45:31 EDT 2015


Hi Jean-Michel,

I have not been able to figure out the true reason for this, but this is what I have done (and found out) so far:

- turned the problem by 90 degrees, so that it is (800,1) instead of (1,800), and modified delY instead of delX; the problem remains (and looks very similar, although I am sure that there are issues with the forcing)
- in both cases the problem goes away when I don’t solve for uIce (independent of whether uIce is along or across the line)

- I increased NPSEUDOTIMESTEPS from 10 to 100 to 1000 and the differences reduce from O(0.012) to O(0.008) to O(0.002) (see the quick rate estimate after this list).
- I tried to run JFNK with a very loose nonlinear tolerance (JFNKgamma_nonlin = 1.e-1) and the differences become larger.
- I have tried turning off various parts of the equations. I only get the same solutions for different across-line grid spacings with SEAICE_strength = 0 (P*) together with SEAICEuseTilt = .FALSE. This is odd, because phiSurf is constant across the line, so those gradients should always be zero (I checked that).
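
The quick rate estimate promised above (just arithmetic on the quoted numbers, assuming the differences decay like a power law N^-p in NPSEUDOTIMESTEPS N):

    import math
    N = [10, 100, 1000]          # NPSEUDOTIMESTEPS
    d = [0.012, 0.008, 0.002]    # observed size of the differences
    for i in range(len(N) - 1):
        p = math.log(d[i] / d[i+1]) / math.log(N[i+1] / N[i])
        print(f"N={N[i]} -> {N[i+1]}: observed rate p ~ {p:.2f}")
    # gives p ~ 0.18 and p ~ 0.60, i.e. clearly sub-linear convergence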

So I would argue that with better convergence the differences get smaller, and that they are connected to the uIce equation, but not the vIce equation. That’s very odd. JFNK uses the same subroutines (seaice_lsr_calc_coefficients, seaice_lsr_rhsu/v) to set up the LSR preconditioner, but the structure of the differences is very different. Keep in mind that the preconditioner only provides very approximate solutions, and that the true system is determined by the calculation of the residual F(u) = A(u)*u - b(u), so a small bug in seaice_lsr_calc_coefficients could be a problem that JFNK does not care about too much.
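
To illustrate that last point, here is a toy sketch (plain Python, nothing to do with the actual MITgcm code): a Picard iteration that solves with a slightly wrong coefficient converges to the wrong answer, while a Newton iteration on the true residual F(u) = A(u)*u - b still finds the true root; in JFNK the buggy coefficients would only enter through the preconditioner, which can change the speed but not the answer.

    def A(u):     return 2.0 + u * u         # "true" nonlinear coefficient
    def A_bad(u): return 2.0 + 1.05 * u * u  # coefficient with a small bug
    b = 3.0
    def F(u): return A(u) * u - b            # true residual, root at u = 1

    u = 1.5                                  # Picard: solve with buggy A
    for _ in range(50):
        u = b / A_bad(u)
    print("Picard w/ buggy A:", u, "  true residual:", F(u))

    u, eps = 1.5, 1.0e-7                     # Newton on the true residual F,
    for _ in range(50):                      # derivative by finite differences
        u -= F(u) / ((F(u + eps) - F(u)) / eps)
    print("Newton on true F: ", u, "  true residual:", F(u))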

I’ll keep trying, but maybe this gives you an idea what’s going on? I am puzzled by this.

Martin

> On 19 Jun 2015, at 01:28, Jean-Michel Campin <jmc at ocean.mit.edu> wrote:
> 
> Hi Martin,
> 
> Have not tried to turn off Coriolis yet (I don't think we have an option
> to turn it off for just the seaice dynamics, right?) but will do.
> 
> Will send the set-up to you.
> 
> One thing I forgot (nothing to do with seaice, but good to keep in mind)
> is that I was using the Prather advection scheme for T & S for the 2-year spin-up
> and had to re-scale the SOM pickup files when changing delX,
> since they contain grid-cell integrated 1st & 2nd moments.
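> 
> A minimal sketch of that rescaling (the file name and dtype here are
> assumptions, not the actual pickup format): since the moments are stored
> as grid-cell integrals, changing only delX just scales every moment
> field by the cell-area ratio.
> 
>     import numpy as np
>     scale = 300.0                         # delX_new / delX_old, delY fixed
>     som = np.fromfile("pickup_somTS.0000000001.data", dtype=">f8")
>     (som * scale).tofile("pickup_somTS.rescaled.data")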
> 
> Cheers,
> Jean-Michel
> 
> On Wed, Jun 17, 2015 at 05:03:03PM +0200, Martin Losch wrote:
>> Hi Jean-Michel,
>> 
>> what a nice experiment!
>> 
>> I think that JFNK fixes it because it really converges, whereas the Picard solver with LSR does not.
>> 
>> Why would the solution (or the path to the solution) depend on the size of the irrelevant delX? The LSR preconditioner in the JFNK solver uses the same subroutines as seaice_lsr for constructing the LSR coefficient matrix, the rhs, and for solving the tridiagonal systems (SEAICE_LSR_CALC_COEFFS, SEAICE_LSR_RHSU/V, SEAICE_LSR_TRIDIAGU/V).
>> 
>> What happens if you turn off Coriolis?
>> 
>> Can you send me your configuration? I’ll be offline tomorrow morning with a lot of time on my hands, so I might be able to have a look at this myself.
>> 
>> Martin
>> 
>>> On 16 Jun 2015, at 19:01, Jean-Michel Campin <jmc at ocean.mit.edu> wrote:
>>> 
>>> Hi Martin,
>>> 
>>> I did a few more tests with this 2-D (y-z) set-up and found something
>>> curious when I change delX: I use monthly-mean forcing, run for 2 years
>>> with delX=delY, then do a short run (10 iterations, deltaT=20 min) with
>>> delX=300*delY and compare with the same run with delX=delY.
>>> 
>>> Normally, changing delX should not affect the solution. However, the cg2d solver
>>> is affected and requires more iterations with small delX than with large delX,
>>> but converges to the same solution (I can also modify ini_cg2d.F to remove
>>> this dependency on delX and get exactly the same cg2d convergence).
>>> 
>>> Regarding seaice dynamics, I was running with default LSR_ERROR (=1.e-4),
>>> default NPSEUDOTIMESTEPS (=2) and SEAICE_EPS = 1.e-9.
>>> When I increase delX (by a factor of 300), more iterations (linear solver) are needed
>>> initially, but then it goes back (after ~10 model time-steps) to behavior similar to
>>> the delX=delY case; the solution (i.e., uIce, vIce), however, is different
>>> from the delX=delY case.
>>> I have attached a plot (uIce, vIce as a function of grid-point index j) after 5 iterations:
>>> s04b is with delX=delY and s05b is with delX=300*delY.
>>> 
>>> If I increase NPSEUDOTIMESTEPS to 10, the differences are similar but
>>> develop more rapidly: after 1 model time-step, they reach the same differences
>>> as the NPSEUDOTIMESTEPS=2 case did after 10 time-steps.
>>> 
>>> I also tried JFNK (SOLV_MAX_ITERS=10, SEAICEnewtonIterMax=200, SEAICEkrylovIterMax=50)
>>> and the 2 solutions with different delX are now very similar (diff < 1.e-8 or so).
>>> 
>>> So, it seems that this 2-D (Nx=1) set-up has some problems
>>> regarding a spurious dependence on delX, but JFNK does fix it!
>>> 
>>> Cheers,
>>> Jean-Michel
>>> 
>>> On Thu, Mar 05, 2015 at 09:44:02AM +0100, Martin Losch wrote:
>>>> Hi Jean-Michel,
>>>> 
>>>> the default for the LSR convergence criterion is this:
>>>> LSR_ERROR  = 0.0001    _d 0
>>>> and that’s what I normally use. I think that you have now set it to 2e-9? That’s not what I meant; I suggested setting the parameter SEAICE_EPS = 2e-9, which will make the problem less stiff.
>>>> 
>>>> In general, the linear solver converges very slowly, too, but that’s not so important (although I have tried, unsuccessfully, to improve that). What is more important is the convergence of the nonlinear solver (the actual residuals). If you have only a limited number N of iterations to spend, I suggest it makes more sense to have a “large” LSR_ERROR (like the default 1e-4), so that only very few iterations are spent in the linear solver, and to make more nonlinear iterations instead (NPSEUDOTIMESTEPS=20 or more).
>>>> 
>>>> There is more that you can try:
>>>> SEAICEuseStrImpCpl = .TRUE. (Hutchings et al. 2005)
>>>> 
>>>> if your OLx/y > 2, you can try SEAICE_OLx/y = OLx/y - 2 to make the restricted additive Schwarz method that we use smoother (see the toy sketch below).
>>>> 
>>>> This usually does not help very much, but it’s worth a try.
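>>>> 
>>>> A toy 1-D restricted additive Schwarz sketch (plain Python, nothing to
>>>> do with the actual MITgcm tiling; it only illustrates the role of the
>>>> overlap): each "tile" solves on its subdomain extended by olap points,
>>>> but writes back only its own interior points; more overlap converges
>>>> faster.
>>>> 
>>>>     import numpy as np
>>>>     n = 40
>>>>     A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Poisson
>>>>     b = np.ones(n)
>>>>     tiles = [(0, n//2), (n//2, n)]
>>>>     for olap in (1, 4):
>>>>         u = np.zeros(n)
>>>>         for _ in range(30):                 # stationary RAS iteration
>>>>             r = b - A @ u
>>>>             du = np.zeros(n)
>>>>             for lo, hi in tiles:
>>>>                 le, he = max(lo-olap, 0), min(hi+olap, n)  # extend
>>>>                 dsub = np.linalg.solve(A[le:he, le:he], r[le:he])
>>>>                 du[lo:hi] = dsub[lo-le:hi-le]   # restricted write-back
>>>>             u += du
>>>>         print(f"olap={olap}: residual norm {np.linalg.norm(b - A@u):.2e}")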
>>>> 
>>>> 
>>>> JFNK: the linear solver for JFNK is different (FGMRES), and LSR is only the (poor, but best available) preconditioner. I suggest this for practical purposes:
>>>> 
>>>>     SEAICEnewtonIterMax = 200,
>>>>     SEAICEkrylovIterMax = 50,
>>>>     JFNKgamma_nonlin    = 1. _d -05    # default
>>>>     JFNKgamma_lin_min   = 0.10 _d 0   # default
>>>>     JFNKgamma_lin_max   = 0.99 _d 0   # default
>>>>     JFNKres_tFac        = 0.5
>>>>     SEAICE_JFNKepsilon  = 1. _d -06 # default
>>>>     SEAICE_JFNKalpha    = 1.5, # a bit more aggressive than the default
>>>> 
>>>> Martin
>>>> 
>>>>> On 04 Mar 2015, at 21:23, Jean-Michel Campin <jmc at ocean.mit.edu> wrote:
>>>>> 
>>>>> Hi Martin,
>>>>> 
>>>>> Thanks for the suggestions: I increased the tolerance (from 1e-12 before to
>>>>> 2e-9 as you suggested) and it reduces the number of "non converging" messages
>>>>> that I am getting; it also runs a little bit faster.
>>>>> 
>>>>> Now, I am still not sure that the linear solver converges well enough.
>>>>> In fact, in this latest run, the report that I am getting from SEAICE_LSR
>>>>> seems to indicate no real convergence. For instance:
>>>>> SEAICE_LSR: Residual Initial ipass,Uice,Vice=    1  7.74384419E-03  1.27959541E-02
>>>>> SEAICE_LSR (ipass=   1) iters,dU,Resid=    98  1.84258913E-09  7.73824698E-03
>>>>> SEAICE_LSR (ipass=   1) iters,dV,Resid=   448  1.99455537E-09  1.27569603E-02
>>>>> SEAICE_LSR: Residual Initial ipass,Uice,Vice=    2  7.73904339E-03  1.27591200E-02
>>>>> SEAICE_LSR (ipass=   2) iters,dU,Resid=    90  1.91089900E-09  7.73729344E-03
>>>>> SEAICE_LSR (ipass=   2) iters,dV,Resid=   114  1.99557763E-09  1.27661753E-02
>>>>> the final residual is in fact very close (sometimes smaller, sometimes larger)
>>>>> to the initial residual.
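>>>>> 
>>>>> A throwaway sketch to quantify that from the log lines above (the
>>>>> regexes simply match the SEAICE_LSR message format as printed there):
>>>>> 
>>>>>     import re
>>>>>     log = """
>>>>>     SEAICE_LSR: Residual Initial ipass,Uice,Vice=    1  7.74384419E-03  1.27959541E-02
>>>>>     SEAICE_LSR (ipass=   1) iters,dU,Resid=    98  1.84258913E-09  7.73824698E-03
>>>>>     SEAICE_LSR (ipass=   1) iters,dV,Resid=   448  1.99455537E-09  1.27569603E-02
>>>>>     """
>>>>>     u0, v0 = re.search(r"Uice,Vice=\s+\d+\s+(\S+)\s+(\S+)", log).groups()
>>>>>     uf = re.search(r"dU,Resid=\s+\d+\s+\S+\s+(\S+)", log).group(1)
>>>>>     vf = re.search(r"dV,Resid=\s+\d+\s+\S+\s+(\S+)", log).group(1)
>>>>>     print("U residual reduction:", float(uf) / float(u0))   # ~0.9993
>>>>>     print("V residual reduction:", float(vf) / float(v0))   # ~0.9970
>>>>>     # the solver stops on the dU/dV increment criterion while the
>>>>>     # residual has barely moved at all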
>>>>> 
>>>>> Regarding the JFNK option: I am using the default number of non-linear
>>>>> solver iterations (i.e., 2) and I imagined that using JFNK would make sense
>>>>> with more non-linear iterations (-> increased cost) but would not fix the
>>>>> linear solver convergence.
>>>>> 
>>>>> And finally, regarding Nx=1: since some parts of the solver (only the
>>>>> implicit tridiagonal part?) are not tile independent, I am not sure that
>>>>> when I have Nx=1 I really zero out all derivatives in the X direction.
>>>>> I tried SEAICEuseMultiTileSolver=T (but I don't trust this code too much
>>>>> because I know who wrote it) and this did not help.
>>>>> 
>>>>> Cheers,
>>>>> Jean-Michel
>>>>> 
>>>>> On Wed, Mar 04, 2015 at 06:07:27PM +0100, Martin Losch wrote:
>>>>>> Hi Jean-Michel,
>>>>>> 
>>>>>> do you think that Ny=1 is the problem? I don’t think so, because the LSR solver always converges very slowly. I see the same problem when I start from a horizontally homogeneous ice distribution. It takes a while before the ice starts moving, i.e., from uIce=vIce=0 the solver does not do anything and hence doesn’t cost anything, but once the ice starts moving, LSR picks up, does not converge, and uses a lot of time.
>>>>>> 
>>>>>> If you want a converging (but still expensive) solver, use JFNK (for a start, the parameters in offline_exf_seaice.dyn_jfnk should be OK; also have a look at SEAICE_OPTIONS.h!).
>>>>>> 
>>>>>> Also, convergence is always improved with a larger seaice_eps (the default is 1e-11, but 2e-9 is used very often) and with a smoother/better regularization (not implemented: deltaCreg = deltaC + seaice_eps).
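>>>>>> 
>>>>>> A toy comparison of the two regularizations (I am assuming here that
>>>>>> the existing one is effectively a hard MAX(deltaC, eps), which is
>>>>>> what the "smoother" remark alludes to): MAX has a kink at
>>>>>> deltaC = eps, while deltaC + eps is smooth there, which is friendlier
>>>>>> to derivative-based solvers.
>>>>>> 
>>>>>>     eps = 2e-9
>>>>>>     for d in (0.0, 0.5e-9, 1e-9, 2e-9, 4e-9, 1e-8):
>>>>>>         print(f"deltaC={d:.1e}  max-reg={max(d, eps):.3e}"
>>>>>>               f"  sum-reg={d + eps:.3e}")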
>>>>>> 
>>>>>> Martin
>>>>>>> On 04 Mar 2015, at 17:55, Jean-Michel Campin <jmc at ocean.mit.edu> wrote:
>>>>>>> 
>>>>>>> Hi Martin and others,
>>>>>>> 
>>>>>>> I am trying to run a simple oceanic 2-D (Y-Z) set-up with seaice dynamics.
>>>>>>> It starts without seaice, and before seaice forms, the LSR solver
>>>>>>> converges (not just doing 1 iteration, but within 10-12 iterations).
>>>>>>> 
>>>>>>> But as soon as there is seaice, it stops converging and
>>>>>>> more than half of the CPU time is spent in SEAICE_DYNSOLVER
>>>>>>> (despite the fact that I have 50 levels + the SOM advection scheme).
>>>>>>> 
>>>>>>> I can reduce the max number of LSR iterations (SOLV_MAX_ITERS);
>>>>>>> that is faster, but still not great.
>>>>>>> I tried SEAICEuseMultiTileSolver=T and SEAICE_OLx,y=1 separately,
>>>>>>> but the improvement is very marginal.
>>>>>>> 
>>>>>>> I am tempted to reduce SOLV_MAX_ITERS even further, but I start to worry
>>>>>>> about what the ice velocity will be.
>>>>>>> Is there anything I could try?
>>>>>>> And should we think of some code modification for the specific case
>>>>>>> where Nx=1 or Ny=1?
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> Jean-Michel
>>>>>>> 
>>> <ice_uv_45b.it5.jpg>