[MITgcm-devel] (not so) funny things happen in seaice_lsr and pickups
Martin Losch
Martin.Losch at awi.de
Wed Mar 4 11:55:06 EST 2009
Hi Jinlun,
this is probably a question/remark for you.
I have to admit that I never looked at the details of the LSOR
solution technique, but as far as I can see, it probably works best
when the "tridiagonal" system is diagonally dominant (so abs(B)>abs(A)
and abs(B)>abs(C)), right?
I have now encountered situations where one of the above conditions is
not met, and this is exactly when the solver does not converge. I get
these cases in physically plausible situations, e.g. at an ice edge
where P/zeta/eta is large in one grid point and nearly zero in the
neighboring grid point, or where P/zeta/eta are nearly zero in an
isolated grid point, which can easily happen when a closed ice cover
starts opening up due to wind stress, etc.
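To make that concrete, here is a small standalone toy (not the actual
LSR scheme from seaice_lsr.F; all names and coefficients are made up)
that applies a simple point relaxation to a tridiagonal system
A(i)*u(i-1) + B(i)*u(i) + C(i)*u(i+1) = rhs(i). With one "ice edge"
row where B nearly vanishes, the sweep diverges:

program toy_relaxation
  implicit none
  integer, parameter :: n = 5
  real(8) :: a(n), b(n), c(n), rhs(n), u(n), unew(n)
  integer :: i, iter
  ! a healthy, diagonally dominant stencil everywhere ...
  a = -1.d0 ; c = -1.d0 ; b = 2.5d0 ; rhs = 1.d0
  a(1) = 0.d0 ; c(n) = 0.d0
  ! ... except point 3: mimics an ice edge where P/zeta/eta, and hence
  ! the diagonal B, nearly vanish, so abs(B) < abs(A), abs(C)
  b(3) = 1.d-2
  u = 0.d0
  do iter = 1, 30
     do i = 1, n
        unew(i) = rhs(i)
        if (i > 1) unew(i) = unew(i) - a(i)*u(i-1)
        if (i < n) unew(i) = unew(i) - c(i)*u(i+1)
        unew(i) = unew(i)/b(i)
     end do
     u = unew
     ! the innovation grows instead of shrinking: divergence
     print '(a,i4,a,es12.4)', ' iter', iter, '  max|u| =', maxval(abs(u))
  end do
end program toy_relaxation

With b(3)=2.5d0 instead, the same loop settles down after a few
iterations.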
There is a part in the solver code that has a comment (probably
originally yours): "SAFEGUARD AGAINST BAD FORCING ETC." which stops
the iteration if the innovation u(1)-u(3) increases with iteration
(instead of decreasing), indicating divergence. Please confirm or
correct my speculation that this test anticipates that the solver
does not and will not converge in a few situations, so we end up with
potentially bad ice velocities in a few grid points, but since the ice
velocities are clipped (to 40 cm/s), in the next time step the forcing
situation may be different and the model recovers. A different,
probably more drastic, fix would be this: check if
abs(B)>abs(A),abs(C), and if not, reset everything to B=1, A=C=0, and
rhs=U, so that at this point the old ice velocity is used.
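In code, that reset could look something like this sketch (standalone,
with made-up array names; the actual coefficients live in
seaice_lsr.F):

subroutine reset_weak_rows(n, a, b, c, rhs, uOld)
  ! wherever abs(B) is not larger than both abs(A) and abs(C), replace
  ! the row by the identity with the previous ice velocity as the
  ! right-hand side, so the solver simply returns the old velocity at
  ! that point
  implicit none
  integer, intent(in)    :: n
  real(8), intent(inout) :: a(n), b(n), c(n), rhs(n)
  real(8), intent(in)    :: uOld(n)
  integer :: i
  do i = 1, n
     if ( abs(b(i)) <= abs(a(i)) .or. abs(b(i)) <= abs(c(i)) ) then
        a(i)   = 0.d0
        c(i)   = 0.d0
        b(i)   = 1.d0
        rhs(i) = uOld(i)
     end if
  end do
end subroutine reset_weak_rows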
In all my runs where this problem occurs, the forcing changes very
slowly (once per day) compared to the model time step of 1200 sec, so
that "BAD FORCING" can persist for quite a long time (order 72 time
steps). Maybe that's an issue that could be solved by calling the
dynamics solver less frequently or increasing the forcing frequency.
Why the B-grid code seems to be more stable beats me; maybe the
Coriolis term adds a bit of fun here (as it always does on a C-grid),
but the blow-ups are so rare and seemingly coincidental that even a
small deviation from the trajectory makes them go away.
Any comments? When you first came up with this solver, you must have
had some problems with stability, otherwise this catch "SAFEGUARD
AGAINST ..." wouldn't be there, right? What is your experience with
that?
Martin
PS. I don't think that the metric terms are really the issue; I have
now fixed them, and in my tests everything looks perfect ...
On Mar 3, 2009, at 11:26 AM, Martin Losch wrote:
> Hi all,
>
> It looks like I have found the problem that causes the "spontaneous"
> explosions in seaice_lsr (C-grid): some of the metric terms were
> discretized in a, let's say, unfortunate way (actually, really
> wrong in a few places, all my fault of course). Also my fault: I
> never tested this part of the code properly (all the rigorous tests
> were carried out on Cartesian grids or Dimitris' CS510, where the
> metric terms are turned off).
>
> It is quite possible (from my experience with an inconsistent
> discretization on the B-grid that caused the B-grid CS510 to crash)
> that these issues are responsible for Matt's problems, too. The
> crashes I observed are highly sensitive to the model trajectory, and
> slight modifications already changed the time/time step at which the
> crash actually occurs. Using a fix like Matt's (using a different
> thermodynamic code) just changes the solution, and the model may
> explode at a different time that fortuitously is beyond Matt's
> integration period, so what works for Matt may not work otherwise.
>
> Now my questions:
> 1. In order to code the metric terms correctly I need fields like
> tan(YG) and tan(YC) (in analogy to tanPhiAtU/V these would be called
> tanPhiAtZ and tanPhiAtC). On a lat/lon grid these fields should be
> identical: tanPhiAtC=tanPhiAtU, tanPhiAtZ=tanPhiAtV. However, I
> would prefer having these fields with the correct names. Do you
> agree? If you agree, there's a second question:
> 2. Although these fields will only be used in pkg/seaice, I still
> need to initialize them in ini_grid (and subroutines) because of the
> rotated grid option (after ini_spherical_polar_grid, YC and YG are
> no longer the grid coordinates but the geographical coordinates).
> Should I define and set them with a CPP flag ALLOW_SEAICE/useSEAICE?
> Alternatively one could derive tanPhiAtZ/C in seaice_init_fixed based
> on "averaging" or copying tanPhiAtU/V, but that method might break
> down once we decide to bite the bullet and implement all metric
> terms also for the curvilinear grids.
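> A minimal standalone sketch of the spherical case (outside the real
> ini_grid/seaice_init_fixed machinery, with made-up array sizes and a
> hard-coded degree-to-radian factor; tanPhiAtC and tanPhiAtZ are the
> names proposed in question 1):
>
> subroutine init_tanPhi(nx, ny, yC, yG, tanPhiAtC, tanPhiAtZ)
>   ! on a spherical (lat/lon) grid the new fields are just the tangent
>   ! of geographical latitude at the C (centre) and Z (corner) points
>   implicit none
>   integer, intent(in)  :: nx, ny
>   real(8), intent(in)  :: yC(nx,ny), yG(nx,ny)   ! latitude in degrees
>   real(8), intent(out) :: tanPhiAtC(nx,ny), tanPhiAtZ(nx,ny)
>   real(8), parameter   :: deg2rad = 3.141592653589793d0/180.d0
>   integer :: i, j
>   do j = 1, ny
>      do i = 1, nx
>         tanPhiAtC(i,j) = tan( yC(i,j)*deg2rad )
>         tanPhiAtZ(i,j) = tan( yG(i,j)*deg2rad )
>      end do
>   end do
> end subroutine init_tanPhi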
>
> How should I do it? The clean solution with two further, mostly unused
> 2D grid fields (defined in GRID.h), or the solution that is confined
> to pkg/seaice and includes a hack for spherical grids and potentially
> more hacks for general curvilinear grids? While I prefer the first
> option, the second is perfectly OK, too.
>
> Martin
>
> PS. All my restart problems remain, and are really weird, but that's
> a different story.
>
>
>
> _______________________________________________
> MITgcm-devel mailing list
> MITgcm-devel at mitgcm.org
> http://mitgcm.org/mailman/listinfo/mitgcm-devel