[MITgcm-support] cg convergence vs processor count

Jason Goodman jgoodman at whoi.edu
Wed Jul 27 11:36:45 EDT 2005


> Alas, the non-associative nature of floating-point arithmetic ensures
> that the numerical solution actually depends on the order in which
> reductions (such as the residual calculation in CG) are evaluated.
> However, it is rather unusual for these differences to lead to such
> huge differences in solver behaviour (non-convergence vs. convergence).
> This is rather worrying and may be indicative of some other underlying
> problem with your setup. Large differences in summation results with
> differing reduction order tend to occur when values in significantly
> different exponent ranges are added together.
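Just to make sure I follow the point about reduction order, here's a toy
example of the effect (my own sketch in Python, nothing to do with the
actual MITgcm sum code): with values in very different exponent ranges,
the order in which the partial sums are formed changes the answer.

    # Toy illustration of order-dependent summation (not MITgcm code).
    # One large value plus a million tiny ones, summed in two orders.
    vals = [1.0] + [1.0e-16] * 1000000

    # Left-to-right: each 1.0e-16 is absorbed into 1.0 and lost.
    serial = sum(vals)                      # exactly 1.0

    # Sum the tiny terms first (like a separate per-tile partial sum),
    # then combine: their accumulated 1.0e-10 contribution survives.
    tiled = vals[0] + sum(vals[1:])         # 1.0000000001

    print(serial, tiled)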

Your point about floating-point errors is a good one, but I agree that
such a large difference in behaviour is odd.

I've tried running the exp5 verification (rotating convection from
widespread surface buoyancy loss) with varying numbers of processors,
and don't have this problem, so I suspect there's a problem with my
experimental setup rather than my hardware. I also notice that
convergence seems to be slower for the point-source problem than for a
broad source with a similar model domain.

I'm doing a point-source convection problem with a rather dense grid  
(200x200 horizontal, 133 vertical); the surface buoyancy forcing is  
isolated at a single gridpoint.  Could the fact that most of the  
domain is "boring", in that initially only one point in a million has  
anything going on, cause problems with solving the pressure field?

If so, is there any way to apply a preconditioner or some sort of  
weighting to encourage the CG algorithm to focus its effort on the  
buoyancy source, where the action is?  My long-term goal is to use a  
grid with narrow spacing near the source and wider spacing farther  
away, but I'm having blowup problems with that so I'm trying to get  
the evenly-spaced grid working first.
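For what it's worth, here is the sort of thing I have in mind for
generating the stretched spacings eventually (a rough sketch with
made-up numbers; I'd paste the output into the delX/delY lists in the
data namelist, assuming I have those parameter names right):

    # Rough sketch: spacing fine at the centre (the buoyancy source),
    # widening smoothly toward the boundaries.  All numbers illustrative.
    import numpy as np

    nx = 200          # number of cells
    dx_min = 50.0     # spacing at the source (m)
    dx_max = 500.0    # spacing at the outer edge (m)

    i = np.arange(nx)
    s = np.abs(i - (nx - 1) / 2.0) / ((nx - 1) / 2.0)   # 0 at centre, 1 at edge
    dx = dx_min + (dx_max - dx_min) * 0.5 * (1.0 - np.cos(np.pi * s))

    ratio = dx[1:] / dx[:-1]
    print("total width (m):", dx.sum())
    print("max adjacent-cell ratio:", max(ratio.max(), (1.0 / ratio).max()))
    print("delX=", ", ".join("%.1f" % v for v in dx))

My understanding is that keeping the adjacent-cell ratio modest helps
avoid the kind of blowups I've been seeing, but I'd be glad to hear if
there's a better rule of thumb.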

Jason



