[Mitgcm-support] Re: MIT model/hardware question

mitgcm-support at dev.mitgcm.org
Wed Jul 9 15:53:02 EDT 2003


Hi Fiama,

  For our code a good rule of thumb for the time per time step, in
seconds, is

grid size * 2000 / ( sustained flop rate of the machine in flop/s )

  For memory in MB the formula is

grid size * 8 * 100 / 1024 / 1024

  Both tend to be a bit pessimistic, so generally you can do a bit
better than these numbers suggest. 
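
  If it is easier to plug numbers in, here are the same two rules of
 thumb as a quick Python sketch (just a back-of-the-envelope
 calculator, nothing official; I take the 8 to be bytes per value and
 the 100 to be roughly the number of 3-D arrays, but the formulas
 themselves are the ones above):

  def seconds_per_timestep(nx, ny, nz, flops_per_sec):
      # time per time step = grid size * 2000 / sustained flop rate
      return nx * ny * nz * 2000.0 / flops_per_sec

  def memory_mb(nx, ny, nz):
      # memory in MB = grid size * 8 * 100 / 1024 / 1024
      # (8 bytes per value, roughly 100 3-D arrays)
      return nx * ny * nz * 8.0 * 100.0 / 1024.0 / 1024.0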

  For a 140 x 140 x 20 grid on a high-end Intel P4 (where the
sustained flop rate is about 200 million flop/sec) the timing
I would estimate is

  140 * 140 * 20 * 2000 / 200 / 1000000

 which is about 4 secs per time step.

 A 10km Lab Sea simulation would probably have a 15 minute time step.

 Memory-wise you get

 140 * 140 * 20 * 8 * 100 / 1024 / 1024

 which is about 300MB.
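
  Or, plugging the same numbers into the sketch above (which agrees
 with the hand arithmetic):

  print(seconds_per_timestep(140, 140, 20, 200e6))  # -> 3.92, ~4 s/step
  print(memory_mb(140, 140, 20))                     # -> 299, ~300 MB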

 So my estimate (assuming a ten-minute time step to be safe) is that
 you would be able to do roughly 150 days of simulation per wall-clock
 day on a single-processor P4.
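
  Spelled out, that is (continuing the same sketch):

  steps_per_model_day = 24 * 60 / 10.0            # ten-minute step -> 144
  wallclock_per_model_day = steps_per_model_day * 4.0   # ~4 s/step -> 576 s
  print(86400 / wallclock_per_model_day)          # -> 150 model days per day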

 You should check my numbers - I often get the arithmetic
 wrong!

 For the sort of problem you describe, an eight-way cluster
 of P4 machines could work well. The problem should scale comfortably
 to that size, giving you around 1200 days/day.

 Cost for that setup is about

  $1200 per machine, i.e. about $10K,
  plus around $13K for a fast network (Myrinet),
  leaving $7K for buying me and Alistair beer (actually malt whisky
  gets the best results out of us :-).

 If you don't get the fast network you won't get good scaling. In that
 case you could still run 8 independent scenarios, each at 150 days/day,
 and we could get more beer. However, you would not be able to do one
 scenario at 1200 days/day.

 Hope this helps..... 
 
Chris

Fiamma Straneo wrote:
> 
> Ciao Chris,
> 
> although I only briefly talked to you, last week at Patrick's
> surprise party, I am hoping I can ask you some questions
> regarding the MIT model.
> 
> I am about to purchase a computer to do some modeling
> work. While I am not running the MIT model right now,
> I envision that I will be in less than a year or so.
> So I thought I would ask you whether you have
> any suggestions.
> 
> Specifically, I would like to do some basin (say the
> Labrador Sea, for example) simulations with a 4-10km
> grid scale (that means a grid of 80-140 points - more
> or less - for the higher resolution run), with up to
> 20 levels, for integrations of 10 years or more (in the
> hydrostatic version).
> 
> Would I be able to run it on a large PC workstation?
> Something like a dual processor 2GHz (e.g. Intel Xeon) /
> 2 GB memory. What kind of running time am I looking at,
> if I have given you enough info to estimate it?
> These machines run around 10k or less as far as I can tell.
> 
> My other option is to try and get a faster machine. Maybe
> one of the ones that IBM or DELL sell under the name
> of servers, with up to 4 processors - I can
> spend up to 30k or so, but I am trying to be cost-effective.
> What would be a comparable running time on these ?
> Alistair, for example, suggested getting two PC workstations
> instead of one large machine.
> 
> I am also, in general, heading for Linux Boxes instead of
> Alpha machines, but maybe you think otherwise ?
> 
> Any advice you can give me is greatly appreciated.
> Thanks a ton, maybe in exchange I can buy you beer next
> time I see you?
> 
> cheers,
> 
> fiamma
> 
> --------------------------------------------------------
> Fiamma Straneo                          tel. 508-289-2914
> Dept. of Physical Oceanography          fax. 508-457-2181
> MS #21, WHOI, Woods Hole, MA 02543


