[MITgcm-support] Compiler options for ifort on x86_64
Constantinos Evangelinos
ce107 at ocean.mit.edu
Mon Oct 24 14:29:15 EDT 2005
On Monday 24 October 2005 05:41, Lucas Merckelbach wrote:
> In an earlier email I told you about the results of your test program,
> which I compiled on both platforms using scenario 2) (-mcmodel=medium); it
> only ran on the amd64 and gave a "killed" on the em64t. Well, it gave a
> "killed" because a 256x256x32 job was running as well. Without that job,
> your test program does run on the em64t as well. Could it be related to a
> memory problem?
This is clearly a memory problem.
> My memory with no jobs running:
>
> nova:~$ free -m
>              total       used       free     shared    buffers     cached
> Mem:          2008       1818        189          0         45       1109
> -/+ buffers/cache:        663       1345
> Swap:         4094        521       3572
So you essentially have almost 2GB of physical memory plus another roughly
4GB of swap space. That means:
a) All codes running or swapped out at any given time can allocate only a
total of about 6GB minus the O/S overhead.
b) You can only run fast if your code never asks for more than 2GB of memory
(minus at least 32MB for the O/S). Once you start going to swap your speed
goes down the drain.
c) As a corollary of (b), the only reason you may want to compile in 64-bit
mode is to use the 8 additional integer and FP registers that it provides.
Otherwise 32-bit mode is more than fine for codes below the 2GB mark.
Moreover you obviously have no need for the medium or large memory models
(see the sketch below).
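To make the memory-model point concrete, here is a minimal sketch (not from
this thread; the program and array size are made up for illustration) of the
one situation where -mcmodel=medium actually buys you something on x86_64:
more than 2GB of statically allocated data in a single executable. Under the
default (small) model something like this typically fails at link time with
relocation errors; with -mcmodel=medium it links and runs:

      PROGRAM BIGSTATIC
C     Minimal sketch: about 3.2GB of static REAL*8 data in a common
C     block, i.e. beyond the 2GB static-data limit of the default
C     (small) memory model on x86_64.  The size is made up.
      IMPLICIT NONE
      INTEGER N
      PARAMETER ( N = 400000000 )
      REAL*8 A(N)
      COMMON /BIGBLK/ A
      A(1) = 1.D0
      A(N) = 2.D0
      PRINT *, A(1), A(N)
      END

Something like "ifort -mcmodel=medium bigstatic.f" should build it (older
ifort versions may also require linking against the shared Intel libraries
when using the medium model). Anything below the 2GB mark needs no such flag.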
> When I run the test program (declares 6e8 doubles) it eats 80% of my swap
> file, so both the MITgcm model running and the test program are too much.
Obviously, see above. That would be close to 4.8GB.
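Spelled out, assuming 8-byte doubles as in your test program: 6e8 x 8 bytes =
4.8e9 bytes, roughly 4.8GB - more than twice your physical RAM, so most of it
has to live in swap while it runs.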
> Running two instances of the test program simultaneously paralyses the
> system (the load goes up to 20); in the end it comes back to life and one
> of the executables made it to the end, while the other got Killed as well,
> but only after about five minutes.
When the O/S starts running out of swap it begins killing processes to
recover memory. Hence the "killed" messages.
> What I suspect is that the 512x512x32 grid is just too big to fit in
> memory.
That grid means 64MB per 3D field, so only 32 3D fields internal to the code
are enough to fill up 2GB of RAM. An actual test would be to run the command
"size" on your executable: the total memory requirement that is known at
compile time is shown in the 4th column of the output.
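For reference (assuming 8-byte REAL*8 fields): 512 x 512 x 32 x 8 bytes =
67,108,864 bytes = 64MB per field, and 2GB / 64MB = 32 fields. The check
itself is just

   size ./mitgcmuv

(mitgcmuv is the usual MITgcm executable name - substitute yours). The
columns reported are text, data, bss, dec and hex; "dec", the 4th, is
text+data+bss, i.e. the statically allocated total in bytes.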
> If I enlarge the number of elements in the test program by a
> factor of 10 (6e9 elements), it also quits immediately.
Because that would be asking for close to 48GB on a machine with 2GB of
physical memory and 4GB of swap.
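Worked out: 6e9 x 8 bytes = 4.8e10 bytes = 48GB of requested address space.
That is so far beyond the ~6GB of RAM plus swap that the allocation is
presumably refused (or the loader fails) up front, which is why it quits
immediately instead of grinding away in swap.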
> The transition
> between killed and not killed is between 6.5e8 - 7e8 elements. This is
> about 77 times larger than what is required for one variable defined on
> the whole grid. I can understand that this is pretty tight.
You do not want to live in the region between the O/S killing and not killing
your process for lack of swap: even when it survives, it runs at a dog-slow
pace.
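For scale, with the same 8 bytes per element: 7e8 x 8 bytes = 5.6e9 bytes,
about 5.6GB - essentially all of your 2GB of RAM plus 4GB of swap once the
O/S and anything else running take their share, which is exactly where the
kill threshold shows up.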
> Plugging in
> more memory, would that help?
Obviously - keep in mind that with these platforms your only limitation is
how much memory the motherboard and your wallet can handle.
> If you think that available RAM is the bottleneck here, then please don't
> spend too much time on this. I guess for me the solution is to move to the
> IA64 platform, where it does seem to work.
You would run into the same runtime trouble (not the compile-time nonsense of
course) with an IA64 box with only 2GB of RAM.
Constantinos
--
Dr. Constantinos Evangelinos
Department of Earth, Atmospheric and Planetary Sciences
Massachusetts Institute of Technology