I have a general question about the memory analysis reported in the ABINIT output files. In the following case, for example, the output gives memory estimates for two datasets:
Dataset 1 (GS):
P This job should need less than 51.350 Mbytes of memory.
Rough estimation (10% accuracy) of disk space for files :
_ WF disk file : 1611.951 Mbytes ; DEN or POT disk file : 0.502 Mbytes.
Biggest array : cg(disk), with 22.3902 MBytes.
memana : allocated an array of 22.390 Mbytes, for testing purposes.
memana: allocated 51.350Mbytes, for testing purposes.
The job will continue.
Dataset 2 (RF):
P This job should need less than 607.920 Mbytes of memory.
Rough estimation (10% accuracy) of disk space for files :
_ WF disk file : -825.248 Mbytes ; DEN or POT disk file : 0.502 Mbytes.
Biggest array : cg(disk), with 159.5178 MBytes.
memana : allocated an array of 159.518 Mbytes, for testing purposes.
memana: allocated 607.920Mbytes, for testing purposes.
The job will continue.
My understanding is:
For Dataset 1, if I run on a single core and allocate ~100 MB of memory, that should be sufficient.
For Dataset 2, if I run on a single core and allocate ~1 GB of memory, that should also be sufficient.
Is this correct?
Also, are these estimates per core or for the total job (across all MPI tasks)?
I'd appreciate any clarification on this, especially on how to set the memory request for multi-core (MPI-parallel) jobs based on these estimates; I've sketched my current reasoning below.
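To make the question concrete, this is the back-of-the-envelope calculation I am doing at the moment (just a sketch; the task count, the safety factor, and the idea of a per-task memory request are my own assumptions, not anything taken from the ABINIT output):

# Rough memory-request calculation for dataset 2 (RF), under two possible
# readings of the "P This job should need less than 607.920 Mbytes" line.

abinit_estimate_mb = 607.920   # estimate printed by ABINIT for dataset 2
ntasks = 8                     # hypothetical number of MPI tasks
headroom = 1.5                 # safety factor I add on top of the estimate

# Reading A: the estimate is the TOTAL over all MPI tasks,
# so each task would need roughly estimate / ntasks.
per_task_if_total = abinit_estimate_mb / ntasks * headroom

# Reading B: the estimate is PER MPI task,
# so each task would need the full amount.
per_task_if_per_task = abinit_estimate_mb * headroom

print(f"memory per task if the estimate is the job total: {per_task_if_total:.0f} MB")
print(f"memory per task if the estimate is per task:      {per_task_if_per_task:.0f} MB")

Depending on which reading is correct, the per-core request I pass to the scheduler differs by a factor of ntasks, which is why I want to be sure.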
Thanks!
Dominic