[Dune] Shared memory capabilities?
aleksejs.fomins at lspr.ch
Wed Oct 29 11:01:20 CET 2014
This question is not of an immediate importance for us at the moment.
However, in the future, we intend to run multiple jobs on large clusters
and a supercomputer at CSCS.
I have recently learned that the architecture of a supercomputer usually
involves nodes, each of which has a certain amount of memory (say 32 GB)
shared among a certain number of processes (say 32). My guess is that the
current DUNE approach is to split this memory into equal non-overlapping
chunks (say, 1 GB per process), such that no process accesses the memory
of any other.
In principle, there could be some gain in using shared memory among
all processes on a node:
1) With shared memory, it may be possible to reduce the communication
between processes within a node.
2) Some information is duplicated across processes, such as shared
vertices and boundary data; storing it once would slightly reduce the
memory footprint.
My question is about the work that has been done, or is planned, by
Dune in this direction in the foreseeable future.
On 10/29/2014 09:42 AM, Oliver Sander wrote:
> Am 29.10.2014 um 09:04 schrieb Aleksejs Fomins:
>> Dear Dune,
>> What are the current capabilities of Dune to deal with shared memory
>> architectures? Is there a plan to implement it?
>> By shared memory I mean using all the memory of a node by all its
>> processes to avoid communication between them
> Hi Aleksejs,
> I'm afraid you need to be a bit more specific here. What exactly do you need?
> Dune mailing list
> Dune at dune-project.org