[Dune] Contribution to CollectiveCommunication

Aleksejs Fomins aleksejs.fomins at lspr.ch
Thu Jan 22 16:23:04 CET 2015


Dear Markus,

Thanks again for your reply.

I am indeed writing the communication interface for the curvilinear grid.

Up to now I was under the impression that even for sparse
communication MPI_Alltoallv was the better way to go, since
1) I provide the method with all the information I have.
2) The MPI implementation on a supercomputer could optimize the
collective for its internal architecture.

After your reply I have read a few more publications on this issue,
and I now see two further points:
3) The memory usage of MPI_Alltoallv per process grows linearly with
the number of processes, which is wasteful on large machines, since
the amount of memory per process stays the same (see the sketch below).
4) Most communication in PDE codes is nearest neighbor, so the memory
used for the communication pattern need not scale with the number of
processes.
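
To make point 3 concrete, here is a minimal sketch (illustration only,
not code from our module) of a sparse exchange pressed into
MPI_Alltoallv: even if a process only talks to two neighbors, it has
to set up four count/displacement arrays whose length is the total
number of processes.

// Sketch only: the four metadata arrays have length nProcs per process,
// even if almost all of their entries are zero.
#include <mpi.h>
#include <vector>

void sparseExchangeViaAlltoallv(const std::vector<double>& sendBuf,
                                const std::vector<int>&    sendCounts, // length = nProcs
                                MPI_Comm comm)
{
  int nProcs;
  MPI_Comm_size(comm, &nProcs);

  // Metadata scales with nProcs, not with the number of actual neighbors.
  std::vector<int> sendDispls(nProcs, 0), recvCounts(nProcs, 0), recvDispls(nProcs, 0);
  for (int p = 1; p < nProcs; ++p)
    sendDispls[p] = sendDispls[p-1] + sendCounts[p-1];

  // Every process first has to learn how much it will receive from everyone.
  MPI_Alltoall(sendCounts.data(), 1, MPI_INT, recvCounts.data(), 1, MPI_INT, comm);

  int recvTotal = 0;
  for (int p = 0; p < nProcs; ++p) { recvDispls[p] = recvTotal; recvTotal += recvCounts[p]; }

  std::vector<double> recvBuf(recvTotal);
  MPI_Alltoallv(sendBuf.data(), sendCounts.data(), sendDispls.data(), MPI_DOUBLE,
                recvBuf.data(), recvCounts.data(), recvDispls.data(), MPI_DOUBLE, comm);
}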

I would like to note that there are use cases for all-to-all
communication: for example, Boundary Integral (BI) techniques require
each process boundary to communicate with every other process
boundary. In our electromagnetic code we use a BI technique to
truncate the computational domain; at the moment this is implemented
internally.

It is a good point that the DataHandle interface only expresses
nearest neighbor communication. I of course agree that I should use a
better communication paradigm if one is available.
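
For reference, this is how I picture a DataHandle for nearest-neighbor
exchange of one double per vertex (a sketch from memory of
CommDataHandleIF in dune-grid; exact method names such as fixedsize
may differ between versions):

#include <cstddef>
#include <dune/grid/common/datahandleif.hh>

template<class GridView, class Vector>
class VertexDataHandle
  : public Dune::CommDataHandleIF<VertexDataHandle<GridView, Vector>, double>
{
public:
  VertexDataHandle(const GridView& gv, Vector& data) : gv_(gv), data_(data) {}

  // Communicate vertex data only (codim == dim).
  bool contains(int dim, int codim) const { return codim == dim; }
  // One double per vertex, so the message size is fixed.
  bool fixedsize(int dim, int codim) const { return true; }

  template<class Entity>
  std::size_t size(const Entity&) const { return 1; }

  // Pack the value attached to this vertex ...
  template<class Buffer, class Entity>
  void gather(Buffer& buff, const Entity& e) const
  { buff.write(data_[gv_.indexSet().index(e)]); }

  // ... and accumulate the value received from the neighboring process.
  template<class Buffer, class Entity>
  void scatter(Buffer& buff, const Entity& e, std::size_t /*n*/)
  { double x; buff.read(x); data_[gv_.indexSet().index(e)] += x; }

private:
  const GridView& gv_;
  Vector& data_;
};

// Usage (sketch): gridView.communicate(handle,
//   Dune::InteriorBorder_All_Interface, Dune::ForwardCommunication);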

What would you suggest? I could implement a sparse communication
pattern by hand in terms of MPI_Send and MPI_Recv. However, I have
read that there was an effort to design such sparse patterns in MPI-3
precisely for scalability reasons. Could you suggest a pattern I
could use?
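
For concreteness, what I have found so far are the MPI-3 neighborhood
collectives on a distributed graph topology. A rough sketch of how I
picture using them follows (the buffer layout and the assumption of a
symmetric communication pattern are mine):

#include <mpi.h>
#include <vector>

// Sketch: exchange data only with the listed neighbors. The metadata
// now scales with the number of neighbors, not with the total number
// of processes.
std::vector<double> neighborExchange(const std::vector<int>&    neighbors,  // ranks we talk to
                                     const std::vector<int>&    sendCounts, // one entry per neighbor
                                     const std::vector<double>& sendBuf,    // concatenated messages
                                     MPI_Comm comm)
{
  const int degree = static_cast<int>(neighbors.size());

  // Build a graph communicator; the pattern is assumed symmetric here.
  MPI_Comm graphComm;
  MPI_Dist_graph_create_adjacent(comm,
                                 degree, neighbors.data(), MPI_UNWEIGHTED,
                                 degree, neighbors.data(), MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0 /*reorder*/, &graphComm);

  // First exchange the message sizes, then the payload -- both only
  // among the actual neighbors.
  std::vector<int> recvCounts(degree), sendDispls(degree, 0), recvDispls(degree, 0);
  MPI_Neighbor_alltoall(sendCounts.data(), 1, MPI_INT,
                        recvCounts.data(), 1, MPI_INT, graphComm);

  for (int i = 1; i < degree; ++i) {
    sendDispls[i] = sendDispls[i-1] + sendCounts[i-1];
    recvDispls[i] = recvDispls[i-1] + recvCounts[i-1];
  }
  int recvTotal = 0;
  for (int i = 0; i < degree; ++i) recvTotal += recvCounts[i];

  std::vector<double> recvBuf(recvTotal);
  MPI_Neighbor_alltoallv(sendBuf.data(), sendCounts.data(), sendDispls.data(), MPI_DOUBLE,
                         recvBuf.data(), recvCounts.data(), recvDispls.data(), MPI_DOUBLE,
                         graphComm);

  MPI_Comm_free(&graphComm);
  return recvBuf;
}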

Regards,
Aleksejs




On 22/01/15 13:58, Markus Blatt wrote:
> Hi,
> 
> On Thu, Jan 22, 2015 at 09:50:49AM +0100, Aleksejs Fomins wrote:
>> Dear Dune,
>> 
>> Following our discussions on POD communication, I have written a
>> small utility, which might find its place among
>> CollectiveCommunication methods if people want it. The motivation
>> is that there are numerous gather and scatter methods in there,
>> but no wrapper for MPI_Alltoallv, as far as I can tell.
>> 
> 
> probably because nobody needed it yet.
> 
>> [... MPI] For example, if process 1 would like to send
>>   2 elements to process 0,
>>   3 elements to process 1 (self), and
>>   5 elements to process 2,
>> then "in" would have length 10 and lengthIn={2,3,5} on process 1.
>> 
> 
> This question might be really stupid, but up until now I was under
> the impression that you are working on implementing the
> communication interface for curvilinear-grid. Where would you need
> such functionality for that?
> 
> Just to be sure (ignore this if I am pointing out the obvious):
> please note that from the outside it might seem as if the
> communication in the grid interface sends from all processors to
> all processors. But for large numbers of processors there is always
> only a rather small number of processors that a given process sends
> to or receives from. I am sure you agree that MPI_Alltoallv is not
> the weapon of choice here.
> 
> Markus
> 
> 
> 
> 



