[Dune] Question on parallel computation
Bernd Flemisch
bernd at iws.uni-stuttgart.de
Thu Apr 23 09:24:47 CEST 2015
Hi Aleksejs,
thank you for your answer. I did not know much about MPI graph
topologies/communicators yet, but they seem to be the right tool for
this. I will try to do the following:
A. Setup
1. Balance the grid.
2. Gatherv element indices and coordinates onto root.
3. Root detects neighbors and builds graph topology.
4. Scatterv to each process the information about which numbers it has
to provide for each of its neighbors.
5. Broadcast the graph topology and call MPI_Graph_create.
B. Run
1. Use MPI_Neighbor_alltoallv to communicate.
Obviously, A.3 can be the performance killer. I hope to get away with
it since it has to be done only once and the overall element count is
still moderate. If the element count gets larger, I will have to do
more sophisticated tree-based searching.
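Below is a minimal sketch of what I mean by A.5 and B.1, in plain MPI.
The function names are mine, and the index/edges arrays as well as the
counts/displacements are placeholders for the data produced in A.3 and
A.4:

#include <mpi.h>
#include <vector>

// A.5: build the graph communicator from the broadcast topology.
// Following the MPI_Graph_create convention, index[i] holds the
// cumulative neighbor count of processes 0..i and edges holds the
// concatenated neighbor lists.
MPI_Comm makeGraphComm(MPI_Comm comm, const std::vector<int>& index,
                       const std::vector<int>& edges)
{
  int size;
  MPI_Comm_size(comm, &size);
  MPI_Comm graphComm;
  MPI_Graph_create(comm, size, index.data(), edges.data(),
                   /*reorder=*/0, &graphComm);
  return graphComm;
}

// B.1: exchange the numbers with all graph neighbors. Counts and
// displacements are given per neighbor, as prepared in step A.4.
void exchange(MPI_Comm graphComm,
              const std::vector<double>& sendBuf,
              const std::vector<int>& sendCounts,
              const std::vector<int>& sendDispls,
              std::vector<double>& recvBuf,
              const std::vector<int>& recvCounts,
              const std::vector<int>& recvDispls)
{
  MPI_Neighbor_alltoallv(sendBuf.data(), sendCounts.data(),
                         sendDispls.data(), MPI_DOUBLE,
                         recvBuf.data(), recvCounts.data(),
                         recvDispls.data(), MPI_DOUBLE, graphComm);
}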
Kind regards
Bernd
On 04/22/2015 04:25 PM, Aleksejs Fomins wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Dear Bernd,
>
> So, the naive algorithm is
> 1) Determine all neighbours in serial mode using pairwise checking. Complexity O(n(n+1)/2) = O(n^2) for n elements
> 2) Determine all averages on the master process (sketched below)
> 2.1) All numbers get communicated to the master via comm.gather
> 2.2) Master calculates all averages. Complexity O(n * n_neighbor_within_radius)
> 2.3) Master communicates all averages back to their owners via comm.scatter
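>
> A minimal sketch of step 2 in plain MPI (the averaging over the
> neighbour sets from step 1 is left as a placeholder):
>
> #include <mpi.h>
> #include <vector>
>
> // Gather all x_n on rank 0, average there, scatter the a_e back.
> void averageOnMaster(MPI_Comm comm, const std::vector<double>& xLocal,
>                      std::vector<double>& aLocal)
> {
>   int rank, size;
>   MPI_Comm_rank(comm, &rank);
>   MPI_Comm_size(comm, &size);
>
>   // 2.1) gather the local element counts, then the values
>   int nLocal = static_cast<int>(xLocal.size());
>   std::vector<int> counts(size), displs(size);
>   MPI_Gather(&nLocal, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, comm);
>   int total = 0;
>   if (rank == 0)
>     for (int i = 0; i < size; ++i) { displs[i] = total; total += counts[i]; }
>   std::vector<double> xAll(total), aAll(total);
>   MPI_Gatherv(xLocal.data(), nLocal, MPI_DOUBLE, xAll.data(),
>               counts.data(), displs.data(), MPI_DOUBLE, 0, comm);
>
>   // 2.2) rank 0 averages over the radius-neighbour sets determined
>   //      in step 1 and fills aAll (omitted placeholder)
>
>   // 2.3) scatter the averages back to their owners
>   aLocal.resize(nLocal);
>   MPI_Scatterv(aAll.data(), counts.data(), displs.data(), MPI_DOUBLE,
>                aLocal.data(), nLocal, MPI_DOUBLE, 0, comm);
> }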
>
> My thoughts on optimizing the above algorithm:
> 1) If the radius is small compared to the size of the mesh, the mesh can be placed into an octree. This way, the octants get compared first, and if two octants are farther apart than the radius, then no element pair between them has to be checked at all (see the sketch after this list). The complexity will be reduced to approximately O(n * n_neighbor_within_radius / 2)
>
> 2) The computation of the averages on the master process can be optimized by dynamic programming. Since the radius-neighbour sets of nearby elements overlap significantly, it makes sense to store, instead of the full neighbour set of each element, only the change of the neighbour set relative to some other element close to it, perhaps one of its face neighbours, thus forming a sort of tree. The complexity is then O(n * K), where K is approximately (n_neighbor_within_radius)^{2/3}.
>
> 3) A different approach would be to send to each process only the values it needs to compute its own averages. Then your friend is the new MPI routine MPI_Neighbor_allgather, which allows one to specify the communication graph in parallel, so that each process sends data only to its recipients. The computation is then cheap, and this type of MPI communication is definitely scalable to very large processor counts, unlike the standard gather. Unfortunately, it is not yet wrapped in Dune, but I have written a wrapper for one of these routines; you can have it if you want.
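>
> To illustrate the octant pruning in 1), here is a minimal sketch of
> the distance test between two octant bounding boxes (the octree
> construction itself is omitted, and the names are mine):
>
> #include <algorithm>
> #include <array>
>
> // Axis-aligned bounding box of an octant.
> struct Box { std::array<double,3> lower, upper; };
>
> // Squared minimum distance between two boxes. If it exceeds r^2,
> // no element pair between the two octants can be radius-neighbours,
> // so the whole pair of octants is skipped.
> double minDist2(const Box& a, const Box& b)
> {
>   double d2 = 0.0;
>   for (int i = 0; i < 3; ++i) {
>     double gap = std::max({0.0, a.lower[i] - b.upper[i],
>                                 b.lower[i] - a.upper[i]});
>     d2 += gap * gap;
>   }
>   return d2;
> }
>
> bool mayContainNeighbours(const Box& a, const Box& b, double r)
> {
>   return minDist2(a, b) < r * r;
> }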
>
> Greetings,
> Aleksejs
>
>
> On 22/04/15 15:08, Bernd Flemisch wrote:
>> Dear Dune,
>>
>> I want to calculate a number a_e for each element e that is the average of other numbers x_n associated with elements n that lie within a certain radius r of e:
>>
>> a_e = avg ( all x_n with n such that dist(e, n) < r )
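>>
>> In sequential code, this is simply the following (a minimal sketch
>> measuring the distance between element centers; all names are
>> placeholders):
>>
>> #include <array>
>> #include <cstddef>
>> #include <vector>
>>
>> // a[e] = average of all x[m] with dist(e, m) < r, measured between
>> // the element centers stored in 'centers'.
>> std::vector<double> averages(const std::vector<std::array<double,3> >& centers,
>>                              const std::vector<double>& x, double r)
>> {
>>   const std::size_t n = centers.size();
>>   std::vector<double> a(n, 0.0);
>>   for (std::size_t e = 0; e < n; ++e) {
>>     double sum = 0.0;
>>     int count = 0;
>>     for (std::size_t m = 0; m < n; ++m) {
>>       double d2 = 0.0;
>>       for (int i = 0; i < 3; ++i) {
>>         const double diff = centers[e][i] - centers[m][i];
>>         d2 += diff * diff;
>>       }
>>       if (d2 < r * r) { sum += x[m]; ++count; }
>>     }
>>     a[e] = sum / count; // count >= 1 since dist(e, e) = 0 < r
>>   }
>>   return a;
>> }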
>>
>> Maybe some of you already had a similar problem: How can I achieve this in MPI-parallel on a Dune grid without overlap, with maximal use of the existing Dune infrastructure?
>>
>> I can think of determining all the interacting elements n for each element e sequentially on the macro grid before it is balanced over the processes. Based on that pattern, I then want to calculate the a_e's frequently.
>>
>> Thank you for your thoughts. Kind regards
>> Bernd
>>
> _______________________________________________
> Dune mailing list
> Dune at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune
--
_______________________________________________________________
Bernd Flemisch phone: +49 711 685 69162
IWS, Universität Stuttgart fax: +49 711 685 60430
Pfaffenwaldring 61 email: bernd at iws.uni-stuttgart.de
D-70569 Stuttgart url: www.hydrosys.uni-stuttgart.de
_______________________________________________________________