[Dune] Constrained load balance
Andreas Dedner
a.s.dedner at warwick.ac.uk
Wed Feb 1 14:23:15 CET 2017
Sorry, my first reply didn't go to the list, so I'm moving the discussion
back here:
Actually, going from ParMetis to Zoltan is not that much work - we did
that with dune-alugrid. One advantage is that Zoltan interfaces to ParMetis.
Have a look here:
http://www.cs.sandia.gov/zoltan/ug_html/ug_alg_phg.html
The advantage of that approach is that you can assign certain elements
to a given rank, which can be very practical. Your approach should work
as well, I guess - which one will give better results I don't know. But
have a look at Zoltan; it really is quite similar, as far as I remember.
We actually have an example in
dune-alugrid/examples/loadbalance/loadbalance_zoltan.hh,
which does the partitioning on the Dune side using dune-alugrid's
slightly extended interface.
Andreas
On 01/02/17 12:21, Aleksejs Fomins wrote:
> Hi Andreas,
>
> Actually it does help :)
>
> I realise that I do not actually need (1) strictly. I am used to having it, but it is not necessary for the code. I just want to decrease the number of process boundaries and ghost elements, so as not to communicate a lot.
>
> I realise that I could do what you suggest with ParMetis as follows (a rough sketch follows the list):
> A) Do an unconstrained partition of all elements to which one or more boundary segments are assigned
> B) Put each boundary segment on the same process as the element it is assigned to
> C) Do an unconstrained partition of all elements to which no boundary segments are assigned
> D) On each process, take the union of the above
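>
> Roughly, each of the two unconstrained passes would be a plain ParMETIS_V3_PartKway call on a sub-graph of the element dual graph, along these lines (an untested sketch; extracting and renumbering the two sub-graphs is omitted):
>
> #include <mpi.h>
> #include <parmetis.h>
> #include <vector>
>
> // Partition one distributed sub-graph (CSR format) into 'nparts' parts.
> // The output 'part' holds the target rank of each local vertex (= element).
> void partitionSubgraph(std::vector<idx_t>& vtxdist,  // vertex distribution over ranks
>                        std::vector<idx_t>& xadj,     // local CSR row pointers
>                        std::vector<idx_t>& adjncy,   // local CSR column indices
>                        idx_t nparts,
>                        std::vector<idx_t>& part,
>                        MPI_Comm comm)
> {
>   idx_t wgtflag = 0, numflag = 0, ncon = 1, edgecut = 0;
>   idx_t options[3] = {0, 0, 0};
>   std::vector<real_t> tpwgts(ncon * nparts, real_t(1) / nparts);
>   std::vector<real_t> ubvec(ncon, real_t(1.05));
>   part.resize(xadj.size() - 1);
>   ParMETIS_V3_PartKway(vtxdist.data(), xadj.data(), adjncy.data(),
>                        nullptr, nullptr, &wgtflag, &numflag, &ncon, &nparts,
>                        tpwgts.data(), ubvec.data(), options,
>                        &edgecut, part.data(), &comm);
> }
>
> // Pass 1 (A): sub-graph of all elements carrying boundary segments.
> // Pass 2 (C): sub-graph of the remaining interior elements.
> // Then (B) each boundary segment inherits the rank of its element, and
> // (D) each process keeps the union of its two chunks.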
>
> Thus each process will have one contiguous block of interior elements, plus a set of boundary elements that are close to each other. Since most of these boundary elements will share ghosts, perhaps there will not be much extra communication.
>
> Another way to do the same thing is to mark a boundary layer of the geometry with a different tag, partition the tagged and untagged regions separately, and take the union. Then each process will have at most two contiguous chunks of the mesh, and the boundary segments will be distributed more or less equally.
>
> I have not yet looked at Zoltan. Transitioning to a new partitioning system will require time. Do you think that with Zoltan it will be possible to achieve better results than what I have described above?
>
> Thanks
> Aleksejs
>
>
>
> On 01.02.2017 12:01, Andreas Dedner wrote:
>> Hi.
>> Have you looked into Zoltan?
>> I've had a similar problem, and what we did was
>> A) partition the surface triangulation so that it satisfies (3), (4);
>> B) partition the bulk grid (again satisfying (3), (4)), but restrict each element carrying a boundary segment to the process that (A) assigned to that boundary segment.
>> Zoltan can do that sort of thing.
>> I don't know if that solves your problem, and there is no guarantee for (1), I think.
>> Andreas
>>
>> On 01/02/17 10:11, Aleksejs Fomins wrote:
>>> Dear Dune,
>>>
>>> I implemented support for Boundary Integral (BI) methods in CurvGrid some time ago. Such methods require coupling every pair of boundary segments, or each boundary segment with each interior boundary segment, depending on the formulation. The efficiency of such methods strongly depends on the load balance across all processes. Here are the load-balance criteria for BI, in decreasing order of priority:
>>>
>>> 1) The chunk of the grid on each process should form a contiguous polyhedron
>>> 2) The polyhedron on each process should touch the boundary, and the number of boundary segments should be as equal as possible across all processes
>>> 3) The element distribution across all processes should be as equal as possible
>>> 4) The surface area of the polyhedron on each process should be as small as possible
>>>
>>> For FEM, 2) can be ignored.
>>>
>>> So far we have been using ParMetis. ParMETIS_V3_PartKway automatically optimizes 1), 3) and 4), and does exactly what we want for FEM. It also supports constrained optimization, which can be used to get 2). The problem, however, is that the chunks are then no longer contiguous. I have implemented this for CurvGMSHReader, and the result of such an optimization is a highly scattered mesh with low connectivity.
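>>>
>>> For concreteness, a multi-constraint setup of this kind uses two weights per element: one for the element count and one for the number of attached boundary segments. A simplified sketch (the names are placeholders, not the actual CurvGrid code):
>>>
>>> #include <parmetis.h>
>>> #include <cstddef>
>>> #include <vector>
>>>
>>> // Fill the arrays for a two-constraint ParMETIS_V3_PartKway call:
>>> // constraint 1 balances the element count, constraint 2 the boundary segments.
>>> void setupTwoConstraints(const std::vector<idx_t>& boundarySegmentsPerElement,
>>>                          idx_t nparts,
>>>                          std::vector<idx_t>& vwgt,     // 2 weights per local element
>>>                          std::vector<real_t>& tpwgts,  // target part weights
>>>                          std::vector<real_t>& ubvec)   // imbalance tolerance per constraint
>>> {
>>>   const idx_t ncon = 2;
>>>   const std::size_t n = boundarySegmentsPerElement.size();
>>>   vwgt.resize(ncon * n);
>>>   for (std::size_t i = 0; i < n; ++i) {
>>>     vwgt[ncon*i + 0] = 1;                              // constraint 1: element count
>>>     vwgt[ncon*i + 1] = boundarySegmentsPerElement[i];  // constraint 2: boundary segments
>>>   }
>>>   tpwgts.assign(ncon * nparts, real_t(1) / nparts);
>>>   ubvec.assign(ncon, real_t(1.05));
>>>   // Pass vwgt, tpwgts, ubvec together with ncon = 2 and wgtflag = 2
>>>   // (vertex weights only) to ParMETIS_V3_PartKway.
>>> }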
>>>
>>> I have contacted the ParMetis developers and was told that 1) is not actually enforced by ParMetis: for unconstrained optimization it is more of a side effect, whereas for constrained optimization it is not a given at all.
>>>
>>> Has anybody come across constrained load balancing before?
>>>
>>> Is there a trick to do it with ParMetis, or perhaps some other partitioning software?
>>>
>>> Best regards,
>>> Aleksejs
>>>
>>>
>>>
>>> _______________________________________________
>>> Dune mailing list
>>> Dune at dune-project.org
>>> http://lists.dune-project.org/mailman/listinfo/dune
>