[Dune] parallel DUNE grid for implementing a spherical shell (UGGrid)

Peter Bastian peter.bastian at iwr.uni-heidelberg.de
Wed Nov 14 20:51:33 CET 2012


In fact, the loadBalance() method of YaspGrid does nothing!

In YaspGrid the grid is load-balanced in the constructor. It contains an algorithm
that determines the number of processes per direction and then performs a
tensor-product subdivision; the overlap is then added in each direction.
It is also possible to prescribe the number of processes per direction.

During global refinement the new cells stay on the same processor as their
father cell, just as in ALUGrid.
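
For illustration, setting up a parallel YaspGrid looks roughly like this
(a sketch only; the exact constructor signature and the name of the
fixed-size partitioner have changed between dune-grid releases):

  #include <array>
  #include <bitset>
  #include <dune/common/fvector.hh>
  #include <dune/common/parallel/mpihelper.hh>
  #include <dune/grid/yaspgrid.hh>

  int main(int argc, char** argv)
  {
    Dune::MPIHelper::instance(argc, argv);

    const int dim = 3;
    Dune::FieldVector<double, dim> upperRight(1.0);  // unit cube
    std::array<int, dim> cells = {{8, 8, 8}};        // macro cells per direction
    std::bitset<dim> periodic;                       // no periodic directions
    const int overlap = 1;                           // one layer of overlap cells

    // The tensor-product decomposition into per-process subdomains is
    // computed right here in the constructor; calling grid.loadBalance()
    // afterwards has no effect.
    Dune::YaspGrid<dim> grid(upperRight, cells, periodic, overlap);

    // To prescribe the number of processes per direction, pass a custom
    // partitioner as an additional constructor argument (its class name,
    // e.g. YaspFixedSizePartitioner, differs between releases).

    grid.globalRefine(1);  // new cells stay with their father's process
    return 0;
  }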

-- Peter

On 13.11.2012 at 12:29, "Dedner, Andreas" <A.S.Dedner at warwick.ac.uk> wrote:

> I might be wrong, but I think Yasp only distributes the macro grid, like ALU does?
> So if you start with only one element, there is no way to load-balance it.
> As far as I know, only UG differs from this behaviour, allowing one to load-balance
> leaf elements "arbitrarily" over the processors...
> ________________________________________
> From: dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org [dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org] on behalf of Eike Mueller [E.Mueller at bath.ac.uk]
> Sent: 13 November 2012 11:04
> To: Oliver Sander
> Cc: dune at dune-project.org
> Subject: Re: [Dune] parallel DUNE grid for implementing a spherical shell (UGGrid)
> 
> To make it even simpler, I replaced the UGGrid by a 1x1x1 YaspGrid, i.e. I start with a grid with one element. If I
> globalRefine() this once and then loadBalance(), one processor always ends up with the entire grid. I must be missing something
> very basic here.
> Does the load balancing in DUNE assume that if a father cell is owned by a processor, then all its children on the finer levels
> are owned by the same processor? But then calling loadBalance() after grid refinement would not make sense. If I start with a
> 2x2x2 grid and do not refine, then it works, i.e. if I run on 8 cores, each of them ends up with one element.
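> 
> For reference, my whole test boils down to something like the following
> (sketch only; the exact constructor syntax may differ between dune-grid
> versions), including a direct check of the father/child ownership question:
> 
>   #include <array>
>   #include <iostream>
>   #include <dune/common/fvector.hh>
>   #include <dune/common/parallel/mpihelper.hh>
>   #include <dune/grid/common/rangegenerators.hh>
>   #include <dune/grid/yaspgrid.hh>
> 
>   int main(int argc, char** argv)
>   {
>     auto& mpi = Dune::MPIHelper::instance(argc, argv);
> 
>     const int dim = 3;
>     Dune::FieldVector<double, dim> upperRight(1.0);
>     std::array<int, dim> cells = {{1, 1, 1}};   // a single macro element
>     Dune::YaspGrid<dim> grid(upperRight, cells);
> 
>     grid.globalRefine(1);   // 8 leaf cells
>     grid.loadBalance();     // this is the call that seems to have no effect
> 
>     // Does any leaf cell owned by this rank have a father that is not
>     // also owned here?
>     int separated = 0;
>     for (const auto& e : elements(grid.leafGridView()))
>       if (e.partitionType() == Dune::InteriorEntity
>           && e.hasFather()
>           && e.father().partitionType() != Dune::InteriorEntity)
>         ++separated;
> 
>     std::cout << "rank " << mpi.rank() << ": "
>               << grid.leafGridView().size(0) << " stored leaf cells, "
>               << separated << " owned cells separated from their father\n";
>     return 0;
>   }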
> 
> Thanks,
> 
> Eike
> 
> Oliver Sander wrote:
>> On 12.11.2012 16:01, Eike Mueller wrote:
>>> Hi Oliver,
>>> 
>>> I tried several other strategies, but without any luck. Whatever I do,
>>> the algorithm seems to refuse to split up the macro cells, i.e. the 6
>>> elements I insert with the grid factory.
>>> 
>>> I also tried to simplify the problem as much as possible. I now create
>>> one unit cube with the grid factory, do not insert any boundary
>>> segments, and refine the grid by calling globalRefine(refcount), so
>>> that I end up with a Cartesian unit cube split into 8^refcount
>>> elements. I then balance the grid with loadBalance() (i.e. no
>>> arguments). I would have thought that that should work. Still, if I
>>> refine 1, 2 or 3 times (i.e. I should end up with 8, 64 or 512 elements),
>>> for an 8-core run only one process stores the entire grid.
>> This should really work; can you post your test program?
>> 
>> But be careful: If you load-balance small grids then all processors
>> get all elements, but most of them only as ghosts.  Did you
>> check that?
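>> 
>> Such a test could look roughly like this (an untested sketch using current
>> DUNE syntax; details like GeometryTypes may differ in older releases). The
>> interior/total count at the end is exactly the ghost check I mean:
>> 
>>   #include <array>
>>   #include <iostream>
>>   #include <dune/common/fvector.hh>
>>   #include <dune/common/parallel/mpihelper.hh>
>>   #include <dune/geometry/type.hh>
>>   #include <dune/grid/common/gridfactory.hh>
>>   #include <dune/grid/common/rangegenerators.hh>
>>   #include <dune/grid/uggrid.hh>
>> 
>>   int main(int argc, char** argv)
>>   {
>>     auto& mpi = Dune::MPIHelper::instance(argc, argv);
>> 
>>     const int dim = 3;
>>     Dune::GridFactory<Dune::UGGrid<dim>> factory;
>> 
>>     if (mpi.rank() == 0)   // insert the coarse grid on rank 0 only
>>     {
>>       // the 8 corners of the unit cube, in DUNE reference-element order
>>       for (int k = 0; k < 8; ++k)
>>       {
>>         Dune::FieldVector<double, dim> v;
>>         v[0] = k & 1;  v[1] = (k >> 1) & 1;  v[2] = (k >> 2) & 1;
>>         factory.insertVertex(v);
>>       }
>>       factory.insertElement(Dune::GeometryTypes::hexahedron,
>>                             {0, 1, 2, 3, 4, 5, 6, 7});
>>     }
>> 
>>     auto grid = factory.createGrid();
>>     grid->globalRefine(2);   // 8^2 = 64 elements, all still on rank 0
>>     grid->loadBalance();     // redistribute the refined grid
>> 
>>     // Count owned (interior) elements separately from the total number
>>     // of elements stored on this rank, which also includes ghosts.
>>     int interior = 0, total = 0;
>>     for (const auto& e : elements(grid->leafGridView()))
>>     {
>>       ++total;
>>       if (e.partitionType() == Dune::InteriorEntity)
>>         ++interior;
>>     }
>>     std::cout << "rank " << mpi.rank() << ": " << interior
>>               << " interior of " << total << " stored leaf elements\n";
>>     return 0;
>>   }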
>> 
>> cheers,
>> Oliver
>> 
>>> 
>>> Could this be a problem with the grid factory?
>>> 
>>> Thanks,
>>> 
>>> Eike
>>> 
>>> Oliver Sander wrote:
>>>> On 06.11.2012 08:44, Markus Blatt wrote:
>>>>> Hey Eike,
>>>>> 
>>>>> On Mon, Nov 05, 2012 at 07:17:48PM +0000, Eike Mueller wrote:
>>>>>>> And load balancing would only be needed for the macro grid, i.e.,
>>>>>>> not dynamically.
>>>>>>> 
>>>>>> That's right, in the code I would refine the grid until there is
>>>>>> only one cell per processor (this is the macro grid). Then I would call
>>>>>> loadBalance, followed by further grid refinement. So for example,
>>>>>> with 24 processors, I would subdivide each of the six cells in the
>>>>>> original grid into four cells, then load-balance that grid and refine
>>>>>> further.
>>>>> Actually, this approach could be the root of the problem. The
>>>>> load balancing is a heuristic algorithm, and normally one always gets
>>>>> some load imbalance here. But if you just have as many cells as
>>>>> processors, then naturally some will end up with no cells at all.
>>>> This is true, but it is not the whole truth.
>>>> 
>>>> The default load-balancing strategy of UG is Recursive Coordinate Bisection.
>>>> This means roughly that the grid bounding box is partitioned into
>>>> axis-aligned cells, and these cells are assigned to processors.  I reckon
>>>> (I didn't check) that since your grid is a hollow sphere, some cells
>>>> simply remain empty.
>>>> 
>>>> UG offers several other strategies, but all this really is hardly
>>>> tested.
>>>> Have a look at the lbs method in ug/parallel/dddif/lb.c:533 for some
>>>> alternatives.
>>>> 
>>>> good luck,
>>>> Oliver
>>>> 
>>>>> 
>>>>> How about doing some more refinement before load balancing?
>>>>> 
>>>>> Cheers,
>>>>> 
>>>>> Markus
>>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
> 
> 
> --
> Dr Eike Mueller
> Research Officer
> 
> Department of Mathematical Sciences
> University of Bath
> Bath BA2 7AY, United Kingdom
> 
> +44 1225 38 5633
> e.mueller at bath.ac.uk
> http://people.bath.ac.uk/em459/
> 
> _______________________________________________
> Dune mailing list
> Dune at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune

------------------------------------------------------------
Peter Bastian
Interdisziplinäres Zentrum für Wissenschaftliches Rechnen
Universität Heidelberg
Im Neuenheimer Feld 368
D-69120 Heidelberg
Tel: 0049 (0) 6221 548261
Fax: 0049 (0) 6221 548884
email: peter.bastian at iwr.uni-heidelberg.de
web: http://conan.iwr.uni-heidelberg.de/people/peter/
