[Dune] parallel DUNE grid for implementing a spherical shell (UGGrid)

Eike Mueller E.Mueller at bath.ac.uk
Mon Nov 12 17:04:45 CET 2012


Hi Oliver,

I don't think it's a problem with the ghost cells. I attach data to the
grid and then write it out as .vtu files, but only one of these files
contains the data for the entire grid:

em459 at mapc-0210 $ mpirun -n 8 ./uggrid 3
UGgrid running on 8 processes.
DimX=4, DimY=2, DimZ=1
DimX=4, DimY=2, DimZ=1
Number of grid cells = 0
Number of grid cells = 0
Number of grid cells = 0
Number of grid cells = 0
Number of grid cells = 0
Number of grid cells = 0
Number of grid cells = 512
Number of grid cells = 0
em459 at mapc-0210 $ ls -l *s0008*vtu
-rw-r--r-- 1 em459 bath11   835 Nov 12 16:02 s0008-elementdata.pvtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0000-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0001-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0002-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0003-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0004-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0005-elementdata.vtu
-rw-r--r-- 1 em459 bath11   942 Nov 12 16:02 s0008-p0006-elementdata.vtu
-rw-r--r-- 1 em459 bath11 40618 Nov 12 16:02 s0008-p0007-elementdata.vtu
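
One way to rule out ghost copies explicitly is to count the leaf
elements by partition type, along these lines (a sketch against the
dune-grid 2.x interface, not the exact code from the attachment; grid
stands for the UGGrid<3> object):

  typedef Dune::UGGrid<3>::LeafGridView GridView;
  GridView gv = grid->leafView();  // leafGridView() in later releases

  // the default leaf iterator visits all partition types, so ghost
  // copies are seen here as well
  int nInterior = 0, nGhost = 0;
  for (GridView::Codim<0>::Iterator it = gv.begin<0>();
       it != gv.end<0>(); ++it) {
    if (it->partitionType() == Dune::InteriorEntity)   ++nInterior;
    else if (it->partitionType() == Dune::GhostEntity) ++nGhost;
  }
  std::cout << "interior = " << nInterior
            << ", ghost = " << nGhost << std::endl;

If the ghost count is zero as well on the seven empty processes, the
elements really are missing rather than present as ghost copies.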

The example code I used for this is attached.
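
In outline it does the following (a condensed sketch rather than the
attached file itself; identifiers and header paths follow the dune-grid
2.x interface and may differ between releases):

  #include <config.h>
  #include <cstdlib>
  #include <iostream>
  #include <vector>

  #include <dune/common/fvector.hh>
  #include <dune/common/parallel/mpihelper.hh> // dune/common/mpihelper.hh
                                               // in older releases
  #include <dune/grid/uggrid.hh>
  #include <dune/grid/common/gridfactory.hh>
  #include <dune/grid/io/file/vtk/vtkwriter.hh>

  int main(int argc, char** argv) {
    Dune::MPIHelper& mpi = Dune::MPIHelper::instance(argc, argv);
    const int refcount = (argc > 1) ? std::atoi(argv[1]) : 1;
    if (mpi.rank() == 0)
      std::cout << "UGgrid running on " << mpi.size()
                << " processes." << std::endl;

    typedef Dune::UGGrid<3> Grid;
    Dune::GridFactory<Grid> factory;

    if (mpi.rank() == 0) {
      // the macro grid lives on rank 0 until loadBalance(): insert
      // the eight corners of the unit cube in the reference-hexahedron
      // numbering (0,0,0), (1,0,0), (0,1,0), (1,1,0), ...
      for (unsigned int i = 0; i < 8; ++i) {
        Dune::FieldVector<double,3> v;
        v[0] = i & 1;  v[1] = (i >> 1) & 1;  v[2] = (i >> 2) & 1;
        factory.insertVertex(v);
      }
      // a single hexahedral macro element, no boundary segments
      std::vector<unsigned int> corners(8);
      for (unsigned int i = 0; i < 8; ++i) corners[i] = i;
      factory.insertElement(Dune::GeometryType(Dune::GeometryType::cube, 3),
                            corners);
    }

    Grid* grid = factory.createGrid();
    grid->globalRefine(refcount);  // 8^refcount leaf cells
    grid->loadBalance();           // default strategy, no arguments

    typedef Grid::LeafGridView GridView;
    GridView gv = grid->leafView();
    std::cout << "Number of grid cells = " << gv.size(0) << std::endl;

    // attach one datum per element and write one .vtu file per process
    std::vector<double> rank(gv.size(0), mpi.rank());
    Dune::VTKWriter<GridView> vtkwriter(gv);
    vtkwriter.addCellData(rank, "rank");
    vtkwriter.write("elementdata");

    delete grid;
    return 0;
  }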

Thanks a lot,

Eike

Oliver Sander wrote:
> On 12.11.2012 16:01, Eike Mueller wrote:
>> Hi Oliver,
>>
>> I tried several other strategies, but without any luck. Whatever I do, 
>> the algorithm seems to refuse to split up the macro cells, i.e. the 6 
>> elements I insert with the grid factory.
>>
>> I also tried to simplify the problem as much as possible. I now create
>> one unit cube with the grid factory, do not insert any boundary
>> segments, and refine the grid by calling globalRefine(refcount), so
>> that I end up with a Cartesian unit cube split into 8^refcount
>> elements. I then balance the grid with loadBalance(), i.e. with no
>> arguments. I would have thought that should work. Still, if I refine
>> 1, 2, or 3 times (so that I should end up with 8, 64, or 512
>> elements), in an 8-core run only one process stores the entire grid.
> This should really work; can you post your test program?
> 
> But be careful: if you load-balance small grids, then all processors
> get all elements, but most of them only as ghosts.  Did you
> check that?
> 
> cheers,
> Oliver
> 
>>
>> Could this be a problem with the grid factory?
>>
>> Thanks,
>>
>> Eike
>>
>> Oliver Sander wrote:
>>> On 06.11.2012 08:44, Markus Blatt wrote:
>>>> Hey Eike,
>>>>
>>>> On Mon, Nov 05, 2012 at 07:17:48PM +0000, Eike Mueller wrote:
>>>>>> And load balancing would only be needed for the macro grid, i.e.,
>>>>>> not dynamically.
>>>>>>
>>>>> That's right; in the code I would refine the grid until there is
>>>>> only one cell per processor (this is the macro grid). Then I would
>>>>> call loadBalance, followed by further grid refinement. So, for
>>>>> example, with 24 processors I would subdivide each of the six cells
>>>>> in the original grid into four cells, then load-balance that grid
>>>>> and refine further.
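
(As a code sketch, the intended ordering is

   grid->globalRefine(coarseSteps);  // until there are at least as many
                                     //   cells as processes
   grid->loadBalance();              // distribute the macro grid once
   grid->globalRefine(fineSteps);    // all further refinement happens
                                     //   in parallel

with coarseSteps and fineSteps as placeholder names for the two
refinement depths.)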
>>>> Actually, this approach could be the root of the problem. The
>>>> load balancing is a heuristic algorithm, and normally one always
>>>> gets some load imbalance here. But if you have just as many cells
>>>> as processors, then naturally some processors will end up with no
>>>> cells at all.
>>> This is true, but it is not the whole truth.
>>>
>>> The default load-balancing strategy of UG is Recursive Coordinate
>>> Bisection. This means, roughly, that the grid bounding box is
>>> partitioned into axis-aligned cells, and these cells are assigned
>>> to processors.  I reckon (I didn't check) that, since your grid is
>>> a hollow sphere, some cells simply remain empty.
>>>
>>> UG offers several other strategies, but all of this is hardly
>>> tested. Have a look at the lbs method in ug/parallel/dddif/lb.c:533
>>> for some alternatives.
>>>
>>> good luck,
>>> Oliver
>>>
>>>>
>>>> How about doing some more refinement before load balancing?
>>>>
>>>> Cheers,
>>>>
>>>> Markus
>>>>
>>>
>>>


-- 
Dr Eike Mueller
Research Officer

Department of Mathematical Sciences
University of Bath
Bath BA2 7AY, United Kingdom

+44 1225 38 5633
e.mueller at bath.ac.uk
http://people.bath.ac.uk/em459/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: uggrid.cc
Type: text/x-c++src
Size: 3980 bytes
Desc: not available
URL: <https://lists.dune-project.org/pipermail/dune/attachments/20121112/f62d46c8/attachment.cc>

