[Dune] parallel DUNE grid for implementing a spherical shell (UGGrid)

Martin Nolte nolte at mathematik.uni-freiburg.de
Fri Nov 16 22:50:32 CET 2012


Hi Eike, hi Andreas,

the problem is not the DGFWriter but the DGFParser itself. The DGF format only 
supports blocks for simplices or cubes, so the DGFWriter cannot write prisms or 
pyramids (without conversion to simplices). I don't think extending the DGF 
parser is very complicated, but it will require some effort.
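
For illustration, converting a single prism could look like the sketch below (untested; the function name is my own, and I assume the prism vertices are numbered 0-2 on the bottom triangle and 3-5 on the top, as in the DUNE reference element). Note that neighbouring prisms have to split their common quadrilateral faces consistently, e.g. by choosing the diagonals based on global vertex indices.

#include <array>
#include <vector>

// Split one prism (vertices 0,1,2: bottom triangle, 3,4,5: top triangle)
// into three tetrahedra; their volumes add up to the volume of the prism.
std::vector< std::array< unsigned int, 4 > >
prismToTetrahedra ( const std::array< unsigned int, 6 > &p )
{
  std::vector< std::array< unsigned int, 4 > > tets;
  tets.push_back( {{ p[0], p[1], p[2], p[3] }} );
  tets.push_back( {{ p[1], p[2], p[3], p[4] }} );
  tets.push_back( {{ p[2], p[3], p[4], p[5] }} );
  return tets;
}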

Best,

Martin

On 11/16/2012 09:21 PM, Dedner, Andreas wrote:
> At the moment dgfwriter.hh only writes cubes and simplices - have a look.
> It should not take much effort to extend it to other element types.
> Best
> Andreas
> ________________________________________
> From: dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org [dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org] on behalf of Eike Mueller [E.Mueller at bath.ac.uk]
> Sent: 16 November 2012 18:19
> To: dune at dune-project.org
> Subject: Re: [Dune] parallel DUNE grid for implementing a spherical shell (UGGrid)
>
> Dear all,
>
> I can now also generate the icosahedral macro grid (see attached .vtu file). However, when I write it out with the DGFWriter,
> the prisms do not get written to disk. Is there a way around this, or do I have to do it by hand?
>
> Thanks a lot,
>
> Eike
>
>
>
> Eike Mueller wrote:
>> Dear all,
>>
>> thank you very much for all your help. I have now solved the load
>> balancing issue, at least for the hexahedral grid.
>> I wrote some sequential code which creates a macro grid with p elements
>> using the UGGrid factory and saves it as a .dgf file. I then wrote a
>> parser for .dgf files which reads the grid from disk and also adds
>> boundary segments (this is not supported for UG at the moment). If I
>> then read my .dgf file on p processors, all I need to do is call
>> loadBalance(), and each processor automatically ends up with one element,
>> which I can then refine further. It means that at some point I store the
>> entire macro grid on one processor, but I think that is fine, as this
>> grid will contain at most tens of thousands of cells, and there is
>> probably no way to avoid this, even if I set up the grid with a factory
>> in one go.
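>>
>> (For reference, the read-and-balance part would look roughly like the
>> sketch below; this is untested, the file name and refinement depth are
>> just placeholders, and names and include paths may differ between DUNE
>> versions.)
>>
>> #include <iostream>
>> #include <dune/common/parallel/mpihelper.hh>
>> #include <dune/grid/uggrid.hh>
>> #include <dune/grid/io/file/dgfparser/dgfug.hh>  // DGF bindings for UGGrid
>>
>> int main ( int argc, char **argv )
>> {
>>   Dune::MPIHelper &mpi = Dune::MPIHelper::instance( argc, argv );
>>
>>   // every rank reads the same macro grid, e.g. one with p elements
>>   Dune::GridPtr< Dune::UGGrid< 3 > > gridPtr( "macrogrid.dgf" );
>>
>>   gridPtr->loadBalance();      // distribute the macro elements
>>   gridPtr->globalRefine( 3 );  // refine the local part further
>>
>>   std::cout << "rank " << mpi.rank() << ": "
>>             << gridPtr->leafGridView().size( 0 ) << " leaf elements"
>>             << std::endl;
>>   return 0;
>> }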
>>
>> I did come across some strange behaviour in the DGFWriter, though. When
>> writing out the file, I got a
>>
>> Warning: Ignoring nonpositive boundary id: 0
>>
>> and it would not add this boundary segment to the .dgf file, which meant
>> I could not read it properly later. I traced this down to this bit of
>> code in dune/grid/io/file/dgfparser/dgfwriter.hh:
>>
>> inline void DGFWriter< GV >::write ( std::ostream &gridout ) const
>> [...]
>>   if( boundaryId <= 0 )
>>   {
>>     std::cerr << "Warning: Ignoring nonpositive boundary id: "
>>               << boundaryId << "." << std::endl;
>>     continue;
>>   }
>> [...]
>>
>> If I change the condition to "if( boundaryId < 0 )", it works. Why does
>> it exclude boundaryId = 0?
>>
>> The next goal is to do the same for the icosahedral grid; for this I will
>> need prisms, and again they should only be subdivided in the horizontal
>> direction.
>>
>> Thanks, Eike
>>
>> PS: Andreas, the code has been checked into the GungHo! repository.
>>
>> Oliver Sander wrote:
>>> Am 13.11.2012 12:29, schrieb Dedner, Andreas:
>>>> I might be wrong, but I think Yasp only distributes the macro grid,
>>>> like ALU does. So if you start with only one element, there is no way
>>>> to load-balance it. As far as I know, only UG differs from this
>>>> behaviour, allowing leaf elements to be load-balanced "arbitrarily"
>>>> over the processors...
>>> Yes, UG supposedly does that. It is called 'vertical load balancing'.
>>> I have never actually done that, though (never had to). Therefore I
>>> don't really know how it works and whether you need special flags or
>>> anything.
>>> --
>>> Oliver
>>>
>>>> ________________________________________
>>>> From: dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org
>>>> [dune-bounces+a.s.dedner=warwick.ac.uk at dune-project.org] on behalf of
>>>> Eike Mueller [E.Mueller at bath.ac.uk]
>>>> Sent: 13 November 2012 11:04
>>>> To: Oliver Sander
>>>> Cc: dune at dune-project.org
>>>> Subject: Re: [Dune] parallel DUNE grid for implementing a spherical
>>>> shell (UGGrid)
>>>>
>>>> To make it even simpler, I replaced the UGGrid by a 1x1x1 YaspGrid,
>>>> i.e. I start with a grid with one element. If I
>>>> globalRefine() this once and then loadBalance(), one processor
>>>> always ends up with the entire grid. I must be missing something
>>>> very basic here.
>>>> Does the load balancing in DUNE assume that if a father cell is owned
>>>> by a processor, then all its children on the finer levels
>>>> are owned by the same processor? But then calling loadBalance() after
>>>> grid refinement would not make sense. If I start with a
>>>> 2x2x2 grid and do not refine, then it works, i.e. if I run on 8
>>>> cores, each of them ends up with one element.
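>>>>
>>>> (A minimal version of the single-element test would look roughly like
>>>> this; an untested sketch, and the YaspGrid constructor arguments differ
>>>> between DUNE versions.)
>>>>
>>>> #include <array>
>>>> #include <iostream>
>>>> #include <dune/common/fvector.hh>
>>>> #include <dune/common/parallel/mpihelper.hh>
>>>> #include <dune/grid/yaspgrid.hh>
>>>>
>>>> int main ( int argc, char **argv )
>>>> {
>>>>   Dune::MPIHelper &mpi = Dune::MPIHelper::instance( argc, argv );
>>>>
>>>>   Dune::FieldVector< double, 3 > upper( 1.0 );  // unit cube
>>>>   std::array< int, 3 > cells = {{ 1, 1, 1 }};   // one macro element
>>>>   Dune::YaspGrid< 3 > grid( upper, cells );
>>>>
>>>>   grid.globalRefine( 1 );  // 8 leaf elements
>>>>   grid.loadBalance();
>>>>
>>>>   std::cout << "rank " << mpi.rank() << ": "
>>>>             << grid.leafGridView().size( 0 )
>>>>             << " leaf elements (incl. overlap)" << std::endl;
>>>>   return 0;
>>>> }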
>>>>
>>>> Thanks,
>>>>
>>>> Eike
>>>>
>>>> Oliver Sander wrote:
>>>>> Am 12.11.2012 16:01, schrieb Eike Mueller:
>>>>>> Hi Oliver,
>>>>>>
>>>>>> I tried several other strategies, but without any luck. Whatever I do,
>>>>>> the algorithm seems to refuse to split up the macro cells, i.e. the 6
>>>>>> elements I insert with the grid factory.
>>>>>>
>>>>>> I also tried to simplify the problem as much as possible. I now create
>>>>>> one unit cube with the grid factory, do not insert any boundary
>>>>>> segments, and refine the grid by calling globalRefine(refcount), so
>>>>>> that I end up with a Cartesian unit cube split into 8^refcount
>>>>>> elements. I then balance the grid with loadBalance() (i.e. no
>>>>>> arguments). I would have thought that that should work. Still, if I
>>>>>> refine 1, 2, or 3 times (i.e. I should end up with 8, 64, or 512
>>>>>> elements), on an 8-core run only one process stores the entire grid.
>>>>> This should really work, can you post your test program?
>>>>>
>>>>> But be careful: If you load-balance small grids then all processors
>>>>> get all elements, but most of them only as ghosts.  Did you
>>>>> check that?
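>>>>>
>>>>> (Something along these lines should show it; a sketch, the helper
>>>>> name is made up.)
>>>>>
>>>>> #include <cstddef>
>>>>> #include <dune/grid/common/gridenums.hh>  // Dune::InteriorEntity
>>>>>
>>>>> // count only the elements this rank owns, ignoring ghost/overlap copies
>>>>> // (alternatively, iterate with begin< 0, Dune::Interior_Partition >())
>>>>> template< class GridView >
>>>>> std::size_t countInteriorElements ( const GridView &gridView )
>>>>> {
>>>>>   typedef typename GridView::template Codim< 0 >::Iterator Iterator;
>>>>>   std::size_t count = 0;
>>>>>   for( Iterator it = gridView.template begin< 0 >();
>>>>>        it != gridView.template end< 0 >(); ++it )
>>>>>     if( it->partitionType() == Dune::InteriorEntity )
>>>>>       ++count;
>>>>>   return count;
>>>>> }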
>>>>>
>>>>> cheers,
>>>>> Oliver
>>>>>
>>>>>> Could this be a problem with the grid factory?
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Eike
>>>>>>
>>>>>> Oliver Sander wrote:
>>>>>>> Am 06.11.2012 08:44, schrieb Markus Blatt:
>>>>>>>> Hey Eike,
>>>>>>>>
>>>>>>>> On Mon, Nov 05, 2012 at 07:17:48PM +0000, Eike Mueller wrote:
>>>>>>>>>> And loadbalancing would only be needed for the macrogrid, e.g.,
>>>>>>>>>> not dynamic.
>>>>>>>>>>
>>>>>>>>> That's right, in the code I would refine the grid until there is
>>>>>>>>> only one cell per processor (this is the macrogrid). Then I would
>>>>>>>>> call loadBalance, followed by further grid refinement. So, for
>>>>>>>>> example, with 24 processors I would subdivide each of the six
>>>>>>>>> cells in the original grid into four cells, then load-balance that
>>>>>>>>> grid and refine further.
>>>>>>>> Actually, this approach could be the root of the problem. The
>>>>>>>> load balancing is a heuristic algorithm, and normally one always
>>>>>>>> gets some load imbalance here. But if you have just as many cells
>>>>>>>> as processors, then naturally some will end up with no cells at all.
>>>>>>> This is true, but it is not the whole truth.
>>>>>>>
>>>>>>> The default load balancing strategy of UG is Recursive Coordinate
>>>>>>> Bisection. This means, roughly, that the grid bounding box is
>>>>>>> partitioned into axis-aligned cells, and these cells are assigned
>>>>>>> to processors. I reckon (I didn't check) that since your grid is a
>>>>>>> hollow sphere, some cells simply remain empty.
>>>>>>>
>>>>>>> UG offers several other strategies, but all of this is hardly
>>>>>>> tested. Have a look at the lbs method in ug/parallel/dddif/lb.c:533
>>>>>>> for some alternatives.
>>>>>>>
>>>>>>> good luck,
>>>>>>> Oliver
>>>>>>>
>>>>>>>> How about doing some more refinement before load balancing?
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> Markus
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>
>
> --
> Dr Eike Mueller
> Research Officer
>
> Department of Mathematical Sciences
> University of Bath
> Bath BA2 7AY, United Kingdom
>
> +44 1225 38 5633
> e.mueller at bath.ac.uk
> http://people.bath.ac.uk/em459/
>
>
> _______________________________________________
> Dune mailing list
> Dune at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune

-- 
Dr. Martin Nolte <nolte at mathematik.uni-freiburg.de>

Universität Freiburg                                   phone: +49-761-203-5630
Abteilung für angewandte Mathematik                    fax:   +49-761-203-5632
Hermann-Herder-Straße 10
79104 Freiburg, Germany



