[Dune] Fwd: Re: How to implement point to point communication

Andreas Dedner a.s.dedner at warwick.ac.uk
Thu Oct 16 17:43:49 CEST 2014


On 16/10/14 15:38, Aleksejs Fomins wrote:
> Dear Andreas,
>
> I think you guessed right, but let me write it again just to be sure. We
> want to do our own partitioning for 2 reasons:
> 1) The elements will initially be stored on processes in an irregular
> way. I assume that creating a grid from that chaos would cause the
> hostgrid to do a lot of unnecessary work, as the elements would be
> scattered over the domain, opposed to making a nice connected subdomain.
Yes - that would cause a lot of overhead for alugrid
>
> 2) Curvilinear elements can, in principle, have different interpolation
> orders in the same mesh. Thus, the amount of computation per element may
> vary. Main causes are
> * Computation of global-to-local and local-to-global maps, as well as
> integration over elements will be progressively more expensive with
> increasing interpolation order
> * When doing FEM calculations involving basis functions, it is
> reasonable to assume that a highly curved element would require more
> basis functions to describe the fine internal processes than the
> straight-sided one. Roughly speaking, the number of interpolation
> functions increases quadratically with the polynomial order, so the
> number of matrix elements could increase quartically.
This could be handled by using e.g. the option to provide weights during
the call
to ALUGrid loadbalancing.
>
>
> To summarize, we need a partitioning algorithm that would be able to
> manage weights, and run before AluGridFactory.createGrid(). Whether or
> not it uses parmetis is not important, I speak of parmetis is because it
> is the only thing I have experience with at the moment.
Agree. Just one option to consider: the partitioning could also be a
preprocessing step/
So instead of doing it within a dune structure one could simply have a
"partition.cc" which
does the partitioning and then writes partitioned gmsh files for your
parallel reader which
at least avoid your first point and are also correctly balanced w.r.t.
your second point.
In principal your approach is a preprocessing step anyway so one doesn't
necessarily want to do this
every time a simulation is started anyway. On the other hand if the
whole thing is contained within a
extra program partition.cc then runtime does perhaps not play such an
important role and you
could use the partitioning algorithm and parallel gridfactory methods
available in ALUGrid and except the
additional overhead caused for example by your first point. If you use
the BackupRestore
facility then you do not even need to write partitioned gmsh files. If I
understand you correctly
that would already work with what you have - if that approach then turns
out not to be feasible you
can still add a partitioning into the GridFactory. But perhaps that
approach does not harmonize well with
your workflow.
Andreas
>
> ---------------------------------------------------------------------
> The strategy for the CurvilinearGridFactory would then be as follows:
>
> 1) Add all vertices on process (global index mandatory)
>
> 2) Read all elements on process (type, interpOrder, vertexIndexVector
> mandatory)
>
> 3) Read all boundarySegments on process (type, interpOrder,
> vertexIndexVector mandatory, relatedElement optional). It is mandatory
> to insert all boundarySegments. At the moment it is also mandatory that
> each process only inserts boundarySegments which are connected to its
> elements, because otherwise it is not trivial how to communicate them
> afterwards. Therefore, when inserting the boundary segment we also ask
> to insert the index of the element the boundary is a face to. It is
> reasonable to assume that the reader would have to calculate this
> information when figuring out which boundaries to read from the file.
>
> 4) call createGrid. Have a vector of coordinates, elements and
> boundartysegments
>
> 4.1) Call partition method, receive which element should go to which
> process.
>
> 4.2) Communicate all elements and boundarySegments such that each
> element only has elements that belong to it. Boundary segments are
> communicated together with the elements they are a face to.
>
> 4.3) Each process deletes all vertices it does not need any more
>
> 4.4) For each element and boundarySegment - compute corners, insert
> corners and linear element to hostgrid
>
> 4.5) Compute and insert processBoundaries to HostGrid
>
> 4.6) Create HostGrid. Delete HostGridFactory
>
> 4.7) Create MetaGrid(HostGrid). This would mainly correspond to locating
> elements in ALUGrid and making maps to link them with the curvilinear
> elements in the metagrid.
>
> ----------------------------------------------------------------
>
> I see your point that making a general MetaGridFactory is a very nice
> idea. I guess that I could create a generic factory and then create a
> CurvilinearGridFactory to inherit from it, as it would have some
> separate methods.
>
> Regards,
> Aleksejs
>
> On 10/16/2014 10:48 AM, Andreas Dedner wrote:
>> Hi.
>>
>> To make one thing clear from the start, I think this is a great project
>> and very useful for dune.
>> That was for me always clear. But what I was missing was a clear idea
>> what the plan was, i.e.,
>> a list like the one you just wrote. Of course there are many dune
>> projects going on where I have at
>> best an idea of what the aim is and no more - and that is fine. But to
>> provide any useful input,
>> a clearer idea helps and just discussing it with Peter does not provide
>> me or others with a clear picture.
>>
>> Even after reading your description of the project I am still a bit
>> unsure about the partitioning.
>> To summarize my understanding:
>> In one case, one has a partitioned gmsh file. Then repartitioning is
>> probably not required?
>> In the other case, the gmsh file is on one process only and the gmsh
>> reader adds everything into
>> the (meta)gridfactory. Is now the idea that the gridfactory first stores
>> the inserted element/vertices
>> itself (so does not call the insert methods on the hostgrid). Then in
>> the createGrid method
>> the inserted elements are partitioned and then each element calls the
>> insert method on the host
>> gridfactory for its own elements? That sounds like a very useful
>> "MetaGridFactory" in its own right, i.e.,
>> independent of a CurvilinearGeometryGrid. But perhaps/I misunderstood
>> the idea. /
>> If that is the idea, I would suggest to consider a callback approach
>> similar to the one we are using
>> for the repartitioning in ALUGrid. So the method createGrid gets a
>> callback object which it can ask
>> for a partition number to which to send the process. That would mean
>> that the code would not be
>> restricted to ParMetis.
>>
>> Best
>> Andreas
>>
>> On 16/10/14 08:47, Benedikt Oswald wrote:
>>> Dear Dune,
>>>
>>> for ca. 3 months now we have been working to implement a higher order
>>> curvilinear grid manager,
>>> using the concept of the metagrid on top of a host grid.
>>>
>>> In particular, Aleksejs Fomins is in charge of this project within
>>> LSPR AG.
>>>
>>> I should also mention that the sources of this project are publicly
>>> available on github
>>> (clone of dune-geometry, dune-curvilineargrid)
>>>
>>>
>>> The plan has been as follows:
>>> =======================
>>>
>>> 1) implement the curvilinear geometry in the class LagrangeGeometry
>>> which handles
>>>      the curvilinear, tetrahedral geometry using Lagrange polynomials
>>>
>>>      we comment that at present we implement up to order 5 but in
>>> principle
>>>      we can implement higher orders as well, if required.
>>>
>>>      status: implemented & tested
>>>
>>>
>>> 2) use the concept of the meta grid, embodied in the GeometryGrid
>>>
>>>
>>> 3) after discussions with Peter Bastian, we decided to create a new
>>> Dune module,
>>>      i.e.dune-curvilineargrid that uses sources from GeometryGrid
>>> which have been
>>>      renamed to reflect the new module name; in particular,
>>> GeometryGrid in its
>>>      present form simply does not do what is needed.
>>>
>>>      We have decided to use the new dune-alugrid module as the host
>>> grid and
>>>      we appreciate the ALUGrid effort very much.
>>>
>>>      status: in progress
>>>
>>>
>>> 4) we wish to avoid the bottleneck of reading the whole mesh on the
>>> master node only
>>>      and have therefore implement a parallel gmsh reader which reads
>>> the full curvilinear
>>>      gmsh .msh format, including tags and everything
>>>
>>>     status: operational
>>>
>>>
>>> 5) as a consequence, we need to repartition the mesh before we create
>>> the grid,both
>>>      that is the reason why Aleksejs asked for advice on how to
>>> communicate elements
>>>      between the MPI processes;  in fact, we have found a solution and
>>> use the well
>>>      known CLINK protocol
>>>
>>>
>>> 6) as a result, we estimate that we will have a first version of the
>>> parallel dune-curviineargrid
>>>      module operational in December;
>>>
>>>      the Dune community is very welcome to test it and comment on it.
>>>
>>>
>>> We really appreciate the support given to us from the Dune mailing
>>> list and we appreciate
>>> the Dune effort enormously. On the other hand, we openly admit that,
>>> sometimes in the past,
>>> we perceived certain comments as a tad 'grossfürstlich'.
>>>
>>>
>>> with the best of intentions and wishes for a wonderful day,
>>>
>>> Benedikt
>>>
>>>
>>>
>>>
>>>
>>>
>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>
>>> Dr. sc. techn. Benedikt Oswald - first engineer - LSPR AG - phone -
>>> +41 43 366 90 74
>>> Technoparkstrasse 1, CH-8005 Zürich, benedikt.oswald at lspr.ch - labor
>>> vincit omnia improbus
>>> --------------------------------------------------------------------------------------------------------------------------------------------------------------
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Dune mailing list
>>> Dune at dune-project.org
>>> http://lists.dune-project.org/mailman/listinfo/dune
>>
>>
>>
>> _______________________________________________
>> Dune mailing list
>> Dune at dune-project.org
>> http://lists.dune-project.org/mailman/listinfo/dune
>>
>
>
>
>
>
>
> _______________________________________________
> Dune mailing list
> Dune at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://lists.dune-project.org/pipermail/dune/attachments/20141016/67b2189d/attachment.htm>


More information about the Dune mailing list