[dune-pdelab] Implementing OVLP_AMG_4_DG solver
Lukas Riedel
mail at lukasriedel.com
Wed Mar 29 12:27:23 CEST 2017
Hi Christian,
I fear I’ll really have to implement the communication for this myself, as I can’t get it to work with any combination of PDELab constraints classes.
The combination where the solver actually computes a solution is the following:
Grid: YaspGrid
DG Space: OverlappingEntitySet, QkDGLocalFiniteElementMap (order > 0), NoConstraints or P0ParallelConstraints
CG Space: OverlappingEntitySet, QkDGLocalFiniteElementMap (order 0), P0ParallelConstraints
Here, the PDELab::constraints() routine actually reports some constrained DOFs. However, plots of the solution show no apparent communication across the processor boundaries. On UG, this combination does not work at all, as the assembly apparently encounters ghost DOFs (error message: Exception [scatter:/Users/lriedel/dune/dune-pdelab/dune/pdelab/gridfunctionspace/genericdatahandle.hh:300]: expected no DOFs in partition 'ghost', but have 1).
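For reference, this is roughly how I set up the two spaces for that combination. It is only a minimal sketch rather than my actual code; the helper function name is made up, and the vector backend choice (and whether its namespace is spelled ISTL:: or istl::) may differ between PDELab versions:

#include <dune/pdelab/common/partitionviewentityset.hh>
#include <dune/pdelab/finiteelementmap/qkdg.hh>
#include <dune/pdelab/constraints/p0.hh>
#include <dune/pdelab/gridfunctionspace/gridfunctionspace.hh>
#include <dune/pdelab/backend/istl.hh>

// Sketch: DG space (order > 0) and "CG" space (order 0), both on the
// overlapping entity set, both with P0ParallelConstraints.
template<int degree, typename Grid>
void makeSpaces(Grid& grid)   // hypothetical helper, just for illustration
{
  using GV = typename Grid::LeafGridView;
  using ES = Dune::PDELab::OverlappingEntitySet<GV>;
  using DF = typename GV::ctype;
  using RF = double;
  constexpr int dim = GV::dimension;

  ES es(grid.leafGridView());

  using VBE = Dune::PDELab::ISTL::VectorBackend<>;  // may be istl::VectorBackend in older releases
  using CON = Dune::PDELab::P0ParallelConstraints;

  // DG solution space
  using DGFEM = Dune::PDELab::QkDGLocalFiniteElementMap<DF,RF,degree,dim>;
  using DGGFS = Dune::PDELab::GridFunctionSpace<ES,DGFEM,CON,VBE>;
  DGFEM dgfem;
  DGGFS dggfs(es, dgfem);

  // order-0 space intended for the AMG coarse level
  using CGFEM = Dune::PDELab::QkDGLocalFiniteElementMap<DF,RF,0,dim>;
  using CGGFS = Dune::PDELab::GridFunctionSpace<ES,CGFEM,CON,VBE>;
  CGFEM cgfem;
  CGGFS cggfs(es, cgfem);
}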
I also tried the combination
Grid: YaspGrid
DG Space: OverlappingEntitySet, QkDGLocalFiniteElementMap (order > 0), OverlappingConformingDirichletConstraints
CG Space: OverlappingEntitySet, QkDGLocalFiniteElementMap (order 0), OverlappingConformingDirichletConstraints
with NoDirichletConstraintsParameters passed to PDELab::constraints(), since I handle boundary conditions implicitly in my local operator.
Strangely, this combination results in zero constrained DOFs for both grid function spaces.
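For completeness, this is how I assemble the constraints containers in that second setup. Again just a sketch continuing the snippet above, with CON swapped to OverlappingConformingDirichletConstraints; it additionally needs the constraints headers (<dune/pdelab/constraints/conforming.hh>, <dune/pdelab/constraints/common/constraints.hh>) and <iostream>:

  // continuing inside the sketch above: constraints containers for both spaces
  Dune::PDELab::NoDirichletConstraintsParameters nodirichlet;

  using DGCC = typename DGGFS::template ConstraintsContainer<RF>::Type;
  DGCC dgcc;
  Dune::PDELab::constraints(nodirichlet, dggfs, dgcc);

  using CGCC = typename CGGFS::template ConstraintsContainer<RF>::Type;
  CGCC cgcc;
  Dune::PDELab::constraints(nodirichlet, cggfs, cgcc);

  // this is where I see zero constrained DOFs for both spaces
  std::cout << "constrained DG DOFs: " << dgcc.size()
            << ", constrained CG DOFs: " << cgcc.size() << std::endl;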
Using NonOverlappingEntitySet and P0ParallelGhostConstraints always results in segfaults on both YaspGrid and UG.
I suspect I won’t find a single combination that suits both grid managers and both finite element maps I use, but so far I haven’t found any working combination at all. Is there any more advice you could give me?
Thanks a lot!
Cheers,
Lukas
> On 6 Mar 2017, at 18:10, Christian Engwer <christian.engwer at uni-muenster.de> wrote:
>
> Hi Lukas,
>
> as the DG discretization only requires face neighbors, the ghosts
> should be sufficient. The problem is the low order space.
>
> You basically have two options: p0 (aka Finite Volume), which should
> also work with ghosts. The other option is p1/q1, which requires all
> vertex neighbors to be present on the local machine, or you have to
> add an additional communication to obtain the full matrix rows (this
> is what is implemented in the non-overlapping version).
>
> From that point of view the p0 version would be better, but in my
> experience its convergence is significantly slower. With the p1/q1
> version you obtain (as in the standard case) an h-independent
> convergence rate between 0.1 and 0.2. In the p0 case the rate is also
> h-independent (so it is still optimal), but the constant is worse
> (between 0.6 and 0.8).
>
> Ciao
> Christian
>
> On Mon, Mar 06, 2017 at 01:11:08PM +0100, Lukas Riedel wrote:
>> Dear all,
>>
>> I’m trying to ‘go parallel’ with our instationary DG solver by implementing the OVLP_AMG_4_DG linear solver. I use UG and YaspGrid as grid backends. Apparently, UG does not support Overlap, but only Ghosts. Can I still use this linear solver in both cases or do I need different solver implementations for the two grids?
>>
>> I use PkLocalFiniteElementMap (UG grid only) or QkDGLocalFiniteElementMap (UG & Yasp) for my solution (DG) GFS. For the solver to work, I understand that I have to assemble another (CG) GFS with the same FEMs, but with polynomial order 0. Which EntitySet and which Constraints should I use for the two GridFunctionSpaces? Does this choice depend on the type of Grid used?
>>
>> Thank you in advance!
>> Cheers,
>> Lukas