[dune-pdelab] parallel pdelab for simple problem

Peter Bastian peter.bastian at iwr.uni-heidelberg.de
Wed Sep 28 12:45:06 CEST 2011


Hello Eike,

Am 28.09.2011 um 09:23 schrieb Eike Mueller:

> Hello Peter,
> 
> Ah, ok, thanks, I still get confused by the difference between overlap and ghost. I thought that with a non-overlapping grid each element can still see its neighbours, but that those are ghost cells; maybe the difference is that the residual is not updated in the neighbouring cells? Or does it not even loop over the codim-1 intersections on the boundary between processors, because the neighbours count as ghosts, and hence never call alpha_skeleton on those faces?
> I guess the FEM scheme in example 1 or 2 would work, since there only the volume and boundary integrals are evaluated.
> 
The thing is that YaspGrid with overlap=0 has NO ghost cells, in contrast to UG and ALUGrid,
which are nonoverlapping grids with one layer of ghost cells added. You achieve (nearly)
the same thing with YaspGrid by using overlap=1 (but then the extra cells are of type
overlap, not ghost).
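To make the distinction concrete, here is a minimal sketch of constructing a
3d YaspGrid with one layer of overlap cells. The exact constructor signature
depends on the dune-grid version, so treat the argument list as an assumption
and check it against yaspgrid.hh:

  const int dim = 3;
  Dune::FieldVector<double,dim> L(1.0);      // domain is the unit cube
  Dune::FieldVector<int,dim>    N(16);       // cells per coordinate direction
  Dune::FieldVector<bool,dim>   periodic(false);
  int overlap = 1;                           // one layer of overlap cells per process
  Dune::YaspGrid<dim> grid(Dune::MPIHelper::getCommunicator(),
                           L, N, periodic, overlap);

With overlap=0 the same constructor gives a strictly nonoverlapping
decomposition without any ghost cells.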

> For the cell-centered scheme I'm trying the overlapping CG solver with AMG preconditioner now. Will the amount of overlap have any impact on the results, or is 1 enough?
> 
With AMG the overlap should have no influence. Anything >1 is discarded. So 1 should be fine.
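If it helps, a hedged sketch of what the overlapping CG/AMG backend setup might
look like; the backend class name and constructor arguments below are
assumptions, so compare with istlsolverbackend.hh in your pdelab version:

  // GO and gfs are the grid operator type and grid function space from the
  // surrounding example (assumed names)
  typedef Dune::PDELab::ISTLBackend_CG_AMG_SSOR<GO> LS;  // overlapping CG with AMG
  LS ls(gfs, 5000, 1);                                   // max iterations, verbosity (assumed ctor)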

Peter

> Eike
> 
> On 28 Sep 2011, at 08:04, Peter Bastian wrote:
> 
>> Hello Eike,
>> 
>> YaspGrid can be used as a nonoverlapping (overlap=0) or an
>> overlapping grid (overlap>0). Since example 4 is a cell-centered scheme,
>> it only makes sense to use overlap>0. Then you have to use the
>> OVLP... backends, e.g. overlapping Schwarz with some subdomain solver.
>> As constraints, use P0ParallelConstraints. An example can be found
>> in the pdelab howto in src/convection-diffusion/transporttest.cc.
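>> A rough sketch of the pieces for the cell-centered case (the grid view gv,
>> the P0 finite element map fem, the vector backend VBE and bctype are assumed
>> to come from the example; the backend constructor arguments should be
>> checked against transporttest.cc):
>> 
>>   // constraints for P0 / cell-centered FV on an overlapping grid
>>   typedef Dune::PDELab::P0ParallelConstraints CON;
>>   typedef Dune::PDELab::GridFunctionSpace<GV,FEM,CON,VBE> GFS;
>>   GFS gfs(gv,fem);
>>   typedef typename GFS::template ConstraintsContainer<double>::Type CC;
>>   CC cc;
>>   Dune::PDELab::constraints(bctype,gfs,cc);
>> 
>>   // overlapping Schwarz solver: BiCGStab with an SSOR subdomain solver
>>   typedef Dune::PDELab::ISTLBackend_OVLP_BCGS_SSORk<GFS,CC> LS;
>>   LS ls(gfs,cc,5000,5,1);   // max iterations, SSOR steps, verbosity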
>> 
>> Sorry, it is a bit confusing.
>> 
>> Best,
>> 
>> Peter
>> 
>> 
>> 
>> Am 27.09.2011 um 10:39 schrieb Eike Mueller:
>> 
>>> Dear dune-pdelab list,
>>> 
>>> I'm trying to adapt my serial code to run in parallel. Basically, I took example 4 from the pdelab howto (I modified the local operator slightly) and use a 3d YaspGrid. I then modified my code according to section 3.3, i.e. I changed the constraints, use a parallel grid (again, YaspGrid with overlap 0) and changed the solver backend to one of the non-overlapping backends. However, the results are wrong: if I run with 8 cores, the solution is discontinuous across the faces that separate the eight subdomains. Also, alpha_skeleton of the local operator does not seem to get called for the faces between the subdomains. Is there an example of how to modify the local operator to make it parallel? Or do I have to choose a non-zero overlap in this case?
>>> 
>>> Thank you very much,
>>> 
>>> Eike
>>> 

------------------------------------------------------------
Peter Bastian
Interdisziplinäres Zentrum für Wissenschaftliches Rechnen
Universität Heidelberg
Im Neuenheimer Feld 368
D-69120 Heidelberg
Tel: 0049 (0) 6221 548261
Fax: 0049 (0) 6221 548884
email: peter.bastian at iwr.uni-heidelberg.de
web: http://conan.iwr.uni-heidelberg.de/people/peter/




