[Dune] Parallel Jacobi, application example, similar to overlapping Schwarz ?

Markus Blatt Markus.Blatt at ipvs.uni-stuttgart.de
Wed Jan 24 17:50:47 CET 2007


On Mon, Jan 22, 2007 at 04:42:18PM +0100, Oswald Benedikt wrote:
> Hi All,
> 
> we are in the midst of parallelizing our finite-element time-domain
> Maxwell solver using DUNE/ISTL. Up to now, we have used the sequential
> Jacobi solver provided by DUNE/ISTL. To study reasonably sized systems,
> in particular for validating them against analytical solutions, we need
> to run larger problems. That's why we must go parallel.
> 
> Here is what we do: we use a 3D tetrahedral ALUGrid mesh, assemble
> the FE matrices on every CPU, and then solve the resulting linear system.
> 
> Where we would appreciate some input is on how to practically use the
> ISTL Jacobi class, in a similar way to the overlapping Schwarz classes,
> which are templated by communication interface classes.
>

In general, there are two different parallel solver approaches in DUNE:
- One is using overlapping additive Schwarz methods, implemented in
  the dune-dd module. They require a parallel grid with overlap, like
  YaspGrid. Each process does its local finite element
  discretization, and the sequential iterative solvers are used to
  solve the problem on the processor's subdomain. All communication is
  done via the grid and is encapsulated in the
  OverlappingSchwarzScalarProduct and the OverlappingSchwarzOperator.
  As you are using ALUGrid, which (AFAIK) has no overlap, this choice
  is not possible for you.

- The other way is to use the parallel solvers of ISTL. They do not
  require an overlapping grid. Actually, they are totally decoupled
  from dune-grid, but they do need a special matrix setup.

  If you are interested in P1 finite elements, you are all set: just
  use dune-disc with the extend-overlap flags set accordingly. See the
  example.

  If you have a nonoverlapping grid, the first step is to compute a
  disjoint splitting of the entities that the nodal basis functions of
  your finite element space are attached to. (The simplest way to do
  this is to say that if more than one process knows an entity, the
  one with the lowest rank includes this entity in its part of the
  disjoint splitting.)
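  A minimal sketch of this lowest-rank rule, using the dune-grid data
  handle interface (current interface names; MinRankHandle, the mapper
  and the owner vector are illustrative helpers, not part of any DUNE
  module):

    #include <algorithm>
    #include <cstddef>
    #include <vector>
    #include <dune/grid/common/datahandleif.hh>

    // Every process sends its rank for each shared vertex; the receiver
    // keeps the minimum. Afterwards owner[i] == myRank marks the
    // vertices in the local part of the disjoint splitting.
    template<class Mapper>
    class MinRankHandle
      : public Dune::CommDataHandleIF<MinRankHandle<Mapper>, int>
    {
    public:
      MinRankHandle(const Mapper& mapper, std::vector<int>& owner)
        : mapper_(mapper), owner_(owner) {}

      // communicate vertex data only (codim == dim)
      bool contains(int dim, int codim) const { return codim == dim; }
      bool fixedsize(int dim, int codim) const { return true; }

      template<class Entity>
      std::size_t size(const Entity&) const { return 1; }

      template<class Buffer, class Entity>
      void gather(Buffer& buff, const Entity& e) const
      { buff.write(owner_[mapper_.index(e)]); }

      template<class Buffer, class Entity>
      void scatter(Buffer& buff, const Entity& e, std::size_t)
      {
        int other; buff.read(other);
        int& mine = owner_[mapper_.index(e)];
        mine = std::min(mine, other); // lowest rank wins
      }

    private:
      const Mapper& mapper_;
      std::vector<int>& owner_;
    };

  Initialize owner[i] with your own rank for every vertex, then call
  gridView.communicate(handle,
  Dune::InteriorBorder_InteriorBorder_Interface,
  Dune::ForwardCommunication); afterwards owner[i] == myRank tells you
  whether dof i belongs to your part of the splitting.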
  The next step is to augment the matrix you would get from just
  discretizing on the local subdomain, such that afterwards each
  matrix row contains the global (fully assembled) values in all of
  its nonzero columns. The matrix rows representing basis functions
  attached to entities that are not in the local part of the disjoint
  splitting will represent Dirichlet boundary conditions.
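  Turning those non-owner rows into Dirichlet rows might look like
  this (a sketch assuming scalar matrix blocks and the owner vector
  from the step above):

    typedef Dune::BCRSMatrix<Dune::FieldMatrix<double,1,1> > Matrix;

    // Rows attached to entities owned by another process become
    // Dirichlet rows: 1 on the diagonal, 0 everywhere else.
    for (Matrix::RowIterator row = A.begin(); row != A.end(); ++row)
      if (owner[row.index()] != myRank)
        for (Matrix::ColIterator col = row->begin(); col != row->end(); ++col)
          *col = (col.index() == row.index()) ? 1.0 : 0.0;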
  As the communication will not happen via the grid communication, the
  index information has to be set up by hand, using the global ids of
  the entities and attributes (owner if the entity belongs to the
  local part of the disjoint splitting, copy otherwise). To be
  precise, this information has to be set up only for entries (not
  entities!) that are also present on other processes. Peter has
  provided the class InfoFromGrid, which handles this index
  information and can be passed to the OwnerOverlapCopyCommunication
  constructor. If you use this class, the information about the remote
  indices has to be set up by hand, too. Another way is to work
  directly with the index set of OwnerOverlapCopyCommunication and let
  RemoteIndices communicate and update this information automatically.
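  The second way could look roughly like this (a sketch with the
  current class names; gid(i), shared[i], owner[i] and n are
  hypothetical helpers from the ownership step above):

    #include <dune/istl/owneroverlapcopy.hh>

    typedef Dune::OwnerOverlapCopyCommunication<int,int> Comm;
    typedef Comm::ParallelIndexSet::LocalIndex LocalIndex;
    typedef Dune::OwnerOverlapCopyAttributeSet AS;

    Comm comm(MPI_COMM_WORLD);
    Comm::ParallelIndexSet& indexSet = comm.indexSet();

    indexSet.beginResize();
    for (int i = 0; i < n; ++i)
      if (shared[i]) // only dofs that are also present on other processes
        indexSet.add(gid(i),
                     LocalIndex(i, owner[i] == myRank ? AS::owner : AS::copy,
                                true)); // true: index is public
    indexSet.endResize();

    // Let RemoteIndices find out about the other processes' indices.
    comm.remoteIndices().rebuild<false>();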


> A short application example would be most useful,

  For an example of how to set up the parallel solvers (using the
  utilities provided by dune-disc), see the file
  nonoverlappingclassic.cc in dune-dd.
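
  Once the matrix and the communication object are in place, the solve
  itself is wired up from the classes mentioned above. A sketch for a
  CG solver preconditioned with a hybrid parallel Jacobi (sequential
  Jacobi on each subdomain, made consistent by communication after
  each application; again today's class names, so treat it as a
  sketch, not gospel):

    #include <dune/istl/bvector.hh>
    #include <dune/istl/schwarz.hh>
    #include <dune/istl/preconditioners.hh>
    #include <dune/istl/solvers.hh>

    typedef Dune::BlockVector<Dune::FieldVector<double,1> > Vector;

    // Parallel operator and scalar product encapsulate all communication.
    Dune::OverlappingSchwarzOperator<Matrix,Vector,Vector,Comm> op(A, comm);
    Dune::OverlappingSchwarzScalarProduct<Vector,Comm> sp(comm);

    // Sequential Jacobi on the local subdomain (1 iteration, damping 1.0)...
    Dune::SeqJac<Matrix,Vector,Vector> jac(A, 1, 1.0);
    // ...wrapped so its result is made consistent between the processes.
    Dune::BlockPreconditioner<Vector,Vector,Comm> prec(jac, comm);

    Dune::CGSolver<Vector> cg(op, sp, prec, 1e-8, 500, 2);
    Dune::InverseOperatorResult res;
    cg.apply(x, b, res); // x: initial guess / solution, b: right hand side

  This is the same operator/scalar-product wiring the overlapping
  Schwarz approach above uses; only the way the index information is
  obtained differs. If you really want a plain Jacobi iteration rather
  than Jacobi-preconditioned CG, use Dune::LoopSolver<Vector> in place
  of the CGSolver (same constructor arguments).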


Cheers,

Markus



