[Dune] Parallel CG Solver
Markus.Blatt at ipvs.uni-stuttgart.de
Tue Nov 11 11:00:32 CET 2008
On Mon, Nov 10, 2008 at 11:00:28AM +0100, Arne Rekdal wrote:
> I am working on a parallel version of a Poisson solver. I have managed
> to partition the
> domain onto p processors (each element is assigned to only one processor).
A disjoint partitioning of your domain is a very good starting point.
> My plan is to solve the system of equations with the Conjugate
> Gradient method. I am now creating a Dune Operator. My idea for this
> operator is:
> 1) Calculate the local operator evaluation on each processor.
> 2) Communicate (potentially all-to-all) between processors that share
> the same vertices, and sum up.
Actually you do not have to do this by hand. Instead you provide your
local matrix together with a corresponding index set
(ParallelIndexSet). (You can do what Robert proposes, too, but I
regard that as far more work, and most of the ISTL preconditioners
will probably not work with it.)
> How do I know which processor a vertex should be sent to? All
> interior boundary vertices need to be sent to other processors. Are
> there suitable methods in DUNE for this?
Of course there are (in dune-istl).
> Since a vertex can be on multiple processors, there should also be a
> special inner product associated with the CG algorithm. How is this
> specified?
> Example: I have 6 DOFs in total.
> Indexset, processor 0: 1 2
> Indexset, processor 1: 1 2 3 4 5 6
> Indexset, processor 2: 5 6
Starting with a disjoint partitioning of the dofs, e.g.
(0: 0,1,2; 1: 3,4; 2: 5,6), you need to augment it such that, for the
graph of your global matrix, all indices of the connected dofs are
also known to the process.
Assuming a simple 1D problem this might look like this (in terms of
global indices):
0: 0,1,2; 1: 2,3,4; 2: 4,5,6
Using a ParallelIndexSet you mark the dofs of the disjoint partition
with the attribute owner and all others with copy.
typedef Dune::FieldMatrix<double,1,1> MatrixBlock;
typedef Dune::BCRSMatrix<MatrixBlock> BCRSMat;
typedef Dune::FieldVector<double,1> VectorBlock;
typedef Dune::BlockVector<VectorBlock> Vector;
typedef int LocalId;
typedef int GlobalId;
// This encapsulates the communication stuff
typedef Dune::OwnerOverlapCopyCommunication<LocalId,GlobalId> Communication;
typedef Communication::ParallelIndexSet IndexSet;
typedef IndexSet::LocalIndex LI;
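Filling the index set might then look like this, e.g. on process 1 of
the 1D example above (a sketch: the local numbering and the explicit
isPublic flag of ParallelLocalIndex are assumptions on my part;
dune/istl/owneroverlapcopy.hh is assumed to be included):

Communication comm(MPI_COMM_WORLD);

IndexSet& iset = comm.indexSet();
iset.beginResize();
// Process 1 stores global dofs 2,3,4 as local 0,1,2. Global dof 2
// belongs to the disjoint partition of process 0, hence copy here.
iset.add(2, LI(0, Dune::OwnerOverlapCopyAttributeSet::copy,  true));
iset.add(3, LI(1, Dune::OwnerOverlapCopyAttributeSet::owner, true));
iset.add(4, LI(2, Dune::OwnerOverlapCopyAttributeSet::owner, true));
iset.endResize();
// The same for the other processes...

// Compute global information, i.e. which remote processes
// also know about our public indices
comm.remoteIndices().rebuild<false>();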
You can now provide this comm object to the OverlappingSchwarzOperator
together with your local matrix (with Dirichlet rows for the dofs
marked as copy). The special inner product you asked about is realised
by the OverlappingSchwarzScalarProduct, which is constructed from the
same comm object.
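Putting it together might look like the following sketch (the typedefs
from above are assumed; mat, x and b stand for your local matrix and
vectors, and SeqSSOR as well as all solver parameters are only
placeholders):

// The global matrix product
typedef Dune::OverlappingSchwarzOperator<BCRSMat,Vector,Vector,
                                         Communication> Operator;
Operator op(mat, comm);   // mat is your local BCRSMat

// The parallel inner product: each dof counted once, then global sum
Dune::OverlappingSchwarzScalarProduct<Vector,Communication> sp(comm);

// Wrap a sequential preconditioner for the parallel case
typedef Dune::SeqSSOR<BCRSMat,Vector,Vector> SeqPrec;
SeqPrec ssor(mat, 1, 1.0);
Dune::BlockPreconditioner<Vector,Vector,Communication,SeqPrec>
    prec(ssor, comm);

// CG: reduce the defect by 1e-8, at most 100 iterations,
// print convergence output on rank 0 only
Dune::CGSolver<Vector> cg(op, sp, prec, 1e-8, 100,
                          comm.communicator().rank()==0 ? 2 : 0);

comm.copyOwnerToAll(x, x); // make the initial guess consistent on copy dofs
Dune::InverseOperatorResult res;
cg.apply(x, b, res);       // x: initial guess / solution, b: rhs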
For more details see the preprint.