[Dune] Parallel CG Solver
Markus Blatt
Markus.Blatt at ipvs.uni-stuttgart.de
Tue Nov 11 11:00:32 CET 2008
Hi Arne,
On Mon, Nov 10, 2008 at 11:00:28AM +0100, Arne Rekdal wrote:
> I am working on a parallel version of a Poisson solver. I have managed
> to partition the domain onto p processors (each element is assigned to
> only one processor).
>
A disjoint partitioning of your domain is a very good starting
point.
> My plan is to solve the system of equations with Conjugate Gradient
> method. I am now creating a Dune Operator. My idea for this operator is:
> 1) Calculate local operator evaluation for each processor
> 2) Communication (potentially all to all) between processors that share
> the same vertices, and sum up.
>
Actually, you do not have to do this by hand. Instead, you provide
your local matrix together with a corresponding index set
(ParallelIndexSet). (You can do what Robert proposes, too, but I
regard that as far more work, and most of the ISTL preconditioners
will probably not work with it.)
> How do I know which processor a vertex should be sent to? All interior
> boundary vertices need to be sent to other processors. Are there
> suitable methods in DUNE for this?
>
Of course there are (in dune-istl).
> Since a vertex can be on multiple processors, there should also be a
> special inner product associated with the CG algorithm. How is this
> specified?
>
> Example: I have 6 DOFs in total.
> Indexset, processor 0: 1 2
> Indexset, processor 1: 1 2 3 4 5 6
> Indexset, processor 2: 5 6
>
Starting with your disjoint partitioning (0: 1,2; 1: 3,4; 2: 5,6) you
need to augment it such that, for the graph of your global matrix, the
indices of all dofs connected to your own dofs are also known to the
processor.
Assuming a simple 1D problem, this might look like this (in terms of
global indices):
0: 1,2,3; 1: 2,3,4,5; 2: 4,5,6
Using ParallelIndexSet you mark the dofs of the disjoint partition
with the attribute owner and all others with copy.
#include <dune/common/fvector.hh>
#include <dune/common/fmatrix.hh>
#include <dune/istl/bvector.hh>
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/owneroverlapcopy.hh>
#include <dune/istl/schwarz.hh>

typedef Dune::FieldMatrix<double,1,1> MatrixBlock;
typedef Dune::BCRSMatrix<MatrixBlock> BCRSMat;
typedef Dune::FieldVector<double,1> VectorBlock;
typedef Dune::BlockVector<VectorBlock> Vector;
typedef int LocalId;
typedef int GlobalId;
// This encapsulates the communication stuff
typedef Dune::OwnerOverlapCopyCommunication<GlobalId,LocalId> Communication;
// The global matrix-vector product
typedef Dune::OverlappingSchwarzOperator<BCRSMat,Vector,Vector,Communication> Operator;
Communication comm(MPI_COMM_WORLD);
typedef Communication::ParallelIndexSet IndexSet;
typedef IndexSet::LocalIndex LI;

if(comm.communicator().rank()==1){
  IndexSet& iset=comm.indexSet();
  iset.beginResize();
  // add(global index, LocalIndex(local index, attribute))
  iset.add(2,LI(0,Dune::OwnerOverlapCopyAttributeSet::copy));
  iset.add(3,LI(1,Dune::OwnerOverlapCopyAttributeSet::owner));
  iset.add(4,LI(2,Dune::OwnerOverlapCopyAttributeSet::owner));
  iset.add(5,LI(3,Dune::OwnerOverlapCopyAttributeSet::copy));
  iset.endResize();
}
// The same for the other processes...

// Compute the global communication information
comm.remoteIndices().rebuild<false>();
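For completeness, a sketch of the corresponding setup on rank 0 for the
1D example above (this of course happens before the rebuild call; rank 2
works the same way with its own indices):

if(comm.communicator().rank()==0){
  IndexSet& iset=comm.indexSet();
  iset.beginResize();
  // globally owned dofs 1,2 plus the copy dof 3 shared with rank 1
  iset.add(1,LI(0,Dune::OwnerOverlapCopyAttributeSet::owner));
  iset.add(2,LI(1,Dune::OwnerOverlapCopyAttributeSet::owner));
  iset.add(3,LI(2,Dune::OwnerOverlapCopyAttributeSet::copy));
  iset.endResize();
}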
See
http://www.dune-project.org/doc-1.1/doxygen/html/group__ISTL__Comm.html
You can now pass this comm object, together with your local matrix
(assembled with Dirichlet rows for the copy dofs), to the
OverlappingSchwarzOperator, and use the OverlappingSchwarzScalarProduct
as the special inner product for CG:
http://www.dune-project.org/doc-1.1/doxygen/html/classDune_1_1OverlappingSchwarzOperator.html
http://www.dune-project.org/doc-1.1/doxygen/html/classDune_1_1OverlappingSchwarzScalarProduct.html
http://www.dune-project.org/doc-1.1/doxygen/html/group__ISTL__Parallel.html
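Putting the pieces together, the solve itself could then look roughly
like the following sketch. The choice of SeqSSOR wrapped in a
BlockPreconditioner, as well as the tolerance and iteration count, are
just assumptions for illustration; any preconditioner following the
same pattern will do.

// (needs dune/istl/preconditioners.hh and dune/istl/solvers.hh in addition)
// A is the locally assembled matrix (rows of copy dofs turned into
// Dirichlet rows), x and b the local solution and right hand side.
Operator op(A, comm);
Dune::OverlappingSchwarzScalarProduct<Vector,Communication> sp(comm);

// Sequential preconditioner on the local matrix, wrapped for the
// overlapping decomposition (SSOR is an arbitrary choice here).
typedef Dune::SeqSSOR<BCRSMat,Vector,Vector> Smoother;
Smoother ssor(A, 1, 1.0);
Dune::BlockPreconditioner<Vector,Vector,Communication,Smoother> prec(ssor, comm);

// Parallel CG using the parallel operator, scalar product and preconditioner.
Dune::CGSolver<Vector> cg(op, sp, prec, 1e-8, 500,
                          comm.communicator().rank()==0 ? 2 : 0);
Dune::InverseOperatorResult res;
cg.apply(x, b, res);

The scalar product makes sure that every dof is counted exactly once
(by its owner) in the global inner products, which is the special inner
product you asked about.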
For more details see the preprint
http://www.dune-project.org/publications/parallel_istl_paper_ijcse.pdf
Cheers,
Markus