[Dune] dune parallel matrix format, example

Markus Blatt markus at dr-blatt.de
Mon Jul 18 10:19:56 CEST 2016


Hi,

On Sun, Jul 17, 2016 at 11:31:04PM +0200, Feng Xing wrote:
> After reading the example of parallel matrix (M. Blatt): https://www.dr-blatt.de/blog/posts/creating_a_parallel_istl_matrix/ 
> and the paper (P. Bastian, M. Blatt) "On the Generic Parallelisation of Iterative Solvers for the Finite Element Method",
> I am a little confused by the parallel matrix format used in Dune. 
> 
> If I understand correctly, each proc has an independent CSR matrix,
> and these local matrices are used to create a parallel operator using
> "OwnerOverlapCopyCommunication".

Yes, that is correct. These two objects contain all the information
needed to apply a parallel linear operator using e.g. the
OverlappingSchwarzOperator, see
https://beta.dune-project.org/doxygen/2.4.1/classDune_1_1OverlappingSchwarzOperator.html#aafc9bff73a9c5544c16e94a834b1b6e9
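
To give a rough idea of how the two objects fit together, here is a
minimal, untested sketch; the matrix, the vectors and the
communication object are assumed to be set up as in the blog post,
and all names are placeholders:

  #include <dune/common/fmatrix.hh>
  #include <dune/common/fvector.hh>
  #include <dune/istl/bcrsmatrix.hh>
  #include <dune/istl/bvector.hh>
  #include <dune/istl/owneroverlapcopy.hh>
  #include <dune/istl/schwarz.hh>

  using Block    = Dune::FieldMatrix<double, 1, 1>;
  using Matrix   = Dune::BCRSMatrix<Block>;      // the local CSR matrix
  using Vector   = Dune::BlockVector<Dune::FieldVector<double, 1>>;
  using Comm     = Dune::OwnerOverlapCopyCommunication<std::size_t, int>;
  using Operator = Dune::OverlappingSchwarzOperator<Matrix, Vector, Vector, Comm>;

  // Wraps the local matrix and the communication object into a parallel
  // linear operator and applies it: y = A x, correct on the owner rows.
  void applyParallelOperator(const Matrix& A, const Comm& comm,
                             const Vector& x, Vector& y)
  {
    Operator op(A, comm);
    op.apply(x, y);
  }
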
> I want to create a parallel matrix myself without using the
> functions shown in these examples. I would like to ask if it is like
> PETSc, where each proc contains some rows? If so, is the column index
> local or global? Thank you very much.

Yes and no. PETSc has a very different approach, as the abstraction is
incorporated into the matrix class directly. In PETSc you can set up
the entries just as with a non-parallel matrix, regardless of where
the rows are stored. This is not possible in dune-istl. PETSc's
approach is probably easier for the users, but it is less flexible and
might lead to scalability issues.

Basically, you need to twist your PETSc mind a little. Be aware that I
am not a PETSc expert and my assumptions might be wrong. I assume that
PETSc uses global indices for both row and column indices, and that
the rows are partitioned consecutively among the processes (process i
holds rows n_i, ..., n_{i+1}-1).
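
For example, the owned row range of a process could be computed with a
small helper like this (a hypothetical function, just to fix the
notation; it assumes the consecutive block partition described above
with N global rows and p processes):

  #include <cstddef>
  #include <utility>

  // Returns the owned global row range [n_i, n_{i+1}) of process i.
  std::pair<std::size_t, std::size_t>
  ownedRange(std::size_t i, std::size_t p, std::size_t N)
  {
    const std::size_t base = N / p, rest = N % p;
    const std::size_t first = i * base + (i < rest ? i : rest);
    const std::size_t last  = first + base + (i < rest ? 1 : 0);
    return {first, last};
  }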

This directly gives you the rows for which the process computes
correct results in dune-istl. Just give them the local indices 0, ...,
n_{i+1}-n_i-1. They also get the attribute "owner".
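
In dune-istl terms this means adding the owned rows to the parallel
index set of the OwnerOverlapCopyCommunication, roughly like this (an
untested fragment reusing the Comm typedef from the first sketch;
comm, firstRow = n_i and lastRow = n_{i+1} are placeholders):

  using LocalIndex = Comm::ParallelIndexSet::LocalIndex;

  auto& indices = comm.indexSet();
  indices.beginResize();
  // Owned global rows n_i, ..., n_{i+1}-1 become the local indices
  // 0, ..., n_{i+1}-n_i-1 with the attribute "owner".
  for (std::size_t g = firstRow; g < lastRow; ++g)
    indices.add(g, LocalIndex(g - firstRow,
                              Dune::OwnerOverlapCopyAttributeSet::owner,
                              true));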

Now look at the column indices of those rows in your PETSc sparse
matrix that do not have a local index assigned yet, i.e. those not in
{n_i, ..., n_{i+1}-1}, assign them one, and mark them as "copy". This
is needed because we have to store dummy rows for them, too.
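
Continuing the fragment above, these off-process column indices get
the next free local indices and the attribute "copy" (copyCols stands
for the set of such global column indices):

  std::size_t nextLocal = lastRow - firstRow;  // owned rows use 0, ..., nOwned-1
  for (std::size_t g : copyCols)
    indices.add(g, LocalIndex(nextLocal++,
                              Dune::OwnerOverlapCopyAttributeSet::copy,
                              true));
  indices.endResize();
  comm.remoteIndices().rebuild<false>();  // rebuild the communication pattern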

Now you have your complete parallel index set, with a global index and
an attribute for each local index. Let l(i) be the function that maps
the global (PETSc) index i to its local index. Then your local matrix
can be set up by inserting an entry (l(i), l(j)) for each PETSc sparse
matrix entry (i, j). For all column indices g not in {n_i, ...,
n_{i+1}-1}, you insert a row l(g) with a 1 on the diagonal and zeros
elsewhere. Please make sure that for a symmetric sparse matrix the
sparsity pattern is symmetric, too.
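
Spelled out as an untested sketch, e.g. using the implicit build mode
of BCRSMatrix: here "entries" stands for whatever gives you the
(global row, global column, value) triplets of the owned rows, "l" is
the global-to-local map built from the index set above, and nLocal,
firstRow, lastRow are placeholders as before:

  #include <cstddef>
  #include <map>
  #include <tuple>
  #include <vector>
  #include <dune/common/fmatrix.hh>
  #include <dune/istl/bcrsmatrix.hh>

  using Block  = Dune::FieldMatrix<double, 1, 1>;
  using Matrix = Dune::BCRSMatrix<Block>;

  Matrix assembleLocalMatrix(
      const std::vector<std::tuple<std::size_t, std::size_t, double>>& entries,
      const std::map<std::size_t, std::size_t>& l,
      std::size_t nLocal, std::size_t firstRow, std::size_t lastRow)
  {
    // Implicit build mode; 15 is just a guess for the entries per row.
    Matrix A(nLocal, nLocal, 15, 0.2, Matrix::implicit);

    // Owned rows: insert (l(i), l(j)) for each global entry (i, j).
    for (const auto& [i, j, v] : entries)
      A.entry(l.at(i), l.at(j)) = v;

    // Dummy identity rows for the copy indices. (For a symmetric
    // sparsity pattern you may also have to add the mirrored
    // off-diagonal entries as zeros.)
    for (const auto& [g, lg] : l)
      if (g < firstRow || g >= lastRow)
        A.entry(lg, lg) = 1.0;

    A.compress();
    return A;
  }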

You can also look at the tests in dune/istl/paamg/test for some
example code, e.g. amgtest.cc.

HTH and Cheers,

Markus

-- 
Dr. Markus Blatt - HPC-Simulation-Software & Services http://www.dr-blatt.de
Hans-Bunte-Str. 8-10, 69123 Heidelberg, Germany
Tel.: +49 (0) 160 97590858