[dune-pdelab] Vector and Matrix backends for PETSc

Steffen Müthing steffen.muething at ipvs.uni-stuttgart.de
Tue Sep 13 16:27:27 CEST 2011


Hi Markus,

> thanks a lot for the patches. I am just wondering whether a branch for these
> changes would have made sense. IMHO there was an interface change
> involved (which would have stirred up a heated discussion for a core
> module).
> 

we (Christian and I) were thinking about that as well, but then figured that since
the interface has only one (known) implementation right now and is used in only a
single location within the PDELab code (LocalAssemblerBase), a branch
would create more problems than it solves in this case. Apart from that, the old
interface still works (at least for the ISTL backends), so the old GridOperatorSpace
works fine with the ISTL implementation.

> Anyway:
> 
> Just to sum this up: the main problem is that PETSc needs to set up the
> sparsity pattern at once for all rows and one needs to set all local
> entries of an element as a small dense matrix. This requires the new
> accessor and the internal state.
> 
> A few questions:
> - Is there another way to set up the pattern?
> - Is there another efficient way besides SetValues?
> - Given a maximum number of nonzeros per row and providing this in the
>  constructor, would it be possible to set up the pattern and values
>  at once (even though this would need to rely on internals of PETSc)?

Actually, the maximum number of nonzeros per row is all PETSc is interested
in (it doesn't care about the actual column indices). Moreover, it is also possible
to just specify a global upper bound on the number of nonzeros per row.
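
For reference, a minimal sketch of what that looks like on the PETSc side for a
sequential AIJ matrix (A, nz and nnz are placeholders here):

  // Preallocation of a sequential AIJ matrix: PETSc only wants to know how
  // many nonzeros to reserve per row, not the actual column indices.
  // Either a single upper bound nz for every row (the array argument is ignored):
  MatSeqAIJSetPreallocation(A, nz, PETSC_NULL);
  // or an individual count per row via nnz[] (then nz is ignored):
  MatSeqAIJSetPreallocation(A, 0, nnz);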

Unfortunately, SetValues() really is required to achieve decent performance when
writing values into the matrix, as every PETSc call results in at least one normal
C function call and one call through a function pointer. That renders accessing single
matrix entries completely impractical.
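
To give an idea of the granularity: the whole local element matrix goes to PETSc in
one call (sketch; rows/cols hold the global DOF indices of the element, v the dense
local matrix in row-major order):

  // add an m x n dense block of values into the global matrix with a
  // single library call instead of m*n individual accesses
  MatSetValues(A, m, rows, n, cols, v, ADD_VALUES);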

> 
> The reason for asking is this:
> We were thinking about providing an adapter for easier construction of
> ISTL sparse matrices to get rid of the two grid traversals (1. sparsity
> pattern, 2. values). Basically this means providing an average (or
> upper limit) of the nonzeros per row when constructing the
> adapter. This way all needed space for row indices, column indices and values
> would already be set up and the row pointer could already be created.
> Then matrix entries can be set directly. On destruction of the
> adapter the data is compressed to get rid of holes in the vectors.

That shouldn't pose any problem - as I said above, PETSc isn't interested in the exact
pattern anyway. In fact, PETSc will even let you store more values per row than this
upper bound or do weird stuff like writing to rows owned by different MPI ranks - of course,
shenanigans like that (especially exceeding the limit of non-zero columns per row) are
really going to hurt performance. But constructing PETSc matrices with just a rough
estimate of the number of non-zeros per row is absolutely no problem from an API
point of view.
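
If you want to check whether such an estimate was good enough, PETSc can tell you
after assembly how often it had to allocate additional memory (just a sketch, not
something the backend currently does):

  MatInfo info;
  MatGetInfo(A, MAT_LOCAL, &info);
  // info.mallocs counts the allocations caused by rows that outgrew the
  // preallocated estimate - ideally this stays at zero
  PetscPrintf(PETSC_COMM_SELF, "mallocs during assembly: %g\n", info.mallocs);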

> 
> 
> For this task one would need an adapter which lets us set/add matrix
> values with column and row index arbitrarily. For ISTL this adapter
> could be a friend of the matrix and set things up directly. For PETSc we
> would need a way besides SetValues.
> 
> If there is the possibility to do this I would postpone your changes
> and create an interface that suits both approaches well before adding
> PETSc support.

Well, that is part of the reason Christian wanted me to commit the PETSc backend - now
there is a second implementation which highlights some of the features and restrictions
required for a more general interface. As I said, PETSc has some quirks, like always
requiring a global compression / synchronization step, which cannot be modeled in the
current interface.
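
For completeness, this is the step I mean - it has to run after all values have been
written and before the matrix can be used, and the current interface has no hook for it:

  // global compression / communication step: ships cached off-process
  // entries to their owners and compresses the storage; collective, so
  // every rank has to participate
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);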

What I tried to say in my previous mail (I probably wasn't clear enough) is that I totally agree with
you: The MatrixBackend should just give you an adapter that corresponds to a new, to-be-defined
interface and associated protocol (like requiring certain setup methods to be called at specified times
during the assembly). That adapter probably should not even pretend to be a matrix; all it would do is
allow for the efficient construction of and assembly into the wrapped matrix, which in the end might be
retrieved by a call to a method like base() or matrix() or something like that. Of course, this step might
easily be hidden from the user, as that call would just happen somewhere inside the ISTL (or PETSc or
whatever) solver backends.
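
Just to make that a bit more concrete, something along the following lines is what I have in
mind - the names (AssemblyAdapter, addBlock(), flush(), base()) are made up for illustration,
not a proposal set in stone:

  // hypothetical adapter handed out by the MatrixBackend; it is not a
  // matrix itself, it only supports pattern hints, block-wise writes and
  // the final hand-over of the assembled matrix
  template<typename MatrixContainer>
  class AssemblyAdapter
  {
  public:
    // rough per-row estimate instead of an exact sparsity pattern
    explicit AssemblyAdapter(std::size_t nonzerosPerRow);

    // add a dense local block at the given global row/column indices
    template<typename LocalMatrix, typename RowIndices, typename ColIndices>
    void addBlock(const LocalMatrix& values,
                  const RowIndices& rows,
                  const ColIndices& cols);

    // global compression / synchronization (MatAssemblyBegin/End for PETSc,
    // pattern compression for ISTL)
    void flush();

    // hand the finished matrix to the solver backend
    MatrixContainer& base();
  };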

I think that such an approach would be a good way to improve the decoupling of the linear
algebra from the assembly - up to now, this wasn't all that necessary, because the ISTL vectors and
matrices can be accessed in such a natural and straightforward fashion... :-)

Steffen

> 
> Greetings to Oslo from
> 
> Markus
> 
> -- 
> Markus Blatt, Interdisciplinary Center for Scientific Computing,
> University Heidelberg, Im Neuenheimer Feld 368, D-69120 Heidelberg
> Tel: +49 (0) 6221 548881 Fax: +49 (0) 6221 548884
> 
> 'Mathematics knows no races or geographic boundaries; for
> mathematics, the cultural world is one country' - David Hilbert
> 
> _______________________________________________
> dune-pdelab mailing list
> dune-pdelab at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune-pdelab

Steffen Müthing
Universität Stuttgart
Institut für Parallele und Verteilte Systeme
Universitätsstr. 38
70569 Stuttgart
Tel: +49 711 685 88429
Fax: +49 711 685 88340
Email: steffen.muething at ipvs.uni-stuttgart.de
