[dune-pdelab] Vector and Matrix backends for PETSc

Steffen Müthing steffen.muething at ipvs.uni-stuttgart.de
Mon Sep 12 20:25:10 CEST 2011

Dear PDELab developers,

as I mentioned some time ago, I developed an initial implementation of a PETSc
backend for PDELab, which required a small number of changes to the current
backend interface, as PETSc performance really suffers if the matrix accesses are
not batched in some way.

Even though the backend code is still in rather rough shape (I would not
recommend using it for anything remotely production-related), Christian encouraged
me to commit it nevertheless, to enable a discussion about the current backend interface.

As I see it, the main problem with the backend is its completely stateless nature, which
complicates implementations for libraries like PETSc that batch updates and therefore
need to keep state. On the other hand, keeping that state outside the matrix class creates
synchronization issues. So, based purely on my experience trying to fit PETSc into the
backend (judging from a quick glance at its documentation, Trilinos is very similar to PETSc),
here are a couple of things I would consider useful in an improved backend interface:

- The current backend class should be reduced to a factory for matrix wrapper classes
  (parameterized by the field type and the associated function spaces)

- We should make it clearer that those wrapper classes do not really represent the matrices
  themselves, but provide a special interface to them that maps between GridFunctionSpace DOFs
  and the associated values stored in the underlying matrix.

- Those wrappers can easily store any additional state required for efficient assembly

- The wrappers should be aware of the associated trial and test function spaces

- Access should work in a similar way to the Accessors I have hacked into the current backend

- A better name for those accessors might be "View" or "LocalView"

- They should probably be split into "ReadOnlyView", "WriteOnlyView" and "AddOnlyView", since
  neither PETSc nor Trilinos lets you mix those access patterns within one batch

- Instead of simply constructing them, the user should retrieve them from the matrix using methods
  like addOnlyView(lfsv,lfsu).

- That way, the matrix can control whether it allows more than one view to be active at any given
  time, and it can cache any underlying data structures for index mapping etc. Moreover,
  destruction of a View neatly triggers a commit of the data, so users cannot forget to do that.
  The matrix could also provide a method like dirty(), which can be used to check whether all
  views have been cleaned up before doing any algebra with the matrix.

- In order to simplify the implementation of nested backends, it might be useful to provide a multi-index
  instead of a flat one (this topic touches not only the linear algebra backend, but also the function spaces
  and mappings).

I don't have any real opinion about the vector backend, but there the split between container and
wrapper is already a lot more explicit, as the container defined in the backend does not inherit from
the ISTL vector.

I think the most important change would be the switch from keeping the access interface external
to the vector and matrix objects to having them conform to a new, yet-to-be-defined API for retrieving
values associated with certain DOFs. That will require wrapping objects from libraries like PETSc or
Trilinos, but since they will require additional state for batching accesses anyway, keeping the object
and that state together in a single wrapper seems natural to me.

That's just my two cents, but I hope this mail will get a good discussion going... ;-)

I also added a (probably rather flaky) PETSc autoconf test and a test program called testpoisson-petsc
to PDELab. Moreover, there is also a second set of backends that allows the creation of nested vectors
and matrices for the efficient construction of things like Schur complement solvers.

Please note that the backend will NOT work correctly in parallel computations yet, mostly because PETSc
uses global matrices and vectors and I didn't have the time to implement (and test) the index translations everywhere.

Greetings from rainy Oslo,


Steffen Müthing
Universität Stuttgart
Institut für Parallele und Verteilte Systeme
Universitätsstr. 38
70569 Stuttgart
Tel: +49 711 685 88429
Fax: +49 711 685 88340
Email: steffen.muething at ipvs.uni-stuttgart.de
