[Dune] Performance of ISTL

Markus Blatt Markus.Blatt at ipvs.uni-stuttgart.de
Thu Apr 19 18:49:40 CEST 2007


On Thu, Apr 19, 2007 at 04:51:52PM +0200, Patrick Leidenberger wrote:
> Hi all,
> 
> because of the very bad convergence of ISTL on a very specific FE 
> problem, I am trying to solve my linear system with Trilinos. I use a few 
> #ifdef's in my Dune code and replace only the call of the ISTL solver 
> with the Trilinos/AztecOO solver (the code snippets are below).
> 
> For testing I run both codes on the same problem (~155000 
> degrees of freedom), serially:
> Dune solver: 57sec, 81 iterations, residual: 1.5402E-08
> Trilinos   : 10sec,  6 iterations, residual: 2.8799e-08
>             (including copy from dune vector to trilinos vector and back)
> 
> Perhaps I am using the Dune solvers in a wrong way, but I have tried 
> different settings. Is there a way to get the same performance with Dune?
> 

I do not know the Aztec solver, but a short glance at the user guide
revealed that it uses GMRES to solve your system.

As you use the GradientSolver of Dune, you are comparing a sandwich
with a three-course meal.

Why don't you try other combinations of solvers and
preconditioners?


> And there is another question:
> Is there no easy way to get a global, consecutive integer id in a 
> parallel DUNE code? I know there is the global id, which I can compare 
> and sort, but with such an integer index I could feed my problem to a 
> parallel Trilinos solver in a very easy way. 

No, there is no consecutive global id, as it is hard to support one
together with adaptivity.

> Another application for 
> such an index would be to write the solution from different processes 
> into the same hdf5 file.

Does it really have to be consecutive for this task?

> I had a look at the global id and saw that it consists of 4 
> integers. Is there any functionality to send such a global id object via 
> MPI? Then I could build my own integer index.
> 
> 

I do not know how the global id is implemented, and it might even
differ between the various grids.
But you can take a look at Generic_MPI_Datatype in
common/mpicollectivecommunication, which creates MPI_Datatypes for
classes and structs that contain no pointers or references.

Markus



