[Dune] [dune-istl] mpi error due to wrong send rank

Simon Praetorius simon.praetorius at tu-dresden.de
Mon Jun 17 00:03:15 CEST 2019


Hi Dune community,

I'm currently trying out AMG in parallel. Internally, when the matrix
hierarchy is built, the function `graphRepartition()` from repartition.hh
is called, which eventually calls `buildCommunication()`, where some data
is sent via MPI_Isend. There I get an MPI error (MPI_ERR_RANK: invalid
rank), because the send rank is set to -1.
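To illustrate why the call fails: MPI validates the destination rank of a
send against the communicator size, and -1 is never valid. The following is
a simplified standalone sketch of that check (my own illustration, not
actual MPI source, and ignoring special values like MPI_PROC_NULL):

```cpp
#include <cassert>

// Simplified illustration of the rank validation MPI_Isend performs:
// a destination rank must lie in [0, commSize). A rank of -1, as produced
// here, always fails this check and aborts with MPI_ERR_RANK.
bool isValidSendRank(int rank, int commSize)
{
  return rank >= 0 && rank < commSize;
}
```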

I tried to figure out the reason for this. It seems that the set
`setPartition` contains several -1 values before `buildCommunication()` is
called in repartition.hh:1475. I have nparts=1, so does this mean that the
coarse matrix is built up on just one processor? I have a simple YaspGrid
setup with domain size 8x8 and overlap size 1, on which I assembled a
Poisson equation with a Lagrange basis of polynomial degree 2 and ran a CG
solver with an AMG preconditioner, an SSOR smoother, and all other
parameters at default values, using 2 processors.

What could be the reason for this? Could it be that the parallel index set
is built up incorrectly in `fillIndexSetHoles()`? I tried to figure out
what an increment on a global DOF ID could mean (see also #74 in the
dune-istl issue list). I think I have implemented a way that guarantees
uniqueness of the newly generated IDs.
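The uniqueness scheme I have in mind can be sketched as follows (a minimal
standalone illustration, not the actual dune-istl code; the function name
`firstNewId` and the prefix-sum formulation are my own): each rank needs a
certain number of fresh global IDs, and an exclusive prefix sum over the
per-rank counts, shifted past the largest ID already in use, hands out
disjoint ID ranges, so no two ranks can ever generate the same ID.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical sketch: given the largest global ID already in use and the
// number of fresh IDs each rank needs, compute the first new ID per rank.
// Rank r then uses the contiguous range
//   [result[r], result[r] + needed[r]),
// which is disjoint from every other rank's range by construction.
std::vector<std::size_t> firstNewId(std::size_t maxUsedId,
                                    const std::vector<std::size_t>& needed)
{
  std::vector<std::size_t> offset(needed.size());
  // Exclusive prefix sum of the per-rank counts, starting at maxUsedId + 1.
  std::exclusive_scan(needed.begin(), needed.end(), offset.begin(),
                      maxUsedId + 1);
  return offset;
}
```

For example, with maxUsedId = 99 and per-rank counts {3, 0, 2}, rank 0
receives IDs 100-102, rank 1 none, and rank 2 receives IDs 103-104.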

Any experience with such an error?

Best,
Simon
