[Dune] Problem in implementing parallel linear solver

Markus Blatt markus at dr-blatt.de
Wed Dec 6 12:27:46 CET 2017


Hi,

On Tue, Dec 05, 2017 at 07:51:34PM +0100, Kumar, Paras wrote:
> I'm trying to implement a parallel iterative solver for an LSE based on
> an FE discretization for a time-dependent problem.
> It was observed that the number of iterations taken by the iterative solver
> to converge varies with the number of processors. This also affects the
> accuracy of the computed solution.
> 
> 
> For the purpose of simplification, I developed an MWE where I solve the
> 1D Laplacian problem with an FE discretization. The code for this
> example, along with the output for 4 processes, is attached for
> reference.
> The matrix A and the rhs vector b are first computed by the master process
> and then distributed to all others so that the solution can be computed in
> parallel.
> PT-Scotch has been used for the data distribution.
> Currently, the following preconditioner-solver combination is used:
> RestartedGMRes + SSOR.
> Once the parallel solver has finished, the solution is computed again
> in serial on the master process in order to compare the two.
> 
> Here, are a few observations which might give some clues about the error I
> make, but I'm not sure.
> 1. As can be seen in the output file, A is tridiagonal, whereas the
> distributed matrices are not always symmetric.

Please check where the unsymmetry happens. It should only occur in
rows whose index is not marked as owner.

using OwnerSet = DuneCommunication::OwnerSet;
using GlobalLookup = DuneCommunication::GlobalLookupIndexSet;
GlobalLookup global(DuneComm.indexSet());
// pair is a pointer; it is null for purely local (hence owned) indices.
auto pair = global.pair(index);
if (!pair || OwnerSet::contains(pair->local().attribute()))
  std::cerr << "Houston, we have a problem at index " << index
            << " which is owned." << std::endl;
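(Here index runs over the local row indices where you observed the
unsymmetry; DuneCommunication and DuneComm stand for the communication
type and object from your MWE.)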
  
> 2. The defect for the solver is different for the parallel and the serial
> cases even before the solver starts.

Well, in the parallel case you seem to compute only the local defect on
the master process. You should not use Matrix::mv (or any other Matrix
method) directly here, but the functions of the linear operator
(applyscaleadd). See how it is done in e.g. the CGSolver in
dune/istl/solvers.hh or any other solver.
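For illustration, a minimal sketch of that pattern; op and sp stand for
the linear operator and the matching scalar product you pass to the
solver (in the overlapping parallel case e.g. OverlappingSchwarzOperator
and OverlappingSchwarzScalarProduct), b and x for your rhs and initial
guess:

auto d(b);                     // start from the rhs: d = b
op.applyscaleadd(-1.0, x, d);  // d = b - A x
auto defect = sp.norm(d);      // global (all-reduce) 2-norm

The scalar product is what makes the norm a global one; Matrix::mv plus
a plain two_norm only gives you the per-process value.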

> 3. The defect differs from the residual (2-norm) [||b - Ax||] computed
> manually for both the serial and the parallel cases.
>

We can only expect the first defect to be the same, as the convergence
will differ in parallel. In your case it seems like parallel_solve is
not using v at all. Maybe a redistribution of that vector is missing?
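In case you build the distributed data with graphRepartition and
redistributeMatrix from dune-istl, the RedistributeInformation they fill
can move vectors, too. A sketch under that assumption (A_dist, b_dist,
x_dist and comm_dist are hypothetical names for the redistributed
objects):

Dune::RedistributeInformation<Comm> ri;
// graphRepartition(graph, comm, nparts, comm_dist, ri.getInterface(), false);
// redistributeMatrix(A, A_dist, comm, *comm_dist, ri);
ri.redistribute(b, b_dist);          // scatter b to the new distribution
// ... parallel solve for x_dist ...
ri.redistributeBackward(x, x_dist);  // gather the solution back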

BTW, it also seems like sol_s and sol_p are not initialized to any
values. Resizing will not touch the content of the memory. At least in
an optimized build the values will be whatever was stored in that memory
chunk before.
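Assuming both are Dune::BlockVector's, a defined initial guess would be
set explicitly after resizing, e.g. (n_local stands for your local
problem size):

sol_p.resize(n_local);  // resize leaves the new entries uninitialized
sol_p = 0.0;            // start the iteration from a defined state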


HTH

Markus

-- 
Dr. Markus Blatt - HPC-Simulation-Software & Services http://www.dr-blatt.de
Hans-Bunte-Str. 8-10, 69123 Heidelberg, Germany
Tel.: +49 (0) 160 97590858