[Dune] Error in implementing PAAMG using dune-istl

Kumar, Paras paras.kumar at fau.de
Thu Dec 14 16:04:00 CET 2017


Hi again,

Currently, only the master process has the complete system (A, x and 
b).

Do I need to distribute the grid using the following command, so that 
each process assembles its part of the matrix?

auto gridptr = std::make_shared<DuneGridType>(
    app->x_start,                   // lower left corner of the domain
    app->x_end,                     // upper right corner of the domain
    app->num_elems,                 // number of elements per direction
    std::bitset<SPACE_DIM>(0ULL),   // no periodic directions
    1,                              // overlap size (in elements)
    MPI_COMM_WORLD);                // communicator used for partitioning

Or should the distribution be done using the matrix graph and the 
redistributeMatrix function, as explained here:

https://www.dr-blatt.de/blog/posts/creating_a_parallel_istl_matrix/
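For reference, my understanding of that second approach is roughly the 
following sketch. The names (graphRepartition, RedistributeInformation, 
redistributeMatrix, OwnerOverlapCopyCommunication) come from dune-istl's 
repartition.hh and matrixredistribute.hh as described in the blog post; 
the exact signatures may differ between dune-istl versions, this is 
untested, and it also needs ParMETIS:

```cpp
// Sketch only (untested, assumes dune-istl + ParMETIS): repartition a
// matrix that initially lives entirely on rank 0, following the
// approach from the blog post linked above.
#include <dune/istl/bcrsmatrix.hh>
#include <dune/istl/owneroverlapcopy.hh>
#include <dune/istl/paamg/graph.hh>
#include <dune/istl/repartition.hh>
#include <dune/istl/matrixredistribute.hh>

using Matrix = Dune::BCRSMatrix<Dune::FieldMatrix<double, 1, 1>>;
using Comm   = Dune::OwnerOverlapCopyCommunication<std::size_t>;

Comm comm(MPI_COMM_WORLD);  // index set: all indices owned by rank 0,
                            // empty on every other rank
Matrix A;                   // full matrix on rank 0, 0x0 elsewhere

Dune::Amg::MatrixGraph<Matrix> graph(A);
Dune::RedistributeInformation<Comm> redistInfo;
std::shared_ptr<Comm> newComm;

// Let the partitioner compute a distribution of the matrix graph
// onto all ranks of the communicator.
Dune::graphRepartition(graph, comm, comm.communicator().size(),
                       newComm, redistInfo.getInterface(), true);
redistInfo.setSetup();

// Move the matrix to the new distribution; vectors would be moved
// analogously with redistInfo.redistribute(b, newB).
Matrix newA;
Dune::redistributeMatrix(A, newA, comm, *newComm, redistInfo);
```

If this is the intended route, I assume the AMG would then be set up 
with newA and *newComm instead of the original objects.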


With best regards,
Paras



On 2017-12-14 15:39, Markus Blatt wrote:
> Hi,
> 
> On Thu, Dec 14, 2017 at 02:44:14PM +0100, Markus Blatt wrote:
>> On Thu, Dec 14, 2017 at 02:15:46PM +0100, Kumar, Paras wrote:
>> > Another point is that for 2 processes and 33 unknowns it shows,
>> >
>> > Level 0 has 33 unknowns, 16.5 unknowns per proc (procs=2)
>> >
>> > Probably, the distribution of indices across processes is not happening
>> > properly. Do I need to somehow ensure the linking of the matrix index set to
>> > the DuneComm object?
>> 
>> That looks distributed to me. Each process has about 16 unknowns. But
>> maybe I misunderstood you?
> 
> On a second look. Somehow I cannot find any place where you distribute
> your linear system. To me it looks like only rank 0 actually creates a
> matrix with entries. All other ranks have a matrix with zero rows and
> columns (which might not be well tested). In addition, your index set
> is empty and the call to DuneComm.remoteIndices().template
> rebuild<false>() will create no remote indices. So even if the
> segfault is gone this will be a pure sequential solve.
> 
> Markus



