[Dune] Parallel CG Solver

Arne Rekdal arne at stud.ntnu.no
Mon Nov 24 10:55:17 CET 2008


Hello!

I'm trying to build a ParallelIndexSet as Markus suggested in an earlier
mail. I now have the ownership of the vertices, which are the dofs in
the 1D grid. I'm using YaspGrid.
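
For context, the grid is set up roughly like this, and the output below
comes from three MPI ranks. This is only a sketch: the domain size,
number of cells and overlap are placeholders, and the constructor
arguments follow a recent Dune release, so they may differ from what I
actually have.

#include <array>
#include <bitset>
#include <dune/common/fvector.hh>
#include <dune/common/parallel/mpihelper.hh>
#include <dune/grid/yaspgrid.hh>

int main(int argc, char** argv)
{
  Dune::MPIHelper::instance(argc, argv);           // initialises MPI

  const int dim = 1;
  typedef Dune::YaspGrid<dim> GridType;

  Dune::FieldVector<double, dim> upperRight(1.0);  // placeholder domain [0,1]
  std::array<int, dim> cells = {{4}};              // placeholder: 4 cells, 5 vertices
  std::bitset<dim> periodic;                       // non-periodic
  int overlap = 1;                                 // one layer of overlap cells

  GridType grid(upperRight, cells, periodic, overlap);

  // ... index set construction and solver go here ...
  return 0;
}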

Rank: 0 - GlobalId: 0 (000040000001)  LocalId: 0 - OWNER
Rank: 0 - GlobalId: 1 (000040000002)  LocalId: 1 - OWNER
Rank: 0 - GlobalId: 2 (000040000003)  LocalId: 2 - COPY

Rank: 1 - GlobalId: 0 (000040000001)  LocalId: 0 - COPY
Rank: 1 - GlobalId: 1 (000040000002)  LocalId: 1 - COPY
Rank: 1 - GlobalId: 2 (000040000003)  LocalId: 2 - OWNER
Rank: 1 - GlobalId: 3 (000040000004)  LocalId: 3 - OWNER
Rank: 1 - GlobalId: 4 (000040000005)  LocalId: 4 - COPY

Rank: 2 - GlobalId: 0 (000040000003)  LocalId: 0 - COPY
Rank: 2 - GlobalId: 1 (000040000004)  LocalId: 1 - COPY
Rank: 2 - GlobalId: 2 (000040000005)  LocalId: 2 - OWNER

As you can see, there is a problem with the global IDs: they are the
same as the local IDs. I have used
Dune::GlobalUniversalMapper<GridType>(grid) to create a global mapper.
The ID in parentheses is the lookup with the grid.globalIdSet().id
function. In YaspGrid these IDs are stored as a bigunsignedint or
something like that.
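
The table was printed with something along these lines. Again just a
sketch: I use the newer GridView / range-based interface, the grid is
the one from the setup above, and the map() call is how I assume the
mapper is queried; the OWNER/COPY attribute is decided elsewhere.

typedef GridType::LeafGridView GridView;
GridView gv = grid.leafGridView();

// mapper that is supposed to give the "GlobalId" column above
Dune::GlobalUniversalMapper<GridType> globalMapper(grid);

for (const auto& v : vertices(gv))
{
  std::cout << "Rank: "        << gv.comm().rank()
            << " - GlobalId: " << globalMapper.map(v)        // assumed interface
            << " ("            << grid.globalIdSet().id(v)   // the id in parentheses
            << ")  LocalId: "  << gv.indexSet().index(v)
            << std::endl;                                    // plus OWNER/COPY
}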

I have this code:
typedef int LocalId;
typedef GridType::Traits::GlobalIdSet::IdType GlobalId;

typedef Dune::OwnerOverlapCopyCommunication<GlobalId, LocalId> Communication;
Communication comm(MPI_COMM_WORLD);

typedef Communication::ParallelIndexSet IndexSet;
typedef IndexSet::LocalIndex LI;
IndexSet& iset=comm.indexSet();
iset.beginResize();

GlobalId globalId = grid.globalIdSet().id(*it);
LocalId localIndex = ...

iset.add(globalId, LI(localIndex,Dune::OwnerOverlapCopyAttributeSet::owner) );
...
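
For completeness, the whole loop in my sketch looks roughly like this
(the owner/copy rule here is a simplification I made up for
illustration; vertices that are border on two processes would still
need a proper tie-break):

typedef GridType::LeafGridView GridView;
GridView gv = grid.leafGridView();

for (const auto& v : vertices(gv))
{
  GlobalId globalId   = grid.globalIdSet().id(v);
  LocalId  localIndex = gv.indexSet().index(v);

  // simplified: interior and border vertices are owned, the rest are copies
  bool owned = (v.partitionType() == Dune::InteriorEntity
                || v.partitionType() == Dune::BorderEntity);

  iset.add(globalId,
           LI(localIndex, owned ? Dune::OwnerOverlapCopyAttributeSet::owner
                                : Dune::OwnerOverlapCopyAttributeSet::copy));
}
iset.endResize();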

I get an error from MPI when
	comm.remoteIndices().rebuild<false>();

is called:

[macbookpro.local:26383] *** An error occurred in MPI_Type_create_struct
[macbookpro.local:26383] *** on communicator MPI_COMM_WORLD
[macbookpro.local:26383] *** MPI_ERR_TYPE: invalid datatype
[macbookpro.local:26383] *** MPI_ERRORS_ARE_FATAL (goodbye)

The MPI error does not occur when I use the GlobalUniversalMapper
indices as the GlobalId instead, but then the global indices are wrong
(they coincide with the local indices, as shown above). The
GlobalUniversalMapper should produce globally unique indices, right?
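
The variant I mean, without the MPI error but with the wrong global
indices, is roughly this (again, map() is how I assume the mapper is
queried):

typedef int GlobalId;   // plain int from the mapper instead of the IdType above
typedef Dune::OwnerOverlapCopyCommunication<GlobalId, LocalId> Communication;

Dune::GlobalUniversalMapper<GridType> globalMapper(grid);

// inside the loop over the vertices:
GlobalId globalId = globalMapper.map(*it);   // assumed interface; these are the
                                             // values in the "GlobalId" column above
iset.add(globalId, LI(localIndex, Dune::OwnerOverlapCopyAttributeSet::owner));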



Greetings
Arne Rekdal