[Dune] Incorrect GlobalIndexSet when using SPGrid with many processes
Giraud, Mona
m.giraud at fz-juelich.de
Sat Oct 28 20:26:38 CEST 2023
Dear Dune developers,
There seems to be an issue with the Dune::GlobalIndexSet class (<dune/grid/utility/globalindexset.hh>) when computing on an SPGrid with a large number of MPI processes.
GlobalIndexSet seems to assume that each process owns one contiguous "region" of the grid, so it computes the global index of an element as the local index plus a per-process offset.
However, an SPGrid can be distributed over many processes, which leads to large overlaps, and the cells owned by the different processes are intertwined. Consequently, Dune::GlobalIndexSet yields wrong global indices.
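To make that suspicion concrete, below is a minimal, self-contained sketch of the offset-based numbering I believe GlobalIndexSet uses (this is my reading of the code, not the actual dune-grid implementation). The alternating ownership pattern is hypothetical, but it mimics the 2x2x2 example further below:

// Sketch: each rank counts its interior elements, an exclusive prefix sum of
// those counts gives a per-rank offset, and an interior element receives
// "offset + running local number". This reproduces a grid-wide ordering only
// if every rank owns one contiguous block of that ordering, which is not the
// case for the intertwined ownership of a distributed SPGrid with overlap.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // hypothetical ownership: 8 cells, 2 ranks, alternating along the grid order
    const std::vector<int> owner = {0, 1, 0, 1, 0, 1, 0, 1};
    const int nRanks = 2;

    // per-rank interior counts and their exclusive prefix sum -> offsets
    std::vector<int> count(nRanks, 0);
    for (int r : owner) ++count[r];
    std::vector<int> offset(nRanks, 0);
    std::exclusive_scan(count.begin(), count.end(), offset.begin(), 0);

    // assign "offset + running local number" per owning rank
    std::vector<int> next(offset);
    for (std::size_t c = 0; c < owner.size(); ++c)
        std::cout << "cell " << c << " -> global index " << next[owner[c]]++ << "\n";
    // prints 0, 4, 1, 5, 2, 6, 3, 7: the second cell in grid order gets index 4,
    // which is exactly the pattern seen in the 2-process output below
}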
I adapted a DuMux introductory exercise to showcase this problem (see the attached zip archive and the summary given below). Even though it uses DuMux, I hope it is still informative.
I have the following questions on this issue:
1) Did I diagnose the cause of the error in Dune::GlobalIndexSet correctly?
2) It seems that Dune::GlobalIndexSet is only used within ParMetisGridPartitioner::repartition(), which is itself not used by any other Dune modules. Is this correct?
3) Consequently, we plan to use a custom mapper in our implementation (/Python binding) to go from the 'globalIdSet' to contiguous global indices (a rough sketch of what we have in mind is given after these questions). Once this is implemented, are there other problems that might arise when using SPGrid with a large number of processes?
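For context on question 3: for a structured SPGrid, the kind of mapper we have in mind could compute the contiguous cell index directly from the cell centre and the grid extents, independently of the partitioning. The following is only a rough sketch; the function name is made up and the hard-coded extents are taken from the [Grid] section of the example below:

#include <array>
#include <cmath>
#include <iostream>

// map a cell centre to a lexicographic (x fastest, then y, then z) cell index
template<class Position>
int lexicographicCellIndex(const Position& center)
{
    // extents from the example input: LowerLeft = 0 0 0, UpperRight = 6 6 6, Cells = 2 2 2
    const std::array<double, 3> lowerLeft  = {0.0, 0.0, 0.0};
    const std::array<double, 3> upperRight = {6.0, 6.0, 6.0};
    const std::array<int, 3>    cells      = {2, 2, 2};

    // integer cell coordinates (i, j, k) from the centre coordinates
    std::array<int, 3> ijk;
    for (int d = 0; d < 3; ++d)
    {
        const double h = (upperRight[d] - lowerLeft[d]) / cells[d]; // cell size in direction d
        ijk[d] = static_cast<int>(std::floor((center[d] - lowerLeft[d]) / h));
    }
    return ijk[0] + cells[0] * (ijk[1] + cells[1] * ijk[2]);
}

int main()
{
    // inside the element loop of the example below this would be called as
    // lexicographicCellIndex(e.geometry().center())
    const std::array<double, 3> center = {4.5, 1.5, 1.5};
    std::cout << lexicographicCellIndex(center) << "\n"; // prints 1
}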
Many thanks for your help and best regards,
Mona
Summary of the example:
######### core code:
int main(int argc, char** argv)
{
    using namespace Dumux;

    // define the type tag for this problem
    using TypeTag = Properties::TTag::Injection2pCC;

    // initialize MPI+X; finalize is done automatically on exit
    Dumux::initialize(argc, argv);

    // parse command line arguments and input file
    Parameters::init(argc, argv);

    // try to create a grid (from the given grid file or the input file)
    GridManager<GetPropType<TypeTag, Properties::Grid>> gridManager;
    gridManager.init();

    ////////////////////////////////////////////////////////////
    // run instationary non-linear problem on this grid
    ////////////////////////////////////////////////////////////

    // we compute on the leaf grid view
    const auto& leafGridView = gridManager.grid().leafGridView();

    // create the finite volume grid geometry
    using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;
    auto gridGeometry = std::make_shared<GridGeometry>(leafGridView);

    // retrieve the grid and grid view types
    using SoilGridType = GetPropType<TypeTag, Properties::Grid>;
    using GridView = typename SoilGridType::Traits::LeafGridView;
    int dimWorld = GridView::dimensionworld;
    using Grid = typename GridView::Grid;
    using GlobalIdSet = typename Grid::GlobalIdSet;

    // globally unique (but not contiguous) id set
    const GlobalIdSet& globalIdSet = gridManager.grid().leafGridView().grid().globalIdSet();

    // local index set
    const typename GridView::IndexSet& indexSet = gridManager.grid().leafGridView().indexSet();

    // contiguous global element indices via GlobalIndexSet (codim 0)
    auto cellIdx = std::make_shared<Dune::GlobalIndexSet<GridView>>(gridManager.grid().leafGridView(), 0);

    std::cout << "rank:" << leafGridView.comm().rank() << " dimWorld:" << dimWorld << "\n";

    for (const auto& e : elements(gridGeometry->gridView())) {
        auto p = e.geometry().center();  // cell center, to compare with the indices
        auto gId2 = globalIdSet.id(e);   // globally unique id (not contiguous)
        auto lId = indexSet.index(e);    // local index
        int gIdx = cellIdx->index(e);    // contiguous global index from GlobalIndexSet
        std::cout << "rank " << leafGridView.comm().rank()
                  << " GlobalIndexSet_gIdx " << gIdx << " real_gIdx " << gId2 << " lId " << lId;
        std::cout << " partitionType " << e.partitionType(); // interior == cell owned by this process
        for (int i = 0; i < 3; i++) // print the cell center coordinates
            std::cout << ", dim " << i << ": " << p[i];
        std::cout << std::endl;
    }

    return 0;
} // end main
#########
inputs:
grid parameters:
[Grid]
LowerLeft = 0 0 0
UpperRight = 6 6 6
Cells = 2 2 2
#########
output:
for 1 process:
rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 1 real_gIdx 33 lId 1 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 2 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 3 real_gIdx 43 lId 3 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 4 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 5 real_gIdx 83 lId 5 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 6 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5
for 2 processes (both ranks give the same output except for the partitionType; example for rank 0):
rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 4 real_gIdx 33 lId 1 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 1 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 5 real_gIdx 43 lId 3 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5
rank 0 GlobalIndexSet_gIdx 2 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 6 real_gIdx 83 lId 5 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 3 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5
rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5
As can be seen, with several processes the real_gIdx still follows the expected grid order (from bottom left to top right), but the GlobalIndexSet_gIdx no longer does: on rank 0 the four interior cells (x = 1.5) are numbered 0-3 and the four overlap cells (x = 4.5, owned by rank 1) get 4-7, so e.g. the cell at (4.5, 1.5, 1.5) receives index 4 instead of 1.
###########
-------------- next part --------------
A non-text attachment was scrubbed...
Name: issueSPGridParallel.zip
Type: application/x-zip-compressed
Size: 9121 bytes
Desc: issueSPGridParallel.zip
URL: <https://lists.dune-project.org/pipermail/dune/attachments/20231028/67ea2ef6/attachment-0001.bin>