[Dune] incorrect GlobalIndexSet when using SPGrid with many threads
Oliver Sander
oliver.sander at tu-dresden.de
Sun Oct 29 07:43:38 CET 2023
Dear Mona,
thanks for reaching out.
> 1) did I evaluate the issue causing the error in Dune::GlobalIndexSet correctly?
It sounds likely. I wrote GlobalIndexSet back in the day, and I never thought
about thread safety while doing so. As a matter of fact, I am quite unhappy with it
for a number of reasons nowadays, and some people have started thinking about
alternatives, e.g.,
https://gitlab.dune-project.org/staging/dune-functions/-/merge_requests/342
> 2) It seems that Dune::GlobalIndexSet is only used within parmetisgridpartitioner::repartition(), which itself is not used by any other Dune modules. Is this correct?
I wrote it for the partitioner, and some of my local research code also uses it.
Other than that, I don't know.
> 3) Consequently, we had planned to use a custom mapper in our implementation (and its Python binding) to go from 'globalIdSet' to contiguous global indices. Once this is implemented, are there other problems that might arise when using SPGrid with a high number of threads?
That is something that you have to ask the SPGrid maintainers.
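In case it helps, here is a rough, untested sketch of what such a mapper could
look like (the class name ContiguousElementIndexSet is made up, not an existing
API, and I assume Dune >= 2.8 for the fixedSize spelling in CommDataHandleIF).
It does not go through the globalIdSet at all: each rank counts its interior
elements, computes its offset via a prefix sum over the per-rank counts, numbers
its interior elements consecutively, and finally ships those numbers to the
overlap copies with a data handle. That is essentially what GlobalIndexSet does
internally, too.

#include <cstddef>
#include <vector>

#include <dune/grid/common/datahandleif.hh>
#include <dune/grid/common/gridenums.hh>
#include <dune/grid/common/partitionset.hh>
#include <dune/grid/common/rangegenerators.hh>

// hypothetical class, not part of dune-grid: assigns contiguous global
// indices to the elements (codim 0) of an overlapping grid view
template<class GridView>
class ContiguousElementIndexSet
  : public Dune::CommDataHandleIF<ContiguousElementIndexSet<GridView>, int>
{
public:
    explicit ContiguousElementIndexSet(const GridView& gridView)
      : gridView_(gridView), indices_(gridView.size(0), -1)
    {
        const auto& indexSet = gridView_.indexSet();

        // 1) count the elements owned by this rank
        int numOwned = 0;
        for ([[maybe_unused]] const auto& e : elements(gridView_, Dune::Partitions::interior))
            ++numOwned;

        // 2) exclusive prefix sum over the ranks: Dune::Communication has
        //    no scan operation, so allgather the per-rank counts and
        //    sum up the counts of all lower ranks
        std::vector<int> counts(gridView_.comm().size());
        gridView_.comm().allgather(&numOwned, 1, counts.data());
        int offset = 0;
        for (int r = 0; r < gridView_.comm().rank(); ++r)
            offset += counts[r];

        // 3) number the owned (interior) elements consecutively
        for (const auto& e : elements(gridView_, Dune::Partitions::interior))
            indices_[indexSet.index(e)] = offset++;

        // 4) send each owner's numbers to the overlap/ghost copies
        gridView_.communicate(*this,
                              Dune::InteriorBorder_All_Interface,
                              Dune::ForwardCommunication);
    }

    int index(const typename GridView::template Codim<0>::Entity& e) const
    { return indices_[gridView_.indexSet().index(e)]; }

    // CommDataHandleIF interface: one int per element
    bool contains(int /*dim*/, int codim) const { return codim == 0; }
    bool fixedSize(int /*dim*/, int /*codim*/) const { return true; }

    template<class Entity>
    std::size_t size(const Entity&) const { return 1; }

    template<class Buffer, class Entity>
    void gather(Buffer& buffer, const Entity& e) const
    { buffer.write(indices_[gridView_.indexSet().index(e)]); }

    template<class Buffer, class Entity>
    void scatter(Buffer& buffer, const Entity& e, std::size_t /*n*/)
    {
        int i;
        buffer.read(i);
        // only copies are overwritten; owned entries are already set
        if (indices_[gridView_.indexSet().index(e)] < 0)
            indices_[gridView_.indexSet().index(e)] = i;
    }

private:
    GridView gridView_;
    std::vector<int> indices_;
};

Writing it out by hand like this at least gives you full control over when the
communication happens. Whether SPGrid behaves as expected here is, again, a
question for the SPGrid maintainers.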
Best,
Oliver
>
> Many thanks for your help and best regards
>
> Mona
>
> Summary of the example:
>
> ######### core code:
>
>
> // includes added for completeness; the properties header defining
> // Properties::TTag::Injection2pCC is not shown in this post
> #include <config.h>
>
> #include <iostream>
> #include <memory>
>
> #include <dune/grid/utility/globalindexset.hh>
>
> #include <dumux/common/initialize.hh>
> #include <dumux/common/parameters.hh>
> #include <dumux/common/properties.hh>
> #include <dumux/io/grid/gridmanager.hh>
>
> #include "properties.hh" // defines Properties::TTag::Injection2pCC
>
> int main(int argc, char** argv)
> {
>     using namespace Dumux;
>
>     // define the type tag for this problem
>     using TypeTag = Properties::TTag::Injection2pCC;
>
>     // initialize MPI+X; finalize is done automatically on exit
>     Dumux::initialize(argc, argv);
>
>     // parse command line arguments and input file
>     Parameters::init(argc, argv);
>
>     // try to create a grid (from the given grid file or the input file)
>     GridManager<GetPropType<TypeTag, Properties::Grid>> gridManager;
>     gridManager.init();
>
>     ////////////////////////////////////////////////////////////
>     // run instationary non-linear problem on this grid
>     ////////////////////////////////////////////////////////////
>
>     // we compute on the leaf grid view
>     const auto& leafGridView = gridManager.grid().leafGridView();
>
>     // create the finite volume grid geometry
>     using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;
>     auto gridGeometry = std::make_shared<GridGeometry>(leafGridView);
>
>     // extract the grid view, grid, and id set types
>     using SoilGridType = GetPropType<TypeTag, Properties::Grid>;
>     using GridView = typename SoilGridType::Traits::LeafGridView;
>     using Grid = typename GridView::Grid;
>     using GlobalIdSet = typename Grid::GlobalIdSet;
>     constexpr int dimWorld = GridView::dimensionworld;
>
>     // retrieve the globally unique (but non-contiguous) id set
>     const GlobalIdSet& globalIdSet = gridManager.grid().globalIdSet();
>
>     // local (per-process) index set
>     const typename GridView::IndexSet& indexSet = leafGridView.indexSet();
>
>     // retrieve contiguous global indices via GlobalIndexSet (codim 0 = elements)
>     auto cellIdx = std::make_shared<Dune::GlobalIndexSet<GridView>>(leafGridView, 0);
>
>     std::cout << "thread rank: " << leafGridView.comm().rank()
>               << " dimWorld: " << dimWorld << "\n";
>
>     for (const auto& e : elements(gridGeometry->gridView()))
>     {
>         auto p = e.geometry().center(); // cell center, to compare with the indices
>         auto gId2 = globalIdSet.id(e);  // globally unique id (not contiguous)
>         auto lId = indexSet.index(e);   // local index
>         int gIdx = cellIdx->index(e);   // contiguous global index
>         std::cout << "rank " << leafGridView.comm().rank()
>                   << " GlobalIndexSet_gIdx " << gIdx
>                   << " real_gIdx " << gId2 << " lId " << lId;
>         // check whether the cell is owned by this thread
>         // (i.e., e.partitionType() == interior)
>         std::cout << " partitionType " << e.partitionType();
>         for (int i = 0; i < dimWorld; i++) // print cell center coordinates
>             std::cout << ", dim " << i << ": " << p[i];
>         std::cout << std::endl;
>     }
>
>     return 0;
> } // end main
>
>
> #########
> inputs:
>
> grid parameter:
>
> [Grid]
> LowerLeft = 0 0 0
> UpperRight = 6 6 6
> Cells = 2 2 2
>
> #########
> output:
>
> for 1 thread:
>
> rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 1 real_gIdx 33 lId 1 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 2 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 3 real_gIdx 43 lId 3 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 4 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 5 real_gIdx 83 lId 5 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 6 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5
>
> for 2 threads (both threads give the same output except for the partitionType; example for thread 0):
> rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 4 real_gIdx 33 lId 1 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 1 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 5 real_gIdx 43 lId 3 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5
> rank 0 GlobalIndexSet_gIdx 2 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 6 real_gIdx 83 lId 5 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 3 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5
> rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5
>
>
> As can be seen, when we use several threads, real_gIdx still follows the expected order (from bottom left to top right), but GlobalIndexSet_gIdx no longer does.
> ###########