<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text --><style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
</head>
<body>
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr" style="font-size:12pt; color:#000000; font-family:Calibri,Helvetica,sans-serif">
<p>Dear Oliver,</p>
<p>Many thanks for all the information!</p>
<p>Regarding SPGrid, I saw that Dr. Martin Nolte is listed as the SPGrid code maintainer, although he is also listed as an alumnus/former Dune developer.</p>
<p><span>Is he still the contact person for questions regarding SPGrid?</span></p>
<p><span>Best,</span></p>
<p><span>Mona</span></p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Oliver Sander <oliver.sander@tu-dresden.de><br>
<b>Sent:</b> Sunday, October 29, 2023 7:43:38 AM<br>
<b>To:</b> dune@lists.dune-project.org; Giraud, Mona<br>
<b>Subject:</b> Re: [Dune] incorrect GlobalIndexSet when using SPGrid with many threads</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">Dear Mona,<br>
<br>
thanks for reaching out.<br>
<br>
> 1) did I evaluate the issue causing the error in Dune::GlobalIndexSet correctly?<br>
<br>
It sounds likely. I wrote GlobalIndexSet back in the day, and while doing so<br>
I never thought about thread safety. As a matter of fact, I am quite unhappy with it<br>
for a number of reasons nowadays, and some people have started thinking about<br>
alternatives, e.g.,<br>
<br>
<a href="https://gitlab.dune-project.org/staging/dune-functions/-/merge_requests/342">
https://gitlab.dune-project.org/staging/dune-functions/-/merge_requests/342</a><br>
<br>
> 2) it seems that Dune::GlobalIndexSet is only used within parmetisgridpartitioner::repartition() which is itself not used by any other dune modules. Is this correct?<br>
<br>
I wrote it for the partitioner, and some of my local research code also uses it.<br>
Other than that, I don't know.<br>
<br>
> 3) Consequently, we had planned on using a custom mapper in our implementation (/python binding) to go from 'globalIdSet' to contiguous global indexes. Once this is implemented, are there other problems that might arise when using SPGrid with a high number of threads?<br>
<br>
That is something that you have to ask the SPGrid maintainers.<br>
<br>
Best,<br>
Oliver<br>
<br>
<br>
> <br>
> Many thanks for your help and with best regards<br>
> <br>
> Mona<br>
> <br>
> Summary of the example:<br>
> <br>
> ######### core code:<br>
> <br>
> <br>
> int main(int argc, char** argv)<br>
> {<br>
> using namespace Dumux;<br>
> <br>
> // define the type tag for this problem<br>
> using TypeTag = Properties::TTag::Injection2pCC;<br>
> <br>
> /// initialize MPI+X, finalize is done automatically on exit<br>
> Dumux::initialize(argc, argv);<br>
> <br>
> // parse command line arguments and input file<br>
> Parameters::init(argc, argv);<br>
> <br>
> // try to create a grid (from the given grid file or the input file)<br>
> GridManager<GetPropType<TypeTag, Properties::Grid>> gridManager;<br>
> gridManager.init();<br>
> <br>
> ////////////////////////////////////////////////////////////<br>
> // run instationary non-linear problem on this grid<br>
> ////////////////////////////////////////////////////////////<br>
> <br>
> // we compute on the leaf grid view<br>
> const auto& leafGridView = gridManager.grid().leafGridView();<br>
> <br>
> // create the finite volume grid geometry<br>
> using GridGeometry = GetPropType<TypeTag, Properties::GridGeometry>;<br>
> auto gridGeometry = std::make_shared<GridGeometry>(leafGridView);<br>
> // create the finite volume grid geometry<br>
> using SoilGridType = GetPropType<TypeTag, Properties::Grid>;<br>
> using GridView = typename SoilGridType::Traits::LeafGridView;<br>
> int dimWorld = GridView::dimensionworld;<br>
> typedef typename GridView::Grid Grid;<br>
> typedef typename Grid::GlobalIdSet GlobalIdSet;<br>
> <br>
> const GlobalIdSet & globalIdSet = gridManager.grid().leafGridView().grid().globalIdSet();/** retrieve globally unique Id set */<br>
> <br>
> /** local id**/<br>
> const typename GridView::IndexSet& indexSet = gridManager.grid().leafGridView().indexSet();<br>
> /** retrieve contiguous global unique Id set via GlobalIndexSet*/<br>
> auto cellIdx = std::make_shared<Dune::GlobalIndexSet<GridView>>(gridManager.grid().leafGridView(), 0);<br>
> std::cout<<"thread rank:"<<leafGridView.comm().rank()<<" dimWorld:"<<dimWorld <<"\n";<br>
> for (const auto& e : elements(gridGeometry->gridView())) {<br>
> auto p = e.geometry().center(); // get cell center to compare with indexes<br>
> auto gId2 = globalIdSet.id(e); // real global index (not contiguous)<br>
> auto lId = indexSet.index(e); // local index<br>
> int gIdx = cellIdx->index(e);//get the cell global index<br>
> std::cout<<"rank "<<leafGridView.comm().rank()<br>
> <<"GlobalIndexSet_gIdx "<<gIdx<<" real_gIdx "<<gId2<<" lId "<<lId;<br>
> std::cout<<" partitionType "<<e.partitionType() ;//check if cell is owned by the thread (i.e., e.partitionType() == interior)<br>
> for (int i=0; i< 3 ; i++) { // print cell center coordinates<br>
> std::cout<<", dim "<<i<<": "<<p[i];<br>
> }std::cout<<std::endl;<br>
> }<br>
> return 0;<br>
> } // end main<br>
> <br>
> <br>
> #########<br>
> inputs:<br>
> <br>
> grid parameter:<br>
> <br>
> [Grid]<br>
> LowerLeft = 0 0 0<br>
> UpperRight = 6 6 6<br>
> Cells = 2 2 2<br>
> <br>
> #########<br>
> output:<br>
> <br>
> for 1 thread:<br>
> <br>
> rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 1 real_gIdx 33 lId 1 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 2 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 3 real_gIdx 43 lId 3 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 4 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 5 real_gIdx 83 lId 5 partitionType interior, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 6 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType interior, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5<br>
> <br>
> for 2 threads (both threads give the same output except for the partitionType. Example for thread 0):<br>
> rank 0 GlobalIndexSet_gIdx 0 real_gIdx 31 lId 0 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 4 real_gIdx 33 lId 1 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 1 real_gIdx 41 lId 2 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 5 real_gIdx 43 lId 3 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 1.5<br>
> rank 0 GlobalIndexSet_gIdx 2 real_gIdx 81 lId 4 partitionType interior, dim 0: 1.5, dim 1: 1.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 6 real_gIdx 83 lId 5 partitionType overlap, dim 0: 4.5, dim 1: 1.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 3 real_gIdx 91 lId 6 partitionType interior, dim 0: 1.5, dim 1: 4.5, dim 2: 4.5<br>
> rank 0 GlobalIndexSet_gIdx 7 real_gIdx 93 lId 7 partitionType overlap, dim 0: 4.5, dim 1: 4.5, dim 2: 4.5<br>
> <br>
> <br>
> As can be seen, when we use several threads, the real_gIdx follows the expected order (from bottom left to top right), but the GlobalIndexSet_gIdx no longer does: on each rank the interior cells are numbered first (0-3) and the overlap cells afterwards (4-7).<br>
> ###########<br>
> <br>
> <br>
> <br>
> ------------------------------------------------------------------------------------------------<br>
> ------------------------------------------------------------------------------------------------<br>
> Forschungszentrum Jülich GmbH<br>
> 52425 Jülich<br>
> Sitz der Gesellschaft: Jülich<br>
> Eingetragen im Handelsregister des Amtsgerichts Düren Nr. HR B 3498<br>
> Vorsitzender des Aufsichtsrats: MinDir Stefan Müller<br>
> Geschäftsführung: Prof. Dr. Astrid Lambrecht (Vorsitzende),<br>
> Karsten Beneke (stellv. Vorsitzender), Dr. Ir. Pieter Jansens<br>
> ------------------------------------------------------------------------------------------------<br>
> ------------------------------------------------------------------------------------------------<br>
> <br>
> _______________________________________________<br>
> Dune mailing list<br>
> Dune@lists.dune-project.org<br>
> <a href="https://lists.dune-project.org/mailman/listinfo/dune">https://lists.dune-project.org/mailman/listinfo/dune</a><br>
</div>
</span></font><br>
</body>
</html>