[Dune] solved! Re: erratic behaviour of parallel grid sizes in 3D

Michael Wenske m_wens01 at wwu.de
Fri Feb 23 14:07:22 CET 2018


Hi,

I have to apologize. I built a container of size NX*NY*NY instead of
NX*NY*NZ, which made my program segfault in all 3D cases with NZ > NY.
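For anyone who finds this thread later: the out-of-bounds writes corrupt
the heap, so the crash can surface much later in unrelated code, e.g.
inside MPI, which is why the backtraces below looked so erratic. A
minimal sketch of the mistake (names and sizes are made up for
illustration, not my actual code):

    #include <vector>

    int main()
    {
      const int NX = 15, NY = 7, NZ = 13;

      // Bug: NZ was accidentally typed as NY, so the buffer is too
      // small whenever NZ > NY.
      std::vector<double> data(NX * NY * NY); // should be NX * NY * NZ

      for (int iz = 0; iz < NZ; ++iz)
        for (int iy = 0; iy < NY; ++iy)
          for (int ix = 0; ix < NX; ++ix)
            data[(iz * NY + iy) * NX + ix] = 0.0; // out of bounds once iz >= NY
    }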

Michael


On 23.02.2018 12:22, Michael Wenske wrote:
> Dear list,
>
> I am working on verifying a dune-pdelab code, and most things work out
> nicely. To verify convergence and to tackle possible weighting errors,
> I chose 'ugly' grid dimensions for a YaspGrid:
>
> for example:
>
> LX = 3.14159265359
> LY = 1.41421356237
> LZ = 2.71828182845904523536028
>
> NX = 15
> NY = 7
> NZ = 13
>
> Now the odd thing is that the code behaves nicely for some values of
> NX/NY/NZ, but for other values (namely the ones above) it fails with
> the following MPI-related errors.
>
> command: mpirun -n 2 ./executable
>
> *** Process received signal ***
> Signal: Segmentation fault (11)
> Signal code: Address not mapped (1)
>  Failing at address: 0xffffffaa060feb70
> [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f172f284390]
> [ 1] /usr/local/lib/openmpi/mca_btl_vader.so(+0x405d)[0x7f17201a505d]
> [ 2] /usr/local/lib/libopen-pal.so.20(opal_progress+0x3c)[0x7f172c113efc]
> [ 3] /usr/local/lib/libmpi.so.20(ompi_request_default_test+0x2a)[0x7f172fd7b72a]
> [ 4] /usr/local/lib/libmpi.so.20(PMPI_Test+0x59)[0x7f172fdaaf49]
> [ 5] ./dune-micro(_ZNK4Dune5TorusINS_23CollectiveCommunicationIP19ompi_communicator_tEELi3EE8exchangeEv+0x5ce)[0x96b3ea]
>
> To rule out any overflow error I tried a couple of settings with large
> NX/NY/NZ, and those work fine. I can also choose grid dimensions with
> 2:1 aspect ratios. The error does not seem to occur in 2D.
>
> The grid instantiation looks like this, with overlap = 1:
>
>             typedef Dune::YaspGrid<3> Grid;
>             typedef Grid::ctype DF;
>             Dune::FieldVector<DF, dim> L;
>             L[0] = ptree.get("grid.structured.LX", (double) 1.0);
>             L[1] = ptree.get("grid.structured.LY", (double) 1.0);
>             L[2] = ptree.get("grid.structured.LZ", (double) 1.0);
>             std::array<int, dim> N;
>             N[0] = ptree.get("grid.structured.NX", (int) 1);
>             N[1] = ptree.get("grid.structured.NY", (int) 1);
>             N[2] = ptree.get("grid.structured.NZ", (int) 1);
>             int overlap = ptree.get<int>("grid.overlap", -1); // make it fail if unspecified
>             std::bitset<dim> B(false);
>             std::shared_ptr<Grid> gridp = std::make_shared<Grid>(
>                 L, N, B, overlap, Dune::MPIHelper::getCollectiveCommunication());
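>
> For completeness, ptree above is a Dune::ParameterTree read from an ini
> file roughly along these lines (the file name and exact section layout
> here are just an illustration of the keys used above):
>
>             // #include <dune/common/parametertreeparser.hh>
>             //
>             // expects an ini file of the form
>             //   [grid]
>             //   overlap = 1
>             //   [grid.structured]
>             //   LX = 3.14159265359
>             //   NX = 15
>             //   ...
>             Dune::ParameterTree ptree;
>             Dune::ParameterTreeParser::readINITree("params.ini", ptree);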
>
> I figured that this might also affect other people's work. Are there
> restrictions on the grid dimensions in YaspGrid for 3D, like powers of
> two only? Is there anything else I have to consider when working with
> parallel grids?
>
> Any hints are welcome,
>
> Michael
>
>
>
>
> _______________________________________________
> Dune mailing list
> Dune at lists.dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune
