[Dune] GMSH reader fails in parallel case

Andreas Dedner a.s.dedner at warwick.ac.uk
Mon Jul 14 14:54:31 CEST 2014


There has never been a clear decision, I think, on how the grid readers 
should work in the
case that the macro grid is not pre-distributed. In ALU the idea is that 
the grid is distributed,
but in such a way that one process has all the elements and the others 
are empty. Consequently,
only process zero should read the gmsh file and the others should 
generate an empty grid.
Now I remember that UG does it differently, requiring that all processes 
read the full macro grid.
As I said, this is a place where we need to fix the semantics. DGF does 
it the ALU way, which is why that works,
while the gmshreader does it the UG way...

The simplest way to avoid the issue is to surround the call to the 
gmshreader with if (rank==0)
and to construct empty ALUGrids on the other ranks - but then I assume 
UG would not be happy.... A sketch of that idea follows.

Andreas

PS: it would help if you could open a flyspray task with a report and a 
test program; I could then add
my 5 cents from above. This would increase the chances that we 
actually discuss this
at the developer meeting in September.


On 14/07/14 13:00, Sacconi, Andrea wrote:
> Hi DUNErs,
>
> I would like to ask you a question about the GMSH reader for parallel computations (with Open MPI 1.6.5). I am using AlugridSimplex<3,3> for a standard Poisson problem.
> Everything is fine in the sequential case, while in the parallel case I get the error reported below.
>
> as7211 at macomp01:~/dune-2.3.1/dune-bulk/src$ mpirun -n 2 dune_bulk
> Reading 3d Gmsh grid...
> Reading 3d Gmsh grid...
> version 2.2 Gmsh file detected
> version 2.2 Gmsh file detected
> file contains 3323 nodes
> file contains 3323 nodes
> file contains 19216 elements
> file contains 19216 elements
> terminate called after throwing an instance of 'Dune::GridError'
> [macomp01:03890] *** Process received signal ***
> [macomp01:03890] Signal: Aborted (6)
> [macomp01:03890] Signal code:  (-6)
> [macomp01:03890] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7ff363b7b340]
> [macomp01:03890] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7ff3637dbf79]
> [macomp01:03890] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7ff3637df388]
> [macomp01:03890] [ 3] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x155) [0x7ff3643056b5]
> [macomp01:03890] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e836) [0x7ff364303836]
> [macomp01:03890] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e863) [0x7ff364303863]
> [macomp01:03890] [ 6] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5eaa2) [0x7ff364303aa2]
> [macomp01:03890] [ 7] dune_bulk(_ZN4Dune16ALU3dGridFactoryINS_7ALUGridILi3ELi3ELNS_18ALUGridElementTypeE0ELNS_21ALUGridRefinementTypeE1EP19ompi_communicator_tEEE12insertVertexERKNS_11FieldVectorIdLi3EEE+0x1b2) [0x616c82]
>
> Any idea about how to make only the master process read the grid, and not all the processes? In any case, how can the issue be fixed?
> By the way, if I use the DGF reader everything runs fine, both in the sequential and the parallel case.
>
> Thanks in advance!
> Andrea
> __________________________________________________________
>
> Andrea Sacconi
> PhD student, Applied Mathematics
> AMMP Section, Department of Mathematics, Imperial College London,
> London SW7 2AZ, UK
> a.sacconi11 at imperial.ac.uk
> _______________________________________________
> Dune mailing list
> Dune at dune-project.org
> http://lists.dune-project.org/mailman/listinfo/dune




