<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 12pt; color: #000000'>Hi,<br>I don't know whether what I'm going to say will help, but it surely cannot hurt.<br>I customized the reader for the Prostar mesh file format (starcdreader.hh in dune): the reader fills a grid factory, and only process 0 does the filling, but every process instantiates the factory.<br>At the end of the reading, process 0 has a filled factory, while the other processes have an empty factory. Then all the processes build the grid from their own factory. Finally, the loadBalance method distributes the grid.<br>I read the gmsh reader code, and the difference from my code is that in the gmsh reader the processes build different grids and only process 0 has a factory, while in my code every process has a factory (filled for 0, empty for the others) and every process builds the grid in the same way from its own factory.<br>Again, I don't know if it helps, but it works quite efficiently.<br><br>Best,<br><br>Marco<br><br>PS: the original starcdreader.hh behaved the gmsh reader way.<br><br><div><span name="x"></span>--<br>-----------------------------------------------<br>Marco Cisternino, PhD<br>OPTIMAD Engineering s.r.l.<br>Via Giacinto Collegno 18<br>10143 Torino - Italy<br>www.optimad.it<br>marco.cisternino@optimad.it<br>+39 011 19719782<br>-----------------------------------------------<span name="x"></span><br></div><br><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Jö Fahlke" <jorrit@jorrit.de><br><b>To: </b>"Andrea Sacconi" <a.sacconi11@imperial.ac.uk><br><b>Cc: </b>dune@dune-project.org<br><b>Sent: </b>Monday, 14 July 2014 23:07:22<br><b>Subject: </b>Re: [Dune] GMSH reader fails in parallel case<br><br>There is a utility in dune-grid that will load a gmsh file, distribute it, and<br>write it as a set of files in alugrid 
format:<br>http://cgit.dune-project.org/repositories/dune-grid/tree/src/gmsh-to-alu<br>The reading part should be exactly what you want. Note that you need to<br>configure dune-grid with --enable-extra-utilities (or similar) to actually<br>compile those programs. Note also that you can attach data (a physical entity<br>number) to entities in gmsh, and that program can redistribute that kind of<br>data too, which should explain some complications in the code.<br><br>Hope that gets you started,<br>Jö.<br><br>On Mon, 14 Jul 2014, 16:13:54 +0000, Sacconi, Andrea wrote:<br>> Hi all,<br>> <br>> following Andreas's suggestions, I added these lines of code:<br>> <br>> HostGridType* gridPtr = nullptr;<br>> if (rank == 0)<br>>   gridPtr = Dune::GmshReader<HostGridType>::read(FileName);<br>> else<br>>   gridPtr = new HostGridType();<br>> <br>> so only the process with rank 0 reads the file, while the others initialise an empty grid.<br>> Then I call:<br>> <br>> grid.loadBalance();<br>> <br>> but unfortunately this error message appears:<br>> <br>> as7211@macomp000:~/dune-2.3.1/dune-bulk/src$ mpirun -n 2 dune_bulk<br>> Reading 3d Gmsh grid...<br>> version 2.2 Gmsh file detected<br>> file contains 3323 nodes<br>> file contains 19216 elements<br>> number of real vertices = 3322<br>> number of boundary elements = 3036<br>> number of elements = 15981<br>> <br>> Created parallel ALUGrid<3,3,simplex,nonconforming> from macro grid file ''. 
<br>> <br>> [macomp000:5707] *** An error occurred in MPI_Allgather<br>> [macomp000:5707] *** on communicator MPI COMMUNICATOR 3 DUP FROM 0<br>> [macomp000:5707] *** MPI_ERR_TRUNCATE: message truncated<br>> [macomp000:5707] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort<br>> --------------------------------------------------------------------------<br>> mpirun noticed that the job aborted, but has no info as to the process<br>> that caused that situation.<br>> <br>> So, it appears that the file has been read (by process 0) and the grid initialised correctly. The problem is that process 0 freezes at the end of the reading step. If you comment out the load-balancing line, nothing appears on the screen, because process 0 is frozen.<br>> Any ideas about this issue? I'm very confused.<br>> <br>> Thanks again!<br>> Andrea<br>> __________________________________________________________<br>> <br>> Andrea Sacconi<br>> PhD student, Applied Mathematics<br>> AMMP Section, Department of Mathematics, Imperial College London,<br>> London SW7 2AZ, UK<br>> a.sacconi11@imperial.ac.uk<br>> <br>> ________________________________________<br>> From: dune-bounces+a.sacconi11=imperial.ac.uk@dune-project.org [dune-bounces+a.sacconi11=imperial.ac.uk@dune-project.org] on behalf of Oliver Sander [sander@igpm.rwth-aachen.de]<br>> Sent: 14 July 2014 15:37<br>> To: dune@dune-project.org<br>> Subject: Re: [Dune] GMSH reader fails in parallel case<br>> <br>> On 14.07.2014 14:54, Andreas Dedner wrote:<br>> > I think there has never been a clear decision on how the grid readers should work in the<br>> > case where the macro grid is not pre-distributed. In ALU the idea is that the grid is distributed,<br>> > but in such a way that one process has all the elements and the others are empty. 
Consequently,<br>> > only process zero should read the gmsh file and the others should generate an empty grid.<br>> > Now I remember that UG does it differently, requiring that all processes read the full macro grid.<br>> > As I said, this is a place where we need to fix the semantics. DGF does it the ALU way, which is why that works,<br>> > and the gmshreader does it the UG way...<br>> ><br>> > The simplest way to avoid the issue is to surround the call to the gmshreader with if (rank==0)<br>> > and to construct empty ALUGrids in the else branch - but then I assume UG would not be happy....<br>> ><br>> <br>> I don't think so. UGGrid contains extra code to handle that case. I don't really know how much<br>> testing it got, though.<br>> --<br>> Oliver<br>> <br>> > Andreas<br>> ><br>> > PS: it would help if you could open a flyspray task with a report and a test program; I could then add<br>> > my 5 cents from above. This would increase the chances that we actually discuss this<br>> > at the developer meeting in September.<br>> ><br>> ><br>> > On 14/07/14 13:00, Sacconi, Andrea wrote:<br>> >> Hi DUNErs,<br>> >><br>> >> I would like to ask you a question about the GMSH reader for parallel computation (with Open MPI 1.6.5). 
I am using AlugridSimplex <3,3> for a standard Poisson problem.<br>> >> Everything is fine in the sequential case, while in the parallel case I get the error reported below.<br>> >><br>> >> as7211@macomp01:~/dune-2.3.1/dune-bulk/src$ mpirun -n 2 dune_bulk<br>> >> Reading 3d Gmsh grid...<br>> >> Reading 3d Gmsh grid...<br>> >> version 2.2 Gmsh file detected<br>> >> version 2.2 Gmsh file detected<br>> >> file contains 3323 nodes<br>> >> file contains 3323 nodes<br>> >> file contains 19216 elements<br>> >> file contains 19216 elements<br>> >> terminate called after throwing an instance of 'Dune::GridError'<br>> >> [macomp01:03890] *** Process received signal ***<br>> >> [macomp01:03890] Signal: Aborted (6)<br>> >> [macomp01:03890] Signal code: (-6)<br>> >> [macomp01:03890] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x10340) [0x7ff363b7b340]<br>> >> [macomp01:03890] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7ff3637dbf79]<br>> >> [macomp01:03890] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7ff3637df388]<br>> >> [macomp01:03890] [ 3] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x155) [0x7ff3643056b5]<br>> >> [macomp01:03890] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e836) [0x7ff364303836]<br>> >> [macomp01:03890] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e863) [0x7ff364303863]<br>> >> [macomp01:03890] [ 6] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5eaa2) [0x7ff364303aa2]<br>> >> [macomp01:03890] [ 7]<br>> >> dune_bulk(_ZN4Dune16ALU3dGridFactoryINS_7ALUGridILi3ELi3ELNS_18ALUGridElementTypeE0ELNS_21ALUGridRefinementTypeE1EP19ompi_communicator_tEEE12insertVertexERKNS_11FieldVectorIdLi3EEE+0x1b2) [0x616c82]<br>> >><br>> >> Any idea about how to make only the master process read the grid, rather than all the processes? 
In any case, how can the issue be fixed?<br>> >> By the way, if I use the DGF reader everything runs fine, both in the sequential and the parallel case.<br>> >><br>> >> Thanks in advance!<br>> >> Andrea<br>> >> __________________________________________________________<br>> >><br>> >> Andrea Sacconi<br>> >> PhD student, Applied Mathematics<br>> >> AMMP Section, Department of Mathematics, Imperial College London,<br>> >> London SW7 2AZ, UK<br>> >> a.sacconi11@imperial.ac.uk<br>> >> _______________________________________________<br>> >> Dune mailing list<br>> >> Dune@dune-project.org<br>> >> http://lists.dune-project.org/mailman/listinfo/dune<br>> ><br>> ><br>> <br><br>-- <br>Jorrit (Jö) Fahlke, Institute for Computational and Applied Mathematics,<br>University of Münster, Orleans-Ring 10, D-48149 Münster<br>Tel: +49 251 83 35146 Fax: +49 251 83 32729<br><br>Of all the things I've lost, I miss my mind the most.<br>-- Ozzy Osbourne<br></div><br></div></body></html>