[Dune] How to use parallel UGGrid?

Oliver Sander sander at mi.fu-berlin.de
Tue Jun 22 21:17:01 CEST 2010


Hi Wolfgang!
I can reproduce your crash now.  I'll have a look.
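
In the meantime, one thing to try: with a parallel UGGrid the Gmsh file
should be read on rank 0 only, but the grid object still has to be set
up on all the other ranks; otherwise loadBalance()/globalRefine() run
into exactly the "grid has not been properly initialized" error you
see.  A minimal, untested sketch (assuming the Dune 2.0-era GridFactory
interface for UGGrid, i.e. the constructor that fills an existing grid
object, and an additional #include <dune/grid/common/gridfactory.hh>):

    typedef Dune::UGGrid<3> GridType;
    GridType grid(400);

    if (helper.rank() == 0) {
      // rank 0 reads the complete coarse grid from the Gmsh file
      Dune::GmshReader<GridType>::read(grid, gridName);
    } else {
      // all other ranks create a properly initialized, empty grid
      Dune::GridFactory<GridType> factory(&grid);
      factory.createGrid();
    }

    grid.loadBalance();        // distribute the coarse grid
    grid.globalRefine(level);  // refine on all ranks, after load balancing

Note that globalRefine() is called on all processes here, and only
after loadBalance(); in your code it runs on rank 0 only, which takes
the processes out of sync.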

best,
Oliver

On 2010-06-21 11:06, giese at mathematik.hu-berlin.de wrote:
> Hi Oliver,
>
> I forgot to send you the UG version that is installed:
> UG-2008-11-13.tar.gz was used and patched with
> ug-dune-patches-2009-12-01.diff. Does this cause any problems with Dune
> 2.0?
>
> Best regards,
> Wolfgang
>
>
> ----
> Hi Oliver,
>
> thanks for your mail. UG and Dune are compiled with --enable-parallel. I
> am using the latest revisions: dune-common rev. 6048, dune-grid rev. 6832,
> dune-istl rev. 1226, dune-localfunctions rev. 882, dune-pdelab rev. 529.
>
> The following components are installed:
>
> dune-common.............: version 2.1-svn
> ALBERTA.................: version 2.0
> ALUGrid.................: version 1.22
> AlgLib for DUNE.........: no
> AmiraMesh...............: no
> GMP.....................: yes
> Grape...................: yes
> MPI.....................: yes (OpenMPI)
> OpenGL..................: yes
> UG......................: yes (parallel)
> psurface................: yes
>
> You can find the file cube.msh attached.
>
> Best regards,
> Wolfgang
>
> I am also sending you the code and the error message.
>
> Code:
>
> #ifdef HAVE_CONFIG_H
> #include "config.h"
> #endif
> #include<iostream>
> #include<vector>
> #include<string>
>
> #include<dune/common/mpihelper.hh>
> #include<dune/common/exceptions.hh>
> #include<dune/grid/io/file/vtk/vtkwriter.hh>
> #include<dune/grid/io/file/gmshreader.hh>       // New: Dune::GmshReader
> #if HAVE_UG                                    // New: Use UG here
> #include<dune/grid/uggrid.hh>
> #endif
>
> int main(int argc, char** argv)
> {
>    try{
>      // Maybe initialize MPI
>      Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv);
>      if(Dune::MPIHelper::isFake)
>        std::cout << "This is a sequential program." << std::endl;
>      else
>        {
>          if(helper.rank()==0)
>            std::cout << "parallel run on " << helper.size()
>                      << " process(es)" << std::endl;
>        }
>
>        // check arguments
>        if (argc!=3)
>        {
>          std::cout << "usage: " << argv[0] << " <gridName> <level>"
>                    << std::endl;
>          return 1;
>        }
>
>      // refinement level
>      int level = 0;
>      sscanf(argv[2], "%d",&level);
>
>      // instantiate UG grid object (400 MB heap)
>      typedef Dune::UGGrid<3> GridType;
>      GridType grid(400);
>
>      // read a gmsh file
>      std::string gridName = argv[1];
>      if(helper.rank()==0) {
>        Dune::GmshReader<GridType> gmshreader;
>        gmshreader.read(grid, gridName);
>        grid.globalRefine(level);
>      }
>
>      // edit gridName
>      gridName.erase(0, gridName.rfind("/")+1);
>      gridName.erase(gridName.find(".",0), gridName.length());
>
>      if(!Dune::MPIHelper::isFake)
>        grid.loadBalance();
>
>      // get a grid view
>      typedef GridType::LeafGridView GV;
>      const GV& gv = grid.leafView();
>
>      // plot celldata
>      std::vector<int> a(gv.size(0), 1);
>
>      // output
>      Dune::VTKWriter<GV> vtkwriter(gv);
>      vtkwriter.addCellData(a, "celldata");
>      vtkwriter.write(gridName.c_str(), Dune::VTKOptions::ascii);
>
>      // done
>      return 0;
>    }
>    catch (Dune::Exception& e){
>      std::cerr << "Dune reported error: " << e << std::endl;
>      return 1;
>    }
>    catch (...){
>      std::cerr << "Unknown exception thrown!" << std::endl;
>      return 1;
>    }
>
> }
>
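> (For completeness: the program is started on three processes, with a
> command of the form "mpirun -np 3 ./gridtest cube.msh <level>".)
>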
> Error Message:
>
> parallel run on 3 process(es)
> DimX=3, DimY=1, DimZ=1
> Reading 3d Gmsh grid...
> Dune reported error: GridError: The grid has not been properly initialized!
> Dune reported error: GridError: The grid has not been properly initialized!
> version 2 Gmsh file detected
> file contains 14 nodes
> file contains 48 elements
> number of real vertices = 14
> number of boundary elements = 24
> number of elements = 24
>
> Then the run hangs and does not finish...
>
>
>> Hi Wolfgang!
>> I tried your code, but cannot reproduce the problem.  Can I have your
>> grid?  Does it load in a sequential setting?  What is your Dune version?
>> What kind of UG do you have?  How many processes did you try?
>> What is your MPI?
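>> (For the sequential check it should be enough to run the same binary
>> on a single process, e.g. "./gridtest cube.msh 0"; if that already
>> fails, the reader rather than the parallel setup is the problem.)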
>>
>> --
>> Oliver
>>
>> On 2010-06-18 11:02, giese at mathematik.hu-berlin.de wrote:
>>> Dear Dune-Developers,
>>>
>>> I have a question regarding the use of the parallel UGGrid in
>>> combination with the GmshReader. I tried a simple example in DUNE,
>>> where the programme just reads a grid using the GmshReader and writes
>>> out the distributed grid using the VTKWriter. The crucial part of the
>>> code looks as follows:
>>>
>>>       .
>>>       .
>>>       .
>>>
>>>       typedef Dune::UGGrid<3> GridType;
>>>       GridType grid(400);
>>>
>>>       // read a gmsh file
>>>       Dune::GmshReader<GridType> gmshreader;
>>>       gmshreader.read(grid, "cube.msh");
>>>
>>>       // refine grid
>>>       grid.globalRefine(level);
>>>
>>>       if(!Dune::MPIHelper::isFake)
>>>         grid.loadBalance();
>>>
>>>       // get a grid view
>>>       typedef GridType::LeafGridView GV;
>>>       const GV& gv = grid.leafView();
>>>
>>>       // plot celldata
>>>       std::vector<int> a(gv.size(0), 1);
>>>
>>>       // output
>>>       Dune::VTKWriter<GV> vtkwriter(gv);
>>>       vtkwriter.addCellData(a, "celldata");
>>>       vtkwriter.write("TestGrid", Dune::VTKOptions::ascii);
>>>
>>>       .
>>>       .
>>>       .
>>>
>>> This code produces an error message, which can be seen below. It seems
>>> that UGGrid starts in parallel; it is actually configured with
>>> "--enable-parallel". But somewhere along the way something goes wrong.
>>> What do I have to change? Do I have to add some code? Maybe you have an
>>> easy example that works in parallel? I would be very grateful if you
>>> could help me!
>>>
>>> Best regards,
>>> Wolfgang Giese
>>>
>>> ---Error Message---
>>> The programme "gridtest", started on three processes, produces the
>>> following error message:
>>>
>>> parallel run on 3 process(es)
>>> DimX=3, DimY=1, DimZ=1
>>> Reading 3d Gmsh grid...
>>> Reading 3d Gmsh grid...
>>> Reading 3d Gmsh grid...
>>> version 2 Gmsh file detected
>>> file contains 14 nodes
>>> number of real vertices = 14
>>> number of boundary elements = 24
>>> number of elements = 24
>>> number of real vertices = 14
>>> number of boundary elements = 24
>>> number of elements = 24
>>> file contains 48 elements
>>> number of real vertices = 14
>>> number of boundary elements = 24
>>> number of elements = 24
>>> [node18:11171] *** Process received signal ***
>>> [node18:11171] Signal: Segmentation fault (11)
>>> [node18:11171] Signal code: Address not mapped (1)
>>> [node18:11171] Failing at address: 0x7359e3588
>>> [node18:11170] *** Process received signal ***
>>> [node18:11170] Signal: Segmentation fault (11)
>>> [node18:11170] Signal code: Address not mapped (1)
>>> [node18:11170] Failing at address: 0x1655035c8
>>> [node18:11170] [ 0] /lib64/libpthread.so.0 [0x2b402c690a90]
>>> [node18:11170] [ 1]
>>> ./gridtest(_ZN4Dune6UGGridILi3EE12globalRefineEi+0xdb)
>>> [0x60c78b]
>>> [node18:11170] [ 2] ./gridtest(main+0x233) [0x597213]
>>> [node18:11170] [ 3] /lib64/libc.so.6(__libc_start_main+0xe6)
>>> [0x2b402c8bd586]
>>> [node18:11170] [ 4] ./gridtest [0x596b79]
>>> [node18:11170] *** End of error message ***
>>> [node18:11171] [ 0] /lib64/libpthread.so.0 [0x2ac5e672ca90]
>>> [node18:11171] [ 1]
>>> ./gridtest(_ZN4Dune6UGGridILi3EE12globalRefineEi+0xdb)
>>> [0x60c78b]
>>> [node18:11171] [ 2] ./gridtest(main+0x233) [0x597213]
>>> [node18:11171] [ 3] /lib64/libc.so.6(__libc_start_main+0xe6)
>>> [0x2ac5e6959586]
>>> [node18:11171] [ 4] ./gridtest [0x596b79]
>>> [node18:11171] *** End of error message ***
>>> --------------------------------------------------------------------------
>>> mpirun noticed that process rank 2 with PID 11171 on node node18 exited
>>> on
>>> signal 11 (Segmentation fault).
>>> --------------------------------------------------------------------------
>>>
>>>
>>> _______________________________________________
>>> Dune mailing list
>>> Dune at dune-project.org
>>> http://lists.dune-project.org/mailman/listinfo/dune
>>>
>>
>> --
>> ************************************************************************
>> * Oliver Sander                ** email: sander at mi.fu-berlin.de        *
>> * Freie Universität Berlin     ** phone: + 49 (30) 838 75348           *
>> * Institut für Mathematik      ** URL  : page.mi.fu-berlin.de/~sander  *
>> * Arnimallee 6                 ** -------------------------------------*
>> * 14195 Berlin, Germany        ** Member of MATHEON (www.matheon.de)   *
>> ************************************************************************
>>


-- 
************************************************************************
* Oliver Sander                ** email: sander at mi.fu-berlin.de        *
* Freie Universität Berlin     ** phone: + 49 (30) 838 75348           *
* Institut für Mathematik      ** URL  : page.mi.fu-berlin.de/~sander  *
* Arnimallee 6                 ** -------------------------------------*
* 14195 Berlin, Germany        ** Member of MATHEON (www.matheon.de)   *
************************************************************************




