[Dune] How to use parallel UGGrid?
Oliver Sander
sander at mi.fu-berlin.de
Fri Jun 18 11:32:50 CEST 2010
Hi all!
That's not quite right. You set up the grid on only one processor, but
all the others need to know the grid boundary. Therefore you have to make
the same GridFactory calls on the rank 0 process and on all the others;
the UGGrid factory then handles it correctly internally. Afterwards you
call loadBalance manually.
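
Roughly, the sequence looks like the following sketch. It is untested and
assumes a Dune tree configured with --enable-parallel; the header paths and
the exact GmshReader/UGGrid interfaces may differ slightly between Dune
versions, and the heap size and file name are simply taken over from
Wolfgang's example.

#include <iostream>

#include <dune/common/mpihelper.hh>   // <dune/common/parallel/mpihelper.hh> in newer trees
#include <dune/grid/uggrid.hh>
#include <dune/grid/io/file/gmshreader.hh>

int main(int argc, char** argv)
{
    // Initialise MPI before any grid is created
    Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv);
    std::cout << "This is rank " << helper.rank() << std::endl;

    typedef Dune::UGGrid<3> GridType;
    GridType grid(400);

    // Every rank makes the identical GmshReader call; internally the
    // UGGrid factory builds the full grid on rank 0 and empty grids
    // on the remaining ranks.
    Dune::GmshReader<GridType> gmshreader;
    gmshreader.read(grid, "cube.msh");

    // Distribute the grid among all processes
    grid.loadBalance();

    // Refinement, grid views and VTK output can follow here
    grid.globalRefine(1);

    return 0;
}

The important point is that all ranks execute the same reading code; only
after loadBalance do the other ranks actually own elements.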
In other words, Wolfgang's code looks okay to me. I wonder
whether we are seeing FS 776 in disguise.
Wolfgang: can you provide a backtrace?
--
Oliver
On 2010-06-18 11:27, Christian Engwer wrote:
> Hi Wolfgang,
>
> UGGrid assumes that you set up the grid on one processor and then
> call load-balance to distribute it. At the moment there is no way to
> redistribute the mesh afterwards.
>
> Please try to read the grid _only_ on rank==0.
>
> If this still doesn't help, please send the new error messages.
>
> Christian
>
> On Fri, Jun 18, 2010 at 11:02:21AM +0200, giese at mathematik.hu-berlin.de wrote:
>
>> Dear Dune-Developers,
>>
>> I have a question regarding the use of the parallel UGGrid in combination
>> with the GmshReader. I tried a simple example in DUNE, where the programme
>> just reads a grid using the GmshReader and writes the distributed grid
>> using the VTKWriter. The crucial part of the code looks as follows:
>>
>> .
>> .
>> .
>>
>> typedef Dune::UGGrid<3> GridType;
>> GridType grid(400);
>>
>> // read a gmsh file
>> Dune::GmshReader<GridType> gmshreader;
>> gmshreader.read(grid, "cube.msh");
>>
>> // refine grid
>> grid.globalRefine(level);
>>
>> if(!Dune::MPIHelper::isFake)
>> grid.loadBalance();
>>
>> // get a grid view
>> typedef GridType::LeafGridView GV;
>> const GV& gv = grid.leafView();
>>
>> // plot celldata
>> std::vector<int> a(gv.size(0), 1);
>>
>> // output
>> Dune::VTKWriter<GV> vtkwriter(gv);
>> vtkwriter.addCellData(a, "celldata");
>> vtkwriter.write("TestGrid", Dune::VTKOptions::ascii);
>>
>> .
>> .
>> .
>>
>> This code produces the error message that can be seen below. It seems
>> that UGGrid starts in parallel; it is actually configured with
>> "--enable-parallel". But somehow something goes wrong in the end. What
>> do I have to change? Do I have to add some code? Perhaps you have a simple
>> example that works in parallel? I would be very grateful if you could help
>> me!
>>
>> Best regards,
>> Wolfgang Giese
>>
>> ---Error Message---
>> The programme "gridtest", started on three nodes, produces the
>> following error message:
>>
>> parallel run on 3 process(es)
>> DimX=3, DimY=1, DimZ=1
>> Reading 3d Gmsh grid...
>> Reading 3d Gmsh grid...
>> Reading 3d Gmsh grid...
>> version 2 Gmsh file detected
>> file contains 14 nodes
>> number of real vertices = 14
>> number of boundary elements = 24
>> number of elements = 24
>> number of real vertices = 14
>> number of boundary elements = 24
>> number of elements = 24
>> file contains 48 elements
>> number of real vertices = 14
>> number of boundary elements = 24
>> number of elements = 24
>> [node18:11171] *** Process received signal ***
>> [node18:11171] Signal: Segmentation fault (11)
>> [node18:11171] Signal code: Address not mapped (1)
>> [node18:11171] Failing at address: 0x7359e3588
>> [node18:11170] *** Process received signal ***
>> [node18:11170] Signal: Segmentation fault (11)
>> [node18:11170] Signal code: Address not mapped (1)
>> [node18:11170] Failing at address: 0x1655035c8
>> [node18:11170] [ 0] /lib64/libpthread.so.0 [0x2b402c690a90]
>> [node18:11170] [ 1] ./gridtest(_ZN4Dune6UGGridILi3EE12globalRefineEi+0xdb)
>> [0x60c78b]
>> [node18:11170] [ 2] ./gridtest(main+0x233) [0x597213]
>> [node18:11170] [ 3] /lib64/libc.so.6(__libc_start_main+0xe6) [0x2b402c8bd586]
>> [node18:11170] [ 4] ./gridtest [0x596b79]
>> [node18:11170] *** End of error message ***
>> [node18:11171] [ 0] /lib64/libpthread.so.0 [0x2ac5e672ca90]
>> [node18:11171] [ 1] ./gridtest(_ZN4Dune6UGGridILi3EE12globalRefineEi+0xdb)
>> [0x60c78b]
>> [node18:11171] [ 2] ./gridtest(main+0x233) [0x597213]
>> [node18:11171] [ 3] /lib64/libc.so.6(__libc_start_main+0xe6) [0x2ac5e6959586]
>> [node18:11171] [ 4] ./gridtest [0x596b79]
>> [node18:11171] *** End of error message ***
>> --------------------------------------------------------------------------
>> mpirun noticed that process rank 2 with PID 11171 on node node18 exited on
>> signal 11 (Segmentation fault).
>> --------------------------------------------------------------------------
>>
>>
--
************************************************************************
* Oliver Sander ** email: sander at mi.fu-berlin.de *
* Freie Universität Berlin ** phone: + 49 (30) 838 75348 *
* Institut für Mathematik ** URL : page.mi.fu-berlin.de/~sander *
* Arnimallee 6 ** -------------------------------------*
* 14195 Berlin, Germany ** Member of MATHEON (www.matheon.de) *
************************************************************************