[dune-pdelab] [Dune] Parallel ALUGrid and gmshreader
Steffen Müthing
steffen.muething at iwr.uni-heidelberg.de
Thu Jul 3 12:12:46 CEST 2014
Hi Lukas,
Am 03.07.2014 um 11:38 schrieb Lukas Riedel <riedel-lukas at gmx.de>:
> Dear Developers,
>
> I built a working code in PDELab for parallel computation using YaspGrid and OpenMPI.
great to hear! :-)
> Now I want to use the GmshReader and ALUGrid (simplex, 2D) for the same code, but I cannot find out how to initialize ALUGrid in parallel mode.
>
> OS: Mac OS X 10.9.3 Mavericks
> GCC: gcc (MacPorts gcc49 4.9-20140416_2) 4.9.0 20140416 (prerelease)
> G++: g++ (MacPorts gcc49 4.9-20140416_2) 4.9.0 20140416 (prerelease)
> clang: clang version 3.5.0 (trunk 210448)
> Target: x86_64-apple-darwin13.2.0
> Thread model: posix
> CC: Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)
> Target: x86_64-apple-darwin13.2.0
> Thread model: posix
> DUNE-modules: dune-common.............: version 2.3.0
> dune-geometry...........: version 2.3.0
> dune-grid...............: version 2.3.0
> dune-istl...............: version 2.3.0
> dune-localfunctions.....: version 2.3.0
> dune-pdelab.............: version 2.0-dev
> dune-typetree...........: version 2.3-dev
> ALBERTA.................: version 2.0
> ALUGrid.................: version 1.52 (parallel)
>
>
> With YaspGrid, I wrote a "YaspPartition" class and passed the MPI communicator:
>
> int overlap=1;
> const YaspPartition<2> yp;
> Dune::YaspGrid<2> grid(helper.getCommunicator(),L,N,periodic,overlap,&yp);
>
> Now, I use the gmshreader in the following way:
>
> typedef Dune::ALUGrid<2,2,Dune::simplex,Dune::conforming> GridType;
> Dune::shared_ptr<GridType> gridp(Dune::GmshReader<GridType>::read(meshfilename));
>
> The check whether the created ALUGrid Type is parallel ( Dune::Capabilities::isParallel<GridType>::v ) returns false.
>
> The output when executing $mpirun -n 2 <myProgram> reveals that two processes are executed, but both seem to calculate the whole problem on their own:
> (every line doubled, of course)
>
> parallel run on 2 process(es)
>
> Reading 2d Gmsh grid...
> version 2.2 Gmsh file detected
> file contains 378 nodes
> file contains 754 elements
> number of real vertices = 378
> number of boundary elements = 68
> number of elements = 686
> Created serial ALUGrid<2,2,simplex,conforming>
>
> rank 0 number of DOF = 378
> number of DOF 378
>
> I built ALUGrid using METIS, parMETIS and OpenMPI
>
> ./configure CC=gcc CXX=g++ F77=gfortran --prefix=$HOME/opt/alugrid --with-metis=/usr/local --with-parmetis=/usr/local
> CPPFLAGS="$CPPFLAGS `../dune-common*/bin/mpi-config --cflags --disable-cxx --mpicc=mpicc`" LDFLAGS="$LDFLAGS `../dune-common*/bin/mpi-config --libs --disable-cxx
> --mpicc=mpicc`" CXXFLAGS="-O3 -DNDEBUG" CFLAGS="-O3 -DNDEBUG"
>
> and DUNE successfully checks the serial and parallel usability of ALUGrid:
>
> configure: searching for ALUGrid in /Users/lriedel/opt/alugrid...
> checking ALUGrid version >= 1.52... yes (ALUGrid-1.52)
> checking alugrid_serial.h usability... yes
> checking alugrid_serial.h presence... yes
> checking for alugrid_serial.h... yes
> checking alugrid_parallel.h usability... yes
> checking alugrid_parallel.h presence... yes
> checking for alugrid_parallel.h... yes
>
> How can I initialize a parallel ALUGrid using the GmshReader? How can I access/change the load balancing of the grid?
Unfortunately, this cannot work - ALUGrid is currently only parallel in 3D (yes, that's hard to glean from the documentation...),
cf. http://users.dune-project.org/projects/main-wiki/wiki/Grid-Manager_Features. AFAIK, this should change in the near future,
but until then you can use UGGrid instead. Get it from http://www.iwr.uni-heidelberg.de/frame/iwrwikiequipment/software/ug,
rebuild dune-grid and reconfigure your own module. While you're at it, it might also be a good idea to update to DUNE 2.3.1 -
that release fixed quite a number of ugly bugs...
If you need an example of how to use the GmshReader, take a look at dune-pdelab-howto, e.g. at src/convection-diffusion/ldomain.cc,
where a UGGrid is used in parallel mode. The important part is the call to loadBalance().
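Roughly, the relevant part looks like this (a sketch, not tested against your setup - the mesh file name is a placeholder):

```cpp
#include <dune/grid/uggrid.hh>
#include <dune/grid/io/file/gmshreader.hh>

typedef Dune::UGGrid<2> GridType;

// GmshReader builds the complete grid on rank 0; all other ranks start out empty.
Dune::shared_ptr<GridType> gridp(Dune::GmshReader<GridType>::read("mesh.msh"));

// loadBalance() partitions the grid and distributes the pieces to the other ranks.
gridp->loadBalance();
```

After loadBalance(), each rank only iterates over its own partition, so the "every line doubled" symptom you saw with the serial grid should go away.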
>
> Currently, i am using the following ISTL Solver Backend:
>
> typedef Dune::PDELab::ISTLBackend_BCGS_AMG_SSOR<IGO> LS;
> LS ls(gfs,5000,0,false,true);
>
> As ALUGrid has no overlap but only ghosts, do I need to use a NOVLP solver backend then?
> Are the ghosts assembled automatically?
Yes, you will have to switch to a nonoverlapping backend and also switch the GridOperator
to nonoverlapping mode (a boolean template parameter on the GridOperator that defaults to false).
When using AMG, you should *really* update to DUNE 2.3.1 and the 2.0.0 release of PDELab!
Finally, when using AMG in nonoverlapping mode, you have to use a GridView on the InteriorBorder partition
for your GridFunctionSpace:
typedef GridType::Partition<Dune::InteriorBorder_Partition>::LeafGridView GV;
GV gv = grid.leafGridView<Dune::InteriorBorder_Partition>();
and Dune::PDELab::NonOverlappingLeafOrderingTag as the ordering tag of your GridFunctionSpace.
These changes avoid putting the ghost DOFs into the DOF vector, which improves the surface-to-volume
ratio.
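Putting those pieces together, the type changes look roughly like this (a sketch against the PDELab 2.0 API; FEM, CON, VBE, MBE, LOP, CC, RF stand for whatever finite element map, constraints, backends, local operator, constraints container and field type your code already uses - check the exact template signatures against your version):

```cpp
// Ordering tag keeps ghost DOFs out of the DOF vector:
typedef Dune::PDELab::GridFunctionSpace<
  GV, FEM, CON, VBE,
  Dune::PDELab::NonOverlappingLeafOrderingTag
  > GFS;

// The last template parameter switches the GridOperator to
// nonoverlapping mode (it defaults to false):
typedef Dune::PDELab::GridOperator<
  GFS, GFS, LOP, MBE, RF, RF, RF, CC, CC,
  true
  > GO;

// Nonoverlapping counterpart of your AMG-preconditioned BiCGStab backend:
typedef Dune::PDELab::ISTLBackend_NOVLP_BCGS_AMG_SSOR<GO> LS;
```

Note that the NOVLP AMG backends take the GridOperator type as their template parameter, not just the IGO, so double-check the constructor arguments against the headers in dune/pdelab/backend/.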
I hope that’s enough info to get you started!
Best,
Steffen
>
> Thank you for your help and best regards,
> Lukas Riedel