[Dune] parallel alugrid

Ganesh Diwan gcdiwan83 at gmail.com
Wed Sep 9 15:07:47 CEST 2015


Hi Andreas

Here is the code:

// Maybe initialize MPI
        Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv);

        typedef Dune::ALUGrid<2, 2, Dune::cube, Dune::nonconforming> GridType;

//        typedef Dune::UGGrid<2> GridType;
//        std::auto_ptr<HGridType> grid( GmshReader<HGridType>::read( "./inp/mymesh.msh", true, true ) );

        // read the Gmsh file into a grid factory and create the grid
        Dune::GridFactory<GridType> factory;
        Dune::GmshReader<GridType>::read(factory, "./inp/mymesh.msh", true, true);
        GridType *grid = factory.createGrid();

        // typedef for the leaf grid view and the actual view object
        typedef GridType::LeafGridView GV;
        const GV gv = grid->leafGridView();

        // now loadbalance
        grid->loadBalance();

        // count the interior elements owned by this process
        int numElemsOnThisProcess = 0;
        for (auto eIt = gv.begin<0, Dune::Interior_Partition>(); eIt != gv.end<0, Dune::Interior_Partition>(); ++eIt)
        {
            numElemsOnThisProcess++;
        }
        std::cout << "you have " << numElemsOnThisProcess << " elements on process  " << helper.rank() << std::endl;

        // write the rank of each cell to a VTK file to visualise the partitioning
        const std::string baseOutName = "Grid_";
        Dune::VTKWriter<GV> vtkWriter(gv);
        std::vector<int> rankField(gv.size(0));
        std::fill(rankField.begin(), rankField.end(), grid->comm().rank());
        vtkWriter.addCellData(rankField, "rank");
        vtkWriter.write(baseOutName + std::to_string(0));

        return 0;

The output from my process 0 xterm window is:

(gdb) run
Starting program: /usr/local/home/gcd3/codes/vem/dune-part/src/dune-part
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Reading 2d Gmsh grid...
version 2.2 Gmsh file detected
file contains 81 nodes
file contains 64 elements
number of real vertices = 81
number of boundary elements = 0
number of elements = 64

Created serial ALUGrid<2,2,cube,nonconforming>.

you have 64 elements on process  0
[Inferior 1 (process 31987) exited normally]

and this is the output from the process 1 window:
(gdb) run
Starting program: /usr/local/home/gcd3/codes/vem/dune-part/src/dune-part
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Reading 2d Gmsh grid...
version 2.2 Gmsh file detected
file contains 81 nodes
file contains 64 elements
number of real vertices = 81
number of boundary elements = 0
number of elements = 64

Created serial ALUGrid<2,2,cube,nonconforming>.

you have 64 elements on process  1
[Inferior 1 (process 31958) exited normally]

Since both processes show 64 elements (the same as the original grid), I
suspect I am doing something wrong.
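
In case it is useful, here is a minimal sketch of what I was planning to try
next. This is only a guess on my part: it assumes the Gmsh file should be read
into the factory on rank 0 only, with loadBalance() then distributing the
elements, and it prints helper.size() as a sanity check, since both windows
report "Created serial ALUGrid" and I wonder whether MPI is active in my runs
at all:

        // sketch (untested): fill the factory on rank 0 only, then distribute
        Dune::GridFactory<GridType> factory;
        if (helper.rank() == 0)
            Dune::GmshReader<GridType>::read(factory, "./inp/mymesh.msh", true, true);
        GridType *grid = factory.createGrid();

        // with "mpirun -np 2" this should print "of 2 processes" on both ranks
        std::cout << "rank " << helper.rank() << " of " << helper.size()
                  << " processes" << std::endl;

        grid->loadBalance();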

Thank you
Ganesh


On Wed, Sep 9, 2015 at 1:17 PM, Andreas Dedner <a.s.dedner at warwick.ac.uk>
wrote:

> Hi.
> How do you read in your grid? Please post the code you are using to do
> that.
> Andreas
>
>
> On 09/09/15 13:07, Ganesh Diwan wrote:
> > Hi Dune list
> >
> > I am not able to distribute the grid with loadBalance when I use ALUGrid
> > as the grid type. All processes contain the same number of elements as
> > the original grid, so I think ALUGrid is not correctly configured for
> > parallel mode. UGGrid, however, seems to work: loadBalance splits the
> > grid and I can see the rank partitions in the VTK output. I configured
> > with the following contents in my opts file:
> >
> > CONFIGURE_FLAGS="CXX=g++-4.8 \
> > --enable-parallel \
> > --enable-experimental-grid-extensions \
> > --with-metis=/usr \
> > --with-metis-lib=metis \
> > --with-parmetis=/usr \
> > --with-parmetis-lib=parmetis \
> > --with-alberta='/home/gcd3/dune/ext_build/alberta' \
> > --with-zoltan='/home/gcd3/dune/ext_build/Zoltan-v3.82' \
> > --with-ug='/home/gcd3/dune/ext_build/ug' \
> > --prefix='/home/gcd3/dune/core-2.4.0/install/' \
> > "
> >
> > Do I need to include any other flags, perhaps adding CXXFLAGS=mpicc? I
> > was under the impression that --enable-parallel would enable the parallel
> > grid support; is that correct?
> >
> > Thanks in advance for help,
> > Ganesh