[Dune] Assertion error when running Dune grid in parallel

Dedner, Andreas A.S.Dedner at warwick.ac.uk
Fri Jul 5 17:33:11 CEST 2019


Hi.
I am traveling at the moment, but hopefully I can have a look over the weekend.
Just a remark:
In its original version, the DGF factory reads everything onto process zero, including the data, and everything is then distributed when loadBalance is called on the factory. So sizes of zero could be right. Perhaps the assert is wrong...
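
Roughly, what I mean is the following (a minimal, untested sketch written from memory; "mesh.dgf" and the grid type are just placeholders):

#include <dune/common/parallel/mpihelper.hh>
#include <dune/grid/common/rangegenerators.hh>
#include <dune/alugrid/grid.hh>
#include <dune/alugrid/dgf.hh>

int main( int argc, char **argv )
{
  Dune::MPIHelper::instance( argc, argv );

  using Grid = Dune::ALUGrid< 3, 3, Dune::simplex, Dune::conforming >;

  // Everything in the DGF file, including the element parameters,
  // is read on process zero only ...
  Dune::GridPtr< Grid > gridPtr( "mesh.dgf" );

  // ... and is only distributed to the other processes once
  // loadBalance() is called on the GridPtr itself (not on the grid).
  gridPtr.loadBalance();

  const auto gridView = gridPtr->leafGridView();
  for( const auto &element : Dune::elements( gridView ) )
  {
    // Only after the loadBalance() above is it safe to query
    // the per-element parameters on every rank.
    const auto &param = gridPtr.parameters( element );
    (void)param;
  }
  return 0;
}

So if loadBalance() has not yet been called on the factory, sizes of zero on the ranks other than zero would be expected.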

More later
Andreas (one of the developers of dune-peits)

Sent from my Huawei Mobile

-------- Original Message --------
Subject: Re: [Dune] Assertion error when running Dune grid in parallel
From: "Guichard, Roland"
To: "Guichard, Roland"
CC: dune at lists.dune-project.org

@Markus,

I passed a communicator I got from the Dune::Fem::MPIManager but the error persists:

dune-peits: /home/vagrant/dune-grid-2.6.0/dune/grid/io/file/dgfparser/gridptr.hh:291: const std::vector<double>& Dune::GridPtr<GridType>::parameters(const Entity&) const [with Entity = Dune::Entity<0, 3, const Dune::ALU3dGrid<3, 3, (Dune::ALU3dGridElementType)4, Dune::ALUGridMPIComm>, Dune::ALU3dGridEntity>; GridType = Dune::ALUGrid<3, 3, (Dune::ALUGridElementType)0, (Dune::ALUGridRefinementType)0>]: Assertion `(unsigned int)gridView.indexSet().index( entity ) < elParam_.size()' failed.

When I print out the indexSet.size( 0 ) from which elParam_.size() is initialised, I get:

indexSet.size( 0 ) before the elParam_ initialisation: 0
After the DFGfactory call, after initialize, elParam_ size is: 0

This is for MPI rank 1. Any thoughts, maybe?
That’d be really appreciated.
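
For context, the check boils down to something like this (a rough sketch only, not the actual dune-peits code; printElementCount is just a name made up for this mail):

#include <iostream>
#include <mpi.h>

// Hypothetical helper mirroring the diagnostic above: print, per rank,
// how many codim-0 entities the index set of the leaf grid view reports.
template< class GP >
void printElementCount( const GP &gridPtr )
{
  int rank = 0;
  MPI_Comm_rank( MPI_COMM_WORLD, &rank );
  const auto gridView = gridPtr->leafGridView();
  std::cout << "rank " << rank
            << ": indexSet().size( 0 ) = "
            << gridView.indexSet().size( 0 ) << std::endl;
}

On rank 1 this reports 0, which is where the elParam_ size of 0 comes from.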

Best wishes,

Dr. Roland Guichard
Research Software Engineer
UCL-RITS
Internal Extension: 86947
External Number: 02031086947







On 5 Jul 2019, at 15:06, Guichard, Roland <r.guichard at ucl.ac.uk<mailto:r.guichard at ucl.ac.uk>> wrote:

Thank you for the clarification. Some comments below:

On 5 Jul 2019, at 14:38, Jö Fahlke <jorrit.fahlke at wwu.de<mailto:jorrit.fahlke at wwu.de>> wrote:

On Fri, 5 Jul 2019, 12:28:51 +0000, Guichard, Roland wrote:
I see, thanks.
A quick question though: does this mean that there are several possible MPI initializations within Dune, and that the communicator that needs to be passed to the GridPtr (from dune-fem) is responsible for handling the mesh in parallel?

MPI_Init is only ever called once.


Yes, but potentially in multiple ways. What I mean is that, for instance, Dune::Fem::MPIManager::initialize( argc, argv ) implicitly calls MPIHelper::instance( argc, argv ). I understand that the managers are wrappers around MPIHelper. However, what I don’t quite get so far is that, in the code, the MPI rank and size are retrieved using direct MPI calls:

MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
MPI_Comm_size(MPI_COMM_WORLD, &numProcs);

But since the managers provide that functionality, I don’t see the point in doing this. Ideally, and for readability, you would access everything you need through a single manager. And this also applies to the communicator.
But that is merely my opinion.
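
To make the point concrete, this is roughly what I would have expected instead of the raw MPI calls (a small sketch; whether one goes through MPIHelper or through the dune-fem MPIManager is then a matter of taste):

#include <iostream>
#include <dune/common/parallel/mpihelper.hh>

int main( int argc, char **argv )
{
  // Single initialization point; MPIHelper calls MPI_Init at most once.
  const auto &mpiHelper = Dune::MPIHelper::instance( argc, argv );

  // Rank and size taken from the helper instead of MPI_Comm_rank/MPI_Comm_size
  // (the dune-fem MPIManager should provide the same information).
  const int myRank   = mpiHelper.rank();
  const int numProcs = mpiHelper.size();

  std::cout << "rank " << myRank << " of " << numProcs << std::endl;
  return 0;
}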


Though it is up to the grid manager implementation which MPI communicators are
supported.  Some managers may implicitly assume MPI_COMM_WORLD, others may not
support MPI at all (which is implicitly equivalent to MPI_COMM_SELF).

Hm, do I need to understand managers in a larger scope than just MPI, then?


For yet others it may be possible to specify the communicator when
constructing the grid.  So if you obtained that communicator by splitting
MPI_COMM_WORLD into two subsets of nodes, you will end up with two independent
grids, each operating within one of the subsets.

OK got that one.
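
So, just to check my understanding, the splitting would look roughly like this (a sketch only; I am assuming here that the communicator can be handed to the GridPtr constructor, and the grid type and file name are placeholders):

#include <mpi.h>
#include <dune/common/parallel/mpihelper.hh>
#include <dune/alugrid/grid.hh>
#include <dune/alugrid/dgf.hh>

int main( int argc, char **argv )
{
  const auto &mpiHelper = Dune::MPIHelper::instance( argc, argv );

  // Split MPI_COMM_WORLD into two subsets of ranks.
  const int color = mpiHelper.rank() < mpiHelper.size() / 2 ? 0 : 1;
  MPI_Comm subComm;
  MPI_Comm_split( MPI_COMM_WORLD, color, mpiHelper.rank(), &subComm );

  {
    // Each subset builds its own, independent grid on its sub-communicator.
    using Grid = Dune::ALUGrid< 3, 3, Dune::simplex, Dune::conforming >;
    Dune::GridPtr< Grid > gridPtr( "mesh.dgf", subComm );
    gridPtr.loadBalance();
  }

  MPI_Comm_free( &subComm );
  return 0;
}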


Regards,
Jö.

--
Jorrit (Jö) Fahlke, Institute for Computational and Applied Mathematics,
University of Münster, Orleans-Ring 10, D-48149 Münster
Tel: +49 251 83 35146 Fax: +49 251 83 32729

This message is protected by DoubleROT13 encryption
Attempting to decode it violates the DMCA/WIPO acts

