[dune-pdelab] Problems with MPI

Dodwell, Timothy T.Dodwell at exeter.ac.uk
Thu Jan 14 00:29:35 CET 2016


Thanks, yup just found it. Works a treat! Thanks for your help. Tim

> On 13 Jan 2016, at 23:10, Christian Engwer <christian.engwer at uni-muenster.de> wrote:
> 
> I don't know which mesh you are using. It depends on the particular
> implementation. If you are using YaspGrid, you can just pass the
> communicator to the constructor.
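
As a concrete illustration of that suggestion, here is a minimal sketch (not from the original thread) of handing a sub-communicator to a structured grid. It assumes the DUNE 2.4-era equidistant YaspGrid constructor, which takes the communicator as a trailing argument; the exact signature and communicator wrapper type may differ between DUNE versions, and makeLocalGrid is a hypothetical helper name.

    #include <array>
    #include <bitset>
    #include <memory>
    #include <mpi.h>
    #include <dune/common/fvector.hh>
    #include <dune/grid/yaspgrid.hh>

    // Build a small 2D structured grid bound to 'comm' rather than MPI_COMM_WORLD.
    std::shared_ptr<Dune::YaspGrid<2>> makeLocalGrid(MPI_Comm comm)
    {
      const int dim = 2;
      Dune::FieldVector<double, dim> upperRight(1.0);   // unit square
      std::array<int, dim> cells = {{8, 8}};            // 8 x 8 = 64 elements on level 0
      std::bitset<dim> periodic(0);                     // no periodic directions
      const int overlap = 1;

      return std::make_shared<Dune::YaspGrid<dim>>(
          upperRight, cells, periodic, overlap,
          Dune::CollectiveCommunication<MPI_Comm>(comm));
    }

Each group of ranks can then construct its own grid on its own communicator, so gv.size(0,0) on a single-rank communicator would again report all 64 level-0 elements.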
> 
> On Wed, Jan 13, 2016 at 11:00:09PM +0000, Dodwell, Timothy wrote:
>> Hi Christian,
>> 
>> Thanks for your reply! Good guess...
>> 
>> If I print out gv.size(0,0), the number of elements on level 0, codim 0, then on one processor it is 64 and on two processors it is 32. Therefore, with two processors my finite element calculation only sees half the mesh on each rank, and the boundary conditions mean I get rubbish!
>> 
>> Is there an easy way of forcing the grids to build across only an MPI_Comm (with a defined MPI_Group) rather than MPI_COMM_WORLD? I ask because later I would like to do the coarse solves on just one processor using a direct solver, and the fine-grid solves across a group of processors in parallel.
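
For reference, a plain-MPI sketch (not from the thread) of building such a sub-communicator from an explicit group of ranks; the helper name makeCoarseComm and the choice of rank 0 are illustrative.

    #include <mpi.h>

    // Create a communicator containing only world rank 0, e.g. for doing the
    // coarse solves with a sequential direct solver.  All other ranks receive
    // MPI_COMM_NULL.
    MPI_Comm makeCoarseComm()
    {
      MPI_Group worldGroup, coarseGroup;
      MPI_Comm coarseComm;
      int coarseRanks[] = { 0 };              // ranks to include in the group

      MPI_Comm_group(MPI_COMM_WORLD, &worldGroup);
      MPI_Group_incl(worldGroup, 1, coarseRanks, &coarseGroup);
      MPI_Comm_create(MPI_COMM_WORLD, coarseGroup, &coarseComm);

      MPI_Group_free(&coarseGroup);
      MPI_Group_free(&worldGroup);
      return coarseComm;   // pass this to the grid instead of MPI_COMM_WORLD
    }

MPI_Comm_split is the usual alternative when all ranks should be divided into several fine-grid groups at once.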
>> 
>> Thanks Again
>> 
>> Tim
>> 
>> Dr Tim Dodwell
>> Lecturer in Engineering Mathematics
>> Rm 221 - Harrison Building
>> College of Engineering, Mathematics & Physical Sciences
>> University of Exeter
>> Exeter
>> Devon
>> EX4 4PY
>> 
>> mail: t.dodwell(at)exeter.ac.uk
>> tel: +44 (0)1392 725899
>> mob: +44 (0)7745 622870
>> web: http://emps.exeter.ac.uk/engineering/staff/td336
>> Papers and Pre-prints: @Research-Gate
>> Citations: @Google-Scholar
>> 
>> This email and any attachment may contain information that is confidential, privileged, or subject to copyright, and which may be exempt from disclosure under applicable legislation. It is intended for the addressee only. If you received this message in error, please let me know and delete the email and any attachments immediately. The University will not accept responsibility for the accuracy/completeness of this email and its attachments.
>> 
>> ________________________________________
>> From: Christian Engwer <christian.engwer at uni-muenster.de>
>> Sent: 13 January 2016 17:15
>> To: Dodwell, Timothy
>> Cc: dune-pdelab mailing list
>> Subject: Re: [dune-pdelab] Problems with MPI
>> 
>> Hi Timothy,
>> 
>> just a guess...
>> 
>> As you are now using MPI, DUNE will work in parallel mode. This
>> means in particular that your meshes are now by default bound to the
>> MPI_COMM_WORLD communicator. As you want to run many independent
>> samples in parallel, you have to make sure to choose the correct
>> communicator.
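
One sketch of that advice for the fully independent per-sample case (not part of the original mail): give every rank a communicator containing only itself, so each sample's grid and solver behave as in the sequential run. The helper name makePerRankComm is illustrative.

    #include <mpi.h>

    // A single-rank communicator for the calling process; a grid constructed
    // on it sees only this rank, as in a sequential run.
    MPI_Comm makePerRankComm()
    {
      int rank = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      MPI_Comm selfComm;
      MPI_Comm_split(MPI_COMM_WORLD, /*color=*/rank, /*key=*/0, &selfComm);
      return selfComm;
    }

(MPI_COMM_SELF gives the same single-process view directly if no further grouping is needed.)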
>> 
>> Ciao
>> Christian
>> 
>> On Wed, Jan 13, 2016 at 04:54:19PM +0000, Dodwell, Timothy wrote:
>>> Dear All,
>>> 
>>> 
>>> I am running the latest release of PDELab on Mac OS X 10.10.5. I am implementing a multilevel Monte Carlo algorithm which is wrapped around a general model (in this case linear elasticity, which uses SEQ_CG_AMG_SSOR as the solver).
>>> 
>>> 
>>> Everything works well and is tested in serial. I have now tried to simply parallelise a for loop which computes a number of independent samples (see below). getSample is a function which returns a quantity of interest from my model, and nproc is the number of processors.
>>> 
>>> 
>>> When I do this, all processors with ranks from 0 to (nproc - 2) return values of NaN. If I run the code on one processor, everything works fine.
>>> 
>>> Thanks in advance for your help!
>>> 
>>> Tim
>>> 
>>> 
>>> int numSamples = N[L] / nproc + 1; // Compute number of samples on each processor (rounding up)
>>> double * Ytmp = new double[numSamples];
>>> double * Yroot = NULL;
>>> if (rank == 0){ Yroot = new double[numSamples * nproc]; }
>>> for (int i = 0; i < numSamples; i++){ Ytmp[i] = getSample(L, z); }
>>> MPI_Gather(Ytmp, numSamples, MPI_DOUBLE, Yroot, numSamples, MPI_DOUBLE, 0, MPI_COMM_WORLD);
>>> 
>>> 
>>> 
>> 
>>> _______________________________________________
>>> dune-pdelab mailing list
>>> dune-pdelab at dune-project.org
>>> http://lists.dune-project.org/mailman/listinfo/dune-pdelab
>> 
>> 
> 
> -- 
> Prof. Dr. Christian Engwer 
> Institut für Numerische und Angewandte Mathematik
> Fachbereich Mathematik und Informatik der Universität Münster
> Einsteinstrasse 62
> 48149 Münster
> 
> E-Mail  christian.engwer at uni-muenster.de
> Telefon +49 251 83-35067
> FAX     +49 251 83-32729


