[Dune] Should we use #ifdef HAVE_MPI in parallel GMSH Reader
Aleksejs Fomins
aleksejs.fomins at lspr.ch
Fri Nov 28 09:45:55 CET 2014
Dear Christian,
There are a few places where I need to use MPI_Alltoall. Also, when
calling ParMETIS I need to pass an MPI_Comm as an argument, so I will
need a few #if statements, but I will keep them to a minimum.
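
A minimal sketch of how the guarded part could be collected in one
place, so that the rest of the reader stays free of #if
(exchangeElementCounts is only a placeholder name, not existing reader
code):

    #include "config.h"
    #include <vector>
    #include <dune/common/parallel/mpihelper.hh>

    // Hypothetical helper: exchange per-rank element counts before
    // repartitioning, keeping all MPI-specific code behind one guard.
    std::vector<int> exchangeElementCounts(const std::vector<int>& sendCounts)
    {
      std::vector<int> recvCounts(sendCounts.size(), 0);
    #if HAVE_MPI
      MPI_Comm comm = Dune::MPIHelper::getCommunicator();
      MPI_Alltoall(sendCounts.data(), 1, MPI_INT,
                   recvCounts.data(), 1, MPI_INT, comm);
      // The same 'comm' would also be handed to ParMETIS, which expects
      // a pointer to an MPI_Comm as its last argument.
    #else
      recvCounts = sendCounts;   // sequential build: nothing to exchange
    #endif
      return recvCounts;
    }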
Cheers,
Aleksejs
On 11/28/2014 09:31 AM, Christian Engwer wrote:
> Hi Markus, hi Aleksejs,
>
>> Do not use #ifdef HAVE_MPI, but #if HAVE_MPI. We are using a trick to
>> enable MPI only if the compilation flags include our MPI flags. To
>> accomplish this, HAVE_MPI will always be defined if your system
>> provides MPI, even if DUNE does not activate it.
>
> while this is true as general advice, I suggest not checking
> HAVE_MPI at all. Aleksejs said he uses only the DUNE MPI
> infrastructure, so it should be sufficient to use it always. In any
> case you should try to avoid '#if' statements where possible.
>
> Christian
>
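
For reference, a small test program illustrating the trick Markus
describes; it assumes the config.h generated by a DUNE build, where
HAVE_MPI is (roughly) defined to expand to ENABLE_MPI:

    // Sketch of the mechanism (paraphrasing the generated config.h):
    //   #define HAVE_MPI ENABLE_MPI
    // HAVE_MPI is therefore always *defined* on a system that provides
    // MPI, but it only expands to 1 when the DUNE flag -DENABLE_MPI=1
    // is on the compile line.
    #include <iostream>
    #include "config.h"

    int main()
    {
    #ifdef HAVE_MPI
      std::cout << "#ifdef branch: taken on any system with MPI installed\n";
    #endif
    #if HAVE_MPI
      std::cout << "#if branch: taken only when DUNE activated MPI\n";
    #endif
      return 0;
    }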
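
And a sketch of the route Christian suggests: going through the DUNE
MPI infrastructure (Dune::MPIHelper and its collective communication)
compiles with and without MPI, so no preprocessor guard is needed at
all; the element count below is just a placeholder value:

    #include "config.h"
    #include <iostream>
    #include <dune/common/parallel/mpihelper.hh>

    int main(int argc, char** argv)
    {
      // Works with and without MPI; in a sequential build this is a no-op.
      Dune::MPIHelper& helper = Dune::MPIHelper::instance(argc, argv);
      auto comm = helper.getCollectiveCommunication();

      int localElements = 42;                       // placeholder value
      int globalElements = comm.sum(localElements); // Allreduce or no-op

      if (comm.rank() == 0)
        std::cout << "total elements: " << globalElements << "\n";
      return 0;
    }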