<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 12pt; color: #000000'>Hi duners,<br>I'm building dune with ALUGrid on a workstation, but I'm having a problem running a dune project.<br>I built ALUGrid with<br>./configure CC=gcc CXX=g++ MPICC=/opt/openmpi/openmpi-1.6.4-gcc/bin/mpiCC --prefix=/opt/ALUGrid-1.52 --with-metis=/usr/local --with-parmetis=/usr/local CFLAG=-DNDEBUG CPPFLAGS=-DNDEBUG CXXFLAGS=-DNDEBUG CXXFLAGS=-O3 CFLAGS=-O3<br>I built dune-common, dune-geometry and dune-grid with<br># install to custom directory<br>CONFIGURE_FLAGS="CC=gcc CXX=g++ MPICC=/opt/openmpi/openmpi-1.6.4-gcc/bin/mpiCC --prefix=/opt/dune-2.3 --enable-parallel -enable-experimental-grid-extensions --disable-documentation --with-metis=/usr/local --with-parmetis=/usr/local --with-alugrid=/opt/ALUGrid-1.52 CFLAGS=\"-O3 -DNDEBUG\" CXXFLAGS=\"-O3 -DNDEBUG\" "<br># default target of make to install, then dune is not only built but also installed<br>#MAKE_FLAGS=install<br># the default versions of automake and autogen are not sufficient, therefore we specify which versions we use<br>#AUTOGEN_FLAGS="--ac=2.65 --am=1.11.1"<br><br>Everything goes fine: there are no problems either during configuration or while compiling any of the dune modules.<br>Then I run duneproject and dunecontrol to build a default project.<br>If I try to run the default project with /opt/openmpi/openmpi-1.6.4-gcc/bin/mpiexec -np 1 ./myproject I get this:<br>[sandrino:06918] [[45031,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file util/nidmap.c at line 398<br>[sandrino:06918] [[45031,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file base/ess_base_nidmap.c at line 62<br>[sandrino:06918] [[45031,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file ess_env_module.c at line 173<br>--------------------------------------------------------------------------<br>It looks like orte_init 
failed for some reason; your parallel process is<br>likely to abort. There are many reasons that a parallel process can<br>fail during orte_init; some of which are due to configuration or<br>environment problems. This failure appears to be an internal failure;<br>here's some additional information (which may only be relevant to an<br>Open MPI developer):<br><br> orte_ess_base_build_nidmap failed<br> --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS<br>--------------------------------------------------------------------------<br>[sandrino:06918] [[45031,1],0] ORTE_ERROR_LOG: Data unpack would read past end of buffer in file runtime/orte_init.c at line 132<br>--------------------------------------------------------------------------<br>It looks like orte_init failed for some reason; your parallel process is<br>likely to abort. There are many reasons that a parallel process can<br>fail during orte_init; some of which are due to configuration or<br>environment problems. This failure appears to be an internal failure;<br>here's some additional information (which may only be relevant to an<br>Open MPI developer):<br><br> orte_ess_set_name failed<br> --> Returned value Data unpack would read past end of buffer (-26) instead of ORTE_SUCCESS<br>--------------------------------------------------------------------------<br>*** The MPI_Init() function was called before MPI_INIT was invoked.<br>*** This is disallowed by the MPI standard.<br>*** Your MPI job will now abort.<br>--------------------------------------------------------------------------<br>It looks like MPI_INIT failed for some reason; your parallel process is<br>likely to abort. There are many reasons that a parallel process can<br>fail during MPI_INIT; some of which are due to configuration or environment<br>problems. 
This failure appears to be an internal failure; here's some<br>additional information (which may only be relevant to an Open MPI<br>developer):<br><br> ompi_mpi_init: orte_init failed<br> --> Returned "Data unpack would read past end of buffer" (-26) instead of "Success" (0)<br>--------------------------------------------------------------------------<br>[sandrino:6918] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!<br>--------------------------------------------------------------------------<br>mpiexec has exited due to process rank 0 with PID 6918 on<br>node sandrino exiting improperly. There are two reasons this could occur:<br><br>1. this process did not call "init" before exiting, but others in<br>the job did. This can cause a job to hang indefinitely while it waits<br>for all processes to call "init". By rule, if one process calls "init",<br>then ALL processes must call "init" prior to termination.<br><br>2. this process called "init", but exited without calling "finalize".<br>By rule, all processes that call "init" MUST call "finalize" prior to<br>exiting or it will be considered an "abnormal termination"<br><br>This may have caused other processes in the application to be<br>terminated by signals sent by mpiexec (as reported here).<br>--------------------------------------------------------------------------<br><br><br>Then I tried a simple MPI program that calls MPI_Init, MPI_Comm_rank, MPI_Comm_size and MPI_Finalize, and everything works fine.<br>I cannot understand what is wrong with the dune project.<br>I don't know whether this information is enough to go on; I know these messages may seem cryptic, but I thought someone might have seen something like this before.<br>The most surprising thing is that I get no errors while building dune. Moreover, I built dune on my laptop following the same scheme and it works perfectly.<br>I hope someone can help me. 
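For what it's worth, the simple MPI test I mentioned was essentially the following (a minimal sketch; the file name and output wording are illustrative, not the exact ones I used):<br><br>

```c
/* Minimal MPI sanity check: initialize, query rank and size, finalize. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;

    MPI_Init(&argc, &argv);                 /* must come before any other MPI call */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank in MPI_COMM_WORLD */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                         /* last MPI call before exiting */
    return 0;
}
```

<br>I compiled and ran it with the same Open MPI installation, i.e. something like /opt/openmpi/openmpi-1.6.4-gcc/bin/mpicc mpitest.c -o mpitest followed by /opt/openmpi/openmpi-1.6.4-gcc/bin/mpiexec -np 1 ./mpitest (mpicc is Open MPI's C wrapper; the file name mpitest.c is made up here).<br><br>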
It looks as though the MPI compiler wrapper and mpiexec come from different versions of Open MPI, but they are in fact the same installation. Weird!<br><br>Thanks for any hint.<br>Best regards,<br><br>Marco<br><br><div>--<br>-----------------------------------------------<br>Marco Cisternino, PhD<br>OPTIMAD Engineering s.r.l.<br>Via Giacinto Collegno 18<br>10143 Torino - Italy<br>www.optimad.it<br>marco.cisternino@optimad.it<br>+39 011 19719782<br>-----------------------------------------------<br></div><br></div></body></html>