MPI compilers not set

Hello,
I’m trying to build parallel Abinit on a national lab computing cluster, using the Intel compilers and Intel MPI, but I’m having trouble with the MPI setup. I have the following modules loaded: 1) lanlpe/.tce (H) 2) mkl/cluster.2020.4 3) intel/2021.2.0 4) intel-mpi/cluster.2020.4
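For context, the load commands are roughly the following (module names as in the list above; the hidden lanlpe/.tce module gets pulled in automatically, I believe):

module load mkl/cluster.2020.4 intel/2021.2.0 intel-mpi/cluster.2020.4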
I’m using the following configuration options:

with_mpi="${I_MPI_ROOT}"
FC="mpiifort"
CC="mpiicc"
CXX="mpiicpc"
with_optim_flavor="aggressive"
enable_openmp="yes"
#LINALG_LIBS="${MKLROOT}/lib/intel64/libmkl_lapack95_ilp64.a -L${MKLROOT}/lib/intel64 -lmkl_scalapack_ilp64 -lmkl_bl$
lmkl_intel-thread="yes"
with_linalg_flavor="mkl"
with_dft_flavor="libxc"

When I echo $I_MPI_ROOT, it returns a directory that contains the required Intel compiler wrappers (mpiifort, etc.) under the relative path $I_MPI_ROOT/intel64/bin. However, the configure run reports that the MPI C, C++, and Fortran compilers are not set, causing it to error out with the “MPI support does not work” message. Curiously enough, when I echo $MPIFC it returns a different directory with its own copies of the compiler wrappers, but setting that directory as my with_mpi didn’t work either. Does anyone know what’s going on?
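For reference, this is roughly what I’m doing to check those variables (I’m not pasting the full paths here):

echo $I_MPI_ROOT                          # points at the Intel MPI root directory
ls $I_MPI_ROOT/intel64/bin | grep mpii    # mpiicc, mpiicpc, mpiifort are all there
echo $MPIFC                               # points at a different directory
which mpiifort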

Putting the MPI configuration output here since the forum software doesn’t trust me with the power of file upload yet:

  === Multicore architecture support                                         ===
 ==============================================================================

checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -qopenmp
checking whether OpenMP COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to enable MPI... yes
checking how MPI parameters have been set... dir
checking whether the MPI C compiler is set... no
checking whether the MPI C++ compiler is set... no
checking whether the MPI Fortran compiler is set... no
checking for MPI C preprocessing flags... -I/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/include
checking for MPI C flags...
checking for MPI C++ flags...
checking for MPI Fortran flags...  -I/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/include
checking for MPI linker flags...
checking for MPI library flags... -L/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/lib -lmpi
checking whether the MPI C API works... no
checking whether the MPI C environment works... no
configure: error: in `/users/steventhar/abinit-9.6.2':
configure: error: MPI support does not work
See `config.log' for more details

config.log (275.6 KB)

Hi Steven,

Two comments first:

Can you use:

with_mpi="yes"

And, if you are using an ABINIT version >= 9,

with_dft_flavor="libxc"

is deprecated.
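So the relevant part of your options would look roughly like this (just a sketch, keeping the wrapper and MKL settings you posted and dropping the deprecated line):

with_mpi="yes"
FC="mpiifort"
CC="mpiicc"
CXX="mpiicpc"
with_linalg_flavor="mkl"
# with_dft_flavor="libxc"   <- deprecated in ABINIT >= 9, remove this line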

jmb

I just tried changing these two options, but the configure run produced the same output and the same final error.
config.log (271.0 KB)

Hi,

I suspect that the environment variables FC, CC, … are already defined in your environment, so the build system picks up their contents.

As seen in config.log:

configure:7214: overriding configuration of CC from environment
configure:7396: overriding configuration of CXX from environment
configure:7543: overriding configuration of FC from environment

and confirmed in config.log:

CC='/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/packages/intel-mpi/intel-mpi-cluster.2020.4/bin/icc'
CXX='/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/packages/intel-mpi/intel-mpi-cluster.2020.4/bin/icpc'
FC='/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/packages/intel-mpi/intel-mpi-cluster.2020.4/bin/ifort'

Try unsetting FC, CC, and CXX before running

./configure
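Something like this, for example (the grep is only there to check what is currently exported; adjust the variable list to your shell):

env | grep -E '^(CC|CXX|FC)='
unset CC CXX FC
./configure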

jmb

Thanks for the advice. I tried unsetting these variables; while this got rid of the override messages, the final error was still the same.
config.log (267.2 KB)

There is a problem with your environment…

Extract from config.log:

configure:22927: checking whether the MPI C API works
configure:22961: mpiicc -o conftest -g -O3           conftest.c -lmpi  >&5
conftest.c(91): catastrophic error: cannot open source file "mpi.h"
  #include <mpi.h>
                  ^

compilation aborted for conftest.c (code 4)

When everything works:

configure:23028: checking whether the MPI C API works
configure:23062: mpiicc -o conftest -g -O2           conftest.c -lmpi  >&5
configure:23062: $? = 0
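You can also reproduce configure’s test by hand with something like this (mpitest.c is just a throwaway file name I am using here):

cat > mpitest.c << 'EOF'
#include <mpi.h>
int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  MPI_Finalize();
  return 0;
}
EOF
mpiicc -o mpitest mpitest.c    # compiles cleanly only if mpi.h and the MPI library are found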

Can you send the output of

env

I had exited the terminal, so I had to open a new session to reproduce the error, and this time the MPI checks worked. I think last time I unset MPIFC as well as FC. This time, unsetting just FC, CC, and CXX while leaving their MPI equivalents untouched made it work.
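For anyone who hits the same thing, what worked in the end was essentially this (in a fresh shell with the modules loaded):

unset FC CC CXX      # clear only the plain compiler variables
# leave MPIFC (and the other MPI wrapper variables) untouched
./configure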

Thanks! Now I just need to get netcdf, libxc, and the FFT libraries working... you may see more forum posts from me at some point.