Hello,
I’m trying to build parallel Abinit on a national lab computing cluster, using the Intel compilers and Intel MPI, but I’m having trouble with the MPI setup. I have the following modules loaded: 1) lanlpe/.tce (H) 2) mkl/cluster.2020.4 3) intel/2021.2.0 4) intel-mpi/cluster.2020.4
I’m using the following configuration options:
with_mpi="${I_MPI_ROOT}"
FC="mpiifort"
CC="mpiicc"
CXX="mpiicpc"
with_optim_flavor="aggressive"
enable_openmp="yes"
#LINALG_LIBS="${MKLROOT}/lib/intel64/libmkl_lapack95_ilp64.a -L${MKLROOT}/lib/intel64 -lmkl_scalapack_ilp64 -lmkl_bl$
lmkl_intel-thread="yes"
with_linalg_flavor="mkl"
with_dft_flavor="libxc"
When I echo $I_MPI_ROOT, it returns a directory that contains the required Intel compiler wrappers (mpiifort etc.) under the relative path $I_MPI_ROOT/intel64/bin. However, configure reports that the MPI C, C++, and Fortran compilers are not set, and errors out with the “MPI support does not work” message. Curiously enough, when I echo $MPIFC it returns a different directory with its own copies of the compiler wrappers, but setting that directory as my with_mpi didn’t work either. Does anyone know what’s going on?
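In case it helps, this is the alternative MPI block I was planning to try next, based on my reading of the Abinit 9 configure docs (with_mpi="yes" plus explicit compiler wrappers instead of a prefix directory). The full wrapper paths are my own guess from where I find them under $I_MPI_ROOT, so treat this as a sketch, not something I have confirmed to work:

```shell
# Untested alternative .ac fragment: point configure at the wrappers directly
# rather than asking it to probe a prefix directory for them.
with_mpi="yes"
FC="${I_MPI_ROOT}/intel64/bin/mpiifort"
CC="${I_MPI_ROOT}/intel64/bin/mpiicc"
CXX="${I_MPI_ROOT}/intel64/bin/mpiicpc"
```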
Putting the MPI configuration output here since the forum software doesn’t trust me with the power of file upload yet:
=== Multicore architecture support ===
==============================================================================
checking whether to enable OpenMP support... yes
checking Fortran flags for OpenMP... -qopenmp
checking whether OpenMP COLLAPSE works... yes
configure: OpenMP support is enabled in Fortran source code only
checking whether to enable MPI... yes
checking how MPI parameters have been set... dir
checking whether the MPI C compiler is set... no
checking whether the MPI C++ compiler is set... no
checking whether the MPI Fortran compiler is set... no
checking for MPI C preprocessing flags... -I/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/include
checking for MPI C flags...
checking for MPI C++ flags...
checking for MPI Fortran flags... -I/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/include
checking for MPI linker flags...
checking for MPI library flags... -L/turquoise/usr/projects/hpcsoft/tce/linux-rhel7-x86_64_v3-omnipath-none/linux-rhel7-broadwell/intel-parallel-studio/intel-2021.2.0/intel-parallel-studio-cluster.2020.4-2pqnyusu2fldhobiiduq2tf3xysuukgb/compilers_and_libraries_2020.4.304/linux/mpi/lib -lmpi
checking whether the MPI C API works... no
checking whether the MPI C environment works... no
configure: error: in `/users/steventhar/abinit-9.6.2':
configure: error: MPI support does not work
See `config.log' for more details