Compiling and Running MPI Jobs

Compiling Parallel MPI Programs

The Intel, PGI, and GNU compilers are installed on all of the clusters. The standard MPI implementation is Intel MPI, which supports the InfiniBand interconnect. Open MPI is also available.

To set up your environment:

$ module load intel/19.1/64/19.1.1.217 intel-mpi/intel/2019.7/64
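Loading these modules puts the MPI compiler wrappers (mpif90, mpicc, mpicxx) on your PATH. As an optional check, the Intel MPI wrappers accept a -show flag that prints the underlying compiler command and options without compiling anything (exact output may vary by MPI version):

$ mpicc -show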

To compile Fortran code:

$ mpif90 myMPIcode.f90

See an example for Fortran 90.

To compile C code:

$ mpicc myMPIcode.c

To compile C++ code:

$ mpicxx myMPIcode.cpp

See an example for C++.
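By default, the commands above name the executable a.out. A minimal sketch of a compile with an explicit output name (the optimization level and file names are illustrative):

$ mpicc -O2 -o myMPIcode myMPIcode.c

The resulting executable should then be run through the Slurm scheduler, as described under "Submitting an MPI Job" below.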


Compiling Vectorized Code on Della

Please note: If you are compiling programs with the -x option, you will need to use -ax on Della as described below. On Della the -xCORE-AVX2 or -xHost options can result in poor performance or error messages.

Compiling with -xHost on the Della head node (a Broadwell node) will produce code optimized for Broadwell processors. As a result, when run on the older nodes, the executable will fail with an error message similar to: "Please verify that both the operating system and the processor support Intel(R) F16C and AVX1 instructions." On the Skylake nodes, it may run at less than optimal performance. The recommended solution is to use the -ax flag, which tells the compiler to build a binary containing instruction sets for each architecture and to choose the best one at runtime. For example, instead of -xCORE-AVX2 or -xHost, use:

$ icc -Ofast -xCORE-AVX2 -axCORE-AVX512 -o myexe mycode.c

The resulting executable will be able to run on Broadwell as well as Skylake and Cascade Lake nodes. It will fail on Ivy Bridge nodes, which should be excluded with:

#SBATCH --constraint=haswell|broadwell


Submitting an MPI Job

See the "Multinode Jobs" section of the Slurm page for an example. You will need to load the same environment modules in your Slurm script that you used to compile the code.