A popular simulation package for ab initio calculations

Obtaining a License

Research Computing does not have a license for VASP. See the VASP FAQ for information about licensing.

In short, VASP licenses are given to well-defined research groups only. They are not granted by department or institution. Furthermore, they are not personal licenses. All members of a VASP group have to work in the same organizational unit (department, institute) at the same location. The PI of the research group must obtain the license.

Once a research group obtains a license, Research Computing can help with the installation of the software and configuring the license server.

VASP can be built in various ways, and we cannot provide build directions for every possible variation. If you encounter problems, submit a support ticket or attend an in-person help session.

Installation

Tiger3

$ ssh <YourNetID>@tiger3.princeton.edu
$ cd /path/to/vasp.6.3.X
# create a file called makefile.include in /path/to/vasp.6.3.X (see below)

Below are the contents of makefile.include:

# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dfock_dblbuf \
              -D_OPENMP
CPP         = fpp -f_com=no -free -w0  $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC          = mpiifx -qopenmp
FCL         = mpiifx
FREE        = -free -names lowercase
FFLAGS      = -assume byterecl -w
OFLAG       = -O2
OFLAG_IN    = $(OFLAG)
DEBUG       = -O0
OBJECTS     = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB     = $(CPP)
FC_LIB      = $(FC)
CC_LIB      = icx
CFLAGS_LIB  = -O
FFLAGS_LIB  = -O1
FREE_LIB    = $(FREE)
OBJECTS_LIB = linpack_double.o
# For the parser library
CXX_PARS    = icpx
LLIBS       = -lstdc++
##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -xHOST
FFLAGS     += $(VASP_TARGET_CPU)
# Intel MKL (FFTW, BLAS, LAPACK, and scaLAPACK)
# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL        += -qmkl
MKLROOT    ?= /opt/intel/oneapi/mkl/2024.2
LLIBS      += -L$(MKLROOT)/lib -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
INCS        =-I$(MKLROOT)/include/fftw
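Note that the "?=" assignments above only take effect when a variable is not already defined, so an MKLROOT exported by the intel-mkl module will override the default path in the file. A quick way to check which value will be used (a sanity check only; it assumes the module sets MKLROOT):

$ module load intel-mkl/2024.2
$ echo $MKLROOT   # if non-empty, this path overrides the default in makefile.include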

After creating makefile.include, load the appropriate modules and build the code:

$ module purge
$ module load intel-oneapi/2024.2
$ module load intel-mpi/oneapi/2021.13
$ module load intel-mkl/2024.2
$ make DEPS=1 -j 16 all
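If the build succeeds, the executables are placed in the bin/ directory. As an optional sanity check (not part of the official procedure), confirm that they exist and link against MKL:

$ ls bin/                         # should list vasp_std, vasp_gam and vasp_ncl
$ ldd bin/vasp_std | grep -i mkl  # the MKL libraries should resolve to the oneAPI installation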

Run the test suite:

$ export PATH=$PATH:/path/to/vasp.6.3.X/bin
$ unset I_MPI_HYDRA_BOOTSTRAP # for testing only, do not include in Slurm script
$ unset I_MPI_PMI_LIBRARY     # for testing only, do not include in Slurm script
$ make test

Below is a sample Slurm script:

#!/bin/bash
#SBATCH --job-name=vasp          # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=4      # total number of tasks per node
#SBATCH --cpus-per-task=2        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G is default)
#SBATCH --time=00:01:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-user=<YourNetID>@princeton.edu

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
export PATH=$PATH:/path/to/vasp.6.3.X/bin

module purge
module load intel-oneapi/2024.2
module load intel-mpi/oneapi/2021.13
module load intel-mkl/2024.2

srun vasp_std <inputs>

Users will need to find the optimal values for --nodes, --ntasks-per-node and --cpus-per-task.
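One practical approach is to submit the same calculation several times while overriding the task count on the sbatch command line (command-line options take precedence over #SBATCH directives) and comparing the run times. A rough sketch, assuming the script above is saved as job.slurm (a hypothetical filename):

$ for n in 4 8 16; do sbatch --ntasks-per-node=$n --job-name=vasp-n$n job.slurm; done

Afterwards, compare the elapsed time reported at the end of each run's OUTCAR file.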

 

Della (Installation Method #1)

VASP provides installation directions for Version 6.X.X. Here is a sample procedure for Della (CPU):

$ ssh <YourNetID>@della8.princeton.edu
$ cd /path/to/vasp.6.3.1
$ cp arch/makefile.include.linux_intel_omp  ./makefile.include
$ module purge
$ module load intel/2021.1.2 intel-mpi/intel/2021.1.1
$ make DEPS=1 -j8 all

The code should build successfully. The next step is to run the test suite:

$ export PATH=$PATH:/path/to/vasp.6.3.1/bin
$ unset I_MPI_HYDRA_BOOTSTRAP # for testing only, do not include in Slurm script
$ unset I_MPI_PMI_LIBRARY     # for testing only, do not include in Slurm script
$ make test

The directions above build a hybrid OpenMP/MPI version of VASP. Be sure to include the two environment modules and the definition of the PATH environment variable in your Slurm script. You will also need to conduct a scaling analysis to find the optimal Slurm directives.
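One way to compare runs during the scaling analysis is to look at the totals that VASP prints at the end of the OUTCAR file (the exact wording may vary slightly between VASP versions):

$ grep "Elapsed time" OUTCAR      # wall-clock time of the run
$ grep "Total CPU time" OUTCAR    # total CPU time used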

Below is a sample Slurm script:

#!/bin/bash
#SBATCH --job-name=vasp          # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=8      # total number of tasks per node
#SBATCH --cpus-per-task=2        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G is default)
#SBATCH --time=00:01:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-user=<YourNetID>@princeton.edu
#SBATCH --constraint=cascade
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export PATH=$PATH:/path/to/vasp.6.3.1/bin
module purge
module load intel/2021.1.2 intel-mpi/intel/2021.1.1
srun vasp_std <inputs>
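Depending on the Slurm version, srun does not always inherit --cpus-per-task from the #SBATCH directives, and hybrid runs generally benefit from explicit thread pinning. The following optional lines (standard Slurm and OpenMP settings, not site-specific requirements; compare the Tiger3 script above) can be added before the srun command:

export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK  # make srun honor --cpus-per-task
export OMP_PLACES=cores                         # pin each OpenMP thread to a physical core
export OMP_PROC_BIND=close                      # keep a task's threads on neighboring cores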

Della (Installation Method #2)

This approach results in a CPU-only build of VASP that runs in parallel using Intel's Math Kernel Library (MKL), Intel's MPI implementation (Intel MPI), and the Intel-optimized HDF5 libraries. Both the AVX2 and AVX512 instruction sets are enabled by this build procedure, so the resulting executables can run on any of Della's CPU nodes. The example procedure presented here is based on VASP 6.3.1.

$ ssh <YourNetID>@della.princeton.edu
$ cd /home/$USER/vasp.6.3.1  # Path to your VASP install directory
$ cp arch/makefile.include.intel_omp ./makefile.include

Uncomment and edit lines in the new makefile.include file so they match these:

VASP_TARGET_CPU ?= -axCORE-AVX2,CORE-AVX512
FCL        += -qmkl=sequential
MKLROOT    ?= /opt/intel/oneapi/mkl/2022.2.0
LLIBS      += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
INCS        = -I$(MKLROOT)/include/fftw
CPP_OPTIONS+= -DVASP_HDF5
HDF5_ROOT  ?= /usr/local/hdf5/intel-2021.1/intel-mpi/1.10.6
LLIBS      += -L$(HDF5_ROOT)/lib64 -lhdf5_fortran
INCS       += -I$(HDF5_ROOT)/include

Next, load the needed software modules, remove any components from previous build attempts, and start the build (compiling) process:

$ module purge
$ module load intel/2022.2.0
$ module load intel-mpi/intel/2021.7.0
$ module load hdf5/intel-2021.1/intel-mpi/1.10.6
$ rm -rf bin/* build/*
$ make DEPS=1 -j8 all
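As an optional check that the MKL and HDF5 settings took effect, inspect the shared libraries of the resulting binary:

$ ldd bin/vasp_std | grep -i -E "mkl|hdf5"   # both the MKL and HDF5 libraries should appear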

Below is an example Slurm job script for this build of VASP:

#!/bin/bash
#SBATCH --job-name=VASP-EX
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=4G 
#SBATCH --time=00:02:00
#SBATCH --mail-type=end
#SBATCH --mail-user=<YourNetID>@princeton.edu
module purge
module load intel/2022.2.0
module load intel-mpi/intel/2021.7.0
module load hdf5/intel-2021.1/intel-mpi/1.10.6
srun /home/$USER/vasp.6.3.1/bin/vasp_gam

Della (Installation Method #3 for GPUs)

Directions by Donghao Zheng of Geosciences.

This approach results in a GPU-enabled build of VASP that runs in parallel using Intel's Math Kernel Library (MKL) and Open MPI. The example procedure presented here is based on VASP 6.3.2; the same steps also work with VASP 6.3.1.

Note: Move the VASP source code to the recommended location of /scratch/gpfs/<YourNetID>/.

Log in to della-gpu:

$ ssh <YourNetID>@della-gpu.princeton.edu

Go to the VASP directory and create the makefile.include file:

$ cd /home/$USER/vasp.6.3.2  # path to your VASP install directory
$ cp arch/makefile.include.nvhpc_ompi_mkl_omp_acc ./makefile.include

Modify the makefile.include file using a text editor as follows:

FC          = mpif90 -acc -gpu=cc80,cuda11.3 -mp
FCL         = mpif90 -acc -gpu=cc80,cuda11.3 -mp -c++libs
.........................
VASP_TARGET_CPU ?= -tp haswell
.........................
MKLROOT    ?= /opt/intel/oneapi/mkl/2022.2.0

Comment out the following two lines like so (i.e., add the "#" character):

#SCALAPACK_ROOT ?= /path/to/your/scalapack/installation
#LLIBS_MKL   = -L$(SCALAPACK_ROOT)/lib -lscalapack -Mmkl

Run the commands below to load the appropriate environment modules:

$ module purge
$ module load nvhpc/21.5
$ module load cudatoolkit/11.3
$ module load openmpi/cuda-11.3/nvhpc-21.5/4.1.1
$ module load intel-tbb/2021.7.0
$ module load intel-rt/2022.2.0
$ module load intel-mkl/2022.2.0

Set the following environment variable:

$ export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/21.5

Lastly, remove any previous builds and compile the code:

$ rm -rf bin/* build/*
$ make DEPS=1 -j4 all
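To confirm that GPU offloading was actually compiled in, you can inspect the shared libraries of the resulting binary (a rough check; the exact library names depend on the NVIDIA HPC SDK version):

$ ldd bin/vasp_std | grep -i -E "acc|cuda"   # NVIDIA OpenACC/CUDA runtime libraries should appear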

Below is a sample Slurm script for this build:

#!/bin/bash
#SBATCH --job-name=User-job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=16G
#SBATCH --gres=gpu:1
#SBATCH --time=00:20:00
#SBATCH --mail-type=end
#SBATCH --mail-user=<YourNetID>@princeton.edu
module purge
module load nvhpc/21.5
module load cudatoolkit/11.3
module load openmpi/cuda-11.3/nvhpc-21.5/4.1.1
module load intel-mkl/2022.2.0
module load intel-tbb/2021.7.0
module load intel-rt/2022.2.0
export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/21.5
srun /path/to/VASP/vasp_gam <inputs>
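As an optional check that the job actually sees the GPU requested with --gres=gpu:1, you can print the GPU inventory in the job script immediately before the srun line:

nvidia-smi   # should list exactly one GPU for this job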

Della (Installation Method #4)

Note: The older CUDA-C GPU port of VASP is deprecated and is no longer developed, maintained, or supported. As of VASP 6.3.0, the CUDA-C GPU port has been dropped completely.

This approach results in a GPU-enabled build of VASP that runs in parallel using Intel's Math Kernel Library (MKL) and the MPI implementation provided with the NVIDIA HPC SDK. The compiler options accepted by the NVIDIA HPC SDK's compilers are not as feature-specific as those of the Intel compiler suite, so here we enable a broad range of instruction sets (those associated with Della hardware) by targeting the most basic common standard, Haswell. The example procedure presented here is based on VASP 6.3.1.

$ ssh <YourNetID>@della-gpu.princeton.edu
$ cd /home/$USER/vasp.6.3.1  # Path to your VASP install directory
$ cp arch/makefile.include.nvhpc_ompi_mkl_omp ./makefile.include

Uncomment and edit lines in the new makefile.include file so they match these:

FC          = mpif90 -cuda -gpu=cc60,cc70,cc80,cuda11.7 -mp
FCL         = mpif90 -cuda -gpu=cc60,cc70,cc80,cuda11.7 -mp -c++libs
VASP_TARGET_CPU ?= -tp haswell
MKLROOT    ?= /opt/intel/oneapi/mkl/2022.2.0

Next, load the needed software modules, remove any components from previous build attempts, and start the build (compiling) process:

$ module purge
$ module load intel/2022.2.0
$ module load intel-mkl/2022.2.0
$ module load nvhpc/22.5
$ module load openmpi/nvhpc-22.5/4.1.3
$ export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/22.5
$ rm -rf bin/* build/*
$ make DEPS=1 -j8 all

Below is an example Slurm job script for this build of VASP:

#!/bin/bash
#SBATCH --job-name=VASP-EX
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=4G 
#SBATCH --gres=gpu:1
#SBATCH --time=00:02:00
#SBATCH --mail-type=end
#SBATCH --mail-user=<YourNetID>@princeton.edu
module purge
module load intel/2022.2.0 intel-mkl/2022.2.0
module load nvhpc/22.5 openmpi/nvhpc-22.5/4.1.3
export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/22.5
srun /home/$USER/vasp.6.3.1/bin/vasp_std

Tiger

The procedures above can be used as a guide for Tiger. Please use these modules:

module load intel/19.1/64/19.1.1.217
module load intel-mkl/2020.1/1/64
module load intel-mpi/intel/2019.7/64
module load hdf5/intel-17.0/intel-mpi/1.10.0

Adroit

The procedures above can be used as a guide for Adroit. After editing your makefile.include as shown below, use these modules and build commands:

module purge
module load intel/2021.1.2
module load intel-mpi/intel/2021.3.1
module load hdf5/intel-2021.1/1.10.6
make veryclean  # this will clean out any object files from previous builds
make

The relevant lines of your makefile.include should look like this:

VASP_TARGET_CPU ?= -axCORE-AVX2,CORE-AVX512
FFLAGS += $(VASP_TARGET_CPU)
# Intel MKL (FFTW, BLAS, LAPACK, and scaLAPACK)
# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL +=
# MKLROOT ?= $(MKLROOT)
LLIBS += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 \
         -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl
INCS =-I$(MKLROOT)/include/fftw
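For running on Adroit, a Slurm script along the lines of the Della examples above can be used. The sketch below reuses the Adroit modules; the install path, task counts, memory, and run time are placeholders to adjust for your own calculations:

#!/bin/bash
#SBATCH --job-name=vasp
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=00:10:00

module purge
module load intel/2021.1.2
module load intel-mpi/intel/2021.3.1
module load hdf5/intel-2021.1/1.10.6

srun /path/to/vasp.6.X.X/bin/vasp_std <inputs>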