Obtaining a License
Research Computing does not have a license for VASP. See this page for information about licensing:
https://www.vasp.at/faqs
In short, VASP licenses are granted only to well-defined research groups. They are not issued at the department or institution level, and they are not personal licenses. All members of a VASP group must work in the same organizational unit (department, institute) at the same location. The way to get a license is for the PI of the research group to obtain one.
Once a research group obtains a license, Research Computing can help with the installation of the software and configuring the license server.
Installation
Della (Installation Method #1)
VASP provides installation directions for Version 6.X.X. Here is a sample procedure for Della (CPU):
$ ssh <YourNetID>@della8.princeton.edu
$ cd /path/to/vasp.6.3.1
$ cp arch/makefile.include.linux_intel_omp ./makefile.include
$ module purge
$ module load intel/2021.1.2 intel-mpi/intel/2021.1.1
$ make DEPS=1 -j8 all
The code should build successfully. The next step is to run the test suite:
$ export PATH=$PATH:/path/to/vasp.6.3.1/bin
$ unset I_MPI_HYDRA_BOOTSTRAP   # for testing only, do not include in Slurm script
$ unset I_MPI_PMI_LIBRARY       # for testing only, do not include in Slurm script
$ make test
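If the build and tests succeed, the bin directory should contain the three standard VASP executables: vasp_std (standard), vasp_gam (Gamma-point only) and vasp_ncl (non-collinear). A quick way to confirm they were created (using the placeholder install path from above):

$ ls /path/to/vasp.6.3.1/bin
vasp_gam  vasp_ncl  vasp_std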
The directions above build a hybrid OpenMP/MPI version of VASP. Be sure to include the two environment modules and the definition of the PATH environment variable in your Slurm script. You will also need to conduct a scaling analysis to find the optimal Slurm directives; a minimal scaling sketch is shown after the sample script below.
Below is a sample Slurm script:
#!/bin/bash
#SBATCH --job-name=vasp          # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=8      # total number of tasks per node
#SBATCH --cpus-per-task=2        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G is default)
#SBATCH --time=00:01:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-user=<YourNetID>@princeton.edu
#SBATCH --constraint=cascade

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export PATH=$PATH:/path/to/vasp.6.3.1/bin

module purge
module load intel/2021.1.2 intel-mpi/intel/2021.1.1

srun vasp_std
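For the scaling analysis, one simple approach is to submit the same representative calculation several times while varying the MPI task count, then compare the timings reported at the end of each OUTCAR. The sketch below is only an illustration; the script name job.slurm and the task counts are placeholders:

#!/bin/bash
# Hypothetical scaling test: submit the same calculation with several
# MPI task counts, overriding the directives inside job.slurm.
for n in 4 8 16 32; do
    sbatch --job-name=vasp-scale-$n --ntasks-per-node=$n job.slurm
done
# Afterwards, compare the run times, e.g.:
#   grep "Elapsed time" */OUTCAR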
Della (Installation Method #2)
This approach results in a CPU-only build of VASP that runs in parallel using Intel's Math Kernel Library (MKL), Intel's MPI implementation (Intel MPI), and an HDF5 library built with the Intel compilers. Both the AVX2 and AVX512 instruction sets are enabled by this build procedure, so the resulting executables can run on any of Della's nodes. The example procedure presented here is based on VASP 6.3.1.
$ ssh <YourNetID>@della.princeton.edu
$ cd /home/$USER/vasp.6.3.1    # Path to your VASP install directory
$ cp arch/makefile.include.linux_intel ./makefile.include
Uncomment and edit lines in the new makefile.include file so they match these:
VASP_TARGET_CPU ?= -axCORE-AVX2,CORE-AVX512
FCL             += -qmkl=sequential
MKLROOT         ?= /opt/intel/oneapi/mkl/2022.2.0
LLIBS           += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
INCS             = -I$(MKLROOT)/include/fftw
CPP_OPTIONS     += -DVASP_HDF5
HDF5_ROOT       ?= /usr/local/hdf5/intel-2021.1/intel-mpi/1.10.6
LLIBS           += -L$(HDF5_ROOT)/lib64 -lhdf5_fortran
INCS            += -I$(HDF5_ROOT)/include
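The MKLROOT and HDF5_ROOT values above correspond to the specific module versions used in this example. If you build against different module versions, one way to look up the matching paths is to inspect the modules themselves (a hedged example using standard module commands):

$ module show intel-mkl/2022.2.0                       # look for MKLROOT in the output
$ module show hdf5/intel-2021.1/intel-mpi/1.10.6       # look for the installation prefix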
Next, load the needed software modules, remove any components from previous build attempts, and start the build (compiling) process:
$ module purge
$ module load intel/2022.2.0
$ module load intel-mpi/intel/2021.7.0
$ module load hdf5/intel-2021.1/intel-mpi/1.10.6
$ rm -rf bin/* build/*
$ make DEPS=1 -j8 all
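As with Installation Method #1, it is a good idea to run the VASP test suite once the build completes. The same caveat applies: the two unset lines are for interactive testing only and should not appear in a Slurm script.

$ export PATH=$PATH:/home/$USER/vasp.6.3.1/bin
$ unset I_MPI_HYDRA_BOOTSTRAP   # for testing only, do not include in Slurm script
$ unset I_MPI_PMI_LIBRARY       # for testing only, do not include in Slurm script
$ make test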
Below is an example Slurm job script for this build of VASP:
#!/bin/bash
#SBATCH --job-name=VASP-EX
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=4G
#SBATCH --time=00:02:00
#SBATCH --mail-type=end
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
module load intel/2022.2.0
module load intel-mpi/intel/2021.7.0
module load hdf5/intel-2021.1/intel-mpi/1.10.6

srun /home/$USER/vasp.6.3.1/bin/vasp_gam
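Note that this script runs vasp_gam, the Gamma-point-only executable; the build also produces vasp_std and vasp_ncl. If your calculation uses more than a single k-point, the last line would instead be, for example:

srun /home/$USER/vasp.6.3.1/bin/vasp_std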
Della (Installation Method #3)
This approach results in a GPU-enabled build of VASP that runs in parallel using Intel's Math Kernel Library (MKL) and an MPI implementation (OpenMPI) built with the NVIDIA HPC SDK compilers. The compiler options accepted by the NVIDIA HPC SDK's compilers are not as feature-specific as those of the Intel compiler suite, so here we enable a broad range of instruction sets (those associated with Della hardware) by targeting the most basic of them: Haswell. The example procedure presented here is based on VASP 6.3.1.
$ ssh <YourNetID>@della-gpu.princeton.edu
$ cd /home/$USER/vasp.6.3.1    # Path to your VASP install directory
$ cp arch/makefile.include.nvhpc_ompi_mkl_omp ./makefile.include
Uncomment and edit lines in the new makefile.include file so they match these:
FC              = mpif90 -cuda -gpu=cc60,cc70,cc80,cuda11.7 -mp
FCL             = mpif90 -cuda -gpu=cc60,cc70,cc80,cuda11.7 -mp -c++libs
VASP_TARGET_CPU ?= -tp haswell
MKLROOT         ?= /opt/intel/oneapi/mkl/2022.2.0
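The cc60, cc70, and cc80 targets correspond to the compute capabilities of the P100, V100, and A100 GPUs, respectively. If you want to check the GPUs visible on a node before trimming this list, one option (assuming the NVHPC SDK is loaded on a GPU node) is:

$ nvaccelinfo    # lists device properties, including the default cc target for each GPU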
Next, load the needed software modules, remove any components from previous build attempts, and start the build (compiling) process:
$ module purge
$ module load intel/2022.2.0 intel-mkl/2022.2.0
$ module load nvhpc/22.5 openmpi/nvhpc-22.5/4.1.3
$ export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/22.5
$ rm -rf bin/* build/*
$ make DEPS=1 -j8 all
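Because both the Intel and NVHPC toolchains provide an mpif90 wrapper, it can be worth confirming that the NVHPC/OpenMPI wrapper is the one found on your PATH before running make (a quick sanity check):

$ which mpif90      # should point into the NVHPC/OpenMPI installation
$ mpif90 --version  # should report the NVIDIA Fortran compiler (nvfortran)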
Below is an example Slurm job script for this build of VASP:
#!/bin/bash
#SBATCH --job-name=VASP-EX
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=4G
#SBATCH --gres=gpu:1
#SBATCH --time=00:02:00
#SBATCH --mail-type=end
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
module load intel/2022.2.0 intel-mkl/2022.2.0
module load nvhpc/22.5 openmpi/nvhpc-22.5/4.1.3
export NVHPC_CUDA_HOME=/opt/nvidia/hpc_sdk/Linux_x86_64/22.5

srun /home/$USER/vasp.6.3.1/bin/vasp_std
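This build is compiled with OpenMP support (the -mp flag above), so you may also want to assign CPU cores to each MPI task and set OMP_NUM_THREADS, as in the CPU script earlier. A possible variation of the relevant lines is sketched below; the core count is a placeholder to be tuned with a scaling analysis:

#SBATCH --ntasks=1               # one MPI task per GPU is a common starting point
#SBATCH --cpus-per-task=8        # placeholder; tune with a scaling analysis
#SBATCH --gres=gpu:1

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK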
Tiger
The procedures above can be used as a guide for Tiger. Please use these modules:
module load intel/19.1/64/19.1.1.217
module load intel-mkl/2020.1/1/64
module load intel-mpi/intel/2019.7/64
module load hdf5/intel-17.0/intel-mpi/1.10.0
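As a starting point, a Slurm script for Tiger might look like the Della CPU examples with the Tiger modules substituted. This is only a sketch; the resource requests are placeholders, and the install path assumes the same location used in the Della examples:

#!/bin/bash
#SBATCH --job-name=vasp
#SBATCH --nodes=1
#SBATCH --ntasks=8               # placeholder; tune with a scaling analysis
#SBATCH --mem=4G
#SBATCH --time=00:10:00

module purge
module load intel/19.1/64/19.1.1.217
module load intel-mkl/2020.1/1/64
module load intel-mpi/intel/2019.7/64
module load hdf5/intel-17.0/intel-mpi/1.10.0

srun /home/$USER/vasp.6.3.1/bin/vasp_std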