NAMD on the HPC Clusters

NAMD is a parallel molecular dynamics code designed for the simulation of biomolecular systems. You should build an MPI version of the code from source instead of using one of the pre-compiled executables, since those executables are not Slurm-aware. The directions for doing this depend on the cluster. Here is the NAMD documentation.

Container

Consider using the NAMD 3 container from NVIDIA GPU Cloud. See our Singularity page for directions on running the container.
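A typical workflow looks like the sketch below. The image tag is a placeholder to be filled in from the NGC catalog, and the executable name inside the container (`namd3` here) may differ by release:

```shell
# Pull the NAMD image from NVIDIA GPU Cloud; <tag> is a placeholder --
# look up the current tag in the NGC catalog
singularity pull namd.sif docker://nvcr.io/hpc/namd:<tag>

# Run NAMD inside the container with GPU support (--nv exposes the host GPUs)
singularity run --nv namd.sif namd3 +p4 +devices 0 <configfile>
```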


TigerCPU, Stellar or Adroit (CPU)

These directions are for building an MPI version of NAMD 2.13 (official release) for one of the CPU clusters:

$ ssh <YourNetID>@della.princeton.edu  # or another CPU cluster
$ cd software  # or another directory
$ wget https://raw.githubusercontent.com/PrincetonUniversity/hpc_beginning_workshop/master/RC_example_jobs/namd/namd_2.13_cpu_only.sh 
$ bash namd_2.13_cpu_only.sh | tee build.log

This will build the namd2 executable in ~/software/NAMD_2.13_Source/Linux-x86_64-intel/. You may consider adding this path to your PATH environment variable.
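For example, assuming the default build location above, the following line (which could also be added to your ~/.bashrc) makes namd2 findable without typing the full path:

```shell
# Make namd2 from the build above findable without a full path
# (assumes the default build location; adjust if you built elsewhere)
export PATH=$PATH:$HOME/software/NAMD_2.13_Source/Linux-x86_64-intel
```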

Below is a sample Slurm script for a NAMD job on one of the CPU clusters:

#!/bin/bash
#SBATCH --job-name=namd-cpu      # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=4               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G per cpu-core is default)
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
module load intel/19.0/64/19.0.5.281 intel-mpi/intel/2019.5/64

srun </path/to/>namd2 <configfile>

Note that the build above produces a flat MPI version, meaning it does not use threading. If you decide to use charmrun instead of srun, you will need to set -c 1, since the default is -c 2.

The ApoA1 benchmark system with 92K atoms can be obtained with:

$ wget https://www.ks.uiuc.edu/Research/namd/utilities/apoa1.tar.gz
$ tar zxf apoa1.tar.gz
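Once unpacked, the benchmark's configuration file can be substituted for `<configfile>` in the Slurm script above (the namd2 path placeholder is kept from earlier):

```shell
# Launch line in the Slurm script, pointed at the ApoA1 benchmark input
srun </path/to/>namd2 apoa1/apoa1.namd
```

Timing results typically appear on lines containing "Benchmark time:" in the NAMD log.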


Della

Della is a CPU cluster composed of different generations of Intel processors. The code can be built in the same way as above, but the Slurm script should be written to exclude the slowest nodes:

#!/bin/bash
#SBATCH --job-name=namd-cpu      # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=4               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G per cpu-core is default)
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourNetID>@princeton.edu
#SBATCH --exclude=della-r4c1n[1-16] # exclude the older Ivy Bridge nodes

module purge
module load intel/19.0/64/19.0.5.281 intel-mpi/intel/2019.5/64

srun </path/to/>namd2 <configfile>


TigerGPU

NAMD can only use GPUs on a single node. It is therefore fine to use the pre-compiled multicore-CUDA executable provided by the NAMD developers:

$ ssh <YourNetID>@tigergpu.princeton.edu
$ cd software  # or another directory
$ wget https://www.ks.uiuc.edu/Research/namd/2.13/download/412487/NAMD_2.13_Linux-x86_64-multicore-CUDA.tar.gz
$ tar zxf NAMD_2.13_Linux-x86_64-multicore-CUDA.tar.gz

Below is a Slurm script for running the single-node GPU version of NAMD:

#!/bin/bash
#SBATCH --job-name=namd-gpu      # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=1      # number of tasks per node
#SBATCH --cpus-per-task=4        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G per cpu-core is default)
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)
#SBATCH --gres=gpu:1             # number of gpus per node
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
PATH=$PATH:</path/to>/NAMD_2.13_Linux-x86_64-multicore-CUDA
charmrun namd2 +p${SLURM_CPUS_PER_TASK} apoa1/apoa1.namd

Users should find the optimal value of --cpus-per-task for their work.
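One way to run such a sweep, assuming the script above is saved as job.slurm (a hypothetical filename): sbatch command-line options override the matching #SBATCH directives, so the script itself need not change. The loop below only prints the submission commands so they can be reviewed before piping them to a shell:

```shell
# Print one submission command per candidate --cpus-per-task value.
# job.slurm is a hypothetical filename for the GPU script above.
for c in 2 4 8 16; do
    printf 'sbatch --cpus-per-task=%d job.slurm\n' "$c"
done > sweep.txt
cat sweep.txt   # review the commands, then run them with:  sh sweep.txt
```

Compare the benchmark timings from each run and keep the fastest setting.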


VMD

See this page for installing VMD on the clusters.