NAMD is a parallel molecular dynamics code designed for the simulation of biomolecular systems. You should build an MPI version of the code from source instead of using one of the pre-compiled executables, since those are not Slurm-aware. The directions for doing this depend on the cluster.
Container for GPU Clusters
We recommend using the NAMD 3 container from NVIDIA GPU Cloud. See our Singularity page for general directions on running containers.
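As a rough sketch, pulling the image from the NGC registry and running it with Singularity might look like the lines below. The registry path nvcr.io/hpc/namd is the NGC location for NAMD, but the tag, the executable name inside the container, and the run options are assumptions to be checked against the container's documentation on NGC.
$ singularity pull namd3.sif docker://nvcr.io/hpc/namd:<tag>
$ singularity run --nv namd3.sif namd3 <configfile>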
CPU Clusters (e.g., Della)
These directions are for building an MPI version of NAMD 2.13:
$ ssh <YourNetID>@della.princeton.edu  # or another CPU cluster
$ cd software  # or another directory
$ wget https://raw.githubusercontent.com/PrincetonUniversity/hpc_beginning_workshop/refs/heads/main/namd/namd_2.13_cpu_only.sh
$ bash namd_2.13_cpu_only.sh | tee build.log
This will build the namd2 executable in ~/software/NAMD_2.13_Source/Linux-x86_64-intel/. Consider adding this directory to your PATH environment variable.
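For example, assuming the default build location above, you could append a line like this to your ~/.bashrc:
export PATH=$PATH:$HOME/software/NAMD_2.13_Source/Linux-x86_64-intel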
Below is a sample Slurm script for a NAMD job on one of the CPU clusters:
#!/bin/bash
#SBATCH --job-name=namd-cpu      # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=4               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G per cpu-core is default)
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
module load intel-oneapi/2024.2 intel-mpi/oneapi/2021.13

srun </path/to/>namd2 <configfile>
Note that the build above produces a flat MPI version, meaning it does not use threading. If you decide to use charmrun instead of srun, you will need to set -c 1 since the default is -c 2.
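To submit the job, save the script above to a file (the name job.slurm below is only an example) and run:
$ sbatch job.slurm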
A benchmark system with 92K atoms can be obtained with:
$ wget https://www.ks.uiuc.edu/Research/namd/utilities/apoa1.tar.gz
$ tar zxf apoa1.tar.gz
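To run this benchmark with the build above, replace <configfile> in the Slurm script's srun line with the apoa1 configuration file. The path to namd2 below assumes the default build location:
srun $HOME/software/NAMD_2.13_Source/Linux-x86_64-intel/namd2 apoa1/apoa1.namd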
GPU Clusters
NAMD can only use GPUs on a single node, so it is fine to use the pre-built multicore-CUDA executable provided by the NAMD developers:
$ ssh <YourNetID>@tigergpu.princeton.edu
$ cd software  # or another directory
$ wget https://www.ks.uiuc.edu/Research/namd/2.13/download/412487/NAMD_2.13_Linux-x86_64-multicore-CUDA.tar.gz
$ tar zxf NAMD_2.13_Linux-x86_64-multicore-CUDA.tar.gz
Below is a Slurm script for running the single-node GPU version of NAMD:
#!/bin/bash
#SBATCH --job-name=namd-gpu      # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks-per-node=1      # number of tasks per node
#SBATCH --cpus-per-task=4        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G per cpu-core is default)
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)
#SBATCH --gres=gpu:1             # number of gpus per node
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-type=fail         # send email if job fails
#SBATCH --mail-user=<YourNetID>@princeton.edu

module purge
PATH=$PATH:</path/to>/NAMD_2.13_Linux-x86_64-multicore-CUDA

charmrun namd2 +p${SLURM_CPUS_PER_TASK} apoa1/apoa1.namd
Users should find the optimal value of --cpus-per-task for their work.
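One way to do this is to submit the same short benchmark several times while overriding the directive on the command line, since sbatch options given on the command line take precedence over those in the script, and then compare the run times reported in the NAMD output. The script name below is illustrative:
$ for n in 2 4 8 16; do sbatch --cpus-per-task=$n job.slurm; done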
VMD
See this GitHub page for installing VMD on the clusters.