Overview

Della is a general-purpose cluster for running serial and parallel production jobs. The cluster features both CPU and GPU nodes.

How to Access

To use the Della cluster, you must first request an account and then log in via SSH.

Requesting Access to Della

Access to the large clusters like Della is granted on the basis of brief faculty-sponsored proposals (see For large clusters: Submit a proposal or contribute).

If, however, you are part of a research group with a faculty member who has contributed to or has an approved project on Della, that faculty member can sponsor additional users by sending a request to [email protected]. Any non-Princeton user must be sponsored by a Princeton faculty or staff member for a Research Computer User (RCU) account.

Logging into Della

Option 1

Once you have been granted access to Della, you can connect by opening an SSH client and using the SSH command:

For CPU or GPU jobs using the Springdale Linux 8 operating system (VPN required from off-campus)

$ ssh <YourNetID>@della.princeton.edu

For GPU jobs (VPN required from off-campus)

$ ssh <YourNetID>@della-gpu.princeton.edu

For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ). If you have trouble connecting then see our SSH page.

Option 2

If you prefer to navigate Della through a graphical user interface rather than the Linux command line, there is also a web portal called MyDella (VPN required from off-campus):

https://mydella.princeton.edu

MyDella provides access to the cluster through a web browser. This enables easy file transfers and interactive jobs with RStudio, Jupyter, Stata and MATLAB.

To work with visualizations, or applications that require graphical user interfaces (GUIs), use Della's visualization nodes instead.

[Screenshot: the MyDella (Della OnDemand) web portal]

How to Use

Since Della is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Della also requires some knowledge of how to properly use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view our Guide to Princeton's Research Computing Clusters. Additional information specific to Della's file system, priority for job scheduling, etc. can be found below.

To attend a live session of either workshop, see our Trainings page for the next available workshop.
For more resources, see our Support - How to Get Help page.

Important Guidelines

The login nodes, della8 and della-gpu, should be used for interactive work only, such as compiling programs and submitting jobs as described below. No jobs should be run on the login nodes, other than brief tests that last no more than a few minutes and use only a few CPU-cores. If you'd like to run a Jupyter notebook, there are a few options for running Jupyter notebooks that avoid the login nodes.

Where practical, we ask that you entirely fill the nodes so that CPU-core fragmentation is minimized.
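For example, here is a minimal sketch of a whole-node request, assuming a 32-core cascade node (per-node core counts for each node type are listed in the hardware table below); the executable name is a placeholder:

#!/bin/bash
#SBATCH --nodes=1                # request one whole node
#SBATCH --ntasks-per-node=32     # use all 32 CPU-cores of a cascade node
#SBATCH --constraint=cascade     # pin the node type so the core count matches
#SBATCH --time=01:00:00

srun ./my_parallel_program       # hypothetical executable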

Hardware Configuration

Della is composed of both CPU and GPU nodes:

Processor                     | Nodes | Cores per Node | Memory per Node | Max Instruction Set | GPUs per Node
2.4 GHz Intel Broadwell       | 96    | 28             | 128 GB          | AVX2                | N/A
2.6 GHz Intel Skylake         | 64    | 32             | 190 GB          | AVX-512             | N/A
2.8 GHz Intel Cascade Lake    | 64    | 32             | 190 GB          | AVX-512             | N/A
2.6 GHz AMD EPYC Rome         | 20    | 128            | 768 GB          | AVX2                | 2 (A100)
2.8 GHz Intel Ice Lake        | 69    | 48             | 1000 GB         | AVX-512             | 4 (A100)
2.8 GHz Intel Ice Lake        | 2     | 48             | 1000 GB         | AVX-512             | 28 (MIG)
2.8 GHz ARM Neoverse-V2       | 1     | 72             | 575 GB          | --                  | 1 (GH200)*
2.1 GHz Intel Sapphire Rapids | 37    | 96             | 1000 GB         | AVX-512             | 8 (H100)**

Each GPU has either 10 GB, 40 GB or 80 GB of memory (see GPU Jobs below for more). The nodes of Della are connected with FDR Infiniband. Run the "shownodes" command for additional information about the nodes. Note that there are some private nodes that belong to specific departments or research groups. If you are in the "physics" Slurm account then use #SBATCH --partition=physics to access the private nodes (which offer 40 CPU-cores and 380 GB of memory per node). For more technical details about the Della cluster, see the full version of the systems table.

*There is one login node and one compute node with the Grace Hopper Superchip.
**The H100 GPUs are only available to PLI members.

Large-Memory Nodes

Della also has 15 large-memory nodes that were purchased by CSML and are available to all users:

Number of Nodes | Memory per Node | Cores per Node
1               | 1510 GB         | 48
1               | 2000 GB         | 56
10              | 3080 GB         | 96
3               | 6150 GB         | 96

The large-memory nodes may only be used for jobs that require large memory. Your jobs will run on these nodes if you request more memory than what is available on the regular nodes. The regular nodes provide 380 GB for those in "physics" and 190 GB for all other users. Users in the "physics" Slurm account should include the following line in their Slurm scripts:

#SBATCH --partition=physics

The large-memory nodes are monitored to ensure that only large-memory jobs run there. Use the "jobstats" command and Slurm email reports to see the memory usage of your jobs.
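For example, as a sketch (the memory value is only illustrative), requesting more than the 190 GB available on a regular node (380 GB for "physics" users) routes a job to the large-memory nodes:

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=500G        # exceeds the 190 GB of a regular node, so the job lands on a large-memory node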

To see which nodes are available, run this command:

$ shownodes -p datascience

All of the large-memory nodes feature Intel Cascade Lake CPUs which support the AVX-512 instruction set.

Job Scheduling (QOS Parameters)

All jobs must be run through the Slurm scheduler on Della. If a job would exceed any of the limits below, it will be held until it is eligible to run. Jobs should not specify the QOS in which they should run; this allows the Slurm scheduler to distribute jobs accordingly.

Jobs that run on the CPU nodes will be assigned a quality of service (QOS) according to the length of time specified for the job:

CPU Jobs

QOS    | Time Limit         | Jobs per User | Cores per User | Cores Available
test   | 61 minutes         | 2 jobs        | 30 nodes       | no limit
short  | 24 hours           | 300 jobs      | 300 cores      | no limit
medium | 72 hours           | 100 jobs      | 250 cores      | 2000 cores
vlong  | 144 hours (6 days) | 40 jobs       | 160 cores      | 1350 cores

Use the "qos" command to see the latest values for the table above.

GPU Jobs

QOS        | Time Limit         | Jobs per User | Nodes per User | GPUs per User
gpu-test   | 61 minutes         | 2 jobs        | no limit       | no limit
gpu-short  | 24 hours           | 30 jobs       | 30             | 35
gpu-medium | 72 hours           | 24 jobs       | 24             | 24
gpu-long   | 144 hours (6 days) | 7 jobs        | 16             | 16

Use the "qos" command to see the latest values for the table above. Jobs that run on the "mig" partition use the same QOSes as those above.

Jobs are further prioritized by the Slurm scheduler based on a number of factors: job size, run times, node availability, wait times, and percentage of usage over a 30-day period (fairshare). Note that the values above reflect the minimum limits in effect; the actual values may be higher.

Running on Specific CPUs

The CPU nodes of Della feature different generations of Intel CPUs: broadwell (2014), skylake (2015) and cascade (2019). This often explains why the execution time of your code varies between runs. To see the CPU generation of each node, run the "shownodes" command and look at the FEATURES column. The newest CPU generation listed determines the node type; for example, a node with features "cascade,skylake,broadwell" is a cascade node.

To cause your jobs to run only on the cascade nodes, add this line to your Slurm script:

#SBATCH --constraint=cascade

To run on cascade and skylake while ignoring broadwell, add this line to your Slurm script:

#SBATCH --exclude=della-r4c[1-4]n[1-16],della-r1c[3,4]n[1-16]

Illegal Instruction Errors

Some of the compute nodes on Della support a lower instruction set than the login nodes. This means that if you optimize your compiled code for the login nodes, you may encounter "illegal instruction" errors when running on certain compute nodes.

CPU Jobs

If you encounter an error such as the following when running on the CPU nodes:

Illegal instruction: illegal operand
Illegal instruction (core dumped)
Please verify that both the operating system and the processor support
    Intel(R) AVX512F, AVX512DQ, AVX512CD, AVX512BW and AVX512VL instructions.

Then your code was probably compiled with AVX-512 instructions and it landed on a broadwell compute node which only supports up to AVX2. Run the command "shistory -j" to see the node type of your recent jobs (single-node jobs only).

There are three solutions to this problem. The easiest is to exclude the broadwell nodes by adding the following line to your Slurm script:

#SBATCH --exclude=della-r4c[1-4]n[1-16],della-r1c[3,4]n[1-16]

A second solution is to build a so-called "fat binary" which can run on all three generations of CPUs. For the Intel compilers this is done using "-xCORE-AVX2 -axCORE-AVX512" instead of "-xHost".
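As a sketch of such a build (the module name, compiler invocation and source file are assumptions; run "module avail intel" to see the actual versions):

$ module load intel/2021.1                                # module name is an assumption
$ icc -O3 -xCORE-AVX2 -axCORE-AVX512 -o mycode mycode.c   # mycode.c is a placeholder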

The third solution is to rebuild the code after removing the optimization flags that add the AVX-512 instructions, such as "-xHost" and "-march=native".

The following error message can result from building your code on the cascade lake login node and then running it on a skylake node:

Please verify that both the operating system and the processor support Intel(R) AVX512_VNNI instructions.

The solution in this case is to add the following Slurm directive:

#SBATCH --constraint=cascade

GPU Jobs

The CPUs on della-gpu are AMD, supporting AVX2, while those on della8 are Intel, supporting AVX-512. The CPUs on the nodes with the 40 GB GPUs are AMD while the CPUs on the nodes with the 80 GB GPUs are Intel. If you compile your code on della8 with AVX-512 instructions and then run it on a GPU node with AMD CPUs, it will fail with:

Illegal instruction (core dumped)

There are two solutions to this problem. The simplest solution is to add the following line to your Slurm script so that you always land on the GPU nodes with Intel CPUs:

#SBATCH --constraint="intel&gpu80"

One downside to this approach is that your queue times for test jobs (less than 1 hour) may increase. The second solution is to recompile the code on the della-gpu login node.

OnDemand Jobs

Illegal instruction errors can also happen with OnDemand sessions. The solution is to choose "skylake" or "cascade" for the "Node type" when creating the session.

GPU Jobs

The login node to the GPU portion of Della is della-gpu.princeton.edu. Be aware of the following:

  • All GPU jobs must be submitted from either della-gpu or the della8 login node.
  • Note that the CPUs on della-gpu are AMD while those on della8 are Intel. If you are compiling from source then keep the CPU type in mind. Failure to do this can lead to "illegal instruction" errors. This arises due to compiler optimizations like "-xHost" and "-march=native". Note that the 40 GB A100 GPUs are on nodes with AMD CPUs while the 80 GB GPUs are on nodes with Intel CPUs.
  • The GPU nodes are for GPU jobs only. Do not run CPU jobs on the GPU nodes.

To connect to the login node:

$ ssh <YourNetID>@della-gpu.princeton.edu

Nodes with 10 GB of Memory per GPU

Della provides three GPU options: (1) a MIG GPU with 10 GB, (2) an A100 GPU with 40 GB and (3) an A100 GPU with 80 GB. A MIG GPU is essentially a small A100 GPU with about 1/7th the performance and memory of an A100. MIG GPUs are ideal for interactive work and for codes that do not need a powerful GPU. The queue time for a MIG GPU is on average much less than that for an A100.

A job can use a MIG GPU when:

  1. Only a single GPU is needed
  2. Only a single CPU-core is needed
  3. The required CPU memory is less than 32 GB
  4. The required GPU memory is less than 10 GB

Please use a MIG GPU whenever possible.

For batch jobs, add the following directives to your Slurm script to allocate a MIG GPU:

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --partition=mig

For interactive Slurm allocations, use the following:

$ salloc --nodes=1 --ntasks=1 --time=60:00 --gres=gpu:1 --partition=mig

In the command above, only the value of --time can be changed. All MIG jobs are assigned 32 GB of CPU memory, and the GPU memory of a MIG GPU is always 10 GB. If your job exceeds either of these memory limits then it will fail.

A MIG GPU can also be used for MyDella Jupyter notebooks as explained on the Jupyter page.

To see the number of available MIG GPUs, run this command and look at the "FREE" column:

$ shownodes -p mig

Nodes with 80 GB of Memory per GPU

There are 69 nodes with 4 GPUs per node. Each GPU has 80 GB of memory. To explicitly run on these nodes, use this Slurm directive:

#SBATCH --constraint=gpu80

Each node has two sockets with two GPUs per socket. The GPUs on the same socket are connected via NVLink.

Nodes with 40 GB of Memory per GPU

There are 20 nodes with 2 GPUs per node. Each GPU has 40 GB of memory. There is no Slurm constraint for specifically running on these nodes: by requesting a GPU you will land on either the 40 GB or 80 GB GPUs. To run only on the 80 GB GPUs, use the Slurm constraint above.

Della GPU

[Figure: the della-gpu subsystem, showing the login node and 20 compute nodes]

Each compute node has 2 AMD EPYC CPUs and 2 NVIDIA A100 GPUs, and each CPU has 64 CPU-cores. Users do not need to concern themselves with the details of the CPUs or the CPU memory: simply request up to 128 CPU-cores per node and up to 768 GB of memory per node in your Slurm script. The compute nodes are interconnected using FDR Infiniband, making multinode jobs possible.

Each A100 GPU has 40 GB of memory and 6912 CUDA cores (FP32) across 108 Streaming Multiprocessors. Make sure you are using version 12.x of the CUDA Toolkit and version 8.x of cuDNN when possible. Not all codes will be able to take full advantage of these powerful accelerators. Please use a MIG GPU when that is the case. See the GPU Computing page to learn how to monitor GPU utilization using tools like nvidia-smi and gpustat. See an example Slurm script for a GPU job.
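The linked example script is canonical; as a minimal sketch (the module name and Python script are placeholders), a single-GPU batch job might look like this:

#!/bin/bash
#SBATCH --job-name=gpu-job         # short name for the job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=8G                   # CPU memory for the job
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --time=01:00:00

module purge
module load anaconda3/2024.2       # module name is an assumption; check "module avail"
python myscript.py                 # hypothetical script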

More on GPU Jobs

To see your GPU utilization every 10 minutes over the last hour, run this command:

$ gpudash -u $USER

One can run "gpudash" without options to see the utilization across the entire cluster. You can also directly examine your GPU utilization in real time.

To see the number of available GPUs, run this command:

$ shownodes -p gpu,mig

PyTorch

PyTorch can take advantage of mixed-precision training on the A100 and H100 GPUs. Follow these directions to install PyTorch on della-gpu. Then see the docs on AMP. You should also try using a DataLoader with multiple CPU-cores. For more ways to optimize your PyTorch jobs see PyTorch Performance Tuning Guide.
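For example (a sketch), if your DataLoader is created with num_workers=4, request a matching number of CPU-cores in the Slurm script:

#SBATCH --cpus-per-task=4   # match the num_workers of the DataLoader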

TensorFlow

TensorFlow can take advantage of mixed-precision training on the A100 and H100 GPUs. Follow these directions to install TensorFlow on della-gpu. Then see the Mixed Precision Guide. You should also try using a data loader from tf.data to keep the GPU busy (watch YouTube video).

Compiling from Source

All GPU codes must be compiled on the della-gpu login node. When compiling, one should prefer the cudatoolkit/12.x modules. The A100 GPU has a compute capability of 8.0 while the H100 is 9.0. To compile a CUDA kernel for an A100, one might use the following commands:

$ module load cudatoolkit/12.4
$ nvcc -O3 -arch=sm_80 -o myapp myapp.cu

The compute capability should also be specified when building codes using CMake. For example, for LAMMPS and GROMACS one would use -DGPU_ARCH=sm_80 and -DGMX_CUDA_TARGET_SM=80 for an A100 GPU, respectively.
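As a sketch of the GROMACS case (all other configure options omitted, and the out-of-source build layout is an assumption):

$ module load cudatoolkit/12.4
$ cmake .. -DGMX_GPU=CUDA -DGMX_CUDA_TARGET_SM=80   # target the A100 (compute capability 8.0)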

For tips on compiling the CPU code of your GPU-enabled software see "AMD Nodes" on the Stellar page.

CUDA Multi-Process Service (MPS)

Certain codes that use GPUs may benefit from CUDA MPS (see ORNL docs), which enables multiple processes to concurrently share the resources on a single GPU. To use MPS simply add this directive to your Slurm script:

#SBATCH --gpu-mps

In most cases users will see no speed-up. Codes where the individual MPI processes underutilize the GPU should see a performance gain.
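As a sketch (the executable is a placeholder), an MPS job with several MPI ranks sharing a single GPU might look like:

#SBATCH --nodes=1
#SBATCH --ntasks=4        # four MPI ranks share the GPU
#SBATCH --gres=gpu:1      # a single GPU for all ranks
#SBATCH --gpu-mps         # enable CUDA MPS for this job

srun ./my_mpi_gpu_app     # hypothetical MPI+CUDA executable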

GPU Nodes for PLI

Members of Princeton Language and Intelligence (PLI) have exclusive access to what will eventually be 296 H100 SXM GPUs (37 nodes at 8 GPUs per node). The PLI portion of the Della cluster is designed for working with large AI models as described in this article. Each H100 GPU provides 80 GB of GPU memory and support for the FP8 numerical format. The GPUs within a node are connected in an all-to-all configuration with the high-speed interconnect NVLink. In addition to the standard network fabric, there is a dedicated Infiniband network (NDR) for internode GPU-GPU communication. There are 96 Intel CPU-cores and 1 TB of CPU memory per node. To see the availability of these nodes:

$ shownodes -p pli

You must be a member of PLI to run jobs on these nodes ([email protected]). To see if you are a member, run this command:

$ getent group pli

Core PLI Members

To run batch jobs, add the following directive to your Slurm script:

#SBATCH --partition=pli-c

For interactive jobs use, for example:

$ salloc --nodes=1 --ntasks=1 --mem=4G --time=01:01:00 --gres=gpu:1 --partition=pli-c --mail-type=begin

Do not run more than 2 interactive jobs simultaneously.

Campus PLI Members

To run batch jobs, add the following directives to your Slurm script:

#SBATCH --partition=pli
#SBATCH --account=<ACCOUNT>

For interactive jobs use, for example:

$ salloc --nodes=1 --ntasks=1 --mem=4G --time=01:01:00 --gres=gpu:1 --partition=pli --account=<ACCOUNT> --mail-type=begin

Do not run more than 2 interactive jobs simultaneously.

Note that there is currently no test queue, which means that you can only run jobs that require 61 minutes or more.

Grace Hopper Superchip

There is one login node and one compute node for experimenting with the GH200 Grace Hopper Superchip. The main novelty of this hardware is the coherent memory between the CPU and GPU.

If you have an account on the Della cluster then run the command below to connect to the login node:

$ ssh <YourNetID>@della-gh.princeton.edu

The CPU uses the ARM architecture, so you will need to recompile your code on the login node. Use the "module avail" command to see the available environment modules. To run on the compute node, add the following directive to your Slurm script:

#SBATCH --partition=grace

For interactive jobs use, for example:

$ salloc --nodes=1 --ntasks=1 --mem=4G --time=01:01:00 --gres=gpu:1 --partition=grace

Note that these nodes are provided on an experimental basis. If you encounter issues then please request support.

Globus

Research Computing has multiple Globus endpoints. An endpoint is known as a "Collection" in the Globus app. For Della /scratch/gpfs use "Princeton Della /scratch/gpfs" as shown below (replace "aturing" with your NetID):

[Screenshot: choosing the "Princeton Della /scratch/gpfs" collection in the Globus web app]

Running Software using the Previous Operating System

The operating system on most of the nodes of Della was upgraded from Springdale Linux 7 (SDL 7) to SDL 8 in the winter of 2022. Users should reinstall or recompile their codes on the della8 login node. When this is not possible, we provide a compatibility tool for effectively running software under the old operating system (SDL 7). This involves prepending the command you want to run with /usr/licensed/bin/run7. Below are a few examples:

$ /usr/licensed/bin/run7 cat /etc/os-release   # run a single command under SDL 7
$ /usr/licensed/bin/run7 bash                  # start an interactive SDL 7 shell
Singularity> source /home/aturing/env.sh
Singularity> solar -i 42 -d /scratch/gpfs/aturing/output

Visualization Nodes

The Della cluster has two dedicated nodes for visualization and post-processing tasks, called della-vis1 and della-vis2.

Hardware Details

The della-vis1 node features 80 CPU-cores, 1 TB of memory and an A100 GPU with 40 GB of memory.
The della-vis2 node features 28 CPU-cores, 256 GB of memory and four P100 GPUs with 16 GB of memory per GPU.

Both nodes have internet access.

How to Use the Visualization Node

Users can connect via SSH with one of the following commands (VPN required if connecting from off-campus)

$ ssh <YourNetID>@della-vis1.princeton.edu
$ ssh <YourNetID>@della-vis2.princeton.edu

but to work with graphical applications on the visualization node, see our guide to working with visualizations and graphical user-interface (GUI) applications.

Note that there is no job scheduler on della-vis1 or della-vis2, so please be considerate of other users when using this resource. To ensure that the system remains a shared resource, there are limits in place preventing one individual from using all of the resources. You can check your activity with the command "htop -u $USER".

In addition to visualization, the nodes can be used for tasks that are incompatible with the Slurm job scheduler, or for work that is not appropriate for the Della login nodes (such as downloading large amounts of data from the internet).

Filesystem Usage and Quotas

/home (shared via NFS to all the compute nodes) is intended for scripts, source code, executables and small static data sets that may be needed as input for codes.

/scratch/gpfs (shared via GPFS to all the compute nodes) is intended for dynamic data that requires high-bandwidth I/O. This is where all users should write their job output. Files on /scratch/gpfs are NOT backed up so data should be moved to persistent storage as soon as it is no longer needed for computations.

/tigress and /projects are the long-term storage systems. They are shared by all of the large clusters via a single, slow connection and they are designed for non-volatile files only (i.e., files that do not change over time). Writing the output of actively running jobs to /tigress or /projects may adversely affect the work of other users and it may cause your jobs to run inefficiently or fail. Instead, write your output to the much faster /scratch/gpfs/<YourNetID> and then, after the job completes, copy or move the output to /tigress or /projects if a backup is desired.
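For example, as a sketch (the directory names are placeholders), you might copy results over after a job completes:

$ rsync -av /scratch/gpfs/$USER/myrun/ /projects/MYGROUP/myrun/   # myrun and MYGROUP are placeholders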

To request a quota increase, please complete the Quota Increase form. If you need an increase in the number of files then indicate this in the "Notes" field of the form.

/tmp is local scratch space available on each compute node. It is the fastest filesystem. Nodes have about 130 to 1400 GB of space available on /tmp. Learn more about local scratch.
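As a sketch of the common staging pattern (file and program names are placeholders): copy inputs to /tmp, compute there, then copy the results back before the job ends:

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

cp /scratch/gpfs/$USER/input.dat /tmp    # stage input to fast node-local storage
cd /tmp
./myprog input.dat > output.dat          # hypothetical executable
cp /tmp/output.dat /scratch/gpfs/$USER   # copy results back; /tmp is not persistent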

See the Data Storage page for complete details about the file and storage systems.

Running Third-party Software

If you are running 3rd-party software whose characteristics (e.g., memory usage) you are unfamiliar with, please check your job after 5-15 minutes using 'top' or 'ps -ef' on the compute nodes being used. If the memory usage is growing rapidly, or close to exceeding the per-processor memory limit, you should terminate your job before it causes the system to hang or crash. You can determine on which node(s) your job is running using the "scontrol show job <jobnumber>" command.
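As a sketch (the job ID and node name are placeholders), one way to check:

$ scontrol show job 1234567 | grep -i nodelist   # 1234567 is a placeholder job ID
$ ssh della-r1c3n1                               # hypothetical node name taken from NodeList
$ top -u $USER                                   # watch CPU usage and resident memory (RES)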

Maintenance Window

Della will be down for routine maintenance on the second Tuesday of every month from approximately 6 AM to 2 PM. This includes the associated filesystems of /scratch/gpfs, /projects and /tigress. Please mark your calendar. Jobs submitted close to downtime will remain in the queue unless they can be scheduled to finish before downtime (see more). Users will receive an email when the cluster is returned to service.

Wording of Acknowledgement of Support and/or Use of Research Computing Resources

"The author(s) are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University which is consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing."

"The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University."