Della


Overview

Della is intended as a platform for running both parallel and serial production jobs.  The system has grown over time and includes groups of nodes using different generations of Intel processor technology. The cluster features 20 AMD nodes with two NVIDIA A100 GPUs per node.

Some Technical Specifications:
Della is a 4-rack Intel computer cluster, originally acquired through a joint effort of Astrophysics, the Lewis-Sigler Institute for Integrative Genomics, PICSciE, and OIT. All nodes are connected via an FDR Infiniband high-bandwidth, low-latency network. For more hardware details, see the Hardware Configuration section below.

 

How to Access the Della Cluster

To use the Della cluster, you must first request an account and then log in through SSH.

  1. Requesting Access to Della

    Access to the large clusters like Della is granted on the basis of brief faculty-sponsored proposals (see For large clusters: Submit a proposal or contribute).

    If, however, you are part of a research group with a faculty member who has contributed to or has an approved project on Della, that faculty member can sponsor additional users by sending a request to cses@princeton.edu. Any non-Princeton user must be sponsored by a Princeton faculty or staff member for a Research Computer User (RCU) account.

  2. Logging into Della

    Once you have been granted access to Della, you can connect via SSH.

    For CPU jobs (VPN required from off-campus):
    $ ssh <YourNetID>@della.princeton.edu
    
    For GPU jobs (VPN required from off-campus):
    $ ssh <YourNetID>@della-gpu.princeton.edu
    

    For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ). If you have trouble connecting then see our SSH page.

    If you prefer a graphical user interface to the Linux command line, the MyDella web portal at mydella.princeton.edu provides access to the cluster through a web browser. This enables easy file transfers and interactive jobs in RStudio, Jupyter, Stata, and MATLAB. A VPN is required to access the web portal from off-campus; we recommend using the GlobalProtect VPN service.

 

How to Use the Della Cluster

Since Della is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Della also requires some knowledge of how to properly use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view the material associated with our Getting Started with the Research Computing Clusters workshop. Additional information specific to Della's file system, job scheduling priorities, and more can be found below.

To attend a live session of either workshop, see our Trainings page for the next available workshop.
For more resources, see our Support - How to Get Help page.

 

Important Guidelines

The login node, della5, should be used for interactive work only, such as compiling programs and submitting jobs as described below. No jobs should be run on the login node, other than brief tests that last no more than a few minutes. Where practical, we ask that you entirely fill the nodes so that CPU-core fragmentation is minimized.
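For example, a minimal sketch of Slurm directives that fill one node (the core count assumes a 32-core Skylake or Cascade Lake node; see the hardware table below):

#SBATCH --nodes=1
#SBATCH --ntasks=32        # request all 32 cores so the node is fully used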

If you'd like to run a Jupyter notebook, there are a few options for doing so that avoid running it on Della's login node.

 

Hardware Configuration

Della is composed of both CPU and GPU nodes:

Processor                    Nodes   Cores per Node   Memory per Node   Max Instruction Set   GPUs per Node
2.4 GHz Intel Broadwell      96      28               128 GB            AVX2                  N/A
2.6 GHz Intel Skylake        64      32               190 GB            AVX-512               N/A
2.8 GHz Intel Cascade Lake   64      32               190 GB            AVX-512               N/A
2.6 GHz AMD EPYC Rome        20      128              768 GB            AVX2                  2

Each GPU has 40 GB of memory. The nodes of Della are connected with FDR Infiniband. Run the "shownodes" command for additional information about the nodes. Note that there are some private nodes that belong to specific departments or research groups. If you are in the "physics" Unix group, use #SBATCH --partition=physics to access the private physics nodes. There are a small number of large-memory nodes (>1.5 TB) available to all users that are not mentioned in the table above. These may only be used for jobs that require more than 190 GB of memory. Please write to cses@princeton.edu for more information. For more technical details about the Della cluster, see the full version of the systems table.

 

FPGAs

Della provides four Intel Field-Programmable Gate Arrays (FPGAs). If you have an account on Della, use this command to connect:

$ ssh <YourNetID>@della-fpga1.princeton.edu

The node della-fpga2.princeton.edu is also available. Each node has 2 FPGAs. See this Getting Started Guide by Bei Wang.

Consider watching "FPGA Programming for Software Developers Using oneAPI" by Intel.

 

Job Scheduling (QOS Parameters)

All jobs on Della must be run through the Slurm scheduler. If a job would exceed any of the limits below, it will be held until it is eligible to run. Jobs should not specify the QOS in which they should run; allow the Slurm scheduler to distribute jobs accordingly.

Jobs will be assigned a quality of service (QOS) according to the length of time specified for the job:

QOS      Time Limit           Jobs per User   Cores per User   Cores Available
test     61 minutes           2 jobs          [30 nodes]       no limit
short    24 hours             350 jobs        400 cores        no limit
medium   72 hours             200 jobs        300 cores        1600 cores
vlong    144 hours (6 days)   50 jobs         160 cores        1300 cores

Jobs are further prioritized by the Slurm scheduler based on a number of factors: job size, run times, node availability, wait times, and percentage of usage over a 30-day period (fairshare). Also, these values reflect the minimum limits in effect, and the actual values may be higher. Please use the "qos" command to see the limits in effect at the current time.
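As an illustration only (the job name, resource request, and executable below are placeholders), a job that asks for a 12-hour time limit would fall into the short QOS:

#!/bin/bash
#SBATCH --job-name=myjob        # placeholder name
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --time=12:00:00         # 12 hours is within the 24-hour limit of the short QOS

srun ./myprogram                # placeholder executable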

 

Running on Specific CPUs

The CPU nodes of Della feature different generations of Intel CPUs. The newer cascade nodes are faster than the older broadwell nodes, with skylake sitting in between. This can often explain why the execution time of your code varies. To see the CPU generation of each node, run the "shownodes" command and look at the FEATURES column; the newest CPU generation listed determines the node type (for example, "skylake,broadwell,haswell" is a skylake node). To run your jobs only on the newer cascade nodes, add this line to your Slurm script:

#SBATCH --constraint=cascade

Note that to run only on skylake or broadwell nodes you will need to use the --nodelist directive, since the cascade nodes are also set to run jobs specified for lower generations.
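As a sketch, the directive looks like the following (the node names are placeholders; use the "shownodes" command to look up real node names and their generations):

#SBATCH --nodelist=della-r1c1n[1-4]    # placeholder node names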

 

GPU Jobs

Della provides 20 AMD EPYC Rome nodes with 2 NVIDIA A100 GPUs per node. The login node to the GPU portion of Della is della-gpu.princeton.edu. Be aware of the following:

  • All GPU jobs must be submitted from the della-gpu login node.
  • If you are compiling from source then this must be done on the della-gpu login node.
  • The GPU nodes are for GPU jobs only. Do not run CPU jobs on the GPU nodes.

To connect to the login node:

$ ssh <YourNetID>@della-gpu.princeton.edu

Be aware that della-gpu and the GPU compute nodes are running a variant of the RHEL 8 operating system. You will notice that the environment modules are different (see "module avail") and the system version of GCC is 8.3.1. PNI has priority access to these nodes since they provided the funding.

[Figure: Della GPU cluster layout]

The figure above shows the login node and 20 compute nodes. Each compute node has 2 AMD EPYC CPUs and 2 NVIDIA A100 GPUs. Each CPU has 64 CPU-cores. Users do not need to concern themselves with the details of the CPUs or the CPU memory; simply choose up to 128 CPU-cores per node and up to 768 GB of memory per node in your Slurm script. The compute nodes are interconnected using FDR Infiniband, making multinode jobs possible.

Each A100 GPU has 40 GB of memory and 6912 CUDA cores (FP32) across 108 Streaming Multiprocessors. Make sure you are using version 11.x of the CUDA Toolkit and version 8.x of cuDNN when possible. Not all codes will be able to take full advantage of these powerful accelerators. See the GPU Computing page to learn how to monitor GPU utilization using tools like nvidia-smi and gpustat. See an example Slurm script for a GPU job.
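For orientation, a minimal sketch of such a script is shown below (the executable is a placeholder and the cudatoolkit module version is an assumption; adjust both to your own code and environment):

#!/bin/bash
#SBATCH --job-name=gpu-job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --gres=gpu:1            # request one of the two A100 GPUs on the node
#SBATCH --time=01:00:00

module purge
module load cudatoolkit/11.3    # assumed module version

./mygpuapp                      # placeholder executable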

To see your GPU utilization every 10 minutes over the last hour, run this command:

$ gpudash -u $USER

One can run "gpudash" without options to see the utilization across the entire cluster. You can also directly examine your GPU utilization in real time.

To see the number of available GPUs, run this command:

$ shownodes -p gpu

PyTorch

PyTorch can take advantage of mixed-precision training on the A100 GPUs. Follow these directions to install PyTorch on della-gpu. Then see the docs on AMP. You should also try using a DataLoader with multiple CPU-cores. For more ways to optimize your PyTorch jobs see "PyTorch Performance Tuning Guide" from GTC 2021.
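For example, a hedged sketch of the Slurm side of this advice: request several CPU-cores so that the DataLoader's worker processes have cores to run on (the numbers are illustrative):

#SBATCH --cpus-per-task=8      # lets DataLoader(num_workers=8) keep the GPU fed
#SBATCH --gres=gpu:1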

TensorFlow

TensorFlow can take advantage of mixed-precision training on the A100 GPUs. Follow these directions to install TensorFlow on della-gpu. Then see the Mixed Precision Guide. You should also try using a data loader from tf.data to keep the GPU busy (watch YouTube video). For more ways to optimize your TensorFlow jobs see "Train Faster: A Guide to Tensorflow Performance Optimization" from GTC 2021.

Compiling from Source

All GPU codes must be compiled on the della-gpu login node. When compiling, one should prefer the cudatoolkit/11.x modules. The A100 GPU has a compute capability of 8.0. To compile a CUDA kernel one might use the following commands:

$ module load cudatoolkit/11.3
$ nvcc -O3 -arch=sm_80 -o myapp myapp.cu

The compute capability should also be specified when building codes using CMake. For example, for LAMMPS and GROMACS one would use -DGPU_ARCH=sm_80 and -DGMX_CUDA_TARGET_SM=80, respectively.

For tips on compiling the CPU code of your GPU-enabled software see "AMD Nodes" on the Stellar page.

CUDA Multi-Process Service (MPS)

Certain MPI codes that use GPUs may benefit from CUDA MPS (see ORNL docs), which enables multiple processes to concurrently share the resources on a single GPU. To use MPS simply add this directive to your Slurm script:

#SBATCH --gpu-mps

In most cases users will see no speed-up. Codes where the individual MPI processes underutilize the GPU should see a performance gain.
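A minimal sketch of a Slurm script that shares a single A100 among several MPI ranks via MPS (the executable name is a placeholder):

#SBATCH --nodes=1
#SBATCH --ntasks=8             # 8 MPI ranks share the GPU through MPS
#SBATCH --gres=gpu:1
#SBATCH --gpu-mps

srun ./my_mpi_gpu_app          # placeholder executable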

 

Filesystem Usage and Quotas

/home (shared via NFS to all the compute nodes) is intended for scripts, source code, executables and small static data sets that may be needed as standard input/configuration for codes.

/scratch/network (shared via NFS to all the compute nodes) is intended for dynamic data that doesn't require high bandwidth i/o, such as storing final output for a compute job. You may create a directory /scratch/network/myusername and use this to place your temporary files. Files are NOT backed up, so this data should be moved to persistent storage once it is no longer needed for continued computation.

/scratch/gpfs (shared via GPFS to all the compute nodes, 800 TB) is intended for dynamic data that requires higher bandwidth i/o. Files are NOT backed up so this data should be moved to persistent storage as soon as it is no longer needed for computations.

/tigress (shared via GPFS to all TIGRESS resources, 6 PB) is intended for more persistent storage and should provide high bandwidth i/o (20 GB/s aggregate bandwidth for jobs across 16 or more nodes). Users are provided with a default quota of 512 GB when they request a directory in this storage, and that default can be increased by requesting more. We do ask people to consider what they really need, and to make sure they regularly clean out data that is no longer needed since this filesystem is shared by the users of all our systems.

/scratch (local to each compute node) is intended for data local to each task of a job, and it should be cleaned out at the end of each job. Nodes have about 130 GB to 1400 GB of local scratch space available, depending on the node.
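A typical workflow, sketched here with placeholder directory and file names, is to run jobs out of /scratch/gpfs and copy the output you want to keep to /tigress:

$ mkdir -p /scratch/gpfs/<YourNetID>/myproject      # not backed up
$ cd /scratch/gpfs/<YourNetID>/myproject
$ sbatch job.slurm                                  # run the job out of /scratch/gpfs
$ cp -r results /tigress/<YourNetID>/myproject      # copy finished output to persistent storage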
 

Running Third-party Software

If you are running third-party software whose characteristics (e.g., memory usage) you are unfamiliar with, please check your job after 5-15 minutes using 'top' or 'ps -ef' on the compute nodes being used. If the memory usage is growing rapidly, or is close to exceeding the per-processor memory limit, you should terminate your job before it causes the system to hang or crash. You can determine which node(s) your job is running on using the "scontrol show job <jobnumber>" command.
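For example, one might locate the node and watch the job's processes like this (the node name placeholder is whatever scontrol reports):

$ scontrol show job <jobnumber> | grep NodeList      # find the node(s) running the job
$ ssh <nodename>                                     # connect to one of those nodes
$ top -u $USER                                       # watch the memory usage of your processes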


Maintenance Window

Della will be down for routine maintenance on the second Tuesday of every month from approximately 6 AM to 2 PM. This includes the associated filesystems of /scratch/gpfs, /projects and /tigress. Please mark your calendar. Jobs submitted close to downtime will remain in the queue unless they can be scheduled to finish before downtime (see more).

 

Wording of Acknowledgement of Support and/or Use of Research Computing Resources

"The author(s) are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University which is consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing."

"The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University."