Overview

The Adroit cluster is intended for running smaller jobs, as well as developing, debugging, and testing codes. Despite being one of our smaller clusters, Adroit is built like our larger clusters (such as Della or Tiger), and is therefore ideal to use as training for eventual work on the larger clusters.

Some Technical Specifications:
Adroit is a Beowulf cluster acquired through a partnership between Dell Computer Corporation and OIT. The compute nodes on the "all" partition have 64 CPU-cores and 512 GB of memory, while those on the "class" partition have 32 CPU-cores and 384 GB of memory. In addition, there is one AMD node with 128 CPU-cores, three nodes with GPUs, and a visualization node. Run the "snodes" command for more information, and see the Hardware Configuration section below for more details.

[Figure: Schematic diagram of the Adroit cluster, showing its three head nodes connected to several compute nodes, three GPU nodes, and the file systems.]

How to Access the Adroit Cluster

To use the Adroit cluster, you must first request an account on Adroit and then log in through SSH.

Requesting Access to Adroit

To request an account on Adroit, please fill out the Adroit Registration form.

Logging into Adroit

Once you have been granted access to Adroit, you can connect by opening an SSH client and typing the following SSH command (VPN required from off-campus):

$ ssh <YourNetID>@adroit.princeton.edu

For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ). If you have trouble connecting then see our SSH page.

MyAdroit Web Portal

If you prefer to navigate Adroit through a graphical user interface rather than the Linux command line, Adroit has a web portal option called MyAdroit (VPN required from off-campus):

https://myadroit.princeton.edu

The web portal enables easy file transfers and interactive jobs with RStudio, Jupyter, Stata, Mathematica and MATLAB. A graphical desktop environment is also available on a compute node or the visualization node.

For running Jupyter notebooks in a course setting, see Jupyter for Classes.

How to Use the Adroit Cluster

Since Adroit is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Adroit also requires some knowledge of the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view our Guide to Princeton's Research Computing Clusters. Additional information specific to Adroit's file system, priority for job scheduling, and more can be found below.

To work with visualizations, or applications that require graphical user interfaces (GUIs), use Adroit's visualization node.

To attend a live session of either workshop, see our Trainings page.
For more resources, see our Support - How to Get Help page.

Important Guidelines

The head node on Adroit should be used for interactive work only, such as compiling programs and submitting jobs as described below. No jobs should be run on the head node, other than brief tests that last no more than a few minutes.
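For interactive tests that need more than a few minutes, request a session on a compute node instead. As a minimal sketch (the resource values are illustrative):

$ salloc --nodes=1 --ntasks=1 --mem=4G --time=00:20:00

When the allocation is granted, Slurm starts a shell on a compute node; type exit when you are done to release it.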

If you'd like to run a Jupyter notebook, we offer a few options for doing so while avoiding Adroit's head node.

Hardware Configuration

For more technical details, please see the full version of the systems table.

CPU Cluster

Processor                 Nodes   Cores per Node   Memory per Node
2.6 GHz Intel Skylake     4       32               384 GB
2.0 GHz Intel Ice Lake    5       64               512 GB

GPU Nodes

Processor                       GPU          Nodes   CPU-Cores per Node   GPUs per Node   CPU Memory per Node   Memory per GPU
2.6 GHz Intel Sapphire Rapids   A100         1       48                   4               1 TB                  80 GB
2.8 GHz Intel Ice Lake          A100 (MIG)   1       48                   8               1 TB                  20 GB
2.6 GHz Intel Skylake           V100         1       56                   4               770 GB                32 GB

Use the "snodes" command for more. The four A100 GPUs with 40 GB of memory have been converted into eight GPUs with 20 GB of memory using MIG. Each of the eight MIG GPUs offers about one-half of the performance of a full A100 GPU.

Job Scheduling (QOS Parameters)

All jobs must be run through the scheduler on Adroit. If a job would exceed any of the limits below, it will be held until it is eligible to run. A job should not specify the qos (quality of service) in which it should run; instead, allow the scheduler to route the job according to the resources it requests. The tables below apply to jobs submitted via Slurm and do not necessarily apply to MyAdroit.

CPU Jobs

QOS      Time Limit   Jobs per User   Cores per User            Cores Available
test     15 minutes   2 jobs          80 cores (5 nodes/user)   no limit
short    4 hours      32 jobs         80 cores                  no limit
medium   24 hours     4 jobs          64 cores                  100 cores (5 nodes/all users)
long     7 days       2 jobs          64 cores                  80 cores (4 nodes/all users)

The values above reflect the minimum limits in effect and the actual values may be higher. Please use the "qos" command to see the limits in effect at the current time.
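Below is a minimal sketch of a CPU batch script (the job name and program are illustrative). Note that no qos is specified; based on the table above, a two-hour job like this one would be routed to the short QOS:

#!/bin/bash
#SBATCH --job-name=cpu-demo       # hypothetical job name
#SBATCH --nodes=1                 # node count
#SBATCH --ntasks=1                # total task count
#SBATCH --cpus-per-task=4         # CPU-cores per task
#SBATCH --mem-per-cpu=2G          # memory per CPU-core
#SBATCH --time=02:00:00           # runtime limit (HH:MM:SS); determines the QOS

module purge
./my_program                      # hypothetical executable

Submit the script with "sbatch job.slurm" and monitor it with "squeue -u $USER".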

GPU Jobs

QOS          Time Limit   GPUs per User
gpu-test     15 minutes   no limit
gpu-short    4 hours      4
gpu-medium   24 hours     2
gpu-long     2 days       2

The values above reflect the minimum limits in effect and the actual values may be higher. Please use the "qos" command to see the limits in effect at the current time.

Running GPU Jobs

There are three GPU nodes on Adroit. Run the following command to see which GPUs are free:

$ shownodes -p gpu

Use the following Slurm directive to request a GPU for your job:

#SBATCH --gres=gpu:1

To explicitly run on the A100 GPUs, add the following directive:

#SBATCH --constraint=a100

Or to run on the V100 GPUs use:

#SBATCH --constraint=v100

To request an A100 with 80 GB of memory, use this constraint:

#SBATCH --constraint=gpu80
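Putting these pieces together, below is a minimal sketch of a complete GPU batch script (the job name, constraint choice, and program are illustrative):

#!/bin/bash
#SBATCH --job-name=gpu-demo       # hypothetical job name
#SBATCH --nodes=1                 # node count
#SBATCH --ntasks=1                # total task count
#SBATCH --cpus-per-task=1         # CPU-cores per task
#SBATCH --time=01:00:00           # runtime limit (HH:MM:SS)
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --constraint=v100         # optionally target the V100 node

module purge
./my_gpu_program                  # hypothetical GPU-enabled executable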

Use the qos command to see the restrictions on the number of GPUs per QOS. For instance, a job using four GPUs must have a runtime of less than four hours. See the "Job Scheduling" section above for the exact limits. See the priority page for information on estimating when your queued job will run. For measuring GPU utilization, see the GPU Computing page.

A common mistake is to run a CPU-only code on a GPU. Only codes that have been explicitly written to run on a GPU can take advantage of a GPU. Read the documentation for the code that you are using to see if it can use a GPU.

Running Software using the Previous Operating System

The operating system on Adroit was upgraded from SDL 7 to SDL 8 in the summer of 2021. We provide two compatibility options for running software under the old operating system (SDL 7). The first is to create an interactive desktop using the MyAdroit web portal (see above) by choosing "Interactive Apps" then "Desktop" and selecting the "mate" option for RHEL 7 on the configuration page. The second is command-line only: prepend the command you want to run with /usr/licensed/bin/run7. Below are a few examples:

$ /usr/licensed/bin/run7 cat /etc/os-release    # run a single command under SDL 7
$ run7 bash                                     # start an interactive SDL 7 shell (a Singularity container)
Singularity> source /usr/licensed/cadence/profile.20200824
Singularity> virtuoso

Visualization Node

The Adroit cluster has a dedicated node for visualization and post-processing tasks, called adroit-vis.

Hardware Details

This node features 64 CPU-cores, 512 GB of RAM, and two A100 GPUs, each with 80 GB of GPU memory.

This node has internet access.

How to Use the Visualization Node

Users can connect via SSH with the following command (VPN required if connecting from off-campus),

$ ssh <YourNetID>@adroit-vis.princeton.edu 

but to work with graphical applications on the visualization node, see our guide to working with visualizations and graphical user-interface (GUI) applications.

Note that there is no job scheduler on adroit-vis, so please be considerate of other users when using this resource. To ensure that the system remains a shared resource, there are limits in place preventing one individual from using all of the resources. You can check your activity with the command "htop -u $USER".

In addition to visualization, the node can be used for tasks that are incompatible with the Slurm job scheduler, or for work that is not appropriate for the login node (e.g., downloading large amounts of data from the Internet).

Filesystem Usage and Quotas

Please use /scratch/network/<YourNetID> for the output of running jobs and to store large datasets, as described on the Data Storage page. This is an NFS-mounted shared space of close to 24 TB. Run the checkquota command to see your usage. Files are NOT backed up, so move any important files to long-term storage (e.g., your /home directory or your local machine).

Local scratch space is available in /tmp on every compute node. Since this storage is not shared across all nodes, it is ideally suited for temporary output.
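Below is a minimal sketch of this pattern in a batch script (paths and program names are illustrative): stage input to /tmp, compute there, then copy the results back to shared scratch before the job ends.

#!/bin/bash
#SBATCH --job-name=tmp-stage      # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# stage input from shared scratch to fast node-local storage
cp /scratch/network/$USER/input.dat /tmp
cd /tmp
/scratch/network/$USER/my_program input.dat   # hypothetical executable that writes output.dat to /tmp

# /tmp is per-node and not retained after the job, so copy results back
cp /tmp/output.dat /scratch/network/$USER/
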
Maintenance Window

Adroit will be down for maintenance between semesters for a period of a few hours, as announced on the mailing list. No significant changes are made during the fall and spring semesters.