Adroit

The Adroit cluster is intended for running smaller jobs, as well as developing, debugging, and testing codes. Despite being one of our smaller clusters, Adroit is built like our larger clusters (such as Della or Tiger), and it is therefore an ideal training ground for eventual work on the larger clusters.

Some Technical Specifications:
Adroit is a Beowulf cluster acquired through a partnership between Dell Computer Corporation and OIT. Each compute node has thirty-two 2.6 GHz Intel Skylake CPU-cores and 384 GB RAM. There are also two nodes with GPUs and a visualization node. For more details, see the Hardware Configuration section below.

 

How to Access the Adroit Cluster

To use the Adroit cluster, you must first request an account and then log in through SSH.
 

  1. Requesting Access to Adroit

    If you would like an account on Adroit, please fill out the Adroit Registration form.
     
  2. Logging into Adroit
     

    Once you have been granted access to Adroit, you can connect via SSH (VPN required from off-campus):

    $ ssh <YourNetID>@adroit.princeton.edu
    

    For more on how to use SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ). If you have trouble connecting, see our SSH page.

    MyAdroit Web Portal

    If you prefer to navigate Adroit through a graphical user interface rather than the Linux command line, Adroit has a web portal option called MyAdroit:

    https://myadroit.princeton.edu

    The web portal enables easy file transfers and interactive jobs with RStudio, Jupyter, Stata and MATLAB. A VPN is required to access the web portal from off-campus. We recommend using the GlobalProtect VPN service.

 

How to Use the Adroit Cluster

Since Adroit is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Adroit also requires some knowledge of how to use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view the material associated with our Getting Started with the Research Computing Clusters workshop. Additional information specific to Adroit's file system, priority for job scheduling, etc. can be found below.
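As a quick illustration, the module system is controlled with a handful of commands (anaconda3 below is only an example module name; run module avail to see what is actually installed on Adroit):

$ module avail              # list the software modules available on the cluster
$ module load anaconda3     # load a module into your environment (example name)
$ module list               # show the modules currently loaded
$ module purge              # unload all loaded modules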

To attend a live session of either workshop, see our Trainings page for the next available workshop.
For more resources, see our Support - How to Get Help page.

 

Important Guidelines

The head node on Adroit should be used only for interactive work such as compiling programs and submitting jobs as described below. No jobs should be run on the head node other than brief tests that last no more than a few minutes.

If you'd like to run a Jupyter notebook, there are several options for doing so that avoid running it on Adroit's head node.
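For interactive tests longer than a few minutes, one option is to request an interactive session on a compute node through the scheduler instead of using the head node. A minimal sketch (the core, memory, and time values are only examples):

$ salloc --nodes=1 --ntasks=1 --cpus-per-task=4 --mem=8G --time=00:30:00

Depending on how Slurm is configured, the allocation either places your shell directly on a compute node or lets you launch commands there with srun; type exit to release it.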

 

Hardware Configuration

For more technical details, click here to see the full version of the systems table.

CPU Cluster

Processor               Nodes   Cores per Node   Memory per Node   Interconnect
2.6 GHz Intel Skylake   9       32               384 GB            FDR Infiniband

 

GPU Nodes

Processor                GPU    Nodes   CPU-Cores per Node   GPUs per Node   CPU Memory per Node   Memory per GPU
2.8 GHz Intel Ice Lake   A100   1       48                   4               1 TB                  40 GB
2.6 GHz Intel Skylake    V100   1       40                   4               770 GB                32 GB

Use the "snodes" command for more. The two GPU nodes are standalone. That is, they are not interconnected with each other or the CPU nodes.

 

Job Scheduling (QOS Parameters)

All jobs must be run through the scheduler on Adroit. If a job would exceed any of the limits below, it will be held until it is eligible to run. A job should not specify the qos (quality of service) into which it should run, allowing the scheduler to route the job according to the resources it requires. The tables below apply to jobs submitted via Slurm and do not necessarily apply to MyAdroit.

CPU Jobs

QOS      Time Limit   Jobs per User   Cores per User                 Cores Available
test     2 hours      2 jobs          80 cores, 5 nodes/user         no limit
short    4 hours      32 jobs         80 cores                       no limit
medium   24 hours     4 jobs          64 cores, 5 nodes/all users    100 cores
long     7 days       2 jobs          64 cores, 4 nodes/all users    80 cores
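
Since the scheduler chooses a QOS from the time and resources a job requests, a job script only needs to state its requirements. Below is a minimal sketch of a serial CPU job script; my_program is a placeholder for your own executable, and the resource values are only examples (with a 30-minute limit and a single core, this job would normally fall within the test QOS limits):

#!/bin/bash
#SBATCH --job-name=serial-test     # short name for the job
#SBATCH --nodes=1                  # node count
#SBATCH --ntasks=1                 # total number of tasks
#SBATCH --cpus-per-task=1          # CPU-cores per task
#SBATCH --mem-per-cpu=4G           # memory per CPU-core
#SBATCH --time=00:30:00            # total run time limit (HH:MM:SS)

module purge
# load any modules your code needs here, then run the executable
srun ./my_program

Submit the script with sbatch and check its progress with squeue -u <YourNetID>.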

 

GPU Jobs

QOS          Time Limit   GPUs per User
gpu-test     1 hour       no limit
gpu-short    4 hours      4
gpu-medium   24 hours     2
gpu-long     7 days       2

 

Running GPU Jobs

There are two standalone GPU nodes: adroit-h11g1 features four NVIDIA V100 GPUs, while adroit-h11g2 offers four A100 GPUs. Run the "shownodes" command to see the current state of these nodes. Use the following Slurm directive to request a GPU for your job:

#SBATCH --gres=gpu:1

To explicitly run on the A100 GPUs, add the following directive:

#SBATCH --constraint=a100

Or to run on the V100 GPUs use:

#SBATCH --constraint=v100

Use the qos command to see the restrictions on the number of GPUs per QOS. For instance, when using all four V100 GPUs, a job must have a runtime of less than four hours. See the "Job Scheduling" section for the exact limits. See the priority page for information on estimating when your queued job will run. See the GPU Computing page to learn how to measure GPU utilization.

A common mistake is to run a CPU-only code on a GPU. Only codes that have been explicitly written to run on a GPU can take advantage of a GPU. Read the documentation for the code that you are using to see if it can use a GPU.
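
Putting these directives together, a GPU job script might look like the sketch below; my_gpu_script.py is a placeholder for your own GPU-enabled code, and the memory, time, and constraint values are only examples:

#!/bin/bash
#SBATCH --job-name=gpu-job         # short name for the job
#SBATCH --nodes=1                  # node count
#SBATCH --ntasks=1                 # total number of tasks
#SBATCH --cpus-per-task=1          # CPU-cores per task
#SBATCH --mem=16G                  # total CPU memory for the job
#SBATCH --gres=gpu:1               # request one GPU
#SBATCH --constraint=a100          # optional: require the A100 node (see above)
#SBATCH --time=00:59:00            # total run time limit (HH:MM:SS)

module purge
# load the modules your GPU code needs here, then run it
srun python my_gpu_script.py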

 

Visualization Node

The Adroit cluster includes adroit-vis which is a dedicated node for visualization and post-processing tasks. Connect via SSH with the following command (VPN required from off-campus):

$ ssh <YourNetID>@adroit-vis.princeton.edu

This node features 28 CPU-cores. There is no job scheduler on adroit-vis, so users must be mindful of the resources they are consuming. In addition to visualization, the node can be used for tasks that are incompatible with the Slurm job scheduler or for work that is not appropriate for the Adroit login node. These tasks include downloading large amounts of data from the Internet and running Jupyter notebooks.
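
For example, one common way to run a Jupyter notebook on adroit-vis is to start it there without a browser and forward its port over SSH from your local machine. This is only a sketch: it assumes Jupyter is available in your environment on adroit-vis (for example, through a loaded module or a conda environment), and port 8889 is an arbitrary choice:

# On adroit-vis, start Jupyter without opening a browser:
$ jupyter notebook --no-browser --port=8889

# On your local machine, forward the port and then open http://localhost:8889 in a browser:
$ ssh -N -L 8889:localhost:8889 <YourNetID>@adroit-vis.princeton.edu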

 

Filesystem Usage and Quotas

Please use /scratch/network/<YourNetID> for the output of running jobs and to store large datasets, as described on the Data Storage page. This space is an NFS-mounted shared space of close to 24 TB. Run the checkquota command to see your usage. Files are NOT backed up, so move any important files to long-term storage (e.g., your /home directory or your local machine).

Local scratch space is available in /tmp on every compute node. Since this storage is local to each node and not shared, it is best suited for temporary files written while a job runs.
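
As an illustration, a job script can keep temporary files in node-local /tmp and write final results to /scratch/network. The sketch below uses placeholder names (myproject, my_program, and its command-line flags are hypothetical) and assumes your Linux username on the cluster is your NetID:

#!/bin/bash
#SBATCH --time=01:00:00                        # run time limit (HH:MM:SS)
#SBATCH --mem=4G                               # total CPU memory for the job

TMPDIR_JOB=/tmp/$USER/$SLURM_JOB_ID            # node-local space for temporary files
OUTDIR=/scratch/network/$USER/myproject        # shared space for results

mkdir -p "$TMPDIR_JOB" "$OUTDIR"
./my_program --workdir "$TMPDIR_JOB" --output "$OUTDIR/results.dat"   # placeholder command
rm -rf "$TMPDIR_JOB"                           # clean up the node-local scratch space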


Maintenance Window

Adroit will be down for maintenance between semesters for a period of a few hours, as announced on the mailing list. No significant changes are made during the fall and spring semesters.