Adroit

Overview

The Adroit cluster is intended for running smaller jobs, as well as for developing, debugging, and testing codes. Despite being one of our smaller clusters, Adroit is built like our larger clusters (such as Della or Tiger), making it an ideal training ground for eventual work on those systems.

Some Technical Specifications:
Adroit is a Beowulf cluster acquired through a partnership between Dell Computer Corporation and OIT. The compute nodes on the "all" partition have 64 CPU-cores and 512 GB of memory, while those on the "class" partition have 32 CPU-cores and 384 GB of memory. There is also one AMD node with 128 CPU-cores. Run the "snodes" command for more information. There are also three nodes with GPUs and a visualization node. For more details, see the Hardware Configuration section below.

[Figure: Schematic diagram of the Adroit cluster, showing its three head nodes connected to several compute nodes, three GPU nodes, and the file systems.]

How to Access the Adroit Cluster

To use the Adroit cluster, you must first request an account and then log in through SSH.
 

  1. Requesting Access to Adroit

    If you would like an account on Adroit, please fill out the Adroit Registration form.
     
  2. Logging into Adroit
     

    Once you have been granted access to Adroit, you can connect by opening an SSH client and typing the following SSH command (VPN required from off-campus):

    $ ssh <YourNetID>@adroit.princeton.edu
    

    For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ). If you have trouble connecting then see our SSH page.

    MyAdroit Web Portal

    If you prefer to navigate Adroit through a graphical user interface rather than the Linux command line, Adroit has a web portal option called MyAdroit (VPN required from off-campus):

    https://myadroit.princeton.edu

    The web portal enables easy file transfers and interactive jobs with RStudio, Jupyter, Stata, Mathematica and MATLAB. A graphical desktop environment is also available on a compute node or the visualization node.

[Figure: The Adroit OnDemand web portal.]

How to Use the Adroit Cluster

Since Adroit is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Adroit also requires some knowledge of how to properly use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view our Guide to Princeton's Research Computing Clusters. Additional information specific to Adroit's file system, priority for job scheduling, etc. can be found below.

To work with visualizations, or applications that require graphical user interfaces (GUIs), use Adroit's visualization node.

To attend a live session of either workshop, see our Trainings page for the next available workshop.
For more resources, see our Support - How to Get Help page.

 

Important Guidelines

The head node on Adroit should be used for interactive work only, such as compiling programs, and submitting jobs as described below. No jobs should be run on the head node, other than brief tests that last no more than a few minutes.

If you'd like to run a Jupyter notebook, there are a few options for doing so without running on Adroit's head node; one common approach is sketched below.
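
For example, you can start Jupyter on a compute node and reach it through an SSH tunnel. The following is a minimal sketch, assuming Jupyter is available in your environment (e.g., via a loaded module); the port number is arbitrary and <node-name> is a placeholder:

# on Adroit: request a compute node and start Jupyter on it
$ salloc --nodes=1 --ntasks=1 --time=60:00
$ hostname                 # note the name of the compute node you were given
$ jupyter notebook --no-browser --port=8889 --ip=0.0.0.0

# on your local machine: tunnel through the head node
# (replace <node-name> with the hostname reported above)
$ ssh -N -L 8889:<node-name>:8889 <YourNetID>@adroit.princeton.edu

# then browse to http://localhost:8889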

 

Hardware Configuration

For more technical details, see the full version of the systems table.

CPU Cluster

Processor                Nodes   Cores per Node   Memory per Node
2.6 GHz Intel Skylake    4       32               384 GB
2.0 GHz Intel Ice Lake   5       64               512 GB

 

GPU Nodes

Processor                       GPU    Nodes   CPU-Cores per Node   GPUs per Node   CPU Memory per Node   Memory per GPU
2.6 GHz Intel Sapphire Rapids   A100   1       48                   4               1 TB                  80 GB
2.8 GHz Intel Ice Lake          A100   1       48                   4               1 TB                  40 GB
2.6 GHz Intel Skylake           V100   1       56                   4               770 GB                32 GB

Use the "snodes" command for more.

 

Job Scheduling (QOS Parameters)

All jobs must be run through the scheduler on Adroit. If a job would exceed any of the limits below, it will be held until it is eligible to run. A job should not specify the qos (quality of service) in which it should run; this allows the scheduler to route the job according to the resources it requires (see the sample script below). The tables below apply to jobs submitted via Slurm and do not necessarily apply to MyAdroit.
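
As an illustration, here is a minimal batch script (a sketch; the job name and executable are placeholders). No qos is specified, and with this 10-minute time limit the scheduler would route the job to the test qos:

#!/bin/bash
#SBATCH --job-name=myjob         # short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --cpus-per-task=1        # cpu-cores per task
#SBATCH --mem-per-cpu=4G         # memory per cpu-core
#SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)

module purge
srun ./myprogram                 # placeholder: replace with your own executable

Submit the script with sbatch and monitor the job with squeue -u $USER.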

CPU Jobs

QOS      Time Limit   Jobs per User   Cores per User            Cores Available
test     15 minutes   2 jobs          80 cores (5 nodes/user)   no limit
short    4 hours      32 jobs         80 cores                  no limit
medium   24 hours     4 jobs          64 cores                  100 cores (5 nodes, all users)
long     7 days       2 jobs          64 cores                  80 cores (4 nodes, all users)

The values above reflect the minimum limits in effect and the actual values may be higher. Please use the "qos" command to see the limits in effect at the current time.

 

GPU Jobs

QOS          Time Limit   GPUs per User
gpu-test     15 minutes   no limit
gpu-short    4 hours      4
gpu-medium   24 hours     2
gpu-long     7 days       2

The values above reflect the minimum limits in effect and the actual values may be higher. Please use the "qos" command to see the limits in effect at the current time.

 

Running GPU Jobs

There are three GPU nodes on Adroit, each with four GPUs. Run the following command to see which GPUs are free:

$ shownodes -p gpu

Use the following Slurm directive to request a GPU for your job:

#SBATCH --gres=gpu:1

To explicitly run on the A100 GPUs, add the following directive:

#SBATCH --constraint=a100

Or to run on the V100 GPUs use:

#SBATCH --constraint=v100

To request an A100 with 80 GB of memory, use this constraint:

#SBATCH --constraint=gpu80
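
Putting these directives together, a GPU batch script might look like the following sketch (the executable is a placeholder; omit the constraint line to accept whichever GPU is free):

#!/bin/bash
#SBATCH --job-name=gpu-job       # short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --cpus-per-task=1        # cpu-cores per task
#SBATCH --mem=8G                 # total CPU memory for the job
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --constraint=a100        # optional: require an A100 node
#SBATCH --time=02:00:00          # total run time limit (HH:MM:SS)

module purge
srun ./my_gpu_program            # placeholder: your GPU-enabled code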

Use the qos command to see the restrictions on the number of GPUs per QOS. For instance, a job using four GPUs must have a run time of less than four hours. See the "Job Scheduling" section above for the exact limits. See the priority page for information on estimating when your queued job will run. For measuring GPU utilization, see the GPU Computing page.

A common mistake is to run a CPU-only code on a GPU. Only codes that have been explicitly written to run on a GPU can take advantage of a GPU. Read the documentation for the code that you are using to see if it can use a GPU.

 

Running Software using the Previous Operating System

The operating system on Adroit was upgraded from Springdale Linux (SDL) 7 to SDL 8 in the summer of 2021. We provide two compatibility tools for running software under the old operating system (SDL 7). The first is to create an interactive desktop using the MyAdroit web portal (see above): choose "Interactive Apps", then "Desktop", and select the "mate" option for RHEL 7 on the configuration page. The second is command-line only: prepend the command you want to run with /usr/licensed/bin/run7. Below are a few examples:

# run a single command under the old operating system
$ /usr/licensed/bin/run7 cat /etc/os-release

# or start an interactive shell in the compatibility environment
$ run7 bash
Singularity> source /usr/licensed/cadence/profile.20200824
Singularity> virtuoso

 

Visualization Node

The Adroit cluster has a dedicated node for visualization and post-processing tasks, called adroit-vis.

Hardware Details

This node features 64 CPU-cores, 512 GB RAM, and 2 A100 GPUs each with 80 GB of GPU memory.

This node has internet access.

How to Use the Visualization Node

Users can connect via SSH with the following command (VPN required if connecting from off-campus),

$ ssh <YourNetID>@adroit-vis.princeton.edu 

but to work with graphical applications on the visualization node, see our guide to working with visualizations and graphical user-interface (GUI) applications.

Note that there is no job scheduler on adroit-vis, so please be considerate of other users when using this resource. To ensure that the system remains a shared resource, there are limits in place preventing one individual from using all of the resources. You can check your activity with the command "htop -u $USER".

In addition to visualization, the node can be used for tasks that are incompatible with the Slurm job scheduler, or for work that is not appropriate for the login node (e.g., downloading large amounts of data from the Internet).

 

Filesystem Usage and Quotas

Please use /scratch/network/<YourNetID> for the output of running jobs and for storing large datasets, as described on the Data Storage page. This is an NFS-mounted shared filesystem of close to 24 TB. Run the checkquota command to see your usage. Files are NOT backed up, so move any important files to long-term storage (e.g., your /home directory or your local machine).

Local scratch space is available in /tmp on every compute node. Because this storage is local to each node rather than shared, it is well suited for temporary output; copy any files you need to keep to shared storage before your job ends.
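
As a sketch of this pattern (the executable and its output flag are placeholders), a job can write intermediate files to the local disk and then copy what it needs to shared scratch before finishing:

#!/bin/bash
#SBATCH --job-name=tmp-demo      # short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --time=00:30:00          # total run time limit (HH:MM:SS)

# create a per-job directory on the node-local disk
TMP_JOB_DIR=/tmp/$USER.$SLURM_JOB_ID
mkdir -p "$TMP_JOB_DIR"

# write intermediate output to the fast local storage
srun ./myprogram --output "$TMP_JOB_DIR"

# copy results back to shared scratch before the job ends,
# since /tmp is not visible from other nodes
cp -r "$TMP_JOB_DIR" /scratch/network/$USER/
rm -rf "$TMP_JOB_DIR"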


Maintenance Window

Adroit will be down for maintenance between semesters for a period of a few hours, as announced on the mailing list. No significant changes are made during the fall and spring semesters.