Tiger

The Tiger cluster is one of Princeton's most powerful clusters. It is intended for large parallel jobs; all other types of jobs are given low priority on this system.

Some Technical Specifications:
The Tiger cluster has two parts: tigercpu and tigergpu.

The tigercpu part is an HPE Apollo cluster composed of 408 Intel Skylake CPU nodes. Each processor core has at least 4.8 GB of memory. The 40-core nodes are interconnected by an Omnipath fabric with oversubscription; the 24 nodes within each chassis are connected at full bandwidth.

The tigergpu part is a Dell cluster composed of 320 NVIDIA P100 GPUs across 80 Intel Broadwell nodes; each GPU has 16 GB of memory. The nodes are interconnected by an Intel Omnipath fabric, and each GPU sits on a dedicated x16 PCIe bus. Every node has 2.9 TB of NVMe-connected scratch space and 256 GB of RAM. The CPUs are Intel Broadwell E5-2680v4 processors with 28 cores per node. View a dashboard of GPU utilization.

For more hardware details, see the Hardware Configuration information below.

How to Access the Tiger Cluster

To use the Tiger cluster you have to enable your Princeton Linux account, request an account on Tiger, and then log in through SSH.

  1. Enabling Princeton Linux Account

    Tiger is a Linux cluster. If your Tiger account is your first Princeton OIT Linux account, then you need to enable your Linux account. If you need help, the process is described in the Knowledge Base article Unix: How do I enable/change the default Unix shell on my account? For more on Unix, you can see Introduction to Unix at Princeton. Once you have access, you should not need to register again unless your account goes unused for more than six months.
     
  2. Requesting Access to Tiger

    Access to the large clusters like Tiger is granted on the basis of brief faculty-sponsored proposals (see For large clusters: Submit a proposal or contribute).

    If, however, you are part of a research group with a faculty member who has contributed to or has an approved project on Tiger, that faculty member can sponsor additional users by sending a request to cses@princeton.edu. Any non-Princeton user must be sponsored by a Princeton faculty or staff member for a Research Computer User (RCU) account.
  3. Logging into Tiger

    Once you have been granted access to Tiger, you can log in via SSH as shown below. Use tigercpu for CPU-only work, and use tigergpu if you need GPU support.

    To log into tigercpu:

    $ ssh <YourNetID>@tiger.princeton.edu
    

    To log into tigergpu: 

    $ ssh <YourNetID>@tigergpu.princeton.edu

    For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ).
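
    If you log in frequently, you can optionally add a host entry to the ~/.ssh/config file on your local machine so that a short alias stands in for the full hostname. The entry below is only a sketch; the alias name is arbitrary and <YourNetID> is a placeholder for your own NetID.

    # Example entry in ~/.ssh/config on your local machine
    Host tiger
        HostName tiger.princeton.edu
        User <YourNetID>

    With that entry in place, "ssh tiger" is equivalent to the full command shown above.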


How to Use the Tiger Cluster

Since Tiger is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Tiger also requires some knowledge of how to use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view the material associated with our Getting Started with the Research Computing Clusters workshop. Additional information specific to Tiger's file system, priority for job scheduling, etc. can be found below.
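
As a rough sketch of the module system in practice (the module name below is only an illustration; run the first command to see what is actually installed on Tiger), a typical session looks like this:

    $ module avail                # list the software modules installed on Tiger
    $ module load intel           # example module name; choose one from the avail listing
    $ module list                 # show the modules currently loaded in your session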

To attend a live session of either workshop, see our Trainings page for the next available workshop.
For more resources, see our Support - How to Get Help page.


Tiger Schematic

A schematic diagram of the Tiger cluster

Important Guidelines

Please remember that these are shared resources for all users.

The head nodes, tigercpu and tigergpu, should be used for interactive work only, such as compiling programs and submitting jobs as described below. No jobs should be run on the head nodes other than brief tests that last no more than a few minutes. Where practical, we ask that you fill entire nodes so that CPU core fragmentation is minimized.
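
For example, each tigercpu node has 40 cores, so requesting whole nodes keeps cores from being fragmented. The sketch below assumes the Slurm scheduler covered in the Tiger Tutorial; the job name, time limit, and executable are placeholders.

    #!/bin/bash
    #SBATCH --job-name=cpu-job          # example job name
    #SBATCH --nodes=1                   # request one full node
    #SBATCH --ntasks-per-node=40        # use all 40 cores on the node
    #SBATCH --time=01:00:00             # set to your job's expected run time

    srun ./my_program                   # placeholder for your executable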

Running Jobs

Jobs can be submitted for either portion of the Tiger system from either head node, but it is best to compile programs on the head node associated with the portion of the system where the program will run.  That is, compile GPU jobs on tigergpu and non-GPU jobs on tigercpu.  Running a job on the GPU nodes requires additional specifications in the job script.  Refer to the Tiger Tutorial for instructions and examples.
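
As an illustration of that extra specification (again assuming the Slurm scheduler; the Tiger Tutorial has the authoritative examples), a single-GPU job script might request a GPU with the generic-resource option:

    #!/bin/bash
    #SBATCH --job-name=gpu-job          # example job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:1                # the additional line that requests one GPU
    #SBATCH --time=00:30:00             # set to your job's expected run time

    srun ./my_gpu_program               # placeholder for your executable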


Wording of Acknowledgement of Support and/or Use of Research Computing Resources

"The author(s) are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University which is consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing."

"The simulations presented in this article were performed on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University."