Systems

Overview

Princeton Research Computing maintains seven systems for the Princeton research community, with more than 90,000 total cores and over 10.2 PFLOPS of processing power.

The following summaries provide a quick reference to the unique characteristics of each cluster, and a full comparison of hardware is also available. Please visit each system’s page for more detailed information, such as how to access and work with each system.

Note: The clusters receive regular maintenance and upgrades, so at times these CPU and GPU numbers may lag behind the latest changes. To see the most up-to-date information, ssh into your cluster of interest and use the snodes command.
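
For example, assuming you already have access, a quick check from a terminal looks roughly like the sketch below (the Adroit hostname is used purely as an illustration; substitute the login address given on your own cluster's page):

    # log in to the cluster of interest (hostname shown is illustrative)
    ssh <YourNetID>@adroit.princeton.edu

    # list the nodes along with their CPU, GPU, and state information
    snodes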

Nobel

This system is a good choice for running programs that need more time or processing power than a personal laptop can provide, and it is frequently used for course work. Nobel uses the OIT home filesystem for storage (a.k.a. the H: drive) and provides access to some licensed academic software (e.g. Mathematica, MATLAB, StataSE, among others).

Total nodes: 2 | Total GPU nodes: 0 | Uses Slurm to schedule jobs? No

Adroit

A small cluster built like our larger clusters, and therefore great for 1) running small parallel jobs, 2) training for work on the larger clusters, and 3) classes. Adroit has more total cores than Nobel, has its own filesystem for storage, and, like Nobel, provides access to some licensed academic software (e.g. Mathematica, MATLAB, StataSE, among others).

Total nodes: 9 | Total GPU nodes: 3 | Uses Slurm to schedule jobs? Yes

Tigressdata

A computer built for data analysis and remote visualization of data generated on the larger clusters.

Total nodes: 1 | Total GPU nodes: 1 | Uses Slurm to schedule jobs? No

Della

A large, general-purpose cluster composed of CPU and GPU nodes for serial and parallel jobs.

Total nodes: 300 | Total GPU nodes: 90 | Uses Slurm to schedule jobs? Yes

Tiger

A CPU cluster for multi-node parallel jobs.

Total nodes: 424 | Total GPU nodes: 0 | Uses Slurm to schedule jobs? Yes

Stellar

A powerful cluster for large-scale parallel (particularly multi-node) jobs, used predominantly by researchers in astrophysical sciences, plasma physics, physics, chemical and biological engineering, and atmospheric and oceanic sciences.

Total nodes: 483 | Total GPU nodes: 6 | Uses Slurm to schedule jobs? Yes

Traverse

A GPU cluster predominantly used for plasma physics research.

Total nodes: 46 | Total GPU nodes: 46 | Uses Slurm to schedule jobs? Yes
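
Most of the clusters above schedule work through Slurm: rather than running a program directly on the login node, you describe it in a short batch script and submit it to the scheduler. The sketch below is a generic illustration, not a recipe for any particular cluster; the resource requests and the program being run are placeholders you would replace with your own.

    #!/bin/bash
    #SBATCH --job-name=myjob         # a short name for the job
    #SBATCH --nodes=1                # number of nodes requested
    #SBATCH --ntasks=1               # total number of tasks across all nodes
    #SBATCH --cpus-per-task=1        # CPU cores per task
    #SBATCH --mem-per-cpu=4G         # memory per CPU core
    #SBATCH --time=00:10:00          # total run time limit (HH:MM:SS)

    srun ./my_program                # placeholder: replace with your own executable

The script is submitted with sbatch and its progress can be checked with squeue -u <YourNetID>; both are standard Slurm commands.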

If you need help choosing the right system, email our team at [email protected]. You can also view system utilization graphs.

 

Storage

Each large cluster provides local General Parallel File System (GPFS) storage for production jobs. Our long-term storage systems (/projects and /tigress) have a total capacity of 18.5 PB. For more information see our Storage page.
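
As a rough illustration (the checkquota utility is assumed to be available on the login nodes; if it is not, standard tools such as df give a coarser view), you can inspect usage of these filesystems from a login node:

    # per-user storage usage and quotas, if the checkquota utility is installed
    checkquota

    # mounted capacity and overall usage of the long-term storage filesystems
    df -h /projects /tigress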

 

General Guidelines on How to Access the Clusters

Gaining access to the smaller clusters is easy. All Princeton University members with a netID already have access to Nobel, and can also request access to Adroit by filling out a form.

Access to the larger clusters (Della, Stellar, Tiger, and Traverse) is granted on the basis of brief faculty-sponsored proposals.

For the full details on how to access each system, please view each system's individual page.

 

Other HPC Resources

Princeton researchers may have access to a variety of other systems depending on their department affiliation and funding sources. 

 

Acknowledgements

The High Performance Computing Center and Visualization Laboratory at Princeton University is a collaborative facility that brings together funding, support, and participation from:

  • The Princeton Institute for Computational Science and Engineering (PICSciE)
  • The Office of Information Technology (OIT)
  • The School of Engineering and Applied Science (SEAS)
  • The Lewis-Sigler Institute for Integrative Genomics (Genomics)
  • The Princeton Institute for the Science and Technology of Materials (PRISM)
  • The Princeton Plasma Physics Laboratory (PPPL)
  • A number of academic departments and faculty members

The facility is designed to create a well-balanced set of High Performance Computing (HPC) resources that meets the broad computational requirements of the Princeton University research community.

 

High Performance Computing Research Center

All systems are located in the High Performance Computing Research Center (HPCRC). Built in late 2011, Princeton’s HPCRC on the Forrestal campus is a 47,000 s.f., 5 MW data center that can deliver up to 3 MW of conditioned power to its 12,000 s.f. data hall. The conditioned capacity is currently 1.8 MW; a project underway will add another uninterruptible power supply (UPS) to bring it to 2.4 MW, and there is space for a further UPS to reach 3.0 MW when demand requires.

The LEED Gold certified facility offers significant power savings through an air-side economizer (outside air used for cooling when temperature and humidity are appropriate, an estimated 250 days per year), water-side economizers (evaporative cooling towers with the capacity to cool the entire facility at full load when temperatures are below 45 degrees F), a 2 MW natural gas generator and associated absorptive chiller that can act as a cogeneration plant when electrical rates are high, and chilled-water rear doors for high-density systems. A 2.5 MW diesel generator provides backup power.

The electrical system provides N+1 redundancy for all components in the power chain serving the administrative systems. The research systems in the facility benefit from the generators and UPSs, but they do not have additional redundancy in the electrical systems. The cooling systems provide N+1 redundancy, and a chilled-water storage tower and a UPS for the pumps and fans allow the cooling systems to keep operating while the generators start up. The HPCRC also houses many of the university’s research, academic, and administrative systems.

Princeton has a second server room on the main campus. The New South building contains a 2,400 s.f. server room that houses administrative systems and serves as an offsite location with hot and warm disaster recovery for many of the university’s enterprise applications. The New South server room has both UPS and generator backup electrical power.