Available Systems

Research Computing operates three large clusters and several smaller systems.  The large clusters were purchased by combining contributions from a number of departments and faculty researchers with University funding.  Faculty who contribute to the purchase of a system are guaranteed access based on their contributions.  Other Princeton University researchers can gain access to the resources through a proposal process.

Large Clusters

  • Tiger is the most powerful system. It includes 80 GPU nodes containing a total of 320 NVIDIA P100 GPUs, along with 408 CPU-only nodes.
  • Della is a large general purpose CPU cluster. It is used for both parallel and serial jobs.
  • Perseus is a large CPU cluster that is used for large-scale parallel jobs. It is predominantly used for astrophysics research.
  • Traverse is an IBM POWER9 cluster with 4 NVIDIA V100 GPUs per node, attached via NVLink. It is predominantly used for plasma physics research.

Smaller Systems

  • Nobel and Adroit are small systems used for class work and program development.
  • Tigressdata is used for post-processing and visualization of the results from jobs run on the large clusters.

Details of these systems are summarized in the tables below.

Available resources also include /tigress, a 12 PB GPFS file system.

System utilization graphs are also available.

Large Clusters

| System | Processor Speed | Nodes | Cores per Node | Memory per Node | Total Cores | Interconnect | Theoretical Performance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TigerGPU (Dell Linux Cluster) | 2.4 GHz Xeon Broadwell E5-2680 v4; 1328 MHz P100 GPU | 80 | 28 (4 GPUs per node) | 256 GB (16 GB per GPU) | 2240 (320 GPUs) | Omnipath | 86 TFLOPS (CPU); 1504 TFLOPS (GPU) |
| TigerCPU (HPE Linux Cluster) | 2.4 GHz Skylake | 408 | 40 | 192 GB (40 nodes with 768 GB) | 16320 | Omnipath | >1103 TFLOPS |
| Della (Dell Linux Cluster) | 2.5 GHz Ivybridge / 2.6 GHz Haswell / 2.4 GHz Broadwell / 2.4 GHz Skylake / 2.8 GHz Cascade Lake | 16 / 64 / 32 / 64 / 64 | 20 / 20 / 28 / 32 / 32 | 128 GB / 128 GB / 128 GB / 192 GB / 190 GB | 6592 | QDR Infiniband | 267+ TFLOPS |
| Traverse (IBM Linux Cluster) | 2.7 GHz POWER9; 1530 MHz V100 GPU | 46 | 128 (4 GPUs per node) | 256 GB (32 GB per GPU) | 5888 (184 GPUs) | EDR Infiniband | 1435 TFLOPS |
| Perseus (Dell Linux Cluster) | 2.4 GHz Xeon | 320 | 28 | 128 GB | 8960 | EDR Infiniband | 344 TFLOPS |
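
The Theoretical Performance figures are peak rates: core or GPU counts multiplied by per-core or per-GPU peak floating-point throughput. As a rough illustration using the TigerGPU row, and assuming vendor peak rates of 16 double-precision FLOPs per cycle per Broadwell core and 4.7 TFLOPS (FP64) per P100 (assumed values, not listed in the table):

\[
\begin{aligned}
\text{CPU peak} &\approx 2240~\text{cores} \times 2.4~\text{GHz} \times 16~\tfrac{\text{FLOPs}}{\text{cycle}} \approx 86~\text{TFLOPS}\\
\text{GPU peak} &\approx 320~\text{GPUs} \times 4.7~\tfrac{\text{TFLOPS}}{\text{GPU}} \approx 1504~\text{TFLOPS}
\end{aligned}
\]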

 

Smaller Systems

| System | Processor Speed | Nodes | Cores per Node | Memory per Node | Total Cores | Interconnect | Performance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Adroit (Dell Linux Cluster) | 2.6 GHz Skylake | 9 | 32 | 384 GB | 288 | FDR Infiniband | 3.2 TFLOPS |
| Adroit (adroit-h11g1), GPU node | 2.6 GHz Skylake; 1530 MHz V100 | 1 | 40 (4 GPUs per node) | 770 GB (32 GB per GPU) | 40 | — | 0.4 TFLOPS; 7.1 TFLOPS |
| Adroit (adroit-h11g4), GPU node | 3.2 GHz Haswell; 745 MHz K40c | 1 | 16 (2 GPUs per node) | 64 GB (12 GB per GPU) | 16 | — | — |
| Tigressdata (Dell Linux Server) | 2.4 GHz Gold 6148 Xeon | 1 | 40 | 768 GB | 20 | N/A | 400+ GFLOPS |
| Nobel (Dell Linux Server) | 2.3 GHz Haswell | 2 | 28 | 224 GB | 56 | N/A | 1.03 TFLOPS |

Other HPC Resources

Princeton researchers may have access to a variety of other systems depending on their department affiliation and funding sources. 

Acknowledgements

The High Performance Computing Center and Visualization Laboratory at Princeton University is a collaborative facility that brings together funding, support, and participation from the Princeton Institute for Computational Science and Engineering (PICSciE), the Office of Information Technology (OIT), the School of Engineering and Applied Science (SEAS), the Lewis-Sigler Institute for Integrative Genomics (Genomics), the Princeton Institute for the Science and Technology of Materials (PRISM), the Princeton Plasma Physics Laboratory (PPPL), and a number of academic departments and faculty members. The facility is designed to create a well-balanced set of High Performance Computing (HPC) resources that meets the broad computational requirements of the Princeton University research community.

High Performance Computing Research Center

All systems are located in the High Performance Computing Research Center (HPCRC). The 47,000-square-foot, 5 MW facility provides a secure, reliable, and sustainable environment for IT equipment, with 1.8 MW of UPS capacity (scalable to 3.0 MW), 4.5 MW of diesel and gas power generation, and a design Power Usage Effectiveness (PUE) of 1.5 at full load.