Available Systems

Research Computing operates three large clusters and several smaller systems.  The large clusters were purchased by combining contributions from a number of departments and faculty researchers with University funding.  Faculty who contribute to the purchase of a system are guaranteed access based on their contributions.  Other researchers from Princeton University can gain access to the resources through a proposal process.

Large Clusters

  • Tiger is the newest and most powerful system.  It includes 320 NVIDIA P100 GPUs along with 392 non-GPU nodes.
  • Della is a large general purpose cluster.  It is used for both parallel and serial jobs.
  • Perseus is a large cluster used for computationally intensive, non-GPU jobs.

Smaller Systems

  • Nobel and Adroit are small systems used for class work and program development.
  • Tigressdata is used for post-processing and visualization of the results from jobs run on the large clusters.
  • bd is used for Hadoop jobs.
  • Mcmillan is an experimental system.

Details of these systems are summarized in the tables below.  The system names are links to additional information.

Available resources also include /tigress, a 6 PB GPFS file system.

System utilization graphs are also available.

Large Clusters

TigerGPU (Dell Linux Cluster)
  • Processor speed: 2.4 GHz Xeon Broadwell E5-2680 v4; 1328 MHz P100 GPUs
  • Nodes: 80
  • Cores per node: 28, plus 4 GPUs per node
  • Memory per node: 256 GB, plus 16 GB per GPU
  • Total cores: 2240 (320 GPUs)
  • Interconnect: Omnipath
  • Theoretical performance: 86 TFLOPS (CPU) plus 1504 TFLOPS (GPU)

TigerCPU (HPE Linux Cluster)
  • Processor speed: 2.4 GHz Skylake
  • Nodes: 392
  • Cores per node: 40
  • Memory per node: 192 GB (40 nodes with 768 GB)
  • Total cores: 15680
  • Interconnect: Omnipath
  • Theoretical performance: 1103 TFLOPS

Della (Dell Linux Cluster)
  • Processor speeds: 2.67 GHz Westmere, 2.67 GHz Westmere, 2.5 GHz Ivybridge, 2.6 GHz Haswell, 2.4 GHz Broadwell
  • Nodes: 16, 64, 80, 32, 48 (in the same order as the processor generations above)
  • Cores per node: 12, 12, 20, 20, 28
  • Memory per node: 48 GB, 96 GB, 128 GB, 128 GB, 128 GB
  • Total cores: 4544
  • Interconnect: QDR Infiniband
  • Theoretical performance: 42 TFLOPS

Perseus (Dell Linux Cluster)
  • Processor speed: 2.4 GHz Xeon
  • Nodes: 320
  • Cores per node: 28
  • Memory per node: 128 GB
  • Total cores: 8960
  • Interconnect: FDR Infiniband
  • Theoretical performance: 344 TFLOPS
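
The theoretical performance figures above are consistent with the usual peak estimate: total cores × clock rate × floating-point operations per core per cycle, plus the number of GPUs × the per-GPU peak.  The short Python sketch below reproduces the Perseus and TigerGPU entries; the 16 double-precision FLOPs per cycle assumed for the Broadwell-class Xeons and the 4.7 TFLOPS double-precision peak assumed per P100 are illustrative values, not figures taken from this page.

    # Rough peak-FLOPS estimate matching the "Theoretical performance" entries above.
    # Assumptions (not from this page): a Broadwell-class Xeon core retires 16
    # double-precision FLOPs per cycle (two 256-bit FMA units), and one P100 GPU
    # peaks at about 4.7 TFLOPS in double precision.

    DP_FLOPS_PER_CYCLE = 16      # assumed per Broadwell core
    P100_PEAK_TFLOPS = 4.7       # assumed double-precision peak per P100

    def cpu_peak_tflops(total_cores, clock_ghz, flops_per_cycle=DP_FLOPS_PER_CYCLE):
        """Peak TFLOPS = cores * clock (GHz) * FLOPs per cycle / 1000."""
        return total_cores * clock_ghz * flops_per_cycle / 1000.0

    print(f"Perseus:        {cpu_peak_tflops(8960, 2.4):.0f} TFLOPS")   # ~344
    print(f"TigerGPU (CPU): {cpu_peak_tflops(2240, 2.4):.0f} TFLOPS")   # ~86
    print(f"TigerGPU (GPU): {320 * P100_PEAK_TFLOPS:.0f} TFLOPS")       # ~1504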

Smaller Systems

Adroit (Dell Linux Cluster)
  • Processor speed: 2.6 GHz Skylake; 705 MHz K20 GPUs
  • Nodes: 9; 4 (GPU nodes)
  • Cores per node: 32; 4 GPUs per node
  • Memory per node: 384 GB; 5.0 GB per GPU
  • Total cores: 288
  • Interconnect: FDR Infiniband
  • Performance: 3.2 TFLOPS; 9.36 TFLOPS (GPU)

Tigressdata (Dell Linux Server)
  • Processor speed: 2.5 GHz Xeon E5-2670 v2
  • Nodes: 1
  • Cores per node: 20
  • Memory per node: 512 GB
  • Total cores: 20
  • Interconnect: N/A
  • Performance: 400 GFLOPS

Nobel (Dell Linux Server)
  • Processor speed: 2.3 GHz Haswell
  • Nodes: 2
  • Cores per node: 28
  • Memory per node: 224 GB
  • Total cores: 56
  • Interconnect: N/A
  • Performance: 1.03 TFLOPS

BigData (bd) (SGI Linux Hadoop Cluster)
  • Processor speed: 2.80 GHz Xeon E5-2680 v2
  • Memory per node: 2.4 GB / 10.7 GB
  • Interconnect: 10 Gb Ethernet
  • Performance: 448 GFLOPS

Mcmillan (experimental system)
  • Processor speed: 3.47 GHz Xeon, plus GPUs (see detail page)
  • Nodes: 4, 2 (two node types)
  • Cores per node: 12, 8
  • Memory per node: 48 GB, 48 GB
  • Total cores: 64
  • Interconnect: 1 Gb Ethernet

Other HPC Resources

Princeton researchers may have access to a variety of other systems depending on their department affiliation and funding sources. 

Acknowledgements

The High Performance Computing Center and Visualization Laboratory at Princeton University is a collaborative facility that brings together funding, support, and participation from the Princeton Institute for Computational Science and Engineering (PICSciE), the Office of Information Technology (OIT), the School of Engineering and Applied Science (SEAS), the Lewis-Sigler Institute for Integrative Genomics (Genomics), the Princeton Institute for the Science and Technology of Materials (PRISM), the Princeton Plasma Physics Laboratory (PPPL), and a number of academic departments and faculty members. The facility is designed to create a well-balanced set of High Performance Computing (HPC) resources meeting the broad computational requirements of the Princeton University research community.

High Performance Computing Research Center

All systems are located in the High Performance Computing Research Center (HPCRC). The 47,000-square-foot, 5 MW facility provides a secure, reliable, and sustainable environment for IT equipment, with 1.8 MW of UPS capacity (scalable to 3.0 MW), 4.5 MW of diesel and gas power generation, and a design Power Usage Effectiveness (PUE, the ratio of total facility power to power delivered to the IT equipment) of 1.5 at full load.