Research Computing operates a number of different computing systems designed for the varying needs of Princeton faculty and students.
The system names in the descriptions below are links to more detailed configuration information.
Della and Tiger are large general-purpose clusters with hundreds of nodes. Della performs well for most parallel processing jobs and for users with large numbers of serial jobs. Tiger is used by the largest, most demanding parallel codes. Tiger also includes 320 NVIDIA P100 GPUs. For storage, Della and Tiger use /tigress.
Perseus is particularly well suited to large, computationally intensive parallel jobs because of its relatively large number of cores per node, all of which include the latest AVX vector processing units. There are no GPUs on Perseus. For storage, Perseus uses /tigress.
Faculty members or groups that have contributed to the purchase of these systems are automatically allocated a share of the system proportional to their contribution. For other users, getting access to these systems requires a proposal or sponsorship from a faculty member who is already using the system.
Nobel consists of a pair of large, multi-core servers. It is a good choice for jobs that need more time or processing power than a personal laptop can provide, and it is frequently used for course work. For storage, it uses the OIT home file system (the H: drive).
Adroit is a small cluster used for development, debugging, and small production runs. It works well for small parallel jobs and as a first step before moving work to one of the bigger clusters. It has nine compute nodes with more total cores than Nobel, but software must be parallelized to use more than one core at a time. For storage, it has its own file system.
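The point that software must be parallelized to use more than one core can be illustrated with Python's standard-library multiprocessing module. This is only a sketch: the workload function below is a hypothetical stand-in for a real CPU-bound task, not anything specific to Adroit.

```python
import math
from multiprocessing import Pool

def simulate(n):
    # Hypothetical CPU-bound workload standing in for a real computation.
    return sum(math.sqrt(i) for i in range(n * 10_000))

if __name__ == "__main__":
    # Without a Pool (or MPI, threads, etc.), Python runs simulate() on a
    # single core no matter how many cores the node has. A Pool spreads
    # the calls across worker processes, one per core by default.
    with Pool(processes=4) as pool:
        results = pool.map(simulate, range(8))
    print(len(results))
```

The same idea applies at larger scale: codes moved from Adroit to the big clusters typically use MPI rather than a single-node process pool.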
Tigressdata is a single computer with 20 processing cores, 512 GB of RAM, and two NVIDIA K20 GPUs. For storage it uses the /tigress file system used by the large clusters. The system is intended for developing, debugging, and testing codes, for small production jobs, and for post-processing and remote visualization of data produced on the large clusters and stored in /tigress. Any user of the large clusters will also be given access to Tigressdata.