Tigressdata is a single computer with 40 physical processing cores (80 logical cores with hyperthreading), 768 GB of RAM, and an NVIDIA Tesla P100 GPU. The system is intended for post-processing and remote visualization, as well as for developing, debugging, and testing codes.

System Configuration and Usage

General Guidelines

Tigressdata is a shared resource with 40 physical processing cores (Intel Xeon Skylake) and 768 GB of memory. As its name implies, tigressdata has fiber connectivity to /tigress, our large archival storage system, along with NFS connectivity to selected parallel or scratch storage spaces allocated to the other Princeton clusters. The system's resources and 10 GigE connectivity make it ideal for post-processing analysis of data that has been migrated to /tigress, and several commercial and open-source packages are installed on this system for that purpose.
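Before starting a post-processing job that reads from /tigress, it can be worth confirming the filesystem is actually visible from the session. This is a small sketch using the standard df utility; the check_mount helper name is illustrative, not part of the system's tooling.

```shell
check_mount() {
    # Return success if the given path resolves to a mounted filesystem.
    df -- "$1" >/dev/null 2>&1
}

if check_mount /tigress; then
    echo "/tigress is mounted"
else
    echo "/tigress is not available from this host" >&2
fi
```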

Please be mindful that tigressdata is a shared resource for all users.

Hardware Configuration
  Model:                    Dell Linux Server
  Processor:                2.4 GHz Intel Xeon Gold 6148 (20 cores per socket)
  Nodes:                    1
  Cores per node:           40 (2 x 20)
  Memory per node:          768 GB
  Total cores:              40
  Interconnect:             N/A
  Theoretical performance:  400+ GFLOPS
Job Scheduling

There is no batch scheduler running on tigressdata. The only usage limit imposed is a 250 GB memory limit per process, but users are urged to avoid launching processes that will overburden the system. Before launching any memory- or compute-intensive task, check the current load and memory usage:

cat /proc/loadavg
cat /proc/meminfo
The first three values in the loadavg output are running averages of the system load over the last one, five, and fifteen minutes; a load approaching or exceeding the number of physical cores (40) means the system is already fully subscribed. Oversubscribing the system with CPU-bound tasks will severely compromise throughput for all users.

The MemFree field of the meminfo output is the most important value to check before launching a task with a large memory footprint. If SwapFree is significantly less than SwapTotal, performance is already compromised, and adding another memory-intensive task will only exacerbate the problem.
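The two checks above can be combined into a small pre-flight script. This is a sketch, not an official tool: the comparison of the 1-minute load against the core count follows the guidance above, while the exact form of the warning is an illustrative choice (nproc reports logical cores, which on this system is 80 with hyperthreading enabled).

```shell
# Read the 1-minute load average and free memory from /proc (Linux-specific).
cores=$(nproc)
load1=$(awk '{print $1}' /proc/loadavg)
memfree_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)

echo "1-minute load: $load1 (logical cores: $cores)"
echo "Free memory:   $((memfree_kb / 1024)) MB"

# Warn if the 1-minute load already exceeds the core count,
# i.e. the system is oversubscribed with runnable tasks.
if awk -v l="$load1" -v c="$cores" 'BEGIN {exit !(l > c)}'; then
    echo "System is oversubscribed; consider waiting before launching." >&2
fi
```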