Tigressdata

Tigressdata is a single computer with 20 processing cores, 512 GB of RAM, and two NVIDIA K20 GPUs. The system is intended for post-processing and remote visualization, as well as for developing, debugging, and testing codes.

System Configuration and Usage

General Guidelines

Tigressdata is a shared resource with twenty processing cores (Intel Xeon Ivy Bridge) and 512 GB of memory. As its name implies, tigressdata also has fiber connectivity to /tigress, our large archival storage system, along with NFS connectivity to selected parallel or scratch storage spaces allocated to the other Princeton clusters. The system's resources and 10 GigE connectivity make it ideal for post-processing analysis of data that has been migrated to /tigress, and several commercial and open-source packages are installed on the system for that purpose.

Please be mindful that tigressdata is a shared resource for all users.

Hardware Configuration

Tigressdata (Dell Linux Server)

  Processor:                 2.5 GHz Intel Xeon E5-2670 v2
  Nodes:                     1
  Cores per Node:            20
  Memory per Node:           512 GB
  Total Cores:               20
  Interconnect:              N/A
  Theoretical Performance:   400 GFLOPS

Job Scheduling

There is no batch scheduler running on tigressdata. The only usage limit imposed is a 250 GB memory limit per process, but users are urged to avoid launching processes that will overburden the system. Before launching any memory- or compute-intensive task, please check:

cat /proc/loadavg
cat /proc/meminfo
The first three values in the loadavg output are running averages of the system load over the last one, five, and 15 minutes. Oversubscribing the system with CPU-bound tasks will severely compromise throughput for all users. The MemFree field of the meminfo output is the most important value to check before launching a task with a large memory footprint. If SwapFree is significantly less than SwapTotal, performance is already compromised, and adding another memory-intensive task will only exacerbate the problem.
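The checks above can be scripted. The following sketch reads the load averages and free memory from /proc and compares the one-minute load against the core count; the go/no-go threshold is illustrative, not site policy.

```shell
# Inspect current load and free memory before starting a heavy task.
# /proc/loadavg: first three fields are the 1-, 5-, and 15-minute load averages.
read load1 load5 load15 _ < /proc/loadavg
echo "Load averages: 1-min $load1, 5-min $load5, 15-min $load15"

# /proc/meminfo reports MemFree in kB; convert to GB for readability.
memfree_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
echo "Free memory: $((memfree_kb / 1024 / 1024)) GB"

# Crude headroom check: a 1-minute load at or above the core count
# means the CPUs are already fully subscribed.
cores=$(nproc)
if awk -v l="$load1" -v c="$cores" 'BEGIN {exit !(l < c)}'; then
    echo "Load is below the core count; CPU headroom is available."
else
    echo "System is busy; consider waiting before launching your task."
fi
```

Run the same checks again while your task is executing to confirm it is not pushing the system into swap.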