Data Storage

Overview

The schematic diagram below shows the HPC clusters and the filesystems that are available to each cluster:

[Figure: HPC clusters and the filesystems that are available to each. Users should write job output to /scratch/gpfs.]

Here are the most important takeaways from the diagram above:

  • /home/<YourNetID>
    • This folder is small and it is intended for storing source code, executables, Conda environments, R or Julia packages and small data sets.
  • /scratch/gpfs/<YourNetID> 
    • This directory resides on a fast, parallel filesystem that is local to each cluster, which makes it ideal for storing job input and output files. However, because /scratch/gpfs is not backed up, you will need to transfer completed job files to /tigress or /projects when a backup is desired.
  • /tigress and /projects
    • These are the long-term storage systems. They are shared by the large clusters via a single, slow connection and are designed for non-volatile files only (i.e., files that do not change over time). For these reasons, never write the output of actively running jobs to /tigress or /projects. Doing so may adversely affect the work of other users and may cause your job to run inefficiently. Instead, write your output to /scratch/gpfs/<YourNetID> and then, after the job completes, copy or move the output to /tigress or /projects if a backup is desired.

Two additional important points:

  • /tmp (not shown in the figure) is local scratch space that exists on each compute node for high-speed reads and writes. If file I/O is a bottleneck in your code, or if you need to store temporary data, consider using it.
  • Use the checkquota command to check your storage limits and to request quota increases.
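
For example, run the command from a login node to see your current usage and limits (the exact output varies by cluster):

$ checkquota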


Using the Filesystems

Let's say that you just got an account on one of the HPC clusters and you want to start running jobs. Usually the first step is to install software in your /home directory. Most users begin by installing various packages in Python, R or Julia; by default these packages will be installed in /home/<YourNetID>. If you need to build your code from source, transfer the source code to your /home directory and compile it there. If the software you need is pre-installed, like MATLAB or Stata, then you are ready to proceed to the next step.
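
As a hedged example, a Conda environment created from a login node is stored in your /home directory by default (the module name and version below are only an illustration; run "module avail anaconda3" to see what is actually installed):

$ ssh <YourNetID>@della.princeton.edu
$ module avail anaconda3                   # list the available Anaconda modules
$ module load anaconda3/2024.6             # example version; use one listed above
$ conda create --name my-env numpy pandas  # environment is stored under /home/<YourNetID>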

With your software ready to be used, the next step is to run a job. The /scratch/gpfs filesystem on each cluster is the right place for storing job files. Create a directory in /scratch/gpfs/<YourNetID> (or /scratch/network/<YourNetID> on Adroit), put the necessary input files and Slurm script in that directory, and then submit the job to the scheduler. If the run produces output that you want to back up, transfer the files to /tigress or /projects. The commands below illustrate these steps, followed by a minimal example Slurm script:

$ ssh <YourNetID>@della.princeton.edu
$ cd /scratch/gpfs/<YourNetID>
$ mkdir myjob
$ cd myjob
# put necessary files and Slurm script in myjob
$ sbatch job.slurm
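
For reference, job.slurm might look something like the following minimal sketch (the job name, resource requests, module and script name are placeholders to adapt to your own work):

#!/bin/bash
#SBATCH --job-name=myjob         # short name for the job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --cpus-per-task=1        # CPU-cores per task
#SBATCH --mem-per-cpu=4G         # memory per CPU-core
#SBATCH --time=01:00:00          # total run time limit (HH:MM:SS)

module purge
module load anaconda3/2024.6     # example module; adjust to what "module avail" shows
python myscript.py               # hypothetical script placed in the myjob directory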

Your files in /scratch/gpfs are not backed up, so after a job finishes, if you want to back up the output, copy or move the files to /tigress or /projects using a command like the following:

$ cp -r /scratch/gpfs/<YourNetID>/myjob /tigress/<YourNetID>

In summary, install your software in /home, run jobs on /scratch/gpfs and transfer final job output to /tigress or /projects for long-term storage and backup.


Additional Details

Given the small size of /home, users often run out of space, which can lead to many issues. If you need to request more space, see the checkquota page. That page also has directions for finding and removing large files and for dealing with large Conda environments.

The importance of not writing the output of actively running jobs to /tigress or /projects was emphasized above. Reading files or calling executables from these filesystems is allowed. However, in general, one will get better performance when using /scratch/gpfs or /home, so those filesystems should be preferred.
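
As an illustrative sketch (the program name and options are hypothetical), the command in a Slurm script may read reference data that lives on /tigress while writing its output to the job directory on /scratch/gpfs:

./myprogram --input /tigress/<YourNetID>/reference.dat --output results.out  # reads from /tigress, writes to the current directory on /scratch/gpfs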

A volatile file is one that changes over time. Actively running jobs tend to create volatile files, such as a log file that records the progress of the run. Avoid copying volatile files to /tigress or /projects, since every subsequent change will cause a new backup of the modified file to be made. The long-term storage systems are for non-volatile files only. Only after a job has completed should its output files be transferred from /scratch/gpfs to /tigress or /projects.

There have been multiple failures of /scratch/gpfs in the past, and in some cases data was lost. It is your responsibility to copy important files to /tigress or /projects for backup. Note that once you have copied the files to the backup system, you can continue using them on /scratch/gpfs, where the I/O performance is optimal.

Each compute node on each cluster has a local scratch disk at /tmp. Most users will never need to use the local scratch space. If your workflow is I/O bound or if you need to write large temporary files then it may be useful to you.
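
As a hedged sketch (the program, file names and resource values are placeholders), an I/O-bound job can stage its working files on the node-local disk and copy the results back to /scratch/gpfs before it exits:

#!/bin/bash
#SBATCH --job-name=tmp-io        # short name for the job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks
#SBATCH --time=01:00:00          # total run time limit (HH:MM:SS)

# create a private working directory on the node-local /tmp disk
WORKDIR=/tmp/$USER.$SLURM_JOB_ID
mkdir -p $WORKDIR

# stage the input from /scratch/gpfs, run on /tmp, then copy the results back
cp /scratch/gpfs/$USER/myjob/input.dat $WORKDIR
cd $WORKDIR
/home/$USER/myprogram input.dat > results.out  # hypothetical executable installed in /home
cp results.out /scratch/gpfs/$USER/myjob/
rm -rf $WORKDIR                                # clean up the local scratch space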

Tigressdata is a machine for visualization and data analysis. As indicated in the diagram above, it mounts the /scratch/gpfs filesystems of Della and Tiger as well as /tigress and /projects. After a job completes, you can SSH to tigressdata to start working with the new output. This keeps the login nodes of the large HPC clusters free from this type of work.
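
For example (assuming the hostname follows the same pattern as the cluster login nodes above):

$ ssh <YourNetID>@tigressdata.princeton.edu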


FAQ