Local scratch (i.e., /tmp) refers to the local disk physically attached to each compute node on a cluster. This is the fastest storage available to a job while it is running.

However, data stored in /tmp on one compute node cannot be read directly by another compute node, and files written to /tmp are deleted when the job completes. Your Slurm script must therefore copy any output data from /tmp to persistent storage (e.g., /scratch/gpfs) before the job ends.

One may also want to copy data to /tmp at the beginning of a job for fast reads during the execution of the job.

A directory can be created in /tmp, and its path can be passed to the application if needed. Below is an example Slurm script:

#!/bin/bash
#SBATCH --job-name=usetmp        # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem-per-cpu=4G         # memory per cpu-core (4G is default)
#SBATCH --time=00:01:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send email when job begins
#SBATCH --mail-type=end          # send email when job ends
#SBATCH --mail-user=<YourNetID>@princeton.edu

ScratchDir="/tmp/myjob"  # create a name for the directory
mkdir -p ${ScratchDir}   # make the directory
./myprog ${ScratchDir}   # run your program passing the directory path as a parameter

cp -r ${ScratchDir} /scratch/gpfs/<YourNetID>  # copy the output files to /scratch/gpfs
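If more than one of your jobs can land on the same node, a fixed name like /tmp/myjob risks one job clobbering another's files. A minimal sketch that makes the directory unique by appending $SLURM_JOB_ID (an environment variable Slurm sets inside every job); the fallback to the shell PID ($$) is only so the lines also work outside a job:

```shell
# Make the scratch directory unique per job so two jobs sharing a node
# cannot overwrite each other's files. SLURM_JOB_ID is set by Slurm
# inside a job; fall back to the shell PID ($$) outside of one.
ScratchDir="/tmp/myjob-${SLURM_JOB_ID:-$$}"
mkdir -p "${ScratchDir}"
echo "scratch directory: ${ScratchDir}"
```

The rest of the script is unchanged: pass ${ScratchDir} to your program and copy it back to /scratch/gpfs before the job ends.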

If you are copying data to /tmp at the beginning of a job then add a line such as the following after "mkdir -p ${ScratchDir}":

cp -r /tigress/aturing/mydata ${ScratchDir}

With the line above, you can then access your data using a path such as /tmp/myjob/mydata/file1.dat. If you are using multiple nodes, precede the cp command with "srun --ntasks-per-node=1" so that the data is copied into /tmp on every node of the job. In all cases, when your job completes, the files in /tmp will be deleted.
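Putting the pieces together for a multi-node job, the stage-in and stage-out steps might look like the sketch below. This is a scheduler-dependent fragment, not a complete script: the #SBATCH header is abbreviated, the node count is illustrative, and the paths and program name follow the examples above.

```shell
#!/bin/bash
#SBATCH --nodes=2                # sketch value, not a recommendation
#SBATCH --ntasks-per-node=1

ScratchDir="/tmp/myjob"

# srun with one task per node runs each command once on every node,
# so every node gets its own populated copy of /tmp/myjob
srun --ntasks-per-node=1 mkdir -p ${ScratchDir}
srun --ntasks-per-node=1 cp -r /tigress/aturing/mydata ${ScratchDir}

srun ./myprog ${ScratchDir}      # launch the program on all nodes

# Stage out from every node; a per-node destination named with hostname
# keeps the copies from overwriting each other in /scratch/gpfs
srun --ntasks-per-node=1 bash -c \
  'cp -r /tmp/myjob "/scratch/gpfs/<YourNetID>/out-$(hostname)"'
```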

Note that you can also write your output files directly to /scratch/gpfs instead of /tmp if you are not seeing a performance advantage from local scratch.

Remember: No /tmp or /scratch filesystem is ever backed up!