Containers on the HPC Clusters

Containers make it possible to run software developed for one system on another with little effort. Informally, the idea is to store the software together with all or most of its dependencies in a single, large file, so that everything "just works" when it comes time to run the software.


Singularity is a Secure Alternative to Docker

Docker is not secure on multi-user systems because it provides a means to gain root access to the host. For this reason, Docker is not available on the Princeton HPC clusters. This is not a problem, because we offer Singularity, a secure HPC analog of Docker. Singularity is compatible with all Docker images, and it can be used with GPUs and MPI applications. Singularity is available on Adroit, Della and Tiger. The Singularity documentation is useful for understanding the differences between Singularity, Docker and virtual machines.



Obtaining the Image

Some software is provided as a Singularity image, with the .sif or .simg file extension. If you already have the image then proceed to the next section. More commonly, however, a Docker image will be provided, and it must be converted to a Singularity image. For instance, if the installation directions say:

$ docker pull brinkmanlab/psortb_commandline

Then download and convert the Docker image to a Singularity image with:

$ singularity pull docker://brinkmanlab/psortb_commandline:1.0.2

This will produce the file psortb_commandline_1.0.2.sif in the current working directory, where 1.0.2 is a specific version of the software. Note that the URI of an image will often begin with library:// instead of docker://.
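The default output filename follows a simple convention: the image name and the tag from the URI, joined by an underscore. The plain-shell sketch below illustrates that convention using POSIX parameter expansion; it is only an illustration, not part of Singularity itself:

```shell
# Derive the default .sif filename that `singularity pull` produces
# from a docker:// URI: <image-name>_<tag>.sif
uri="docker://brinkmanlab/psortb_commandline:1.0.2"
ref="${uri#docker://}"             # brinkmanlab/psortb_commandline:1.0.2
name="${ref##*/}"                  # psortb_commandline:1.0.2
sif="${name%%:*}_${name##*:}.sif"  # join name and tag with an underscore
echo "$sif"                        # psortb_commandline_1.0.2.sif
```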

In some cases the build command should be used to create the image:

$ singularity build <name-of-image.sif> <URI>

Unlike pull, build will convert your image to the latest Singularity image format after downloading it.


Running the Image

To run the default command within the Singularity image, use:

$ singularity run ./psortb_commandline_1.0.2.sif <arg-1> <arg-2> ... <arg-N>

To run a specific command use exec:

$ singularity exec ./psortb_commandline_1.0.2.sif <command> <arg-1> <arg-2> ... <arg-N>

Use the shell command to run a shell within the container:

$ singularity shell ./psortb_commandline_1.0.2.sif
Singularity> cat /etc/lsb-release
Singularity> exit

Singularity by default exposes all environment variables from the host inside the container. Use the --cleanenv argument to prevent this:

$ singularity run --cleanenv <image.sif> <arg-1> <arg-2> ... <arg-N>
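The effect of --cleanenv is analogous to launching a process with a scrubbed environment, which can be demonstrated in plain shell without Singularity using env -i (the variable name below is made up for the example):

```shell
# Plain-shell analogue of --cleanenv: `env -i` starts a command with an
# empty environment, so host variables do not leak into the child process.
export MY_HOST_VAR=leaky                                      # hypothetical variable
/bin/sh -c 'echo "inherited: ${MY_HOST_VAR:-unset}"'          # inherited: leaky
env -i /bin/sh -c 'echo "clean: ${MY_HOST_VAR:-unset}"'       # clean: unset
```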

One can learn a lot about the image by inspecting its definition file:

$ singularity inspect --deffile psortb_commandline_1.0.2.sif

Example Conversion

Here is an example of converting directions for Docker to Singularity. The Docker directions are:

$ docker run -v /host/gtdbtk_output:/data -v /host/release89:/refdata ecogenomic/gtdbtk --help

To convert the above to Singularity, one would use:

$ singularity run -B /host/gtdbtk_output:/data -B /host/release89:/refdata </path/to>/gtdbtk_1.1.1.sif --help

The Singularity image in the above line can be obtained with:

$ singularity pull docker://ecogenomic/gtdbtk:1.1.1

To learn more about binding directories on the host to directories within the container, see the description of the -B option in the output of this command: singularity help run
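The -B option also accepts multiple src:dest pairs in a single, comma-separated list. The sketch below only constructs and prints the command line (a dry run with the example paths from above), so it can be inspected before running it on a cluster:

```shell
# Build a multi-bind command line; -B takes comma-separated src:dest pairs.
# This is a dry run: the command is printed, not executed.
binds="/host/gtdbtk_output:/data,/host/release89:/refdata"
echo singularity run -B "$binds" ./gtdbtk_1.1.1.sif --help
```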



Below is a sample Slurm script:

#!/bin/bash
#SBATCH --job-name=singularity   # create a short name for your job
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=4G                 # total memory per node (4 GB per cpu-core is default)
#SBATCH --time=00:05:00          # total run time limit (HH:MM:SS)
#SBATCH --mail-type=begin        # send mail when job begins
#SBATCH --mail-type=end          # send mail when job ends
#SBATCH --mail-type=fail         # send mail if job fails
#SBATCH --mail-user=<YourNetID>

singularity run </path/to>/psortb_commandline_1.0.2.sif <arg-1> <arg-2> ... <arg-N>



For complete documentation, see the Singularity User Guide. The help menu is shown below:

$ singularity --help

Linux container platform optimized for High Performance Computing (HPC) and
Enterprise Performance Computing (EPC)

Usage:
  singularity [global options...]

Description:
  Singularity containers provide an application virtualization layer enabling
  mobility of compute via both application and environment portability. With
  Singularity one is capable of building a root file system that runs on any
  other Linux system where Singularity is installed.

Options:
  -d, --debug     print debugging information (highest verbosity)
  -h, --help      help for singularity
      --nocolor   print without color output (default False)
  -q, --quiet     suppress normal output
  -s, --silent    only print errors
  -v, --verbose   print additional information
      --version   version for singularity

Available Commands:
  build       Build a Singularity image
  cache       Manage the local cache
  capability  Manage Linux capabilities for users and groups
  config      Manage various singularity configuration (root user only)
  delete      Deletes requested image from the library
  exec        Run a command within a container
  help        Help about any command
  inspect     Show metadata for an image
  instance    Manage containers running as services
  key         Manage OpenPGP keys
  oci         Manage OCI containers
  plugin      Manage Singularity plugins
  pull        Pull an image from a URI
  push        Upload image to the provided URI
  remote      Manage singularity remote endpoints
  run         Run the user-defined default command within a container
  run-help    Show the user-defined help for an image
  search      Search a Container Library for images
  shell       Run a shell within a container
  sif         siftool is a program for Singularity Image Format (SIF) file manipulation
  sign        Attach a cryptographic signature to an image
  test        Run the user-defined tests within a container
  verify      Verify cryptographic signatures attached to an image
  version     Show the version for Singularity

Examples:
  $ singularity help <command> [<subcommand>]
  $ singularity help build
  $ singularity help instance start

For additional help or support, please visit