Stellar


Coming Soon - Stellar

Stellar, a heterogeneous cluster with AMD and Intel processors, is being built to support large-scale parallel jobs predominantly for use by researchers in astrophysical sciences, plasma physics, physics, chemical and biological engineering, and atmospheric and oceanic sciences.
 

Stellar is Operating in Limited Access Mode

A portion of the Intel nodes and the /scratch/gpfs file system have been installed, and a number of researchers have accounts on the system to facilitate the migration off of Perseus and Eddy. Decommissioning those systems will enable the installation of the remainder of the Stellar nodes. See the "Perseus/Eddy to Stellar Migration Checklist" below for important information. All researchers should keep in mind that the Stellar cluster is operating in a beta state. If you encounter any problems, please write to cses@princeton.edu.

 

How to Access the Stellar Cluster

To use the Stellar cluster, you must first request an account on Stellar and then log in through SSH.
 

  1. Requesting Access to Stellar

    Access to large clusters like Stellar is granted on the basis of brief faculty-sponsored proposals. See the section titled "For large clusters: Submit a proposal or contribute" for details. Perseus and Eddy users will not receive an account automatically; have your PI send a request to Research Computing.

    If, however, you are part of a research group with a faculty member who has contributed to or has an approved project on Stellar, that faculty member can sponsor additional users by sending a request to cses@princeton.edu. Any non-Princeton user must be sponsored by a Princeton faculty or staff member for a Research Computer User (RCU) account.
     
  2. Logging into Stellar

    Once you have been granted access to Stellar, you should be able to SSH into it using the command below:
    $ ssh <YourNetID>@stellar.princeton.edu

    For more on how to SSH, see the Knowledge Base article Secure Shell (SSH): Frequently Asked Questions (FAQ).
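
    As an optional convenience, you can add an entry like the one below to the ~/.ssh/config file on your own computer so that "ssh stellar" works as a shortcut. The alias name "stellar" is just an example:

    Host stellar
        HostName stellar.princeton.edu
        User <YourNetID>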

 

How to Use the Stellar Cluster

Since Stellar is a Linux system, knowing some basic Linux commands is highly recommended. For an introduction to navigating a Linux system, view the material associated with our Intro to Linux Command Line workshop. 

Using Stellar also requires some knowledge of how to properly use the file system, the module system, and the scheduler that handles each user's jobs. For an introduction to navigating Princeton's High Performance Computing systems, view the material associated with our Getting Started with the Research Computing Clusters workshop. Additional information specific to Stellar's file system, priority for job scheduling, etc. can be found below.

To attend a live session of either workshop, see our Trainings page for the next available workshop.

For more resources, see our Support - How to Get Help page.

 

Important Guidelines

All users are required to read and abide by the Stellar usage guidelines below:

 

Login Nodes

The login nodes, stellar-intel and stellar-amd, should be used for interactive work only, such as compiling programs and submitting jobs as described below. Please remember that these are shared resources for all users. No jobs should be run on the login nodes, with the exception of brief tests that last no more than a few minutes and use only a few CPU-cores. Where practical, we ask that your jobs entirely fill the compute nodes so that CPU-core fragmentation is minimized. For this cluster, Stellar, that means requesting cores in multiples of 96.

Use the "snodes" command to see the number of available nodes. Nodes have quad sockets with 24 cores/socket per node and 8GB/core memory. The back end network is 100Gb Infiniband, HDR100.

The /tigress and /projects directories are mounted over NFS on the login nodes as well as the compute nodes. This provides access to data and software built for projects.

 

Perseus/Eddy to Stellar Migration Checklist

If you are coming from Perseus or Eddy to Stellar, make sure you do the following:

  • Recompile your code. Stellar runs on the RHEL8 operating system while Perseus and Eddy used RHEL7. Additionally, all of the various software libraries such as MPI and HDF5 have been re-built from source specifically for the hardware and network fabric of Stellar. Because of this, all users need to recompile their code from source using the compilers and libraries provided by the new environment modules.
  • Use full environment module names. One can no longer use "module load anaconda3", for instance. Instead, the full name of the module must be specified (e.g., module load anaconda3/2020.11). Use the "module avail" command to see the available environment modules.
  • Be aware that /scratch/gpfs is shared between Stellar and Traverse. Users must be careful not to overwrite files by using the same job path on both clusters.
  • Move your data from /scratch/gpfs on Perseus to Stellar by June 8. The hardware for the /scratch/gpfs filesystem of Perseus will be repurposed, so users need to take action before the data is deleted. One can use scp, rsync, or Globus (see below) to make the transfer from Perseus to Stellar; a sample rsync command is sketched after this list.
  • Requests for new Stellar accounts must come from a PI. Perseus and Eddy users will not automatically get an account on Stellar. Your PI must make the request to Research Computing.
  • /tigress and /projects are not changing. Any data you have in those storage systems will remain untouched.
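
As a rough sketch of the rsync approach (the directory name "mydata" and the perseus.princeton.edu login host are only examples; substitute your own paths), one could run the transfer from a Perseus login node:

$ ssh <YourNetID>@perseus.princeton.edu
$ rsync -av /scratch/gpfs/<YourNetID>/mydata <YourNetID>@stellar.princeton.edu:/scratch/gpfs/<YourNetID>/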

 

Hardware Configuration

Processor                      Nodes   Cores per Node   Memory per Node   Max Instruction Set
2.9 GHz Intel Cascade Lake     TBA     96               768 GB            AVX-512
2.6 GHz AMD EPYC Rome          TBA     128              512 GB            AVX2

The Intel side of the cluster will consist of PU-only and PPPL-only nodes. Think of a Venn diagram with PU and PPPL circles: the intersection contains some number of shared nodes so that either side can expand. Those shared nodes are weighted differently so that they are the very last to be assigned.

 

Slurm

The default memory allocation on Stellar is 8 GB per core. Relying on the default value will work nicely for the Intel nodes, but it can cause problems for the AMD nodes since they offer 512/128 = 4 GB per core, which translates to --mem-per-cpu=4000M. Be sure to explicitly set the memory in Slurm scripts for the AMD nodes. Failure to do this may result in the following error message:

sbatch: error: Batch job submission failed: Requested node configuration is not available

Learn more about Slurm scripts and memory.
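
For example, a job script for the AMD nodes might set the memory explicitly with directives like these (a minimal sketch; your usual partition, walltime, and other directives are omitted):

#SBATCH --ntasks-per-node=128    # all 128 cores of an AMD node
#SBATCH --mem-per-cpu=4000M      # 4 GB per core instead of the 8 GB default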

 

Job Scheduling (QOS Parameters)

QOS             Time Limit   Jobs per User   Cores per User   Cores Available
stellar-debug   30 minutes   2               2048             4096
cimes-short     24 hours     2               2048             4096
cimes-medium    72 hours     2               2048             4096
cimes-long      7 days       2               2048             4096
pppl-short      24 hours     10              4096             7680
pppl-medium     72 hours     4               2048             4096
pppl-long       7 days       2               2048             4096
pu-short        24 hours     6               5000             7680
pu-medium       72 hours     2               3000             7680
pu-long         7 days       2               2048             4096

Serial Partition

Any job submitted to the PU or PPPL partition requesting 47 or fewer CPU-cores will be assigned to the serial queue. Jobs in this queue will have the lowest priority of all jobs since the cluster is intended for multi-node jobs. If you need to run a large number of serial jobs (47 cores or fewer), consider moving that work to another cluster such as Della.

Scheduling by Projects

Scheduling for PPPL and CIMES is done based on project. Users in these groups should add the following Slurm directive to all scripts:

#SBATCH -A <account-name>
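
If you are unsure of your account name, sacctmgr can usually list the accounts associated with your user (a hedged sketch; the exact output format may differ):

$ sacctmgr show associations user=<YourNetID> format=account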

 

Compiler Flags and Math Libraries

Intel Nodes

The Intel nodes feature Cascade Lake processors with AVX-512 as the highest instruction set. As a starting point, consider using these optimization flags when compiling a C++ code, for instance:

$ ssh <YourNetID>@stellar-intel.princeton.edu
$ module load intel/2021.1.2
$ icpc -Ofast -xCORE-AVX512 -o mycode mycode.cpp

The Intel Math Kernel Library (MKL) is automatically loaded as a module when an Intel compiler module is loaded.

For GCC:

$ module load gcc-toolset/10
$ g++ -Ofast -march=cascadelake -o mycode mycode.cpp

AMD Nodes

The AMD nodes feature the EPYC processor with AVX2 as the highest instruction set. See the Quick Reference Guide by AMD for compiler flags for different compilers (AOCC, GCC, Intel) and the AOCC user guide. As a starting point, consider using these optimization flags when compiling a C++ code, for instance:

$ ssh <YourNetID>@stellar-amd.princeton.edu
$ module load aocc/3.0.0 aocl/aocc/3.0_6
$ clang++ -Ofast -march=native -o mycode mycode.cpp

For a parallel Fortran code:

$ ssh <YourNetID>@stellar-amd.princeton.edu
$ module load aocc/3.0.0 aocl/aocc/3.0_6 openmpi/aocc-3.0.0/4.1.0
$ mpif90 -Ofast -march=native -o hw hello_world_mpi.f90

Load the aocl module to make available AMD's BLIS and libFLAME linear algebra libraries as well as FFTW3 and ScaLAPACK. Excellent performance was found for the High-Performance LINPACK benchmark using GCC and these libraries.
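
As a hedged example of linking against these libraries (the library names libblis and libflame follow the standard AOCL conventions; the exact link line on Stellar may differ depending on how the module sets the library path):

$ module load aocc/3.0.0 aocl/aocc/3.0_6
$ clang++ -Ofast -march=native -o mycode mycode.cpp -lblis -lflame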

If you wish to use the Intel compiler on the AMD nodes, consider these flags:

$ module load intel/2021.1.2
$ icpc -Ofast -march=core-avx2 -o mycode mycode.cpp

Use the -march option above if you encounter the following error message:

Please verify that both the operating system and the processor support Intel(R) X87, CMOV, MMX, FXSAVE, SSE, SSE2, SSE3, SSSE3, SSE4_1, SSE4_2, POPCNT, AVX and F16C instructions.

 

Environment Modules

Loading Modules

The environment modules that you load define part of your software environment, which plays a role in determining the results of your code. Run the "module avail" command to see the available modules. For numerous reasons, including scientific reproducibility, when loading an environment module you must specify the full name of the module. This can be done using module load, for example:

$ module load intel/19.1.1.217

You will encounter an error if you do not specify the full name of the module:

$ module load anaconda3
ERROR: No default version defined for 'anaconda3'

$ module load anaconda3/2020.11
$ python --version
Python 3.8.5

Notable Modules and Modules to Avoid

  • aocc/<version> makes the AMD compilers available
  • aocl/<compiler>/<version> makes the AMD math libraries available
  • cmake/3.18.2 provides a newer CMake over the system version (3.11.4)
  • gcc/4.85 provides an older GNU Compiler Collection (GCC); it should only be used in rare cases
  • gcc/8.3.1 is equivalent to using the system GCC
  • gcc-toolset/10 makes GCC 10.2.1 available (use this when the system GCC is insufficient)
  • nvhpc/21.1 provides the NVIDIA compilers and libraries (the compilers replace PGI)
  • rh/devtoolset/7 makes GCC 7.3.1 available; it should be avoided in favor of the system GCC

Module Aliases

If you would rather use short aliases instead of full module names then see the environment modules page.

 

Software

The software environment on Stellar is very similar to the other Research Computing clusters. See the general documentation for Princeton University Research Computing. If you find that you need software packages that are not installed on Stellar then please send a request via e-mail to cses@princeton.edu.

Anaconda Python

The Anaconda Python distribution should be used when working with Python on Stellar:

$ module avail anaconda3
$ module load anaconda3/2020.11
$ python --version

See our Python page for more information on using the Anaconda Python distribution on the Research Computing clusters. One may also consider installing Miniconda. We do not provide a Python 2 anaconda module on Stellar since that version of the language has been unsupported for over a year.
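
As a minimal sketch of creating an isolated Conda environment after loading the module (the environment name and packages below are only examples):

$ module load anaconda3/2020.11
$ conda create --name myenv numpy scipy
$ conda activate myenv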

System Python

The system Python is available, but it should be avoided in favor of the Anaconda Python distribution, which provides optimizations for our hardware. The system Python exists largely for the system administrators to install software. These commands illustrate its use:

$ python
-bash: python: command not found

$ python3
Python 3.6.8 (default, Nov 15 2020, 11:45:35) 
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

$ python2
Python 2.7.17 (default, Nov 16 2020, 23:55:19) 
[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

Again, for scientific work one should use the Anaconda Python distribution and not the system Python.

 

Globus

To use Globus to transfer data to the /scratch/gpfs filesystem of Stellar, which is shared with Traverse, use this endpoint:

Princeton Traverse/Stellar Scratch DTN

 

Visualization Nodes

There are two dedicated nodes for visualization and data analysis:

$ ssh <YourNetID>@stellar-vis1.princeton.edu  # PU
$ ssh <YourNetID>@stellar-vis2.princeton.edu  # PPPL

These nodes support TurboVNC, and they each offer two NVIDIA V100 GPUs for GPU-enabled software. Please use these nodes for visualization and data analysis instead of the Stellar head nodes. For running Jupyter notebooks, use jupyter.rc, where /stellar/scratch/gpfs/<YourNetID> is available, or follow these directions with the appropriate modifications for stellar-vis[1,2].