Campus network connectivity is provided and maintained by OIT. There are currently dual paths between campus and the Internet, for redundancy and load balancing; each link provides 10 Gbps connectivity. Two 100 Gbps connections provide access to Internet2 and ESnet. More information about these connections can be found on the OIT web site.

Head nodes are all connected to the campus network with 10 Gbps connections, as is the machine tigressdata. The internal network varies from cluster to cluster. All clusters use a 1 Gbps private Ethernet network for local communication, while the NFS servers are connected using InfiniBand or Omni-Path. A high-performance, low-latency InfiniBand network is also attached for use with MPI parallel communication, as the sketch below illustrates.
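The division of labor between the networks is easiest to see from a small program. The following is a minimal sketch using the mpi4py Python bindings (assuming they are installed on the cluster; the script name is hypothetical). It passes a message between two ranks; when the ranks land on different nodes, the message travels over the low-latency InfiniBand fabric rather than the 1 Gbps Ethernet:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # Rank 0 sends a small Python object to rank 1.
        comm.send({"greeting": "hello from rank 0"}, dest=1, tag=11)
    elif rank == 1:
        # Rank 1 blocks until the message arrives over the interconnect.
        data = comm.recv(source=0, tag=11)
        print(data["greeting"])

Launched with two or more tasks (for example, srun python hello_mpi.py under the scheduler), the MPI library selects the fastest available transport automatically; no InfiniBand-specific code is required.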

Princeton Campus Network
  • 268 buildings

  • 3,250 switches

  • 11,750 wireless access points

  • 20 Gbps Internet connectivity

  • 200 Gbps Internet2 connectivity

  • over 74,000 registered devices

All private networks connect to the central GPFS storage (/tigress) over InfiniBand. This is also a private network, with fiber connections between the data center and Lewis Library, where some machines are currently housed. This private network is also used for /tigress backups.

Globus is an infrastructure for transferring large amounts of data between Princeton and any remote system that also participates in Globus. Research Computing supports Globus data transfer to and from the GPFS-based /tigress and /projects file systems and the scratch disk space connected to the research computing clusters (/scratch/gpfs on Della, /scratch/gpfs2 on Tiger, and /scratch/gpfs on Traverse). For information about using Globus at Princeton, see the Globus Data Transfer at Princeton web page.
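Transfers can also be driven programmatically. Below is a minimal sketch using the Globus Python SDK (globus_sdk); the client ID, endpoint UUIDs, and paths are placeholders, not real Princeton values, and should be replaced with values obtained from the Globus web app:

    import globus_sdk

    # Placeholder values -- substitute a real native-app client ID
    # (registered at developers.globus.org) and real endpoint UUIDs.
    CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"
    SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"       # e.g. a Princeton endpoint
    DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"  # e.g. a remote site

    # Interactive login: visit the printed URL, paste back the code.
    auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
    auth.oauth2_start_flow()
    print("Log in at:", auth.oauth2_get_authorize_url())
    tokens = auth.oauth2_exchange_code_for_tokens(input("Code: ").strip())
    token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(token)
    )

    # Describe a recursive directory transfer (paths are placeholders).
    tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                    label="tigress example")
    tdata.add_item("/tigress/yourname/results/", "/data/results/",
                   recursive=True)

    # Submit; Globus manages retries and reports progress asynchronously.
    task = tc.submit_transfer(tdata)
    print("Submitted Globus transfer task:", task["task_id"])

Once submitted, Globus performs the transfer between the two endpoints on your behalf, so neither your workstation nor your SSH session needs to stay connected.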