
HPC Basics

Introduction

This page covers the basics for new users of the High Performance Computing (HPC) systems offered by GRIT, including the tools needed to access the systems efficiently and use the software they run.

The HPC systems are listed below, with each machine's operating system, RAM, and the CPU count to use for the Slurm 'cpus-per-task' line:

hammer: Fedora 34, 125 GB RAM, 44 cpus

anvil: CentOS 7, 500 GB RAM, 30 cpus

forge: Fedora 34, 500 GB RAM, 24 cpus

tong: Fedora 34, 251 GB RAM, 28 cpus

bellows: CentOS 8, 1.5 TB RAM, 96 cpus

Hammer is the oldest and Bellows is the newest. (Append ".eri.ucsb.edu" to the name if the fully qualified domain name is needed, e.g. hammer.eri.ucsb.edu.) All should have R and Python installed.
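
Once logged in, you can confirm which versions a given machine provides; these are standard commands, and the versions will vary by system:

# check the installed R and Python versions on the machine you are logged into
R --version
python3 --version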

Typically we set up some local scratch space for you to store working data (see Storage Notes below).

 
# Info obtained from each system with:
scontrol show node | grep CPU 

Access 

Access to the HPC systems via secure shell (SSH), using your username and password from a command line, is limited to campus IP addresses and to other machines within the GRIT ecosystem. For example:

# connect to the bastion host
ssh username@ssh.grit.ucsb.edu

# or go straight to your HPC machine, e.g. bellows.eri.ucsb.edu
ssh username@bellows.eri.ucsb.edu

For much more detail, and instructions on more efficient access, see this page:

[This information requires an update; a link will be re-posted once it is available.]
https://bookstack.grit.ucsb.edu/books/remote-access/page/ssh-key-setup
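
In the meantime, one common OpenSSH convenience is a ProxyJump entry in ~/.ssh/config that routes through the bastion host. This is a sketch based on the hostnames above, not official GRIT instructions, and it assumes the bastion host permits jump connections:

# ~/.ssh/config (a sketch; replace 'username' with your own account)
Host bellows
    HostName bellows.eri.ucsb.edu
    User username
    ProxyJump username@ssh.grit.ucsb.edu

With an entry like this, "ssh bellows" connects through the bastion host automatically.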

This page has information for connecting from off campus:

https://bookstack.grit.ucsb.edu/books/remote-access/page/ssh-mfa-setup

Storage Notes

Almost all storage in GRIT is available to all of the systems you can access, including your home directory. For example, /home/<user> will be the same on all systems, as will /home/<lab>. The benefit of this is that it makes moving between machines very easy. The downside is that reads and writes to these directories traverse the network, which can be an issue for jobs writing or reading lots of data.
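
To see which filesystem a directory actually lives on, and how much free space it has, df works on any of these paths (the source shown for a network mount will vary by system):

# show the filesystem behind your home directory and its free space
df -h /home/<user>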

Each of the HPC machines has local scratch storage (meaning locally mounted to the computer and *not* backed up). On bellows it is under /scratch (so /scratch/<lab>/, for example); on some of the older machines it is /raid/scratch/. This space is specifically for reading and writing by jobs running on the machine; it is available as a community resource and is not for long-term storage.
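
A typical workflow is sketched below; the exact directory layout and permissions under /scratch are assumptions, so check with your lab:

# create a working directory under the local scratch space on bellows
# (path layout assumed from the notes above)
mkdir -p /scratch/<lab>/<user>/myjob

# stage input data there, run the job against it, then copy results back
# to your home or lab directory and clean up when finished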

Moving Data

rsync is a very powerful and widely used tool for moving data. The manual page has many useful examples (from the command line type "man rsync"). Here are a couple of examples to get you started:

# the command format is:
# rsync [options] <source> <destination>
#
# So the following copies from a local folder to a destination folder on a remote host named bellows.eri.ucsb.edu:
rsync -avr /data/ username@bellows.eri.ucsb.edu:/some/other/folder/

# the -avr switches are: 
# 'a' for archive mode (when in doubt use this)
# 'v' for verbose (rsync will tell you what is going on)
# 'r' for recursive, recurse into directories (already implied by 'a')

One trick to learn with rsync is the difference a trailing slash on the source path makes:

# this command copies contents of /data/ to the destination directory /some/other/folder/
rsync -avr /data/ username@bellows.eri.ucsb.edu:/some/other/folder/

# ... while this command creates a folder 'data' on the destination and copies all of its contents:
rsync -avr /data username@bellows.eri.ucsb.edu:/some/other/folder/

When in doubt, test with --dry-run, and rsync will tell you what would have happened:

rsync -avr --dry-run /data username@bellows.eri.ucsb.edu:/some/other/folder/
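
Copying in the other direction (remote to local) works the same way; the paths below are placeholders:

# pull a folder from bellows down to a local directory
rsync -avr username@bellows.eri.ucsb.edu:/some/other/folder/ /data/results/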


Running Code

To run your code and use the HPC machines in a fair and efficient way, you'll use the Slurm queuing system. See: https://bookstack.grit.ucsb.edu/books/hpc-usage/page/slurm-usage

A few key slurm commands are:

# submit a job (where slurm_test.sh is a shell script for invoking a program)
sbatch slurm_test.sh

# show the whole queue
squeue -a

# look at a job's details
scontrol show job <jobid>
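
For reference, here is a minimal sketch of what a script like slurm_test.sh might contain; the resource values and the Rscript call are placeholder assumptions, so consult the Slurm usage page above for site-specific settings:

#!/bin/bash
#SBATCH --job-name=slurm_test
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --output=slurm_test_%j.out

# the program to run; this Rscript call is just a placeholder
Rscript my_analysis.R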

screen can be used to keep a job running after you disconnect from a remote terminal session, for example when running a very long rsync job. See: https://bookstack.grit.ucsb.edu/books/hpc-usage/page/the-screen-program
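
A typical screen workflow looks like this (the session name 'transfer' is arbitrary):

# start a named screen session
screen -S transfer

# run the long job inside it (e.g. a large rsync), then detach with
# Ctrl-a followed by d; the job keeps running after you log out

# later, reattach to the session
screen -r transfer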

Other Notes

All the HPC systems are built on the ZFS file system. To see information, for example about how much space is available:

[username@hpcsystem ~]$ zfs list
NAME                   USED  AVAIL     REFER  MOUNTPOINT
sandbox1              21.3T  4.00T       96K  /mnt/sandbox1
sandbox2              21.3T  4.00T       96K  /mnt/sandbox2
sandbox3              21.3T  4.00T     21.1T  /mnt/sandbox3

... this system has 4 terabytes available for storage.
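
To see how much of that space one of your own directories is using, du works on any of these filesystems (the path below is a placeholder):

# summarize the size of a directory (replace the path with your own)
du -sh /home/<user>/myproject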

# Commands for retrieving the system specifications (OS, RAM, Cores):
cat /etc/redhat-release 
free -h

# get the number of CPUs for slurm
scontrol show node | grep CPU

# old way:
grep -c '^processor' /proc/cpuinfo