GPU Resources

The HPC cluster has a total of 32 NVIDIA L40S GPUs and one NVIDIA A30, spread across various hosts and resources.

Interactive Apps

To access these resources via the Open OnDemand web UI, check the Enable NVIDIA GPU box at the bottom of the interactive session form; your interactive session will then start with access to a GPU.

[Screenshot: interactive session form with the Enable NVIDIA GPU checkbox]

SLURM CLI

GPU resources can also be accessed via the SLURM CLI. Below are some examples:

#!/bin/bash
#SBATCH -J gpu-l40s-test
#SBATCH -p grit_nodes
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH -t 01:00:00

# Confirm the job can see its allocated GPU
nvidia-smi

Or as a one-liner:

srun -p grit_nodes --gres=gpu:1 --cpus-per-task=4 --mem=16G --pty <your command here>
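Once a session starts, you can verify which device the job was granted. When a job is allocated GPUs via --gres=gpu:N, SLURM exports CUDA_VISIBLE_DEVICES with the visible device indices. A minimal Python sketch (the helper name is illustrative, not part of SLURM) that reads this inside a job:

```python
import os


def allocated_gpus() -> list[str]:
    """Return the GPU indices SLURM exposed to this job.

    SLURM sets CUDA_VISIBLE_DEVICES to a comma-separated list of
    device indices for jobs granted --gres=gpu:N. Outside a GPU
    job the variable is typically unset, so we return an empty list.
    """
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [d for d in devices.split(",") if d]


if __name__ == "__main__":
    gpus = allocated_gpus()
    if gpus:
        print(f"Job can see GPU(s): {', '.join(gpus)}")
    else:
        print("No GPUs visible to this job")
```

Running this through srun (for example, `srun -p grit_nodes --gres=gpu:1 --pty python3 check_gpu.py`) should report one device index if the allocation succeeded.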

Notes

GPU resources work a little differently in SLURM than CPU and RAM resources. In the current setup, GPUs cannot be exclusively reserved: GPU capacity is limited, and SLURM cannot reserve any less than a full GPU. As a result, jobs submitted to a GPU node may share GPU resources with other jobs. This may change as we see increased use of GPUs.