Running recurring jobs with scrontab

THIS IS A WORK IN PROGRESS!

Scheduling Recurring Jobs on GRIT HPC with scrontab

GRIT HPC supports periodic job scheduling using Slurm’s scrontab feature.
This allows users to run jobs on a schedule using cron-style syntax, similar to Linux crontab.

Typical uses include:

  • Automated data processing

  • Periodic log analysis

  • Monitoring workflows

  • Triggering simulations when new data arrives

  • Cleanup or maintenance scripts


What is scrontab?

scrontab is Slurm’s periodic job scheduler.

Instead of running commands directly like Linux cron, scrontab typically runs sbatch commands, which submit jobs to the Slurm scheduler.

This ensures scheduled jobs:

  • obey cluster scheduling policies

  • respect fairshare

  • run on compute nodes rather than login nodes


Basic Workflow

The typical workflow is:

scrontab schedule → sbatch command → Slurm scheduler → compute node job execution

A scheduled job therefore consists of two parts:

  1. A Slurm batch script

  2. A scrontab entry that submits it


Example: Run a Job Daily at 3:30 AM

Step 1 — Create a Slurm Job Script

daily_job.sh

#!/bin/bash
#SBATCH --job-name=daily_job
#SBATCH --partition=grit_nodes
#SBATCH --account=slurm_users
#SBATCH --output=%x-%j.out
#SBATCH --time=00:10:00
#SBATCH --mem=1G
#SBATCH --cpus-per-task=1

echo "Starting job at $(date)"

python /home/$USER/scripts/daily_task.py

echo "Finished at $(date)"


Step 2 — Create a scrontab File

daily.scron

30 3 * * * sbatch /home/$USER/daily_job.sh

This runs every day at 03:30.


Step 3 — Install the scrontab

scrontab daily.scron


Example: Weekly Job

Run a job every Monday at 6:00 AM.

scrontab entry:

0 6 * * 1 sbatch /home/$USER/weekly_job.sh
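The weekly job script follows the same structure as daily_job.sh. The sketch below is a hypothetical example, assuming the weekly task is cleaning up old scratch files; SCRATCH_DIR is an assumption, so point it at your own scratch area.

```shell
#!/bin/bash
#SBATCH --job-name=weekly_job
#SBATCH --partition=grit_nodes
#SBATCH --account=slurm_users
#SBATCH --output=%x-%j.out
#SBATCH --time=00:30:00
#SBATCH --mem=1G
#SBATCH --cpus-per-task=1

# Hypothetical maintenance task: delete scratch files older than 7 days.
# SCRATCH_DIR is an assumed location -- adjust for your own workflow.
SCRATCH_DIR="${SCRATCH_DIR:-$HOME/scratch_tmp}"

mkdir -p "$SCRATCH_DIR"
echo "Cleaning $SCRATCH_DIR at $(date)"
find "$SCRATCH_DIR" -type f -mtime +7 -delete
echo "Finished at $(date)"
```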


Recommended Pattern: Lightweight Check Jobs

Often you want to run a small check frequently, but only launch a large compute job when needed.

This avoids wasting cluster resources.

Example use cases:

  • run analysis when new data arrives

  • trigger pipelines when files appear

  • monitor experiment output


Example: Periodic Check That Launches a Large Job

scrontab entry

Run every 5 minutes:

*/5 * * * * /home/$USER/bin/check_and_submit.sh


Check Script

check_and_submit.sh

#!/usr/bin/env bash
set -euo pipefail

JOB_NAME="data_pipeline"
PARTITION="grit_nodes"
ACCOUNT="slurm_users"

# Example condition:
# run the job only if files exist in the incoming folder
# (-print -quit stops at the first match, which also avoids a spurious
# pipefail failure when grep -q exits early)
if ! find /data/incoming -type f -print -quit | grep -q .; then
    exit 0
fi

# Prevent duplicate submissions: skip if a job with this name is pending or running
if squeue -h -u "$USER" -n "$JOB_NAME" -t PD,R | grep -q .; then
    exit 0
fi

echo "Submitting pipeline job..."

sbatch \
    -J "$JOB_NAME" \
    -A "$ACCOUNT" \
    -p "$PARTITION" \
    -c 8 \
    --mem=32G \
    -t 04:00:00 \
    /home/$USER/bin/big_job.sh

This script:

  1. Checks if work exists

  2. Ensures a job isn’t already queued

  3. Submits the larger compute job
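The file-presence check at the heart of the script can be tried in isolation before wiring it into scrontab. The sketch below substitutes a throwaway temporary directory for /data/incoming so it is safe to run anywhere:

```shell
# Stand-in for /data/incoming so the condition can be exercised safely
INCOMING=$(mktemp -d)

# Empty directory: find prints nothing, so grep -q fails
if find "$INCOMING" -type f -print -quit | grep -q .; then
    echo "work found"
else
    echo "no work"
fi

# Drop in a file and the same test now succeeds
touch "$INCOMING/sample.dat"
if find "$INCOMING" -type f -print -quit | grep -q .; then
    echo "work found"
fi
```

Running this prints "no work" followed by "work found".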


Cron Time Syntax

scrontab uses standard cron syntax:

┌──────────── minute (0 - 59)
│ ┌────────── hour (0 - 23)
│ │ ┌──────── day of month (1 - 31)
│ │ │ ┌────── month (1 - 12)
│ │ │ │ ┌──── day of week (0 - 6) (Sunday=0)
│ │ │ │ │
30 3 * * * command

Example:

30 3 * * * sbatch my_job.sh

Runs daily at 03:30.
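A few more schedule patterns, all reusing the my_job.sh placeholder from the example above:

```
# Every 15 minutes
*/15 * * * * sbatch my_job.sh

# Every hour, on the hour
0 * * * * sbatch my_job.sh

# First day of each month at 00:00
0 0 1 * * sbatch my_job.sh

# Weekdays (Mon-Fri) at 08:00
0 8 * * 1-5 sbatch my_job.sh
```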


Managing scrontab

View your scheduled jobs

scrontab -l


Remove your scrontab

scrontab -r


Edit your scrontab interactively

scrontab -e


Best Practices on GRIT HPC

Keep scheduled jobs lightweight

scrontab jobs should:

  • run quickly

  • avoid large compute workloads

  • primarily submit jobs via sbatch


Avoid duplicate jobs

Always check if a job is already running:

squeue -u $USER
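In a check script, this guard is easier to reuse as a small function. The helper below, job_is_queued, is a hypothetical name (not a GRIT or Slurm built-in); it wraps the same squeue filters used in check_and_submit.sh:

```shell
#!/usr/bin/env bash

# Hypothetical helper: succeeds if a job with the given name is already
# pending (PD) or running (R) for the current user.
job_is_queued() {
    local name="$1"
    squeue -h -u "$USER" -n "$name" -t PD,R 2>/dev/null | grep -q .
}

# Usage in a check script:
#   if job_is_queued "data_pipeline"; then
#       exit 0   # a copy is already queued; do not submit another
#   fi
```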


Use proper partitions

Typical settings on GRIT:

--partition=grit_nodes
--account=slurm_users


Log job output

Use standard Slurm logging:

#SBATCH --output=%x-%j.out
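For recurring jobs it can help to collect logs in one dedicated directory rather than the submit directory. %u (user name), %x (job name), and %j (job ID) are standard Slurm filename patterns; the logs directory is an assumption and must already exist (e.g. mkdir -p ~/logs), or Slurm cannot create the file:

```
#SBATCH --output=/home/%u/logs/%x-%j.out
#SBATCH --error=/home/%u/logs/%x-%j.err
```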


Example Use Cases on GRIT

Common ways researchers use scrontab:

Use case              Description
--------------------  --------------------------------
Automated pipelines   Process newly uploaded datasets
Sensor ingestion      Run periodic data import jobs
Simulation restarts   Relaunch long simulations
Cleanup jobs          Remove temporary files
Monitoring            Check cluster experiment outputs

Summary

scrontab provides a simple way to schedule recurring Slurm jobs while still using the cluster scheduler.

Typical pattern:

scrontab schedule → lightweight check → sbatch submission → Slurm job runs on compute nodes

This allows users to automate workflows without running heavy workloads on login nodes.