# Running recurring jobs with scrontab

THIS IS A WORK IN PROGRESS!

GRIT HPC supports periodic job scheduling using Slurm's scrontab feature. This allows users to run jobs on a schedule using cron-style syntax, similar to Linux crontab.
Typical uses include:

- Automated data processing
- Periodic log analysis
- Monitoring workflows
- Triggering simulations when new data arrives
- Cleanup or maintenance scripts
## What is scrontab?

scrontab is Slurm's periodic job scheduler. Instead of running commands directly like Linux cron, scrontab entries typically run sbatch commands, which submit jobs to the Slurm scheduler.
This ensures scheduled jobs:

- obey cluster scheduling policies
- respect fairshare
- run on compute nodes rather than login nodes
## Basic Workflow

The typical workflow is:

scrontab entry
↓
sbatch command
↓
Slurm scheduler
↓
compute node job execution

A scheduled job therefore consists of two parts:

- A Slurm batch script
- A scrontab entry that submits it
## Example: Run a Job Daily at 3:30 AM

### Step 1 — Create a Slurm Job Script

daily_job.sh:

```bash
#!/bin/bash
#SBATCH --job-name=daily_job
#SBATCH --partition=grit_nodes
#SBATCH --account=slurm_users
#SBATCH --output=%x-%j.out
#SBATCH --time=00:10:00
#SBATCH --mem=1G
#SBATCH --cpus-per-task=1

echo "Starting job at $(date)"
python /home/$USER/scripts/daily_task.py
echo "Finished at $(date)"
```
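Before scheduling the script, it can be sanity-checked with a one-off submission:

```
sbatch daily_job.sh
```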
### Step 2 — Create a scrontab File

daily.scron runs the job every day at 03:30.
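A minimal daily.scron matching that schedule might look like this (assuming daily_job.sh sits in your home directory):

```
30 3 * * * sbatch /home/$USER/daily_job.sh
```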
### Step 3 — Install the scrontab
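Like crontab, the scrontab command accepts a file argument and replaces your current scrontab with its contents:

```
scrontab daily.scron
```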
## Example: Weekly Job

Run a job every Monday at 6:00 AM.

scron entry:
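A matching entry could look like the following (the weekly_job.sh path is illustrative):

```
0 6 * * 1 sbatch /home/$USER/weekly_job.sh
```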
## Recommended Pattern: Lightweight Check Jobs

Often you want to run a small check frequently, but only launch a large compute job when needed. This avoids wasting cluster resources.

Example use cases:

- run analysis when new data arrives
- trigger pipelines when files appear
- monitor experiment output
## Example: Periodic Check That Launches a Large Job

scrontab entry, run every 5 minutes:
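Assuming the check script is installed under /home/$USER/bin (the same location used for big_job.sh), the entry would be:

```
*/5 * * * * /home/$USER/bin/check_and_submit.sh
```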
### Check Script

check_and_submit.sh:

```bash
#!/bin/bash
set -euo pipefail

JOB_NAME="data_pipeline"
PARTITION="grit_nodes"
ACCOUNT="slurm_users"

# Example condition:
# run job if files exist in incoming folder
if ! find /data/incoming -type f | grep -q .; then
    exit 0
fi

# Prevent duplicate submissions
if squeue -h -u "$USER" -n "$JOB_NAME" -t PD,R | grep -q .; then
    exit 0
fi

echo "Submitting pipeline job..."
sbatch \
    -J "$JOB_NAME" \
    -A "$ACCOUNT" \
    -p "$PARTITION" \
    -c 8 \
    --mem=32G \
    -t 04:00:00 \
    /home/$USER/bin/big_job.sh
```
This script:

- Checks if work exists
- Ensures a job isn't already queued
- Submits the larger compute job
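The "does work exist?" test can be exercised in isolation, without Slurm. The sketch below substitutes a temporary directory for /data/incoming (directory and variable names are illustrative):

```shell
#!/bin/bash
# Standalone sketch of the "work exists" check from check_and_submit.sh.
# A temporary directory stands in for /data/incoming.

INCOMING=$(mktemp -d)

# Empty directory: `find` prints nothing, so `grep -q .` fails.
if find "$INCOMING" -type f | grep -q .; then
    BEFORE="work"
else
    BEFORE="empty"
fi

# Simulate new data arriving.
touch "$INCOMING/sample.dat"

if find "$INCOMING" -type f | grep -q .; then
    AFTER="work"
else
    AFTER="empty"
fi

echo "before=$BEFORE after=$AFTER"    # prints: before=empty after=work
rm -rf "$INCOMING"
```

The `find … | grep -q .` idiom succeeds as soon as find prints any path, which is why the real script can use it as a cheap "any files at all?" test.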
## Cron Time Syntax

scrontab uses standard cron syntax:

```
┌──────────── minute (0 - 59)
│ ┌────────── hour (0 - 23)
│ │ ┌──────── day of month (1 - 31)
│ │ │ ┌────── month (1 - 12)
│ │ │ │ ┌──── day of week (0 - 6) (Sunday=0)
│ │ │ │ │
30 3 * * * command
```

For example, `30 3 * * *` runs daily at 03:30.
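A few common patterns for quick reference:

```
*/5 * * * *     every 5 minutes
0 * * * *       every hour, on the hour
30 3 * * *      every day at 03:30
0 6 * * 1       every Monday at 06:00
0 0 1 * *       first day of each month at midnight
```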
## Managing scrontab

View your scheduled jobs: `scrontab -l`

Remove your scrontab: `scrontab -r`

Edit your scrontab interactively: `scrontab -e`
## Best Practices on GRIT HPC

Recommended guidelines:

### Keep scheduled jobs lightweight

scrontab jobs should:

- run quickly
- avoid large compute workloads
- primarily submit jobs via sbatch

### Avoid duplicate jobs

Always check if a job is already running, for example: `squeue -u $USER -n <job_name>`

### Use proper partitions

Typical settings on GRIT: `--partition=grit_nodes --account=slurm_users`

### Log job output

Use standard Slurm logging, for example: `#SBATCH --output=%x-%j.out`
## Example Use Cases on GRIT
Common ways researchers use scrontab:
| Use case | Description |
|---|---|
| Automated pipelines | Process newly uploaded datasets |
| Sensor ingestion | Run periodic data import jobs |
| Simulation restarts | Relaunch long simulations |
| Cleanup jobs | Remove temporary files |
| Monitoring | Check cluster experiment outputs |
## Summary

scrontab provides a simple way to schedule recurring Slurm jobs while still using the cluster scheduler.

Typical pattern:

scrontab entry
↓
lightweight check
↓
sbatch submission
↓
Slurm job runs on compute nodes

This allows users to automate workflows without running heavy workloads on login nodes.