FAQ

Selecting Job Resources

When starting a job you are given options for the amount of CPU and RAM that will be allotted to it. Selecting too few resources can cause the job to fail, while selecting too many can leave it waiting in the queue for resources to become available. We provide a dashboard for checking the status of currently running jobs and the resources they have consumed so far: https://hpc.grit.ucsb.edu/pun/sys/job-efficiency/. That page can guide your resource selection for the next run of your job.

Jupyterhub

Generally, Python / JupyterHub jobs are single threaded and use only a single CPU core unless otherwise specified. Common libraries that enable multi-threading include:

  • NumPy
  • SciPy
  • NumExpr
  • Numba
  • TensorFlow
  • PyTorch

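NumPy and SciPy (and libraries built on them) take their thread counts from BLAS/OpenMP environment variables, which must be set before the library is imported. A minimal sketch for a trial run, assuming you requested 4 cores (the value "4" is an example; match it to your job's allocation):

```python
import os

# Cap BLAS/OpenMP thread pools to the cores you requested for the job.
# "4" is an example value for a 4-core trial run; these variables are
# only honored if set BEFORE NumPy/SciPy are imported.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS", "OPENBLAS_NUM_THREADS"):
    os.environ[var] = "4"

# os.cpu_count() reports how many cores the job can see.
print("cores visible to this job:", os.cpu_count())
```

After the trial run, check the job resource utilization analyzer to confirm the extra cores were actually used.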
If you use one or more of these libraries, please do a trial run with 2-4 cores and verify that they are being utilized with the job resource utilization analyzer.

R Studio

Generally, R Studio jobs will be single threaded unless a specific library or function is called. Some common things that enable multi-threading are listed below:

  • OMP_NUM_THREADS
  • MKL_NUM_THREADS
  • OPENBLAS_NUM_THREADS
  • RhpcBLASctl
  • data.table::setDTthreads
  • future
  • parallel
  • foreach

For these packages, control the worker count explicitly, e.g. plan(multisession, workers=…) or makeCluster(n).

Cluster Scratch Directory

There is a scratch directory at /home/hpc-scratch. Note that this is not backed up and data that has not been accessed for 30 days will be AUTOMATICALLY DELETED.
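As a sketch of how a job might stage large intermediate files in scratch (the /home/hpc-scratch path is the one above; the fallback to the system temp directory is only so the example runs on machines where that path does not exist):

```python
import os
import tempfile

# Scratch path from this FAQ; fall back to the system temp dir so the
# sketch also runs on machines where the cluster path does not exist.
scratch = "/home/hpc-scratch"
base = scratch if os.path.isdir(scratch) else tempfile.gettempdir()

# A per-user working directory for large, regenerable intermediates.
# Remember: scratch is not backed up, and anything unread for 30 days is purged.
workdir = os.path.join(base, os.environ.get("USER", "demo-user"), "job-tmp")
os.makedirs(workdir, exist_ok=True)
print("staging intermediates under:", workdir)
```

Keep only regenerable data here; copy anything you need to keep back to your home directory before the 30-day window elapses.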