system_details:slurm_partitions

This page describes the partitions available to users and the accounting applied to each partition. It assumes familiarity with partitions and with submitting jobs through SLURM; please refer to the HPC user guide for a general introduction to these topics.

Compute nodes are grouped into partitions so that users can choose the hardware on which their software runs. Each partition contains a subset of nodes with a particular type of hardware and has a specific maximum wall time.
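
The partitions configured on the cluster, together with their time limits and node states, can be listed at any time with the standard SLURM command sinfo. The output format below is only an illustrative sketch, not a literal listing for Grid.UP:

sinfo                        # list all partitions, their time limits and node states
sinfo -p batch               # restrict the listing to a single partition
sinfo --format="%P %l %D"    # show only partition name, time limit and node count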

Users select a partition via the SLURM option:

#SBATCH --partition=<partition name>

or its short form:

#SBATCH -p <partition name>
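
For example, a minimal job script that selects the default batch partition could look like the sketch below; the resource requests and the executable name my_program are placeholders to be adapted to the actual job:

#!/bin/bash
#SBATCH --partition=batch        # run on the thin compute nodes
#SBATCH --ntasks=1               # a single task on one core
#SBATCH --mem=500M               # 500 MB of memory (the smallest allocation on batch)
#SBATCH --time=01:00:00          # requested wall time, well below the 5-day limit
srun ./my_program                # my_program is a placeholder for your executable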

The partitions available on Grid.UP are summarised in the table below. For details of the hardware in each node type, please see the Grid.UP hardware page.

Partition name | Node type          | Smallest possible allocation | Max wall time | Notes
batch          | thin compute nodes | 1 core + 500 MB memory       | 5 days        | default partition for small jobs
big            | > 16 cores         | 16 cores + 16 GB memory      | 7 days        | only for jobs with > 16 cores or GPU
cfp            | restricted         | 1 core + 1 GB memory         | 28 days       | exclusive to cfp registered users
lsrelcm        | restricted         | 1 core + 1 GB memory         | 28 days       | exclusive to lsre/lcm registered users
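
As an illustration of the table above, a job submitted to the big partition has to request at least 16 cores and 16 GB of memory; a sketch of the corresponding directives (the wall time shown is the partition maximum):

#SBATCH --partition=big          # or the short form: #SBATCH -p big
#SBATCH --ntasks=16              # at least 16 cores (smallest possible allocation)
#SBATCH --mem=16G                # at least 16 GB of memory
#SBATCH --time=7-00:00:00        # maximum wall time on this partition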

<tbd>
