When you obtain a new HPC account, you will be provided with storage in several locations. The shared storage (/home, /groups, /xdisk) is accessible from any of the three production clusters: Puma, Ocelote, and ElGato. The temporary space (/tmp) is local to each individual compute node.
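As a quick orientation, the commands below use standard Linux tools (nothing UA-specific beyond the paths named above) to show how the shared and node-local spaces differ:

    # The shared filesystems are mounted at the same paths on Puma, Ocelote, and ElGato:
    df -h /home /groups /xdisk

    # /tmp is local to whichever node you are on, so its size and free space vary by node:
    df -h /tmp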
All University of Arizona Principal Investigators (PIs, i.e., faculty) who register for access to UA High Performance Computing (HPC) receive free allocations of compute time on each cluster, which are shared among all members of their team.
How to Find Your Remaining Allocation
You can view your remaining allocation using the HPC User Portal at https://portal.hpc.arizona.edu/portal/.
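If you prefer the command line, your allocation can also be checked from a login node. The sketch below assumes the site-provided `va` (view allocation) utility is available in your login shell; if it is not, use the portal link above.

    # Show your group's allocations and remaining hours (site-provided utility, assumed available):
    va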
You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do. The queues are set up so that jobs that do not request GPUs will not run on the GPU nodes.
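As a concrete illustration, here is a minimal Slurm batch script for the standard CPU nodes; the account name, program, and resource values are placeholders to be replaced with your own.

    #!/bin/bash
    #SBATCH --job-name=cpu_example
    #SBATCH --account=mygroup        # placeholder: your PI's group name
    #SBATCH --partition=standard     # charged against the group's standard allocation
    #SBATCH --ntasks=4
    #SBATCH --time=01:00:00

    ./my_program                     # placeholder: your executable

A GPU job would additionally request a GPU, for example with a directive such as #SBATCH --gres=gpu:1; without a GPU request, the scheduler will not place the job on the GPU nodes.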
Slurm and PBS Batch Queues
The batch queues on the different systems have the following wallclock, core, GPU, and memory limits.
| Max Wallclock Hrs Per Job | Total Cores in Use Per Group ** | Total GPUs in Use Per Group *** | Max Memory (GB) | Max Number of … |
|---|---|---|---|---|
** This limit is shared by all members of a group across all queues. For example, the 2016-core limit can be consumed entirely by one user on the standard queue or shared across multiple users and queues; a sketch for checking your group's current core usage appears after these notes.
*** Groups that have purchased GPUs will have their limit set to the number purchased. If no GPUs were purchased, the high_pri hours will be restricted to standard CPU nodes.
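To see how much of the group-wide core limit is currently in use, one option is to total the CPUs of your group's running jobs with standard Slurm commands. A minimal sketch, assuming your group's Slurm account is named mygroup:

    # Sum the CPU cores of all running jobs charged to the group account
    # (-h suppresses the header; %C prints each job's CPU count):
    squeue -A mygroup -t RUNNING -h -o "%C" | awk '{sum += $1} END {print sum, "cores in use"}'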