All University of Arizona Principal Investigators (PIs, i.e., faculty) who register for access to UA High Performance Computing (HPC) receive an allocation on the HPC machines that is shared among all members of their team. Currently, all PIs receive:
Permanent Storage Allocations

|Location|Allocation|Description|
|/home/uxx/netid|50 GB|Individual allocation specific to each user.|
|/groups/PI|500 GB|Allocated as a communal space to each PI and their group.|
|/xdisk/PI|Up to 20 TB|Requested at the PI level. Available for up to 150 days, with one 150-day extension possible for a total of 300 days.|
|/tmp|~1640 GB NVMe (coming with the new HPC, Q1 2020); ~840 GB spinning disk (Ocelote); ~840 GB spinning disk (El Gato)|Local storage specific to each compute node. Usable as scratch space for compute jobs. Not accessible once the job ends.|
For more detailed information on storage allocations and policies, see our new Storage page.
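Because /tmp is local to each compute node and purged when the job ends, results must be copied back to permanent storage before the job finishes. A minimal PBS script sketch, assuming a hypothetical group directory /groups/mypi and program my_program (resource values are illustrative):

```bash
#!/bin/bash
#PBS -N scratch_example
#PBS -q standard
#PBS -l select=1:ncpus=4:mem=16gb
#PBS -l walltime=01:00:00

# Stage input into node-local scratch (fast, but lost when the job ends)
SCRATCH=/tmp/$PBS_JOBID
mkdir -p "$SCRATCH"
cp /groups/mypi/input.dat "$SCRATCH/"

# Run the computation against local scratch
cd "$SCRATCH"
/groups/mypi/bin/my_program input.dat > output.dat

# Copy results back to permanent storage before the job exits
cp output.dat /groups/mypi/results/
```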
Job Time Limits
Each group receives a base allocation of 36,000 hours of compute time, refreshed monthly. The PI can subdivide this allocation among group members using the portal at https://portal.hpc.arizona.edu/portal/
Researchers may request additional hours as a Special Project.
The command va will display your remaining time.
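For example, run it from a login node to see the hours remaining in your group's monthly allocation:

```bash
$ va
```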
You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do. The queues are set up so that jobs that do not request GPUs will not run on the GPU nodes.
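As an illustration, a GPU job requests the GPU attribute in its select statement; the specific counts below (ncpus, mem, ngpus) are assumptions and should be checked against the current system documentation:

```bash
#!/bin/bash
### Illustrative GPU request: the ngpus attribute is what routes
### the job to the GPU nodes (core/memory values are assumptions)
#PBS -q standard
#PBS -l select=1:ncpus=28:mem=168gb:ngpus=1
#PBS -l walltime=02:00:00
```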
El Gato uses a different method of allocating time: it is funded through an NSF MRI grant, with usage time provided to campus researchers beyond the grant recipients.
PBS Batch Queues
The batch queues on the different systems have the following memory, time and core limits.
|Queue|Description|
|debug|High-priority queue for testing code or jobs.|
|standard|Consumes the monthly allocation of hours provided to each group.|
|windfall|Used when the standard allocation is depleted; jobs are subject to preemption.|
|high_priority|Used by 'buy-in' users for purchased nodes.|
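For instance, once a group's standard hours are exhausted, a job can fall back to windfall simply by naming that queue (a minimal sketch; resource values are illustrative):

```bash
#!/bin/bash
### Windfall jobs are not charged against the monthly allocation,
### but may be preempted by standard and high_priority jobs
#PBS -q windfall
#PBS -l select=1:ncpus=16:mem=64gb
#PBS -l walltime=12:00:00
```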
Job Resource Limits

Each queue enforces limits along the following dimensions (specific values vary by system and queue):

- # of compute nodes
- Max wallclock hours per job
- Largest job (max cores)
- Total cores in use per group ***
- Largest job (max memory, GB)
- Max # of running jobs
- Max # of queued jobs
*** This limit is shared by all members of a group across all queues. For example, the 2016-core limit on the standard queue can be consumed by a single user or shared across multiple users and queues.
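To check how your group's jobs stack up against these limits, the standard PBS qstat command can list your jobs and summarize the queues (the flags below are standard PBS; site-specific wrappers may also exist):

```bash
$ qstat -u $USER   # your running and queued jobs
$ qstat -q         # per-queue summary of running and queued jobs
```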