When you obtain a new HPC account, you will be provided with storage. The shared storage (/home, /groups, /xdisk) is accessible from any of the three production clusters: Puma, Ocelote and ElGato. The temporary (/tmp) space is unique to each compute node.
| Location | Allocation | Description |
| --- | --- | --- |
| /home/uxx/netid | 50 GB | Individual allocations specific to each user. |
| /groups/PI | 500 GB | Allocated as a communal space to each PI and their group. |
| /xdisk/PI | Up to 20 TB | Requested at the PI level. Available for up to 150 days, with one 150-day extension possible for a total of 300 days. |
| /tmp | ~1400 GB NVMe (Puma), ~840 GB spinning (Ocelote), ~840 GB spinning (El Gato) | Local storage specific to each compute node. Usable as scratch space for compute jobs. Not accessible once the job ends. |
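Because /tmp is wiped and unreachable after a job finishes, the usual pattern is to stage data in, compute locally, and copy results back to shared storage before the job ends. A minimal sketch of that pattern inside a batch job is below; the paths (`/xdisk/your_pi/your_netid`) and the `./analysis` program are placeholders, not real names on the system.

```shell
#!/bin/bash
# Sketch: use node-local /tmp as scratch space within a batch job.
# SLURM sets $SLURM_JOB_ID inside the job; paths here are illustrative.
SCRATCH="/tmp/$USER/$SLURM_JOB_ID"
mkdir -p "$SCRATCH"

# Stage input from shared storage to fast local disk.
cp /xdisk/your_pi/your_netid/input.dat "$SCRATCH/"
cd "$SCRATCH"

# Run the computation against the local copy (placeholder program).
./analysis input.dat > results.out

# Copy results back to shared storage BEFORE the job ends --
# /tmp is not accessible once the job completes.
cp results.out /xdisk/your_pi/your_netid/
rm -rf "$SCRATCH"
```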
All University of Arizona Principal Investigators (PIs, i.e., faculty) who register for access to UA High Performance Computing (HPC) receive these free allocations on the HPC machines, shared among all members of their team. Currently, every PI receives:
| HPC Machine | Standard Allocation per PI per Month | Windfall |
| --- | --- | --- |
| Puma | 100,000 CPU hours | Unlimited, but jobs can be pre-empted |
| Ocelote | 35,000 CPU hours | Unlimited, but jobs can be pre-empted |
| El Gato | 7,000 CPU hours | Unlimited, but jobs can be pre-empted |
- Use your standard allocation first! The standard allocation is guaranteed time on the HPC. It refreshes monthly and does not accrue: if a month's allocation isn't used, it is lost.
- Use the windfall queue when your standard allocation is exhausted. Windfall provides unlimited CPU-hours, but jobs in this queue can be stopped and restarted (pre-empted) by standard jobs.
- If your group consistently needs more time than the free allocations, consider the HPC buy-in program.
- As a last resort for tight deadlines, PIs can request a special project allocation once per year (https://portal.hpc.arizona.edu/portal/, under the Support tab). A special project provides qualified hours, which are effectively the same as standard hours.
- For several reasons, we do not offer checkpointing at the system level. Since windfall jobs can be pre-empted at any time, it may be desirable to build this capability into your own code.
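Application-level checkpointing can be as simple as periodically saving your program's state to a file on shared storage and resuming from it on restart. A minimal sketch is below; the file name `state.pkl` and the toy work loop are illustrative, not part of any HPC-provided tooling.

```python
import os
import pickle

CHECKPOINT = "state.pkl"  # hypothetical checkpoint file name

def run(total_steps=10):
    """Toy workload that survives pre-emption by checkpointing each step."""
    # Resume from a previous checkpoint if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "total": 0}

    while state["step"] < total_steps:
        state["total"] += state["step"]  # stand-in for real work
        state["step"] += 1
        # Write to a temp file, then rename: os.replace is atomic, so a
        # pre-emption mid-write cannot corrupt the saved state.
        with open(CHECKPOINT + ".tmp", "wb") as f:
            pickle.dump(state, f)
        os.replace(CHECKPOINT + ".tmp", CHECKPOINT)
    return state["total"]
```

If a windfall job is pre-empted, simply resubmitting it picks up from the last completed step instead of starting over.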
How to Find Your Remaining Allocation
You can view your remaining allocation using the HPC User Portal at https://portal.hpc.arizona.edu/portal/.
PIs can also use the portal to create groups, manage time in each of their groups, subdivide their allocation, and more.
The command va will display your remaining time in the terminal.
The command va -v will display more detail.
You can use this time on either the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do. The queues are set up so that jobs that do not request GPUs will not run on the GPU nodes.
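A batch script sketch tying these pieces together is shown below. The `standard` and `windfall` partition names follow the allocation types described above, and `--gres=gpu:1` is the standard SLURM syntax for requesting a GPU; the account name and script name are placeholders, so check the scheduler documentation for your cluster before relying on these exact values.

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --account=your_group      # placeholder: the PI group whose hours are charged
#SBATCH --partition=standard      # or "windfall" once standard hours are exhausted
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
# GPU nodes require an extra attribute; without a line like the one
# below, the job will only be scheduled on standard (non-GPU) nodes:
#SBATCH --gres=gpu:1

./my_program                      # placeholder for your actual workload
```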