Allocation

Job Allocations

All University of Arizona Principal Investigators (PIs, i.e., faculty) who register for access to UA High Performance Computing (HPC) receive an allocation on the HPC machines that is shared among all members of their team. Currently, all PIs receive:

HPC Machine | Standard Allocation Time per Month per PI | Windfall
Ocelote     | 36,000 CPU-hours per month                | Unlimited, but jobs can be preempted
El Gato     | 7,000 CPU-hours per month                 | Unlimited, but jobs can be preempted
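To put these numbers in context: a job that uses all 28 cores of a standard Ocelote node for 10 hours consumes 28 x 10 = 280 CPU-hours of the group's monthly allocation, so the 36,000 CPU-hour allocation corresponds to roughly 1,285 node-hours per month on Ocelote.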


Storage Allocations

See our Storage page for additional details.

When you obtain a new HPC account, you will be provided with the following storage:

Location          | Allocation   | Usage
Permanent Storage
/home/uxx/netid   | 50 GB        | Individual allocations specific to each user.
/groups/PI        | 500 GB       | Allocated as a communal space to each PI and their group members.
Temporary Storage
/xdisk/PI         | Up to 20 TB  | Requested at the PI level. Available for up to 150 days, with one 150-day extension possible for a total of 300 days.
/tmp              | ~1640 GB NVMe (coming with the new HPC system, Q1 2020); ~840 GB spinning disk (Ocelote); ~840 GB spinning disk (El Gato) | Local storage specific to each compute node. Usable as scratch space for compute jobs. Not accessible once the job ends.
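As a rough sketch of how node-local /tmp can be used as scratch space inside a batch job (the program and input file names below are placeholders; $PBS_JOBID and $PBS_O_WORKDIR are environment variables set by the PBS scheduler):

    # Inside a PBS batch script: stage data to node-local scratch,
    # compute there, and copy results back before the job ends,
    # since /tmp is not accessible after the job finishes.
    SCRATCH=/tmp/$PBS_JOBID
    mkdir -p "$SCRATCH"
    cp $PBS_O_WORKDIR/input.dat "$SCRATCH"/      # input.dat is a placeholder
    cd "$SCRATCH"
    ./my_program input.dat > output.dat          # my_program is a placeholder
    cp output.dat $PBS_O_WORKDIR/                # save results to permanent storage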






Job Limits

Job Time Limits

Each group is allocated a base of 36,000 CPU-hours of compute time, refreshed monthly. The PI can subdivide this allocation among group members using the portal at https://portal.hpc.arizona.edu/portal/. Researchers may request additional hours as a Special Project.

The command va will display your remaining time.
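For example, run it from a login-node shell prompt:

    $ va    # displays your group's remaining allocation for the month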

You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do. The queues are set up so that jobs that do not request GPUs will not run on the GPU nodes.
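For illustration only, a minimal PBS script for the standard queue might look like the sketch below; the resource string and group name are assumptions, so check the current system documentation for the exact values on the machine you are using. A GPU job would additionally request a GPU attribute in its resource selection (for example ngpus=1, again an assumption to verify), which routes it to the GPU nodes.

    #!/bin/bash
    #PBS -N example_job
    #PBS -W group_list=YOUR_GROUP            # hypothetical group name; time is charged to this group
    #PBS -q standard                         # standard queue draws from the monthly allocation
    #PBS -l select=1:ncpus=28:mem=168gb      # example resource request for one standard node
    #PBS -l walltime=01:00:00                # wallclock limit for this job

    cd $PBS_O_WORKDIR                        # directory the job was submitted from
    ./my_program                             # placeholder executable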

El Gato uses a different time allocation method because it is funded through an NSF MRI grant, with usage time provided to campus researchers beyond the grant recipients.


PBS Batch Queues

The batch queues on the different systems have the following memory, time and core limits.

Queue         | Description
debug         | High-priority queue for testing code or jobs.
standard      | Consumes the monthly allocation of hours provided to each group.
windfall      | Used when the standard allocation is depleted; jobs are subject to preemption.
high_priority | Used by 'buy-in' users for purchased nodes.
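In a PBS script, the queue is selected with the -q directive, so moving a job from standard to windfall once the group's allocation is exhausted is a one-line change (sketch only):

    #PBS -q standard     # charged against the group's monthly allocation
    # ...or, once the allocation is used up:
    #PBS -q windfall     # not charged, but the job may be preempted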


Job Resource Limits

System  | Queue Name | # of Compute Nodes | Max Wallclock per Job | Largest Job (max cores) | Total Cores in Use per Group | Largest Job (max memory, GB) | Max # of Running Jobs | Max # of Queued Jobs
Ocelote | debug      | 2   | 10 minutes | 56   | 56       | 336   | 2   | 5
Ocelote | standard   | 348 | 240 hours  | 2016 | 2016 *** | 8064  | 500 | 3000
Ocelote | windfall   | 400 | 240 hours  | 2016 |          | 8064  | 75  | 3000
Ocelote | high_pri   | 52  | 720 hours  | 2016 |          | 8064  | 500 | 5000
Ocelote | qualified  | 348 | 720 hours  | 2016 |          | 12096 | 100 |
El Gato | standard   | 131 | 240 hours  | 512  | 512      | 1024  | 75  | 1000
El Gato | windfall   | 131 | 240 hours  | 512  | 512      | 1024  | 75  | 100
El Gato | high_pri   | 131 | 720 hours  | 2016 | 704      | 12096 | 704 | 5000

*** This limit is shared by all members of a group across all queues: the 2016-core limit can be consumed by one user on the standard queue or shared across multiple users and queues.



