Batch Queue Limits
The batch queues on the different systems have the following memory, time and core limits.
| System | Priority | Queue Name | # of Compute Nodes | CPU Hours / Job | Wall Clock Hours / Job | Max # of CPUs Allocated / Job | Max Memory Allocated / Job | Max # of Running Jobs |
|---|---|---|---|---|---|---|---|---|
| **Gen 1** | | | | | | | | |
| Cluster (Ice) | Standard | standard | 124 | 3,200 | 240 | 256 | 512 GB | 30 |
| Cluster (Ice) | Windfall | windfall | 229 | 3,200 | 240 | 256 | 512 GB | no limit |
| Cluster (Ice) | High Priority | cluster_high | 105 | 11,520 | 720 | 512 | 1024 GB | 64 |
| SMP (UV1000) | Standard | standard | | 3,200 | 240 | 256 | 512 GB | 30 |
| SMP (UV1000) | Windfall | windfall | | 3,200 | 240 | 256 | 512 GB | no limit |
| SMP (UV1000) | High Priority | smp_high | | 11,520 | 720 | 512 | 1024 GB | 64 |
| HTC | Standard | standard | 104 | 11,520 | 720 | 256 | 512 GB | 30 |
| HTC | Windfall | windfall | 104 | 11,520 | 720 | 256 | 512 GB | no limit |
| HTC | High Priority | htc_high | 10 | 11,520 | 720 | 512 | 1024 GB | |
| **Gen 2** | | | | | | | | |
| Ocelote | Standard | standard | 300 | 3,200 | 240 | 672** | 1024 GB | 60 |
| Ocelote | Windfall | windfall | 300 | 3,200 | 240 | 672** | 1024 GB | no limit |
| Ocelote | High Priority | new_high | 0 | 11,520 | 720 | 672** | 2048 GB | |
| Ocelote | Projects | new_qual | 0 | TBD | TBD | TBD | TBD | TBD |

** This represents 24 physical nodes and 4.6 TB of memory.
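A queue from the table is selected in the batch script you submit to the scheduler. The sketch below assumes PBS-style directives (the directive syntax on your system may differ); the queue name `standard` comes from the table above, while the job name and resource requests are illustrative placeholders, not recommended values:

```shell
#!/bin/bash
# Sketch of a PBS-style batch script. The queue name (standard) comes from
# the table on this page; job name and resource requests are placeholders.
#PBS -q standard
#PBS -N example_job
#PBS -l select=1:ncpus=12:mem=24gb
#PBS -l walltime=24:00:00    # must stay within the 240-hour wall-clock limit

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
echo "Job started on $(hostname)"
```

Submitting a job to the windfall queue instead only changes the `-q` line; windfall jobs have no running-job limit but run at lower priority.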
When you obtain a new HPC account, you will be provided with the following storage:
- /home/netid - 15GB (backed up nightly)
- /extra/netid - 200GB (no backups) *** Ocelote only ***
- /scratch/netid - temporary space; data is removed nightly (no backups) *** Not available on Ocelote due to the new /extra ***
Additional storage:
- /xdisk/netid - 200GB to 1TB available on request. The time limit is 45 days, with one extension; the allocation is deleted after 45 days if no extension is requested. Quotas apply at the directory level, so any file in this directory counts against the quota no matter who created it. The data is not backed up.
- /rsgrps/netid - rented/purchased space (no backups)
Note: We strongly recommend regular housekeeping of your allocated space. Millions of files are hard to keep organized and even more difficult to migrate. Archiving with a tool such as tar helps keep our disk arrays efficient.
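As a sketch of that housekeeping advice, the following bundles a directory of small files into one compressed archive and removes the originals only after verifying the archive (the directory and file names here are illustrative, not real paths on the system):

```shell
# Create an illustrative directory of small result files.
mkdir -p old_results
printf 'run 1 output\n' > old_results/run1.txt
printf 'run 2 output\n' > old_results/run2.txt

# Bundle the directory into a single compressed archive.
tar -czf old_results.tar.gz old_results

# Verify the archive lists the files before deleting anything.
tar -tzf old_results.tar.gz | grep -q 'old_results/run1.txt' && echo "archive OK"

# Only after verification, remove the originals.
rm -r old_results
```

One archive file in place of thousands of small files is far easier to migrate and counts as a single object against the filesystem.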
xdisk
Use this link for details on xdisk usage
extra
/extra is new with Ocelote. When you log in to Ocelote for the first time, a 200GB allocation is created for you. It takes an hour or two to appear, and it is then permanent. Unlike /home, it is not backed up.
Job Time Limits
Each group is allocated a base of 24,000 hours of compute time; this allocation is refreshed monthly. It can be used either on the htc/cluster/smp clusters or on the new cluster, Ocelote.
ElGato allocates time differently, as it is funded through an NSF MRI grant with usage time provided for campus researchers outside of the grant recipients. See the www.elgato.arizona.edu web site.
Job Allocations

All University of Arizona Principal Investigators (PIs, i.e. faculty) who register for access to UA High Performance Computing (HPC) receive free allocations on the HPC machines, shared among all members of their team. Currently all PIs receive the base allocation described under Job Time Limits above.
Best practices
How to Find Your Remaining Allocation

To view your remaining allocation, use the command provided for this purpose. You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do require special attributes.

SLURM Batch Queues

The batch queues, also known as partitions, on the different systems are the following:
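A minimal SLURM batch script selecting one of these partitions might look like the sketch below; the partition names (`standard`, `windfall`) come from this page, while the job name and resource requests are illustrative placeholders:

```shell
#!/bin/bash
# Sketch of a SLURM batch script. Partition names come from this page;
# job name and resource requests are illustrative placeholders.
#SBATCH --job-name=example
#SBATCH --partition=standard    # or: windfall (no job limit, lower priority)
#SBATCH --nodes=1
#SBATCH --ntasks=28
#SBATCH --time=24:00:00         # must fit within the queue's wall-clock limit
#SBATCH --mem=64gb

# Commands below run on the allocated compute node.
echo "Running on $(hostname)"
```

GPU nodes would additionally need the special attributes mentioned above (for example a GPU request directive), which are not shown in this sketch.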
Job Limits

Group, user, and job limitations on resource usage can be checked with a single command provided for this purpose.
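The exact command is site-specific; as a hedged sketch using standard SLURM accounting tools (assuming SLURM accounting is enabled on the system), the following queries show the limits in effect:

```shell
# Per-association limits (group/user level) from the SLURM accounting database.
sacctmgr show assoc user=$USER format=Account,User,MaxJobs,MaxSubmit,GrpTRES

# Limits attached to each quality-of-service (QOS).
sacctmgr show qos format=Name,MaxWall,MaxTRESPerUser,GrpTRES

# Your currently queued and running jobs.
squeue -u $USER
```

These commands require access to a running SLURM installation and will report nothing useful on a login-less workstation.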
Special Allocations

Sometimes you may need an extra allocation, for example for a conference deadline or a paper submission. We can offer a temporary allocation according to the guidelines here: Special Projects