Disk Storage
Batch Queue Limits

The batch queues on the different systems have the following memory, time, and core limits.

| System | Priorities | Queue Name | # of Compute Nodes | # of CPU Hours / Job | Wall Clock Hours / Job | Max # of CPUs Allocated / Job | Max Memory Allocated / Job | Max # of Running Jobs |
|---|---|---|---|---|---|---|---|---|
| Gen 1 | | | | | | | | |
| Ice (Altix 8400) | Standard | standard | 124 | 3,200 | 240 | 256 | 512 GB | 30 |
| Ice (Altix 8400) | Windfall | windfall | 229 | 3,200 | 240 | 256 | 512 GB | no limit |
| Ice (Altix 8400) | High Priority | cluster_high | 105 | 11,520 | 720 | 512 | 1024 GB | 64 |
| UV 1000 | Standard | standard | | 3,200 | 240 | 256 | 512 GB | 30 |
| UV 1000 | Windfall | windfall | | 3,200 | 240 | 256 | 512 GB | no limit |
| UV 1000 | High Priority | smp_high | | 11,520 | 720 | 512 | 1024 GB | 64 |
| HTC | Standard | standard | 104 | 11,520 | 720 | 256 | 512 GB | 30 |
| HTC | Windfall | windfall | 104 | 11,520 | 720 | 256 | 512 GB | no limit |
| HTC | High Priority | htc_high | 10 | 11,520 | 720 | 512 | 1024 GB | |
| Gen 2 | | | | | | | | |
| New Cluster | Standard | standard | 300 | 3,200 | 240 | 512 | 1024 GB | 60 |
| New Cluster | Windfall | windfall | 300 | 3,200 | 240 | 512 | 1024 GB | no limit |
| New Cluster | High Priority | new_high | 0 | 11,520 | 720 | 512 | 2024 GB | |
| New Cluster | Projects | new_qual | 0 | TBD | TBD | TBD | TBD | |
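To make the table concrete, here is a minimal sketch of how these limits map onto SLURM batch directives. The job name, program name, and partition usage below are illustrative placeholders only; the request simply stays within the Gen 1 standard queue limits above (240 wall clock hours, 256 CPUs, 512 GB per job).

```bash
#!/bin/bash
# Minimal sketch: a request that stays within the Gen 1 "standard" queue limits above.
# Job name and workload are placeholders, not site-verified values.
#SBATCH --job-name=limits_example
#SBATCH --partition=standard       # queue name from the table
#SBATCH --ntasks=256               # at or below "Max # of CPUs Allocated / Job"
#SBATCH --mem-per-cpu=2G           # 256 CPUs x 2 GB = 512 GB, at the job memory cap
#SBATCH --time=240:00:00           # at or below "Wall Clock Hours / Job" (240 hours)

srun ./my_program                  # placeholder workload
```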
Job Allocations

Overview

All University of Arizona Principal Investigators (PIs; i.e., faculty) who register for access to UA High Performance Computing (HPC) receive free allocations on the HPC machines, which are shared among all members of their team. Currently, all PIs receive:
Best practices
How Allocations are Charged

The number of CPU hours a job consumes is determined by the number of CPUs it is allocated multiplied by its requested walltime. When a job is submitted, the CPU hours it requires are automatically deducted from the account. If the job ends early, the unused hours are automatically refunded. For example, a job requesting 50 CPUs for 10 hours will be charged 500 CPU hours. When the job is submitted, all 500 CPU hours are deducted from the user's account; however, if the job runs for only 5 hours and then completes, the unused 250 hours are refunded.

This accounting is the same regardless of which type of node you request. Standard, GPU, and high-memory nodes are all charged using the same model and draw from the same allocation pool. If you find you are being charged for more CPUs than you are specifying in your submission script, it may be an issue with your job's memory request.

Allocations are refreshed on the first day of each month. Unused hours from the previous month do not roll over.

How to Use Your Allocation

To use your allocation, include your account and partition information as SLURM directives in your batch script (a minimal sketch is shown below). The formatting for this can be found in our Running Jobs with SLURM documentation.

How to Find Your Remaining Allocation

To view your remaining allocation, use the command
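As a rough illustration of the points above, the directives below show where the account and partition go and how the charge from the earlier example arises. The account name `mypi_group` and the workload are hypothetical placeholders; your own group's account name will differ.

```bash
#!/bin/bash
# Sketch only: account and partition supplied as SLURM directives (hypothetical account name).
#SBATCH --account=mypi_group   # your PI group's allocation account (placeholder)
#SBATCH --partition=standard   # charged against the standard allocation
#SBATCH --ntasks=50            # 50 CPUs...
#SBATCH --time=10:00:00        # ...for 10 hours: 50 * 10 = 500 CPU hours deducted at submission

# If the job completes after only 5 hours, the unused 250 CPU hours are refunded.
srun ./my_program              # placeholder workload
```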
SLURM Batch Queues

The batch queues, also known as partitions, on the different systems are the following:
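The partitions a SLURM system actually exposes can also be listed directly with the standard `sinfo` utility; the format fields below are standard SLURM, though the exact partition names and limits reported will depend on the cluster.

```bash
# List partitions (queues) with their time limits, node counts, and CPUs per node.
sinfo --format="%P %l %D %c"
```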
Job Limits

To check group, user, and job limitations on resource usage, use the command
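As a generic fallback, standard SLURM tools can surface similar limit information; this is not the site-specific command referenced above, and the partition name below is only an example.

```bash
# Show the limits configured on a partition (MaxTime, MaxNodes, etc.).
scontrol show partition standard

# Show per-QOS limits, if the site enforces limits through QOS.
sacctmgr show qos format=Name,MaxWall,MaxJobsPU
```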
Special Allocations

Sometimes you may need an extra allocation, for example for a conference deadline or a paper submission. We can offer a temporary allocation according to the guidelines here: Special Projects