Disk Storage

When you obtain a new HPC account, you will be provided with the following storage:

Additional storage:

File count limit:

We strongly recommend regular housekeeping of your allocated space. Millions of files are hard to keep organized and even more difficult to migrate. Archiving with a tool like tar will help keep our disk arrays efficient and can free up more space for you to use.

xdisk

Use this link for details on xdisk usage.

/extra

/extra is new with Ocelote. When you log in to Ocelote for the first time, an allocation of 200GB will be created for you. It takes an hour or two to show up, and then it is permanent. Remember that, unlike /home, it is not backed up. The number of files within the 200GB is limited to 120,000.

uquota

uquota is the command to display how much space you have used and how much remains:

                used    soft limit  hard limit  files/limit
Filesets with group access:
/rsgrps/me      75.45G  2T          2T          539741/1228800

Job Limits

Job Time Limits

Each group is allocated a base of 24,000 hours of compute time; this allocation is refreshed monthly. The command va will display your remaining time. You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do. The queues are set up so that jobs that do not request GPUs will not run on the GPU nodes.

ElGato has a different allocation method for time, as it is funded through an NSF MRI grant with usage time provided for campus researchers outside of the grant recipients. See the www.elgato.arizona.edu web site.
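The housekeeping advice above (bundling many small files with tar to stay under the file count limit) can be sketched as follows; the directory and file names are hypothetical placeholders, not paths on the system:

```shell
# Sketch: bundle many small files into one compressed archive.
# "results_2017" is a hypothetical directory name -- substitute your own.
mkdir -p results_2017
for i in $(seq 1 100); do touch "results_2017/run_${i}.dat"; done

# One archive replaces 100 files against the file-count limit.
tar -czf results_2017.tar.gz results_2017

# List the archive contents to verify before deleting the originals.
tar -tzf results_2017.tar.gz | wc -l
```

Only remove the original directory after confirming the archive lists every file you expect.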
PBS Batch Queue Limits
The batch queues on the different systems have the following memory, time and core limits.
Queue | Description |
---|---
standard | Used to consume the monthly allocation of hours provided to each group |
windfall | Used when standard is depleted but subject to preemption |
high_priority | Used by 'buy-in' users for purchased nodes |
System | Queue Name | # of Compute Nodes | Max Wallclock Hrs / Job | Largest Job (cores) | Total Cores in Use / Group | Largest Job (memory) | Max # of Running Jobs | Max # of Queued Jobs
---|---|---|---|---|---|---|---|---
Ocelote | standard | 348 | 240 | 1344** | 2016 | 8064GB | 500 | 3000
Ocelote | windfall | 400 | 240 | 1344** | | 8064GB | 500 | 3000
Ocelote | high_priority | 52 | 720 | 1344** | | 8064GB | 500 | 5000
** This limit is shared by all members of a group across all queues: one user can consume the entire core limit on the standard queue, or it can be shared across multiple users or queues.
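A minimal PBS batch script for the standard queue might look like the sketch below. The job name, group name, and resource values are illustrative assumptions (28 cores and 168GB correspond to one full node at the 6GB-per-core ratio implied by the table above); confirm your group's actual allocation with va before submitting via qsub:

```shell
#!/bin/bash
### Illustrative PBS script for the standard queue -- the job name,
### group name, and resource values are placeholders, not verified settings.
#PBS -N example_job
#PBS -q standard
#PBS -W group_list=YOUR_GROUP
#PBS -l select=1:ncpus=28:mem=168gb
#PBS -l walltime=01:00:00

# PBS_O_WORKDIR is set by the scheduler; default to "." for local testing.
cd "${PBS_O_WORKDIR:-.}"
echo "Running on $(hostname)"
```

Jobs submitted to windfall use the same form with `-q windfall`, and run without charging the group's monthly hours at the cost of possible preemption.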