
Disk Storage



Storage Allocations

Tip

See our Storage page for more details.

When you obtain a new HPC account, you will be provided with shared and temporary storage. The shared storage (/home, /groups, /xdisk) is accessible from any of the three production clusters: Puma, Ocelote and ElGato. The temporary (/tmp) space is unique to each compute node.

Additional storage:

  • /rsgrps/netid - rented or purchased space (no backups)

 

Tip

We strongly recommend that you do some regular housekeeping of your allocated space. Millions of files are hard to keep organized and even more difficult to migrate. Archiving or using a tool like tar will help keep our disk arrays efficient.
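For example, a directory containing many small result files can be rolled into a single compressed archive before it is moved or left in long-term storage. This is a generic sketch; the directory name is a placeholder:

    tar -czf results_2020.tar.gz results_2020/    # pack the directory into one compressed archive
    tar -tzf results_2020.tar.gz                  # list the archive contents to verify it
    rm -r results_2020/                           # remove the originals only after verifying the archive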

xdisk

See the xdisk documentation page for details on xdisk usage.

extra

/extra is something new with Ocelote. When you log in to Ocelote for the first time, an allocation of 200GB will be created for you. It takes an hour or two to show up, and then it is permanent. Remember that it is not backed up like /home.


The standard allocations are summarized below.

Location | Allocation | Usage
Permanent Storage
/home/uxx/netid | 50 GB | Individual allocations specific to each user.
/groups/PI | 500 GB | Allocated as a communal space to each PI and their group members.
Temporary Storage
/xdisk/PI | Up to 20 TB | Requested at the PI level. Available for up to 150 days, with one 150-day extension possible for a total of 300 days.
/tmp | ~1400 GB NVMe (Puma); ~840 GB spinning (Ocelote); ~840 GB spinning (El Gato) | Local storage specific to each compute node. Usable as scratch space for compute jobs. Not accessible once jobs end.
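If you want to see how much of an allocation a directory is consuming, standard Linux tools are enough. This is a minimal sketch using du; the paths and group name are examples, so substitute your own netid and group:

    du -sh /home/uxx/netid                       # total size of your home allocation (example path)
    du -sh /groups/your_pi_group                 # total size of your group's shared space (example name)
    du -h --max-depth=1 /xdisk/your_pi_group     # per-subdirectory breakdown of an xdisk allocation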



Job Allocations

All University of Arizona Principal Investigators (PIs, i.e. faculty) who register for access to UA High Performance Computing (HPC) receive free allocations on the HPC machines, shared among all members of their team. Currently, all PIs receive:

HPC Machine | Standard Allocation Time per Month per PI | Windfall
Puma | 100,000 CPU hours per month | Unlimited, but jobs can be pre-empted
Ocelote | 35,000 CPU hours per month | Unlimited, but jobs can be pre-empted
El Gato | 7,000 CPU hours per month | Unlimited, but jobs can be pre-empted

Best practices

  1. Use your standard allocation first! The standard allocation is guaranteed time on the HPC systems. It refreshes monthly and does not accrue: if a month's allocation isn't used, it is lost.
  2. Use the windfall queue when your standard allocation is exhausted. Windfall provides unlimited CPU-hours, but jobs in this queue can be stopped and restarted (pre-empted) by standard jobs. A sample submission script is sketched after this list.
  3. If your group consistently needs more time than the free allocations provide, consider the HPC buy-in program.
  4. As a last resort for tight deadlines, PIs can request a special project allocation once per year (https://portal.hpc.arizona.edu/portal/, under the Support tab). A special project provides qualified hours, which are effectively the same as standard hours.
  5. We do not offer system-level checkpointing. It may be desirable to build checkpointing into your own code, especially for jobs that run in windfall and may be pre-empted.
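To make the standard-then-windfall workflow concrete, here is a minimal Slurm batch script sketch. The queue names standard and windfall come from the tables on this page; the account name my_pi_group and the resource numbers are placeholders, so adjust them for your own group and job.

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --account=my_pi_group     # placeholder: the PI group whose allocation is charged
    #SBATCH --partition=standard      # guaranteed monthly hours; change to windfall when they are exhausted
    #SBATCH --ntasks=4                # example: 4 CPU cores
    #SBATCH --time=01:00:00           # example: 1 hour of wall clock time
    #SBATCH --mem=8gb                 # example: 8 GB of memory

    ./my_program                      # placeholder for your actual workload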

How to Find Your Remaining Allocation

To view your remaining allocation, use the command va in a terminal.

You can use this time either on the standard nodes, which do not require special attributes in the scheduler script, or on the GPU nodes, which do require special attributes.
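As an illustration of the extra attribute a GPU job needs, the lines below would be added to the kind of Slurm script sketched above. The --gres=gpu:1 form is the common Slurm syntax and is an assumption here, so check the GPU documentation for the exact attribute names used on each cluster.

    #SBATCH --partition=standard      # GPU jobs still draw from your standard (or windfall) allocation
    #SBATCH --gres=gpu:1              # request one GPU on a GPU node (syntax assumed; verify for your cluster)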

 Slurm and PBS Batch Queues

The batch queues on the different systems have the following memory, time and core limits.

Priorities | Queue Name | # of Compute Nodes | # of CPU Hours / Job | Wall Clock Hours / Job | Max # of CPUs Allocated / Job | Max Memory Allocated / Job | Max # of Running Jobs

Gen 1 - Cluster (8400)
Standard | standard | 124 | 3,200 | 240 | 256 | 512 GB | 30
Windfall | windfall | 229 | 3,200 | 240 | 256 | 512 GB | no limit
High Priority | cluster_high | 105 | 11,520 | 720 | 512 | 1024 GB | 64

Gen 1 - SMP (UV1000)
Standard | standard |  | 3,200 | 240 | 256 | 512 GB | 30
Windfall | windfall |  | 3,200 | 240 | 256 | 512 GB | no limit
High Priority | smp_high |  | 11,520 | 720 | 512 | 1024 GB | 64

Gen 1 - HTC
Standard | standard | 104 | 11,520 | 720 | 256 | 512 GB | 30
Windfall | windfall | 104 | 11,520 | 720 | 256 | 512 GB | no limit
High Priority | htc_high | 10 | 11,520 | 720 | 512 | 1024 GB |

Gen 2 - Ocelote
Standard | standard | 331 | 3,200 | 240 | 672** | 1024 GB | 60
Windfall | windfall | 331 | 3,200 | 240 | 672** | 1024 GB | no limit
High Priority | new_high | 36 | 11,520 | 720 | 672** | 2024 GB |
Projects | new_qual | 0 | TBD | TBD | TBD | TBD | TBD

Queue | Description
standard | Used to consume the monthly allocation of hours provided to each group
windfall | Used when standard is depleted but subject to preemption
high_priority | Used by 'buy-in' users for purchased nodes
qualified | Used by groups who have a temporary special project allocation




Job Limits

To check group, user, and job limitations on resource usage, use the command job-limits $YOUR_GROUP in the terminal.
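For example, to review both your remaining hours and your group's limits before submitting a job (the group name is a placeholder):

    va                          # show your remaining standard allocation
    job-limits my_pi_group      # show group, user, and job limits (replace with your group name)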



Special Allocations

Sometimes you may need an extra allocation, for example for a conference deadline or a paper submission. We can offer a temporary allocation according to the guidelines here:

Special Projects