The University of Arizona
    For questions, please open a UAService ticket and assign to the Tools Team.

Disk Storage

 

When you obtain a new HPC account, you will be provided with the following storage:

  • /home/netid - 15GB (backed up nightly)
  • /extra/netid - 200GB (no backups) *** Ocelote only ***
  • /scratch/netid - temp space, data removed nightly (no backups) *** Not available on Ocelote due to the new /extra ***
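To see how much of each allocation you are currently using, a quick check with `du` from any login node works; the paths below follow the layout above, so substitute your own NetID or use `$USER` as shown.

```shell
# Summarize usage of your home directory (15GB quota, backed up nightly)
du -sh "$HOME"

# Summarize usage of /extra on Ocelote (200GB, no backups);
# adjust the path if your allocation lives elsewhere
du -sh /extra/"$USER"
```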

Additional storage:

  • /xdisk/netid - 200GB to 1TB, available on request. The time limit is 45 days, with one extension; the allocation is deleted after 45 days if no extension is requested. Quotas are applied at the directory level, so any file in this directory counts against the quota no matter who created it. The data is not backed up.
  • /rsgrps/netid - rented/purchased space (no backups)

Note: We strongly recommend that you do some regular housekeeping of your allocated space. Millions of files are hard to keep organized and even more difficult to migrate. Archiving with a tool like tar will help keep our disk arrays efficient.
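As a sketch of the kind of housekeeping we mean, a finished project directory can be rolled into a single compressed archive before it is moved or stored; the directory name here is only illustrative.

```shell
# Bundle a completed results directory into one compressed file.
# One archive is far easier to migrate than millions of small files.
tar -czf results_2017.tar.gz results_2017/

# Verify the archive lists cleanly, then remove the originals.
tar -tzf results_2017.tar.gz > /dev/null && rm -rf results_2017/
```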

xdisk

Use this link for details on xdisk usage

extra

/extra is new with Ocelote. When you log in to Ocelote for the first time, a 200GB allocation is created for you. It takes an hour or two to appear, and then it is permanent. Remember that, unlike /home, it is not backed up.
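If you want to confirm that the allocation has appeared after your first login, a quick check from an Ocelote login node is the following; the path is assumed from the naming convention above.

```shell
# Confirm the /extra allocation exists and show how full its filesystem is
ls -ld /extra/"$USER"
df -h /extra/"$USER"
```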


Job Time Limits

Each group is allocated a base of 24,000 hours of compute time; this allocation is refreshed monthly. It can be spent on the htc/cluster/smp systems, on the new cluster, Ocelote, or on any combination of them.

ElGato has a different time allocation method, as it is funded through an NSF MRI grant that provides usage time for campus researchers outside of the grant recipients. See the www.elgato.arizona.edu web site for details.


PBS Batch Queue Limits

The batch queues on the different systems have the following memory, time and core limits.

Gen 1

System         Priorities     Queue Name    Compute  CPU Hours  Wall Clock   Max CPUs  Max Memory  Max Running
                                            Nodes    / Job      Hours / Job  / Job     / Job       Jobs
-------------  -------------  ------------  -------  ---------  -----------  --------  ----------  -----------
Cluster (Ice)  Standard       standard      124      3,200      240          256       512 GB      30
               Windfall       windfall      229      3,200      240          256       512 GB      no limit
               High Priority  cluster_high  105      11,520     720          512       1024 GB     64
SMP (UV1000)   Standard       standard               3,200      240          256       512 GB      30
               Windfall       windfall                3,200      240          256       512 GB      no limit
               High Priority  smp_high               11,520     720          512       1024 GB     64
HTC            Standard       standard      104      11,520     720          256       512 GB      30
               Windfall       windfall      104      11,520     720          256       512 GB      no limit
               High Priority  htc_high      10       11,520     720          512       1024 GB

Gen 2

System         Priorities     Queue Name    Compute  CPU Hours  Wall Clock   Max CPUs  Max Memory  Max Running
                                            Nodes    / Job      Hours / Job  / Job     / Job       Jobs
-------------  -------------  ------------  -------  ---------  -----------  --------  ----------  -----------
Ocelote        Standard       standard      300      3,200      240          672**     1024 GB     60
               Windfall       windfall      300      3,200      240          672**     1024 GB     no limit
               High Priority  new_high      0        11,520     720          672**     2024 GB
               Projects       new_qual      0        TBD        TBD          TBD       TBD         TBD

** This represents 24 physical nodes, and 4.6TB of memory
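For reference, a minimal Ocelote job script that stays inside the standard-queue limits above might look like the following sketch. The group name, memory request, and program are placeholders, and the 28-cores-per-node figure is inferred from the 672-CPU / 24-node footnote; check your own group and node specifications before submitting.

```shell
#!/bin/bash
#PBS -N example_job
#PBS -q standard
#PBS -W group_list=YOUR_GROUP        # placeholder: replace with your group's name
#PBS -l select=1:ncpus=28:mem=168gb  # one full node (28 cores assumed); mem is an example value
#PBS -l walltime=24:00:00            # well under the 240-hour standard-queue limit
#PBS -l cput=672:00:00               # 28 cores x 24 hours, under the 3,200 CPU-hour limit

cd "$PBS_O_WORKDIR"
./my_program                         # placeholder for your actual workload
```

Submit it with `qsub example_job.pbs`; the requested hours are charged against your group's monthly allocation.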
