The University of Arizona
    For questions, please open a UAService ticket and assign to the Tools Team.

Our Mission


UA High Performance Computing (HPC) is an interdisciplinary research center focused on facilitating research and discoveries that advance science and technology. We deploy and operate advanced computing and data resources for the research activities of students, faculty, and staff at the University of Arizona. We also provide consulting, technical documentation, and training to support our users.

This site is divided into sections that describe the High Performance Computing (HPC) resources that are available, how to use them, and the rules for use.





Quick News




The current version of Singularity is 3.7.4. Prior versions have been removed; only the latest version is considered secure. Contact the consultants if you need help transitioning to the current version. Singularity is installed on all of the system's compute nodes and can be used without loading a module.
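As a quick check after the upgrade, something like the following should work from any compute node (the image name below is a placeholder, not a file that ships with the system):

    # Confirm the installed version (should report 3.7.4)
    singularity --version

    # Run a command inside a container image you provide
    singularity exec my_image.sif python3 --version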


Have you tried Puma yet? Our latest supercomputer is larger, faster, and has bigger teeth than Ocelote (ok, maybe not that last bit). See the Puma Quick Start.


Ocelote's operating system has been updated to CentOS 7 following a month-long maintenance period. It now shares the same modules as Puma and uses the same scheduling software, SLURM.
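Since both clusters now schedule with SLURM, one batch script works on either system. A minimal sketch, with placeholder account and partition names you would replace with your own group and queue:

    #!/bin/bash
    #SBATCH --job-name=hello
    #SBATCH --account=your_group      # placeholder: your PI's group account
    #SBATCH --partition=standard      # placeholder: a standard queue
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00

    # Report which compute node the job landed on
    echo "Hello from $(hostname)"

Submit it with sbatch hello.slurm and monitor it with squeue -u $USER.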


Anaconda is now available as a module. It extends Jupyter with JupyterLab and includes RStudio and the Conda ecosystem. We recommend accessing it through OnDemand and the Ocelote or Puma desktops. See these instructions.
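From a terminal, loading it is a one-liner. This sketch assumes the module is named anaconda; run module avail to confirm the exact name and versions on your cluster:

    # Load the Anaconda module (name assumed; verify with "module avail")
    module load anaconda

    # List the Conda environments it provides
    conda env list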




Quick Links


Calendar

Maintenance downtime is scheduled from 6AM to 6PM on July 28 for ALL HPC services.

El Gato will be taken down for scheduled maintenance from July 12th through August 1st. Following maintenance, it will use SLURM as its scheduling software and have the same software image and modules as Ocelote and Puma.

Ocelote will be taken down for scheduled maintenance from June 1st through June 30th. During that time, its OS will be updated to CentOS 7 and its scheduler will be migrated to SLURM.

Maintenance downtime is scheduled from 6AM on January 27th through 6PM on January 28th for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on October 28 for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on July 29 for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on April 29 for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on January 29 for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on October 30 for ALL HPC services.

Maintenance downtime is scheduled from 7AM to 6PM on July 31 for ALL HPC services. We will be patching all the HPC systems and upgrading R to version 3.6.1 during this time.

A debug queue has been added to support testing code or trying script options. It has higher priority but short time limits; see the sketch below.
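One way to use it is a short interactive session; the partition name and time limit in this sketch are assumptions, so check the queue documentation for the actual names and limits:

    # Ask SLURM for a brief interactive session in the debug queue
    # (partition name and limits are assumptions; verify locally)
    salloc --partition=debug --ntasks=1 --time=00:15:00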

46 nodes with Nvidia GPUs are available for standard and windfall use.
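Requesting one of these GPUs adds a single line to a batch script. A minimal sketch, assuming the generic resource is named gpu and using a placeholder group account:

    #!/bin/bash
    #SBATCH --job-name=gpu_test
    #SBATCH --account=your_group      # placeholder: your PI's group account
    #SBATCH --partition=standard      # or windfall for windfall use
    #SBATCH --gres=gpu:1              # request one GPU (GRES name assumed)
    #SBATCH --time=00:10:00

    # Show the GPU assigned to this job
    nvidia-smi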

The 2012 systems (cluster, smp, and htc) have been powered off as scheduled.

We offer a new web portal called Open OnDemand, which includes Jupyter notebooks and a nifty file browser.
 
Grand Opening of Ocelote at 3pm.

Our new cluster is delivered. Installation will take most of the week.