Electrical maintenance work will be performed in the Research Data Center on Tuesday, October 3rd, between 6:00 AM and 12:00 PM. ElGato will be unavailable during this time. ElGato jobs overlapping with this maintenance window will be held until maintenance concludes. For faster interactive sessions during this time, use Ocelote.
Our Mission
UA High Performance Computing (HPC) is an interdisciplinary research center focused on facilitating research and discoveries that advance science and technology. We deploy and operate advanced computing and data resources for the research activities of students, faculty, and staff at the University of Arizona. We also provide consulting, technical documentation, and training to support our users.
This site is divided into sections that describe the available HPC resources, how to use them, and the rules for their use.
- User Guide — Introduces the available resources and provides information on account registration, system access, running jobs, and requesting help.
- Resources — Detailed information on compute, storage, software, grant, data center, and external (XSEDE, CyVerse, etc.) resources.
- Policies — Policies related to topics that include acceptable use, access, acknowledgements, buy-in, instruction, maintenance, and special projects.
- Results — A list of research publications that utilized UArizona's HPC system resources.
- FAQ — A collection of frequently asked questions and their answers.
- Secure HPC
- User Portal — Manage and create groups, request rental storage, manage delegates, delete your account, and submit special project requests.
- Open OnDemand — Graphical interface for accessing HPC and applications.
- Job Examples — View and download sample SLURM jobs from our GitHub site.
- Training Videos — Visit our YouTube channel for instructional videos, researcher stories, and more.
- Getting Help — Request help from our team.
Highlighted Research
Faster Speeds Need Faster Computation - Hypersonic Travel
Quick News
Faster Interactive Sessions

Are you frustrated waiting for slow interactive sessions to start? Try using the standard queue on ElGato. We have provisioned 44 nodes that accept only standard-queue jobs to facilitate faster connections. To access a session, try the example below.
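This is a minimal sketch using standard SLURM tooling; the account name is a placeholder for your own group, and the exact options your site recommends may differ:

```
# Request a one-hour, single-core interactive shell in ElGato's standard queue.
# Replace YOUR_GROUP with the name of your HPC group/allocation.
srun --partition=standard --account=YOUR_GROUP \
     --nodes=1 --ntasks=1 --time=01:00:00 \
     --pty bash -i
```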
For more information on interactive sessions, see our page: Running Jobs With SLURM.
Increased Ocelote Allocation

Do you like using Ocelote? Good news! On November 9th, the standard allocation on Ocelote was increased from 35,000 to 70,000 CPU hours.
Singularity is Now Apptainer

Singularity has been renamed Apptainer as the project is brought into the Linux Foundation. An alias exists so that you can continue to invoke `singularity` commands. We keep only a reasonably current version of Apptainer; prior versions are removed since only the latest one is considered secure. Apptainer is installed on all of the system's compute nodes and can be accessed without loading a module. A short example follows.
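As a quick sketch of what this looks like in practice (the container image here is just an example):

```
# Run on a compute node; no module load is needed.
apptainer pull docker://python:3.11-slim          # creates python_3.11-slim.sif
apptainer exec python_3.11-slim.sif python3 --version

# The legacy command name still works through the alias:
singularity exec python_3.11-slim.sif python3 --version
```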
Anaconda on HPC

Anaconda is very popular and is available as a module. It extends Jupyter with JupyterLab and includes RStudio and the Conda ecosystem. To access GUI interfaces available through Conda (e.g., JupyterLab), we recommend using an Open OnDemand Desktop session. See these instructions. As a note, Anaconda likes to own your entire environment. Review those instructions to see what problems that can cause and how to address them. A brief sketch of the command-line workflow follows.
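As a minimal sketch, assuming the module is simply named `anaconda` (check `module avail` for the exact name on your cluster); the environment name is a placeholder:

```
module load anaconda

# Create and activate an isolated environment instead of installing into
# the base environment, which Anaconda would otherwise "own".
conda create --name my-env python=3.10
conda activate my-env

# Note: `conda init` edits your ~/.bashrc, which is one way Anaconda can
# take over your login environment; see the instructions linked above.
```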
Puma News

Have you tried Puma yet? Our latest supercomputer is larger, faster, and has bigger teeth than Ocelote (ok, maybe not the last bit). See the Puma Quick Start. Since we upgraded Ocelote, it has the same software suite as Puma and is generally not as busy, so if your work does not need the capabilities of Puma, consider using Ocelote instead. This applies to GPUs as well, if the P100s will work for you. Now that we are into the second year of use, we have determined that we can increase the standard allocation: from the end of April 2022, the standard allocation is increased from 70,000 to 100,000 CPU hours. A sample GPU batch script is sketched below.
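As an illustration of requesting one of Ocelote's P100 GPUs, here is a minimal batch-script sketch using standard SLURM directives; the account name is a placeholder, and the partition and GPU request syntax should be checked against your cluster's documentation:

```
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=standard       # standard allocation queue
#SBATCH --account=YOUR_GROUP       # placeholder: your HPC group/allocation
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1               # request a single GPU (a P100 on Ocelote)
#SBATCH --time=01:00:00

# Confirm the GPU is visible before launching real work.
nvidia-smi
```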
Calendars
Date | Event
---|---
October 3 | Electrical maintenance is scheduled from 6 AM to 12 PM. ElGato will be unavailable during this period.
July 26 | Quarterly maintenance is scheduled from 6 AM to 6 PM.
April 26 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
January 25 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
October 26 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
July 20 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
April 27 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
January 26 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
July 28 | Maintenance downtime is scheduled from 6 AM to 6 PM for ALL HPC services.
July 12 – August 1 | El Gato will be taken down for scheduled maintenance. Following maintenance, it will use SLURM as its scheduling software and have the same software image and modules as Ocelote and Puma.
June 1 – June 30 | Ocelote will be taken down for scheduled maintenance. During that time, its OS will be updated to CentOS 7 and its scheduler will be migrated to SLURM.
January 27 – 28 | Maintenance downtime is scheduled from 6 AM on the 27th through 6 PM on the 28th for ALL HPC services.
Workshop | Date | Time | Location | Registration |
---|---|---|---|---|
Intro to HPC | 9/12/2023 | 9:00 - 10:00 AM | Zoom | Not needed |
Intro to Machine Learning (Python) | 9/13/2023 | 9:00 - 10:00 AM | Zoom | Not needed |
Intro to Machine Learning (R) | 9/13/2023 | 10:30 - 11:30 AM | Zoom | Not needed |
Intro to Parallel Computing | 9/14/2023 | 9:00 - 10:00 AM | Zoom | Not needed |
Intro to Containers on HPC | 9/14/2023 | 10:30 - 11:30 AM | Zoom | Not needed |
Data Management on HPC | 9/28/2023 | 9:00 - 10:30 AM | Zoom | Registration link |