
University of Arizona High Performance Computing


Our Mission

UA High Performance Computing (HPC) is an interdisciplinary research center focused on facilitating research and discoveries that advance science and technology. We deploy and operate advanced computing and data resources for the research activities of students, faculty, and staff at the University of Arizona. We also provide consulting, technical documentation, and training to support our users.

This site is divided into sections that describe the High Performance Computing (HPC) resources that are available, how to use them, and the rules for use.





Contents

  • User Guide — Introduces the available resources and covers account registration, system access, how to run jobs, and how to request help.
  • Resources — Detailed information on compute, storage, software, grant, data center, and external (XSEDE, CyVerse, etc.) resources.
  • Policies — Policies on topics including acceptable use, access, acknowledgements, buy-in, instruction, maintenance, and special projects.
  • Results — A list of research publications that used UArizona's HPC system resources.
  • FAQ — A collection of frequently asked questions and their answers.


Quick Links
  • User Portal  —  Manage and create groups, request rental storage, manage delegates, delete your account, and submit special project requests.
  • Open OnDemand —  Graphical interface for accessing HPC and applications.
  • Getting Help —  Request help from our team.




Highlighted Research

Faster Speeds Need Faster Computation - Hypersonic Travel






Quick News

We keep only a reasonably current version of Singularity; prior versions have been removed, since only the latest release is considered secure. Singularity is installed on every compute node and can be used without loading a module. Singularity will be renamed Apptainer as the project moves into the Linux Foundation; an alias will be provided so you can continue to invoke "singularity".
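
As a quick illustration, here is a minimal sketch of running a container on a compute node; the image name is just an example, not something we provide:

    # No module load needed: singularity is already on the PATH of every compute node.
    singularity --version
    # Pull an example image from Docker Hub and run a command inside it:
    singularity pull docker://ubuntu:22.04
    singularity exec ubuntu_22.04.sif cat /etc/os-release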

Anaconda is very popular and is available as a module. It extends Jupyter with JupyterLab and includes RStudio and the Conda ecosystem. To access GUI interfaces available through Conda (e.g., JupyterLab), we recommend using an Open OnDemand Desktop session. See these instructions.

As a note, Anaconda likes to take ownership of your entire environment. Review those instructions to see what problems this can cause and how to address them.
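
A typical session might look like the following sketch; the module name and environment name are illustrative, so check "module avail anaconda" for the exact version on the system:

    module load anaconda                  # exact module name/version may differ
    conda create --name myenv python=3.9  # "myenv" is a hypothetical environment name
    conda activate myenv
    # Be cautious with "conda init": it edits ~/.bashrc so Conda takes over every
    # login shell, which is the ownership problem described in those instructions.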

Have you tried Puma yet? Our latest supercomputer is larger, faster, and has bigger teeth than Ocelote (ok, maybe not the last bit). See the Puma Quick Start to get going.
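
To take it for a spin, a minimal SLURM batch script looks something like the sketch below; the partition and account names are placeholders, so consult the Puma Quick Start for the real values:

    #!/bin/bash
    #SBATCH --job-name=hello-puma
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    #SBATCH --partition=standard   # placeholder; see the Quick Start for valid partitions
    #SBATCH --account=YOUR_GROUP   # placeholder; use your PI's group name
    hostname

Submit it with "sbatch hello-puma.slurm" from a login node.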

Since its upgrade, Ocelote has the same software suite as Puma and is generally not as busy. If your work does not need Puma's capabilities, consider using Ocelote instead. This applies to GPUs too, if the P100s will work for you.
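
Pointing a job at Ocelote's P100s is a small change to a batch script. The lines below use generic SLURM syntax and are a sketch, so check our GPU documentation for the exact form we require:

    #SBATCH --gres=gpu:1   # request one GPU; on Ocelote this will be a P100
    nvidia-smi             # prints details of the GPU the job landed on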

Now that we are into the second year of use, we have determined that we can increase the standard allocation. Effective the end of April 2022, the standard allocation increases from 70,000 to 100,000 CPU hours.
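
As a rough guide to what that buys, assuming hours are charged as cores multiplied by wall-clock time:

    # A job that requests 10 cores and runs for 4 hours consumes
    #   10 cores x 4 hours = 40 CPU hours.
    # The new 100,000-hour standard allocation covers 2,500 such jobs,
    # up from 1,750 under the old 70,000-hour allocation.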

2022 WiDS Virtual Workshops

Wednesday, August 31st from 8:00am to 12:00pm PST

Register here: https://www.widsconference.org/wids-workshops-august-31.html




Training Calendar

Introduction to HPC

Date | Time | Location | Registration
     | 9:00 - 10:00am | Main Library B254 | Registration
     | 9:00 - 10:00am | Main Library, Data Studio CATalyst |
     | 9:00 - 10:00am | Room 130A UITS Building |
     | 9:00 - 10:00am | Room 130A UITS Building |
     | 9:00 - 10:00am | Room 130A UITS Building |

Machine Learning on HPC

Date | Time | Location | Registration
     | 9:00 - 10:00am | Main Library, Data Studio CATalyst | Registration
     | 9:00 - 10:00am | Main Library, Data Studio CATalyst |
     | 9:00 - 10:00am | Room 130A UITS Building |
     | 9:00 - 10:00am | Room 130A UITS Building |
     | 9:00 - 10:00am | Room 130A UITS Building |

Intro to Parallel Computing

Date | Time | Location | Registration
     | 10:30 - 11:30am | Main Library, Data Studio CATalyst | Registration
     | 10:30 - 11:30am | Main Library, Data Studio CATalyst |

Introduction to Containers on HPC

Date | Time | Location | Registration
     | 9:00 - 10:00am | Main Library, Data Studio CATalyst | Registration
     | 9:00 - 10:00am | Main Library, Data Studio CATalyst |

Data Management Workshops

Data Management Part 1

Date | Time | Location | Registration
     | 1:00 - 2:00pm | Online | Registration
     | | Online |

Data Management Part 2

Date | Time | Location | Registration
     | 2:00 - 3:00pm | Online | Registration
     | | Online |

System Calendar

  • Maintenance downtime is scheduled from 6AM to 6PM on July 20 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on April 27 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on January 26 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on July 28 for ALL HPC services.
  • El Gato will be taken down for scheduled maintenance from July 12th through August 1st. Following maintenance, it will use SLURM as its scheduling software and have the same software image and modules as Ocelote and Puma.
  • Ocelote will be taken down for scheduled maintenance from June 1st through June 30th. During that time, its OS will be updated to CentOS 7 and its scheduler will be migrated to SLURM.
  • Maintenance downtime is scheduled from 6AM on January 27th through 6PM on January 28th for ALL HPC services.

