
University of Arizona High Performance Computing



For assistance, see the Getting Help page.




Our Mission

UA High Performance Computing (HPC) is an interdisciplinary research center focused on facilitating research and discoveries that advance science and technology. We deploy and operate advanced computing and data resources for the research activities of students, faculty, and staff at the University of Arizona. We also provide consulting, technical documentation, and training to support our users.

This site is divided into sections that describe the HPC resources that are available, how to use them, and the rules for their use.









Quick Links

  • User Portal  —  Manage and create groups, request rental storage, manage delegates, delete your account, and submit special project requests.
  • Open OnDemand —  Graphical interface for accessing HPC and applications.
  • Job Examples — View and download sample SLURM jobs from our GitHub site.
  • Training Videos (New!) — Visit our YouTube channel for instructional videos, researcher stories, and more.
  • Getting Help —  Request help from our team.







Highlighted Research

Faster Speeds Need Faster Computation - Hypersonic Travel









Quick News

Faster Interactive Sessions

Are you frustrated waiting for slow interactive sessions to start? Try using the standard queue on ElGato. We have provisioned 44 nodes to accept only standard-queue jobs to facilitate faster connections. To access a session, try:

(puma) [netid@junonia ~]$ elgato
(elgato) [netid@junonia ~]$ interactive -a <your_group> 

For more information on interactive sessions, see our page: Running Jobs With SLURM.
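
If you would rather submit the same work as a batch job, a minimal SLURM script looks roughly like the sketch below. The directive values are placeholders, so check Running Jobs With SLURM and your group's allocation before using them.

#!/bin/bash
#SBATCH --job-name=example           # placeholder name for the job
#SBATCH --account=<your_group>       # same group/allocation you would pass to interactive -a
#SBATCH --partition=standard         # the standard queue described above
#SBATCH --ntasks=1                   # one task
#SBATCH --time=00:10:00              # ten minutes of walltime

hostname                             # replace with your actual commands

Submit the script with sbatch (e.g., sbatch my_job.slurm) and check its state with squeue -u $USER.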


Increased Ocelote Allocation

Do you like using Ocelote? Good news! On November 9th, the standard allocation on Ocelote was increased from 35,000 to 70,000 CPU hours.

Singularity is Now Apptainer

Singularity has been renamed Apptainer now that the project has moved into the Linux Foundation. An alias exists so that you can continue to invoke singularity. Local builds are now possible in many cases, and remote builds with Sylabs are no longer supported.

We only keep a reasonably current version of Apptainer. Prior versions are removed since only the latest one is considered secure. Apptainer is installed on all of the system's compute nodes and can be accessed without using a module.
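
For example, once you are on a compute node (such as in an interactive session), you can invoke it directly; my_image.sif below is just a placeholder for a container image you have built or pulled:

apptainer --version                                 # confirm which version is installed
apptainer exec my_image.sif cat /etc/os-release     # run a command inside the container
singularity exec my_image.sif cat /etc/os-release   # the old name still works through the alias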

Anaconda on HPC

Anaconda is very popular and is available as a module. It expands the capability of Jupyter with JupyterLab and includes RStudio and the Conda ecosystem. To access GUI interfaces available through Conda (e.g., JupyterLab), we recommend using an Open OnDemand Desktop session. See these instructions.

As a note, Anaconda likes to own your entire environment. Review those instructions to see the problems this can cause and how to address them.
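
As a rough sketch of the workflow (the exact module name and version may differ; check module avail anaconda on the system, and see the linked instructions if conda activate complains about an uninitialized shell):

module avail anaconda                    # list the Anaconda modules that are installed
module load anaconda                     # load the default version (name may differ)
conda create --name my-env python=3.10   # my-env is a placeholder environment name
conda activate my-env                    # switch into the new environment
conda deactivate                         # leave it when you are done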

Puma News

Have you tried Puma yet? Our latest supercomputer is larger, faster, and has bigger teeth than Ocelote (ok, maybe not the last bit). See the Puma Quick Start page to get started.

Since we upgraded Ocelote, it has the same software suite as Puma and is generally not as busy. If your work does not need Puma's capabilities, consider using Ocelote instead. This also applies to GPUs, if the P100s will work for you.

Now that we are into Puma's second year of use, we have determined that we can increase the standard allocation. As of the end of April 2022, the standard allocation is increased from 70,000 to 100,000 CPU hours.







Calendars

Maintenance Calendar

  • Maintenance downtime is scheduled from 6AM to 6PM on April 26th for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on January 25 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on October 26 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on July 20 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on April 27 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on January 26 for ALL HPC services.
  • Maintenance downtime is scheduled from 6AM to 6PM on July 28 for ALL HPC services.
  • El Gato will be taken down for scheduled maintenance from July 12th through August 1st. Following maintenance, it will use SLURM as its scheduling software and have the same software image and modules as Ocelote and Puma.
  • Ocelote will be taken down for scheduled maintenance from June 1st through June 30th. During that time, its OS will be updated to CentOS 7 and its scheduler will be migrated to SLURM.
  • Maintenance downtime is scheduled from 6AM on January 27th through 6PM on January 28th for ALL HPC services.


Training Calendar


  • Introduction to HPC: see the Introduction to HPC page for more detailed information.
  • Introduction to Machine Learning: see the Machine Learning on HPC page for more detailed information.
  • Introduction to Parallel Computing: see the Intro to Parallel Computing page for more detailed information.
  • Introduction to Containers: see the Introduction to Containers on HPC page for more detailed information.
  • Data Management Workshops: see the Data Management Workshops page for more detailed information.