The University of Arizona
    For questions, please open a UAService ticket and assign to the Tools Team.

Local Workshops

The UITS Research Technologies group offers introductory workshops in High-Performance Computing (HPC) for students in scheduled courses that include an HPC component. A workshop can be requested by the instructor teaching the course and can be delivered during regular class hours. The goal of the workshop is to introduce users to the computational resources offered by the University of Arizona and to provide the basic knowledge and skills needed to use the HPC systems. We also discuss how computing on a remote, shared HPC cluster differs from computing on a local machine (desktop or laptop).

We currently offer HPC workshops in 50-minute and 90-minute formats. The 50-minute workshop introduces users to the HPC resources of the University of Arizona, while the 90-minute workshop adds a practical exercise of working with the HPC system. We can also schedule two 50-minute workshops to cover all the material from the 90-minute workshop. Additional topics relevant to a specific group of users (e.g. X-forwarding, Python virtual environments, working with MATLAB, etc.) can be covered by request.

All attendees must obtain an HPC account prior to the workshop, as there is at least a 15-minute lag between requesting an account and being able to use it. Windows users should also download and install PuTTY (an SSH client and terminal emulator) and WinSCP (a program for secure data transfer). Both applications are available for free download from the UA software license website.

The 50-minute UA HPC Introduction workshop covers the following topics:
- Brief description of the UA HPC computing and storage resources
- Computing on an HPC cluster vs. computing on a laptop
- Requesting an HPC account
- Accessing the HPC system
- Data transfer between local and UA HPC storage
- Using software on the HPC system with environment modules
- Resource management and scheduling batch jobs with the Portable Batch System (PBS)
- Writing a PBS script
- UA HPC documentation and support

Additional topics covered in the 90-minute workshop:

- Basic commands for working within the Linux environment
- Running a batch job on the HPC system
- Checking the status of a batch job
- Reading the standard output and standard error files
- Modifying parameters in the PBS script for a multicore batch job
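The hands-on steps above follow a simple command-line cycle on the cluster's login node; the script name, job ID, and output file names below are illustrative:

```shell
# Submit the job script; qsub prints the assigned job ID
qsub myjob.pbs

# Check the status of your queued and running jobs
qstat -u $USER

# When the job finishes, PBS writes standard output and standard error
# to <jobname>.o<jobid> and <jobname>.e<jobid> in the submission directory
cat myjob.o123456
cat myjob.e123456

# For a multicore job, edit the resource request line in the script,
# e.g. change it to:  #PBS -l select=1:ncpus=4:mem=16gb
```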

If the workshop is held in the Computer Center, we also offer a tour of the Research Data Center.

Faculty and course instructors can coordinate a workshop by contacting our consultants at

Extensive Training Courses

We have linked to relevant training courses from other institutions. Rather than recreate them, we recommend that you access them directly. Here is a partial list from each site:
Cornell Virtual Workshops

  • Introduction to Linux
  • Introduction to C Programming
  • Introduction to Fortran Programming
  • Introduction to Python
  • Introduction to R
  • MATLAB Programming
  • Introduction to GPU and CUDA
  • Parallel Computing Courses including MPI and OpenMP
  • Code Improvement
  • Data Management including Globus, HDF5 and VisIt

CyberInfrastructure Tutor from NCSA

  • Debugging Code
  • MPI
  • Introduction to Performance Tools
  • Introduction to Visualization
  • Parallel Computing

Software Carpentry

  • The Unix Shell
  • Version Control with Git
  • Using Databases and SQL
  • Programming with Python
  • Programming with R
  • Programming with MATLAB
  • Automation and Make

Linux Self Guided 

We run RHEL/CentOS 6 Linux on our high-performance systems.

If you have never used Linux before or have had very limited use, read this useful guide:

If you have learned Linux in the past but want a quick reference to the syntax of commands, then read this:

Bash Cheat Sheet

Intel® Modern Code Training

Intel brought a workshop to campus in 2014 and the material is covered here.  If you want to do any work on the Intel® Xeon Phi™ Coprocessors, we have 40 of them installed in ElGato.  You can obtain "standard" queue access and request access to the nodes where they are installed.

Created by Colfax International and Intel, and based on the book, Parallel Programming and Optimization with Intel® Xeon Phi™ Coprocessors, this short video series provides an overview of practical parallel programming and optimization with a focus on using the Intel® Many Integrated Core Architecture (Intel® MIC Architecture).

Length: 5 hours

Parallel Programming and Optimization with Intel Xeon Phi Coprocessors

Intel® Software Tools

Intel offers Cluster Studio XE.  On Ocelote we have installed the following modules (run "module avail intel" to list them):

  • intel-cluster-checker/2.2.2
  • intel-cluster-runtime/ia32/3.8
  • intel-cluster-runtime/intel64/3.8
  • intel-cluster-runtime/mic/3.8

We have also installed the Intel high-performance libraries (run "module avail intel" to list them):

  • Intel® Threading Building Blocks
  • Intel® Integrated Performance Primitives
  • Intel® Math Kernel Library
  • Intel® Data Analytics Acceleration Library
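As an illustrative sketch of using these modules on Ocelote (the source file name is hypothetical, and the exact module names should be confirmed with "module avail intel"), loading the Intel module and compiling against the Math Kernel Library might look like:

```shell
# List and load the Intel modules on Ocelote
module avail intel
module load intel

# Compile a C program against the Intel Math Kernel Library;
# -mkl is a flag of the Intel C compiler (icc)
icc -mkl my_blas_program.c -o my_blas_program
./my_blas_program
```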

The University is licensed for and has access to this toolset separately from HPC.  Portions of it are free for use in teaching and instruction and by students.


Introduction to OpenMP

This PDF file is a presentation from a series called the XSEDE* HPC Workshop.

* XSEDE, the Extreme Science and Engineering Discovery Environment, is the most advanced, powerful, and robust collection of integrated digital resources and services in the world. It is a single virtual system that scientists and researchers can use to interactively share computing resources, data, and expertise. XSEDE integrates the resources and services, makes them easier to use, and helps more people use them.


Singularity

Singularity containers let users run applications in a Linux environment of their choosing.  This is similar to, but not the same as, Docker.

The most important thing to know is that you create the Singularity container, called an image, on a workstation where you have root privileges, and then transfer the image to HPC, where you can execute it. If root access is an issue, the answer might be a virtual environment on your laptop, such as Vagrant for macOS.

For an overview and more detailed information refer to:

Here are some of the use cases we support using Singularity:

  • You already use Docker and want to run your jobs on HPC
  • You want to preserve your environment so that a system change will not affect your work
  • You need newer or different libraries than are offered on the HPC systems
  • Someone else developed the workflow using a different version of Linux
  • You prefer to use something other than Red Hat / CentOS, such as Ubuntu

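Those use cases share the same basic workflow. As a sketch (the image, recipe, and script names are hypothetical, and the build syntax assumes Singularity 2.4 or later):

```shell
# On a workstation where you have root: build an image,
# e.g. from an Ubuntu base described in a definition file
sudo singularity build my_ubuntu.img my_recipe.def

# Transfer the image to HPC (replace the placeholders with
# your NetID and the cluster's file-transfer host)
scp my_ubuntu.img <netid>@<hpc-host>:~

# On the HPC system, no root needed: run a command inside the container
singularity exec my_ubuntu.img python my_script.py
```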
Singularity tutorials
