The University of Arizona
    For questions, please open a UAService ticket and assign to the Tools Team.



Linux Self-Guided

We run RHEL/CentOS 6 Linux on our high-performance systems.

If you have never used Linux before, or have only very limited experience with it, read this useful guide:


If you have used Linux in the past but want a quick reference for command syntax, read this:

Bash Cheat Sheet
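As a taste of what such a cheat sheet covers, here is a brief sketch using only standard shell utilities (the file name and contents are made up purely for illustration):

```shell
# A handful of everyday commands from a typical Bash cheat sheet
workdir=$(mktemp -d)                        # create a throwaway directory
cd "$workdir"

pwd                                         # show the current directory
printf 'alpha\nbeta\nalpha\n' > notes.txt   # redirect output into a file
ls -l notes.txt                             # long listing of the file
grep -c alpha notes.txt                     # count matching lines: prints 2
wc -l < notes.txt                           # count lines in the file: prints 3

cd /
rm -rf "$workdir"                           # clean up
```

Every command above works the same way on our RHEL/CentOS systems as on any other Linux machine.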




Intel® Modern Code Training

Intel brought a workshop to campus in 2014, and the material is covered here.  If you want to work with the Intel® Xeon Phi™ coprocessors, ElGato has 40 of them installed.  Once you have "standard" queue access, you can request access to the nodes that contain them.

Created by Colfax International and Intel, and based on the book Parallel Programming and Optimization with Intel® Xeon Phi™ Coprocessors, this short video series provides an overview of practical parallel programming and optimization with a focus on using the Intel® Many Integrated Core Architecture (Intel® MIC Architecture).

Length: 5 hours

Parallel Programming and Optimization with Intel Xeon Phi Coprocessors


Intel® Software Tools

Intel offers Cluster Studio XE.  On Ocelote we have installed the following modules (run module avail intel to list them):

  • intel-cluster-checker/2.2.2

  • intel-cluster-runtime/ia32/3.8

  • intel-cluster-runtime/intel64/3.8

  • intel-cluster-runtime/mic/3.8

We have also installed the Intel high-performance libraries (also listed by module avail intel):

  • Intel® Threading Building Blocks
  • Intel® Integrated Performance Primitives
  • Intel® Math Kernel Library
  • Intel® Data Analytics Acceleration Library

The University is licensed for this toolset separately from HPC.  Portions of it are FREE for use in teaching/instruction and for students.








Singularity

Singularity containers let users run applications in a Linux environment of their choosing.  This differs from Docker, which is not appropriate for HPC because of security concerns.

For an overview and more detailed information, refer to this location:

Here are some of the use cases we support using Singularity:

  • You already use Docker and want to run your jobs on HPC
  • You want to preserve your environment so that a system change will not affect your work
  • You need newer or different libraries than are offered on HPC systems
  • Someone else developed the workflow using a different version of Linux
  • You prefer to use something other than Red Hat / CentOS, like Ubuntu 
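For the first use case, Singularity can build images directly from Docker Hub.  With Singularity 2.x installed on your workstation, something along these lines works (the Ubuntu image is only an example, and the name of the image file written by pull can vary by Singularity version):

    $ singularity pull docker://ubuntu:16.04
    $ singularity exec ubuntu-16.04.img cat /etc/os-release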

Newer code example - TensorFlow

  1. Install Singularity on a Linux workstation -

  2. Create the container with a size of 1500 MB on a CentOS workstation or VM with root privileges:

    $ singularity create -s 1500 centosTFlow.img
  3. Create the definition file, in this example called centosTFlow.def

  4. Run the bootstrap process, which builds the installation according to the definition file:

    $ singularity bootstrap centosTFlow.img centosTFlow.def

  5. Copy the new image file to your space on HPC.  /extra might be a good location, as the image might use up the rest of your home directory.  A line in the definition file creates the mount point for /extra.  Any time you run from a location other than /home on ElGato, you are likely to see a warning, which you can ignore:

    WARNING: Not mounting current directory: user bind control is disabled by system administrator
  6. Test with a simple command:

    $ singularity exec centosTFlow.img python --version
    Python 2.7.5
  7. Or, slightly more complex, create a simple Python script called

    $ singularity exec centosTFlow.img python /extra/netid/
    Hello World: The Python version is 2.7.5 
    $ singularity shell centosTFlow.img
    Hello World: The Python version is 2.7.5 
  8. And now test TensorFlow with this example from their website:

    $ singularity exec centosTFlow.img python /extra/netid/
    (0, array([-0.08299404], dtype=float32), array([ 0.59591037], dtype=float32))
    (20, array([ 0.03721666], dtype=float32), array([ 0.3361423], dtype=float32))
    (40, array([ 0.08514741], dtype=float32), array([ 0.30855015], dtype=float32))
    (60, array([ 0.09648635], dtype=float32), array([ 0.3020227], dtype=float32))
    (80, array([ 0.0991688], dtype=float32), array([ 0.30047852], dtype=float32))
    (100, array([ 0.09980337], dtype=float32), array([ 0.3001132], dtype=float32))
    (120, array([ 0.09995351], dtype=float32), array([ 0.30002677], dtype=float32))
    (140, array([ 0.09998903], dtype=float32), array([ 0.30000633], dtype=float32))
    (160, array([ 0.0999974], dtype=float32), array([ 0.3000015], dtype=float32))
    (180, array([ 0.09999938], dtype=float32), array([ 0.30000037], dtype=float32))
    (200, array([ 0.09999986], dtype=float32), array([ 0.3000001], dtype=float32)) 
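The definition file from step 3 is not reproduced above.  As a rough illustration only, a Singularity 2.x definition file for a CentOS-plus-TensorFlow image might look like the following; the bootstrap source, mirror URL, package names, and pip command are assumptions to adapt, not the exact file used in these steps:

    # centosTFlow.def - illustrative sketch, not the original file
    BootStrap: yum
    OSVersion: 7
    MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/x86_64/
    Include: yum

    %post
        # create the mount point so /extra can be bound inside the container (see step 5)
        mkdir -p /extra
        # base Python tooling (package names are assumptions)
        yum -y install epel-release
        yum -y install python python-pip
        # install TensorFlow; pin a version matching your Python if needed
        pip install tensorflow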




