
Linux Self Guided 

We run RHEL/CentOS 6 Linux on our high-performance systems.

If you have never used Linux, or have only very limited experience with it, work through this useful guide:

http://www.ee.surrey.ac.uk/Teaching/Unix/

 

If you have used Linux in the past but want a quick reference for command syntax, read this:

Bash Cheat Sheet
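
For orientation, here are a few of the everyday commands such a reference covers (standard Linux commands; this is just a sampler, and /extra/netid is a placeholder path):

$ pwd                  # print the current working directory
$ ls -l                # list files with permissions and sizes
$ cd /extra/netid      # change to another directory
$ man ls               # read the manual page for a command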


Intel® Modern Code Training

Intel brought a workshop to campus in 2014, and the material is covered here.  If you want to do any work with the Intel® Xeon Phi™ Coprocessors, we have 40 of them installed in ElGato.  You can obtain "standard" queue access and then request access to the nodes where they are installed.

Created by Colfax International and Intel, and based on the book, Parallel Programming and Optimization with Intel® Xeon Phi™ Coprocessors, this short video series provides an overview of practical parallel programming and optimization with a focus on using the Intel® Many Integrated Core Architecture (Intel® MIC Architecture).

Length: 5 hours

Parallel Programming and Optimization with Intel Xeon Phi Coprocessors

https://software.intel.com/en-us/modern-code/training/short-video-series?utm_source=HPCwire&utm_medium=newsletter_2&utm_content=HPC_Developers&utm_campaign=DRD_16_80

 

Intel® Software Tools

Intel offers the Cluster Studio XE.  On Ocelote we have installed the following modules (run module avail intel to list them):

  • intel-cluster-checker/2.2.2

  • intel-cluster-runtime/ia32/3.8

  • intel-cluster-runtime/intel64/3.8

  • intel-cluster-runtime/mic/3.8

We have also installed the Intel high performance libraries (also shown by module avail intel):

  • Intel® Threading Building Blocks
  • Intel® Integrated Performance Primitives
  • Intel® Math Kernel Library
  • Intel® Data Analytics Acceleration Library
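
For example, a session on Ocelote might look like this (the module names are those listed above; the output is abbreviated and illustrative):

$ module avail intel
intel-cluster-checker/2.2.2         intel-cluster-runtime/ia32/3.8
intel-cluster-runtime/intel64/3.8   intel-cluster-runtime/mic/3.8
$ module load intel-cluster-runtime/intel64/3.8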

The University is licensed for this toolset separately from HPC.  Portions of it are free for use in teaching and instruction, and for students.

https://software.intel.com/en-us/qualify-for-free-software

https://software.intel.com/en-us/server-developer


Singularity

 

Overview

Singularity containers let users run applications in a Linux environment of their choosing.  This is different from Docker, which is not appropriate for HPC due to security concerns.  Singularity can run containers built from Docker images, but it is not limited to Docker.

For an overview and more detailed information refer to:
http://singularity.lbl.gov

Here are some of the use cases we support using Singularity:

  • You already use Docker and want to run your jobs on HPC
  • You want to preserve your environment so that a system change will not affect your work
  • You need newer or different libraries than are offered on HPC systems
  • Someone else developed the workflow using a different version of Linux
  • You prefer to use something other than Red Hat / CentOS, like Ubuntu 

Depending on your environment and the type of Singularity container you want to build, you may need to install some dependencies before installing and/or using Singularity. For instance, the following may need to be installed on Ubuntu for Singularity to build and run properly:

[user@someUbuntu ~]$ sudo apt-get install build-essential debootstrap yum dh-autoreconf

On CentOS, these commands will provide some needed dependencies for Singularity:

[user@someCentos ~]$ sudo yum groupinstall 'Development Tools'
[user@someCentos ~]$ sudo yum install wget
[user@someCentos ~]$ wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
[user@someCentos ~]$ sudo rpm -Uvh epel-release-7-8.noarch.rpm
[user@someCentos ~]$ sudo yum install debootstrap.noarch

You can find more information about installing Singularity on your Linux build system at http://singularity.lbl.gov/install-linux. Because Singularity is being rapidly developed, we recommend downloading and installing the latest release from GitHub, as sketched below.
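
A typical build from the GitHub source might look like this (a sketch based on the Singularity 2.x build instructions; the repository layout and steps may change between releases):

$ git clone https://github.com/singularityware/singularity.git
$ cd singularity
$ ./autogen.sh
$ ./configure --prefix=/usr/local
$ make
$ sudo make install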

Binding Directories

Binding a directory to your Singularity container allows you to access files in a host system directory from within your container. By default, Singularity will bind your /home/$USER directory and your current working directory (along with a few other directories such as /tmp and /dev). The examples below include a bind to /extra.
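
On systems where user bind control is enabled, you can bind additional host directories yourself with the -B flag; the paths in this sketch are illustrative:

$ singularity exec -B /data:/mnt centosTFlow.img ls /mnt

On our clusters user bind control is disabled (see the warning in step 5 of the example below), so only the administrator-configured binds such as /extra, /home/$USER, /tmp, and /dev are available.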

If you need more detailed information, refer to the Singularity documentation at http://singularity.lbl.gov.

CentOS with Tensorflow Example

This is an example of creating a Singularity image to run code that is not supported on HPC.  This example uses Tensorflow, but any application could be installed in its place.  It also uses CentOS, but it could just as easily be Ubuntu.

  1. Install Singularity on your Linux workstation - http://singularity.lbl.gov/install-linux

  2. Create the container with a size of 1500 MB, on a CentOS workstation or VM where you have root privileges

    singularity create -s 1500 centosTFlow.img
    # Create an image file to host the content of the container.  
    # Think of it like creating the virtual hard drive for a VM.
    # In ext3, an actual file of specified size is created. 
  3. Create the definition file, in this example called centosTFlow.def
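
    A minimal definition file for this example might look like the following sketch (the mirror URL, package names, and the pip install of Tensorflow are illustrative assumptions based on the Singularity 2.x documentation, not the exact file used here):

    BootStrap: yum
    OSVersion: 7
    MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
    Include: yum

    %post
        # Install Python and pip so Tensorflow can be installed (assumed package set)
        yum -y install epel-release
        yum -y install python python-pip
        pip install --upgrade pip
        pip install tensorflow
        # Create the /extra mount point referred to in step 5
        mkdir -p /extra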

  4. The bootstrap process creates the installation inside the image, following the definition file:

    singularity bootstrap centosTFlow.img centosTFlow.def



  5. Copy the new image file to your space on HPC.  /extra might be a good location, since the image might use up the remaining space in your home directory.  There is a line in the definition file that creates the mount point for /extra.  Any time you run from a location other than /home on ElGato, you are likely to see a warning, which you can ignore:

    WARNING: Not mounting current directory: user bind control is disabled by system administrator
  6. Test with a simple command

    $module load singularity
    $singularity exec centosTFlow.img python --version
    Python 2.7.5
  7. Or, slightly more complex, create a simple Python script called hello.py:
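
    One version of hello.py that is consistent with the output below (a sketch; the original script may differ):

    import platform
    # Report the Python version seen inside the container
    print("Hello World: The Python version is " + platform.python_version())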

    $singularity exec centosTFlow.img python /extra/netid/hello.py
    Hello World: The Python version is 2.7.5 
    $
    $singularity shell centosTFlow.img
    Singularity.centosTFlow.img>python hello.py 
    Hello World: The Python version is 2.7.5 
    Singularity.centosTFlow.img>
  8. And now test Tensorflow with this example from their web site, TFlow_example.py:

    $singularity exec centosTFlow.img python /extra/netid/TFlow_example.py
    (0, array([-0.08299404], dtype=float32), array([ 0.59591037], dtype=float32))
    (20, array([ 0.03721666], dtype=float32), array([ 0.3361423], dtype=float32))
    (40, array([ 0.08514741], dtype=float32), array([ 0.30855015], dtype=float32))
    (60, array([ 0.09648635], dtype=float32), array([ 0.3020227], dtype=float32))
    (80, array([ 0.0991688], dtype=float32), array([ 0.30047852], dtype=float32))
    (100, array([ 0.09980337], dtype=float32), array([ 0.3001132], dtype=float32))
    (120, array([ 0.09995351], dtype=float32), array([ 0.30002677], dtype=float32))
    (140, array([ 0.09998903], dtype=float32), array([ 0.30000633], dtype=float32))
    (160, array([ 0.0999974], dtype=float32), array([ 0.3000015], dtype=float32))
    (180, array([ 0.09999938], dtype=float32), array([ 0.30000037], dtype=float32))
    (200, array([ 0.09999986], dtype=float32), array([ 0.3000001], dtype=float32)) 
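
For reference, TFlow_example.py is the introductory linear-regression example from the Tensorflow web site. The sketch below is consistent with the output above, though the exact file may differ slightly:

import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points where y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that fit y_data = W * x_data + b
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared error with gradient descent
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Initialize all variables (pre-1.0 Tensorflow API)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# Fit the line, printing (step, W, b) every 20 steps
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))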

Docker Example

 

This example is taken from the Singularity documentation and modified for our HPC systems. It again uses Tensorflow, but it could be PHP or any other Docker image.  Note that you will be creating a container that runs Ubuntu on top of the Red Hat / CentOS clusters.

  1. Create the Singularity container on the workstation or VM where you have root authority:

    $singularity create --size 4000 docker-tf.img
  2. Import the Docker Tensorflow workflow from the Docker hub:

    $singularity import docker-tf.img docker://tensorflow/tensorflow:latest
    Cache folder set to /root/.singularity/docker
    Downloading layer sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
    Extracting /root/.singularity/docker/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz
    Downloading layer sha256:65f3587f2637c17b30887fb0d5dbfad2f10e063a72239d840b015528fd5923cd
    Extracting 
    ...
    Extracting /root/.singularity/docker/sha256:56eb14001cebec19f2255d95e125c9f5199c9e1d97dd708e1f3ebda3d32e5da7.tar.gz
    Bootstrap initialization
    No bootstrap definition passed, updating container
    Executing Prebootstrap module
    Executing Postbootstrap module
    Done. 
  3. Move the image to HPC and test it:

    [user@host]$ singularity shell docker-tf.img
    Singularity: Invoking an interactive shell within container...
    
    Singularity.docker-tf.img> python 
    Python 2.7.6 (default, Oct 26 2016, 20:30:19) 
    [GCC 4.8.4] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tensorflow
    >>> exit()
    Singularity.docker-tf.img> exit
    $singularity exec docker-tf.img lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 14.04.4 LTS
    Release:	14.04
    Codename:	trusty
    user@host$ singularity exec docker-tf.img python /extra/netid/TFlow_example.py 
    WARNING:tensorflow:From TFlow_example.py:21 in <module>.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
    Instructions for updating:
    Use `tf.global_variables_initializer` instead.
    (0, array([ 0.72233653], dtype=float32), array([-0.00956423], dtype=float32))
    (20, array([ 0.24949318], dtype=float32), array([ 0.22735602], dtype=float32))
    (40, array([ 0.13574874], dtype=float32), array([ 0.28262845], dtype=float32))
    (60, array([ 0.10854871], dtype=float32), array([ 0.2958459], dtype=float32))
    (80, array([ 0.1020443], dtype=float32), array([ 0.29900661], dtype=float32))
    (100, array([ 0.10048886], dtype=float32), array([ 0.29976246], dtype=float32))
    (120, array([ 0.10011692], dtype=float32), array([ 0.29994321], dtype=float32))
    (140, array([ 0.10002796], dtype=float32), array([ 0.29998642], dtype=float32))
    (160, array([ 0.10000668], dtype=float32), array([ 0.29999676], dtype=float32))
    (180, array([ 0.1000016], dtype=float32), array([ 0.29999924], dtype=float32))
    (200, array([ 0.10000039], dtype=float32), array([ 0.29999983], dtype=float32))
    user@host$

Running Jobs

Singularity must not be run on the login nodes; that is a general policy for any application.

To run a Singularity container image on ElGato or Ocelote interactively, you need to allocate an interactive session, and load the Singularity module. In this sample session, the Tensorflow Singularity container from above is started, and python is run. Note that in this example, you would be running the version of python that is installed within the Singularity container, not the version on the cluster.

ElGato Interactive Example

[netid@elgato singularity]$ bsub -Is bash
Job <633365> is submitted to default queue <windfall>.
<<Waiting for dispatch ...>>
<<Starting on gpu44>>

[netid@gpu44 singularity]$ module load singularity
[netid@gpu44 singularity]$ singularity exec docker-tf.img python /extra/chrisreidy/singularity/TFlow_example.py
WARNING: Not mounting current directory: user bind control is disabled by system administrator
Instructions for updating:
Use `tf.global_variables_initializer` instead.
(0, array([ 0.12366909], dtype=float32), array([ 0.3937912], dtype=float32))
(20, array([ 0.0952933], dtype=float32), array([ 0.30251619], dtype=float32))
...
(200, array([ 0.0999999], dtype=float32), array([ 0.30000007], dtype=float32))
[netid@gpu44 singularity]$ exit

Ocelote Interactive Example

The process is the same except that the command to initiate the interactive session will look more like:

 $ qsub -I -N jobname -m bea -M netid@email.arizona.edu -W group_list=hpcteam -q windfall -l select=1:ncpus=28:mem=168gb -l cput=1:0:0 -l walltime=1:0:0

ElGato Job Submission

Running a job with Singularity is as easy as running other jobs.  The LSF script might look like this, and the results will be found in lsf_tf.out:

#!/bin/bash
###========================================
#BSUB -n 1                    # number of cores
#BSUB -q "windfall"           # queue to submit to
#BSUB -R "span[ptile=1]"      # cores per node
#BSUB -o lsf_tf.out           # standard output file
#BSUB -e lsf_tf.err           # standard error file
#BSUB -J testtensorflow       # job name
#---------------------------------------------------------------------

module load singularity
cd /extra/netid/data
singularity exec docker-tf.img python /extra/chrisreidy/singularity/TFlow_example.py


Ocelote Job Submission

The PBS script might look like this, and the results will be found in singularity-job.o<jobid>:

#!/bin/bash
#PBS -N singularity-job              # job name
#PBS -W group_list=pi                # your PI's group list
#PBS -q windfall                     # queue to submit to
#PBS -j oe                           # join stdout and stderr into one file
#PBS -l select=1:ncpus=1:mem=6gb     # one core with 6 GB of memory
#PBS -l walltime=01:00:00            # wall-clock time limit
#PBS -l cput=12:00:00                # CPU time limit

module load singularity
cd /extra/chrisreidy/singularity
date
singularity exec docker-tf.img python /extra/chrisreidy/singularity/TFlow_example.py
date
