Ocelote has 46 new compute nodes with Nvidia P100 GPUs, available to researchers on campus. Fairshare limitations apply, but the intention is for them to be as widely available as possible. El Gato still provides 70 compute nodes equipped with Nvidia Tesla K20s.
A number of CUDA modules are available on Ocelote.
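The exact names and versions change over time; to see what is currently installed, query the module system (the load line below is illustrative, and the actual module name may differ):

    module avail cuda     # list the installed CUDA modules
    module load cuda      # load one of them; substitute the name reported by module avail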
We support OpenACC through the GCC 6.1 compiler, which is loaded automatically as a module when you log in. Verify this with "module list".
The GCC 6 release includes a much improved implementation of the OpenACC 2.0a specification.
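As a quick sketch: a C source file with OpenACC pragmas (the file name saxpy.c is hypothetical) compiles with GCC's -fopenacc flag; whether loops actually offload to the GPU or fall back to the host depends on whether GCC was built with nvptx offloading support:

    # -fopenacc enables the OpenACC directives; -O2 is a typical optimization level
    gcc -fopenacc -O2 saxpy.c -o saxpy
    ./saxpy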
A useful quick reference guide can be found at:
About twice a year we host the XSEDE workshop on programming GPUs with OpenACC. Watch for announcements on the hpc-info list.
Many applications have been optimized to run faster on GPUs. These include:
- NAMD - Installed as a module; module load namd
- VASP - A restricted-license version is installed on Ocelote; available only to licensed users
- GROMACS - Installed as a module on Ocelote; module load gromacs
- LAMMPS - Installed as a module on Ocelote; module load lammps/gcc/16Mar18
- MATLAB - Review the GPU Coder at their web site
- ANSYS Fluent
- ML and DL frameworks - See the next section below
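For the applications above that are provided as modules, you can discover the installed versions before loading one. A quick sketch, using namd as the example (any of the module names above works the same way):

    module avail namd     # list the installed versions
    module load namd      # load the default version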
*** Nvidia-Provided GPU Codes ***
Nvidia builds the popular ML and DL frameworks, which is not a trivial task. They have made these builds available to us, and they will be updated regularly. They are currently located at:
Each is provided in a Singularity container.
The file name ends with a tag indicating when the image was built; for example, 18.01 means January 2018.
Copy the file you wish to use into your own directory. Your home path as well as /extra and /xdisk are bound into the image, so those are your choices.
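As a sketch, assuming the containers live in a central directory; the path and image file name below are placeholders:

    # copy one container image into your home directory
    cp /path/to/nvidia/containers/tensorflow-18.01.simg ~/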
For interactive use, start an interactive job on a GPU node by modifying this command:
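A minimal sketch, assuming PBS Pro syntax on Ocelote; the job name, group, queue, and resource values are placeholders to adjust:

    qsub -I -N gpu-test -W group_list=YOUR_GROUP -q standard \
         -l select=1:ncpus=28:mem=168gb:ngpus=1 -l walltime=01:00:00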
You must change the group_list, and you should adjust the other attributes as desired.
On the compute node assigned to you, you can run, for example:
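A sketch, assuming the TensorFlow image copied earlier; the image file name is illustrative, and the module load line is an assumption about the environment:

    module load singularity     # if singularity is not already on your path
    singularity exec --nv tensorflow-18.01.simg python tensorflow_example.py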
You must include the --nv flag (note that it has two dashes); it binds the CUDA libraries into the container.
The example file, "tensorflow_example.py", is included in the same directory.
For batch use, include these two lines in your submission script:
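A sketch of what those two lines typically look like; the image file name is illustrative:

    module load singularity
    singularity exec --nv tensorflow-18.01.simg python tensorflow_example.py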
There are more detailed examples here.
For more information on Singularity, see their web site at:
There are tutorials for Singularity on HPC here.
We host workshops from the Pittsburgh Supercomputing Center, an NSF-funded center. We are working with Nvidia to offer a workshop in the April 2018 timeframe.
Watch for announcements from the hpc-info list.