    For questions, please open a UAService ticket and assign to the Tools Team.

Overview

El Gato

El Gato was implemented at the start of 2014 and has since been reprovisioned with CentOS 7 and new compilers and libraries. Since July 2021 it has used Slurm for job submission. El Gato is a large GPU cluster, purchased through an NSF MRI grant by researchers in Astronomy and SISTA. While the Nvidia K20 GPUs are more than six years old, they are still valuable for single-precision workloads. There are 90 nodes with one or two GPUs.
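
Since job submission on El Gato now goes through Slurm, a minimal batch script looks like the sketch below. This is illustrative only: the partition and account names are placeholders (assumptions, not values documented on this page).

#!/bin/bash
#SBATCH --job-name=elgato-example     # descriptive job name
#SBATCH --nodes=1                     # one El Gato node
#SBATCH --ntasks=16                   # all 16 schedulable cores on the node
#SBATCH --gres=gpu:1                  # one K20 GPU
#SBATCH --time=01:00:00               # placeholder walltime
#SBATCH --partition=standard          # placeholder partition name (assumption)
#SBATCH --account=YOUR_GROUP          # placeholder PI group/account (assumption)

srun hostname                         # replace with the real workload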

Ocelote

Ocelote was implemented in the middle of 2016. It is designed to support all workloads on its standard nodes, with two exceptions:

  1. Large-memory workloads that do not fit within a standard node's 192GB of RAM (about 188GB usable) are handled by the large-memory node, which has 2TB of memory (a request sketch follows this list).
  2. GPU workloads are supported on 46 nodes with Nvidia P100 GPUs.
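
For example, a job that will not fit in a standard node's memory can be steered to the 2TB node with the hi_mem constraint shown in the resource-request examples further down. A minimal sketch; the walltime, partition, and account values are placeholders, not taken from this page:

#SBATCH --nodes=1
#SBATCH --ntasks=48                   # the large-memory node has 48 schedulable cores
#SBATCH --constraint=hi_mem           # restrict the job to the 2TB node
#SBATCH --time=04:00:00               # placeholder walltime
#SBATCH --partition=standard          # placeholder partition name (assumption)
#SBATCH --account=YOUR_GROUP          # placeholder PI group/account (assumption)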

Puma

Implemented in 2020, Puma is the biggest cat yet. Similar to Ocelote, it has standard CPU nodes (with 94 cores and 512 GB of memory per node), GPU nodes (with Nvidia V100) and two high-memory nodes (3 TB). Local scratch storage increased to ~1.4 TB. Puma runs on CentOS 7.

Free vs Buy-In

The HPC resources at UA differ from those at many other universities in that a significant portion of the available capacity is centrally funded. Each PI receives a standard monthly allocation of hours at no charge. Windfall usage is not charged against that allocation, which has proven very valuable for researchers with substantial compute requirements.

Research groups can 'Buy-In' to the base HPC systems (adding compute nodes) as funding becomes available. Buy-In groups have the highest priority on the resources they add to the system. When those resources are not fully utilized by the Buy-In group, they are made available to all users as windfall.
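
Windfall jobs are selected at submission time rather than charged against the monthly allocation. A minimal sketch, assuming the windfall queue is exposed as a Slurm partition named "windfall" (that name is an assumption, not confirmed on this page):

#SBATCH --nodes=1
#SBATCH --ntasks=28
#SBATCH --time=24:00:00               # placeholder walltime
#SBATCH --partition=windfall          # assumed name of the uncharged, lower-priority partition
#SBATCH --account=YOUR_GROUP          # placeholder PI group/account (assumption)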

Details on allocations

Details on buy-in




Compute System Details

El Gato
  Model: IBM System X iDataPlex dx360 M4
  Year Purchased: 2013
  Node Count: 131
  Total System Memory: 26 TB
  Processors: 2x Xeon E5-2650v2 8-core (Ivy Bridge)
  Cores / Node (schedulable): 16
  Total Cores: 2160*
  Processor Speed: 2.66 GHz
  Memory / Node: 256 GB (GPU nodes), 64 GB (CPU-only nodes)
  Accelerators: 122 NVIDIA K20X (40 nodes with 2 K20X, 42 nodes with 1 K20X)
  /tmp: ~840 GB spinning disk; /tmp is part of the root filesystem
  HPL Rmax: 46 TFlop/s
  OS: CentOS 7
  Interconnect: FDR InfiniBand

Ocelote
  Model: Lenovo NeXtScale nx360 M5
  Year Purchased: 2016 (P100 GPU nodes added in 2018)
  Node Count: 400
  Total System Memory: 82.6 TB
  Processors: 2x Xeon E5-2695v3 14-core (Haswell); 2x Xeon E5-2695v4 14-core (Broadwell); 4x Xeon E7-4850v2 12-core (Ivy Bridge)
  Cores / Node (schedulable): 28 (48 on the high-memory node)
  Total Cores: 11528*
  Processor Speed: 2.3 GHz (2.4 GHz on Broadwell CPUs)
  Memory / Node: 192 GB (2 TB on the high-memory node)
  Accelerators: 46 NVIDIA P100 (16 GB); 15 NVIDIA K80 (buy-in only)
  /tmp: ~840 GB spinning disk; /tmp is part of the root filesystem
  HPL Rmax: 382 TFlop/s
  OS: CentOS 7
  Interconnect: FDR InfiniBand node-node; 10 Gb Ethernet node-storage

Puma
  Model: Penguin Altus XE2242
  Year Purchased: 2020
  Node Count: 236 CPU-only, 8 GPU, 2 high-memory
  Total System Memory: 128 TB
  Processors: 2x AMD EPYC 7642 48-core (Rome)
  Cores / Node (schedulable): 94
  Total Cores: 23616*
  Processor Speed: 2.4 GHz
  Memory / Node: 512 GB (3 TB on the high-memory nodes)
  Accelerators: 29 NVIDIA V100S
  /tmp: ~1440 GB NVMe
  OS: CentOS 7
  Interconnect: 1x 25 Gb/s Ethernet RDMA (RoCEv2); 1x 25 Gb/s Ethernet to storage



* Includes high-memory and GPU node CPUs
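
The figures above can be cross-checked from any login node with standard Slurm queries, for example:

sinfo -o "%P %D %c %m %G"      # partitions with node counts, cores per node, memory, and GPUs (GRES)
scontrol show node <nodename>  # full hardware details for a single node; <nodename> is a placeholder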


Example Resource Requests

In the requests below, ncpus is the number of schedulable cores per node, pcmem is the memory available per core, and Max mem is the largest memory request a single node can satisfy.

El Gato

  Standard (ncpus: 16, pcmem: 4gb, Max mem: 62gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=16
    #SBATCH --mem-per-cpu=4gb

  GPU [1] (ncpus: 16, pcmem: 16gb, Max mem: 250gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=16
    #SBATCH --mem-per-cpu=16gb
    #SBATCH --gres=gpu:1

Ocelote

  Standard (ncpus: 28, pcmem: 6gb, Max mem: 168gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=28
    #SBATCH --mem-per-cpu=6gb

  GPU [2] (ncpus: 28, pcmem: 8gb, Max mem: 224gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=28
    #SBATCH --mem-per-cpu=8gb
    #SBATCH --gres=gpu:1

  High Memory (ncpus: 48, pcmem: 42gb, Max mem: 2016gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=48
    #SBATCH --constraint=hi_mem

Puma

  Standard (ncpus: 94, pcmem: 5gb, Max mem: 470gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=94
    #SBATCH --mem-per-cpu=5gb

  GPU [3] (ncpus: 94, pcmem: 5gb, Max mem: 470gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=94
    #SBATCH --mem-per-cpu=5gb
    #SBATCH --gres=gpu:1

  High Memory (ncpus: 94, pcmem: 32gb, Max mem: 3000gb)

    #SBATCH --nodes=1
    #SBATCH --ntasks=94
    #SBATCH --constraint=hi_mem

[1] Two GPUs may be requested on El Gato with --gres=gpu:2.
[2] There is a single node available on Ocelote with two GPUs. To request it, use --gres=gpu:2.
[3] Up to four GPUs may be requested on Puma with --gres=gpu:1, 2, 3, or 4.
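
Putting one of the requests above into a complete script, a single-GPU job on Puma might look like the following sketch. The partition and account names are placeholders (assumptions), and the executable is hypothetical.

#!/bin/bash
#SBATCH --job-name=puma-gpu-example
#SBATCH --nodes=1
#SBATCH --ntasks=94                   # whole node: 94 schedulable cores
#SBATCH --mem-per-cpu=5gb             # 94 x 5gb reaches the 470gb node maximum
#SBATCH --gres=gpu:1                  # one of the node's V100S GPUs
#SBATCH --time=02:00:00               # placeholder walltime
#SBATCH --partition=standard          # placeholder partition name (assumption)
#SBATCH --account=YOUR_GROUP          # placeholder PI group/account (assumption)

nvidia-smi                            # confirm which GPU was allocated to the job
srun ./my_gpu_program                 # hypothetical executable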