
Overview

El Gato entered service at the start of 2014, and there is currently no planned date to discontinue it.  It is a large GPU/Phi cluster purchased through an NSF MRI grant by researchers in Astronomy and SISTA.  30% of the system, including the Nvidia GPUs and Intel Phis, is available for general campus research.

Ocelote was implemented in the middle of 2016.  Its standard nodes are designed to support all workloads, with two exceptions:

  1. Large memory workloads that do not fit within the 192GB of RAM on each standard node. These can be run on the large memory node or on the virtual SMP nodes.
  2. GPU workloads. GPUs were originally available only as a buy-in option or through windfall, but are now a standard part of Ocelote.

Ocelote now has 46 nodes with Nvidia P100s that are available through the "standard" and "windfall" queues, as sketched below. See details at Running Jobs.
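For orientation, a GPU job on these nodes might be requested as follows. This is a minimal sketch, assuming Ocelote's PBS Pro scheduler and the "standard" queue named above; the job name, group name, walltime, and resource selectors (ncpus, mem, ngpus) are illustrative assumptions, so check the Running Jobs page for the exact directives to use.

    #!/bin/bash
    #PBS -N p100-example                          # job name (placeholder)
    #PBS -q standard                              # or "windfall" for uncharged runs
    #PBS -W group_list=mygroup                    # your PI's group name (placeholder)
    #PBS -l select=1:ncpus=28:mem=168gb:ngpus=1   # one 28-core node with one P100 (selector assumed)
    #PBS -l walltime=01:00:00                     # one hour

    cd $PBS_O_WORKDIR
    module load cuda                              # module name assumed
    ./my_gpu_program                              # your executable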


Free vs Buy-In

UA's HPC resources differ from those at many other universities in that a significant portion of them is centrally funded.  Each PI receives a standard monthly allocation of hours at no charge.  Windfall usage is also free of charge, which has proven very valuable for researchers with substantial compute requirements.

Where a group funds a buy-in, those resources are dedicated to that group.  If the buy-in group does not fully utilize its resources, they are made available to all users.
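In a job script, the difference between a charged run and a windfall run comes down to the queue directive. A minimal sketch, again assuming the PBS Pro directives used above; the scheduling behavior noted in the comments is an assumption, so see Details on allocations for specifics.

    # Charged against the PI's standard monthly allocation of hours:
    #PBS -q standard

    # Free of charge, consuming otherwise-idle cycles (assumed to run
    # at lower priority; see Details on allocations):
    #PBS -q windfall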

Details on allocations

Details on buy-in

Test Environment

In addition to the primary clusters detailed below, HPC has a test / trial environment.  It is intended for projects of six months or less in duration that cannot be run on the production systems, for example because they require root access or have hardware or software requirements that no production system can meet.  If you have a project in mind that we might be able to support, contact hpc-consult@list.arizona.edu.

Feature    Detail
Nodes      16
CPU        Xeon Westmere-EP X5650, dual 6-core
Memory     128GB
Disk       10TB (5 x 2TB)
Network    GbE and QDR IB



Compute System Details

Feature                     El Gato                              Ocelote
Model                       IBM System X iDataPlex dx360 M4      Lenovo NeXtScale nx360 M5
Year Purchased              2013                                 2016-2018
Type                        Distributed Memory                   Serial, SMP, Distributed and Large Memory*
Processors                  Xeon Ivy Bridge E5-2650,             Xeon Haswell E5-2695, dual 14-core;
                            dual 8-core                          Xeon Broadwell E5-2695, dual 14-core
Processor Speed (GHz)       2.66                                 2.3
Accelerators                140 Nvidia K20x,                     46 Nvidia P100,
                            40 Intel Phi                         15 Nvidia K80 (windfall only)
Node Count                  136                                  400
Cores / Node                16                                   28
Total Cores                 2176                                 11528
Memory / Node (GB)          64 or 256                            192 (high-memory and vSMP nodes: 2TB)
Total Memory (TB)           26                                   82.6
Local Storage               900GB /tmp                           ~840GB /localscratch; /tmp is part of
                                                                 the root filesystem
Max Performance (TFLOPS)    46                                   382
OS                          RedHat 6.4                           CentOS 6.7
Interconnect                FDR Infiniband                       FDR Infiniband node-node;
                                                                 10Gb Ethernet node-storage
Application Support         MPI, Serial, GPU, Phi                Parallel, MPI, OpenMP, Serial


* Ocelote includes a large memory node with 2TB of RAM available across 48 cores; a request sketch follows below.  More details on the Large Memory Node.

* Virtual SMP software implements large memory images.
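Requesting the large memory node generally means asking the scheduler for its full footprint. A minimal sketch, assuming PBS Pro syntax; the queue name and the ncpus/mem selectors are assumptions derived from the figures above, so confirm them on the Large Memory Node page.

    #!/bin/bash
    #PBS -N bigmem-example                 # job name (placeholder)
    #PBS -q standard                       # queue name assumed
    #PBS -W group_list=mygroup             # your PI's group name (placeholder)
    #PBS -l select=1:ncpus=48:mem=2000gb   # all 48 cores, ~2TB of RAM (selector assumed)
    #PBS -l walltime=04:00:00

    cd $PBS_O_WORKDIR
    ./my_large_memory_program              # your executable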
