
Overview

The clusters known as Cluster, UV and HTC were purchased in 2012.  Cluster and UV are due to be removed from service at the end of 2017.  This schedule was planned to give researchers time to finish projects started on these clusters where migration to Ocelote presents issues.

El Gato was implemented at the start of 2014, and there is currently no planned date to discontinue it.  El Gato is a large GPU/Phi cluster purchased through an NSF MRI grant by researchers in Astronomy and SISTA.  30% of this system, including the Nvidia GPUs and Intel Phis, is available for general campus research.

Ocelote was implemented in the middle of 2016.  It is designed to support all workloads on its standard nodes, with two exceptions (a rough decision sketch follows this list):

  1. Large memory workloads that do not fit within the 192GB RAM of a standard node can be handled by either the large memory node or the virtual SMP nodes.
  2. GPUs on Ocelote are available only as a buy-in option or as windfall; any other need for GPUs can be satisfied on El Gato.
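
As a rough illustration of the two exceptions above, the sketch below maps a job's memory requirement and GPU need onto the options described on this page (192GB standard nodes, the 2TB large memory node, vSMP, and El Gato for GPUs).  This is illustrative only: the function name and thresholds are ours and are not part of any HPC tooling; consult the scheduler documentation for actual queue and node names.

```python
# Illustrative only: a rough memory/GPU check for picking a resource,
# using the figures quoted on this page. Not part of any HPC tooling.

STANDARD_NODE_GB = 192       # per-node RAM on Ocelote standard nodes
LARGE_MEMORY_NODE_GB = 2048  # the single 2TB large memory node (48 cores)

def suggest_ocelote_resource(job_memory_gb, needs_gpu=False):
    """Return a rough suggestion for where a job might run."""
    if needs_gpu:
        # Ocelote GPUs are buy-in or windfall; El Gato is the general GPU option.
        return "El Gato (or Ocelote GPU nodes via buy-in/windfall)"
    if job_memory_gb <= STANDARD_NODE_GB:
        return "Ocelote standard node (192GB)"
    if job_memory_gb <= LARGE_MEMORY_NODE_GB:
        return "Ocelote large memory node (2TB) or vSMP image"
    return "contact HPC consulting (hpc-consult@list.arizona.edu)"

if __name__ == "__main__":
    for need_gb in (64, 500, 3000):
        print(f"{need_gb} GB -> {suggest_ocelote_resource(need_gb)}")
```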


Test Environment

HPC has a test / trial environment in addition to the primary clusters detailed below.  This environment is intended for projects that last six months or less and cannot be run on the production systems, for example because they require root access or have hardware or software requirements that the production systems cannot meet.  If you have a project in mind that we might be able to support, contact hpc-consult@list.arizona.edu

Feature  | Detail
Nodes    | 16
CPU      | Xeon Westmere-EP X5650, dual 6-core *
Memory   | 128GB
Disk     | 10TB (5 x 2TB)
Network  | GbE and QDR IB

 * same as HTC nodes



Compute System Details

Name                     | Cluster (Legacy)                     | SMP (UV) (Legacy)                     | HTC (Legacy)                         | El Gato                              | Ocelote
Model                    | SGI Altix 8400                       | SGI Altix UV 1000                     | IBM System X iDataPlex dx360 M3      | IBM System X iDataPlex dx360 M4      | Lenovo NeXtScale nx360 M5
Year Purchased           | 2011                                 | 2011                                  | 2011                                 | 2013                                 | 2016
Type                     | Distributed Memory                   | Shared Memory                         | Discrete nodes                       | Distributed Memory                   | Distributed and Large Memory*
Processors               | Xeon Westmere-EP X5650, dual 6-core  | Xeon Westmere-EX E7-8837, dual 8-core | Xeon Westmere-EP X5650, dual 6-core  | Xeon Ivy Bridge E5-2650, dual 8-core | Xeon Haswell E5-2695, dual 14-core
Processor Speed (GHz)    | 2.66                                 | 2.66                                  | 2.66                                 | 2.66                                 | 2.3
Accelerators             | -                                    | -                                     | -                                    | 140 Nvidia K20x, 40 Intel Phi        | 15 Nvidia K80 (windfall only)
Node Count               | 229                                  | 58                                    | 104                                  | 136                                  | 336
Cores / Node             | 12                                   | 16                                    | 12                                   | 16                                   | 28
Total Cores              | 2748                                 | 928                                   | 1248                                 | 2176                                 | 10044
Memory / Node (GB)       | 24 or 48                             | 32 or 128                             | 24, 48, or 96                        | 64 or 256                            | 192 (2TB & vSMP*)
Total Memory (TB)        | 8.016                                | 2.688                                 | 3.744                                | 26                                   | 71.5
/tmp                     | 150MB                                | 1.4TB                                 | 1.7TB /localscratch, 1GB /tmp        | 900GB /localscratch                  | ~840GB (/tmp is part of the root filesystem)
Max Performance (TFLOPS) | 29.24                                | 18.9                                  | 13.28                                | 46                                   | 382
OS                       | RedHat 6.0                           | RedHat 6.4                            | RedHat 6.0                           | RedHat 6.4                           | CentOS 6.7
Interconnect             | QDR Infiniband within chassis        | NUMAlink 5 within chassis             | 1 GigE                               | FDR Infiniband                       | FDR Infiniband node-node; 10Gb Ethernet node-storage
Application Support      | Parallel, MPI                        | Parallel, OpenMP                      | Serial, Single Core                  | MPI, Serial, GPU, Phi                | Parallel, MPI, OpenMP, Serial


* Ocelote also includes a large memory node with 2TB of RAM available across 48 cores; virtual SMP software is used to provide large memory images beyond a single standard node.
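
For quick comparisons between systems, per-core memory on a standard node can be read off the table as memory per node divided by cores per node.  The short sketch below is illustrative arithmetic only, using the largest standard per-node memory listed above for each system; Ocelote's 2TB node and vSMP images are excluded, and nothing here queries the clusters themselves.

```python
# Per-core memory on a standard node, computed from the table above.
# Figures are copied from this page (largest listed per-node memory).
systems = {
    "Cluster (Legacy)":  (48, 12),
    "SMP (UV) (Legacy)": (128, 16),
    "HTC (Legacy)":      (96, 12),
    "El Gato":           (256, 16),
    "Ocelote":           (192, 28),
}

for name, (mem_gb, cores) in systems.items():
    print(f"{name}: {mem_gb / cores:.1f} GB per core")
```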
