    For questions, please open a UAService ticket and assign to the Tools Team.

Overview

The clusters known as Cluster, UV, and HTC were purchased in 2012 and are due to be removed from service at the end of 2017.  This schedule was planned to allow time for researchers to finish projects started on these clusters where migration to the new cluster would present consistency issues.

El Gato was implemented at the start of 2014, and there is no planned date at this point to discontinue it.  El Gato is a large GPU/Phi cluster, purchased with an NSF MRI grant by researchers in Astronomy and SISTA.  30% of this system is available for general campus research, including the Nvidia GPUs and Intel Phis.

Ocelote was implemented in the middle of 2016.  It is designed to support all workloads, with two caveats (a short illustrative sketch of this routing follows the list):

  1. Large memory workloads that do not fit within the 192GB RAM of a standard node can still be handled, on the large memory node.*
  2. GPUs are available only as a buy-in option or as windfall.  Any other need for GPUs can be satisfied on El Gato.
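The sketch below is a minimal, illustrative Python summary of the routing described in this list. The thresholds (192GB standard nodes, the 2TB large memory node, GPUs via buy-in/windfall on Ocelote or on El Gato) come from this page; the function name and return strings are purely hypothetical and not part of any scheduler.

```python
def suggest_system(mem_gb, needs_gpu=False, has_gpu_buy_in=False):
    """Illustrative only: map a job's needs to a system per this page's guidance."""
    if needs_gpu:
        # Ocelote GPUs are buy-in or windfall only; other GPU work goes to El Gato.
        return "Ocelote (buy-in or windfall GPU)" if has_gpu_buy_in else "El Gato (K20x GPUs / Phis)"
    if mem_gb > 192:
        # Jobs that exceed a standard 192GB Ocelote node use the 2TB large memory node.
        return "Ocelote large memory node (2TB, 48 cores)"
    return "Ocelote standard node (28 cores, 192GB)"

print(suggest_system(500))                  # -> Ocelote large memory node (2TB, 48 cores)
print(suggest_system(64, needs_gpu=True))   # -> El Gato (K20x GPUs / Phis)
```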

Compute System Details

| Name | Cluster (Gen 1) | SMP (UV) (Gen 1) | HTC (Gen 1) | El Gato | Ocelote (Gen 2) |
| --- | --- | --- | --- | --- | --- |
| Model | SGI Altix 8400 | SGI Altix UV 1000 | IBM System X iDataPlex dx360 M3 | IBM System X iDataPlex dx360 M4 | Lenovo NeXtScale nx360 M5 |
| Year Purchased | 2011 | 2011 | 2011 | 2013 | 2016 |
| Type | Distributed Memory | Shared Memory | Discrete nodes | Distributed Memory | Distributed and shared memory |
| Processors | Xeon Westmere-EP X5650, dual 6-core | Xeon Westmere-EX E7-8837, dual 8-core | Xeon Westmere-EP X5650, dual 6-core | Xeon Ivy Bridge E5-2650, dual 8-core | Xeon Haswell E5-2695, dual 14-core |
| Processor Speed (GHz) | 2.66 | 2.66 | 2.66 | 2.66 | 2.3 |
| Accelerators | | | | 140 Nvidia K20x, 40 Intel Phi | 15 Nvidia K80 (windfall only) |
| Node Count | 229 | 58 | 104 | 136 | 336 |
| Cores / Node | 12 | 16 | 12 | 16 | 28 |
| Total Cores | 2748 | 928 | 1248 | 2176 | 10044 |
| Memory / Node (GB) | 24 or 48 | 32 or 128 | 24, 48, or 96 | 64 or 256 | 192 (2TB & vSMP*) |
| Total Memory (TB) | 8.016 | 2.688 | 3.744 | 26 | 71.5 |
| /tmp, /localscratch | 150MB | 1.4TB | 1.7TB /localscratch | 1GB /tmp, 900GB /localscratch | ~840GB; /tmp is part of root filesystem |
| Max Performance (TFLOPS) | 29.24 | 18.9 | 13.28 | 46 | 382 |
| OS | RedHat 6.0 | RedHat 6.4 | RedHat 6.0 | RedHat 6.4 | CentOS 6.7 |
| Interconnect | QDR Infiniband within chassis | NUMAlink 5 within chassis | 1 GigE | FDR Infiniband | FDR Infiniband node-node; 10Gb Ethernet node-storage |
| Application Support | Parallel, MPI | Parallel, OpenMP | Serial, Single Core | MPI, Serial, GPU, Phi | Parallel, MPI, OpenMP, Serial |
 

* The new cluster (Ocelote) includes a large memory node with 2TB of RAM available on 48 cores.  Virtual SMP (vSMP) software implements large memory images.
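As a quick cross-check of the figures above, the short sketch below (illustrative names only, values copied from the table) recomputes total core counts as node count x cores per node. The products match the published totals for the Gen 1 systems and El Gato; Ocelote's published total (10044) is higher than 336 x 28, presumably because it also counts nodes beyond the standard compute nodes, such as the large memory node.

```python
# Values copied from the Compute System Details table; names are illustrative only.
systems = {
    "Cluster":  {"nodes": 229, "cores_per_node": 12},   # published total: 2748
    "SMP (UV)": {"nodes": 58,  "cores_per_node": 16},   # published total: 928
    "HTC":      {"nodes": 104, "cores_per_node": 12},   # published total: 1248
    "El Gato":  {"nodes": 136, "cores_per_node": 16},   # published total: 2176
    "Ocelote":  {"nodes": 336, "cores_per_node": 28},   # published total: 10044
}

for name, spec in systems.items():
    # Total cores across the standard nodes = node count x cores per node.
    print(f"{name}: {spec['nodes'] * spec['cores_per_node']} cores")
```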
