There are five systems available to campus researchers. Four are general-access systems purchased and managed with central resources and available to all campus researchers free of charge. The fifth is a large GPU/Phi cluster called El Gato, purchased with an NSF MRI grant by researchers in Astronomy and SISTA; 30% of this system, including its Nvidia GPUs and Intel Phis, is available for general campus research.

 

 

Compute System Details

Cluster (Gen 1)
  Model:                SGI Altix 8400
  Year Purchased:       2011
  Type:                 Distributed Memory
  Processors:           Xeon Westmere-EP X5650, dual 6-core
  Processor Speed:      2.66 GHz
  Accelerators:         none
  Node Count:           229
  Cores / Node:         12
  Total Cores:          2748
  Memory / Node:        24 or 48 GB
  Total Memory:         8.016 TB
  Local storage:        150 MB /tmp
  Peak Performance:     29.24 TFLOPS
  OS:                   RedHat 6.0
  Interconnect:         QDR Infiniband within chassis
  Application Support:  Parallel, MPI

SMP (UV) (Gen 1)
  Model:                SGI Altix UV 1000
  Year Purchased:       2011
  Type:                 Shared Memory
  Processors:           Xeon Westmere-EX E7-8837, dual 8-core
  Processor Speed:      2.66 GHz
  Accelerators:         none
  Node Count:           58
  Cores / Node:         16
  Total Cores:          928
  Memory / Node:        32 or 128 GB
  Total Memory:         2.688 TB
  Local storage:        1.4 TB /tmp
  Peak Performance:     18.9 TFLOPS
  OS:                   RedHat 6.4
  Interconnect:         NUMAlink 5 within chassis
  Application Support:  Parallel, OpenMP

HTC (Gen 1)
  Model:                IBM System X iDataPlex dx360 M3
  Year Purchased:       2011
  Type:                 Discrete nodes
  Processors:           Xeon Westmere-EP X5650, dual 6-core
  Processor Speed:      2.66 GHz
  Accelerators:         none
  Node Count:           104
  Cores / Node:         12
  Total Cores:          1248
  Memory / Node:        24, 48, or 96 GB
  Total Memory:         3.744 TB
  Local storage:        1.7 TB /localscratch; 1 GB /tmp
  Peak Performance:     13.28 TFLOPS
  OS:                   RedHat 6.0
  Interconnect:         1 GigE
  Application Support:  Serial, Single Core

El Gato
  Model:                IBM System X iDataPlex dx360 M4
  Year Purchased:       2013
  Type:                 Distributed Memory
  Processors:           Xeon Ivy Bridge E5-2650, dual 8-core
  Processor Speed:      2.66 GHz
  Accelerators:         140 Tesla K20x GPUs, 40 Intel Phi
  Node Count:           136
  Cores / Node:         16
  Total Cores:          2176
  Memory / Node:        64 or 256 GB
  Total Memory:         26 TB
  Local storage:        900 GB /localscratch
  Peak Performance:     46 TFLOPS
  OS:                   RedHat 6.4
  Interconnect:         FDR Infiniband
  Application Support:  MPI, Serial, GPU, Phi

Ocelote (Gen 2)
  Model:                Lenovo NeXtScale nx360 M5
  Year Purchased:       2016
  Type:                 Distributed and Shared Memory
  Processors:           Xeon Haswell E5-2695, dual 14-core
  Processor Speed:      2.3 GHz
  Accelerators:         none
  Node Count:           336
  Cores / Node:         28
  Total Cores:          10044
  Memory / Node:        192 GB (2 TB large-memory node with vSMP*)
  Total Memory:         71.5 TB
  Local storage:        ~840 GB; /tmp is part of the root filesystem
  Peak Performance:     301 TFLOPS
  OS:                   CentOS 6.7
  Interconnect:         FDR Infiniband node-to-node; 10 Gb Ethernet node-to-storage
  Application Support:  Parallel, MPI, OpenMP, Serial

 

* The new cluster (Ocelote) includes a large-memory node with 2 TB of RAM available across 48 cores. Virtual SMP (vSMP) software implements the large memory image.
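
The Application Support entries above refer to the programming models each system targets. As a rough illustration of the distributed-memory "Parallel, MPI" style used on the Cluster, El Gato, and Ocelote systems, the following is a minimal MPI hello-world sketch in C. The mpicc and mpirun commands in the comments are generic MPI conventions, not instructions specific to these systems, and submission through the batch scheduler is not covered here.

#include <mpi.h>
#include <stdio.h>

/* Minimal MPI "hello world": each process reports its rank and the node
 * it runs on.  Typical build and run (generic MPI, not site-specific):
 *   mpicc -o hello_mpi hello_mpi.c
 *   mpirun -np 12 ./hello_mpi
 */
int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes    */
    MPI_Get_processor_name(name, &name_len);  /* host (node) name             */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                           /* shut down the MPI runtime    */
    return 0;
}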

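For shared-memory work (the SMP/UV system's "Parallel, OpenMP" support, and the large memory image provided by vSMP on Ocelote), a job typically runs threads within a single address space rather than separate MPI processes. The sketch below is a minimal, hypothetical OpenMP example in C; the loop size is arbitrary and the -fopenmp flag is the GCC convention, shown only for illustration.

#include <omp.h>
#include <stdio.h>

/* Minimal OpenMP sketch: threads share one address space and combine
 * partial sums with a reduction.  Typical build (GCC convention):
 *   gcc -fopenmp -o sum_omp sum_omp.c
 */
int main(void)
{
    const long n = 100000000;   /* arbitrary problem size for illustration */
    double sum = 0.0;

    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += 1.0 / i;         /* each thread accumulates a private partial sum */

    printf("threads available: %d, harmonic sum: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}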