For questions, please open a UAService ticket and assign it to the Tools Team.

There are five systems available to campus researchers. Four are general-access systems, purchased and managed with central resources and available to all campus researchers free of charge. The fifth is a large GPU cluster purchased through an NSF MRI grant by researchers in Astronomy and SISTA; 30% of this system is available for general campus research.

Compute System Details

| Name | ICE (Gen 1) | UV (Gen 1) | HTC (Gen 1) | New Cluster (Gen 2) | El Gato |
| --- | --- | --- | --- | --- | --- |
| Model | SGI Altix 8400 | SGI Altix UV 1000 | IBM System X iDataPlex dx360 M3 | Lenovo NeXtScale nx360 M5 | IBM System X iDataPlex dx360 M4 |
| Year Purchased | 2011 | 2011 | 2011 | 2016 | 2013 |
| Type | Distributed memory | Shared memory | Discrete nodes | Distributed and shared memory | Distributed memory |
| Processors | Xeon Westmere-EP X5650, dual 6-core | Xeon Westmere-EX E7-8837, dual 8-core | Xeon Westmere-EP X5650, dual 6-core | Xeon Haswell E5-2695, dual 14-core | Xeon Ivy Bridge E5-2650, dual 8-core |
| Processor Speed (GHz) | 2.66 | 2.66 | 2.66 | 2.3 | 2.66 |
| Accelerators | none | none | none | none | 140 Tesla K20X, 40 Intel Phi |
| Node Count | 229 | 58 | 104 | 300 | 90 |
| Cores/Node | 12 | 16 | 12 | 28 | 16 |
| Total Cores | 2748 | 928 | 1248 | 8400 | 4352 |
| Memory/Node (GB) | 24 or 48 | 32 or 128 | 24, 48, or 96 | 192 (2 TB & vSMP*) | 64 or 256 |
| Total Memory (TB) | 8.016 | 2.688 | 3.744 | 57.6 | 26 |
| Peak Performance (TFLOPS) | 29.24 | 18.9 | 13.28 | 255 | 46 |
| OS | RedHat 6.0 | RedHat 6.4 | RedHat 6.0 | CentOS 6.7 | RedHat 6.4 |
| Interconnect | QDR InfiniBand within chassis | NUMAlink 5 within chassis | 1 GigE | FDR InfiniBand node-node, 10 Gb Ethernet node-storage | FDR InfiniBand |
| Kinds of Applications (examples below) | Parallel, MPI | Parallel, OpenMP | Serial, single-core | Parallel, MPI, OpenMP, serial | MPI, serial, GPU, Phi |

 

* The new cluster includes a large-memory node with 2 TB of RAM available across 48 cores; virtual SMP (vSMP) software implements the large memory image.
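To illustrate the "Kinds of Applications" row above: distributed-memory systems such as ICE run MPI jobs (one process per core, communicating over the interconnect), while shared-memory systems such as the UV run OpenMP jobs (threads sharing a single memory image). Below is a minimal sketch of each in C, assuming a standard MPI toolchain (mpicc/mpirun) and a compiler with OpenMP support; the file names and launch commands are illustrative, not site-specific instructions.

    /* mpi_hello.c -- message-passing style (e.g., ICE).
       Build: mpicc mpi_hello.c -o mpi_hello
       Run:   mpirun -np 12 ./mpi_hello    (one process per core) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

    /* omp_hello.c -- shared-memory threading style (e.g., the UV).
       Build: gcc -fopenmp omp_hello.c -o omp_hello
       Run:   OMP_NUM_THREADS=16 ./omp_hello   (threads share one address space) */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        #pragma omp parallel                    /* fork a team of threads */
        printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }

Serial applications (the HTC system's workload) are ordinary single-core programs; GPU and Phi applications on El Gato additionally offload work to the accelerators.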
