Overview
Batch Job Resource Request Examples: https://public.confluence.arizona.edu/display/UAHPC/Running+Jobs+with+SLURM#RunningJobswithSLURM-examplerequestsNodeTypes/ExampleResourceRequests
Free vs. Buy-In

The HPC resources at UArizona differ from those at many other universities in that a significant portion of the available resources is centrally funded. Each PI receives a standard monthly allocation of hours at no charge. Windfall usage is not charged against this allocation, which has proven very valuable for researchers with substantial compute requirements. Research groups can buy in (add resources such as processors, memory, storage, or additional compute nodes) to the base HPC systems as funding becomes available. Buy-in groups have the highest priority on the resources they add to the system; when those resources are not fully utilized by the buy-in group, they are made available to all users as windfall.
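As a minimal sketch of how this plays out in a batch script: the same job can either be charged to the group's standard allocation or run as windfall, depending on the scheduler directives. The partition names (standard, windfall) and the --account value below are assumptions about a typical SLURM setup here; check the Running Jobs with SLURM page linked above for the exact names on each cluster.

#!/bin/bash
# Hypothetical SLURM job charged against the PI group's monthly allocation.
#SBATCH --job-name=alloc_example
#SBATCH --partition=standard        # assumed partition name
#SBATCH --account=your_pi_group     # placeholder for your PI's group account
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

./my_program                        # placeholder executable

Under the same assumptions, the windfall version of this job would differ only in the partition line, e.g. #SBATCH --partition=windfall; it costs no allocation hours but runs at lower priority than allocated and buy-in work.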
Test Environment
HPC maintains a test/trial environment in addition to the primary clusters detailed below. It is intended for projects lasting six months or less that cannot be run on the production systems, for example because they require root access or have hardware or software requirements the production systems cannot meet. If you have a project in mind that we might be able to support, contact hpc-consult@list.arizona.edu.
Feature | Detail |
---|
Nodes | 16 |
CPU | Dual 6-core Xeon X5650 (Westmere-EP) |
Memory | 128 GB |
Disk | 10 TB (5 x 2 TB) |
Network | GbE and QDR InfiniBand |
Compute System Details
Note: During the quarterly maintenance cycle on April 27, 2022, the El Gato K20s and Ocelote K80s were removed because they are no longer supported by Nvidia.
Name | El Gato | Ocelote | Puma |
---|
Model | IBM System X iDataPlex dx360 M4 | Lenovo NeXtScale nx360 M5 | Penguin Altus XE2242 |
Year Purchased | 2013 | 2016 (P100 nodes added 2018) | 2020 |
Node Count | 131 | 192 | 236 CPU-only, 8 GPU, 2 high-memory |
Total System Memory (TB) | 26 | | 105 |
Processors | 2x Xeon E5-2650v2 8-core (Ivy Bridge) | 2x Xeon E5-2695v3 14-core (Haswell); 2x Xeon E5-2695v4 14-core (Broadwell); 4x Xeon E7-4850v2 12-core (Ivy Bridge, high-memory node) | 2x AMD EPYC 7642 48-core (Rome) |
Cores / Node (schedulable) | 16 | 28* | 94 |
Total Cores** | 2160 | | 19200 (includes high-memory and GPU node CPUs) |
Processor Speed (GHz) | 2.66 | 2.3 (2.4 for Broadwell CPUs) | 2.4 |
Memory / Node | 64 GB (CPU-only nodes), 256 GB (GPU nodes) | 192 GB (2 TB on the high-memory node) | 512 GB (3 TB on the high-memory nodes) |
Accelerators | None (K20x GPUs removed April 2022) | 46 NVIDIA P100 (16 GB) | 29 NVIDIA V100S |
/tmp | ~840 GB spinning disk; /tmp is part of the root filesystem | ~840 GB spinning disk; /tmp is part of the root filesystem | ~1640 GB; /tmp is part of the root filesystem |
HPL Rmax (TFlop/s) | 46 | 382 | |
OS | CentOS 6.10 | CentOS 7 | CentOS 7 |
Interconnect | FDR InfiniBand | FDR InfiniBand node-node; 10 Gb Ethernet node-storage | 100 Gb spine/leaf; 2x 25 Gb/s per compute node (node-node); 1x 25 Gb/s Ethernet to storage |

* Ocelote includes a large memory node with 2 TB of RAM available on 48 cores. More details are on the Large Memory Node page.
** Adjusted for the high-memory node.
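The cores-per-node and memory-per-node figures above are what the scheduler exposes to jobs. As a rough way to confirm them on a SLURM-based cluster (Puma, as of this writing), the standard SLURM commands below can be used; the node name in the second command is a placeholder, not a real hostname.

# List partitions, node counts, and node states on the current cluster.
sinfo

# Show schedulable CPUs and memory for a single node; replace the
# placeholder node name with one taken from the sinfo output.
scontrol show node r1u01n1 | grep -E "CPUTot|RealMemory"

On the PBS-based clusters, pbsnodes provides similar per-node information.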
Example Resource Requests

Node Type | ncpus | pcmem | Max mem | Sample Request Statement |
---|
El Gato |
Standard | 16 | 4gb | 62gb | #PBS -l select=1:ncpus=16:mem=62gb:pcmem=4gb |
GPU1 | 16 | 16gb | 250gb | #PBS -l select=1:ncpus=16:mem=250gb:ngpus=1:pcmem=16gb |
Ocelote |
Standard | 28 | 6gb | 168gb | #PBS -l select=1:ncpus=28:mem=168gb:pcmem=6gb |
GPU2,3 | 28 | 8gb | 224gb | #PBS -l select=1:ncpus=28:mem=224gb:np100s=1:os7=True |
High Memory | 48 | 42gb | 2016gb | #PBS -l select=1:ncpus=48:mem=2016gb:pcmem=42gb |
Puma |
Standard | 94 | 5gb | 470gb | #SBATCH --nodes=1 #SBATCH --ntasks=94 #SBATCH --mem=470gb |
GPU4 | 94 | 5gb | 470gb | #SBATCH --nodes=1 #SBATCH --ntasks=94 #SBATCH --mem=470gb #SBATCH --gres=gpu:1 |
High Memory | 94 | 32gb | 3008gb | #SBATCH --nodes=1 #SBATCH --ntasks=94 #SBATCH --mem=3008gb
1 Two GPUs may be requested on El Gato with ngpus=2
2 There is a single node available on Ocelote with two GPUs. To request it, use np100s=2
3 Set os7=False to request a CentOS 6 GPU node
4 Up to four GPUs may be requested on Puma with --gres=gpu:1, --gres=gpu:2, --gres=gpu:3, or --gres=gpu:4
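As a minimal sketch of how the Puma directives above combine into a complete, submittable job script (each #SBATCH directive goes on its own line; the job name, partition, account, module, and program below are placeholders or assumptions, not prescribed values):

#!/bin/bash
#SBATCH --job-name=gpu_example      # placeholder job name
#SBATCH --nodes=1
#SBATCH --ntasks=94
#SBATCH --mem=470gb
#SBATCH --gres=gpu:1                # one GPU; up to four per footnote 4
#SBATCH --time=02:00:00
#SBATCH --partition=standard        # assumed partition name
#SBATCH --account=your_pi_group     # placeholder for your PI's group account

module load cuda11                  # hypothetical module name
./my_gpu_program                    # placeholder executable

Submit the script with sbatch and monitor it with squeue -u $USER; both are standard SLURM commands.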