
Overview

The University of Arizona's High Performance Computing (HPC) clusters consist of servers (compute nodes) and associated high-performance storage. Additional nodes meet specific needs such as large memory or GPUs. All UA research faculty can sign up for a free monthly allocation (following these directions). For researchers who need compute resources beyond the free standard allocation, and who have funding available, we encourage 'buy-in' of additional compute nodes.

Benefits to Buy-In

Dedicated Research Compute.  Research groups can 'buy in' (add resources such as processors, memory, etc.) to the base HPC systems as funding becomes available. Researchers receive 100% of the CPU*hour time their purchase creates as a monthly high-priority allocation. This time runs in the highest-priority queue on the HPC systems.

Quality Environment. The Buy-In option allows research groups to take advantage of the central machine room space that is designed for maintaining high performance computing resources. The UITS Research Technologies group physically maintains the purchased nodes, applies updates and patches, monitors the systems for performance and security, and manages software. Additionally, Research Technologies staff is available for research support and questions through hpc-consult@list.arizona.edu. In short, essentially all costs associated with maintaining compute resources are covered by UITS rather than individual researchers.

Flexible Capacity. Buy-in research group members also benefit from their resources being integrated into a larger computing resource. This means buy-in resources can be used together with the free standard allocation to address computational projects that would be beyond the capacity of a group running an independent system alone.

Shared Resource. The University research computing community as a whole benefits from buy-in expansions to the HPC systems. As mentioned above, researchers who buy in receive 100% of the allocation of time for their purchase. However, if the buy-in resources are not fully utilized, they are made available as windfall resources. This helps ensure full use of all HPC resources and can be used to justify future purchases of computing resources.

Cost Competitiveness. Lower costs in grant proposals (i.e. hardware only, no operations costs) and evidence of campus cost-sharing provide an advantage during funding agency review.

Pricing. For the year following the award, the UA HPC request-for-proposal (RFP) pricing is locked in and is often considerably less than the market price.

Buy-In Details

Puma 2020

There are several buy-in options for Puma:

  1. CPU-only node: Penguin Computing Altus XE2242 CPU chassis
    1. There are 4 CPU nodes in an Altus XE2242 chassis
    2. Technical specs for 1 node of the 4 in an Altus XE2242 chassis
      1. 96 cores: Dual-socket AMD EPYC 7642 CPU (2x48 cores, 2.3 GHz, 225W)
      2. 512GB RAM, DDR4-3200MHz REG, ECC, 2Rx4 (16 x 32GB)
      3. 2TB SSD local hard drive, 2.5", NVMe, 4 Lane, 1 DWPD, 3D TLC
  2. GPU node: Penguin Computing Altus XE2214GT GPU chassis
    1. Each GPU chassis contains 4 GPUs
    2. Technical specs for the full XE2214GT chassis
      1. 96 cores: Dual-socket AMD EPYC 7642 CPU (2x48 cores, 2.3 GHz, 225W)
      2. 4x NVIDIA Tesla V100S PCIe, 32GB video memory, 5120 CUDA cores, 640 Tensor cores, 250W
      3. 512GB RAM, DDR4-3200MHz REG, ECC, 2Rx4 (16 x 32GB)
      4. 2TB SSD local hard drive, NVMe, 4 Lane, 1 DWPD, 3D TLC
  3. High-memory node: Penguin Computing Altus XE1212 high-memory chassis
    1. Technical specs
      1. 96 cores: Dual-socket AMD EPYC 7642 CPU (2x48 cores, 2.3 GHz, 225W)
      2. 3072GB RAM, DDR4-2933MHz LR, ECC, 4R (24 x 128GB)
      3. 2TB SSD local hard drive, NVMe, 4 Lane, 1 DWPD, 3D TLC

Cost and Allocations

Option Number            | CPU cores | V100S GPUs | RAM (GB) | Monthly High-priority Allocation (CPU*hours) | Cost

CPU-Only Options
1A - One CPU node        | 96        | -          | 512      | 70,080                                       | $8,037.50
1B - Two CPU nodes       | 192       | -          | 512      | 140,160                                      | $16,075.00
1C - Three CPU nodes     | 288       | -          | 512      | 210,240                                      | $24,112.50
1D - Full Altus XE2242   | 384       | -          | 512      | 280,320                                      | $32,150.00

GPU Node Options
2A - 1/4 Altus XE2214GT  | 24        | 1          | 512      | 17,520                                       | $8,523.75
2B - 2/4 Altus XE2214GT  | 48        | 2          | 512      | 35,040                                       | $17,047.50
2C - 3/4 Altus XE2214GT  | 72        | 3          | 512      | 52,560                                       | $25,571.25
2D - Full Altus XE2214GT | 96        | 4          | 512      | 70,080                                       | $34,095.00

High Memory Node
3 - Full Altus XE1212    | 96        | -          | 3072     | 70,080                                       | $42,230.00
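Because buy-in allocations last the 5-year lifetime of the system (see the buy-in policies below), the table's figures imply an effective rate per CPU*hour. A minimal sketch, illustrative only, with option names and prices copied from the table above:

```python
# Effective cost per CPU*hour over Puma's stated lifetime (Aug 2020 - Aug 2025).
# Monthly allocations and prices come from the cost table; this is an
# illustrative estimate, not an official rate.

options = {
    "1A - One CPU node":      {"monthly_cpu_hours": 70_080,  "cost": 8_037.50},
    "1D - Full Altus XE2242": {"monthly_cpu_hours": 280_320, "cost": 32_150.00},
    "3 - Full Altus XE1212":  {"monthly_cpu_hours": 70_080,  "cost": 42_230.00},
}

LIFETIME_MONTHS = 60  # 5 years: August 2020 through August 2025

for name, opt in options.items():
    lifetime_hours = opt["monthly_cpu_hours"] * LIFETIME_MONTHS
    print(f"{name}: ${opt['cost'] / lifetime_hours:.4f} per CPU*hour")
```

Note that the high-memory node costs more per CPU*hour than the plain CPU node for the same core count; the premium buys the 3 TB of RAM rather than additional compute time.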

Buy-in Policies

  • The University of Arizona can only purchase whole chassis units from Penguin Computing: 4 CPU nodes (option 1D), 1 GPU node with 4 GPUs (option 2D), or 1 high-memory node (option 3). Research Computing will work to match partial buy-in requests to fill out full chassis.
  • Monthly high-priority time is calculated as: (number of CPU cores * 24 hours/day * 365 days/year) / 12 months
  • Purchasing GPUs raises the limit on the number of GPUs the PI's group can use at any one time
  • Buy-in high-priority allocations last the lifetime of the system. Puma was purchased in August 2020 and will reach end-of-life in August 2025.
  • The HPC buy-in program is not designed to replace or compete with the very large-scale resources at national NSF and DOE facilities, e.g. XSEDE and the Open Science Grid. National resources are available at no financial cost to most US-based researchers through competitive proposal processes. Please contact hpc-consult@list.arizona.edu if you are interested in applying for these resources.
  • The HPC buy-in program is designed to meet the needs of researchers with medium-scale HPC requirements who want guaranteed, consistent access to compute resources.
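The allocation formula above works out to 730 CPU*hours per core per month, and can be sketched as a small helper whose results match the allocation column in the cost table:

```python
def monthly_high_priority_hours(cpu_cores: int) -> int:
    """Monthly high-priority allocation in CPU*hours, per the policy above:
    (cores * 24 hours/day * 365 days/year) / 12 months = cores * 730."""
    return cpu_cores * 24 * 365 // 12

# Values match the cost table's allocation column:
print(monthly_high_priority_hours(96))   # one CPU node -> 70080
print(monthly_high_priority_hours(384))  # full Altus XE2242 chassis -> 280320
print(monthly_high_priority_hours(24))   # 1/4 Altus XE2214GT -> 17520
```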

High-Priority Allocation Policies

  • Standard and high-priority jobs preempt windfall jobs when necessary.
  • High-priority jobs run on both the buy-in nodes and the centrally funded nodes. This is advantageous if there is a short-term project deadline.
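As a concrete illustration, a buy-in group's high-priority job might be submitted with a Slurm batch script along these lines. The partition, QOS, and account names below are assumptions for illustration, not confirmed UA HPC settings; check the current HPC documentation for the exact names used on Puma:

```shell
#!/bin/bash
# Illustrative Slurm batch header for a buy-in group's high-priority job.
# Partition/QOS/account names are assumptions -- confirm with UA HPC docs.
#SBATCH --job-name=buyin-example
#SBATCH --partition=high_priority    # assumed high-priority partition name
#SBATCH --account=<your_group>       # your PI's group account
#SBATCH --qos=user_qos_<your_group>  # assumed buy-in QOS naming
#SBATCH --ntasks=96                  # e.g. one full CPU node's worth of cores
#SBATCH --time=24:00:00

./my_application
```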


