Overview

These policies govern access to, and use of, resources on all HPC systems managed by the University of Arizona's University Information Technology Services (UITS).

UA HPC systems are funded by the UA Office of Research, Discovery and Innovation (ORD, http://research.arizona.edu/) and by the Chief Information Officer and University Information Technology Services (CIO/UITS).

All UA HPC systems are available to all UA researchers (faculty, staff, undergraduate and graduate students, postdocs, and DCCs) at no cost. Presently, each group is allocated 36,000 CPU-hours per month at standard priority.
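
To put that figure in perspective, the short Python sketch below converts the monthly allocation into equivalent continuously running cores (the 30-day month and the 28-core node size are illustrative assumptions, not system specifications):

    # Convert the standard 36,000 CPU-hour monthly allocation into the
    # number of cores a group could keep busy around the clock.
    allocation_cpu_hours = 36_000        # standard monthly group allocation
    hours_per_month = 30 * 24            # 720 hours in a 30-day month

    continuous_cores = allocation_cpu_hours / hours_per_month
    print(f"{continuous_cores:.0f} cores running around the clock")  # 50 cores

    # Equivalently, a hypothetical 28-core node consumes 28 CPU-hours per
    # wallclock hour, so the allocation covers about 1,285 node-hours.
    print(allocation_cpu_hours / 28)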

Intended Audience

The intended audience for these policy statements is current and prospective HPC users who need to understand accounts and the allocation of HPC/HTC resources. This information is provided both as a statement of policy and to educate users about these systems.

Acceptable Use

High Performance Computing (HPC) facility users are responsible for complying with all University policies.

http://policy.arizona.edu/information-technology

The supercomputers represent a unique resource for the campus community. These computers have special characteristics that are not found, or are of limited availability, on other central computers, including parallel processing, large memory, and a Linux operating system.  The allocation of High Performance Computing (HPC) resources requires close supervision by those charged with management of these resources.  

The login nodes are designated for small, short interactive tasks and for submitting batch jobs; they are not for running compute jobs.

UA HPC does not provide support for any type of controlled data. No controlled data (HIPAA, EAR, FERPA, PII, CUI, ITAR, etc.) may be analyzed or stored on any HPC storage.


Arizona Sales Tax Exemption on Research Equipment

Equipment purchased exclusively for research purposes is exempt from Arizona state sales tax; see the UA FSO statement on the Research Equipment Tax Exemption. However, there are exceptions to this exemption that affect central, shared-use research facilities. These exceptions include "research in social sciences or psychology"; see ARS 42-5061, section B.14.

To provide research computing resources to researchers in these exception areas, some of the research computing resources have been purchased with Arizona taxes paid. As a result, resources are available to all campus researchers, with the caveat that researchers in the social sciences, psychology, and instructional projects are restricted to the resources purchased with taxes paid.

Please contact hpc-consult@list.arizona.edu to learn about the resources that are available for social sciences, psychology and instructional purposes.

Access for Research and Limited Access for Instruction

As described in the 'Sales Tax Exemption' section above, most of the HPC systems are limited to research applications as defined by the Arizona Legislature in ARS 42-5061, section B.14. All users are expected to use these resources accordingly and to use the U-System or other computing systems for non-research purposes.

Acknowledgements

We ask that you acknowledge HPC support on any published work supported by HPC facilities or staff.

Acknowledgement Statement

Please acknowledge resources provided by HPC in the following manner: 
"An allocation of computer time from the UA Research Computing High Performance Computing (HPC) at the University of Arizona is gratefully acknowledged".

Send Submissions To:

Please submit copies of dissertations, reports, preprints, and reprints in which HPC is acknowledged to: hpc-consult@list.arizona.edu

Buy-In

The University of Arizona's High Performance Computing (HPC) clusters consist largely of standard compute nodes and associated storage, with additional nodes that meet specific needs such as large memory or GPUs. Researchers who need compute resources beyond the standard allocation, and who have funding available, are encouraged to 'buy in' to additional compute nodes.

More details are here.

Governance

Research Computing Governance Committee, RCGC

The Research Computing Governance Committee (RCGC) is a cross-departmental group of researchers and IT professionals at the University of Arizona with oversight of central research computing resources.

The committee is charged with designing and implementing policies and procedures, overseeing operations, and promoting and making recommendations for the centrally funded and administered research computing resources. The policies and procedures will be monitored and updated by this steering committee and by related task forces it creates. The resources will be administered, maintained, and supported by UITS Research Computing, Systems, and Operations.

For more information on the RCGC and its subcommittees, please refer to its website (http://rcgc.arizona.edu/).

Policies Sub-committee

This sub-committee of the RCGC reviews and recommends policy changes. 


Maintenance

Overview

Most maintenance is performed during regular hours with no interruption to service. System-wide maintenance is usually planned in advance, scheduled for Wednesdays from 8 AM to 5 PM with at least 10 days' notice, and planned to occur four times per year.

The notification will describe the nature and extent (partial or full) of the interruptions of HPC services. 

Batch queues will also be modified prior to scheduled downtimes to hold jobs that request more wallclock time than remains before the shutdown.
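
The hold rule can be illustrated with a short Python sketch (this is an illustration of the policy, not the scheduler's actual implementation; the dates are hypothetical):

    # A job is held if its requested wallclock time would run past the
    # start of the scheduled downtime.
    from datetime import datetime, timedelta

    def fits_before_downtime(requested_walltime, downtime_start, now):
        """Return True if the job could finish before maintenance begins."""
        return now + requested_walltime <= downtime_start

    downtime = datetime(2024, 6, 5, 8, 0)      # a hypothetical Wednesday, 8 AM
    submitted = datetime(2024, 6, 4, 10, 0)    # the Tuesday before, 10 AM

    print(fits_before_downtime(timedelta(hours=12), downtime, submitted))  # True: job can run
    print(fits_before_downtime(timedelta(hours=48), downtime, submitted))  # False: job is held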

Emergency Maintenance

Unavoidable (emergency) downtime may occur at almost any time. Such events are rare, and great effort is made to avoid them. However, when emergency maintenance is needed, the UITS unit responsible for the affected item will provide as much notice to users as possible and work to resolve the fault as quickly as possible.

Any emergency outages will be announced via email through the hpc-announce@list.arizona.edu mailing list. 
 

Software Policies

Commercial / Fee-based Software

The University of Arizona Research Computing facility has many commercial and freeware packages installed on its supercomputers. Our approach to acquiring additional software depends upon its cost, licensing restrictions, and user interest.

  1. Single User Interest  The license for the software is purchased by the user and their department or sponsor. This software is best installed by the user. There are two main options: the first, and easier, is to install the software in /home or /extra using the example procedure (see the sketch after this list). The second is to use the "unsupported" environment; the advantage is that software built there can be shared with other users. This environment is created by sending a request to HPC Consult, who will set up an "unsupported" group in which you can build software and add users.

  2. Group Interest  If a package is of interest to a group of several users, the best initial approach is for one user to act as the primary sponsor and arrange to split the procurement/licensing costs among the group. We can install the software and manage user access according to requests from the group.
  3. Broad Interest  The High Performance Computing team will consider acquiring and supporting software packages that have broad interest among our users. Full facility support will depend on the cost of the package and our ability to comply with any restrictive licensing conditions.
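
As a concrete illustration of the self-install option in item 1, here is a minimal Python sketch. It assumes the software in question happens to be a Python package, with the hypothetical name "mytool"; a user-level install lands under your /home quota and needs no administrator privileges:

    # Install a (hypothetical) package into the per-user site directory,
    # which lives under /home, so no root access is required.
    import subprocess
    import sys

    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--user", "mytool"],
        check=True,
    )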

Academic / Free Software

There is a plethora of software generally available for scientific and research use. We will install such software if it meets the following requirements:

  1. Compatible with our module environment.  Some software is not written with clusters in mind and tries to install into system directories, or needs a custom environment on every compute node.
  2. Generally useful.  Some software has to be configured to the specific compute environment of the user.  You are encouraged to use our "unsupported" environment to install your own.
  3. Public license.  We do not install software if that would be a violation of its licensing.
  4. Reasonably well written.  Some software takes days of effort and still does not work right. We have limited resources and reserve the right to "give up". Sometimes software is written for workstations and does not adapt well to a cluster environment.

Federal Regulations

By policy, it is prohibited to use any of the facility's resources in any manner that violates the US Export Administration Regulations (EAR) or the International Traffic in Arms Regulations (ITAR). It is relevant in this regard to be aware that the facility employs analysts who are foreign nonresident nationals and who have root-access privileges to all files and data. Specifically, you must agree not to use any software or data on facility systems that are restricted under EAR and/or ITAR.