The University of Arizona
    For questions, please open a UAService ticket and assign to the Tools Team.


Where Should I Store My Data?

  1. Data undergoing active analyses should be stored in HPC's local High Performance Storage (Tier 1).

  2. Large amounts of data that could be considered "warm" can be stored at reasonable rates on our Rental Storage.

  3. Research data not requiring immediate access should be stored in General Research Data Storage (Tier 2).
    For example:
    1. Large datasets where only subsets are actively being analyzed
    2. Results no longer requiring immediate access
    3. Backups (highly encouraged!)

  4. Data that are no longer involved in ongoing analyses but require long-term archiving should be stored in Long Term Research Storage (Tier 3).

Storage Option Summary

Primary Storage
  Purpose: Research data. Supports compute. Directly attached to HPC.
  Capacity: /home 50GB, /groups 500GB, /xdisk 20TB
  Cost: Free
  Restricted data? No
  Access: Directly mounted to HPC. Also uses Globus and DTNs.
  Duration: Long term. Aligns with HPC purchase cycle.
  Backup: No

Rental Storage
  Purpose: Research data. Large datasets. Typically for staging to HPC.
  Capacity: Rented per terabyte per year
  Cost: $47.35 per TB per year
  Restricted data? No
  Access: Uses Globus and DTNs. Copy data to Primary.
  Duration: Long term. Aligns with HPC purchase cycle.
  Backup: No

Tier 2
  Purpose: Typically research data. Unused data is archived.
  Capacity: 15GB to TBs
  Cost: Tier-based system. The first 1TB of active data and all archival data are free. Active data over 1TB is paid.
  Restricted data? No
  Access: Uses Globus and the AWS command line interface.
  Duration: Typically long term, since use of Glacier is free and slow.
  Backup: Archival

ReData
  Purpose: Research data. Managed by UA Libraries.
  Capacity: Quota system
  Cost: Free
  Restricted data? No
  Access: Log in, fill out the fields, then upload.
  Duration: Longer than 10 years
  Backup: No

Secure data enclave
  Purpose: Restricted data (HIPAA, ePHI)
  Capacity: Individual requests
  Cost: Free upon qualification
  Restricted data? Yes
  Access: HIPAA training required, followed by a request process.
  Duration: Long term
  Backup: No

Box
  Purpose: General data
  Capacity: 50GB
  Cost: Free
  Restricted data? No
  Access: Browser
  Duration: Long term
  Backup: No

Google Drive
  Purpose: General data
  Capacity: 15GB
  Cost: Free. Google rates apply for amounts over 15GB.
  Restricted data? No
  Access: Browser
  Duration: Unlimited usage expires March 1, 2023
  Backup: No

HPC High Performance Storage (Tier 1)

Data stored on HPC are not backed up! All data on this storage should be backed up elsewhere by UA researchers, preferably in three places and two formats. 

We strongly recommend that you do some regular housekeeping of your allocated space. Millions of files are hard to keep organized and even more difficult to migrate. Archiving or using a tool like tar will help keep our disk arrays efficient and potentially free up more space for you to use.
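For instance, this housekeeping step can be as simple as the following sketch (the file and directory names are hypothetical):

```shell
# Hypothetical example: bundle a finished results directory into one
# compressed archive, then list its contents to verify it before cleanup.
mkdir -p results_2020
touch results_2020/run1.out results_2020/run2.out

# Create a single compressed archive from the directory
tar -czf results_2020.tar.gz results_2020

# Verify the archive is readable before deleting the original files
tar -tzf results_2020.tar.gz
```

Once the listing confirms the contents, the original directory can be removed with rm -r, turning millions of small files into one file that is far easier to migrate or back up.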


Every user has access to individual and group storage on the system where they can store data for active analyses as summarized below:

  /home/uxx/netid: An individual storage allocation provided for every HPC user. 50GB.
  /groups/pi_netid: A communal storage allocation provided for every research group. 500GB.
  /xdisk/pi_netid: Temporary communal storage provided for every group. 200GB-20TB, for up to 300 days.
  /tmp: Local storage available on individual compute nodes (800GB to 1.4TB, depending on the node). Only accessible for the duration of a job's run.

Checking your Storage Quota and Usage

Command Line

To check your storage usage, use the command uquota. For example:

(puma) [netid@junonia ~]$ uquota
                                            used  soft limit  hard limit
/groups/pi_netid                            6.6G      500.0G      500.0G
/home                                      37.1G       50.0G       50.0G
/xdisk/pi_netid                            12.9G        9.8T        9.8T

Web Portal

You can also check your storage allocation through our online user portal by navigating to the Storage tab and clicking Check Disk Quotas.



Important /xdisk Basics

  • xdisks are temporary storage allocations available to each research group. No storage on HPC is designed to archive or persist data.
  • Only faculty members (PIs) may request, alter, or delete an allocation from the command line. Members of their research group may be given management rights allowing them to manage their xdisk through our web portal.
  • The maximum lifespan of an xdisk is 300 days. Allocations cannot be extended past the 300 day maximum.
  • Groups may only have one active xdisk at a time.
  • When an xdisk expires, the contents are deleted. 
  • Once an xdisk is deleted or expires, a new one may be immediately requested.
  • xdisks are not backed up.
  • Users must remove their data from HPC before their xdisk expires. We strongly recommend starting the backup process early.   

xdisk allocations are not backed up. It is the user's responsibility to save files stored in xdisk to alternate storage locations for backup and archive purposes. See Tier 2 Storage for more information on options for backing up your data. 

What is xdisk?

xdisk is a temporary storage allocation available to all PIs, offering up to 20 TB of usable space per group for up to 300 days. A PI can request an allocation either via the command line or through our web portal (no paperwork necessary!). Once an xdisk allocation is created, it is immediately available for use. 

Because xdisk allocations are temporary, they expire as soon as their time limit is reached. Warning emails will be sent to every group member beginning one week before the expiration. It is the group's responsibility to extend xdisk allocations (up to the 300 day limit) or copy files to an alternate storage location prior to the expiration date. Once an xdisk allocation expires, everything in it is permanently deleted. PIs may request a new xdisk allocation immediately after their previous one has expired. 

Requesting an xdisk Allocation

Faculty members (PIs) or their designated xdisk delegates are able to request, alter, extend, and delete an xdisk allocation from the web portal under the Storage tab.

To request a new allocation, select Manage XDISK, fill in the form, and submit.

Modifying an Existing Allocation

Faculty members (PIs) or their designated storage delegates are able to modify existing xdisk allocations through the web portal:

To do this, navigate to the Storage tab and select Manage XDISK.

In the web form, enter the new size and time allocations needed for your allocation. The maximum size (in GB) allowed is 20000 and the maximum time limit is 300 days.

To save the changes, click Ok.

xdisk CLI

xdisk is a locally written utility for PIs to create, delete, resize, and extend (via the expire command) xdisk allocations. This functionality is usable by PIs only.

xdisk Functions and Command Examples

Display xdisk help

Commands given in brackets are optional. If left blank, you will get system defaults.

$ xdisk -c help
/usr/bin/xdisk -c [query|create|delete|expire|size] [-d days] [-m size]

View current information

Check current allocation size, location, and expiration date.

$ xdisk -c query
XDISK on host:
Current xdisk allocation for <netid>:
Disk location: /xdisk/<netid>
Allocated size: 200GB
Creation date: 3/10/2020 Expiration date: 6/8/2020
Max days: 45    Max size: 1000GB

Create an xdisk

Grants an xdisk allocation. 

Max Size: 20000 GB

Max Days: 300

$ xdisk -c create -m [size in gb] -d [days]

$ xdisk -c create -m 300 -d 30
Your create request of 300 GB for 30 days was successful.
Your space is in /xdisk/<netid>

Extend the xdisk time

Prior to its expiration, if your xdisk's time is under 300 days, you may increase it until the 300 day limit is reached.

$ xdisk -c expire -d [days]

$ xdisk -c expire -d 15
Your extension of 15 days was successfully processed

Resize an xdisk allocation

You may resize your allocation by specifying the increase or decrease in GB.

To reduce the size, prefix the value with a negative sign, "-".

$ xdisk -c size -m [size in gb]

$ # Assuming an initial xdisk allocation size of 200 GB
$ xdisk -c size -m 200
XDISK on host:
Your resize to 400GB was successful
$ xdisk -c size -m -100
XDISK on host:
Your resize to 300GB was successful

Delete an xdisk allocation

Permanently deletes your current xdisk allocation. Be sure to remove any important data before deleting.

$ xdisk -c delete
Your delete request has been processed

Delegating xdisk Management Rights

Adding a user as a delegate allows them to manage group storage on their PI's behalf through the user portal. Delegates still cannot manage storage via the CLI.

PIs can delegate xdisk management rights. To add a group member as a delegate, the PI needs to click the Manage Delegates link on the home page of the portal.

Once a group member has been added, they can manage their group's xdisk through the web portal. To do this, they should log into our web portal, click the Switch User link, and enter their PI's NetID. They can then manage their group's space under the Storage tab.

  Q. Who owns our group's /xdisk?

A group's PI owns the /xdisk allocation. By default, your PI has exclusive read/write/execute privileges for the root folder /xdisk/PI.

  Q. Can a PI make their /xdisk accessible to their whole group?

By default, members of a research group only have write access to their subdirectories under /xdisk/PI. If they so choose, a PI may allow their group members to write directly to /xdisk/PI by running one of the following commands:


$ chmod g+r /xdisk/PI

Group members can see the contents of the directory with ls, but may not access it or make modifications (e.g. add, delete, or edit files/directories)

$ chmod g+rx /xdisk/PI

Group members can access the directory and see files but cannot make modifications (e.g. add, delete, or edit files/directories)

$ chmod g+rwx /xdisk/PI

Group members are granted full read/write/execute privileges.
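To confirm the result, the mode string reported by ls -ld can be checked. This sketch uses a throwaway directory in /tmp rather than a real /xdisk path:

```shell
# Create a stand-in for /xdisk/PI and remove all group/other access
mkdir -p /tmp/xdisk_pi_demo
chmod 700 /tmp/xdisk_pi_demo

# Grant the group read and execute (list and enter, but not modify)
chmod g+rx /tmp/xdisk_pi_demo

# The mode string should now begin with drwxr-x---
ls -ld /tmp/xdisk_pi_demo
```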
  Q. Where can group members store their files?

When an /xdisk allocation is created, a subdirectory is automatically generated for and owned by each individual group member. If the directory /xdisk/PI does not have group privileges, group members may not access the root directory, but may still reach their individual spaces by running:

$ cd /xdisk/PI/netid

If a user joins the group after the xdisk was created and /xdisk/PI is not writeable for group members, contact our consultants and they can create a directory for the new member.

  Q. A group member's directory isn't in our /xdisk, how can we add it?

Typically when an /xdisk allocation is created, it will automatically generate a directory for each group member. In the unlikely event that it doesn't or, more commonly, a group member is added after the allocation has been created, either the PI may manually create a directory or, if the root directory is group writable, the user may create one themselves. If the group's /xdisk does not have full group permissions, the PI may run:

$ mkdir /xdisk/PI/netid

and then reach out to our HPC consultants to request an ownership change.

  Q. Do we need to request an individual allocation within the /xdisk for each user in our group?

No, the full /xdisk allocation is available for every member of the group. It's up to group members to communicate with one another on how they want to utilize the space.

  Q. Why am I getting xdisk emails?
xdisk is a temporary storage space available to your research group. When it's close to its expiration date, notifications will be sent to all members of your group. For detailed information on xdisk allocations, see:  Storage
  Q. Why am I getting "/xdisk allocations can only be authorized by principal investigators"?
xdisks are managed by your group's PI by default. This means if you want to request an xdisk or modify an existing allocation (e.g., extending the time limit or increasing the storage quota), you will need to consult your PI. Your PI may either perform these actions directly or, if they want to delegate xdisk management to a group member, they may do so by following the instructions under  Delegating xdisk Management Rights.
  Q. How can we modify our xdisk allocation?
To modify your allocation's time limit or storage quota, your PI can either do so through the Web Portal under the Storage tab, or via the command line. If your PI would like to delegate management rights to a group member, they may follow the instructions under  Delegating xdisk Management Rights. Once a group member has received management rights, they may manage the allocation through our web portal.
  Q. Why am I getting "xdisk: command not found"?

If you're getting errors using xdisk commands in a terminal session, check that you are on a login node. If you are on the bastion host (hostname: gatekeeper), are in an interactive session, or are on the filexfer node, you won't be able to check or modify your xdisk. When you are on a login node, your terminal prompt should show the hostname junonia or wentletrap. You can also check your hostname using the command:

$ hostname
  Q. Why am I getting errors when trying to extend my allocation?

If you're trying to extend your group's allocation but are seeing something like:

(puma) [netid@junonia ~]$ xdisk -c expire -d 1
invalid request_days: 1

for every value you enter, your xdisk has likely reached its maximum time limit. To check, go to the web portal's Storage tab, click Manage XDISK, and look at the box next to Duration. If you see 300, your allocation cannot be extended further.

If your allocation is at its limit, you will need to back up your data to external storage (e.g., a local machine, lab server, or cloud service). Once your xdisk is removed (either by expiring or through manual deletion), you can immediately create a new allocation and restore your data. Detailed xdisk information can be found on our Storage page. You may also want to look at our page on Transferring Data.

  Q. Can we keep our xdisk allocation for more than 300 days?

No, once an xdisk has reached its time limit it will expire. It's a good idea to start preparing for this early by making frequent backups and paying attention to xdisk expiration emails. 

  Q. What happens when our xdisk allocation expires?

Once an xdisk expires, all the associated data are deleted. Deleted data are non-retrievable since HPC is not backed up. It's advised to keep frequent backups of your data on different platforms, for example a local hard drive or a cloud-based service like Google Drive, or (even better) both!

Rental Storage

This service enables researchers to rent storage on an on-site data array located in the UITS research data center and networked near our HPC systems to enable efficient data transfer to/from the HPC.

Funded by RII, this new service is immediately available to any faculty researcher and can be accessed through the HPC user portal.

Details on the service:

  • The first-year rate is $94.50 per TB, and RII will provide matching funds for first-year allocations to make the actual first-year cost to researchers $47.35 per TB. These matching funds will be applied automatically.
  • The ongoing rate after year one is $47.35 per TB per year.
  • Researchers must provide a KFS account for this service, which will be charged at the end of the academic year (June 2023).
  • Your space will be mounted as /rental/netid on the data transfer nodes.
  • You can use Globus to move data to and from locations outside the data center.
  • You can use scp, sftp, or Globus to move data to and from HPC resources.
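As a sketch of the scp route (the NetID, archive name, and transfer-node hostname below are placeholders; confirm the current hostname on the rental storage page before transferring):

```shell
# Placeholder values; substitute your own NetID and the current
# data transfer node hostname (filexfer.hpc.arizona.edu is assumed here).
NETID="netid"
DTN="filexfer.hpc.arizona.edu"

# Build the upload command; you would run it from the machine holding the data.
CMD="scp results.tar.gz ${NETID}@${DTN}:/rental/${NETID}/"
echo "$CMD"

# The reverse direction pulls data back out of rental storage:
#   scp ${NETID}@${DTN}:/rental/${NETID}/results.tar.gz .
```

sftp and Globus follow the same pattern: target the data transfer nodes, not the HPC login nodes, since /rental is only mounted there.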

More information, especially on getting started, can be found on this page.

This service is not intended for controlled or regulated research data, such as HIPAA/ePHI, ITAR, or CUI.

Rental storage is not backed up. It is the user's responsibility to save files stored in rental storage to alternate storage locations for backup and archive purposes. See Tier 2 Storage for more information on options for backing up your data. 

Once you have a rental storage allocation, it is accessible from the file transfer nodes and not from the HPC environment. If you get "no such file or directory", ensure you are connected to a file transfer node.

General Research Data Storage (Tier 2)

Google Drive Storage Notice

Free unlimited Google Drive storage is going away: usage greater than 15GB will no longer be free, and UITS would like to migrate users off by April 2023. As a result, we will be transitioning to Amazon Web Services (AWS) S3 as a Tier 2 option. We will continue to support researchers using Google Drive during this transition.

Research Technologies, in partnership with UITS, is implementing an Amazon Web Services (AWS) S3 rental storage solution. This service provides researchers with an S3 account managed by AWS Intelligent-Tiering. After 90 days of nonuse, data will be moved to Glacier; after 90 additional days, it will be moved to Deep Glacier. There is no charge for data stored at either Glacier level, nor for transfers. The data can be retrieved at any time, though retrieval from the Glacier tiers can take some time.

For information on setting up and using an S3 account, see: Tier 2 Storage

For information on Google Drive, see: Google Drive

Long term Research Storage (Tier 3)

Individual groups are responsible for managing and archiving their data. Some options for data archival include:

NIH Data Management and Sharing Policy

The NIH has issued a new data management and sharing policy, effective January 25, 2023. The University Libraries now offer a comprehensive guide for how to navigate these policies and what they mean for you.

What's new about the 2023 NIH Data Management and Sharing Policy?

Previously, the NIH only required grants with $500,000 per year or more in direct costs to provide a brief explanation of how and when data resulting from the grant would be shared.

The 2023 policy is entirely new. Beginning in 2023, ALL grant applications or renewals that generate Scientific Data must now include a robust and detailed plan for how you will manage and share data during the entire funded period. This includes information on data storage, access policies/procedures, preservation, metadata standards, distribution approaches, and more. You must provide this information in a data management and sharing plan (DMSP). The DMSP is similar to what other funders call a data management plan (DMP).

The DMSP will be assessed by NIH Program Staff (though peer reviewers will be able to comment on the proposed data management budget). The Institute, Center, or Office (ICO)-approved plan becomes a Term and Condition of the Notice of Award.
