
NGC

The NVIDIA GPU Cloud (NGC) provides GPU-accelerated HPC and deep learning containers for scientific computing. NVIDIA tests HPC container compatibility with the Singularity runtime through a rigorous QA process. Application-specific instructions may vary, so it is recommended that you follow the container-specific documentation before running with Singularity. If the container documentation does not include Singularity information, the container has not yet been tested under Singularity.

Pulling images

Singularity images may be pulled directly from the Ocelote login node. However, because the login node is a shared resource, Singularity containers should only be run on a compute node, accessed through an interactive or batch job.
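
For example, a GPU compute node can be requested through an interactive PBS job similar to the one below. The group name, queue, and resource request are placeholders; adjust them to match your allocation.
$ qsub -I -W group_list=<your_group> -q standard -l select=1:ncpus=2:mem=12gb:ngpus=1 -l walltime=01:00:00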

First, load the Singularity module:
$ module load singularity

Credentials were required before Singularity 3.x. If you are using Singularity 3.x, skip ahead to the singularity build step. Otherwise, credentials are established by setting the following variables in the build environment.
bash
$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=<NVIDIA NGC API key>
csh
$ setenv SINGULARITY_DOCKER_USERNAME '$oauthtoken'
$ setenv SINGULARITY_DOCKER_PASSWORD <NVIDIA NGC API key>
More information on how to obtain and use your NVIDIA NGC API key can be found in the NVIDIA NGC documentation.

Once credentials are set in the environment, you're ready to pull and convert the NGC image to a local Singularity image file. The general form of this command for NGC HPC images is:

$ singularity build <local_image> docker://nvcr.io/<registry>/<app:tag>

This singularity build command downloads the app:tag NGC Docker image, converts it to Singularity format, and saves it to the local file named local_image.

For example, to pull the NAMD NGC container tagged with version 2.12-171025 to a local file named namd.simg, you would use the following:

$ singularity build ~/namd.simg docker://nvcr.io/hpc/namd:2.12-171025

After this command has finished, you'll have a Singularity image file, namd.simg, in your home directory.
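
As a quick sanity check that the image converted correctly, you can inspect its metadata or run a trivial command inside it. The commands below are illustrative and not part of the NGC instructions.
$ singularity inspect ~/namd.simg
$ singularity exec ~/namd.simg cat /etc/os-release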

The NVIDIA containers provided in /unsupported or /contrib have been modified to include path bindings to /extra and /groups. They also include the path to NVIDIA commands such as nvidia-smi.


Running

Running NGC containers on Ocelote differs little from the run instructions provided on NGC for each application.

Directory access:

Singularity containers are themselves effectively read-only. To provide application input and output, host directories are generally bound into the container; this is accomplished through the Singularity -B flag. The format of this flag is -B <host_src_dir>:<container_dst_dir>. Once a host directory, host_src_dir, is bound into the container, you may interact with it at container_dst_dir from within the container just as you would outside it.
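
As an illustration, assuming a host directory ~/inputs and a mount point /mnt that already exists inside the image, the following lists the host files from within the container:
$ singularity exec -B ~/inputs:/mnt <image.simg> ls /mnt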

You may also use the --pwd <container_dir> flag, which sets the present working directory of the command run within the container.
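
For instance, the following sets the working directory to /tmp before running the command, so pwd prints /tmp:
$ singularity exec --pwd /tmp <image.simg> pwd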

Ocelote does not support filesystem overlay, so container_dst_dir must already exist within the image for a bind to succeed. To work around the inability to bind arbitrary directories, $HOME and /tmp are mounted automatically and may be used for application I/O.
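
Because $HOME is mounted automatically, output can be written there with no -B flag at all; the file name below is purely illustrative:
$ singularity exec <image.simg> sh -c 'echo test > $HOME/container_test.txt'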

GPU support:

Because all NGC containers are optimized for NVIDIA GPU acceleration, you will always want to add the --nv flag to enable NVIDIA GPU support within the container.
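
A quick way to verify GPU access is to run nvidia-smi from inside the container with the --nv flag, which binds the host NVIDIA utilities and driver libraries into the container:
$ singularity exec --nv <image.simg> nvidia-smi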

Standard run command:

The Singularity command below represents the canonical form used on the Ocelote cluster; <work_dir> should be set to either $HOME or /tmp.

singularity exec --nv --pwd <work_dir> <image.simg> <cmd>
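
For example, using the namd.simg image pulled above, an invocation might look like the following. The namd2 command line and input file are placeholders; consult the container's NGC documentation for the exact arguments.
$ singularity exec --nv --pwd $HOME ~/namd.simg namd2 <input_file>.namd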
