CUDA is a parallel programming model and software environment developed by NVIDIA. It extends C/C++ with a small set of constructs and a runtime API that enable GPU acceleration of data-parallel computations. The performance of many applications can be increased dramatically by writing CUDA code directly or by linking against GPU-accelerated libraries.
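
As a brief illustration of the programming model, the hypothetical example below adds two vectors using one GPU thread per element. All of the names in it (including the file name, which could be vec_add.cu) are invented for illustration; compiling and running such code on the SCC is covered in the sections that follow.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host arrays.
    float *a = (float *)malloc(bytes), *b = (float *)malloc(bytes), *c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device arrays and copy the inputs to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);   // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}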

Setting up your environment

To link and run applications using CUDA, you will need to make some changes to your path and environment. Execute one of the following two commands, either at the command line or by adding it to your .cshrc or .bashrc file.

module load cuda/4.2
module load cuda/5.0

The list of available modules can be obtained by running the module avail command.

If you are adding this line to your .cshrc file, don't forget to run source .cshrc (or source .bashrc if you use bash) at the system prompt so that the change takes effect immediately in your current login session.
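
For example, assuming you use bash and want the 5.0 toolkit (adjust the version to whatever module avail reports), the following pair of commands appends the module line to your startup file and reloads it:

scc1% echo "module load cuda/5.0" >> ~/.bashrc
scc1% source ~/.bashrc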

Compiling a simple CUDA C/C++ program

Consider the following simple CUDA program gpu_info.cu that prints out information about GPUs installed on the system:
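
A minimal sketch of what such a program might contain is shown here, built on the runtime calls cudaGetDeviceCount and cudaGetDeviceProperties; the actual gpu_info.cu that you download below may differ in detail:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
    printf("Number of CUDA devices: %d\n", device_count);

    // Query and print the basic properties of each device.
    for (int dev = 0; dev < device_count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
        printf("  Global memory:      %lu MB\n",
               (unsigned long)(prop.totalGlobalMem / (1024 * 1024)));
        printf("  Multiprocessors:    %d\n", prop.multiProcessorCount);
    }
    return 0;
}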

Download the source code of gpu_info.cu and transfer it to the directory where you are working on the SCC.
Then execute the following command to compile gpu_info.cu:

scc1% nvcc -o gpu_info gpu_info.cu

Running a CUDA program interactively on a GPU-enabled node

To execute CUDA code, you have to log in via interactive batch to a GPU-enabled node on the SCC. To request an xterm with access to 1 GPU for 24 hours (this command requires X to be running on your local machine):

scc1% qsh -V -l h_rt=24:00:00 -l gpus=1

To run a CUDA program interactively, you then type in the name of the program at the command prompt:

gpunode% gpu_info

Submitting a CUDA program as a batch job

The following line shows how to submit the gpu_info program to run in batch mode on a single CPU with access to a single GPU:

scc1% qsub -l gpus=1 -b y gpu_info

where the -l gpus=# option indicates the number of GPUs requested for each processor (possibly a fraction). To learn about all of the options that can be used when submitting a job, please visit the running jobs page.
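
For anything more involved than a single binary, the job can also be described in a short script and submitted with qsub. The following is only a minimal sketch; the script name gpu_job.sh, the one-hour runtime, and the module version are placeholders to adapt to your own job:

#!/bin/bash -l
#$ -l h_rt=1:00:00      # hard runtime limit
#$ -l gpus=1            # request 1 GPU per processor
module load cuda/5.0
./gpu_info

scc1% qsub gpu_job.sh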

CUDA Libraries

Several scientific libraries that make use of CUDA are available:

  • cuBLAS – Linear Algebra Subroutines. A GPU-accelerated implementation of the complete standard BLAS library (a short usage example follows this list).
  • cuFFT – Fast Fourier Transform library. Provides a simple interface for computing FFTs on the GPU, up to 10x faster than CPU-based alternatives.
  • cuRAND – Random Number Generation library. Delivers high-performance random number generation on the GPU.
  • cuSPARSE – Sparse Matrix library. Provides a collection of basic linear algebra subroutines for sparse matrices.
  • NPP – NVIDIA Performance Primitives library. A collection of GPU-accelerated image and signal processing primitives.
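
As an illustration of how these libraries are called, here is a minimal sketch that uses the cuBLAS v2 API to compute a SAXPY (y = alpha*x + y) on the GPU. The file name saxpy.cu and the data values are invented for the example:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Fill a host buffer and copy it to two device vectors x and y.
    float *h = (float *)malloc(bytes);
    float *d_x, *d_y;
    cudaMalloc((void **)&d_x, bytes);
    cudaMalloc((void **)&d_y, bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    cudaMemcpy(d_x, h, bytes, cudaMemcpyHostToDevice);
    for (int i = 0; i < n; ++i) h[i] = 2.0f;
    cudaMemcpy(d_y, h, bytes, cudaMemcpyHostToDevice);

    // y = alpha * x + y, computed on the GPU by cuBLAS.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 3.0f;
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
    cublasDestroy(handle);

    cudaMemcpy(h, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h[0]);   // expect 5.0

    cudaFree(d_x);
    cudaFree(d_y);
    free(h);
    return 0;
}

The program is linked against the library at compile time:

scc1% nvcc -o saxpy saxpy.cu -lcublas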

Architecture-specific options

There are currently two types of GPU cards available on the SCC: NVIDIA Tesla M2050 cards (3 per node) with 3 GB of memory each on the scc-e* and scc-f* nodes, and NVIDIA Tesla M2070 cards (8 per node) with 6 GB of memory each on the scc-h* and scc-j* nodes.

Architecture-specific features can be enabled using the -arch sm_## flag during compilation. The “sm” stands for “streaming multiprocessor,” and the number following sm_ indicates the features supported by the architecture. For example, for a CUDA program running on the SCC you can add the -arch sm_20 flag to enable functionality available on GPUs with Compute Capability 2.0 (the Fermi architecture). See the CUDA Toolkit documentation for more information.
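
For example, to compile gpu_info.cu with Fermi (Compute Capability 2.0) features enabled:

scc1% nvcc -arch sm_20 -o gpu_info gpu_info.cu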

Additional CUDA training resources

NVIDIA provides resources for learning CUDA programming at
https://developer.nvidia.com/cuda-training.

CUDA Consulting

The SCV's staff scientific programmers can help you tune your CUDA code. For assistance, please send email to help@scc.bu.edu.