Modern GPUs (graphics processing units) can perform general-purpose computations traditionally handled by CPUs. GPU computing is rapidly becoming a standard for data-parallel heterogeneous computing in science and engineering, and many existing applications have been adapted to make effective use of multi-threaded GPUs.
- GPU Resources
- Running on the GPU Nodes
- Software with GPU Acceleration
- CPU vs. GPU
- Using Only Your Assigned GPUs – CUDA_VISIBLE_DEVICES
- GPU Consulting
GPU Resources
The Shared Computing Cluster includes nodes with NVIDIA GPU cards, some of which are configured for computational workloads and some for interactive VirtualGL sessions. For more details on nodes available on the SCC, please visit the Technical Summary page.
Running on the GPU Nodes
Access to GPU-enabled nodes is via the batch system (qsub/qrsh); direct login to these nodes is not permitted. The GPU nodes support all of the standard batch options in addition to the GPU-specific options below (-l gpus=G is required).
The qgpus utility lists all installed GPU models. Run it to see the current counts for each GPU type installed on the SCC:
scc1% qgpus
gpu_type    total  in_use  available
--------    -----  ------  ---------
A100            5       4          1
A100-80G       24      22          2
A40            68      68          0
A6000          34      32          2
K40m           14       8          6
L40             6       0          6
L40S           52      31         14
P100           28      27          1
P100-16G       23      17          6
RTX6000         5       4          1
RTX6000ada      4       3          1
RTX8000        10       9          1
TitanV          8       0          8
TitanXp        10       4          6
V100           65      50         15
V100-32G        4       4          0
The command qgpus -s shows the GPUs available in the shared compute node queues. When using these model names with the gpu_type qsub option (as shown below), don't use the suffix indicating the amount of memory; for example, use A100, not A100-80G. The gpu_memory qsub option is used to request a specific amount of GPU RAM.
GPU Batch Option | Description |
---|---|
-l gpus=G | G is the number of GPUs. |
-l gpu_type=MODEL | MODEL is the GPU model name. Check the output of the qgpus tool for current GPU types. |
-l gpu_memory=#G | #G is the minimum amount of memory required per GPU. |
-l gpu_c=#CC | The GPU compute capability is NVIDIA's term for the GPU architecture generation; NVIDIA maintains a list of GPU models and their compute capabilities. As of June 2022, the oldest GPUs on the SCC (K40m) have compute capability 3.5. Where software requires a minimum architecture, RCS recommends specifying the compute capability rather than the gpu_type, as this allows the job to run on the widest range of GPUs: "gpu_type=P100" only allows a job to run on a P100 GPU, while "gpu_c=6.0" allows it to run on a P100 or any newer GPU (which is all of them except the K40m). |
Below are some examples of requesting GPU resources.
Interactive Batch
To request an interactive session with access to 1 GPU (any type) for 12 hours:
scc1% qrsh -l gpus=1
To request an interactive session with access to 1 GPU with compute capability of at least 6.0 (which includes all GPUs except the K40m) and 4 CPU cores:
scc1% qrsh -l gpus=1 -l gpu_c=6.0 -pe omp 4
Non-interactive Batch Job
To submit a batch job with access to 1 GPU (compute capability of at least 7.0) and 8 CPU cores:
scc1% qsub -l gpus=1 -l gpu_c=7.0 -pe omp 8 your_batch_script
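The gpu_type and gpu_memory options from the table above follow the same pattern (V100 and 16G below are example values):
To submit a batch job requesting 1 GPU of a specific model (note: no memory suffix in the model name):
scc1% qsub -l gpus=1 -l gpu_type=V100 your_batch_script
To submit a batch job requesting 1 GPU with at least 16 GB of GPU memory:
scc1% qsub -l gpus=1 -l gpu_memory=16G your_batch_script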
See an example of a script to submit a batch job.
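A minimal sketch of such a script is shown below; the #$ directives mirror the qsub options described above, and the module name and program name are placeholders to adapt to your own code:

#!/bin/bash -l

# Request 1 GPU with compute capability 6.0 or higher and 4 CPU cores
#$ -l gpus=1
#$ -l gpu_c=6.0
#$ -pe omp 4

# Load a CUDA module (placeholder; run "module avail cuda" for installed versions)
module load cuda

# Run your GPU program (placeholder name)
./your_gpu_program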
Software with GPU Acceleration
As GPU computing remains a relatively new paradigm, it is not yet supported by all programming languages, and application support is still limited. We strive to provide the most up-to-date information, but this will be an evolving process. The following languages and software packages have been successfully tested for GPU support:
- CUDA C/C++
- CUDA Fortran
- OpenACC C/C++
- OpenACC Fortran
- MATLAB (Parallel Computing Toolbox)
- R (various packages)
- Java (requires loading the jcuda module)
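As a minimal sketch of the CUDA C/C++ workflow (my_kernel.cu is a placeholder, and the unversioned module load is an assumption; run module avail cuda to see the installed versions):
scc1% module load cuda
scc1% nvcc my_kernel.cu -o my_kernel
The resulting executable should then be run within a GPU batch job as described in Running on the GPU Nodes above.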
CPU vs. GPU
Use the comparison below to judge whether your application is suitable for conversion to GPUs.
CPUs are great for task parallelism:
- High performance per single thread execution
- Fast caches used for data reuse
- Complex control logic
GPUs are superb for data parallelism:
- High throughput on parallel calculations
- High arithmetic intensity: many processor cores performing simple math operations
- Fast access to local and shared memory
Ideal applications for general programming on GPUs:
- Large data sets with minimal dependencies between data elements
- High parallelism in computation
- High number of arithmetic operations
Physical modeling, data analysis, computational engineering, and matrix algebra are just a few examples of applications that can benefit greatly from GPU computation.
Using Only Your Assigned GPUs – CUDA_VISIBLE_DEVICES
Please use only the GPUs assigned to you, as indicated by the environment variable CUDA_VISIBLE_DEVICES.
Since many of the SCC compute nodes have multiple GPUs, each job must run only on the GPUs assigned to it by the batch system to avoid interfering with other jobs. To ensure this, the batch system sets CUDA_VISIBLE_DEVICES to a comma-separated list of integers representing the GPUs assigned to the job. The CUDA runtime library consults this variable when it allocates devices, so unless an application does its own device allocation, it will automatically comply with this policy.
DO NOT manually set this variable to access other GPUs on the same node. For example, many Python codes written on a developer's own computer contain lines like the following, which must be removed before running on the SCC:
import os
# THIS IS WRONG DO NOT DO THIS
os.environ["CUDA_VISIBLE_DEVICES"]="0"
Instead, you can check the system-assigned GPU ID with:
import os
print(os.getenv("CUDA_VISIBLE_DEVICES"))
GPU software that refers to a specific GPU should always use GPU 0, which the CUDA runtime library maps to the first device listed in CUDA_VISIBLE_DEVICES. Software for a two-GPU job would use GPUs 0 and 1, and so on.
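For example, a minimal sketch assuming PyTorch (used here purely for illustration; any CUDA-aware library behaves the same way):
import torch

# Logical device 0 is always the first GPU assigned by the batch system,
# regardless of which physical GPU CUDA_VISIBLE_DEVICES maps it to.
device = torch.device("cuda:0")
x = torch.randn(1024, 1024, device=device)  # tensor allocated on the assigned GPU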
GPU Consulting
RCS staff scientific programmers can help you with your questions concerning GPU programming. Please contact us at help@scc.bu.edu.