Programming for the SCC
Running short jobs interactively on the login nodes (scc1.bu.edu, scc2.bu.edu, geo.bu.edu, and scc4.bu.edu, the last of which is for BUMC users with dbGaP data) is permitted to facilitate and expedite users' program development and debugging. The interactive and background batch run procedures for various serial and parallel jobs are demonstrated below.
All Katana users have access to the SCC through the login nodes listed above. Each user has a home directory on the SCC, $HOME, that is distinct from their home directory on Katana. You may access or copy your Katana home directory files from the SCC by prepending the Katana path with /katana. The /project and /projectnb project directories are shared by both Katana and the SCC, so no /katana prefix is needed for them. For example,
scc1:~ % cp /katana/usr3/...../myfolder/a.out .
scc1:~ % vi /katana/usr1/...../myfolder/myfile.c
scc1:~ % ls -l /katana/$HOME
The /katana path to your Katana home directory is supported on the SCC login nodes only. On the compute nodes, any files needed at runtime, such as executables or data files, must reside physically on the SCC.
For a summary or suggestions on how to navigate between Katana and SCC, please visit the code porting page.
On the SCC, the GNU and Portland Group compilers for C, C++ and FORTRAN are available.
Sixty-four-bit compilation is the system default. 32-bit compilation is optional (see below).
| Language | Compiler Family | Serial Compiler¹ | MPI Wrapper | OpenMP |
|---|---|---|---|---|
| Fortran 77 | GNU | gfortran (g77) | mpif77 | gfortran -fopenmp² |
| Fortran 95/90 | GNU | gfortran (f95) | mpif90 | gfortran -fopenmp |
| C | GNU | gcc (cc) | mpicc | gcc -fopenmp |
| C++ | GNU | g++ (c++) | mpiCC³ | g++ -fopenmp |
| Fortran 77 | PGI | pgfortran⁴ | mpif77 | pgfortran -mp |
| Fortran 95/90 | PGI | pgfortran⁴ | mpif90 | pgfortran -mp |
¹ For all languages shown, the GNU compilers are the system default. You may use either the GNU name or the generic name (shown in parentheses) for the respective language.
² Use gfortran to compile FORTRAN 77 programs with OpenMP directives. It accepts all FORTRAN 77 OpenMP directive bindings and syntax, such as directives introduced with the c$omp or !$omp sentinel.
³ mpicxx and mpic++ are two aliases for mpiCC.
⁴ Please read the Portland Group Compilers Usage Notes on how PGI compilers impact job performance.
Compile a program parallelized with OpenMP directives
- Examples for GNU Compilers
scc1:~ % gfortran -fopenmp -o myexec myprogram.f
scc1:~ % gfortran -fopenmp -o myexec myprogram.f90
scc1:~ % gcc -fopenmp -o myexec myprogram.c
- Examples for PGI Compilers (see the Portland Group Compilers Usage Notes on how PGI compilers impact job performance)
scc1:~ % pgfortran -mp -o myexec myprogram.f
scc1:~ % pgfortran -mp -o myexec myprogram.f90
scc1:~ % pgcc -mp -o myexec myprogram.c
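The commands above assume a source file such as myprogram.c. Purely as a point of reference, a minimal OpenMP program in C might look like the following sketch; the code and its output text are illustrative only and not part of the SCC documentation.

/* myprogram.c - minimal OpenMP sketch (illustrative only).
 * Compile with: gcc -fopenmp -o myexec myprogram.c (or pgcc -mp ...).
 * Each thread prints its ID; the thread count is taken from the
 * OMP_NUM_THREADS environment variable at run time (see below). */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}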
Running interactive OpenMP jobs
For program development and debugging purposes, short OpenMP jobs may run on the login nodes. These jobs are limited to 4 processors and 10 minutes of CPU time per processor.
- Set processor count via the environment variable OMP_NUM_THREADS:
scc1:~ % setenv OMP_NUM_THREADS 2
- Run job on SCC:
scc1:~ % a.out
Compile a program parallelized with MPI
Compiling an MPI program is made easy by the wrapper scripts mpif77, mpif90, mpicc, and mpiCC. Examples of using these compiler wrapper scripts to compile FORTRAN 77/90/95, C, and C++ programs are shown below.
scc1:~ % mpif77 -o myexec myprogram.f
scc1:~ % mpif90 -o myexec myprogram.f90
scc1:~ % mpicc -o myexec myprogram.c
scc1:~ % mpiCC -o myexec myprogram.C
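Again purely for reference, a minimal MPI program in C of the kind these wrappers compile might look like the following sketch; the code and its messages are illustrative only and not part of the SCC documentation.

/* myprogram.c - minimal MPI sketch (illustrative only).
 * Compile with: mpicc -o myexec myprogram.c
 * Each process reports its rank and the total number of processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

The resulting executable is launched with mpirun, as described under "Running interactive MPI jobs" below.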
Note that the MPI wrapper scripts used to compile your source code, such as mpif90 and mpicc, are by default linked to the corresponding GNU compilers and to the MPI library of the openmpi MPI implementation. If you prefer to use the Portland Group compilers, do this:
scc1:~ % setenv MPI_COMPILER pgi
With this, the MPI wrappers (mpif90, mpicc, ...) will use the PGI compilers instead of the GNU compilers to compile MPI codes. The above statement may be added to your shell startup file (.cshrc, or the equivalent export command in .bashrc) to make the setting permanent.
To switch from the PGI settings back to the GNU compilers, do either
scc1:~ % unsetenv MPI_COMPILER
scc1:~ % setenv MPI_COMPILER gnu
To show the current value of MPI_COMPILER:
scc1:~ % printenv MPI_COMPILER
pgi
The above indicates that the Portland Group compilers are in effect. If the command returns nothing or gnu, the GNU compilers are active.
If you plan to use the PGI compilers for your MPI program, it is important to read the Portland Group Compilers Usage Notes on the PGI compilers' impact on job performance and portability.
Running interactive MPI jobs
For program development and debugging purposes, short MPI jobs may run on the login nodes. These jobs are limited to 4 processors and 10 minutes of CPU time per processor.
scc1:~ % mpirun -np 2 a.out
32-bit compilation
By default, all of the compilers on the SCC Cluster produce 64-bit executables. Additional compiler flags are required to build 32-bit executables:
| Compiler Family | 32-bit compilation flags |
|---|---|
| GNU | -m32 |
The MPI wrapper scripts build 64-bit MPI programs by default. To build 32-bit MPI programs, pass the wrapper scripts the flags appropriate to the underlying compiler, as selected by the MPI_COMPILER environment variable:
- Example for building 32-bit GNU executables
scc1:~ % mpicc -m32 myexample.c
Scratch disks
The scratch disks are available as temporary storage space. They are open to all SCF users, and there is no preset quota on their use. Files stored on any scratch disk are NOT BACKED UP and may be kept in scratch for at most 31 days; files not removed by the owner will be deleted by the system after 31 days. Every SCC node has its own /scratch disk. You can access a specific node's scratch disk via the pathname /net/scc-xx#/scratch, where xx is a two-letter string such as "ab" and # is a single-digit number such as "5". See the Technical Summary for the list of node names. For example,
scc:~ % cd /net/scc-ab5/scratch
- The SCC login and compute nodes each have a significant amount (427+ GB) of scratch disk space; the exact size may vary by node.
- On a compute node, a reference to /scratch points to the local node’s /scratch at runtime.
- Similarly, if you are on a login node, type "cd /scratch" to access that login node's own scratch disk.
- You can access the login node’s scratch from any compute node with /net/scc/scratch.
- If you’d like to use scratch space in a batch job, please use the scratch space of the compute nodes assigned to the job. (See Item 2 above.)
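As an illustration only, a program running in a batch job could write its intermediate files to the local /scratch of the compute node it is assigned; the sketch below uses a hypothetical file name and is not part of the SCC documentation. Remember to copy anything you need to keep back to your home or project directory, since files left in /scratch are deleted after 31 days.

/* scratch_example.c - illustrative sketch with a hypothetical file name.
 * Writes an intermediate file to the local /scratch of the compute node
 * on which the batch job is running. Files left in /scratch are not
 * backed up and are removed after 31 days. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/scratch/my_intermediate.dat";  /* example name only */
    FILE *fp = fopen(path, "w");
    if (fp == NULL) {
        perror("fopen /scratch");
        return EXIT_FAILURE;
    }
    fprintf(fp, "intermediate results\n");
    fclose(fp);
    return EXIT_SUCCESS;
}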