For details, please consult the Gordon User Guide.

  • How do I log in to Gordon?

    Log in to Gordon with ssh from your xterm or PuTTY window. Here is an example of a user logging in to Gordon from the node scc1.bu.edu on the Shared Computing Cluster (SCC).

    
    scc1% ssh userID@gordon.sdsc.edu
    password: [enter your password here]
    
    Last login: Thu Jan 17 02:27:42 2013 from scc1.bu.edu
    
                               WELCOME TO
    _______________________________   _________            _________
    __  ___/__  __ \_  ___/_  ____/   __  ____/__________________  /____________
    _____ \__  / / /____ \_  /        _  / __ _  __ \_  ___/  __  /_  __ \_  __ \
    ____/ /_  /_/ /____/ // /___      / /_/ / / /_/ /  /   / /_/ / / /_/ /  / / /
    /____/ /_____/ /____/ \____/      \____/  \____//_/    \__,_/  \____//_/ /_/
    
    Rocks 5.4.3 (Viper)
    Profile built 11:19 20-Mar-2012
    
    Kickstarted 11:28 20-Mar-2012
    ------------------------------------------------------------------------------
    ------------------------------------------------------------------------------
    [userID@gordon-ln1 ~]$
  • How do I copy files between sites?

    Example: Download from the SCC to Gordon

    
    [userID@gordon-ln1 ~]$ mkdir MPI
    [userID@gordon-ln1 ~]$ cd MPI
    
    [userID@gordon-ln1 ~]$ scp userID@scc1.bu.edu:/my_directory/MPIcodes11.zip .
    The authenticity of host 'scc1.bu.edu (192.12.187.130)' can't be established.
    RSA key fingerprint is a1:xx:yy:ed:zz:16:2f:1e:2b:c5:bb:0a:ac:r8:23:96.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'scc1,192.12.187.130' (RSA) to the list
    of known hosts.
    userID@scc1.bu.edu's password:
    MPIcodes11.zip                  100%  314KB 314.3KB/s 314.3KB/s   00:00
    
  • How do I unzip files?

    
    [userID@gordon-ln1 ~]$ unzip MPIcodes11.zip
    Archive:  MPIcodes11.zip
       creating: intro/
       creating: intro/basics/
       creating: intro/basics/C/
      inflating: intro/basics/C/make.ch_gm.gnu
      inflating: intro/basics/C/make.ch_gm.intel
    
       . . . . .
       . . . . .
    
    [userID@gordon-ln1 ~]$
    
    
  • How do I compile MPI programs?

    SCC users take note: on Gordon, the compiler scripts mpicc, mpif90, etc., as available on BU’s SCC, are linked to the Intel compilers by default, and SDSC recommends using these. However, the GNU and PGI compilers are also available via the module command (please consult the Gordon User Guide) should you need them, for example because of compatibility issues.
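    As a sketch, switching compiler stacks with the module command might look like the following. The module names here (intel, gnu) are assumptions; run module avail, and consult the Gordon User Guide, for the actual names on Gordon.

```shell
# Show the modules currently loaded (the Intel stack, by default)
module list

# Show all modules available on the system
module avail

# Hypothetical names: swap the Intel compilers for the GNU compilers
module unload intel
module load gnu
```

    After the swap, the same mpicc/mpif90 wrapper names drive the newly loaded compiler.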

    The following makefile demonstrates the compilation of a serial C code as well as MPI codes in C, C++, and Fortran 90.

    ##### User configurable options #####
    
    ROOT =
    OPTFLAGS    = -O3
    CC          = $(ROOT)mpicc $(OPTFLAGS)
    CCC         = $(ROOT)mpiCC $(OPTFLAGS)
    F90         = $(ROOT)mpif90 $(OPTFLAGS)
    CLINKER     = $(ROOT)mpicc $(OPTFLAGS)
    CCLINKER    = $(ROOT)mpiCC $(OPTFLAGS)
    FLINKER     = $(ROOT)mpif77 $(OPTFLAGS)
    F90LINKER   = $(ROOT)mpif90 $(OPTFLAGS)
    MAKE        = make
    SHELL       = /bin/sh
    
    ### End User configurable options ###
    
    CFLAGS  =
    FFLAGS =
    LIBS = -lm
    FLIBS =
    
    EXECS = example1 example1_1 example1_2 example1_3 example1_4
    
    default: $(EXECS)
    
    all: $(EXECS)
    
    example1:
            icc -o $@ $(OPTFLAGS) example1.c $(LIBS)
    
    example1_1: example1_1.o
            $(CC) -o $@ example1_1.o $(LIBS)
    
    example1_2: example1_2.o
            $(F90) -o $@ example1_2.o $(LIBS)
    
    example1_3: example1_3.o
            $(CCC) -o $@ example1_3.o $(LIBS)
    
    example1_4: example1_4.o
            $(CCC) -o $@ example1_4.o $(LIBS)
    
    clean:
            /bin/rm -f *.o *~ $(EXECS)
    .c.o:
            $(CC) $(CFLAGS) -c $*.c
    .C.o:
            $(CCC) $(CFLAGS) -c $*.C
    .cpp.o:
            $(CCC) $(CFLAGS) -c $*.cpp
    .f90.o:
            $(F90) $(FFLAGS) -c $*.f90
    
    
    .SUFFIXES: .c  .C  .cpp .f90
    

  • How do I run jobs?

    You can run serial jobs interactively. Multiprocessor jobs must be submitted through batch. There are two queues: normal and vsmp (virtual SMP). By default, jobs are submitted to the normal queue, and processor requests must be in increments of 16. The current directory is not on the search path, so be sure to prefix the executable with “./”, as in ./a.out. Examples:

    • Serial Jobs

      Run interactively on the login node.
      [userID@gordon-ln1 C]$ ./example1

    • Interactive Batch Jobs

      Multiprocessor jobs must run via batch (mpirun is not available on the login node). For program development and debugging, you can submit an interactive batch job, for example with 16 processors and a 90-minute duration. Once the batch job starts, you can proceed interactively.

      [userID@gordon-ln1 C]$ qsub -I -q normal -lnodes=1:ppn=16:native,walltime=90:00
      qsub: waiting for job 522852.gordon-fe2.local to start
      qsub: job 522852.gordon-fe2.local ready

      The interactive batch session starts you at your home directory.

      [userID@gcn-18-32 ~]$ cd /home/userID/MPI/intro/basics/C
      [userID@gcn-18-32 C]$ mpirun -np 4 ./example1_2
      Process 1 has the partial integral of 0.324423
      Process 2 has the partial integral of 0.216773
      Process 3 has the partial integral of 0.076120
      Process 0 has the partial integral of 0.382684
      The Integral =1.000000
      [userID@gcn-18-32 C]$
    • Batch Jobs

      Submit a 16-processor batch job using the batch script my_batch_script.

      [userID@gordon-ln1 C]$ qsub my_batch_script
      522731.gordon-fe2.local

      You can use qstat to query the status of your job:

      [userID@gordon-ln1 C]$ qstat -u userID
      gordon-fe2.sdsc.edu:
      
                                                                               Req'd  Req'd   Elap
      Job ID               Username Queue    Jobname          SessID NDS   TSK Memory Time  S Time
      -------------------- -------- -------- ---------------- ------ ----- --- ------ ----- - -----
      522731.gordon-fe   userID    normal   MPI_ex_2            --      1  16    --  00:30 Q   --
      [userID@gordon-ln1 C]$

      my_batch_script looks like this:

      #!/bin/bash
      #PBS -l walltime=0:30:00
      #PBS -N MPI_ex_2
      #PBS -o my_example2.out
      #PBS -e my_example2.err
      #PBS -m ae
      #PBS -M kadin@bu.edu
      #PBS -l nodes=1:ppn=16:native
      cd /home/kadin/MPI/intro/basics/C
      mpirun -np 16 -hostfile $PBS_NODEFILE ./example1_2

      In the above, MPI_ex_2 is the batch job name; output and error reports go to my_example2.out and my_example2.err, respectively; job status is mailed to kadin@bu.edu; and one node with 16 cores is requested. Don’t forget to cd to the directory where the executable lives. Alternatively, you could give the absolute path of the executable to mpirun and avoid the cd. Note also that the processor count given to mpirun -np should equal the number of nodes times the processors per node (ppn).
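
      To make the nodes-times-ppn rule concrete, a two-node variant of the same script might look like this sketch (paths and job name carried over from the example above):

```shell
#!/bin/bash
#PBS -l walltime=0:30:00
#PBS -N MPI_ex_2
#PBS -o my_example2.out
#PBS -e my_example2.err
#PBS -l nodes=2:ppn=16:native
cd /home/kadin/MPI/intro/basics/C
# -np must match nodes * ppn: 2 * 16 = 32
mpirun -np 32 -hostfile $PBS_NODEFILE ./example1_2
```

      Requesting nodes=2:ppn=16 but passing a smaller -np would leave cores idle; a larger -np would oversubscribe them.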

  • OpenMP and hybrid MPI + OpenMP paradigms are also available.

    Please consult the Gordon User Guide.
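
    As a minimal sketch of the hybrid model, each MPI rank spawns a team of OpenMP threads inside a parallel region. The file name hybrid.c is hypothetical; see the Gordon User Guide for the recommended compile flags.

```c
/* hybrid.c -- hybrid MPI + OpenMP hello-world sketch */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI rank executes this region with OMP_NUM_THREADS threads */
    #pragma omp parallel
    printf("MPI rank %d, OpenMP thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
```

    With the Intel compilers this would typically be built with an OpenMP flag on the MPI wrapper (e.g., mpicc -openmp; the GNU equivalent is -fopenmp) and launched with OMP_NUM_THREADS set in the batch script.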

  • Programming Tools

    • gdb, idb – GNU and Intel debuggers, respectively (see man pages)
    • tau – profiling tool, see Packages below
    • PAPI – hardware performance counter software; see Packages below
  • Application Packages