Boston University Shared Computing Cluster (SCC)

Listed below are the technical details of the SCC login and compute nodes, including run time limits, the SU charge rate for each node type, and the configuration of the batch system. If for some reason your code is not able to run within these parameters, please don’t hesitate to send email to help@scc.bu.edu.

Hardware Configuration

Login Nodes

| Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour |
|---|---|---|---|---|---|---|---|
| scc1.bu.edu, scc2.bu.edu (General access, 12 processors) | 2 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0 |
| geo.bu.edu (E&E Dept., 12 processors) | 1 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0 |
| scc4.bu.edu (BUMC/dbGaP, 12 processors) | 1 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0 |
| tanto.bu.edu (CNS Neuromorphics, 8 processors) | 1 | 2 quad-core 2.13 GHz Intel Xeon E5606 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0 [1] |
Compute Nodes – Shared

| Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour |
|---|---|---|---|---|---|---|---|
| scc-aa1..aa8, scc-ab1..ab8, scc-ac1..ac8, scc-ad1..ad8, scc-ae1..ae4 (16 processors) | 36 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | sandybridge | 1.0 |
| scc-ba1..ba8, scc-bb1..bb8, scc-bc1..bc4, scc-bd3..bd8 (16 processors) | 26 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0 |
| scc-ca1..ca8 (16 processors) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 1.0 |
| scc-c01 (20 processors with GPUs [4]) | 1 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 1 Gbps Ethernet | haswell | 1.0 |
| scc-ga01..ga09 (8 processors) | 9 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 |
| scc-ga10, scc-ga11 (8 processors) | 2 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 96 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 |
| scc-ha1..he2, scc-ja1..je2 (12 processors with GPUs [2]) | 20 | 2 six-core 3.07 GHz Intel Xeon X5675 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.85 |
| scc-pa1..pa8, scc-pb1..pb8, scc-pc1..pc8, scc-pd1..pd8, scc-pe1..pe8, scc-pf1..pf8, scc-pg1..pg8, scc-ph1..ph8, scc-pi1..pi4 (16 processors) | 68 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 |
| scc-q01..q08 (16 processors [6]) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 230 GB | 10 Gbps Ethernet | ivybridge | 1.0 [6] |
Compute Nodes – Buy-In

Buy-In nodes have no SU charge for use by their owners.

| Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour |
|---|---|---|---|---|---|---|---|
| scc-ae5..ae7, scc-be4 (16 processors) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
| scc-bc5..bc8, scc-bd1, scc-bd2, scc-be1, scc-be2 (16 processors) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
| scc-be3, scc-be5..be8 (16 processors) | 5 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
| scc-c03 (16 processors) | 1 | 2 eight-core 2.0 GHz Intel Xeon E7-4809v3 | 1024 GB | 794 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
| scc-c04, scc-c05 (20 processors with GPUs [5]) | 2 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
| scc-cb1..cb4 (16 processors) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 1.0 [1] |
| scc-cb5..cb8, scc-cc1..cc8 (16 processors) | 12 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1] |
| scc-da1..da4, scc-db1..db4, scc-df3, scc-df4 (16 processors) | 10 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 128 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
| scc-dc1..dc4, scc-dd1..dd4, scc-de1..de4, scc-df1, scc-df2 (16 processors) | 14 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 64 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
| scc-ea1..ea4, scc-eb1..eb4, scc-ec1..ec4, scc-fa1..fa4, scc-fb1..fb4, scc-fc1..fc4 (12 processors with GPUs [3]) | 24 | 2 six-core 2.66 GHz Intel Xeon X5650 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1] |
| scc-gb01..gb03 (12 processors) | 3 | 2 six-core 2.93 GHz Intel Xeon X5670 | 48 GB | 517 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1] |
| scc-gb04..gb13 (12 processors) | 10 | 2 six-core 2.93 GHz Intel Xeon X5670 | 96 GB | 517 GB | 1 Gbps Ethernet | nehalem | 0.7 [1] |
| scc-gb14 (8 processors) | 1 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1] |
| scc-ka1..ka7, scc-kb1..kb8 (64 processors) | 15 | 4 16-core 2.3 GHz AMD Opteron 6276 | 256 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 0.85 [1] |
| scc-ka8 (64 processors) | 1 | 4 16-core 2.3 GHz AMD Opteron 6276 | 512 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 0.85 [1] |
| scc-ma1..ma8, scc-mb1, scc-mb2 (16 processors) | 10 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1] |
| scc-mb3, scc-mb4, scc-mc1..mc8, scc-me1..me7, scc-mf4, scc-mf5, scc-mf8, scc-mg1..mg8, scc-mh1..mh6, scc-ne5..ne7, scc-pi5..pi8 (16 processors) | 41 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
| scc-mb5..mb8, scc-md1..md8, scc-me8, scc-mf1..mf3, scc-mf6, scc-mf7, scc-mh7, scc-mh8, scc-ne4 (16 processors) | 21 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1] |
| scc-na1..na8, scc-nb1..nb8, scc-nc1..nc8, scc-nd1..nd8, scc-ne1..ne3 (16 processors) | 35 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
| scc-q09 (16 processors [6]) | 1 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 230 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1] [6] |
| scc-q10..q16 (16 processors [6]) | 7 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 230 GB | 10 Gbps Ethernet | sandybridge | 1.0 [1] [6] |
| scc-sa1..sa8, scc-sb1..sb8 (16 processors) | 16 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
| scc-sc1, scc-sc2 (16 processors with GPUs [4]) | 2 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
| scc-sc3..sc6 (16 processors) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1] |
| scc-ta1..ta3, scc-tb3, scc-to1 (20 processors) | 5 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |
| scc-ta4, scc-tb1..tb3, scc-tc1..tc4, scc-td1..td4, scc-te1..te4, scc-tf1..tf4, scc-tg1..tg4, scc-th1..th4, scc-ti1..ti4, scc-tj1, scc-tj2, scc-tk1..tk4, scc-tl1..tl4, scc-tm1..tm4, scc-tn1, scc-tn2, scc-to2..to4 (20 processors) | 51 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 256 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1] |

[1] These machines have limited access; not all SCF users can fully utilize these systems. For those users with special access to these systems, the SU charge is 0.0 for these systems only.
[2] These nodes each also have 8 NVIDIA Tesla M2070 GPU cards with 6 GB of memory. Use of the GPUs is currently in ‘friendly user’ mode, so only the main CPU usage is charged at the moment.
[3] These nodes each also have 3 NVIDIA Tesla M2050 GPU cards with 3 GB of memory. Use of the GPUs is currently in ‘friendly user’ mode, so only the main CPU usage is charged at the moment.
[4] These nodes each also have two NVIDIA Tesla K40m GPU cards with 12 GB of memory. Use of the GPUs is currently in ‘friendly user’ mode, so only the main CPU usage is charged at the moment.
[5] These nodes each also have four NVIDIA Tesla K40m GPU cards with 12 GB of memory. Use of the GPUs is currently in ‘friendly user’ mode, so only the main CPU usage is charged at the moment.
[6] These nodes are intended primarily for running Hadoop jobs, which are handled differently than other jobs on the system. Limited regular batch jobs can run on these nodes when they are not being utilized for Hadoop jobs.

Each node type is charged at a different SU rate per CPU hour, depending on its speed, memory, and other factors, as shown in the table above. Our allocations policy is explained in more detail here.

Batch System and Usage

The batch system on the SCC is the Open Grid Scheduler (OGS), which is an open source batch system based on the Sun Grid Engine scheduler.
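
A batch job is typically submitted by wrapping the commands in a shell script that carries OGS directives and handing that script to qsub. The sketch below is illustrative only: the script name, job name, program, and resource values are placeholders rather than SCC defaults, and the standard SGE/OGS options shown (-N, -j, -l h_rt) should be checked against the SCC documentation before use.

    #!/bin/bash
    # myjob.sh -- hypothetical example batch script (names and values are placeholders)
    #$ -N myjob              # job name shown by qstat
    #$ -j y                  # merge stdout and stderr into a single output file
    #$ -l h_rt=04:00:00      # request 4 hours of wall clock time

    ./my_program input.dat   # placeholder for your own executable and arguments

The script is then submitted and monitored with the usual OGS commands:

    qsub myjob.sh            # submit the job to the scheduler
    qstat -u $USER           # list your jobs and their current states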

Job Run Time Limits
The limitations below apply to the shared SCC resources. Limitations on Buy-In nodes are defined by their owners.

| Limit | Description |
|---|---|
| 15 minutes of CPU time | Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of processors. |
| 12 hours default wall clock | Jobs on the batch nodes have a default wall clock limit of 12 hours, but this can be increased, depending on the type of job. Use the qsub option -l h_rt=HH:MM:SS to ask for a higher limit (see the examples after this table). |
| 720 hours – serial job; 720 hours – omp (1 node) job; 120 hours – MPI job; 48 hours – GPU job | Single-processor (serial) and omp (multiple processors, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs for up to 48 hours. |
| 512 processors | An individual user may have at most 512 processors in the run state at any one time. This limit does not affect job submission. |
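
For example, the wall clock limit can be raised above the 12-hour default either on the qsub command line or inside the job script. The script name below is hypothetical, and the "omp" parallel environment name is assumed from the job types listed above; verify both against the SCC documentation.

    # Raise the wall clock limit to 48 hours for a serial job:
    qsub -l h_rt=48:00:00 myjob.sh

    # Equivalent directive placed inside the job script itself:
    #$ -l h_rt=48:00:00

    # Request 8 processors on a single node (an omp-type job) for 24 hours;
    # "-pe omp 8" assumes the parallel environment is named "omp":
    qsub -pe omp 8 -l h_rt=24:00:00 myjob.sh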

Note that usage on the SCC is charged by wall clock time for all requested processors: if you request 8 processors for 12 hours and your code finishes after 3 hours, you are charged for 24 processor-hours (8 processors * 3 hours of wall clock time), even if your job did not use all of the requested processors. The charge in SUs is computed by multiplying these wall clock processor-hours by the SU factor of the node(s) used.
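
As an illustrative calculation (the job sizes are made up): a job that runs for 5 hours of wall clock time on 16 requested processors of a shared sandybridge node (SU factor 1.0) is charged 16 * 5 * 1.0 = 80 SU, while a 12-processor job running for 5 hours on a nehalem GPU node (SU factor 0.85) is charged 12 * 5 * 0.85 = 51 SU.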