Boston University Shared Computing Cluster (SCC)

Listed below are the technical details of the SCC login and compute nodes, including run time limits, the SU charge rate for each node type, and the configuration of the batch system. If your code cannot run within these parameters, please don't hesitate to email help@scc.bu.edu.

Hardware Configuration

Columns: Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour
Bracketed numbers refer to the footnotes below the table.

Login Nodes

scc1.bu.edu, scc2.bu.edu (General access, 28 cores) | 2 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet | broadwell | 0
geo.bu.edu (E&E Dept., 28 cores) | 1 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet | broadwell | 0
scc4.bu.edu (BUMC/dbGaP, 28 cores) | 1 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet | broadwell | 0
tanto.bu.edu (CNS Neuromorphics, 8 cores) | 1 | 2 quad-core 2.13 GHz Intel Xeon E5606 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0 [1]

Compute Nodes – Shared

scc-aa1..aa8, scc-ab1..ab8, scc-ac1..ac8, scc-ad1..ad8, scc-ae1..ae4 (16 cores) | 36 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | sandybridge | 1.0
scc-ba1..ba8, scc-bb1..bb8, scc-bc1..bc4, scc-bd3..bd8 (16 cores) | 26 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0
scc-ca1..ca8 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 1.0
scc-c01 (20 cores with K40m GPUs [4]) | 1 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 1 Gbps Ethernet | haswell | 1.0
scc-c06, scc-c07 (36 cores) | 2 | 2 eighteen-core 2.4 GHz Intel Xeon E7-8867v4 | 1024 GB | 1068 GB | 10 Gbps Ethernet | broadwell | 1.0
scc-c08..c11 (28 cores with 2 P100 GPUs each [6]) | 4 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 849 GB | 10 Gbps Ethernet | broadwell | 1.0
scc-ga01..ga09 (8 cores) | 9 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7
scc-ga10, scc-ga11 (8 cores) | 2 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 96 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7
scc-ha1..he2, scc-ja1..je2 (12 cores with M2070 GPUs [2]) | 20 | 2 six-core 3.07 GHz Intel Xeon X5675 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.85
scc-pa1..pa8, scc-pb1..pb8, scc-pc1..pc8, scc-pd1..pd8, scc-pe1..pe8, scc-pf1..pf8, scc-pg1..pg8, scc-ph1..ph8, scc-pi1..pi4 (16 cores) | 68 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0
scc-q01..q08 (16 cores [8]) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 230 GB | 10 Gbps Ethernet | ivybridge | 1.0 [8]
scc-ua1..ua4, scc-ub1..ub4, scc-uc1..uc4, scc-ud1..ud4, scc-ue1..ue4, scc-uf1..uf4, scc-ug1..ug4, scc-uh1..uh4, scc-ui1..ui4 (28 cores) | 36 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet, EDR Infiniband (100 Gbps) | broadwell | 1.0
scc-v02, scc-v03 (8 cores with M2000 GPU [7]) | 2 | 1 eight-core 2.1 GHz Intel Xeon E5-2620v4 | 128 GB | 427 GB | 10 Gbps Ethernet | broadwell | 1.0
scc-wa1..wa4, scc-wb1..wb4, scc-wc1..wc4, scc-wd1..wd4, scc-we1..we4, scc-wf1..wf4, scc-wg1..wg4, scc-wh1..wh4, scc-wi1..wi4, scc-wl1..wl4, scc-wm1..wm4, scc-wn1..wn4 (28 cores) | 48 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet | broadwell | 1.0
scc-wj1..wj4, scc-wk1..wk4 (28 cores) | 8 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 512 GB | 886 GB | 10 Gbps Ethernet | broadwell | 1.0

Compute Nodes – Buy-In (Buy-In nodes have no SU charge for use by their owners.)

scc-ae5..ae7, scc-be4 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1]
scc-bc5..bc8, scc-bd1, scc-bd2, scc-be1, scc-be2 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1]
scc-be3, scc-be5..be8 (16 cores) | 5 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1]
scc-c03 (16 cores) | 1 | 2 eight-core 2.0 GHz Intel Xeon E7-4809v3 | 1024 GB | 794 GB | 10 Gbps Ethernet | haswell | 1.0 [1]
scc-c04, scc-c05 (20 cores with K40m GPUs [5]) | 2 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1]
scc-c12..c14 (28 cores with 4 P100 GPUs each [6]) | 3 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 849 GB | 10 Gbps Ethernet | broadwell | 1.0
scc-cb1..cb4 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 1.0 [1]
scc-cb5..cb8 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1]
scc-cc1..cc8 (16 cores) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1]
scc-da1..da4, scc-db1..db4, scc-df3, scc-df4 (16 cores) | 10 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 128 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1]
scc-dc1..dc4, scc-dd1..dd4, scc-de1..de4, scc-df1, scc-df2 (16 cores) | 14 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 64 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1]
scc-ea1..ea4, scc-eb1..eb4, scc-ec1..ec4, scc-fa1..fa4, scc-fb1..fb4, scc-fc1..fc4 (12 cores with M2050 GPUs [3]) | 24 | 2 six-core 2.66 GHz Intel Xeon X5650 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1]
scc-gb01..gb03 (12 cores) | 3 | 2 six-core 2.93 GHz Intel Xeon X5670 | 48 GB | 517 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1]
scc-gb04..gb13 (12 cores) | 10 | 2 six-core 2.93 GHz Intel Xeon X5670 | 96 GB | 517 GB | 1 Gbps Ethernet | nehalem | 0.7 [1]
scc-gb14 (8 cores) | 1 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0.7 [1]
scc-ka1..ka7, scc-kb1..kb8 (64 cores) | 15 | 4 16-core 2.3 GHz AMD Opteron 6276 | 256 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 0.85 [1]
scc-ka8 (64 cores) | 1 | 4 16-core 2.3 GHz AMD Opteron 6276 | 512 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 0.85 [1]
scc-ma1..ma8, scc-mb1, scc-mb2 (16 cores) | 10 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 1.0 [1]
scc-mb3, scc-mb4, scc-mc1..mc8, scc-me1..me7, scc-mf4, scc-mf5, scc-mf8, scc-mg1..mg8, scc-mh1..mh6, scc-ne5..ne7, scc-pi5..pi8 (16 cores) | 41 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1]
scc-mb5..mb8, scc-md1..md8, scc-me8, scc-mf1..mf3, scc-mf6, scc-mf7, scc-mh7, scc-mh8, scc-ne4 (16 cores) | 21 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 1.0 [1]
scc-na1..na8, scc-nb1..nb8, scc-nc1..nc8, scc-nd1..nd8, scc-ne1..ne3 (16 cores) | 35 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1]
scc-q09 (16 cores [8]) | 1 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 230 GB | 10 Gbps Ethernet | ivybridge | 1.0 [1][8]
scc-q10..q16 (16 cores [8]) | 7 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 230 GB | 10 Gbps Ethernet | sandybridge | 1.0 [1][8]
scc-sa1..sa8, scc-sb1..sb8 (16 cores) | 16 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1]
scc-sc1, scc-sc2 (16 cores with K40m GPUs [4]) | 2 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1]
scc-sc3..sc6 (16 cores) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 1.0 [1]
scc-ta1..ta3, scc-to1 (20 cores) | 4 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 128 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1]
scc-ta4, scc-tb1..tb3, scc-tc1..tc4, scc-td1..td4, scc-te1..te4, scc-tf1..tf4, scc-tg1..tg4, scc-th1..th4, scc-ti1..ti4, scc-tj1, scc-tj2, scc-tk1..tk4, scc-tl1..tl4, scc-tm1..tm4, scc-tn1, scc-tn2, scc-to2..to4 (20 cores) | 51 | 2 ten-core 2.6 GHz Intel Xeon E5-2660v3 | 256 GB | 886 GB | 10 Gbps Ethernet | haswell | 1.0 [1]
scc-tj3, scc-tj4, scc-tr1, scc-wo1, scc-wo2 (28 cores) | 5 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 512 GB | 886 GB | 10 Gbps Ethernet | broadwell | 1.0 [1]
scc-tn3, scc-tn4, scc-tp1..tp4, scc-tq1..tq4, scc-tr2, scc-wo3, scc-wo4 (28 cores) | 13 | 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4 | 256 GB | 886 GB | 10 Gbps Ethernet | broadwell | 1.0 [1]

[1] These machines have limited access; not all SCF users can fully utilize them. For users with special access, the SU charge is 0.0 on these systems only.
[2] Each of these nodes has eight NVIDIA Tesla M2070 GPU cards with 6 GB of memory.
[3] Each of these nodes has three NVIDIA Tesla M2050 GPU cards with 3 GB of memory.
[4] Each of these nodes has two NVIDIA Tesla K40m GPU cards with 12 GB of memory.
[5] Each of these nodes has four NVIDIA Tesla K40m GPU cards with 12 GB of memory.
[6] Each of these nodes has two or four (as specified above) NVIDIA Tesla P100 GPU cards with 12 GB of memory.
[7] These nodes support VirtualGL; each has one NVIDIA Tesla M2000 GPU card with 4 GB of memory.
[8] These nodes are intended primarily for running Hadoop jobs, which are handled differently from other jobs on the system. A limited number of regular batch jobs can run on these nodes when they are not being used for Hadoop jobs.

All of the GPUs are currently in ‘friendly user’ mode, so for the moment only CPU usage is charged on SCC nodes with GPUs.

Each node type is charged at a different SU rate per hour, depending on its speed, memory, and other factors, as shown in the table above. Our allocations policy is explained in more detail in the SCC allocations policy documentation.

Batch System and Usage

The batch system on the SCC is the Open Grid Scheduler (OGS), an open-source batch system based on the Sun Grid Engine scheduler.

Job Run Time Limits
The limits below apply to the shared SCC resources. Limits on Buy-In nodes are defined by their owners.

Limit | Description
15 minutes of CPU time | Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of cores.
12 hours default wall clock | Jobs on the batch nodes have a default wall clock limit of 12 hours, but this can be increased depending on the type of job. Use the qsub option -l h_rt=HH:MM:SS to request a higher limit (see the sketch following this table).
720 hours – serial job; 720 hours – omp (one node) job; 120 hours – mpi job; 48 hours – GPU job | Single-processor (serial) and omp (multiple cores, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs for up to 48 hours.
512 cores | An individual user may have at most 512 cores simultaneously in the run state. This limit does not affect job submission.
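
For example, requesting 48 hours of wall clock time for a GPU job means passing -l h_rt=48:00:00 to qsub. The short Python sketch below is purely illustrative (the helper name, job-type labels, and limit table are assumptions for this example, not an SCC utility); it formats such a request and checks it against the limits listed above.

```python
# Illustrative sketch only, not an official SCC tool: formats a qsub wall clock
# request (-l h_rt=HH:MM:SS) and checks it against the run time limits in the
# table above. The function name and job-type labels are assumptions.

# Maximum wall clock hours per job type, taken from the limits table above.
MAX_HOURS = {
    "serial": 720,  # single-processor jobs
    "omp": 720,     # multiple cores, all on one node
    "mpi": 120,
    "gpu": 48,
}


def h_rt_option(hours, minutes=0, seconds=0, job_type="serial"):
    """Return a qsub option string such as '-l h_rt=48:00:00'."""
    limit = MAX_HOURS[job_type]
    total_hours = hours + minutes / 60 + seconds / 3600
    if total_hours > limit:
        raise ValueError(f"{job_type} jobs are limited to {limit} hours of wall clock time")
    return f"-l h_rt={hours:02d}:{minutes:02d}:{seconds:02d}"


if __name__ == "__main__":
    # e.g. a 48-hour request for a GPU job: qsub -l h_rt=48:00:00 myjob.sh
    print(h_rt_option(48, job_type="gpu"))  # -l h_rt=48:00:00
```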

Note that usage on the SCC is charged by wall clock time: if you request 8 cores for 12 hours and your code finishes after 3 hours, you are charged 24 CPU hours (8 cores * 3 hours), even if your job did not use all of the requested cores. The charge is computed by multiplying this wall clock usage by the SU factor of the node(s) used.
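
As a worked example, the minimal sketch below applies this rule to the job just described (the function name is an assumption for illustration; the SU factors are taken from the hardware table above).

```python
# Minimal sketch of the SU accounting rule described above: the charge is the
# wall clock time actually used, multiplied by the number of cores reserved
# and by the SU factor of the node. Illustrative only, not an official tool.

def su_charge(cores, wall_clock_hours, su_factor):
    """SU charge = reserved cores * wall clock hours used * node SU factor."""
    return cores * wall_clock_hours * su_factor


# The example from the text: 8 cores reserved, 3 hours of wall clock time,
# on a node charged at 1.0 SU per CPU hour -> 24 SU (even though 12 hours
# were requested).
print(su_charge(cores=8, wall_clock_hours=3, su_factor=1.0))  # 24.0

# The same job on a nehalem node charged at 0.7 SU per CPU hour -> 16.8 SU.
print(su_charge(cores=8, wall_clock_hours=3, su_factor=0.7))  # 16.8
```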