Boston University Shared Computing Cluster (SCC)

Listed below are the technical details of the SCC login and compute nodes, including run time limits, the SU charge rate for each node, and the configuration of the batch system. If your code is unable to run within these parameters, please don't hesitate to email help@scc.bu.edu.

Hardware Configuration

Host Name(s) & Node Type | # of Nodes | Processors / Node | Memory / Node | Scratch Disk / Node | Network | CPU Architecture | SU Charge per CPU hour

Login Nodes
scc1.bu.edu, scc2.bu.edu (General access, 12 processors) | 2 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0
geo.bu.edu (E&E Dept., 12 processors) | 1 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0
scc4.bu.edu (BUMC/dbGaP, 12 processors) | 1 | 2 six-core 2.5 GHz Intel Xeon E5-2640 | 96 GB | 427 GB | 10 Gbps Ethernet | sandybridge | 0
tanto.bu.edu (CNS Neuromorphics, 8 processors) | 1 | 2 quad-core 2.13 GHz Intel Xeon E5606 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 0 [1]

Compute Nodes – Shared
scc-aa1..aa8, scc-ab1..ab8, scc-ac1..ac8, scc-ad1..ad7 (16 processors) | 31 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | sandybridge | 2.6
scc-ba1..ba8, scc-bb1..bb8, scc-bc1..bc4, scc-bd3..bd8 (16 processors) | 26 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 2.6
scc-ca1..ca8 (16 processors) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 2.6
scc-ga01..ga09, scc-gb14 (8 processors) | 10 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 24 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 1.9
scc-ga10, scc-ga11 (8 processors) | 2 | 2 quad-core 2.93 GHz Intel Xeon X5570 | 96 GB | 244 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 1.9
scc-ha1..he2, scc-ja1..je2 (12 processors, with GPUs [2]) | 20 | 2 six-core 3.07 GHz Intel Xeon X5675 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 2.2

Compute Nodes – Buy-In (Buy-in nodes have no SU charge for use by their owners.)
scc-ad8, scc-ae1..ae4 (16 processors) | 5 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | sandybridge | 2.6 [1]
scc-bc5..bc8, scc-bd1, scc-bd2, scc-be1, scc-be2 (16 processors) | 8 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 2.6 [1]
scc-be3..be8 (16 processors) | 6 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet | ivybridge | 2.6 [1]
scc-cb1..cb4 (16 processors) | 4 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 256 GB | 886 GB | 10 Gbps Ethernet | sandybridge | 2.6 [1]
scc-cb5..cb7 (16 processors) | 3 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 256 GB | 886 GB | 10 Gbps Ethernet | ivybridge | 2.6 [1]
scc-da1..da4, scc-db1..db4, scc-df3, scc-df4 (16 processors) | 10 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 128 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 2.7 [1]
scc-dc1..dc4, scc-dd1..dd4, scc-de1..de4, scc-df1, scc-df2 (16 processors) | 14 | 2 eight-core 2.7 GHz Intel Xeon E5-2680 | 64 GB | 427 GB | 1 Gbps Ethernet | sandybridge | 2.7 [1]
scc-ea1..ea4, scc-eb1..eb4, scc-ec1..ec4, scc-fa1..fa4, scc-fb1..fb4, scc-fc1..fc4 (12 processors, with GPUs [3]) | 24 | 2 six-core 2.66 GHz Intel Xeon X5650 | 48 GB | 427 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 1.8 [1]
scc-gb01..gb03 (12 processors) | 3 | 2 six-core 2.93 GHz Intel Xeon X5670 | 48 GB | 517 GB | 1 Gbps Ethernet, QDR Infiniband | nehalem | 1.9 [1]
scc-gb04..gb13 (12 processors) | 10 | 2 six-core 2.93 GHz Intel Xeon X5670 | 96 GB | 517 GB | 1 Gbps Ethernet | nehalem | 1.9 [1]
scc-ka1..ka7, scc-kb1..kb8 (64 processors) | 15 | 4 16-core 2.3 GHz AMD Opteron 6276 | 256 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 2.2 [1]
scc-ka8 (64 processors) | 1 | 4 16-core 2.3 GHz AMD Opteron 6276 | 512 GB | 518 GB | 10 Gbps Ethernet, QDR Infiniband | bulldozer | 2.2 [1]
scc-ma1..ma8, scc-mb1, scc-mb2 (16 processors) | 10 | 2 eight-core 2.6 GHz Intel Xeon E5-2670 | 128 GB | 886 GB | 1 Gbps Ethernet | sandybridge | 2.6 [1]
scc-na1..na8 (16 processors) | 3 | 2 eight-core 2.6 GHz Intel Xeon E5-2650v2 | 128 GB | 886 GB | 1 Gbps Ethernet, FDR Infiniband | ivybridge | 2.6 [1]

[1] These machines have limited access; not all SCF users can fully utilize these systems. For those users with special access to these systems, the SU charge is 0.0 for these systems only.
[2] These nodes each also have 8 NVIDIA Tesla M2070 GPU cards with 6 GB of memory. Use of the GPUs is currently in 'friendly user' mode, so only the main CPU usage is charged at the moment. At some point this will very likely change.
[3] These nodes each also have 3 NVIDIA Tesla M2050 GPU cards with 3 GB of memory. Use of the GPUs is currently in 'friendly user' mode, so only the main CPU usage is charged at the moment. At some point this will very likely change.

Each node type is charged at a different SU rate per hour, depending on its speed, memory, and other factors, as shown in the table above. Our allocations policy is explained in more detail here.

Batch System and Usage

The batch system on the SCC is the Open Grid Scheduler (OGS), which is an open source batch system based on the Sun Grid Engine scheduler.
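For a concrete picture of how jobs are submitted to OGS, here is a minimal sketch of a serial batch script using standard Grid Engine directives, along with the commands to submit and monitor it. The script name, job name, and executable (my_job.sh, my_job, ./my_program) are illustrative placeholders, not SCC conventions; consult the SCC documentation for the full set of supported qsub options.

    #!/bin/bash
    #$ -N my_job              # name the job as it appears in qstat
    #$ -j y                   # merge stdout and stderr into a single output file
    #$ -l h_rt=12:00:00       # request the default 12-hour wall clock limit explicitly
    ./my_program              # replace with your own executable

Submit and monitor the job with the standard Grid Engine commands:

    qsub my_job.sh
    qstat -u $USER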

Job Run Time Limits
The limitations below apply to the shared SCC resources. Limitations on buy-in nodes are defined by their owners.

Limit | Description
15 minutes of CPU time | Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of processors.
12 hours default wall clock | Jobs on the batch nodes have a default wall clock limit of 12 hours, but this can be increased depending on the type of job. Use the qsub option -l h_rt=HH:MM:SS to request a longer limit (see the example following this table).
720 hours – serial job; 720 hours – omp (one node) job; 120 hours – mpi job; 48 hours – GPU job | Single-processor (serial) and OMP (multiple processors, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs are limited to 48 hours.
256 processors | An individual user may have at most 256 processors simultaneously in the run state. This limit does not affect job submission.
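For example, a long serial job could request the 720-hour maximum at submission time; the script name below is only illustrative:

    qsub -l h_rt=720:00:00 long_serial_job.sh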

Note that usage on the SCC is charged by wall clock time. Thus if you request 12 processors and your job holds them for 10 hours, you will be charged for the full 120 processor-hours (multiplied by the SU factor for the node(s) you are running on), even if your computation actually used only, say, 30 hours of CPU time.
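As a back-of-the-envelope check of that arithmetic, assuming one of the shared sandybridge nodes with an SU factor of 2.6 from the table above:

    # 12 processors held for 10 hours of wall clock time on a 2.6 SU/hour node
    echo "12 * 10 * 2.6" | bc     # 312.0 SUs charged, regardless of the CPU time actually used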


Computer Graphics Lab Workstations

The Computer Graphics Lab houses a number of high-performance Linux and Windows workstations. The lab is accessible only to those with appropriate card keys. Follow the link above for more details on the machines available and on getting access to the lab.