Boston University Shared Computing Cluster (SCC)

Listed below are the technical details of the SCC login and compute nodes, including run time limits, the SU charge rate for each node type, and the configuration of the batch system. If your code is not able to run within these parameters, please don't hesitate to email help@scc.bu.edu.

Hardware Configuration

Each entry below lists, in order: the host name(s) and node type; the number of nodes (and GPUs, if any); the processors per node; the memory per node; the scratch disk per node; the network; the CPU architecture; and the SU charge per CPU hour. Footnote markers [n] refer to the notes following the table.

Login Nodes

scc1.bu.edu, scc2.bu.edu (General access, 32 cores)
  2 nodes; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 0.0

geo.bu.edu (E&E Dept., 32 cores)
  1 node; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 0.0

scc4.bu.edu (BUMC/dbGaP, 32 cores)
  1 node; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 0.0
Compute Nodes – Shared

scc-aa1..aa8, scc-ab1..ab8, scc-ac1..ac8, scc-ad1..ad8, scc-ae1..ae4 (16 cores)
  36 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet, FDR Infiniband; sandybridge; SU charge 1.0

scc-ba2..ba8, scc-bb1..bb8, scc-bc1..bc4, scc-bd3..bd8 (16 cores)
  25 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; sandybridge; SU charge 1.0

scc-ca1..ca8 (16 cores)
  8 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; sandybridge; SU charge 1.0

scc-c01 (20 cores)
  1 node, with 2 K40m GPUs [5]; 2 ten-core 2.6 GHz Intel Xeon E5-2660v3; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; haswell; SU charge 1.0

scc-c06, scc-c07 (36 cores)
  2 nodes; 2 eighteen-core 2.4 GHz Intel Xeon E7-8867v4; 1024 GB memory; 1068 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-c08..c11 (28 cores)
  4 nodes, with 2 P100 GPUs each [7]; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 849 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-ed1..ee4, scc-ef1, scc-fa1..fi4 (32 cores)
  45 nodes; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-ha1..he2, scc-ja1..jb2, scc-jc1, scc-jd1..je2 (12 cores)
  19 nodes, with 8 M2070 GPUs each [3]; 2 six-core 3.07 GHz Intel Xeon X5675; 48 GB memory; 427 GB scratch; 1 Gbps Ethernet, QDR Infiniband; nehalem; SU charge 0.85

scc-ib1, scc-ib2 (68 cores)
  2 nodes; 1 sixty-eight-core 1.4 GHz Intel Xeon Phi 7250 (Knights Landing); 192 GB memory; 152 GB scratch; 10 Gbps Ethernet; knl; SU charge 0.0

scc-ic1, scc-ic2 (28 cores)
  2 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 128 GB memory; 885 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-pa1..pa5, scc-pa7, scc-pa8, scc-pb1..pb8, scc-pc1..pc8, scc-pd1..pd8, scc-pe1..pe8, scc-pf1..pf8, scc-pg1..pg8, scc-ph1..ph8, scc-pi1..pi4 (16 cores)
  67 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; ivybridge; SU charge 1.0

scc-q01..q08 (16 cores [2])
  8 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 256 GB memory; 230 GB scratch; 10 Gbps Ethernet; ivybridge; SU charge 1.0 [3]

scc-ua1..ua4, scc-ub1..ub4, scc-uc1..uc4, scc-ud1..ud4, scc-ue1..ue4, scc-uf1..uf4, scc-ug1..ug4, scc-uh1..uh4, scc-ui1..ui4 (28 cores)
  36 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet, EDR Infiniband; broadwell; SU charge 1.0

scc-v01 (12 cores)
  1 node, with 1 K2200 GPU [8]; 1 twelve-core 2.4 GHz Intel Xeon E5-2620v3; 64 GB memory; 427 GB scratch; 10 Gbps Ethernet; haswell; SU charge 1.0

scc-v02, scc-v03 (8 cores)
  2 nodes, with 1 M2000 GPU each [9]; 1 eight-core 2.1 GHz Intel Xeon E5-2620v4; 128 GB memory; 427 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-va1..va4, scc-wa1..wa4, scc-wb1..wb4, scc-wc1..wc4, scc-wd1..wd4, scc-we1..we4, scc-wf1, scc-wf2, scc-wf4, scc-wg2..wg4, scc-wh1..wh4, scc-wi1..wi4, scc-wl1..wl4, scc-wm1..wm4, scc-wn1..wn4 (28 cores)
  50 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-wj1..wj4, scc-wk1..wk4 (28 cores)
  8 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 512 GB memory; 885 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-x05, scc-x06 (28 cores)
  2 nodes, with 2 V100 GPUs each [10]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0

scc-ya1..ya4, scc-yb1..yb4, scc-yc1..yc4, scc-yd1..yd4, scc-ye1..ye4, scc-yf1..yf4, scc-yg1..yg4, scc-yh1..yh4, scc-yi1..yi4, scc-yp3, scc-yp4, scc-yr4 (28 cores)
  39 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0

scc-yj1..yj4, scc-yk1..yk4, scc-ym4, scc-zk3 (28 cores)
  10 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 384 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0

scc-za1..za4, scc-zb1..zb4, scc-zc1..zc4, scc-zd1..zd4, scc-ze1..ze4, scc-zf1..zf4, scc-zg1..zg4, scc-zh1..zh4, scc-zi1..zi4 (28 cores)
  36 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet, EDR Infiniband; skylake; SU charge 1.0
Compute Nodes – Buy-In (Buy-In nodes have no SU charge for use by their owners.)

scc-ae5..ae7, scc-be4 (16 cores)
  4 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 256 GB memory; 885 GB scratch; 1 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-bc5..bc8, scc-bd1, scc-bd2, scc-be1, scc-be2 (16 cores)
  8 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; sandybridge; SU charge 1.0 [1]

scc-be3, scc-be5..be8 (16 cores)
  5 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-c03 (16 cores)
  1 node; 2 eight-core 2.0 GHz Intel Xeon E7-4809v3; 1024 GB memory; 794 GB scratch; 10 Gbps Ethernet; haswell; SU charge 1.0 [1]

scc-c04, scc-c05 (20 cores)
  2 nodes, with 4 K40m GPUs each [6]; 2 ten-core 2.6 GHz Intel Xeon E5-2660v3; 128 GB memory; 885 GB scratch; 10 Gbps Ethernet; haswell; SU charge 1.0 [1]

scc-c12..c14 (28 cores)
  3 nodes, with 4 P100 GPUs each [7]; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 849 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0

scc-cb1..cb4 (16 cores)
  4 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; sandybridge; SU charge 1.0 [1]

scc-cb5..cb8 (16 cores)
  4 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-cc1..cc8 (16 cores)
  8 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 10 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-da1..da4, scc-db1, scc-db2, scc-db4, scc-df4 (16 cores)
  8 nodes; 2 eight-core 2.7 GHz Intel Xeon E5-2680; 128 GB memory; 427 GB scratch; 1 Gbps Ethernet; sandybridge; SU charge 1.0 [1]

scc-dc2, scc-dc4, scc-dd1..dd4, scc-de1, scc-df1 (16 cores)
  8 nodes; 2 eight-core 2.7 GHz Intel Xeon E5-2680; 64 GB memory; 427 GB scratch; 1 Gbps Ethernet; sandybridge; SU charge 1.0 [1]

scc-e01, scc-e03 (32 cores)
  2 nodes, with 10 A6000 GPUs each [14]; 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R; 384 GB memory; 1729 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-e02 (32 cores)
  1 node, with 10 A6000 GPUs [14]; 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R; 384 GB memory; 849 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-e04 (32 cores)
  1 node, with 10 RTX8000 GPUs [15]; 2 sixteen-core 2.9 GHz Intel Xeon Gold 6226R; 384 GB memory; 3312 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-e05 (28 cores)
  1 node, with 10 TitanXp GPUs [16]; 2 fourteen-core 2.2 GHz Intel Xeon Gold 5120; 384 GB memory; 3312 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-ea1, scc-ea2, scc-eb1..eb3, scc-ec1, scc-ec2, scc-gb1..gb4, scc-gc1, scc-gc2, scc-gd4, scc-gk1, scc-gk2, scc-gl1, scc-gl3, scc-gm2, scc-gm3, scc-gn1..gn4, scc-go1..go4, scc-gr1, scc-gr2, scc-zk4, scc-zp2 (32 cores)
  32 nodes; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 384 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-ea3, scc-ea4, scc-ga1..ga4, scc-gc3, scc-gc4, scc-gd1..gd3, scc-ge1..ge4, scc-gf1..gf4, scc-gg1..gg4, scc-gh1..gh4, scc-gi1..gi4, scc-gj1..gj4, scc-gk3, scc-gk4, scc-gl2, scc-gl4, scc-gm1, scc-gm4, scc-gp1..gp4, scc-gq1..gq4, scc-zl1..zl4, scc-zm1..zm4, scc-zn1..zn4, scc-zo1..zo4, scc-zp1, scc-zp3, scc-zp4, scc-zq1..zq4, scc-gr3 (32 cores)
  73 nodes; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-f01 (24 cores)
  1 node, with 8 TitanV GPUs [17]; 2 twelve-core 2.3 GHz Intel Xeon Gold 5118; 384 GB memory; 1641 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-f02 (32 cores)
  1 node, with 5 RTX6000 GPUs [18]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 384 GB memory; 1802 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-f03 (32 cores)
  1 node, with 6 A40 GPUs [19]; 2 sixteen-core 2.9 GHz Intel Xeon Gold 6326; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-k01, scc-k02 (28 cores)
  2 nodes, with 4 P100 GPUs each [12]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 189 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-k03..k05 (28 cores)
  3 nodes, with 4 P100 GPUs each [12]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 189 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-k06 (28 cores)
  1 node, with 1 P100 GPU [12]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 384 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-k07..k09, scc-k11 (28 cores)
  4 nodes, with 2 V100 GPUs each [10]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-k10 (28 cores)
  1 node, with 1 V100 GPU [10]; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-ka1..ka6 (64 cores)
  6 nodes; 4 sixteen-core 2.3 GHz AMD Opteron 6276; 256 GB memory; 518 GB scratch; 10 Gbps Ethernet, QDR Infiniband; bulldozer; SU charge 0.85 [1]

scc-mb3, scc-mb4, scc-mc1..mc8, scc-me1..me7, scc-mf4, scc-mf5, scc-mf8, scc-mg1..mg8, scc-mh1..mh6, scc-ne5..ne7, scc-pi5..pi8 (16 cores)
  41 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-mb5..mb8, scc-md1..md8, scc-me8, scc-mf1, scc-mf2, scc-mf6, scc-mf7, scc-mh7, scc-mh8, scc-ne4 (16 cores)
  20 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 256 GB memory; 885 GB scratch; 1 Gbps Ethernet; ivybridge; SU charge 1.0 [1]

scc-na1..na8, scc-nb1..nb8, scc-nc1..nc8, scc-nd1..nd8, scc-ne1..ne3 (16 cores)
  35 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet, FDR Infiniband; ivybridge; SU charge 1.0 [1]

scc-q09..q12 (32 cores)
  4 nodes; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 1802 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-q13..q16 (16 cores [2])
  4 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2670; 128 GB memory; 230 GB scratch; 10 Gbps Ethernet; sandybridge; SU charge 1.0 [1][2]

scc-q17..q21 (28 cores [2])
  5 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-q23, scc-209 (32 cores)
  2 nodes, with 1 V100 GPU each [10]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-q24 (32 cores)
  1 node, with 1 V100 GPU [10]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 384 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-q25..q28 (32 cores)
  4 nodes, with 2 V100 GPUs each [10]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-q29 (32 cores)
  1 node, with 1 V100 GPU [11]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-q30 (32 cores)
  1 node, with 4 V100 GPUs [10]; 2 sixteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 189 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-q31..q36, scc-201..204 (32 cores)
  10 nodes, with 4 V100 GPUs each [10]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 189 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-sa1..sa8, scc-sb1..sb8 (16 cores)
  16 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 256 GB memory; 885 GB scratch; 1 Gbps Ethernet, FDR Infiniband; ivybridge; SU charge 1.0 [1]

scc-sc1, scc-sc2 (16 cores)
  2 nodes, with 2 K40m GPUs each [5]; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet, FDR Infiniband; ivybridge; SU charge 1.0 [1]

scc-sc3..sc6 (16 cores)
  4 nodes; 2 eight-core 2.6 GHz Intel Xeon E5-2650v2; 128 GB memory; 885 GB scratch; 1 Gbps Ethernet, FDR Infiniband; ivybridge; SU charge 1.0 [1]

scc-ta1..ta3, scc-to1 (20 cores)
  4 nodes; 2 ten-core 2.6 GHz Intel Xeon E5-2660v3; 128 GB memory; 885 GB scratch; 10 Gbps Ethernet; haswell; SU charge 1.0 [1]

scc-ta4, scc-tb1..tb3, scc-tc1..tc4, scc-td1..td4, scc-te1..te4, scc-tf1..tf4, scc-tg1..tg4, scc-th1..th4, scc-ti1..ti4, scc-tj1, scc-tj2, scc-tk1..tk4, scc-tl1..tl4, scc-tm1..tm4, scc-tn1, scc-tn2, scc-to2..to4 (20 cores)
  51 nodes; 2 ten-core 2.6 GHz Intel Xeon E5-2660v3; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; haswell; SU charge 1.0 [1]

scc-tj3, scc-tj4, scc-tr1, scc-tr3, scc-tr4, scc-wo1, scc-wo2, scc-wp3, scc-xa1..xa4, scc-xb1..xb4, scc-xc1..xc4, scc-xd1..xd4, scc-xe1..xe4, scc-xf1..xf4 (28 cores)
  32 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 512 GB memory; 885 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0 [1]

scc-tn3, scc-tn4, scc-tp1..tp4, scc-tq1..tq4, scc-tr2, scc-uj1..uj4, scc-uk1..uk4, scc-ul1..ul4, scc-um1, scc-um2, scc-wo3, scc-wo4, scc-wp1, scc-wp2, scc-wp4, scc-wq1..wq4, scc-wr1..wr4, scc-xg1..xg4, scc-xh1..xh4, scc-xi1..xi4, scc-xj1..xj4, scc-xk1..xk4, scc-xl1..xl4, scc-xm1..xm4 (28 cores)
  66 nodes; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 885 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0 [1]

scc-un1..un4, scc-uo1..uo4, scc-yq1..yq4 (28 cores)
  12 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-up1, scc-up2, scc-zk1, scc-zk2 (28 cores)
  4 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 384 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-up3, scc-up4, scc-uq1..uq4, scc-yl1..yl4, scc-ym1..ym3, scc-yn1..yn4, scc-yo1..yo4, scc-yp2, scc-yr1..yr3, scc-zj1..zj4 (28 cores)
  29 nodes; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-v04..v07 (8 cores)
  4 nodes, with 1 M2000 GPU each [9]; 1 eight-core 2.1 GHz Intel Xeon E5-2620v4; 128 GB memory; 427 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0 [1]

scc-x01..x04 (28 cores)
  4 nodes, with 2 P100 GPUs each [7]; 2 fourteen-core 2.4 GHz Intel Xeon E5-2680v4; 256 GB memory; 849 GB scratch; 10 Gbps Ethernet; broadwell; SU charge 1.0 [1]

scc-x07, scc-q22 (32 cores)
  2 nodes; 2 sixteen-core 2.4 GHz AMD EPYC 7351; 1024 GB memory; 885 GB scratch; 10 Gbps Ethernet; epyc; SU charge 1.0 [1]

scc-yp1 (28 cores)
  1 node; 2 fourteen-core 2.6 GHz Intel Xeon Gold 6132; 768 GB memory; 885 GB scratch; 10 Gbps Ethernet; skylake; SU charge 1.0 [1]

scc-205 (32 cores)
  1 node, with 4 A100 GPUs [13]; 2 sixteen-core 2.9 GHz Intel Xeon Gold 6326; 256 GB memory; 859 GB scratch; 10 Gbps Ethernet; icelake; SU charge 1.0 [1]

scc-207 (32 cores)
  1 node, with 1 A100 GPU [13]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 768 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

scc-208 (32 cores)
  1 node, with 2 V100 GPUs [10]; 2 sixteen-core 2.8 GHz Intel Xeon Gold 6242; 192 GB memory; 885 GB scratch; 10 Gbps Ethernet; cascadelake; SU charge 1.0 [1]

[1] These machines have limited access; not all SCF users can fully utilize these systems. For those users with special access to these systems, the SU charge is 0.0 on these systems only.
[2] These nodes are intended primarily for running Hadoop jobs, which are handled differently than other jobs on the system. Limited regular batch jobs can run on these nodes when they are not being utilized for Hadoop jobs.
[3] Each of these nodes has eight NVIDIA Tesla M2070 GPU cards with 6 GB of memory.
[4] Each of these nodes has three NVIDIA Tesla M2050 GPU cards with 3 GB of memory.
[5] Each of these nodes has two NVIDIA Tesla K40m GPU cards with 12 GB of memory.
[6] Each of these nodes has four NVIDIA Tesla K40m GPU cards with 12 GB of memory.
[7] Each of these nodes has two or four (specified above) NVIDIA Tesla P100 GPU cards with 12 GB of memory.
[8] This node is for support of VirtualGL and has one NVIDIA K2200 GPU card with 4 GB of memory.
[9] These nodes are for support of VirtualGL; each has one NVIDIA M2000 GPU card with 4 GB of memory.
[10] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla V100 GPU cards with 16 GB of memory.
[11] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla V100 GPU cards with 32 GB of memory.
[12] Each of these nodes has one, two, or four (specified above) NVIDIA Tesla P100 GPU cards with 16 GB of memory.
[13] Each of these nodes has the indicated number of NVIDIA A100 GPU cards with 40 GB of memory each.
[14] Each of these nodes has the indicated number of NVIDIA A6000 GPU cards with 48 GB of memory each.
[15] Each of these nodes has the indicated number of NVIDIA RTX8000 GPU cards with 48 GB of memory each.
[16] Each of these nodes has the indicated number of NVIDIA TitanXp GPU cards with 12 GB of memory each.
[17] Each of these nodes has the indicated number of NVIDIA TitanV GPU cards with 12 GB of memory each.
[18] Each of these nodes has the indicated number of NVIDIA RTX6000 GPU cards with 24 GB of memory each.
[19] Each of these nodes has the indicated number of NVIDIA A40 GPU cards with 48 GB of memory each.

All of the GPUs are currently in ‘friendly user’ mode, so at the moment only CPU usage is charged on SCC nodes with GPUs.

The Knights Landing nodes (scc-ib1 and scc-ib2) are also currently in ‘friendly user’ mode with no charge for CPU usage on these nodes.

Each node type is charged at a different SU rate per hour depending on its speed, memory, and other factors, as shown in the table above. Our allocations policy is explained in more detail in the SCC allocations documentation.

Batch System and Usage

The batch system on the SCC is the Open Grid Scheduler (OGS), which is an open source batch system based on the Sun Grid Engine scheduler.
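
For reference, the sketch below shows the general shape of a batch script for this scheduler. It is illustrative only: the job name, module, program, and resource values are placeholders rather than recommended settings, and the omp parallel environment and h_rt limit are the options discussed in the next section; adjust everything to your own job.

    #!/bin/bash -l
    # Minimal example batch script (all names and values are placeholders).
    #$ -N my_job                # job name
    #$ -l h_rt=12:00:00         # requested wall clock limit (HH:MM:SS)
    #$ -pe omp 4                # request 4 cores on a single node
    #$ -j y                     # merge stderr into stdout

    module load gcc             # load whatever software modules the job needs (placeholder)
    ./my_program                # the actual workload (placeholder)

A script like this is submitted with qsub (for example, qsub my_job.sh), and qstat -u $USER shows its state while it is queued or running.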

Job Run Time Limits
The limits below apply to the shared SCC resources. Limits on buy-in nodes are defined by their owners.

15 minutes of CPU time
  Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of cores.

12 hours default wall clock
  Jobs on the batch nodes have a default wall clock limit of 12 hours, but this can be increased depending on the type of job. Use the qsub option -l h_rt=HH:MM:SS to request a higher limit (see the examples after this list).

720 hours – serial job; 720 hours – omp (1 node) job; 120 hours – MPI job; 48 hours – GPU job
  Single-processor (serial) and omp (multiple cores, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs for up to 48 hours.

1000 cores
  An individual user is allowed at most 1000 shared cores simultaneously in the run state. This limit does not affect job submission.
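
The wall clock limits above are requested at submission time with -l h_rt. The lines below sketch typical requests for each job class; the script names are placeholders, and the parallel environment names (omp and the MPI environments) and the gpus resource are SCC conventions, so check the current SCC documentation for the exact names before relying on them.

    # Request the maximum allowed wall clock time for each class of shared-node job.
    qsub -l h_rt=720:00:00 serial_job.sh                               # serial job, up to 720 hours
    qsub -pe omp 16 -l h_rt=720:00:00 omp_job.sh                       # single-node multicore (omp) job, up to 720 hours
    qsub -pe mpi_16_tasks_per_node 32 -l h_rt=120:00:00 mpi_job.sh     # MPI job, up to 120 hours
    qsub -l gpus=1 -l h_rt=48:00:00 gpu_job.sh                         # GPU job, up to 48 hours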

Note that usage on the SCC is charged by wall clock time: if you request 8 cores for 12 hours and your code finishes after 3 hours, you are charged for 24 core-hours (8 cores × 3 hours of wall clock), even if your job did not make use of all of the requested cores. The SU charge is this wall clock usage multiplied by the SU factor of the node(s) used.
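
As a concrete illustration of that calculation (using assumed example values and the 1.0 SU factor of most shared nodes):

    # SU charge = cores requested x wall clock hours used x SU factor of the node type.
    cores=8; hours=3; su_factor=1.0            # example values, not defaults
    echo "$cores * $hours * $su_factor" | bc   # prints 24.0 (SUs charged)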