SCF Principal Investigators Information
If you have not yet applied for a new project and are looking into doing so, please also read this information for prospective principal investigators. All users of our systems must be a member of a research project. The new project application is here.
The Boston University Scientific Computing Facilities (SCF) consist of a collection of high-performance computers, high-speed networks, and advanced visualization facilities. These facilities are managed by the Scientific Computing and Visualization (SCV) group of Information Services & Technology (IS&T) in close consultation with the Research Computing Governance Committee, the BU Center for Computational Science, and the Rafik B. Hariri Institute for Computing and Computational Science & Engineering. The shared services are offered free of charge.
As a principal investigator on a project, you have additional resources and responsibilities, which are described below.
As a principal investigator, it is your responsibility to keep track of and maintain your project. This is done via web pages we have set up for each project, which are accessible from our Resource Requests page. On these pages you can:
- Add, delete, or alter the status of users on your project.
- Request additional processor time or project disk space.
- Renew your project annually (required).
- Look at recent summaries of usage on your projects.
- Submit a request to start a new project.
All of the above actions except the last one will require you to enter your BU login ID and Kerberos password. Access to information on your project is limited to you (and your Administrative Contact, if you have one). Some users also have a local SCF password, but as of July 2013, only your Kerberos password can be used to access our web materials.
You should be able to access the web section for your project as soon as you are told that your project has been approved and your account set up.
Although there are no monetary charges for use of our shared services, all projects are given a specific annual usage allocation, in terms of “service units” (SUs). This allocation must be renewed (and adjusted if appropriate) annually. The table below shows the SU charge on each of our systems for 1 hour of CPU usage. This charge is based on clock speed and other performance-related factors:
|Cluster Name|Host Names|Processor Type & Speed|SU charge for each CPU hour|
|---|---|---|---|
|SCC|scc1, scc2, geo, scc4|2.5 GHz Intel Xeon E5-2640 nodes (96 GB memory)|2.5|
|SCC|scc-a*, scc-b*|2.6 GHz Intel Xeon E5-2670 nodes (128 GB memory)|2.6/0.01|
|SCC|scc-c*|2.6 GHz Intel Xeon E5-2670 nodes (256 GB memory)|2.6/0.01|
|SCC|scc-da1..da4, scc-db1..db4, scc-df3, scc-df4|2.7 GHz Intel Xeon E5-2680 nodes (128 GB memory)|2.7/0.01|
|SCC|scc-dc1..dc4, scc-dd1..dd4, scc-de1..de4, scc-df1, scc-df2|2.7 GHz Intel Xeon E5-2680 nodes (64 GB memory)|2.7/0.01|
|SCC|scc-e*, scc-f*|2.66 GHz Intel Xeon X5650 nodes (48 GB memory) + GPUs|1.8/0.01|
|SCC|scc-ga01..ga09, scc-gb14|2.93 GHz Intel Xeon X5570 nodes (24 GB memory)|1.9|
|SCC|scc-ga10, scc-ga11|2.93 GHz Intel Xeon X5570 nodes (96 GB memory)|1.9|
|SCC|scc-gb01..gb03¹|2.93 GHz Intel Xeon X5670 nodes (48 GB memory)|1.9/0.01|
|SCC|scc-gb04..gb13¹|2.93 GHz Intel Xeon X5670 nodes (96 GB memory)|1.9/0.01|
|SCC|scc-h*, scc-j*|3.07 GHz Intel Xeon X5675 nodes (48 GB memory) + GPUs|2.2|
|SCC|scc-k*|2.3 GHz AMD Opteron 6276 nodes (256 GB memory, except scc-ka8, which has 512 GB)|2.2/0.01|
|IBM Blue Gene|levi.bu.edu¹, lee.bu.edu¹|IBM Blue Gene (700 MHz PowerPC 440)|0.25|
|Katana Cluster|katana.bu.edu, katana-a02..a14, katana-b01..b08|2.6 GHz AMD Opteron 2218HE nodes|1.0|
|Katana Cluster|katana-b09..b14|3.0 GHz Intel Xeon E5450 nodes|1.5|
|Katana Cluster|katana-c01..c14|2.4 GHz AMD Opteron 2216HE nodes|0.9|
|Katana Cluster|katana-f01..f14, katana-g01..g13|2.4 GHz AMD Opteron 2216HE nodes|0.9|
|Katana Cluster|katana-h01..h02|2.93 GHz Intel Xeon X5670 nodes (48 GB memory)|1.9|
|Katana Cluster|katana-z01|2.93 GHz Intel Xeon X5670 node (24 GB memory)|1.9|
|Katana Cluster|katana-z02..z03|2.6 GHz Intel Xeon E5-2670 nodes (64 GB memory)|2.2|
|Katana Cluster|katana-z04..z06|2.5 GHz Intel Xeon E5-2640 nodes (96 GB memory)|2.2|
¹ All or some of these machines have limited access; not all SCF users can fully utilize these systems. For those users with special access to these systems, the SU charge is 0.0 for these systems only. The Technical Summary provides more details on exactly which nodes are limited access.
Charging on the SCC began on July 1, 2013 at the rates shown above.
Note that by “CPU” we are referring to a single “core.” Also note that time is charged differently on the Katana Cluster than on the other systems. On the Katana Cluster, you are charged only for the CPU time you actually use; a disk-intensive job is therefore charged less than a CPU-intensive job over the same wall-clock period. On the SCC and the Blue Gene, you are charged for the wall-clock time you hold each processor (so requesting 32 processors and running for 2 hours on the Blue Gene will cost 32 × 2 × 0.25 = 16 SUs, regardless of how much actual computation your job does).
Since we like to have a rough idea of the anticipated load on each machine, requests are made for the number of CPU-hours on the different cluster types, and these are converted to SUs. However, your SU allocation may be spent on any of the machines to which you have access. Your CPU-time usage will be reported to you in SUs.
As an example, let’s say you request 4000 CPU-hours on the Blue Gene. You will be awarded 1000 SUs (4000*0.25). You could use your 1000 SUs to run 4000 CPU-hours on the Blue Gene (1000/0.25) or 667 CPU-hours on the 3.0 GHz Katana blades (1000/1.5) or some combination of usage on the various systems. Note that even though the SCC/Katana Cluster group contains blades with several different charge rates, SUs are awarded at the rate of 1.0 SU per requested CPU-hour.
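The charging and conversion arithmetic above can be sketched in a few lines of Python. This is only an illustration, not an SCF tool; the rates come from the table, and the dictionary keys and function names are made up for the example:

```python
# Illustrative sketch of SU accounting, assuming the rates in the table above.
# Rates are SUs charged per CPU-hour; keys are invented labels, not real host names.
SU_RATES = {
    "bluegene": 0.25,       # IBM Blue Gene
    "katana_e5450": 1.5,    # 3.0 GHz Intel Xeon E5450 Katana blades
    "scc_e5-2640": 2.5,     # 2.5 GHz Intel Xeon E5-2640 SCC nodes
}

def su_cost(system: str, cpus: int, wall_hours: float) -> float:
    """Wall-clock charging (SCC and Blue Gene): every requested
    processor is charged for the full wall-clock duration."""
    return cpus * wall_hours * SU_RATES[system]

def cpu_hours_affordable(system: str, su_budget: float) -> float:
    """How many CPU-hours a given SU budget buys on one system."""
    return su_budget / SU_RATES[system]

# 32 processors for 2 hours on the Blue Gene:
print(su_cost("bluegene", 32, 2))                   # 16.0 SUs
# Spending 1000 SUs entirely on the Blue Gene:
print(cpu_hours_affordable("bluegene", 1000))       # 4000.0 CPU-hours
# ...or entirely on the 3.0 GHz Katana blades:
print(cpu_hours_affordable("katana_e5450", 1000))   # ~666.7 CPU-hours
```

The same budget buys very different amounts of compute depending on where it is spent, which is why requests are stated in CPU-hours per cluster type but tracked in SUs.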
Projects which exceed their allocation will be prohibited from running additional jobs. It is your responsibility as a principal investigator to monitor your project’s usage and to request an appropriate allocation of time/SUs. Information on accounts and allocations may also be found on the Accounts & Project Maintenance pages.
The default allocation is a combined 500 SUs. You can request a larger allocation during your annual project renewal, or at other times by submitting the “Request Additional CPU Allocation” form, which can be found by following the appropriate link on the Resource Requests page. Large allocation requests are reviewed by the SCF Allocation Committee and generally require one to two months for a decision. All projects must be renewed annually (or at the project’s end date, whichever is sooner), at which time the project’s SU allocation is reset to the new year’s amount. Leftover SUs from the prior year do not carry over.
Each month we will send you a summary of usage and remaining allocations for each of your projects. Individual researchers on your projects will be sent a summary of their own usage for each project they are on. Individuals may also review the details of their recent usage and monthly summary information using the password protected web pages which may be found under the Accounts & Project Maintenance page.
Individuals are automatically granted disk space in their home directories. Projects are automatically granted Project Disk Space, both backed up and not backed up. This space will be under /project/project_group_name/ and /projectnb/project_group_name/ or, for those projects with restricted-use data, /restricted/project/project_group_name/ and /restricted/projectnb/project_group_name/. If you need additional space, you can request it by following the appropriate link from here. Note that you will generally need to pay for requests over 1 terabyte.
Queries regarding any information in this document should be directed to firstname.lastname@example.org.