Boston University is a partner in the Massachusetts Green High-Performance Computing Center (MGHPCC), a collaboration of universities, industry, and the Massachusetts state government. The group has recently opened a new data center in Holyoke, Massachusetts, to take advantage of the abundant clean, renewable energy from Holyoke Gas and Electric's hydroelectric power plant on the Connecticut River. MGHPCC partners include university consortium members Boston University, Harvard University, Massachusetts Institute of Technology, Northeastern University, and the University of Massachusetts; industry partners Cisco and EMC; and the Commonwealth of Massachusetts.

MGHPCC is a world-class, high-performance computing center with an emphasis on green, sustainable computing. The green aspects of the project range from powering the data center with clean, sustainable electricity to fostering research collaborations in energy, climate, and the environment. The continuing development of this center is creating unprecedented opportunities for collaboration between research, government, and business in Massachusetts.

The MGHPCC is the first university research data center to receive LEED® Platinum certification, the highest green building ranking, and one of only 13 data centers in the country to earn it.

The MGHPCC is designed to support the growing scientific and engineering computing needs at five of the most research-intensive universities in Massachusetts – Boston University, Harvard University, MIT, Northeastern University, and the University of Massachusetts. The computing infrastructure in the MGHPCC facility includes 33,000 square feet of computer room space optimized for high-performance computing systems, a 19MW power feed, and a high-efficiency cooling plant that can support up to 10MW of computing load. The on-site substation includes provisions for expansion to 30MW, and the MGHPCC owns an 8.6-acre site, leaving substantial space for the addition of new floor space. The communication infrastructure includes a dark fiber loop that passes through Boston and New York City and connects to the NoX, the regional education and research network aggregation point. Boston University is connected to the MGHPCC through two pairs of 10 GigE connections, providing an aggregate capacity of 40 Gb/s from its campus to its resources located in the Holyoke facility.
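The aggregate figure above follows directly from the link count and per-link speed; a minimal sketch of the arithmetic (variable names are illustrative, not from any BU tooling):

```python
# Aggregate capacity of BU's campus-to-Holyoke network links.
# Two pairs of 10 Gigabit Ethernet connections = 4 links total.
pairs = 2
links_per_pair = 2
per_link_gbps = 10  # 10 GigE

aggregate_gbps = pairs * links_per_pair * per_link_gbps
print(aggregate_gbps)  # 40 (gigabits per second, not gigabytes)
```

Note that the unit is gigabits per second (Gb/s); in bytes, 40 Gb/s is 5 GB/s.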

Key features of the MGHPCC include:
  • Modern, controlled data center facility for research computing
  • 8.6-acre site and 90,000 sq. ft. building provide for future expansion
  • High-performance networking between the campuses and the resources in Holyoke
  • Inexpensive, renewable and clean power
  • Efficient design with a low power usage effectiveness (PUE) and a low carbon footprint
  • Brownfield cleanup and remediation of an old mill site
  • Economic development and revitalization in Holyoke, MA
  • Opportunities for shared facilities and services
  • Opportunities for collaboration with other institutions

Boston University Service Models and Programs

Boston University offers several service models and programs, most of which are already in practice. These offerings, which include shared, buy-in/coop, dedicated, and co-located systems, provide researchers with a full spectrum of computing options ranging from university-wide, fully shared resources to dedicated, individually owned and operated machines. In most cases the systems are centrally managed by RCS staff; in all cases the physical infrastructure – space, power, cooling, and core networking – is provided by the University without charge-back to individual researchers or departments.

  • The shared service model applies to equipment which is acquired with a significant university contribution, either fully funded centrally or under an institutional-level infrastructure grant leveraged by substantial matching funds. These computing resources are offered without charge to all faculty and research staff on a fair-share, allocation basis. Allocations are reviewed by a committee of faculty and staff. Nearly half of the Shared Computing Cluster (SCC) is available to researchers as a shared service.
  • The buy-in program allows researchers to acquire additional, standardized hardware to support their individual research projects. The additional resources are integrated into the shared facility and managed centrally by the RCS group. Equipment owners receive priority access, and any excess capacity is returned to the pool for general, shared use. Owners also determine the access and queuing policies for their portion of the facility. All other standard services are provided without charge. Over half of the SCC is owned by buy-in participants. Detailed information about the Buy-in Program is available.
  • Dedicated service is provided for systems that are non-standard or otherwise cannot be shared under the buy-in model due to their specialized computing requirements. The systems are acquired under individual research grants, but are hosted and managed centrally by the RCS staff. Physical infrastructure is provided without charge, but equipment, systems administration, software licenses and other direct costs are paid directly by the researcher. Usage policies and software stacks are specified by the owner.
  • Co-location (co-lo) is a stand-alone service that is provided at the MGHPCC facility only. The University provides only rack space, power, cooling, and a base (1GigE) network connection. The owner is responsible for all costs of the system – hardware, software, systems administration, and maintenance – including time-and-materials costs for operational support (remote hands) provided by or through the MGHPCC.


The MGHPCC opened on November 16, 2012. The new Boston University Shared Computing Cluster entered friendly-user operation for all SCF users on April 26, 2013 and has been in production since June 2013.

Latest News

  • Research Computing Shared Computing Cluster (SCC) Downtime – March 7, 2023

    March 3, 2023

    The Shared Computing Cluster (SCC) will be offline on Tuesday, March 7, from 9 am to 5 pm for networking maintenance. RCS will post updates before, during, and after the downtime. Systems impacted: SCC (login and batch nodes, home directories, and project disk space), SCC OnDemand, the Linux Virtual Lab, and ATLAS. Queue draining: the SCC scheduler will not dispatch jobs that have specified a runtime that extends...