Massachusetts Green High Performance Computing Center

Holyoke, MA

Boston University is a founding partner in the Massachusetts Green High-Performance Computing Center (MGHPCC), a collaboration of universities, industry, and the Massachusetts state government. The group is building a new data center in Holyoke, Massachusetts, to take advantage of the abundant clean, renewable energy from Holyoke Gas and Electric’s hydroelectric power plant on the Connecticut River. This project will create a world-class, high-performance computing center with an emphasis on green, sustainable computing and unprecedented opportunities for collaboration among research, government, and business in Massachusetts.

The computing infrastructure in the MGHPCC facility includes 33,000 square feet of computer room space optimized for high performance computing systems, a 15 MW power feed, and a high-efficiency cooling plant that can support up to 10 MW of computing load. The on-site substation includes provisions for expansion to 30 MW, and the MGHPCC owns an 8.6-acre site, leaving substantial space for the addition of new floor space. The communication infrastructure includes a dark fiber loop that passes through Boston and New York City and connects to the NoX, the regional education and research network aggregation point. Boston University will initially have a pair of 10 GigE connections from its campus to its resources at the Holyoke facility.

Service Models and Programs for Research Computing

Several service models and programs, most of which are already in practice, will be offered at the MGHPCC. These offerings, which include shared, buy-in/co-op, dedicated, and co-located systems, provide researchers with a full spectrum of computing options, ranging from university-wide, fully shared resources to dedicated, individually owned and operated machines. In most cases the systems are centrally managed by IS&T’s Scientific Computing and Visualization (SCV) staff; in all cases the physical infrastructure – space, power, cooling, and core networking – is provided by the University without charge-back to individual researchers or departments. Since older equipment does not offer the same energy efficiency as newer equipment, a five-year lifetime is assumed for all equipment installed in the MGHPCC.


The shared service model applies to equipment acquired with a significant university contribution, either fully funded centrally or funded under an institutional-level infrastructure grant leveraged by substantial matching funds. These computing resources are offered without charge to all faculty and research staff on a fair-share allocation basis. Allocations are reviewed by a committee of faculty and staff. The primary shared computing resource today is the Katana Linux cluster, comprising 1,308 cores. Once moved to the MGHPCC, this cluster is expected to grow to a few thousand cores and will include the recently acquired 320-core, 160-GPU HP/Nvidia system. For more information on using the shared computing facility, please visit


The buy-in program allows researchers to acquire additional, standardized hardware to support their individual research projects. These resources are integrated into the shared facility and managed centrally by the SCV group. The owners of the equipment are given priority access, while any excess capacity is returned to the pool for general, shared use. The owners determine the access and queuing policies for their portion of the facility. All other standard services are provided without charge. While buy-in nodes can be purchased at any time, the next bulk buy-in program, targeted for the MGHPCC, will be announced in the spring of 2013. More information, including hardware models and pricing, is available at More than half of the current shared Linux cluster consists of buy-in nodes. Buy-in equipment has a five-year lifetime, and five years of on-site maintenance are included in the purchase price.


Dedicated service is provided for systems that are non-standard or otherwise cannot be shared under the buy-in model because of their specialized computing requirements. These systems are acquired under individual research grants but are hosted and managed centrally by the SCV staff. Physical infrastructure is provided without charge, but equipment, systems administration, software licenses, and other direct costs are paid by the researcher. Usage policies and software stacks are specified by the owner. Please contact Glenn Bresnahan for more information. Dedicated equipment is expected to have a five-year refresh cycle and must be covered by a hardware maintenance contract with on-site service.


Co-location (co-lo) is a stand-alone service that will be offered once the MGHPCC facility is operational. The University will provide only rack space, power, cooling, and a base (1 GigE) network connection. The owner is responsible for all costs of the system: hardware, software, systems administration, and maintenance, including time-and-materials costs for operational support (remote hands) provided by or through the MGHPCC. Please contact Glenn Bresnahan for more information.


The MGHPCC is now open and will be brought into production operations during the first two quarters of 2013. The next bulk buy-in for installation in the MGHPCC will be announced in the spring of 2013.