As AI continues its explosive growth, so does its energy demand, pushing the U.S. electric grid toward its limits. With the rise of increasingly complex AI models and cloud-based applications, data centers are becoming power-hungry giants. Projected to use up to 9% of U.S. power by 2030, data centers increasingly strain the grid, threatening the resilience of everyday services like air conditioning, ATMs, and internet access.

Ayse Coskun, a professor of engineering (ECE, SE) and director of the Center for Information & Systems Engineering (CISE) at Boston University, has pioneered transformative research at the forefront of a paradigm shift in how data centers approach energy consumption. Her research on data center energy flexibility is increasingly critical to grid stability and sustainability. Coskun is now extending her expertise to the commercial sector as chief scientist at Emerald AI, a new company that aims to control the power demand of data centers running AI workloads while ensuring performance guarantees.
Emerald AI is developing a software platform that interfaces with grid signals to dynamically orchestrate compute workloads, adjusting data center power use to meet both grid and performance requirements. With this capability, Emerald AI envisions a system of “AI Virtual Power Plants” that transforms data centers from power-hungry liabilities into grid-stabilizing assets. As chief scientist, Coskun shapes Emerald’s vision and the technical scoping of the software, guiding its demos, prototypes, and products.
“Today, two massive infrastructures are colliding—data centers and the power grid,” says Coskun. “The explosive growth in data center energy demand is outpacing what the grid can handle. Our platform sits at the interface, enabling power flexibility so data centers can come online faster, AI can scale more broadly, and the grid can grow more resilient, reliable, and affordable.”
Adaptive Frameworks for Flexible Computing
Historically, the focus in data center development has been on scaling computing resources—more cores, faster processors, and bigger storage solutions. This approach has driven rising energy consumption, putting immense pressure on local power grids. The traditional model, which prioritizes raw computational capability over power consumption, is unsustainable in today’s energy-constrained landscape.
Coskun’s research reimagines the role of AI data centers: through flexible computing, they can dynamically adjust their power usage in response to grid conditions, helping to stabilize the power grid when demand fluctuates.
“The key idea is to compute more when electricity is more available, cheaper, or greener, and less when it’s scarce,” says Coskun.
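This compute-when-power-is-plentiful principle can be pictured with a toy scheduler. The sketch below is a deliberate simplification with hypothetical job names and a made-up price threshold, not Emerald AI's actual software: when the electricity price spikes, deferrable batch work is queued for later while latency-sensitive services keep running.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool  # batch work can wait; interactive services cannot

def schedule(jobs, price_per_mwh, price_threshold=60.0):
    """Toy price-aware scheduler: run everything while electricity is
    cheap (or green); during a price spike, run only the jobs that
    cannot be deferred and queue the rest for later."""
    if price_per_mwh <= price_threshold:
        return [j.name for j in jobs], []
    run = [j.name for j in jobs if not j.deferrable]
    defer = [j.name for j in jobs if j.deferrable]
    return run, defer

jobs = [Job("inference-api", deferrable=False),
        Job("model-training", deferrable=True)]
run, defer = schedule(jobs, price_per_mwh=95.0)  # price spike: defer training
```

A production system would also weigh carbon intensity, job deadlines, and grid reserve commitments, but the core decision is the same: shift flexible compute toward cheaper, greener hours.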
Coskun’s early work focused on power efficiency in multi-core processors and high-performance computing (HPC) environments. She first brought her energy-aware computing ideas into the power systems field over a decade ago through a collaboration with CISE co-founder Michael Caramanis, a BU professor of Systems Engineering and expert in electricity markets and demand response. Together with her team at the Performance and Energy Aware Computing Lab (PEACLab), they developed EnergyQARE, an adaptive bidding and runtime policy that enables data centers to provide regulation services—balancing short-term grid supply and demand—while maintaining good workload Quality of Service (QoS) and reducing electricity costs.
Building on EnergyQARE, Coskun later teamed with Yannis Paschalidis, a distinguished professor of engineering and Hariri Institute director, and PEACLab researchers to develop a scalable demand response method for high-performance computing clusters. In their paper, HPC Data Center Participation in Demand Response: An Adaptive Policy With QoS Assurance, they introduce an adaptive policy that dynamically schedules workloads while maintaining strict QoS under real-world constraints such as server power limits. In a prototype on real hardware at the Massachusetts Green High Performance Computing Center, the team demonstrated accurate, real-time tracking of grid power signals.
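The flavor of such a policy can be sketched in a few lines. This is a hedged simplification with invented numbers, not the paper's actual algorithm: the cluster tracks a grid-requested power target by varying how many servers are active, but never drops below the capacity floor needed to keep workload QoS.

```python
def track_power_signal(target_kw, server_kw=0.5, total_servers=1000,
                       min_servers_for_qos=300):
    """Pick how many servers stay active so cluster power draw tracks a
    grid-requested target, clamped to a QoS floor: the cluster never
    sheds so much capacity that service guarantees would be violated."""
    wanted = round(target_kw / server_kw)      # servers needed to hit target
    active = max(min_servers_for_qos,          # QoS floor
                 min(total_servers, wanted))   # physical server count ceiling
    return active, active * server_kw          # (active servers, actual kW)

# Following a dipping grid signal: the 50 kW request is only partially
# honored, because meeting it fully would violate the QoS floor.
for target in (400.0, 200.0, 50.0):
    active, actual_kw = track_power_signal(target)
```

The real policy additionally adapts its bids and schedules individual jobs; the clamp between a QoS floor and a physical ceiling is the part this sketch keeps.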
“By integrating adaptive policies into demand response, we demonstrated how data centers could actively participate in grid management, optimizing energy use without compromising performance constraints,” says Coskun. “This work helped advance ideas about shifting data centers from being energy consumers to active contributors to grid stability. With the emergence of AI, our next step was developing predictive models to optimize power consumption even further, maximizing efficiency and grid reliability.”
AI-driven Strategies and Grid-Wide Synergy for Flexible, Scalable Demand Response in Data Centers
Coskun’s recent focus has turned to developing multi-layered, scalable data center demand response solutions that employ machine learning and collaborative frameworks without compromising Quality of Service.
In their paper Learning a Data Center Model for Efficient Demand Response, Coskun and collaborators introduce a machine-learning-based model that optimizes demand response at the individual data center level by predicting power market bidding parameters for power and reserve capacity. In A Collaboration Framework for Multi-Data-Center Demand Response, Coskun and her team extend demand response to the regional scale, describing a framework in which independent data centers dynamically coordinate power adjustments based on real-time QoS feedback. Coordinating across data centers further improves both participation in power grid programs and workload QoS.
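One simple way to picture multi-data-center coordination is proportional allocation: a regional power-reduction request is split across centers according to how much each reports it can shed without hurting QoS. The sketch below is a hypothetical illustration of that idea with made-up center names, not the framework from the paper:

```python
def allocate_reduction(total_cut_kw, headroom_kw):
    """Split a grid-requested power reduction across data centers in
    proportion to each center's reported QoS headroom (the kW it can
    shed right now without violating its service targets)."""
    capacity = sum(headroom_kw.values())
    cut = min(total_cut_kw, capacity)  # can't shed more than total headroom
    return {dc: cut * h / capacity for dc, h in headroom_kw.items()}

shares = allocate_reduction(900.0, {"dc-east": 600.0,
                                    "dc-west": 300.0,
                                    "dc-south": 300.0})
# dc-east, with twice the headroom, absorbs half the requested cut.
```

In the paper's framework the coordination is dynamic, with centers updating their commitments as real-time QoS feedback arrives; this static split only shows why pooling headroom lets a region take on larger grid commitments than any single center could.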
Transforming Data Centers into Energy Assets
“Our current vision is the result of over a decade of research and a broad, collaborative effort,” says Coskun. “My PhD alum Daniel Wilson (PhD’24, ECE) and I began exploring commercialization through BU Spark!, where we conducted customer discovery and assessed technology needs. We also benefited from the guidance of Richard Stuebi, a Questrom lecturer with more than 35 years of energy industry experience, who is now an Emerald AI advisor.”
Under Coskun’s technical leadership, the Emerald AI team is rapidly moving from research to real-world impact, proving that data centers can be both high-performance and grid-supportive. Emerald AI is currently demonstrating its software platform in field tests, working closely with state agencies and grid operators to fine-tune the technology. In a recent field test in Phoenix, Arizona, Emerald AI’s software reduced data center power usage by 25 percent during a period of peak electricity demand while meeting flexible service level agreement (SLA) constraints.
Emerald AI will also demonstrate its platform in the Electric Power Research Institute’s (EPRI) DCFlex program, a three-year initiative aimed at balancing power system demands with the energy needs of large loads to ease the strain on the power grid. The program features 10 upcoming demonstrations of different industry solutions for data center flexibility; Emerald AI’s demonstration will showcase the potential of computational flexibility to address power grid needs. Results from DCFlex will guide the development of a framework for implementing operational flexibility.
“This is a paradigm shift—data centers are no longer passive energy hogs, but intelligent agents shaping the future of the power grid,” says Coskun. “We’re unlocking a new era where performance and sustainability go hand in hand. It’s incredibly exciting to see years of research now translating into real-world impact, helping build a more sustainable and resilient energy future.”