Boston University Ramps Up Strategy on AI
New Artificial Intelligence Development Accelerator (AIDA) for Academic and Administrative Excellence is a strategic initiative to “create an environment that nurtures innovation and progress”

BU President Melissa Gilliam said of the initiative, “There are pivotal moments where we can embrace change, where we have to envision a future and create an environment that nurtures innovation and progress.” Photo via iStock/MF3d
Artificial intelligence is reshaping broad swaths of life so fast that it can feel impossible to keep up. Everything from education to weather forecasting, from virtual personal assistants to preventive healthcare, from financial trading to social media now incorporates AI in some form. It’s becoming clearer by the day that the future of work and learning will require at least some AI literacy—if not fluency.
Positioning itself to be at the forefront of this revolution, Boston University is creating what it’s calling the Artificial Intelligence Development Accelerator (AIDA) for Academic and Administrative Excellence, an initiative designed to coalesce the long-standing and widespread investment in—and experimentation with—AI at BU. (A new website for AIDA can be found here.)
“Over the past three years, artificial intelligence, and particularly generative AI, has matured at an unprecedented pace,” BU President Melissa Gilliam said at a recent University symposium on generative AI. “And so it’s no longer a distant future—AI is the current reality that is reshaping industries and communities, and the way we understand and interact with the world. And the implications for us as educators, scholars, researchers, administrative staff, learners, and people are quite significant.
“There are pivotal moments where we can embrace change, where we have to envision a future and create an environment that nurtures innovation and progress, and today, we are at such a moment, and we have decided to act boldly,” Gilliam told the standing-room-only crowd of faculty, staff, and researchers from across the University.
Used appropriately, AI, including its newest iteration, generative AI, is a tool that can transform society for the better. It's more than a resource for computer science, or even for STEM fields broadly; it's already in use across an array of business sectors and academic disciplines. But when and how to wield that tool are questions practitioners are only beginning to answer. AI tools and resources have developed unevenly, with some more secure and accurate than others. Knowing which resources to rely on, and how to tell the good from the bad, is also crucial.
AIDA (pronounced ah-EE-da, like the musical) was built to answer those questions. Informed by two interdisciplinary task forces whose members studied the use of AI on the academic and administrative sides of the University, the initiative will foster collaboration across BU. Faculty and staff can share best practices, coordinating the University’s adaptation of AI as a tool to make BU more innovative, efficient, and creative.
“AIDA is a foundational initiative that helps complement the constellation of BU’s broad investments in AI, to make sure that we are a leading institution in advancing AI, and to move society forward in positive ways, in all dimensions possible,” says Kenneth Lutchen, senior advisor to the president, who is leading a new University-wide office focused on strategy and innovation.
Across BU, faculty and staff are experimenting with AI as a classroom tool and a means for making sometimes tedious administrative tasks more efficient. This means that BU’s approach to AI cannot be one-size-fits-all, says Azer Bestavros, associate provost for computing and data sciences, and part of the inaugural governing board for AIDA.
“I really like the last letter in AIDA, which stands for ‘Accelerator.’ AIDA is all about accelerating what we know is going to happen,” says Bestavros, who is also a William Fairfield Warren Distinguished Professor in the College of Arts & Sciences and the founding director of the Rafik B. Hariri Institute for Computing and Computational Science & Engineering.
“Academic institutions are typically very slow to change, and for good reasons, but with the technological paradigm shifts such as those we are seeing with AI, we can’t wait to be disrupted,” he says. “AIDA is all about helping the University—both on the academic side and on the administrative side—to embrace that change, noting that it will not be one-size-fits-all. Rather, it will be a shoe that everyone will have to learn how to walk in.”
BU has already become an AI leader in the state, in part through its role in the Massachusetts Green High Performance Computing Center (MGHPCC), a 12-year-old supercomputing facility in Holyoke funded by BU and other universities, businesses, and the commonwealth. Governor Maura Healey has said MGHPCC will give institutions, businesses, and startups greater access to the high-powered computing that AI requires.
Embracing AI—with Caution
Faculty at BU are already leading cutting-edge research into AI and its nearly infinite applications. Irving J. Bigio, a College of Engineering professor of biomedical engineering and of electrical and computer engineering, pioneered the technology that powers a handheld, non-invasive skin cancer detection device (which the US Food & Drug Administration recently cleared for US markets). Ioannis Paschalidis, director of the Hariri Institute, designed a promising new AI computer program that can predict, with a high degree of accuracy, whether someone with mild cognitive impairment is likely to remain stable over the next six years—or fall into the dementia associated with Alzheimer’s disease.
And it’s not just faculty in the traditionally hard-science fields who are breaking new ground with AI.
At the College of Fine Arts, for example, James Grady, an assistant professor of art, graphic design, is incorporating ChatGPT to encourage his design students’ learning and to unlock new ways of thinking about, and working with, the technology. In one of his classes, students combine AI-generated images with handmade artwork and reflect on the similar themes that emerge from both, ultimately creating a final project that is rich with exploration and embraces both digital and analog art forms.
A group of researchers at the Hariri Institute is studying, among other things, AI-driven innovations that could make education more equitable.
“We’re really focused on seeding and growing research about AI and education in order to be more systematic and thoughtful about how we identify areas for educational transformation, how we drive innovation, how we identify methods for teaching, policies, and practices, and how we evaluate those already in place,” says Naomi Caselli, director of the AI & Education Initiative. “The idea is: how do we leverage all of the expertise at BU and beyond so that AI isn’t just something that happens to education?”
In Caselli’s own work as an associate professor of deaf studies in the Wheelock College of Education & Human Development, she’s thinking about AI for sign language. “So much of the AI technology now is focused on the spoken and written word that we risk leaving out people who primarily use sign language,” she says.
Still other faculty are urging their students to grapple with the ethical considerations of a technology that can increasingly churn out convincing essays on any number of topics. For example, Wesley Wildman, a professor of philosophy, theology, and ethics in the School of Theology and of computing and data sciences in the Faculty of Computing & Data Sciences, worked with his students to create a Generative AI Assistance (GAIA) policy that’s now used in curricula across the Faculty of Computing & Data Sciences.
The policy encourages students to learn how to use generative AI “to enhance, rather than damage, their developing abilities as writers, coders, communicators, and thinkers.” It’s a way of embracing AI while collectively and collaboratively setting up guardrails for its use in higher education.
“The University’s overall policy is one of ‘critical embrace,’” says Wildman, who was part of the academic task force that informed the implementation of AIDA. “What that means is that we need to embrace it, because letting people go out into the world without knowing how to use it is almost a dereliction of duty. We have to make sure students know what they’re doing: skillful prompt engineering, watching for opportunities to use it creatively, adapting to new situations, all of that. But the ‘critical’ part in ‘critical embrace’ means: please be aware of the shortcomings of this technology and improve on it, use it to do better, not just to replicate what itself is a replication of human work.”
Part of AIDA's work is centralizing AI resources and programs for faculty and students who continue to conceive of creative uses for the technology. The new website, for example, provides basic guidelines and best practices for various AI platforms.
“There are IS&T policies around data security and privacy that are applied to AI, but we don’t know that it necessarily connects and resonates for everyone who is using AI,” says Bob Graham, associate vice president for enterprise architecture and applications and a member of the core leadership team within AIDA. “So we want to make sure we have a way of getting those training materials and resources to the community. ‘What should I be doing? How do I get up to speed with all of this AI stuff I hear about?’ Those are the kinds of questions we can answer across the entire University.”
Beyond that, AIDA will help the University keep tabs on a technology that’s changing and evolving faster than ever.
“Given the rapid evolution of AI, AIDA also plans to continuously monitor market and technological advancements, ensuring that BU remains agile, informed, and strategically positioned to leverage cutting-edge AI developments,” Graham says.
A core team of BU administrators and faculty, including Lutchen; Graham; John Byers, senior associate dean of the faculty for mathematical and computational sciences and a CAS professor of computer science; and Ioannis Paschalidis, a Distinguished Professor of Engineering in the College of Engineering and director of the Hariri Institute, will lead AIDA with input from an internal governing board and two advisory councils, one focused on the use of AI in academics and the other on its use in administrative work. Lutchen will serve as the interim executive director of AIDA while a search for a permanent director is underway.