BU Annual Report: Taking a Scalpel to AI
Excerpt from The BU Annual Report
A first-of-its-kind program is helping Boston University computer scientist Mark Crovella investigate AI, asking: Can we trust it? Should we trust it? Is it safe? Is it perpetuating biases and spreading misinformation?
The National Artificial Intelligence Research Resource (NAIRR) Pilot, backed by the National Science Foundation and the Department of Energy, aims to bring a new level of scrutiny to AI’s promise and peril by giving 35 projects, including Crovella’s, access to advanced supercomputing resources and data at top national laboratories.
A professor of computer science and chair of academic affairs in the Faculty of Computing & Data Sciences, Crovella will audit a type of AI known as large language models (LLMs). These software tools help drive everything from ChatGPT to automated chatbots to smart speaker assistants. Evimaria Terzi, professor in the Department of Computer Science, will join Crovella on the project.
The use of LLMs is spreading rapidly, Crovella says, with applications in education, social settings, and research, among many other areas. Apple, Microsoft, and Meta have all announced integrations of LLMs into their product lines. In the near future, Crovella predicts, we will each have our own personalized LLM that will know a lot about us and help with tasks on a minute-to-minute basis.