AIDA Gathers Faculty From Across the University to Discuss Effective Artificial Intelligence Policies
On Wednesday, October 1, around 60 faculty gathered on the 17th floor of the Duan Family Center for Computing and Data Sciences for an AIDA panel discussion about effective artificial intelligence policies across the University.

John Byers, Co-Director of AIDA’s AI in Academics, opened the event with an introduction to AIDA’s team, mission, and TerrierGPT, a free tool for AI literacy, exploration, and innovation. Developed by AIDA and IS&T and built on the open-source LibreChat platform, TerrierGPT gives BU faculty, students, and staff secure access to top GenAI models from OpenAI, Anthropic, and Google while ensuring compliance with privacy and data protection policies; none of the data entered is used to train external models.
Kevin Gold, Associate Professor of the Practice of Computing & Data Sciences, then moderated the panel, which consisted of Michael Dowding, Master Lecturer at the College of Communication; Andy Fan, Senior Lecturer at the College of Engineering; David Shawn, Master Lecturer at the College of Arts and Sciences; and Tal Gross, Professor at Questrom School of Business.

Each panelist gave a presentation relevant to their discipline and experience; the presentations are now available on the AIDA website. An open question-and-answer session with the audience followed. Although the applications of artificial intelligence differed across disciplines, several common themes emerged.
What should the policy be about the use of artificial intelligence on homework assignments?
When it comes to the use of AI on homework assignments, transparency and reflection were two major themes.
As part of his opening statements, Gold reviewed how he adapted CDS’s GAIA policy to find a solution that worked for himself and his students. For example, while GAIA requires the inclusion of a transcript in the appendix, Gold found that transcripts took time to review and came with compliance issues. Instead, he requires his students to submit a statement about how they used AI, which also helps him measure the correlation between students’ reported AI use and their grades.
Shawn takes a similar approach. For major assignments, he requires students to write a memo about their use of AI and to reflect on it, not as justification, but to identify how they’re learning from the AI tools. Shawn summarized the goal of the memo: “What is this that is really you, and what is this that is not so much you?”
While AI use statements give students and faculty the opportunity to reflect, the panelists agreed that it’s difficult to enforce additional AI policies for homework.
“Homework has to be worth less than it used to; we can’t enforce it,” said Gross. Both Gross and Gold weight assignments that allow AI less heavily, encouraging students to explore AI rather than depend on it. “Leaning too much on AI will cause problems at test time,” said Gold.
Fan qualified the enforcement of AI homework policies: “You can’t enforce it, but there are different kinds of homework and different types of enforcement.”

In Fan’s classes, he takes into account each student’s experience and aptitude in mathematics and physics. Juniors, for example, don’t have the same requirements as freshmen; they may use AI on more of their assignments since they’ve proven their understanding of core topics. However, no matter their experience level, Fan encourages all students to constantly question their decisions.
“What are the changes you made, and why are those better?” asked Fan.
Dowding, on the other hand, doesn’t allow AI on his assignments. While he’s aware that some students may still use AI at home, he focuses on establishing a baseline of in-class writing so he can gauge each student’s abilities.
“Get people to do writing in the classroom and then compare it to what you’re receiving at home,” Dowding said.
How does AI play into real-time classroom experiences?
While the panelists found room for AI in homework assignments, the discussion of AI use during class time invited new perspectives on how technology can enhance or undermine the classroom experience.
“When you’re in a classroom, it’s all about logic and reasoning,” said Fan, whose courses often involve mathematics and coding; he invites students to follow along as he handwrites code. “Me? I like a chalkboard… I want my students to follow along as we solve a problem together.”
Dowding likewise treats his classroom sessions as a time for students to use their brains, and their brains only.

“To me, learning happens through friction, adversity — powering through and muscling through that,” said Dowding. “However, friction in the working world is completely undesirable; it’s a barrier.”
This fine line ran through every discussion: how to prepare students for AI’s role in the workplace while also nurturing their critical thinking skills.
As a professor of markets, public policy, and law, Gross strives to strike this balance in his classes. Through AI-based assignments and AI tutoring, he encourages his students to hold discussions with AI using effective prompts.
“We can create an environment where students have to think,” said Gross. “We train pilots with simulators; maybe we can train white-collar workers with simulators.”
In addition to simulators, Gross also uses “warm calls,” in which he asks students to work together and then share what their groups talked about.
If AI is allowed on homework and other classroom assignments, how should BU faculty approach exams?
All panelists agreed that AI isn’t allowed during their exams. However, some encouraged the use of AI to help students prepare: Gross and Fan, for example, encourage their students to ask AI to quiz them on course material while studying. They also provide practice exams.
Additionally, Dowding recommends lockdown browsers as an effective substitute for blue book exams. While the pen-and-paper format of a blue book exam guarantees no AI use, penmanship makes grading harder. Lockdown browsers likewise prevent AI use by allowing students to access only the exam site.
How should faculty create their own policies?
While most colleges within the University have AI policies in place, the panel agreed that policies should be a discussion between faculty and their students.

“I believe every faculty should have an obligation to have an AI use policy in class and explicitly explain why,” said Ken Lutchen, Vice President for Research. “Get them to use it in a way that amplifies their learning.”
While this symposium allowed faculty to share their ideas and concerns, the conversation about AI’s role at Boston University is ongoing. Copies of the panelists’ presentations can be found on AIDA’s Classroom and Resources webpage, and future symposiums will be announced on AIDA’s News and Events page.