Student-Designed Policy on Use of Generative AI Adopted by Data Sciences Faculty

Wesley Wildman lecturing to undergraduate students in the course Data, Society and Ethics.
Classroom project on the ethical use of artificial intelligence tools like ChatGPT addresses how students can use them and how faculty should grade work that relies on them
Back in January, when ChatGPT started making news as the artificial intelligence tool of the future, Wesley Wildman tore up his lesson plan and instead assigned his undergraduate students in Data, Society and Ethics to write a policy for responsible use of generative AI programs in the classroom.
Now, only a few weeks later, the Faculty of Computing & Data Sciences has adopted that class project as its official policy.
“It’s from the first week of the first BU course on the ethics of computing and data sciences, in our first semester in the new building,” says Azer Bestavros, BU’s inaugural associate provost for computing and data sciences. “I could not have written a better script.”

Wildman, a professor of philosophy, theology, and ethics and of computing and data sciences, splits his time between the School of Theology and the Faculty of Computing & Data Sciences. The 47 students in his class planned to abide by their Generative AI Assistance (GAIA) Policy themselves and thought it might serve as a starting point for others around campus. But they didn’t think it would become CDS policy so soon after they started working on it.
“It’s been really interesting and something that definitely was unexpected,” says Olivia Bene (CAS’23), a computer science major in the class. Wildman “understands that his classroom is full of people who understand technology and know how to use it, so it’s better to get in front of it. This is something that exists, so let’s use this as a tool rather than something that we’re going to be afraid of.”
The debate around generative AI “chatbots” like ChatGPT, created by the company OpenAI, has centered on whether they will be used for productive purposes or as shortcuts around legitimate research and writing. ChatGPT, for example, can serve as a research tool, or it can write a passable paper that a student could turn in as their own work. The underlying technology, known as large language models, or LLMs, is also under development at Microsoft and Google, among many others, with endless possible applications. No less an authority than Bill Gates says it “will change our world.”
“The nice thing about this document is that it really strikes a nice balance between the pluses and minuses of something like this when it comes to learning,” says Bestavros, who is also a William Fairfield Warren Distinguished Professor and a computer science professor at the College of Arts & Sciences.
The policy the class wrote is “somewhat embracing” compared to others, Wildman says. It doesn’t ban the technology, because CDS students need to learn to use it—but responsibly.
“I am really excited that BU is not one of those universities that’s like, ‘Nope, we’re gonna block it from our servers, we’re not gonna let you access it, if you access it you’re going to get kicked out,’” says Bene. “I’m very excited that they understand how much power this tool has. It has the capability to do so many things, and if you’re just going to label it as something that’s cheating and bad, you’re doing it a disservice.
“It’s such a powerful technology,” she adds. “And if you’re going to be a research university, you should be one that isn’t afraid of new things that are coming out.”
The policy says students should give credit to LLMs whenever they use them, even if only to generate ideas rather than usable text. When using LLMs on papers or take-home assignments, students should include their entire exchanges with the LLM in an appendix, highlighting the most relevant sections and explaining their purpose in using the technology. Students should not use LLMs on in-class examinations, tests, or assignments unless the exercise is explicitly organized around an LLM.
“We’re looking for responsible ways of setting boundaries around this,” says Mark Crovella, a professor of computer science and of computing and data sciences at CAS and chair of the CDS Academic Policy Committee. “Communicating to students how they’re expected to behave in a world in which it’s possible for us to use these remarkable machine learning tools to do things that look a lot like intellectual work.”
If students use generative AI to do their work for them, “they’re short-circuiting the educational process,” Crovella says. “But, at the same time, the world is never going to go back to pre-ChatGPT, and we have to understand how to productively coexist with these kinds of tools.”
The policy also provides grading guidelines for faculty: treat work submitted by students who don’t use LLMs as the baseline, and apply a lower baseline for students who do, depending on how extensive the use is.
Faculty should also use AI-text detection tools to gauge the degree to which AI-generated text is present in student work, the policy says. And they should impose a significant penalty for low-energy or unreflective reuse of LLM-generated wording, up to and including zero points for merely reproducing LLM output.
The policy’s final language is quite close to the original text the students hashed out after a few weeks of class discussions and small-group efforts. It was discussed by the Academic Policy Committee, as well as in many informal conversations, before Bestavros polled the faculty this week.
“I would call it the baseline policy,” Bestavros says, “because different faculty may want to tweak it further, depending on what the courses are about, and to what extent it is a tool versus something that you want to avoid.”
Crovella says it’s “pretty cool” that the students came up with such a workable policy so quickly. Wildman says the students are proud of their work, as is he. He’s waiting for them to start asking how to list the project on their CVs or cite it when applying for fellowships or positions.
Wildman will also take part in a panel discussion at Monday’s CDS symposium for BU faculty, staff, graduate students, and postdoctoral scholars: ChatGPT and Other AI Tools: Challenges, Opportunities, and Strategies for Teaching.
“There’s a tremendous amount of confusion both among students and among their teachers, so something needs to be done fast,” Wildman says. “CDS’ responsiveness was partly due to the fact that it is small and new and agile, and also because it wants to be a leader helping the University think through these things.”
Text generation is of course just the beginning for generative AI, which is already roiling fields such as visual art and music. The consensus seems to be that we’ve barely begun to see its ramifications.
But what Wildman’s students have done is a pretty good start. “Excited, delighted, proud, you name it,” Bestavros says with a grin.