Boston Globe: BU creates standards for chatbots in the classroom

Originally published in The Boston Globe.

Hiawatha Bray, Globe Staff. Updated April 4, 2023, 1:29 p.m.

Scientists and tech leaders throughout the US are debating whether to “pause” the deployment of new artificial intelligence systems until they can devise ethical guidelines for their use. Now, computer scientists at Boston University have gotten a head start on the ethics.

The school’s Faculty of Computing and Data Sciences issued new standards last week to guide students and faculty on how to use powerful AI systems like ChatGPT in their academic work.

It’s called the Generative AI Assistance Policy, and BU computer science professor Azer Bestavros believes it could be adopted, with modifications, by the entire university, and perhaps by educational institutions nationwide.

“Every school from K-12 to higher ed,” Bestavros said, “the faculty have to realize the moment and teach themselves about it.”

BU theologian and data scientist Wesley Wildman, who helped develop the plan, said that to the best of his knowledge, this is the first time students and faculty at a university have hashed out a mutually agreed-upon set of standards for AI use.

The new policy could serve as a model for other branches of the school, said BU spokeswoman Maureen McCarthy. “Currently, the university is convening a committee to provide recommendations about possible guidelines to the various schools and colleges at BU,” she said. “It is conceivable that other schools, colleges, or departments at BU may adopt it or a variant of it, but that will be up to each unit to decide.”

The idea for the new rules began in an ethics class taught by Wildman.

“I was concerned that I didn’t know how to grade midterm essays,” he said, since students might write them using ChatGPT, which can generate intelligent-sounding prose without much human assistance. How could Wildman be certain that students wouldn’t cheat by having a computer do the work?

The students in his class were also concerned. “I was worried that people were going to be cheating all the time,” said senior computer science student Natalie Clark. She feared that fellow students would use ChatGPT to get better grades than their more ethical classmates. Soon, the class of 47 students was debating possible solutions.

“It just made sense to step back and ask ourselves to analyze the whole thing from an ethical perspective,” said Wildman. “We analyzed concepts philosophically, like ‘What is cheating?’”

None of the students wanted to use ChatGPT to avoid the hard work of learning about the ethics of data science, said Wildman. “They don’t want their skill sets damaged.”

At the same time, they realized that a total ban on AI assistance was a non-starter. Not only would it be unenforceable, it would leave the students unprepared for working in the real world, where AI tools are being rapidly deployed in every sort of enterprise.

Microsoft, for instance, is rolling out a new feature called Copilot, which will add AI to its most popular products, including Word, Excel, PowerPoint, Teams, and Visual Studio. With Copilot, a business analyst might not have to learn complex financial formulas. Instead, he or she could just ask Copilot to produce an Excel spreadsheet.

In such a world, said Bestavros, it’s counterproductive to ban AI use in schools. “You cannot stop that train,” he said.

Instead, Wildman and his class decided that AI should be used to assist students in performing mundane tasks, like writing simple essays or basic computer programs. Then students would use their own skills to enhance the AI’s output. They might polish the writing style of an essay and add information from their own research. Or they might add more sophisticated features to the AI-generated programs to make them run faster or easier to use.

The students are supposed to inform their instructors when they use AI assistance in their work, and provide a record of their AI interactions to the instructor along with their assignments. This would enable a teacher to grade the work based on the amount and quality of original content produced by the student. Also, instructors would use a lower “baseline” in grading assignments completed with the help of AI, forcing such students to make substantial improvements to the AI-generated work in order to get good grades.

“Once you get something from a generative AI,” said Wildman, “you have to improve on it.”

Wildman’s class unanimously approved the standards, which have since been adopted by the entire Faculty of Computing and Data Sciences.

The rules might serve as a template for schools nationwide that are seeking effective ways to apply AI and control its use. When ChatGPT was first opened for public use last fall, a number of major public school systems, including those of New York City, Los Angeles, and Seattle, banned student use of the technology.

A spokesman for Boston Public Schools said that the school system has set up a training program to help teachers who want to bring AI tools into their classrooms.

“As this technology evolves we will continue evaluating our policies,” the spokesman said.
Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him on Twitter @GlobeTechLab.