Since its November 2022 launch, OpenAI's ChatGPT has gained an estimated 100 million monthly active users. Drawing on faculty expertise and the discipline's fundamental openness to embracing large language models, Boston University's Faculty of Computing & Data Sciences (CDS) has taken groundbreaking steps to create and adopt a policy on the use of generative AI tools in the classroom.
“CDS is instituting a policy that will help our community navigate the brave new world of teaching and learning with the assistance of large language models and generative AI technologies,” says Azer Bestavros, Warren Distinguished Professor of Computer Science. “When it comes to regulating the classroom use of a technology, it was only proper for CDS to take the lead.”
The GAIA policy is designed specifically for coursework and includes guidelines for computing and data science students and educators. Students are encouraged to learn how to use AI tools to enhance, not damage, their developing writing, coding, and communication skills. For educators, the policy creates a baseline to ensure fair grading for both those who do and do not use AI tools.
"The GAIA policy stresses transparency, fairness, and honoring relevant stakeholders such as students eager to learn and build careers, families who send students to the university, professors who are charged with teaching vital skills, the university that has a responsibility to attest to student competency with diplomas, future employers who invest in students because of their abilities and character, and colleagues who lack privileged access to valuable resources." -- Excerpt from CDS GAIA Policy
The CDS-wide policy is largely based on a blueprint that resulted from students’ work on the first case study of the inaugural offering of CDS’ Data, Society and Ethics (DS 380) course, led by Wesley Wildman, professor of Philosophy, Theology, and Ethics, and of Computing and Data Sciences, and CDS Chair of Faculty Affairs.
“The DS 380 class didn't set out to create a CDS-wide policy; we set out to develop a policy that we could use in our class, and that we could share as a blueprint with others for their consideration,” says Wildman. “It turned out to be useful for discussion, and eventually useful even as a policy.”
This blueprint, along with the challenges and opportunities of embracing generative AI in the academy, was discussed in a public forum on “Learning to Think” with assistive AI, and was further refined based on feedback from the CDS Academic Policy Committee and from CDS faculty members at large.
Mark Crovella, a professor in both Computer Science and the Faculty of Computing & Data Sciences and Chair of Academic Affairs in CDS, says the policy is grounded in teaching students how to approach problems that data science can help answer.
"The students and faculty in CDS are in contact with the technologies behind generative AI on a daily basis, and CDS explicitly considers the ethical use of data science tools to be part of its curriculum and mission," says Crovella, chair of the CDS Academic Policy Committee. "So our students and faculty, and particularly in this case Wesley’s class, are well equipped to think about how best to incorporate generative AI into classroom activities."
When describing the importance of adopting a generative AI policy and the role of educators, Wildman says every aspect of the university is affected both by generative AI and by the policies created in response. "CDS challenges the rest of the university to think through how they can take generative AI seriously, and still maintain academic standards."
“As university faculty and administrators, we fulfill our sacred duty to our students, their parents, and their future employers by digging deep, rethinking our pedagogies, and showing students how to respond to generative AI with honor, moral agility, and intellectual creativity," says Wildman.
According to Bestavros, the policy development and adoption process is consistent with CDS' professional pledge aspiring "to improve public understanding of computing technologies and their potential for good and ill."
“I want to give a shout-out to our community for the unique way in which it approached this issue and how it converged on this policy," says Bestavros. "This communal approach involving all stakeholders exemplifies the participatory approach to regulating technology.”
Resources & News Coverage
- Boston Globe: BU creates standards for chatbots in the classroom
- Policy Adopted: Student-Designed Policy on Use of Generative AI Adopted by Data Sciences Faculty
- A Campus with ChatGPT: The Ethics Behind AI Text Generation in Education
- Learning to Think after ChatGPT Panel Discussion
- Why ChatGPT Is Both Exciting and Unsettling for Students, Faculty
- Reddit AMA with CDS Prof. Wesley Wildman
- The Crux of the Story: The Ethics of Using Generative AI