Students’ POV: Generative AI in the Classroom
By Natalia Clark (CAS'23) and George Trammell (CDS'24)
The whiteboard was covered with notes as three separate panels bore the titles: “ALLOW,” “RESTRICT,” and “BAN.” Hands shot up from every corner of the classroom, adding questions and concerns in response to each new idea as we searched for common ground. When classes resumed this past January, the emerging language model ChatGPT seemed little more than a conversation starter among our more tech-adjacent peers — but within a week, we knew otherwise. During the inaugural meetings of what would become a significant and impeccably timed course on data and ethics, Dr. Wesley Wildman replaced our first case study with an immediately relevant debate on the proper academic use of such tools. On that day, our class of 47 would design and adopt the Generative AI Assistance (GAIA) policy in an attempt to establish our class’s new definition of academic dishonesty. In the coming weeks, the CDS faculty would vote unanimously to adopt the GAIA policy for the entire academic unit. As two of the students in Wildman’s DS380: Data, Ethics, and Society, we’ll explain why our class wanted a policy in the first place, how we created GAIA, and our hopes for keeping this discussion lively in the future.
"On that day, our class of 47 would design and adopt the Generative AI Assistance (GAIA) policy in an attempt to establish our class’s new definition of academic dishonesty. In the coming weeks, the CDS faculty would vote unanimously to adopt the GAIA policy for the entire academic unit."
To Ban or Not to Ban? Why Our Class Wanted a ChatGPT Policy
As Dr. Wildman introduced us to the countless fields already impacted by large language models, we understood the need for both caution and timely action. We were on the front lines of creating a policy that could shape how AI would be used and integrated in our classroom. As students who value their intellectual toolkits and active learning, we, along with a majority of our peers, worried that this technology would hinder us by encouraging careless cheating without fruitful learning. It was encouraging to hear that others felt the same way: we didn’t want to get screwed over, and we didn’t want to get dumber.
However, we weren’t all on the same page about what kind of policy we wanted.
At the beginning of our class’s policy discussion, there were those – myself (George) included – who felt the only sensible conclusion was an outright ban of ChatGPT and any similar applications. To me, it seemed hopeless to adequately monitor the use of an easily accessible browser tool; those who wanted to use it would use it, regardless of any official policy. But as the discussion continued, my classmates highlighted the importance of drafting a more nuanced and future-focused policy. After all, this technology isn’t going anywhere, and it would be disingenuous for us, as academics, not to integrate these new tools into our active skill sets. By the end of the discussion, no one wanted a simple ban, and after some consideration of wording and enforcement, the first draft of GAIA was born.
Working with GAIA

With GAIA now cemented in our class’s code of conduct, we scanned the assignments on the syllabus and contemplated our first academically sanctioned use of ChatGPT. While some students were eager to showcase their prompt-writing proficiency, others were put off by the complexity of the policy and its ambiguous grading criteria. For many students, it ultimately seemed safer to avoid ChatGPT altogether. When I (Natalia) thought about completing assignments under the GAIA policy, I was partly deterred by the fear that I wouldn’t use the technology creatively enough to go beyond what I knew I could achieve with my own organic thinking. In a way, I think all of our classmates felt that using this technology alongside academic work would first require spending significant time understanding its capabilities.
While I played with it in my free time, I wasn’t confident enough in my own prompting abilities to have it produce groundbreaking additions to my assignments. More experienced students, on the other hand, were finding all sorts of innovative ways to engineer useful results, from summarizing source material to drafting full outlines. In any case, neither type of student seemed to have a noticeable advantage: those who used ChatGPT found a new entry point into essay writing, and those who stayed the traditional course did not feel disadvantaged. Overall, the implementation of GAIA calmed students’ anxieties around ChatGPT and made space for those of us who found productive uses for large language models.
"The power of these new tools can no longer be ignored as they stand ready to reshape our conceptions of writing and education as a whole. It is our responsibility as an academic community to explore the uses of generative AI and to embrace innovation brought about by new technologies."
Looking Ahead
While this policy was important to devise amid a changing academic landscape, we recognize it does no more than acknowledge the equity concerns raised by ChatGPT’s new premium monthly subscription option. In these past months, equitable access to this learning environment has already become a real concern for students. We hope the conversation surrounding this policy continues as students reanalyze and adjust their expectations for their own experiences during this transformative time in academics. We also firmly believe it is imperative that students, professors, and faculty from all departments join this conversation now, as generative AI continues to evolve. The power of these new tools can no longer be ignored as they stand ready to reshape our conceptions of writing and education as a whole. It is our responsibility as an academic community to explore the uses of generative AI and to embrace innovation brought about by new technologies. Sure, we took the first steps... but will you take the next ones?
About the Authors:
