AI Task Force Report Recommends Critical Embrace of Technology and Cautious Use of AI-Detector Apps
Task force cochairs say BU should “leave room for faculty to make their own decisions on generative AI policies they adopt for their classes”
The headlines have been blaring, with new ones coming every day: “AI Will Shake Up Higher Ed: Are Colleges Ready?” and “From admissions to teaching to grading, AI is infiltrating higher education.” And this one, “Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach.”
Love it, like it, or loathe it, artificial intelligence (AI) is here to stay. Responding to that reality, BU formed the Boston University AI Task Force in fall 2023 to examine the impact of generative AI tools “on education and research, and review existing policies adopted by BU schools, colleges, and other academic institutions.” It’s an exercise colleges across the country have undertaken as they seek to put in place guidelines and policies for a technology that is still new and still taking shape. Inside Higher Ed recently reported that of “universities making AI policies, nearly half of institutions (43 percent) are working with a third party to develop AI strategy, while 30 percent are working with peer institutions or networks and 22 percent are working with professional associations.”
BU’s task force met throughout the fall and into this spring, consulting with AI experts, talking with faculty, meeting with industry leaders, and ultimately writing a report that was released this month.
Its title is self-explanatory: “Report on Generative AI in Education and Research.” Among its key points: faculty should “critically embrace” GenAI (generative artificial intelligence), faculty should be expected to inform students in their syllabi of the course policy regarding AI, and all academic units should be expected to educate students on what AI is, what it’s capable of, and how it can best be used. Read the report’s full recommendations at the bottom of this story. Read the report in full here.
BU Today spoke with the two cochairs of the task force, Yannis Paschalidis, Distinguished Professor of Engineering at the College of Engineering, a founding professor of computing and data sciences, and director of the Rafik B. Hariri Institute for Computing and Computational Science & Engineering, and Wesley J. Wildman, a professor of philosophy, theology, and ethics at the School of Theology, a founding professor of computing and data sciences, and chair of faculty affairs, Faculty of Computing & Data Sciences, about the new report.
Q&A
with Yannis Paschalidis and Wesley J. Wildman
BU Today: The report says that Boston University should “critically embrace” the use of AI. In your view, what does that mean, practically speaking, for faculty and for students, to critically embrace this technology?
Yannis Paschalidis: Acknowledge that AI is here to stay and that it can be an effective tool for accelerating research and enhancing teaching. But, at the same time, acknowledge its limitations, outlined in the report, and understand the ethical and sustainability concerns associated with training generative AI models. In practice, this implies using it with caution. With the University adopting a policy to “critically embrace” AI, schools and colleges would be guided to adopt their own local policies adapted to the needs of the disciplines they serve. These policies should be consistent with the University policy but, respecting academic freedom, leave room for faculty to make their own decisions on generative AI policies they adopt for their classes.
Wesley Wildman: The embrace side of this judicious approach to GenAI is rooted partly in the fact that almost every aspect of our students’ careers will be affected by GenAI, and graduates with excellent skills in prompt engineering and flexible, fluid deployment of GenAI tools will enjoy a competitive advantage in the workplace. The critical side involves known and unknown ethical risks.
On the unknown side, consider that longitudinal studies have recently found that extensive social media use among teenagers leads to deficits in face-to-face socialization skills, along with heightened social anxiety and intensified loneliness, an example of the unintended side effects of an apparently irresistible suite of technologies. What might the unintended side effects be of extending human cognition to the point that it becomes dependent on GenAI tools? Perhaps no longer knowing how to do simple arithmetic is a tolerable consequence of extensive machine use, but is the loss of generative abilities or reduced creativity (consequences that seem likely) an acceptable price for extensive use of GenAI?
On the known side, we already confront ethically and sometimes legally laden challenges, such as using the work of others for training (with the associated intellectual property problems), relying on low-paid workers for feedback-based model training, the traumatic exposure of low-paid content moderators, output that mimics the styles of human creators, deep-fake audio and video used to spread disinformation (in political campaigns, cyberbullying, revenge porn, and more), violations of privacy and security expectations, economic disruption through job displacement, climate impacts from the massive server farms that train and run widely used GenAI tools, and inequalities of access. The individual and institutional challenges of the critical-embrace philosophy are to develop the necessary skills while keeping eyes wide open to these known and unknown problems.
BU Today: The report also says faculty should use AI-monitoring programs cautiously. It may be fair to say that students are using AI right now. Can you elaborate on why you want to proceed cautiously on this and how you hope to see BU promote academic integrity in the age of AI?
Paschalidis: With respect to programs that detect whether generative AI has been used, we just want to emphasize that they do not provide “bulletproof” answers. They provide only a probability that a certain passage (or image, audio, etc.) was generated by AI. They can also be inconsistent, giving different answers for the same input. Thus, particularly for assessing academic misconduct, they should be used with caution and with an understanding of their limitations. They are not reliable enough to serve as the only piece of evidence.
Wildman: Ideally, students and instructors can both use the same GenAI-detector apps and get the same probability rating, just as they currently do for plagiarism-checking apps. At present, however, GenAI-detector apps appear to be inconsistent, in the sense of sometimes giving different ratings at different times on the same piece of prose. They are also unreliable in the sense of being inaccurate, with some detectors easier to deceive than others when students introduce spelling or grammar errors or modify GenAI prose specifically to avoid detection. Accordingly, we are advising instructors and students to stay apprised of developments in detector technology and to be cautious and explicit in their use of detectors. It is also important to note that some companies appear to be cashing in on the established reputations of existing detectors (e.g., the new arrival ZeroGPT is being confused with the original GPTZero, though the two are very different in reliability).
BU Today: Can you talk about the timing of this report: why BU undertook this effort, how long it took, why it’s being released now, and when you hope to see some of the recommendations implemented?
Paschalidis: The task force was formed in the fall semester and met throughout fall 2023 and early spring 2024. We heard from consultants, people in industry, and the many experts we have at BU. The roster is representative of the many disciplines we have here, and each discipline brought a different perspective on generative AI. The motivation for the task force is, of course, that AI is revolutionizing research, teaching, and our daily lives; rather than having a patchwork of local policies, it is prudent for the University to have a unified policy framework, devote resources to training and support, and incentivize new courses and new ways of teaching existing courses.
Wildman: We heard from experts in disciplines not already represented on the task force. We hope to see key recommendations implemented immediately, especially the “critical embrace” posture. Some elements will take longer, as they involve infrastructure and personnel investments.
BU Today: Can you talk about the risks versus the rewards of AI technology in higher education and how BU hopes to strike that balance?
Paschalidis: In terms of research, some of the risks are associated with uninformed use of the technology (e.g., not understanding the risk of hallucinations) and its potential to “turbocharge” plagiarism and make it harder to detect. In terms of teaching, some of the risks include overreliance and the atrophy of critical skills. Hence the need to critically embrace [see first question above]. Beyond recommending AI literacy for everyone, the report includes recommendations to help mitigate these risks: for instance, adapting class assignments to emphasize process over the final product and guiding students through an iterative process, whose steps could involve using generative AI, to produce that product.
Wildman: Turbocharging plagiarism refers to asking GenAI to paraphrase an existing block of text written by someone else, and then including the paraphrase as one’s own writing without citing the original source or acknowledging the role of GenAI in producing the paraphrase.
The key policy recommendations of the Boston University AI Task Force:
- Critical Embrace: BU should not universally prohibit or restrict the use of GenAI tools. Rather, the University should critically embrace the use of GenAI, support AI literacy among faculty and students, supply resources needed to maximize GenAI benefits in research and education, and exercise leadership in helping faculty and students craft adaptive responses.
- Pedagogical Clarity: GenAI policies adopted by any college, school, or departmental unit should be consistent with the University’s policies and reflect the critical embrace of GenAI technology. Consistent with academic freedom, individual instructors should be free to define GenAI policies suited to the learning goals of their courses, and every syllabus needs to state the instructor’s GenAI policy. Consistent with citation practices, instructors and students should acknowledge use of GenAI tools.
- Academic Misconduct: BU should advise instructors to exercise caution when using GenAI detectors as evidence of GenAI use. GenAI detector output should be regarded as only one part of a wider evidence base in evaluating possible academic misconduct. If used, GenAI detectors should be applied equally and fairly, and faculty should be aware of selection bias when applying GenAI detectors to specific suspected cases. Advance notice should be given in syllabi, including naming the specific detectors employed, so that students have an opportunity to use them also. Instructors need to be informed and supported in using reliable and consistent detectors.
- Security and Privacy: BU should adopt policies to prevent the inadvertent publicizing of sensitive or valuable information through uploading it to GenAI tools.
- Centralized Decisions: BU should centralize decisions on GenAI tool acquisition and licensing, on resourcing and personnel for supporting GenAI literacy and pedagogical reflection, and on administrative structures to ensure ongoing adaptive response to rapidly developing GenAI technology.