The Crux of the Story: The Ethics of Using Generative AI
ChatGPT is a hot topic, often raising more questions than answers. Dr. Wesley Wildman, professor of Philosophy, Theology, and Ethics at Boston University, and his “Data, Ethics, and Society” class put together the first-ever blueprint for the academic use of ChatGPT and similar AI models. The Generative AI Assistance (GAIA) Policy “stresses transparency, fairness, and obligations for both students and teachers.”
Wildman joined Gary Sheffer and Mike Fernandez for an episode of The Crux of the Story exploring where AI and ethics intersect. In the episode, Wildman emphasizes that AI is not just a technological tool but also a social, cultural, economic, and political force that will fundamentally change our world. He notes that AI is not inherently good or bad but rather a tool that can be used for both good and bad purposes. Ethics in AI is therefore essential to ensure that it is designed and used in ways that benefit society.
"Ethics is not just about doing the right thing. It's also about anticipating the consequences of our actions and making sure that we're creating a future that we want to live in," said Wildman
The reality is that AI-generated language is not going away. Wildman stresses the importance of exploring ways to use the technology to build skills without allowing it to damage an industry. He also highlights that AI will not replace human decision-making but rather augment it.
We therefore need to figure out how to use AI to enhance human autonomy and dignity, rather than treating it as a "silver bullet" that will solve all our problems.
"We need to think about AI as a collective responsibility. It's not just up to technologists or policymakers or ethicists to solve these problems. We all have a role to play in shaping the future of AI," said Wildman.