Demystifying AI to Mitigate its Negative Consequences
Shlomi Hod (BU PhD candidate, CS) Leads Congressional Workshop on Legal, Ethical, and Societal Implications of Artificial Intelligence
By Margo Stanton, Hariri Institute for Computing
The rise of Artificial Intelligence (AI) has given many pause, owing to growing concerns about privacy, bias, safety, and security. While many systems have been lauded for their positive impact, such as their use in helping organizations detect and report child sexual abuse and human trafficking, other applications have caused harm.
In one troubling case, the National Eating Disorders Association (NEDA) suspended its Tessa chatbot after the AI tool unexpectedly began recommending weight loss and dieting to individuals suffering from anorexia, bulimia, and other eating disorders.
NEDA explained that an update to Tessa, one it said it was not aware of, led the chatbot to move beyond its pre-programmed, approved responses and generate its own. Cass, the company that designed Tessa, explained that the update and the chatbot’s new response feature, though unexpected, were consistent with its contract with NEDA.
As AI continues to permeate our society, these challenges are expected to continue, and with them the need for guidelines for the design, deployment, and maintenance of these systems.
Lawmakers are working to draft policies that will implement regulations and guardrails for this technology. However, few have the background necessary to understand how AI works and the impact it can have.
This August, congressional staff had the opportunity to improve their technical acumen with the help of Shlomi Hod, a computer science PhD candidate at Boston University who is advised by Ran Canetti, a professor in BU’s Faculty of Computing & Data Sciences and Department of Computer Science, and head of the Hariri Institute’s RISCS Center.
Hod was invited to Washington D.C. by the office of New Mexico Senator Ben Ray Luján to present a workshop on the ethical, legal, and societal implications of AI. The workshop was a condensed version of a course that is co-taught by Hod at Boston University and offered globally to students of other academic institutions.
The objective of the two-day, hands-on workshop was to enhance congressional staffers’ Responsible AI literacy, improving their understanding of current issues and enabling effective communication with experts. With a focus on the human values embedded in AI systems, workshop participants designed and analyzed AI systems, exploring the relationship between AI design choices and their ethical and legal implications.
“We tried to create a tailor-made program for the kinds of issues staffers are working on. For example, foundation models, such as the technology behind ChatGPT, allow developers to create systems that appear to work well without conducting proper performance evaluations,” Hod explained. “We wanted to demystify this technology and help staffers understand how it’s being built, what its limits are, and how it could be deployed in a safe manner, if at all in its current state.”
Congressional staffers are responsible for gathering information, including interviewing industry experts and other professionals, ahead of Congressional hearings. They present this research to senators and representatives and are involved in drafting bills. With AI at the forefront of new legislation in Washington D.C., it’s critical for staffers not only to have a foundational understanding of how the technology works and its impact, but also to be able to critically convey this knowledge in their work and conversations.
“Companies, journalists, academics, everyone is pushing out information on AI,” Hod said. “With so much hype in the news and online about AI, our goal was to empower staffers to be able to critically examine claims on AI and get a signal out of the noise.”
Hod’s workshop helps bridge the gap between the government and developers of AI systems, taking steps to make the use of AI safer. By starting a conversation between academics and policymakers with different areas of expertise, it opens the door to new approaches to regulating AI.
“We want to learn about the current challenges that staffers face with their existing knowledge, so we can support them in producing more effective regulation,” Hod explained. In the future, this could take the form of a workshop focused on those challenges.
The workshop, co-sponsored by Boston University, IEEE-USA, Technion, and Tel Aviv University, was based on a course currently offered at Boston University and co-taught by Hod: CDS DS 682, “Responsible AI, Law, Ethics, and Society.” In past years, the course was delivered in collaboration with Tel Aviv University, the Technion, and Bocconi University. The course employs a training methodology designed to build Responsible AI literacy and the competence to facilitate effective dialogue on Responsible AI among professionals across the law, business, computer science, and data science disciplines. It explores values such as fairness, liability, transparency, and privacy in AI. In spring 2024, the course will be taught at BU together with UC Berkeley.
“By bringing people from different disciplines together, we can create structured experiences to investigate the normative ramifications of AI,” Hod said.
Shlomi Hod is a fourth-year Computer Science PhD candidate at Boston University. His research in responsible AI focuses on differentially private synthetic data for government data, the interaction between computer science and the law, and interpretable machine learning. Prior to coming to Boston University, Hod co-founded the Israeli Cyber Education Center, where he led the development of cyber education programs for children and teens. Hod also worked in the Israeli government at the intersection of cyber security and data science.
For press inquiries, contact email@example.com.