Comments on AI Accountability: Harnessing BU’s Interdisciplinary Community


Voice of Stacey Dogan, Professor of Law, BU School of Law, BU Faculty of Computing & Data Sciences

Where might you find a gang of law and computer science professors hunkered down on a fine June day to finalize comments on AI governance for the National Telecommunications and Information Administration? At BU, of course! This kind of project lies at the heart of CDS’s mission: to build “connective tissue” between disciplines to generate timely, informed research that has real-world impact. In this case, the connective tissue has been many years in the making. Back in 2016, a group of Law and Computer Science researchers interested in technology policy and ethics began meeting over brown-bag lunches to discuss our work. We lawyers learned about cryptography, multi-party computation (MPC), and zero-knowledge proofs; in turn, we shared our views on the legal status of new technologies and their implications for privacy, intellectual property, consumer protection, and other areas of law.

Over time, our informal working group spawned a speaker series, interdisciplinary coursework, and NSF-funded research on the relationship between MPC and privacy law. Not surprisingly, as the CDS faculty came together, several of us joined its ranks. The CDS community has reinforced our existing connections and introduced us to other interdisciplinary-minded colleagues interested in the ethical, legal, and policy implications of computing and data-science tools. Each of us brings a unique perspective to these questions. Some of us – like Ran Canetti, Mayank Varia, Adam Smith, Marco Gaboardi, Leo Reyzin, and Aloni Cohen (a former BU post-doc now teaching at the University of Chicago) – seek technological solutions to privacy and security challenges; others – such as Mark Crovella – study how recommendation systems and other algorithmic tools operate and shape the content their users see; still others – including Andy Sellars, Woody Hartzog, Ngozi Okidegbe, Chris Conley, Rory Van Loo, and myself – consider how existing and proposed laws apply to those who develop, distribute, and use computer-related technologies. For the NTIA project, my colleagues Kate Silbaugh and Chris Robertson offered valuable insights on product liability and tort law.

This breadth and depth of interests and expertise made us uniquely positioned to respond to the NTIA’s recent request for comments on AI accountability. The agency was seeking input on “what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.” Of course, these assessment mechanisms are not ends in themselves; by making AI systems more transparent and better understood, the reasoning goes, they will enable users and the government to demand greater accountability, which should in turn lead to more trustworthy technologies.

Having discussed AI accountability countless times with Law and CDS colleagues, I trusted that we had something to contribute – and indeed we did! After weeks of brainstorming sessions, debates, and contested drafts, we submitted a set of comments addressing both legal and technological aspects of the NTIA’s inquiry. Despite our differences in methodology, priorities, and substantive expertise, we reached consensus on five principles that we believe should guide the government as it approaches the AI accountability project:

1. AI accountability must be implemented through the entire lifecycle of systems.

2. Accountability mechanisms must be both robust and broadly accessible.

3. Access and transparency are consistent with protecting privacy and intellectual property rights.

4. Accountability and transparency mechanisms are a necessary but not sufficient aspect of AI regulation.

5. AI regulation requires rules for both generalized and specific contexts; we recommend collaboration between specialized agencies and a meta-agency with AI-specific expertise.

To a remarkable extent, the principles derive from our own research and experience: Rory, Woody, and I have written about the need for substantive legal rules that protect consumers rather than simply accepting technology design as inevitable; Mark’s experience with platforms blocking his attempts to study their operation demonstrated the need for rules that protect third-party access and interrogation; and research by Ran, Marco, and Aloni demonstrates the feasibility of proving a system’s suitability for purpose without revealing trade secrets or code.

Overall, our collaboration reinforced my conviction that the CDS mission is both critically important and challenging. Research and teaching across disciplines is hard. We speak different languages. We approach questions with different methodologies and have varying levels of confidence in the prospect of reliable technological solutions. These challenges made the drafting process cumbersome, and alternately energizing and frustrating. But partnerships like ours are essential to our government’s efforts to achieve a better balance in the relationship between technology developers and the public. AI accountability presents special challenges to regulators, not only because of the complexity and fast-paced nature of the technology, but also because decades of technological optimism produced a norm of non-intervention that has only recently started to turn. For our government to make effective policy, it needs to hear from experts on the front lines of AI research, not only to understand how current systems work but to appreciate the legal, regulatory, and technological options going forward.

See the final comments on AI governance for NTIA here:

About the author: Stacey Dogan is a Professor of Law at Boston University and a leading scholar in intellectual property, competition, and technology law. She has been instrumental in building interdisciplinary and inter-institutional collaborations in the areas of law, technology, and entrepreneurship.