BU Law Faculty and Students Take on Algorithmic Bias
Combating such bias is at the heart of the Technology Law Clinic’s work and a central aspect of several other courses and initiatives at BU Law.
The white mask that Technology Law Clinic client Joy Buolamwini wore as part of her inquiry into artificial intelligence-powered gender classification products will soon be on display at the Barbican Centre in London, but Boston University School of Law students got an up-close-and-personal look last fall.
That’s because clinic students, under the guidance of Director Andrew Sellars, represented Buolamwini in Gender Shades, her study of facial analysis technology. They helped make sure her research didn’t run afoul of the Computer Fraud and Abuse Act and notified the relevant companies of the results in advance to give them a chance to address the products’ shortcomings, including poor recognition of people of color and of those with traditionally feminine facial features.
The study is a stark example of the algorithmic bias that has taken root in so much of the technology that runs through our daily lives, directing the online ads we see for products or jobs or housing, determining the care we receive from government agencies, and even shaping how law enforcement agencies allocate crime-prevention resources.
Combating such bias is at the heart of the Technology Law Clinic’s work with Buolamwini and other clients and a central aspect of several other courses and initiatives at BU Law. The law school has a longstanding collaboration with the Hariri Institute for Computing and Computational Science and Engineering—the BU Cyber Security, Law and Society Alliance—in which law professors, computer science researchers, and social scientists engage on critical questions involving technology and ethics. Last fall, two BU Law professors—Stacey Dogan and Daniela Caruso—teamed up with computer science Professor Ran Canetti and faculty from Harvard, Columbia, and the University of California at Berkeley to launch a new online course called Law for Algorithms. And this summer, Danielle Keats Citron, an internationally recognized information privacy expert and a leading scholar on algorithmic bias, is joining the BU Law faculty.
Algorithmic bias is a product of machine learning, which “at its simplest,” Sellars says, involves developing an algorithm “that is adaptive to data you feed it.”
“If you feed it large quantities of data over time, it can, for lack of a better word, learn,” he says.
The problem, of course, is that these algorithms are learning from humans who come with all kinds of implicit and explicit biases. Discrimination is hard enough to prove when a human is to blame; now civil rights advocates and regulators are grappling with how to prove discrimination by ever-changing algorithms. In March, the US Department of Housing and Urban Development sued Facebook, claiming its algorithms let advertisers discriminate by restricting which types of people see their ads. In 2018, an Arkansas judge ordered that state’s Department of Human Services to stop using an algorithm to determine the number of at-home care hours people with disabilities receive because so many patients’ hours had decreased dramatically. And the ACLU and other organizations have raised questions about the use of algorithms by law enforcement agencies trying to predict where crimes will occur.
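Underlying each of these disputes is the dynamic Sellars describes: a model fits itself to whatever examples it is given, skew included. The short sketch below is a hypothetical illustration of that point in Python; it is not code from the Gender Shades study or from any of the cases above, and the hiring scenario and variable names are invented for the example.

```python
# A minimal, hypothetical sketch: a classifier "learns" whatever pattern its
# training data contains -- including historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Imagine a hiring model trained on past decisions. Feature 0 is a job-relevant
# score; feature 1 encodes group membership (0 or 1), which should be irrelevant.
n = 1000
score = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels were biased: group-1 applicants needed a higher score to be hired.
hired = (score - 0.8 * group + rng.normal(scale=0.3, size=n)) > 0

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the bias it was fed: group membership gets a
# large negative weight even though it carries no real information about ability.
print("learned weights (score, group):", model.coef_[0])
```

Nothing in the code singles anyone out on purpose; the skew comes entirely from the training data, which is why the resulting discrimination can be so hard to see and to prove.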
More than a decade ago, Citron raised the idea of “technological due process”—the ability to have notice of and challenge decisions made by non-human arbiters. She has argued that one way to ensure such due process would be to have routine algorithmic auditing by the US Federal Trade Commission, which protects consumers against unfair business practices. More recently, Citron has made the case for abandoning punitive algorithmic decision-making in favor of more “pro-social” uses of the technology, such as offering translation for non-English speakers interacting with government agencies.
“We are in a moment of deep uncertainty, and, often when it comes to tech, we adopt first, ask questions later,” Citron says. “We are doing that in ways that are having great consequences on people’s lives and opportunities.”
Dogan agrees.
“The question of how the law should handle this bias is extremely difficult,” she says. “Part of the problem is the lack of appreciation among legal scholars and policy makers of just how complex machine learning is. Technologists, on the other hand, don’t always appreciate the complexity of the legal and regulatory framework in which they work. We are at a stage in which we really need to learn from one another and develop a sophisticated understanding from the legal perspective of what is happening with the technology, and vice versa. That’s why we started this course and what we are trying to achieve in our collaboration with the Hariri Institute.”
The approach seems to be working. Julia Schur (’20), who wants to work in technology law, took the Law for Algorithms class last fall.
“I didn’t go into that class looking for answers, but looking for which questions to ask,” she explains. Schur also spent her second year helping clients in the Technology Law Clinic, including on issues of algorithmic bias, and will be a research assistant for Citron next year.
“To best advise computer scientists, you need to understand their language, and they need to understand yours,” she says. “I think that’s what we’re doing very successfully by having these classes: We’re teaching future generations of lawyers to not fear novelty but to tackle it head on.”
Reported by Rebecca Beyer