Q&A with Ahmed Ghappour, Associate Professor of Law and Data Science Faculty Fellow
BU’s Rafik B. Hariri Institute spoke with Ghappour about his work at the intersection of law and computer science.
Following the March 2017 launch of the Boston University Data Science Faculty Fellows program and its call for nominations, Provost Jean Morrison announced the program’s inaugural fellows this past May: John Byers, professor of computer science, and Ahmed Ghappour, associate professor of law, who joins BU in fall 2017.
Designed as an integral part of BU’s Data Science Initiative (DSI) and housed in the Hariri Institute for Computing, the Data Science Faculty Fellows program aims to assemble a cluster of uniquely talented faculty whose expertise transcends traditional boundaries, leveraging the field’s three methodological disciplines of computer science, statistics, and electrical & computer engineering to enable fundamental advances across the entire landscape of academic disciplines.
The Hariri Institute caught up with Ahmed to learn about how he uses data science in his research and what he’s looking forward to as the Data Science Faculty Fellows program continues to grow.
What are your current research interests, and what is the intended societal impact of your work?
Broadly, my research bridges computer science and the law to address the contemporary challenges wrought by new technologies in the institutional design and administration of criminal justice and national security, with a focus on the emerging field of cybersecurity.
The advancement of modern technologies in a globally networked world poses new challenges to traditional conceptions of power, territory and violence. Threat actors are able to deploy malicious code across a borderless Internet, causing substantial physical, economic, or political damage on a global scale. At the same time, the state has mobilized the use of new technologies—expanding and indeed, redefining surveillance capabilities—to predict, prevent and defend against threats in the modern era. My research lies at the juncture of these threads, drawing from computer science concepts to examine the critical points at which they converge.
Through interdisciplinary engagement with the underlying technologies, my work aims to deepen our understanding of existing legal norms and protections, identify the challenges that exist in applying them to new technologies, and propose integrated institutional reforms where necessary. In doing so, I explore how our conceptions of privacy, due process and violence are impacted by modern surveillance technologies and the evolution of cyberspace as a theater of conflict.
How do you use data science in your current research?
One of my current research projects considers the question of when and to what extent fact-finders in the criminal adjudicative process defer to factual outcomes generated by “opaque” algorithms, whose functions cannot readily be digested by human-scale reasoning.
As criminal adjudication moves towards data-driven fact-finding, the issue of fact-deference to algorithmic classifications of culpability—what I term machine-generated culpability—will have constitutional significance. There is a growing body of legal scholarship on law enforcement’s use of automation technologies, or “predictive policing.” However, the analytic focus has almost exclusively been on the data that is fed into the algorithm, or the “classifications” that come out of the algorithm, with little attention paid to the algorithm itself.
To satisfy legal and ethical norms, algorithmic outputs must be acquired non-accidentally. That is, the algorithm must have acquired its conclusion by taking reliable steps in the course of its normal functioning. The criminal legal process provides mechanisms that can evaluate data science tools on this basis. For example, an expert may be required to testify about an algorithmic output and what it means in the evaluation of machine-generated culpability. This may include testimony about the purported functionality of an algorithm (i.e., what it is supposed to do) and its architectural integrity (i.e., how well it does what it is supposed to do).
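To make that distinction concrete, here is a minimal sketch in Python of how the two questions might be probed separately for a simple breathalyzer-style classifier. The threshold, function names, and calibration data are all hypothetical, invented purely for illustration; they are not drawn from any actual device or case.

```python
# Hypothetical sketch: separating "purported functionality" (what the tool
# is supposed to do) from "architectural integrity" (how well it does it).
# The threshold and all test data below are invented for illustration.

LEGAL_LIMIT = 0.08  # per-se blood alcohol concentration (BAC) limit

def classify_bac(measured_bac: float) -> bool:
    """Purported functionality: flag a sample as at or over the legal limit."""
    return measured_bac >= LEGAL_LIMIT

def check_functionality():
    # Does the tool do what it is supposed to do? Test the stated rule directly.
    assert classify_bac(0.10) is True
    assert classify_bac(0.05) is False

def check_integrity(reference_samples):
    # How well does it do it? Compare device outputs against independently
    # verified ground truth (here, hypothetical lab-measured reference samples).
    errors = [abs(device - lab) for device, lab in reference_samples]
    return max(errors)  # worst-case deviation from ground truth

if __name__ == "__main__":
    check_functionality()
    # Invented calibration pairs: (device reading, lab-verified BAC).
    worst = check_integrity([(0.081, 0.079), (0.120, 0.118), (0.050, 0.054)])
    print(f"worst-case measurement error: {worst:.3f} BAC")
```

Expert testimony about functionality addresses only the first check; testimony about architectural integrity addresses the second, and the two can diverge.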
This evaluation process requires a better understanding of how opaque algorithms present different challenges and require different solutions, depending on the nature of the opacity. There are three distinct, but often conflated, forms of opacity: (1) opacity wherein information that is critical to understanding the algorithm is restricted (e.g., due to proprietary interests of a private party, or government classification); (2) opacity stemming from the complexity of algorithmic programming, which requires specialized expertise in computer science; and (3) opacity that results from the disconnect between the logic of the algorithm and the demands of human-scale reasoning, which requires some form of semantic interpretation.
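One way to see the difference between the second and third forms is to contrast a rule whose logic a fact-finder can read and contest directly with a learned scorer whose individual parameters carry no human-scale meaning. The sketch below is hypothetical; the rule, features, and weights are invented for illustration.

```python
# Hypothetical contrast between a transparent rule and an opaque learned scorer.

def transparent_rule(prior_arrests: int, age: int) -> bool:
    # Every step maps onto a justification a human can state and dispute.
    return prior_arrests >= 3 and age < 25

# Invented weights, as if produced by some training procedure. Each number
# contributes to the score, but no single weight "means" anything a
# fact-finder could interrogate on its own (form 3); and even reading the
# arithmetic at all takes specialist expertise (form 2).
WEIGHTS = [0.73, -1.12, 0.05, 2.41, -0.38]

def opaque_score(features: list[float]) -> float:
    return sum(w * x for w, x in zip(WEIGHTS, features))

print(transparent_rule(prior_arrests=4, age=22))    # True, and we can say why
print(opaque_score([1.0, 0.2, 3.1, 0.0, 1.5]) > 0)  # True or False, but why?
```

The first form of opacity is different in kind: even this small sketch becomes unreviewable if the weights or source code are withheld as a trade secret or classified.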
Against this backdrop, I am developing a normative framework for evaluating deference to machine-generated culpability in criminal adjudicative fact-finding. It suggests that all law enforcement tools (including binoculars, the breathalyzer, the dog sniff, software, and big-data analytics) are best thought of as species of an algorithm, and that these divide into categories reflecting their degree of agency and opacity.
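As a rough illustration of what indexing tools along those two dimensions might look like, consider the following Python sketch. The axis values and the placement of each example are my own invention for illustration, not the framework’s actual categories.

```python
# Hypothetical sketch of an "agency x opacity" taxonomy of law enforcement
# tools; the enum values and example placements are invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Agency(Enum):
    PASSIVE = "extends human senses"        # e.g., binoculars
    MEASURING = "produces a measurement"    # e.g., breathalyzer
    CLASSIFYING = "produces a judgment"     # e.g., big-data analytics

class Opacity(Enum):
    TRANSPARENT = "fully inspectable"
    RESTRICTED = "information withheld (trade secret, classification)"
    COMPLEX = "inspectable only with specialist expertise"
    SEMANTIC = "logic resists human-scale interpretation"

@dataclass
class Tool:
    name: str
    agency: Agency
    opacity: Opacity

# Illustrative placements only:
tools = [
    Tool("binoculars", Agency.PASSIVE, Opacity.TRANSPARENT),
    Tool("breathalyzer", Agency.MEASURING, Opacity.COMPLEX),
    Tool("dog sniff", Agency.CLASSIFYING, Opacity.SEMANTIC),
    Tool("risk-scoring software", Agency.CLASSIFYING, Opacity.RESTRICTED),
]

for t in tools:
    print(f"{t.name}: agency={t.agency.name}, opacity={t.opacity.name}")
```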
What excites you most about the new Data Science Faculty Fellows program at BU?
I’m most excited to be teaching a data ethics class to both law and computer science students. I’m confident that this, in conjunction with two courses I am developing at the law school—Cybersecurity Law, and National Security and Technology: Law and Policy—will open the door for new research and academic opportunities in the future.
How do you see data science changing or impacting your field over the next 3-5 years?
As I mentioned, the advancement of modern technologies in a globally networked world poses new challenges to traditional conceptions of power, territory and violence. Chief among these advancements is the use of data science in the administration of security, both on the Internet and in physical space. The benefits of using these technologies are virtually unbounded, but it is critical that we tread deliberately and with great care to avoid undermining the values we hold most dear.
This Q&A originally appeared in the Hariri Nexus, a newsletter created by BU’s Rafik B. Hariri Institute for Computing and Computational Science & Engineering.