Algorithmic Fairness Requires Diverse Developers, Regulatory Vigilance
Growing data science field is urged to adopt ethical standards and collaborate with multiple disciplines to create new algorithms.
Regulators worldwide want organizations that collect and analyze data to use what they gather responsibly and to prevent discriminatory practices. But in a world in which a growing swath of algorithms guides decisions on everything from hiring people for jobs to determining who is eligible for life insurance, how should governments, data scientists, and societies at large think about ensuring the outputs of these systems are fair?
That question was the tentpole for an October 13, 2023 roundtable discussion, “Algorithmic Fairness: Regulation, Research, and the Reality of Implementation,” hosted at the Faculty of Computing and Data Sciences (CDS). The discussion was one session in a two-day Public Interest Technology University Network annual conference in which participants explored how universities can partner with government, the private sector and community organizations to foster public interest technologies and address modern challenges.
The discussion highlighted both how challenging it is to eliminate bias from algorithms – and how essential the work remains. Among the panelists’ insights:
Algorithmic fairness begins with design
It’s not enough to focus on the output of an algorithm, said Sina Fazelpour, assistant professor of philosophy and computer science at Northeastern University’s Khoury College of Computer Sciences. That output-focused approach, which some regulators take, risks missing unjust outcomes. “The impact of the algorithm, the social consequence of integrating the tool is not the same as the quality of the facility,” he said.
Instead, Fazelpour, whose research focuses on issues of justice, diversity and reliability in data-driven and AI technologies, said he recommends examining the lifecycle of an algorithm and injecting diverse viewpoints into its creation. For example, reviewing the design, development and deployment process ensures the integrity of choices data scientists make. And including a variety of technical and business backgrounds on the development and deployment teams elevates the quality of algorithms.
Employing diverse technical experts safeguards fairness and yields business benefits
At MassMutual, a mutual life insurance and financial services company founded in 1851 (and a supporter of BU Faculty of Computing & Data Sciences’ responsible use of data research, student experiential learning, and broadening participation programs), underwriters were calculating risks long before computers and AI came along. Today, the company employs nearly 100 people focused on data science, with backgrounds spanning science, technology, math, and engineering, said Kevin Fitzpatrick, head of privacy, data and artificial intelligence governance at MassMutual.
Implementing AI-based algorithms has enabled the company to introduce life insurance policy applications that can tell customers if they are approved within a day, instead of waiting weeks for paperwork and getting a medical exam, Fitzpatrick said.
When it comes to reviewing algorithms for fairness, MassMutual relies on its multidisciplinary approach and its data and AI governance policies. The company also employs experts in medicine, law, and actuarial science, ensuring it complies with, and often exceeds, applicable laws, Fitzpatrick said.
“The key is this cross-functional approach. We’re engaging with lots of different disciplines early and throughout the process to deployment and ongoing operations,” Fitzpatrick said. He added that he leads a business function dedicated to this work – a demonstration of the company’s commitment to algorithmic fairness.
Countries’ adoption of AI algorithms requires constant vigilance to ensure fairness
Governments’ implementation of algorithms and AI systems to guide decision-making raises intense concerns, because the adoption of ethical approaches varies widely, said Merve Hickok, president of the Center for AI and Digital Policy. Among its projects, the Center issues an annual report documenting countries’ AI policies and how they stack up against democratic principles. Hickok also led an effort to establish ethical AI guidelines – work that the Organization for Economic Cooperation and Development (OECD) drew on for its ethical principles.
“You would be amazed at the difference between what a country commits to and what it actually does on the ground. [They say] we respect human rights, but then go and use AI and algorithmic systems, automated decision-making systems and predictive policing, and facial recognition for mass surveillance,” Hickok said.
Another challenge for policymakers: ensuring due process when a person feels wronged by the result of an algorithm. “Public agencies have a duty to ensure due process. If you’re an individual subjected to an outcome by an algorithmic decision system, can you actually contest the decision?” she asked. The lack of such a process has the potential to harm citizens and communities at large, she said.
About the Moderator:
Prof. Michelle Johnson, Professor Emerita, Journalism, Boston University
About the Panelists:
- Prof. Sina Fazelpour, Assistant Professor of Philosophy and Computer Science in the Department of Philosophy and in the Khoury College of Computer Sciences at Northeastern University
- Kevin Fitzpatrick, head of MassMutual’s Privacy, Data, and AI Governance Organization
- Merve Hickok, President, Center for AI and Digital Policy (CAIDP); Founder, AIethicist.org; named to the 2022 list of 100 Brilliant Women in AI Ethics
- Prof. Manish Raghavan, Drew Houston (2005) Career Development Professor at the MIT Sloan School of Management
By Michael S. Goldberg, CDS Contributor