Can an Ancient African Philosophy Save Us from AI Bias?

Sabelo Mhlambi, who will speak Friday as part of the Hariri Institute’s speaker series AI & Inequalities–Creating Change, says technology companies have to understand that they have an indispensable duty to the community. Photo courtesy of Carr Center for Human Rights Policy
Inaugural Hariri Institute for Computing and Computational Science & Engineering Speaker Series kicks off Friday with an exploration of the colonial roots of AI inequities
The biases of artificial intelligence systems, from programs that predict criminal recidivism to those that vet job applications, are well known, but their origins are rarely traced back to the colonial expansion of Europe. Sabelo Mhlambi, a Practitioner Fellow at Stanford’s Digital Civil Society Lab and a 2018–2020 Technology & Human Rights Fellow at Harvard’s Carr Center for Human Rights Policy, researches what he sees as the centuries-old legacy behind artificial intelligence (AI) bias, and a possible remedy for that bias in Ubuntu, a pan-African philosophy that puts the good of the community before individual self-interest.
Mhlambi will share his thoughts on the problem and the solution at 11 am Friday, February 12, at the first of four events in the Hariri Institute for Computing and Computational Science & Engineering Inaugural Distinguished Speaker Series, AI & Inequalities—Creating Change. Mhlambi, who is also an affiliate at the Berkman Klein Center for Internet & Society, is the founder of Bantucracy, a technology and technology policy company that focuses on Ubuntu ethics.
BU Today talked to Mhlambi about the long history of injustice behind AI bias, and how a broader appreciation of Ubuntu could lead to more equitable applications of technology.
Q&A
With Sabelo Mhlambi
BU Today: Your talk is titled Decolonial AI: Confronting AI’s Socially Constructed Role and Its Undermining of Human Rights and Human Dignity. So my first question is: what is decolonial AI?
Sabelo Mhlambi: Decolonial AI tries to look at the underlying systems built on top of legacies of slavery, genocide, and colonialism that shape the way we build artificial intelligence, how we use artificial intelligence, and the problematic role that artificial intelligence plays in society. The core idea is that the legacy of colonialism continues today, just in a different form; in the literature this is described as “coloniality.”
How did that happen?
It happens because we build artificial intelligence systems without examining the other interconnected systems of inequality, and we end up creating an environment where artificial intelligence exacerbates existing inequalities.
What are some of the inequalities exacerbated by AI?
Racism is one. And sexism. To better understand AI through the lens of race, we have to look at the crucial role of racism and colonialism in shaping today’s world. Artificial intelligence picks up on this unresolved status quo. When we look at sentencing recommendations from AI systems, we find that they tend to inaccurately predict that African Americans are more likely to reoffend than white people. That’s because the previous sentences were handed down by judges whose opinions were biased. We have also seen that systems that rate résumés submitted in job applications give women and Black people lower scores. We see the same thing in the credit scores used to lend money. Race is always in the background, shaping the way we operate, and when we build artificial intelligence systems based on such data, we reproduce the biases.
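To make the mechanism concrete, here is a minimal sketch in Python using synthetic data and the scikit-learn library. The scenario, numbers, and variable names are illustrative assumptions, not drawn from Mhlambi’s work or any real sentencing system; the point is only that a model trained on labels shaped by biased past decisions reproduces the disparity even when the protected attribute is excluded from its inputs.

    # Minimal illustrative sketch: a toy risk model trained on historically
    # biased labels reproduces the bias, even though the protected attribute
    # is never given to the model. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical protected attribute (0 or 1) and a correlated proxy
    # feature, e.g., a neighborhood variable shaped by historical segregation.
    group = rng.integers(0, 2, n)
    proxy = group + rng.normal(0, 0.5, n)

    # "Historical" labels: the underlying behavior is identical across groups,
    # but past decisions labeled group 1 as high risk more often.
    behavior = rng.normal(0, 1, n)
    label = (behavior + 0.8 * group + rng.normal(0, 0.5, n) > 0.9).astype(int)

    # Train only on the proxy and behavior; the protected attribute is withheld.
    X = np.column_stack([proxy, behavior])
    model = LogisticRegression().fit(X, label)

    # The model still scores group 1 as higher risk, because the proxy lets it
    # reconstruct the bias baked into the labels.
    scores = model.predict_proba(X)[:, 1]
    print("mean risk score, group 0:", scores[group == 0].mean())
    print("mean risk score, group 1:", scores[group == 1].mean())

Because the proxy correlates with the withheld attribute, simply deleting the protected column does not remove the bias, which is why the data and the systems that produced it, not just a model’s inputs, have to be examined.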
What does “decolonial” mean as it is used in your talk tomorrow?
To understand the meaning, we first have to accept that our world was largely shaped by colonization and other human rights violations. If we can’t accept that, we can’t move forward.
We are trying to point out and address the social, political, and economic inequities that are a direct result of colonization. This is an emerging movement, and the main idea is that the supposed rationality that shaped our economic and political systems was in fact built on a series of contradictions and injustices. The concept of the rational European was used to deny that Africans and Indigenous people were also rational, and therefore, the argument went, that those people could be subjugated.
So the rational European was actually not rational?
Yes. If you look at the philosophy of rationality as personhood, you find that it was flawed from the start by irrationality. While promoting liberty and individual rights, it was sustained by the suppression of human rights, a suppression often justified by prominent European and American thinkers. In the United States, we had liberty and freedom, but with slavery. We had “free markets,” but they were sustained by forced markets in Africa and Asia. People were dispossessed of their own economic systems and livelihoods and forced to participate in systems developed in Europe and America for the benefit of the West. That was hailed as a great achievement for Europeans, yet it was founded on inconsistent and irrational ideologies. The system has been broken for centuries.
None of that means there aren’t benefits from the scientific revolution, but it does mean we should look at how it created such disparities between the Western world and the rest of the world.
How does all of this relate to decolonial AI?
A driving force behind decolonial AI is that if we don’t look at how these historic systems of inequality still persist, the AI systems will always reproduce the inequities. We cannot risk continuing to build AI systems on a broken, partial, and biased foundation.
How can we address that?
Well, at the very least, we have to start having conversations about it. We have to take a step back and think about where we went wrong. And we have to go far enough back to see the way our economic, social, and political systems have been shaped. The economy was always designed to lift one group up at the expense of another, often through force and violence. That’s how capitalism spread. It wasn’t because of its own merits or its own virtues.
We have to have a conversation and invite different stakeholders, including those people who have been affected by historic injustices. We have to look at the problems together, collectively, and we have to lay out the possible solutions collectively. One solution is Ubuntu.
Tell us about Ubuntu. What is it and how does it play into your discussion of AI bias?
Ubuntu is a sub-Saharan African moral philosophy that gives us a good framework to think about how we might build artificial intelligence and also how we might restructure society and redress historical harms. Ubuntu literally means “becoming a person.” It is a philosophy that many African ethnicities have developed over thousands of years, and it defines what it means to be a person. With Ubuntu it is not enough to simply be a rational thinking being. That’s not what makes us people or separates us from other creatures. With Ubuntu, being a person is about being relational. It’s through our harmonious relationships with others that we become people, and through meaningful relationships with nature, community, and society that we enrich our own humanity.
The main point is that for civilization or for society to advance, it’s not enough to be scientific and rational. Through our technology and scientific developments we can easily destroy each other and the world. Progress and prosperity occur when we cooperate and collaborate. We have to work together and enrich the humanity of each other. We have to place the ultimate value in [the fact] that we are one human family, inextricably linked to each other. Ubuntu captures that understanding. We are all connected.
How do we incorporate that understanding in AI?
There are different ways we can do that. One concerns our relationship to the people who will use these systems. We have to build systems that reflect their values and well-being. We have to incorporate the idea of community into the building of AI systems.
And we have to look at the funding of those systems and what kind of people get the venture capital to build them. The skill set needed to build AI systems also needs to be democratized, and the benefits of these systems have to be accessible to all, even the most underserved parts of the population.
Lastly, I would say that tech companies, large and small, have to understand that they have an indispensable duty to the community. They do not exist in isolation. They are also part of the community, with obligations to its well-being. When companies have to make a choice between people and profit, they have to choose people. Through such unity we can face the challenges of AI and of the ongoing historic inequalities that shape AI and the society we live in today.
The Hariri Institute for Computing and Computational Science & Engineering’s virtual Inaugural Distinguished Speaker Series, AI & Inequalities—Creating Change, begins Friday, February 12, at 11 am, when Sabelo Mhlambi discusses Decolonial AI: Confronting AI’s Socially Constructed Role and Its Undermining of Human Rights and Human Dignity. Register here.
The remaining Hariri Institute speaker series events include From Ethics to Organizing: Getting Serious about AI, a conversation with Meredith Whittaker, Minderoo Research Professor at New York University, founder of Google’s Open Research group, and cofounder of AI Now Institute, on Friday, February 19, at 11 am; The Hierarchy of Knowledge in Machine Learning and Related Fields and Its Consequences, with computer scientist Timnit Gebru, cofounder of Black in AI, on Monday, February 22, at 11 am; and a series panel event featuring all three speakers on Friday, March 5, at noon. Find more details and register here.