POV: Artificial Intelligence Has a Powerful Brain, but It Still Needs a Heart
“Efforts to cultivate algorithmic fairness lag far behind the enthusiasm to adopt the technology”
American industry is in the midst of another revolution. This one is taking us to a place where decisions of many kinds, from when you should go in for a coronary bypass to where your car should turn left, will no longer be made entirely by us; they will be guided by artificial intelligence. That’s good news, because artificial intelligence (AI) holds great promise for improving the health and welfare of much of the planet. But for society to take full advantage of the power of AI, algorithmic outcomes must be fair, and the application of those outcomes must be ethical.
So far, efforts to cultivate algorithmic fairness lag far behind the enthusiasm to adopt the technology. Industry, with its drive for competitive advantage and focus on profits, has shown little inclination to shoulder this responsibility. The institution that needs to play a critical role in leading the way to an AI-powered world that is both ethical and fair is higher education.
At many technology companies, concern about the potential for unethical use of AI is the elephant in the room, and employee unease has prompted the hiring of ethics officers and the creation of review boards. This is a step in the right direction, but it is not enough to overcome the challenge we face.
The challenge is twofold: the underpinnings of the technology itself, and the application of AI in ethical and unbiased ways. Machines may be fast learners, but the data they learn from are often a compilation of human decisions. If those decisions are freighted with bias, and they often are, AI can bake that bias in, creating a system that is perpetually unfair.
Research has shown, for example, that some AI-powered facial recognition software returns more false matches for African Americans than it does for white people. Such technological shortcomings are exacerbated by AI’s failure to recognize them as shortcomings, as humans might do. Algorithms do not self-correct. They self-reinforce.
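To make that disparity concrete, consider what a fairness audit actually measures: for each demographic group, how often the software declares a match where none exists. The sketch below is a minimal and entirely hypothetical illustration of that bookkeeping; the groups, counts, and rates are invented for illustration, not drawn from any study cited here.

```python
# A minimal, hypothetical audit: compare false-match rates across groups.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, predicted_match, actual_match) tuples."""
    false_matches = defaultdict(int)
    non_matches = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                # only true non-matches can become false matches
            non_matches[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}

# Invented numbers for illustration: the software errs six times as often on group A.
records = ([("A", True, False)] * 12 + [("A", False, False)] * 88
           + [("B", True, False)] * 2 + [("B", False, False)] * 98)
print(false_match_rate_by_group(records))   # {'A': 0.12, 'B': 0.02}
```

A gap like the one in this toy example is exactly the kind of disparity a deployed algorithm will preserve and reapply until a human intervenes.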
And humans, although gifted with the ability to identify problems and make ethical corrections, apply that ability in a very human way: selectively. Officials from Immigration and Customs Enforcement recently combed driver's license databases in states that use facial recognition technology, without informing the drivers. And Amazon shareholders recently rejected a proposal to audit its facial recognition software, which critics claim can lead to false matches and wrongful arrests. With law enforcement engaging in questionable practices and industry failing to do the work needed, academia must redouble its leadership in this critical area.
With its teaching mission, its predilection for interdisciplinary research, and its indifference to quick profits from research projects, academia is well positioned to lay down the path to ethical AI. The teaching part of that effort is already underway. A Law for Algorithms course, which explores the impact of algorithms on society, was recently taught jointly at Boston University, Harvard, Columbia, and Berkeley, and many other universities include similar considerations in computer science courses. Cornell, MIT, Stanford, and the University of Texas offer specific courses on the ethical design of intelligent systems. Universities are also undertaking the research needed to power unbiased AI.
At Boston University, researchers are investigating techniques that could be used to reliably apply algorithms trained on one population to other populations that were underrepresented in the training set. They are also trying to determine exactly how learning machines reach their conclusions, because it seems doubly unethical to apply potentially unfair conclusions when those conclusions cannot be explained to the humans who live by them. The goal of all this is the development of what is aptly called “fair machine learning,” which will allow us to tap the power of AI to study societal problems, ranging from affordable housing to the influence of fake news.
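One standard building block in this line of research is easy to state, even if making it reliable is not: when a population is underrepresented in the training set, reweight its examples so the learner cannot ignore it, then measure performance group by group. The Python sketch below illustrates that idea on synthetic data; it is a generic textbook technique, not the BU researchers' method, and every dataset and number in it is invented.

```python
# A generic reweighting sketch on synthetic data (not actual research code):
# give an underrepresented group enough weight that the learner cannot ignore
# it, then evaluate accuracy group by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic population whose true decision boundary depends on `shift`."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X0, y0 = make_group(950, 0.0)    # majority population
X1, y1 = make_group(50, 1.5)     # underrepresented population
X = np.vstack([X0, X1])
y = np.concatenate([y0, y1])
group = np.array([0] * 950 + [1] * 50)

# Inverse-frequency weights: each group contributes equally to the training loss.
weights = np.where(group == 0, 1 / 950, 1 / 50)

for name, w in [("unweighted", None), ("reweighted", weights)]:
    model = LogisticRegression().fit(X, y, sample_weight=w)
    for g in (0, 1):
        mask = group == g
        print(f"{name}: group {g} accuracy = {model.score(X[mask], y[mask]):.2f}")
```

The intended effect is that the unweighted model serves the majority group well and the minority group poorly, while reweighting trades a little majority accuracy for a gain on the underrepresented group. Making that trade-off reliable, and explainable to the people it affects, is precisely the open problem described above.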
To accomplish that, universities need the government's help. In line with the recommendations of the National Artificial Intelligence R&D Strategic Plan, public policy and federal funding agencies should support research initiatives that bring the expertise of academic scientists to bear on projects that serve the public good. Laws and regulations must advance and support the research and applicable solutions coming out of higher education, and require industry to incorporate the ethics of AI as a critical component of its business model. Additionally, federal funding will not only help advance AI but, as a recent study shows, also fuel innovation and job growth. That's a win-win.
Even that is not enough. A comprehensive examination of AI's potential for harm, as well as for good, should be an integral part of all computer science education, and should work toward cultivating ways of thinking that are particularly attuned to fairness. Furthermore, as AI expands its role in the guidance of our lives, the discipline we have long labeled computer science must be considered as much a social science as it is a STEM field, replete with the moral mission inherent in fields like public health or economics. This broadening of the discipline, aided by greater engagement with the humanities and social sciences, is critical as higher education trains the next generation of computer scientists.
All of us, in government, academia, and industry, must work together to set the future of AI on the most beneficial and safest course for everyone—but higher education, with proper support and with leadership from the computer science community, must step up and lead the charge.