BU Today

Is Your Computer Sexist?

It may say “boss” is a man’s job, BU and Microsoft researchers discover

It had to happen. In an era when the nation’s president-elect has been routinely criticized for his sexist remarks about women, BU researchers, working with Microsoft colleagues, have discovered that your computer itself may be sexist.

Or rather, they’ve discovered that the biased data we fallible humans feed into computers can lead the machines to regurgitate our bias. And there are potential real-world consequences from that.

Those findings are in a paper produced by the team, whose two BU members are Venkatesh Saligrama, a College of Engineering professor of electrical and computer engineering with a College of Arts & Sciences computer science appointment, and Tolga Bolukbasi (ENG’18).

The team studied word embeddings—algorithms that one member of the team described to National Public Radio as dictionaries for computers. Word embeddings allow computers to make word associations. To take a hypothetical example NPR used, a tech company looking to hire a computer programmer can use an embedding that knows “computer programmer” is related to terms such as “JavaScript” or “artificial intelligence.” A computer program with that word embedding could cull résumés that contain such related words. So far, so harmless.
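The relatedness the article describes is typically measured as cosine similarity between word vectors. The sketch below uses tiny made-up vectors purely for illustration; a real embedding such as word2vec has hundreds of learned dimensions:

```python
import numpy as np

# Toy 4-dimensional vectors standing in for real embedding rows.
# The values are illustrative only, not taken from word2vec.
VECS = {
    "computer_programmer": np.array([0.9, 0.8, 0.1, 0.0]),
    "javascript":          np.array([0.8, 0.9, 0.2, 0.1]),
    "gardening":           np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means related, near 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A résumé screener could rank terms by similarity to the job title:
assert cosine(VECS["computer_programmer"], VECS["javascript"]) > \
       cosine(VECS["computer_programmer"], VECS["gardening"])
```

Because similar words end up with nearby vectors, a résumé containing “JavaScript” scores as relevant to “computer programmer” even though the strings share no characters.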

But word embeddings can recognize word relationships only by studying batches of writing. The researchers particularly focused on word2vec, a publicly accessible embedding nourished on texts from Google News, an aggregator of journalism articles. Turns out that those articles contain gender stereotypes, as the researchers found when they asked the embedding to find analogies similar to “he/she.”

The embedding spit back worrisome analogies involving jobs. For “he” occupations, it came up with words like “architect,” “financier,” and “boss,” while “she” jobs included “homemaker,” “nurse,” and “receptionist.”
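Analogies of this kind are generated with simple vector arithmetic: to solve “he is to boss as she is to ?”, find the word whose vector is closest to v(boss) − v(he) + v(she). A minimal sketch, again with hypothetical toy vectors in which one axis loosely encodes gender:

```python
import numpy as np

# Illustrative vectors only: axis 0 stands in for a "gender" direction.
VECS = {
    "he":        np.array([ 1.0, 0.0, 0.0]),
    "she":       np.array([-1.0, 0.0, 0.0]),
    "boss":      np.array([ 0.9, 1.0, 0.0]),
    "homemaker": np.array([-0.9, 1.0, 0.0]),
    "nurse":     np.array([-0.8, 0.2, 0.9]),
}

def analogy(a, b, c, vecs):
    """Solve a : b :: c : ?  by nearest neighbor to v(b) - v(a) + v(c)."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: float(
        candidates[w] @ target /
        (np.linalg.norm(candidates[w]) * np.linalg.norm(target))))

# In a biased embedding, the offset from "he" to "she" drags "boss"
# toward a stereotyped occupation:
print(analogy("he", "boss", "she", VECS))  # → homemaker
```

The arithmetic itself is neutral; the troubling answers come from the geometry the training text imposed on the vectors.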

Theoretically, these distinctions could promote real-world inequity. Companies increasingly rely on computer software to analyze job applications. Say that hypothetical tech company seeking a computer programmer used embeddings to weed through résumés.

“Word embeddings also rank terms related to computer science closer to male names than female names,” the BU-Microsoft team says in its paper, to be presented at this week’s Neural Information Processing Systems (NIPS) conference in Barcelona, the top annual meeting on machine learning. “In this hypothetical example, the usage of word embedding makes it even harder for women to be recognized as computer scientists and would contribute to widening the existing gender gap in computer science.”

“These are machine learning algorithms that are looking at documents, and whatever bias exists in our everyday world is being carried into these word embeddings,” Saligrama says. “The algorithm itself is pretty agnostic. It doesn’t care whether there exists an underlying bias or no biases in the document itself.…It is just picking up on what words co-occur with what other words.” The bias is in the data set being examined, like Google News.

“What our paper uncovers is that just because a machine does stuff agnostically doesn’t mean it is going to be unbiased.…What machine learning is about is: you look at the world, and then you learn from the world. The machine is also going to learn biases that exist in the world it observes.”

The researchers didn’t just decide on their own which pairings were sexist and which weren’t; they ran each analogy by 10 people using Amazon Mechanical Turk, the crowdsourcing online marketplace. If a majority deemed an analogy sexist, the researchers accepted their judgment.
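The decision rule described above is a strict majority vote. A minimal sketch (the function and data here are hypothetical, not the researchers’ code):

```python
# Each analogy was shown to 10 Mechanical Turk workers; it was labeled
# sexist only if more than half of them judged it so.
def majority_says_sexist(votes):
    """votes: list of booleans, one per crowd worker."""
    return sum(votes) > len(votes) / 2

assert majority_says_sexist([True] * 6 + [False] * 4)      # 6 of 10: sexist
assert not majority_says_sexist([True] * 5 + [False] * 5)  # a tie is not a majority
```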

The researchers say that they wrote their own algorithms that maintain appropriate gender-based associations while screening out sexist stereotypes. “It sounds like an ugly problem, because there are many, many, many, many words, and it seems very hard to go individually and remove these biases,” says Saligrama. But the computer’s ability to make word associations enables it, when fed some biased words, to predict other words that could be similarly sexist, he says. “So it is able to then remove biases…without a human being performing, word by word, the whole dictionary.”
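The core geometric idea behind this kind of debiasing is to estimate a “gender direction” from pairs like he/she, then subtract each gender-neutral word’s projection onto that direction. The sketch below is a simplified illustration of that projection step with made-up vectors, not the team’s released code (their full method also equalizes legitimately gendered pairs):

```python
import numpy as np

def remove_gender_component(v, gender_direction):
    """Subtract v's projection onto the gender direction, leaving v
    orthogonal to (i.e., neutral along) that direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return v - (v @ g) * g

# Toy setup: estimate the gender direction from a single he/she pair.
he  = np.array([ 1.0, 0.2, 0.0])
she = np.array([-1.0, 0.2, 0.0])
g = he - she                          # points along axis 0

nurse = np.array([-0.8, 0.2, 0.9])    # a gender-neutral word with a biased vector
debiased = remove_gender_component(nurse, g)

assert abs(debiased @ g) < 1e-9       # no gender component remains
```

Because the gender direction generalizes across the vocabulary, projecting it out handles many biased words at once, which matches Saligrama’s point that no one has to scrub the dictionary word by word.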

They will make their algorithms publicly available shortly on the computer code-sharing platform GitHub, he and Bolukbasi say.

The two plan to pursue additional research. They’ve begun to look at racial bias in Google News articles, and they hope to expand their study beyond English. “We’ve thought about how to quantify bias among different languages, when you look at gender or when you look at bias,” says Saligrama. “Do certain languages have more bias versus others? We don’t know the answer to that.”

Rich Barlow

Rich Barlow can be reached at barlowr@bu.edu.

2 Comments on Is Your Computer Sexist?

  • Logic Rules on 12.06.2016 at 11:43 am

    I am not biased when I consider gender, creed, color, national origin, religious belief and a number of other properties. However, I am biased against self-serving academicians, who make a big deal out of assumptions, and who promote importance on issues which will not better society in substantial ways.

    • Implicit Bias on 12.07.2016 at 11:26 am

      As learning beings, we are all influenced by our environments and have implicit biases. Recognizing and exploring these biases is essential. Research like this is so important because it demonstrates how these biases manifest and can create inequity.

      I urge you to read and learn more about implicit bias. You can start here: http://kirwaninstitute.osu.edu/research/understanding-implicit-bias/
