Inside the Mind of an Internet-Safety Vanguard
“You can never say your job is done,” says BU computer engineer Gianluca Stringhini, whose research focuses on cyber safety and making the internet a safer place for everyone
Gianluca Stringhini’s job of rooting out malicious internet users is like a game of digital Whac-A-Mole. A computer engineer who focuses on cybersecurity, Stringhini says he finds the ever-evolving problem of harmful online activity—and the persistent, anonymous antagonists behind it—intellectually stimulating.
It’s a surprisingly plucky viewpoint. Any casual internet user who occasionally drops into the increasingly troll-infested playgrounds of online forums and social media sites might instead call his work “Sisyphean.”
“I got interested in this field because it seemed to me that it offered good opportunities to help people and make a difference,” says Stringhini, a Boston University College of Engineering associate professor of electrical and computer engineering. “The problem is constantly evolving, with adversaries adapting to whatever defense researchers might develop to keep carrying out their malicious activities. You can never say that your job is done.”
Stringhini and his colleagues have probed the extent to which spammers have infiltrated social networks, examined the relative influence of state-sponsored disinformation campaigns on Twitter, and delved into communities of cybercriminals that misuse online accounts to spread harmful content or steal sensitive information.
His fascination with malicious activity online, and his drive to stop it, have traveled with him around the world. Stringhini, who grew up in Italy, earned degrees in computer engineering and computer science in his home country and the United States; he also taught computer science and security and crime science for four years at University College London.
“I’m a computer scientist by training,” Stringhini says. “My research field is cybersecurity—or at least that’s where I started. I’ve always been interested in better understanding and mitigating malicious online activity. I started with things like online spam, malware, fraud, and so on. More recently, I got interested in a more nuanced and human-driven type of malicious activity: cyberbullying, cyber-aggression, misinformation and disinformation, influence operations, those sorts of things.”
The Consequences of Deplatforming
Lately, Stringhini has been digging into spaces on the internet that are designed for and by women. He was curious about two highly toxic websites that had sprung up after their users were banned from the discussion board platform Reddit. These sites became clubhouses for so-called gender-critical feminists (people who refuse to acknowledge transgender women as women) and for femcels, the female counterpart of the better-known incels, men who describe themselves as “involuntarily celibate.”
Stringhini and his colleagues were interested in these two groups because they’d been deplatformed, he says—banned from the popular social media sites that had once helped them connect to each other and, crucially, to spread their messages to the wide audience of people using those same websites. In other words, if Reddit gave these groups a platform to broadcast their ideas, kicking them off was akin to pulling that platform out from under them.
Deplatforming has become a fairly common consequence for bad actors online. It happened to former President Donald Trump, when he was banned from Twitter in early 2021, just days after the January 6 insurrection. Twitter officials “permanently suspended the account due to the risk of further incitement of violence,” according to a news release from January 8. The suspension turned out not to be permanent: Twitter’s new owner, Elon Musk, restored Trump’s account at the end of 2022.
“What we are interested in, as computer scientists, as engineers, and as security researchers, is, what are the consequences of these deplatforming events?” says Stringhini, who is also affiliated with the BU Faculty of Computing & Data Sciences.
When Trump was forced off Twitter, he turned to other sites with less stringent content moderation, eventually creating his own. But what about people who hadn’t been leaders of the free world? What would they do? What Stringhini found was surprising.
“When I was working on spam and malware and fraud, blocking content was a good thing. If you don’t see spam in your email anymore, that’s good,” Stringhini says. “But in this case, these people don’t go away, they just move somewhere else. These communities, they migrate, they create their own servers, and their own websites after they are suspended—but only a fraction of the members of these communities will migrate, because those are only the ones that are very committed to the cause. Those who do migrate become more active, they become more toxic. And, potentially, they come back and organize aggression attacks against their original community.”
In some cases, pushing a toxic group into the shadows only serves to harden its resolve. At the same time, he says, deplatforming as a means to shield other users from these vitriolic viewpoints does seem to work. When certain topics or users are kicked off mainstream social media sites, fewer people see their posts. The catch-22 is that those posts, swept under the rug, tend to fester there.
Flagging Harassment and Stopping Trolls
Stringhini isn’t satisfied with just identifying the problem; he also wants to develop meaningful solutions. And, for some problems, he has.
In 2019, Stringhini and his collaborators built a machine learning tool that helps online platforms identify and flag coordinated harassment attacks in their early stages. These pile-ons are typically the product of an organized campaign against a person or group, culminating in a bombardment of hateful, aggressive comments from a brigade of online trolls. The targeted harassment that female game developers and gamers faced a few years ago is one such example.
There’s typically a pattern of posting and cross-posting in the lead-up to these types of attacks, Stringhini says, and he’s trained an algorithm to detect that pattern before the attack campaign can gain too much steam.
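The article doesn’t spell out what that pattern looks like to the model, but the general shape of such an early-warning system can be sketched. What follows is a minimal, illustrative Python sketch, not Stringhini’s published tool: it assumes, as one plausible reading of “posting and cross-posting,” that a raid’s lead-up appears as a burst of many distinct accounts linking to the same target within a short window, and turns that burst into features a standard classifier could score. The Post fields, the feature choices, the threshold, and the alert_moderators hook are all hypothetical.

```python
# Minimal sketch of an early-warning signal for coordinated harassment
# "raids." This is NOT the published system; the features below are
# illustrative assumptions based on the article's description of a
# posting/cross-posting pattern that precedes attacks.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Post:
    author: str
    text: str
    link_target: Optional[str]  # URL the post points at (cross-posting signal)
    timestamp: datetime

def window_features(posts: list[Post], target_url: str,
                    window: timedelta = timedelta(minutes=30)) -> list[float]:
    """Summarize recent activity referencing a single target thread or video.

    Hypothetical features: (1) posts in the window linking to the target,
    (2) distinct authors doing so, (3) how tightly those posts cluster in
    time. `posts` is assumed sorted oldest to newest.
    """
    if not posts:
        return [0.0, 0.0, 0.0]
    now = posts[-1].timestamp
    recent = [p for p in posts
              if p.link_target == target_url and now - p.timestamp <= window]
    if not recent:
        return [0.0, 0.0, 0.0]
    n_posts = float(len(recent))
    n_authors = float(len({p.author for p in recent}))
    span_s = max((recent[-1].timestamp - recent[0].timestamp).total_seconds(), 1.0)
    return [n_posts, n_authors, n_posts / span_s]  # burstiness: posts per second

# Given historical windows labeled raid / no-raid, a standard classifier
# can be fit on these features and used to flag attacks as they build:
#
#   from sklearn.linear_model import LogisticRegression
#   clf = LogisticRegression().fit(X_train, y_train)  # rows of window features
#   if clf.predict_proba([window_features(posts, url)])[0, 1] > 0.9:
#       alert_moderators(url)  # hypothetical platform hook
```

A single burstiness threshold would catch only the crudest raids; the point of a learned model, per the article’s description, is to recognize the lead-up pattern before the flood of hateful comments itself arrives.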
This work animates Stringhini, but it can also be exhausting at times, he says. He himself has been trolled, seen his image edited into “all sorts of unsavory poses,” and has received “weird and threatening emails” because of the work he does.
“We try to do our best at keeping our sanity by taking breaks, talking to others about the issues we may get, and so on,” Stringhini says. “Luckily, I’ve never felt like there was a real danger. But this also shows how dangerous this can be. If I get these kinds of messages as a white guy, I can only imagine how more marginalized people may get targeted.”
As the line between our online and offline lives becomes blurrier, Stringhini feels a greater urgency to his work of making the web a safer place for everyone.
“Maybe these aren’t the biggest challenges we’re facing as a society, but it has become clear that these are important challenges,” he says. “The rise in conspiracy theories, especially around the [US] elections, is alarming. At the same time, as online life has become more important for people—especially during the pandemic—you should be able to feel safe there. If I can contribute to people feeling even a little safer, that’s what I want to do.”