
Sejin Paik (’24), a PhD candidate in the Emerging Media Studies program, is interested in the impact of AI and algorithms on newsrooms and journalism ethics. Photo by Guramar Lepiarz

When Robots Deliver the News

A Q&A With Sejin Paik (’24) on the growing role of—and concern with—AI- and algorithm-driven journalism

November 27, 2023



It used to be that to get the day’s news, one would open a locally reported newspaper or turn the television to one of three national broadcast networks. The advent of the Internet turned all of that upside down, and the more recent use of artificial intelligence (AI) to generate content, and of algorithms to decide which content we see, promises to disrupt the journalism industry even further—if it doesn’t destroy it altogether.

Sejin Paik (’24), a PhD candidate in the Emerging Media Studies program, is interested in the impact of AI and algorithms on newsrooms and journalism ethics. For a 2023 paper in Digital Journalism, “Journalism Ethics for the Algorithmic Era,” Paik interviewed 16 news editors from as many US states to learn how the adoption of algorithmic platforms and other technologies to gather and distribute news may bump up against those newsrooms’ own ethical guidelines—while also helping journalists do their jobs better.

With the right guardrails and oversight, I believe that generative AI has the potential to tackle some longstanding challenges that local journalism has faced for years, by empowering individual journalists as well as newsrooms that are willing to experiment.

Sejin Paik

Paik arrived at BU by way of Google. There, she worked with a team to launch, in 2018, the company’s AI-driven news aggregator app, Google News. She says the experience of seeing the way news was being curated and distributed on the Big Tech side pushed her to pursue a PhD at COM so that she could explore the impacts new technologies are having on journalism. “I really come from a love for the journalism industry,” she says. “I’m always on the side that we need to support journalists to create good information flow.”

COMtalk spoke with Paik about the effect AI and algorithms are having on local newsrooms and consumers, and what she thinks should be put in place as safeguards. 


With Sejin Paik

COMtalk: For those who don’t know, can you briefly define algorithms, algorithmic systems and AI, and how they work in the context of news creation and distribution?

Sejin Paik: An algorithm is a sequential procedure used to solve a problem or achieve an intended outcome. In my research, I consider algorithms in the context of AI, in which algorithms are run by AI techniques such as machine learning and natural language processing and generation to simulate and scale human intelligence processes. I focus on platforms such as social networking sites and search engines that use algorithms to provide services such as curating news feeds. This year, I have expanded my research into investigating generative AI tools driven by large language models and multimodal models for both text and visual content generation for news.

COMtalk: How are journalists and newsrooms engaging with AI and algorithms currently?

Sejin Paik: For more than ten years, the most deliberate use of AI has been through analytics. [These are] audience engagement tools that really shot up with the rise of social media—predicting and aggregating things like ad audience, content and analytics. And I think journalists know that they are using those tools. Where things get a little unintentional for some journalists is that even a search on Google—that’s AI. I think now, everybody is aware of that. But in the past, it was more like, “Oh, I’m using the search engine,” not “I’m using AI and AI-driven algorithms.” Then, journalism breaks off into general reporting—turning around stories daily—and computational journalists, whom I consider the investigative journalists. They [often] had a background in data science or computer science, so they would use their own artisanal AI or advanced computational tools. And now—it’s been about a year since [LLM conversational AI tools] ChatGPT and Bard came out, so I am pretty sure that most journalists use them in some capacity for brainstorming and lightweight production. The younger generations are so focused on the platformization of news [across] different social media platforms that I’m sure they use AI to generate different voices of content.

COMtalk: What are some of the tensions traditional journalists face in doing so?

Sejin Paik: Going back to the first one—the analytics tools—ultimately, journalists want to see raw data, but these analytics tools won’t show that. These dashboards will only show what the company wants to show. A lot of journalists told me that they can’t fully trust the data they have on their audience and on content engagement.

The most interesting finding for me was the realization that [journalists] really don’t have the ability to set context around a story, particularly on algorithmic feeds like social media. One reporter coined this the “digital soup” issue. When they’re putting their stories on these mainstream platforms, journalists can know readers are coming to a website with a particular context. Whereas on social media, readers are coming in with completely different needs and they might not even be going in for news. That’s where journalists are like, “We don’t know how the readers are actually getting to our news, what kind of headspace they’re in.” And that is the algorithmic curation effect of the digital soup. You see the news next to a story of your friends having a baby, followed by a story about your favorite basketball team. Currently, journalists don’t have any agency over this functionality, so they’re unable to grasp how the same exact news is being interpreted in different information feeds and whether curated feeds actually have a significant impact on news consumption.

The digital soup affects the whole information ecosystem: those who create the information, those who consume it and, ultimately, the perception of the information itself. This, in my opinion, is one major reason why tech companies developing advanced AI models that distribute and produce various content, including the news, need to embed journalistic ethical standards into the training of their algorithms. From an implementation standpoint, I understand this is tricky because tech companies have other stakeholder needs to fulfill—and often, they have shied away from taking responsibility for the power they wield over the journalism industry.

COMtalk: What effect is the use of AI and algorithms in journalism having on news consumers?

Sejin Paik: One huge thing is the amplification of national news outlets’ stories. Existing algorithmic newsfeed platforms might have partnerships with bigger newsrooms or journalistic organizations, which incentivizes them to show stories from these partners. And on the consumption side, I think a lot of user engagement data shows people clicking through to more well-known organizations, which continues to amplify that cycle and pushes out smaller local news organizations’ articles. Importantly, people are also more prone to engaging with news headlines that are polarizing and spark emotion-driven conversations, whether humorous or highly sensational. This means that stories produced with the objective, balanced tone of traditional journalism will be less popular than a news article that stokes controversy or an individual’s social media post expressing personal opinions about a certain topic.

COMtalk: Are there ways to recognize characteristics of news content generated by algorithms or artificial intelligence?

Sejin Paik: Right now, no. I think companies and platforms are working on it; they call it watermarking technology. When we talk about the generation of text [by AI], that’s still hard to detect. With images, it’s a little different because the image generation technology isn’t all the way there, particularly in generating hyper-realistic images of humans and scenes. Of course, some of the platforms already do it very well. For example, when it’s used for artistic purposes, I think it’s almost indiscernible, or it’s explicitly labeled, “this is created by AI” or “this is AI art.” But if we’re talking about political news generation, and news that’s intended for disinformation purposes, news images are a bit harder to fake, though we’ve been seeing more and more fake news images and videos created by “deepfakes” that confuse and mislead people. And, of course, people can further edit photos, which is nearly impossible to detect. In other words, for both text and image generation, the AI technology is good enough that it’s hard to tell whether something is human-produced or AI-generated.

COMtalk: Is it too early to say whether the integration of AI and algorithms into traditional journalism is negative or positive—or is that the wrong question altogether?

Sejin Paik: I hope that both news consumers and experts don’t get too caught up in the hype of whether this is going to completely wipe out humanity or whether this technology will replace our jobs. I think that narrative is overblown. With generative AI, I do think journalists and content producers can lean into using it as an expert tool. The more expert you are [in a subject], the more helpful it is, because you can discern, “Is this giving me what I need?” and you would know how to fine-tune prompting language. A hot take, said with caution: I do think it could also be a good tool when used with full transparency by smaller, under-resourced organizations or even by freelancers. With the right guardrails and oversight, I believe that generative AI has the potential to tackle some longstanding challenges that local journalism has faced for years, by empowering individual journalists as well as newsrooms that are willing to experiment.

To answer your question: the technology is here, and it will bring both positive and negative outcomes to journalism. While there are certain journalistic ethical values that won’t and shouldn’t change, I do think the traditional industry’s conceptualizations of editorial agency, accountability and responsibility will evolve due to AI. Newsrooms will need to be specific about which parts of the editorial process will demand value-alignment assessments between journalistic standards and algorithmic and AI systems. Some of my practical projections on AI’s impact in journalism (with an emphasis on the positives) concern the new ways of storytelling that journalists can explore through different user interfaces and content design approaches beyond just a ranked, scrolling feed. I also think AI agents will become significant to a journalist’s workflow, as well as to their ability to reach and connect with a more diverse audience. Individual journalists could train their own AI agents and use them to communicate their stories with niche audiences simultaneously. This will certainly change the scale and scope of journalistic reach. These agents would be in the full ownership of the journalists who create them, which means the unique voice and perspective of the human journalist will be key to the news agents’ identity.

Finally, the perceived credibility of existing journalistic institutions is changing due to the way younger generations interact with digital platforms and information online. Researchers in academia and in practice will need to closely monitor and test how news consumers perceive and trust news agents over humans. This will be an important effort in maintaining journalistic integrity in the face of AI. 

COMtalk: What changes would you make to alleviate some of your concerns with AI and algorithms in journalism?

Sejin Paik: I think the first one is around context-setting. There’s still so much to be learned about how news consumers are affected by being exposed to the news on these algorithmic platforms. For my dissertation, I’m running an online experiment on news curation effects. My preliminary findings show that when news is surrounded by memes—rather than by content like activism posts or other news-only posts—the news in that particular curated feed is perceived to have higher credibility and shareability. This is fascinating, and there is more for me to dig into, but on the surface, it shows that feeds have a significant impact on news consumption. We need more research focused on how these technologies are affecting user engagement. By doing so, we can press these tech companies for information on how they are choosing to use this data. And this step may start with the enactment of specific policies around user engagement data, particularly as it relates to political news and geopolitical information flow. Being able to show how journalistic content is affected in these algorithmic feeds is going to be important for the sustainability of our democracy-driven journalistic organizations in the US and around the world.

This interview has been edited for clarity and brevity.