Experts at a journalism and AI industry roundtable in Toronto discussed how AI can be incorporated into journalism. Photo by Sahaana Ranganathan

How newsrooms should be thinking about AI

Ethical implications and experimentation central to conversations about emerging technology

As newsrooms try to balance the benefits of generative AI against the need to uphold reporting ethics, researchers stress that they should be prioritizing AI literacy and experimentation.

There are many reasons to be wary of AI in the newsroom, such as a lack of transparency, biased sources, potential plagiarism and the risk of inserting errors into content, because AI doesn’t necessarily yield accurate information. Programs such as ChatGPT are not like Google, and you cannot treat AI the same way you would treat a search engine. Nevertheless, AI is already changing techniques used in newsrooms.

“There’s a lot of AI hidden in the software we use,” says Angela Misri, assistant professor at Toronto Metropolitan University, who is leading a project on journalism, AI and ethics.

People are experimenting with different uses for AI, so far largely prioritizing its use for repetitive tasks, such as the creation of charts and tables. 

An example of AI in the newsroom is MittMedia, a news organization based in Sweden that uses AI to produce content. Editors found that their real estate articles were their best performing content, but it was not sustainable to have journalists write an article for every property being sold. So they implemented a “Homeowners Bot,” which uses AI to generate articles on home sales every week.

The Associated Press began using AI as early as 2014 on its business news desk to automate stories on corporate earnings. The newsroom believes this allows its editors and reporters to focus their resources on higher-impact journalism. Even independent journalists who rely on transcription software like Otter.ai are using AI in some capacity.

In May, experts at a journalism and AI industry roundtable in Toronto discussed how AI can be incorporated into journalism. They emphasized the importance of having open discussions about the potential ethical implications of AI in newsrooms.

The roundtable was hosted by Carleton University in partnership with the Canadian Association of Journalists and the Polis/London School of Economics JournalismAI Project. Nikita Roy, a data scientist, journalist and host of the Substack newsletter “Newsroom Robots,” kicked it off by defining generative AI. She explained that generative AI tools are large language models, meaning they produce content such as text and images based on the data they have been trained on.

She emphasized that because AI programs draw from whatever knowledge base they have been trained on, which may not always be accurate, they should be treated as tools that are part of the reporting process and not a way to replace journalistic work. Importantly, a journalist using AI should be careful to double check any work they produce with AI assistance. 

“They’re [AI] not large fact models, they’re large language models,” said Gina Chua, executive editor of the news site Semafor, who was a participant at the roundtable. “I use them in my experiments mostly for their language capabilities rather than any fact finding capabilities.”

Chua highlighted that AI is not meant to work like a Google search, but rather as a way to examine, draw upon and synthesize existing data. For example, she would not ask AI for the date Benjamin Franklin died. Instead, she would give it a book about Franklin and ask it to find where the book states the date of his death.
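The pattern Chua describes, sometimes called grounded or document-based prompting, can be sketched in a few lines of code. The sketch below is only an illustration of the general idea, not a tool anyone at the roundtable uses; the Python library (openai), the model name and the file of source text are all assumptions for illustration.

```python
# Minimal sketch of "grounded" prompting: instead of asking the model to
# recall a fact, hand it a source text and ask it to point to the passage
# that contains the answer. Library, model name and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# In practice this would be the text of the book, or a relevant excerpt.
source_text = open("franklin_biography_excerpt.txt").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer only from the provided text. "
                    "Quote the passage that supports your answer. "
                    "If the text does not contain the answer, say so."},
        {"role": "user",
         "content": f"Text:\n{source_text}\n\n"
                    "Where does this text state the date of Benjamin Franklin's death?"},
    ],
)

print(response.choices[0].message.content)
```

The point of the constraint in the system message is that the model is asked to locate and quote, not to recall, which makes its output easier for a reporter to verify against the source.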

This definition of AI as a language model can help users understand how to leverage it in journalism. However, it’s equally important to experiment with AI in order to have open discussions about potential ethical challenges.

“The ones who aren’t using it are not having as much of the ethical discussions,” says Misri. She explained that people who are experimenting with AI, developing software or creating AI methods are usually the ones thinking about the potential ethical boundaries of AI and journalism.

This sentiment is echoed by Florent Daudens, who develops initiatives in journalism and AI at Hugging Face, a platform “on a mission to democratize good machine learning.” Daudens has helped Radio-Canada develop their foundational guidelines on AI. For him, it is “very important to be transparent with the public about this ecosystem.” Daudens thinks that ethics in journalism and AI boils down to transparency and experimentation. He highlights the importance of being clear about what is unknown when using AI and the potential biases of the model. 

According to Naomi Robinson, the Canadian Media Guild CBC branch president, another ethical concern is ensuring that AI does not replace journalists. 


“The public needs to trust that our work is created by real people who they can engage with when they read, watch or listen to journalism,” Robinson wrote in an email. “We also need to ensure AI is not used to alter or replicate the work of journalists without their consent, and that it’s not used to monitor or conduct surveillance on their work.”  

Robinson also explains that in its last round of contract bargaining, the CBC branch reached an agreement with CBC/Radio-Canada to protect workers against the use of AI without their permission. It is some of the first media union contract language around the use of AI in Canada.

According to the Reuters Institute’s Digital News Report 2024, audiences are comfortable with AI being used behind the scenes to make journalists’ workloads more efficient. However, they are less comfortable with AI being used to generate completely new content. The study also found a general consensus among audiences that humans should always remain in the loop for editorial oversight.

A survey by the Canadian Journalism Foundation and Maru Public Opinion found that 92 per cent of Canadians believe there should be clear and transparent policies on how AI is used to produce news. The survey also found 85 per cent of Canadians are concerned that misinformation will spread when AI is used to produce journalism.

Misri explains that without journalists, AI-generated content lacks context.

“The downside of taking in data, and then punching it out straight to the net is that all of us are going to punch out the same thing. If you think about 10 newsrooms in Toronto all getting the same crime data and pounding out the same story, they’re going to be exactly the same because there’s no reporter taking an angle on that,” says Misri. 

According to an Associated Press survey of 292 journalists, 70 per cent of respondents said the most common use for AI is content production. At the same time, the survey found that while many organizations have guidelines for AI and journalism, there is still a demand for clearer guidelines, more training and better enforcement to ensure the responsible use of AI.

This is especially important given that AI’s presence in newsrooms is increasing. According to a global study of 105 news organizations, 80 per cent of respondents said AI will play a larger role in their newsrooms going forward.

Daudens says that biases are ingrained in AI models and that it is important for everyone to know that. With this knowledge and experimentation, journalists can understand the limitations of AI and ensure that they are always considering the ethics of incorporating it into their work.


Sahaana is a final-year Master of Journalism student at TMU. She’s an assistant reporter at the Investigative Journalism Bureau and a producer for two podcast shows, Beyond the U and Reviewed. Her reporting has been featured in TVO, The Otter, and The Review of Journalism.