Photo by Sebastian Svenson on Unsplash

AI in Canadian newsrooms: media engaging cautiously

Canadian journalism’s AI adoption reveals a patchwork of policies and gaps

By Jessica Patterson and Terra Tailleur 

Newsrooms across Canada are figuring out how to use AI, and that leaves journalism educators with a challenge: how to teach students about AI when the industry itself is still working it out. To understand where journalism stands, we spent the summer and part of the fall studying AI adoption across newsrooms in Canada, capturing everything from implementation and experimentation to hiring. 

We set out to ask a key question: What do newsrooms in Canada expect from new hires regarding AI literacy? 

We interviewed CEOs and editors-in-chief from 12 Canadian news organizations and monitored news websites’ job ads and popular journalism job boards for references and expectations related to AI.

What we discovered is an ad hoc collection of policies and use cases, and newsrooms taking a cautious approach.

In Part 1, we reveal a divide: while major outlets like CBC and The Globe and Mail have established comprehensive AI policies, many smaller newsrooms lack the time and resources to develop them. We examine the shared principles in those policies, and how newsrooms are cautiously experimenting with AI for everything from transcription to data analysis while remaining deeply concerned about maintaining their audiences’ trust.

In Part 2, we examine hiring practices, training, what skills newsroom leaders prioritize and how labour concerns are shaping the conversation around technology adoption.

Part 1

Paul MacNeill wants to do the kind of journalism that few would attempt. 

As publisher of The Eastern Graphic on Prince Edward Island, he has obtained documents showing every payment Health PEI has made over five years, plus detailed breakdowns of every government position by department. It’s a massive dataset that could reveal important stories about where public money goes. He’s actively exploring AI tools such as ChatGPT, and says it’s the first time the paper has tried to use them to help tell a story; he’s figuring it out as he goes.

“We like to do stuff that nobody else is doing,” he says. “Plus, it’s just good public journalism.”

His approach highlights a growing gap between what’s technically possible and what’s practically achievable for Canadian newsrooms. Across the country, newsrooms have experienced extensive layoffs in recent years, leading to shrinking mastheads and a lack of resources. AI tools could make editorial workflows more efficient, but many newsrooms are too strapped for time, money and expertise to adopt them effectively and quickly.

“Our biggest challenge as a news organization is human hours,” says Lauren Kaljur, managing editor of Discourse Community Publishing, the B.C.-based independent news publisher behind IndigiNews and five other small outlets.

“We have such limited time because we’re so small… So, anything we can do to speed up processes and tasks in a way that doesn’t threaten the integrity of the reporting is something that we would consider,” she says.

A patchwork of policy

A sharp divide separates newsrooms in Canada when it comes to adopting AI, according to our research. Major broadcasters such as CBC, news outlets such as The Globe and Mail and Postmedia, and wire services such as The Canadian Press have developed AI policies or guidelines, while some smaller organizations are still determining their approach. Policies are important because they establish guardrails — what newsrooms are allowed to do versus what they aspire to do — and that transparency can help maintain audience trust.

The major media organizations share a consistent approach to AI. Across CBC/Radio-Canada, The Globe and Mail, Postmedia and The Canadian Press, several principles repeat: mandatory human verification, transparency and clear labelling, strict restrictions on using AI for writing or editing, prohibitions on sharing confidential information and outright bans on AI-generated photography and video.

CBC/Radio-Canada’s framework is comprehensive. The public broadcaster follows seven corporate principles around responsible human oversight, transparency, respect for copyright and authors’ rights, security, privacy, accessibility and collaboration. Its news guidelines, from editor-in-chief Brodie Fenlon, are even more specific. There must always be a human in the loop; nothing is published without a CBC journalist vetting it, and AI-generated content won’t be used without full disclosure and advance approval from the Journalistic Standards and Practices office, according to CBC’s advisor for AI projects, Rignam Wangkhang.

“We try to over-communicate really, because there is a lot of information and people are excited, but they’re also fearful,” Wangkhang says. “We understand that there are a lot of emotions.”

To manage those emotions, Wangkhang established a steering committee that staff can email with questions or use cases. It’s a practical move, acknowledging that policy alone isn’t enough when technology changes faster than guidelines can be updated.

The Globe and Mail takes a similar approach. “We as a newsroom are not going to use it to write our stories for us. We are not going to use it to edit our stories,” says Melissa Stasiuk, head of newsroom development. “We do not see AI as replacing the core duties of a journalist.”

At The Canadian Press, editor-in-chief Andrea Baillie describes months of careful policy development, aligned closely with partners from The Associated Press. “Nothing can replace the original work of CP journalists,” she says. “Any information generated from AI needs to be treated as unvetted source material, and every word needs to be checked and verified, using the standards that are outlined in our Stylebook.”

Toronto Star editor-in-chief Nicole MacIntyre said in an emailed statement, “We have policies in place and are actively rolling out training to help our staff use AI responsibly and transparently.”

The Star uses artificial intelligence to support its journalism, according to its journalistic standards guide. “All work that involves AI is guided by our core values of accuracy, accountability, transparency and respect for intellectual property, and every piece is reviewed by a journalist before publication,” it says.

The Investigative Journalism Foundation, a non-profit that launched in 2023, has a policy on AI that is intended to be both an ethical and practical guide, according to CEO and editor-in-chief Zane Schwartz. The policy is centred on transparency, vigilance and human oversight; it allows AI for research and non-editorial functions but strictly prohibits using generative AI to directly write news stories or generate photos for publication. All material produced by AI must undergo human vetting and be checked for accuracy, bias and proper sourcing.

“We think it’s really important to be transparent with our readers about how we’re using it and to create a culture of collaboration and learning like we would for any other advanced tool,” he says. “We always disclose when we’re using AI, we always are very specific about how we’re using it.”

Small newsrooms face unique obstacles to AI adoption: tight budgets, constrained staff resources and, in some cases, limited technology.

At Discourse Community Publishing, Kaljur says they’re in the process of creating a policy. “We have communicated to all of our staff what our interim policy is, which is that there may be specific, appropriate uses for it, but every use of AI has to be checked with the editor to make sure that it’s clear when it is being used and that it’s an appropriate use of it,” she says. 

At Cabin Radio in Yellowknife, editor Ollie Williams doesn’t have the time to research AI uses and meet with staff to develop a policy. He’s too busy running the day-to-day operations. “In terms of the amount of time I get to meaningfully interact with AI and decide what would be best for our newsroom, it happens so far off the side of the desk that it’s like the movie Inception and it’s like the desk has folded back in on itself three times before I get to it,” he says. 

Jimmy Thomson, editor-in-chief of Canada’s National Observer, realized his newsroom needed a policy when a freelancer submitted an article. He ran the piece through AI detectors, which concluded it was very likely written by AI.

“I read it and thought there’s no way this is a human-written piece. It was just so vague and so lacking in details, very few human voices. And that was actually a recent reminder to me that we really do need an AI policy at some point soon, because I need to be able to say to freelancers, that’s absolutely unacceptable,” says Thomson. 

Publishers are cautious about incorporating AI into their newsrooms due to a series of high-profile errors and ethical concerns that threaten their credibility and trustworthiness. Examples include a fake book list published by the Chicago Sun-Times, inaccurate summaries by Bloomberg and CNET’s error-strewn finance articles, among others. In Canada, Nicholas Hune-Brown, executive editor at The Local, recently came across a freelancer with numerous bylines who, under scrutiny, was revealed to likely be a scammer.

Media companies that have AI policies are making them public to protect their reputation and to show audiences they are dedicated to transparency, says Vincent Pasquier, an associate professor in human resources management at HEC Montreal and co-author of Generative AI and the Journalism Profession: Good or Bad News?

The report, based on a survey of 400 journalists in Canada and at least five other countries, found that 36 per cent didn’t know whether their media organization had a policy on generative AI. 

The report suggests change is happening slowly in newsrooms, and adoption is driven by individuals and not organizational policies. 

“What struck us is how conservative the industry is and also how powerful, to some extent, journalists are. Media managers don’t want to shock the journalists. The way change is made is that they want journalists to embark by themselves in the change. They don’t want to force change because journalism is a real autonomous profession. From what we heard, it’s hard to force them to adapt to technology, so you have to go step by step, because otherwise they’re going to react against the change,” Pasquier says.

This same survey found that two-thirds of journalists have used generative AI tools in their work.

The reality on the ground

If policies reveal guardrails and aspirations, actual use reveals cautious experimentation. Across the organizations we interviewed, AI usage centres primarily on efficiency: for transcription, research, document analysis, audio editing, translation, SEO and headline generation.

Transcription is the most widely adopted use case. The Globe and Mail, CBC, The Canadian Press, Great West Media, Discourse Community Publishing and Cabin Radio all use AI transcription tools like Trint or Otter. “Otter has basically been part of our workflow since Cabin Radio was knee-high to a grasshopper,” Williams says.

At CP, transcription tools are used by everyone, including new hires. “Journalists are very, very busy,” Baillie says. “There’s a lot of disruption in the industry. So, I think we would be foolish to ignore the potential to help us do rote tasks, to free us up to do the original journalism that is most valuable.”

But, even this seemingly straightforward AI use requires caution. At The Globe and Mail, reporters “don’t put things in Otter that are confidential or sensitive, because we know it may leak and we know these systems train on your content and we can’t be assured of that,” says Patrick Dell, a senior video editor who oversees AI initiatives. “That’s always the trade-off for us (with) any of these tools, unless we have solid evidence of data protection. And even then, if it’s something sensitive, it just doesn’t go outside of our own systems.”

Research and document analysis come second, for tasks involving large volumes of data or complex research questions.

CBC promotes Google’s NotebookLM to investigative teams for analyzing massive troves of documents that humans couldn’t process in reasonable timeframes. “The big one, I think right now that a lot of our investigative teams are looking at is, how can AI help with investigative journalism?” says Wangkhang. “Humans could not analyze this much data.”

The Investigative Journalism Foundation uses AI exclusively for data analysis, as in a recent investigation that examined 40,000 inspection reports from British Columbia’s energy regulator in collaboration with The Narwhal, according to Schwartz. “We like to use AI to allow us to do stories we couldn’t otherwise do, not to do stories that we would do as a matter of course, more quickly,” he says.

“The work that we do, we can manually verify everything with humans,” Schwartz says. “It’s just an extra tool. When you get into the territory of where you’re replacing humans, that’s where it runs up hard against journalistic ethics in a way that’s hard to reconcile.”

The IJF has used a number of tools, including a paid ChatGPT team account, NotebookLM and GitHub Copilot. But success isn’t always guaranteed. “We’ve also had a lot of experiments where we’ve tried to use AI and it hasn’t gotten us very far,” Schwartz says.

Translation is another common use case mentioned by newsrooms. The Canadian Press uses AI for translation, with every translation vetted by a CP journalist. At Great West Media, president Brian Bachynski notes that journalists for whom English is not their first language will sometimes ask AI to translate words into their first language so they can fully grasp the meaning.

Several newsrooms noted that they’ve relied on AI for audio editing. For editor-in-chief Justin Brake at The Independent NL, this has enabled the small independent publisher to create podcasts. “I come from a print background,” he says. “Never in my 20-plus-year career was I ever trained in audio editing. I had been wanting to learn how to do that for a while, but couldn’t find the time and resources. Being able to create my first-ever podcasts through Descript, that’s levelled the playing field a little bit because I couldn’t do podcasts before, and now I can.”

Summarization is another AI use that helps newsroom staff with rote tasks. Discourse Community Publishing uses Gemini to summarize meetings so staff who may not have been there can get up to date. “Similarly, using ChatGPT, for example, to look at FOI documents, to provide support with a summary when it’s an extremely long document,” Kaljur says.

Another use is SEO optimization and headline generation. Canadian newsrooms powered by Villager CMS have access to AI-powered features for routine tasks: grammar checking, alternative story angles, SEO-optimized headlines and intros, and automated meta titles for search and social media. In Villager CMS, reporters have access to a feature called Suggest Title, where AI creates an SEO-optimized headline, Bachynski notes. “Our newsrooms are using this as a launching point, where AI suggests something and the writer/editor uses it as a starting point. People really like this feature.”

“Our journalists are expected to perform their duties with integrity and transparency,” Bachynski says. “If AI can make a journalist’s job more efficient without compromising originality of content, then it is encouraged. The tools in Villager are timesavers while maintaining the integrity of their work.”

At The Eastern Graphic, MacNeill sees AI as a backup for headlines when deadlines matter. “Some of our headlines can be just kooky and local,” he says. “You don’t want to lose that because that’s the touch and feel of who we are. But, if you’re on deadline, and you need to turn something out and you’re trying to fit a headline into four columns, AI can help you on that.”

AI is also being used sparingly for facial recognition. At CP, Baillie says the wire service is using AI-powered facial recognition in its image archive of CP content. That said, its editorial policy is strict on AI-generated imagery, and says editors must avoid inadvertent use of such content.

“It is acceptable, however, to use an AI-generated image if it is the subject of a story. Such images must be approved by a manager prior to publication and clearly labelled,” the policy states.

The CP Stylebook serves as their bedrock: “The Canadian Press does not alter the content of photos. Pictures must always tell the truth – what the photographer saw happen. Nothing can damage our credibility more quickly than deliberate untruthfulness. The integrity of our photo report is our highest priority.” 

“We don’t foresee a world where AI will ever change this principle,” Baillie said in an email.

This cautious approach is shared across newsrooms. The Globe and Mail maintains a hard line against using AI for photojournalism due to risks to authenticity, and CBC also has strict policies against generating images or video.

The Winnipeg Free Press is using AI for a weekly news quiz, explains editor Paul Samyn. “What it does is comb through the last week and come up with questions and multiple-choice answers, which someone always has to review,” he says.

But it’s not all smooth sailing. 

Williams has been playing with several tools to create an interactive map of wildfires in the Northwest Territories that pulls in data from various sources. He can’t get what he wants.

“I’ve spent some hours bouncing back and forth between Gemini and Claude, and whenever one of them starts to break down and get it wrong, I take it to the other one and be like, ‘look what the other AI did, can you fix it?’” he says. “And, then it would be great for another half hour, and then it would degenerate, and I take it back to the first AI and be like, ‘you’ll never believe what Claude did.’”

A cautious approach 

This tentative approach stems from deep concerns about journalism’s core value: trust.

In recent years, newsrooms have been struggling to earn and maintain trust. A 2025 Reuters Institute study found that 62 per cent of respondents are comfortable with entirely human-made news content, compared to only 12 per cent comfortable with fully AI-generated news.

For The Canadian Press’s editor-in-chief, the stakes are existential. “CP has been around for over 100 years, and what we are known for, and most proud of, is our first-hand reporting,” she says. “It’s my job to protect those standards. So, I would say we have only tiptoed into the AI world. We want to proceed very cautiously because our reputation is sacred. Anything that could possibly compromise that reputation keeps me up at night, so we are proceeding very, very cautiously.”

The risks for the 108-year-old wire service are high. Although AI has huge potential to help journalists, it also has the potential to damage reputations in the blink of an eye.

“Our stories are going to hundreds of websites, newspapers and radio stations,” Baillie explains. “Putting the genie back in the bottle when we make a mistake, it’s really hard. It takes days, and beyond that, it can damage our reputation that we’ve spent a hundred years trying to build.”

Stasiuk echoes the sentiment, describing The Globe’s approach as a cautious, skeptical one. “Responsible experimentation is how I would phrase it,” she says. “There are still a lot of errors that AI makes. Our message to the newsroom has been, by all means, experiment with AI, but be skeptical of everything it tells you, always verify everything it tells you.” 

Dell acknowledges the industry’s collective anxiety, saying, “I think there’s a fair bit of conservatism when it comes to using it extensively because we do share errors and risks that other publications have faced. And, that really makes us especially wary because that would just erode trust in our audience immediately and call into question everything we do.”

But, practical barriers often stand in the way of more widespread adoption: the cost of annual subscriptions to generative AI tools, the work of integrating them into a CMS, and the challenge of hiring people with skills in AI, machine learning, data science and software development.

“To pour resources into AI is not currently something that we have budget room for, and I think that’s going to be a bit of a challenge,” says Samyn. “Does the Globe and La Presse and others have the resources to forge ahead and adopt strategies The New York Times has? And where does that leave a smaller newspaper, but still a significant regional player, like the Free Press?”

What comes next

For newsrooms already stretched thin, it’s a heavy lift. But publishers around the world are on board with AI.

An ArcXP and Digiday report found that 97 per cent of publishers plan to increase their investment in AI in 2025. A 2025 WAN-IFRA survey of more than 100 media leaders worldwide found that most organizations are just starting to use AI (58 per cent) or growing their usage (31 per cent). Only 11 per cent describe their usage as advanced.

Efficiency gains lead the way, with 75 per cent reporting improved workflows, 64 per cent improved content production and 55 per cent faster publishing. Despite struggling to measure a return on investment, there’s growing confidence that AI will soon move from experimental time-saving tools to essential drivers of revenue and product innovation.

In Canada, AI is certainly on everyone’s radar, says Brent Jolly, president of the Canadian Association of Journalists. The organization has been fielding concerns about artificial intelligence from its members over the last two years. “The last couple of conferences that’s been a key piece of the programming. I think there’s a lot of concern about how to use it.”

AI ranks among the top areas members want to learn about, alongside data journalism, interviewing and trauma-informed reporting, Jolly says. 

The challenge isn’t just learning the tools, it’s understanding how to use them responsibly. “There’s a lot of uncertainty around what these tools are, how do we use them, and how do we establish guardrails around them so that they are not abused and used to usurp traditional journalistic news gathering functions,” Jolly says.

Currently, the CAJ ethics advisory committee is putting together some language around AI use and best practices.

So, if current journalists are learning AI, what should new reporters know? What skills matter? What types of training should they receive? 

In Part 2, we examine what newsrooms expect from new hires, whether they’re hiring for these skills and what job ads reveal about changing requirements.

*Disclaimer: This piece was not AI-generated. Any errors are wholly human-made.

AI was used for transcription of interviews, as used by many of the organizations mentioned in these two stories. 

**Special thanks to Toronto editor Brian Baker. His help is deeply appreciated.