In June 2025, editors Fernando Arce and Saima Desai made the difficult decision to postpone their long-awaited summer Food Issue of The Grind Magazine, Toronto’s only free, city-wide, local politics and culture print magazine.
While they had worked hard to prepare a paper packed with stories about Toronto’s rich food culture and the people behind it, they realized something was very wrong: several of the new freelance writers they had hired appeared to be passing off AI-generated stories about fake people and places, a first in Arce and Desai’s combined decades of journalism experience.
“I was enraged, I was livid, and I felt a lot of despair about the future of journalism,” Desai said. “There was a particular indignity about people using AI to scam this small new progressive, independent publication that was trying to tell stories about migrant workers and food delivery workers and people who make the food that we love in restaurants in Toronto.”
In their June 4 announcement postponing the Food Issue, Arce and Desai explained that one draft they received included quotes from restaurant owners and vividly described the restaurant’s interior as “perfumed with turmeric and cardamom,” recalling a “stained handwritten notebook by the counter.”
To fact-check, they scheduled a call to find out whether the writer had actually been there and seen or smelled these things. When they couldn’t find any evidence of the restaurant online, the writer agreed to share contact information for their sources, but ultimately admitted: “The characters and places in my article are fictional composites … based on real themes.” The team quickly cancelled the article, then began seeing worrisome signs in other submissions.
“Once I knew that I was looking for AI, I started seeing those kinds of red flags all over the place,” Desai said.
They identified seven articles they “strongly suspect” were written by AI, all sharing a “similar feel” that was too neat and too vague. Among the red flags that alerted Arce and Desai that an article might be AI-generated: U.S. instead of Canadian spellings, double-barreled headlines that didn’t match the content of the story, and the same author writing an eloquent pitch and then sending awkward follow-up emails and drafts riddled with em-dashes.
“We’re committed to keeping AI slop outta here. Only real stories written by real humans, please,” read the public announcement. But it’s getting harder and harder to avoid machine-generated content, and the problem began to feel personal for editors so invested in the small-shop newsroom at The Grind.
“It was really Fernando’s sharp eye and doggedness in noticing the discrepancies and refusing to let them fly that I think was our saving grace here,” Desai said. “I realized when Fernando brought that story to our editors’ meeting, that I had to start approaching the drafts that I was working with really differently and actually assume bad faith, which was kind of a horrible thing to have to do.”
The writers, or scammers, went to unbelievable lengths to keep up their cons, including setting up fake email accounts to impersonate the invented experts and sources they had never interviewed. This is uncharted territory for most editors across Canada, which Desai said is why she and Arce were so transparent about their challenges.
“We knew that this was something that could only be addressed if all of us in the journalism industry were talking openly about it together. And I don’t know who else is experiencing these kinds of scams, and I don’t know how they’re dealing with it, and that makes me very nervous.”
Indeed, The Grind is far from alone in navigating these uncertain waters; publications have struggled to adapt to the fast-changing world of generative AI, and some have done so with less finesse. Earlier this spring, several reputable U.S. papers, including the Chicago Sun-Times and The Philadelphia Inquirer, published the same summer reading list in which 10 of the 15 listed books were hallucinated by AI; the author later called it a “really stupid error on my part.”
Things also went wrong for Gannett, publisher of USA Today and hundreds of local media outlets across the United States, after AI-generated errors were discovered in its sports stories. Other outlets have been accused of knowingly and disingenuously using AI-generated content: back in 2023, Sports Illustrated was caught listing nonexistent authors with fake biographies and AI-generated portraits for product reviews.
But AI-generated content is getting harder to distinguish by style, even as errors of substance remain.
“In the last four, five, six months now, it’s ramping up,” Arce said. “The technology itself is getting better every single day.” He added that the publication is working hard to keep up but still has many important details to sort out.
“We don’t necessarily have a policy yet, because we’re still deciding what the ethical use of AI is,” Arce said, acknowledging that AI can be an important tool for journalists who still go out of their way to do the hard work themselves, while distinguishing that legitimate use from the kind of “scam” the magazine experienced.
Ultimately, though, how big the issue will become remains unclear. According to a 2024 Pew Research Center survey, roughly half of U.S. adults believe AI will have a negative impact on the news over the next 20 years; just 10 percent think it will have a positive effect.
“I think it’s going to become a bigger problem because there are no fact checkers. There is no budget to really have the kind of fact-checking that you need,” Arce said.
Despite these concerns, some researchers argue AI might be part of the fact-checking solution. A July 2025 publication from the European Digital Media Observatory, by Laurence Dierickx, Carl-Gustav Lindén and Duc-Tien Dang-Nguyen, explains how generative AI technologies, large language models in particular, have the potential to assist fact-checkers at various stages of their work.
The authors argue, however, that while AI can help detect and mitigate misinformation, human expertise remains essential for fact-checking because AI systems cannot fully grasp context, intent or credibility, and they struggle with the ethics and critical thinking required of newsroom fact-checkers. In the meantime, Arce and Desai say they will move forward with a new sense of caution.
“It was absolutely a wake-up call and a learning experience,” Arce said. “This is a conversation that the industry needs to have as a whole. I think [we’re] beginning to have it.”
Leah Borts-Kuperman is an award-winning freelance journalist based in Northern Ontario. Her previous reporting has also been published by TVO, CBC and The Narwhal.