The battle against AI-driven disinformation
Canada’s democracy is under growing threat from systems driven by artificial intelligence (AI) that are capable of spreading disinformation at an alarming speed. Jean Baudrillard’s vision of a world where reality is replaced by simulations — copies of things that no longer have an original — is quickly becoming our own.
These AI-driven simulations blur the line between what is real and what is imitation. Imagine waking up to a world where most of the news you read, images you see and tweets you scroll through are fabrications created by the automated AI systems of a foreign state or interest.
This is not some dystopian fantasy. Canada must develop new digital strategies to counter this dangerous 21st-century menace.
A recent experiment called CounterCloud shows how AI can be harnessed as a “fully automatic propaganda system” generating and spreading disinformation with minimal human intervention and at an incredible pace.
The process is frighteningly efficient: AI scrapes the internet for content, identifies targets and generates alternative content, counter-articles and reports published under fake journalist profiles. These are then amplified on social-media platforms, spreading fabricated feeds, comments and narratives that incite further division and mistrust.
Picture this: Across Canada — from the bustling streets of Toronto to quiet rural towns on the Prairies — people are waking up, grabbing their morning coffee and checking their phones for the latest news about an election just a few weeks away. But today something feels different.
Your phone buzzes with notifications from friends, family and colleagues. Everyone is talking about a video that’s going viral. You open it and see a familiar political leader standing at a podium and making statements that are shocking, divisive and completely out of character. The comments section is ablaze with anger, disbelief and accusations. You start to see your own friends and neighbours sharing the video, their trust beginning to crumble.
Within hours, people start losing faith in our democratic institutions. By the time the truth comes out, the damage is done.
This is just one of many scenarios where truth is lost with catastrophic consequences. Canada must act now to counter the imminent threat of disinformation before it unravels the fabric of our democracy.
The science behind identifying deepfakes — hyper-realistic but fake videos and images — is still in its infancy. Deepfake detection tools can be easily fooled by simple editing techniques, and they perform less reliably on images of people with darker skin tones. This is a significant problem because fake content can slip through the cracks and spread unchecked.
The inability to identify fake images accurately can have serious repercussions. For example, in late July, an image of a Sikh man allegedly urinating into a cup at a Canadian gas station went viral on social media, fuelling anti-immigrant rhetoric. A post on the social-media site X, allegedly from the gas station owner, denied that it happened.
The Washington Post tested the image by uploading it into a popular deepfake detection tool, which initially identified the image as likely to be real. It took a human analyst to confirm that the image had been manipulated. This inconsistency in results is dangerous because it makes it hard to know what to believe.
So what’s the solution?
We can take a page from Taiwan, which has been dealing with disinformation campaigns for years, particularly from China.
Taiwan is deploying digital democracy platforms like Pol.is and Join to create an environment in which citizens, policy experts and politicians work together to counteract fraudulence and protect their democratic activities. These platforms have successfully thwarted disinformation campaigns aimed at manipulating the public. Imagine the same success in Canada if we were able to prevent a manufactured political scandal.
Pol.is is particularly effective in combating disinformation through constructive online discussions and consensus-building. Interactions are made more engaging through elements of game-playing: participants earn points, avatars and badges for contributing ideas, voting on them or taking part in discussions.
At the same time, Pol.is filters out extremist or misleading views by preventing direct replies to comments, thereby reducing the risk of escalating confrontational exchanges. Instead, users vote on comments. The platform groups similar opinions together and creates a visual map to highlight common ground and consensus. This makes it more difficult for disinformation to gain traction.
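The mechanism described above — grouping participants by their voting patterns and surfacing the comments that win support across every group — can be illustrated with a minimal sketch. This is not Pol.is's actual algorithm (the platform is reported to use more sophisticated statistical clustering); the thresholds, function names and greedy grouping here are illustrative assumptions only.

```python
# Illustrative sketch only, NOT Pol.is's real implementation:
# participants vote +1 (agree), -1 (disagree) or 0 (pass) on comments,
# are grouped by how often their votes match, and "common ground" is
# the set of comments a majority of every group endorses.

def cluster_participants(votes, threshold=0.5):
    """Greedily group participants whose vote vectors mostly agree."""
    def agreement(a, b):
        # Compare only comments both participants actually voted on.
        overlap = [(x, y) for x, y in zip(a, b) if x and y]
        if not overlap:
            return 0.0
        return sum(1 for x, y in overlap if x == y) / len(overlap)

    clusters = []  # list of (representative vote vector, member names)
    for person, vec in votes.items():
        for rep, members in clusters:
            if agreement(rep, vec) >= threshold:
                members.append(person)
                break
        else:
            clusters.append((vec, [person]))
    return clusters

def consensus_comments(votes, clusters, min_support=0.6):
    """Comments supported by a majority of every cluster."""
    n_comments = len(next(iter(votes.values())))
    result = []
    for c in range(n_comments):
        ok = True
        for _, members in clusters:
            cast = [votes[p][c] for p in members if votes[p][c]]
            support = sum(1 for v in cast if v == 1) / len(cast) if cast else 0
            if support < min_support:
                ok = False
                break
        if ok:
            result.append(c)
    return result

# Toy example: two opposed camps that nonetheless agree on comment 0.
votes = {
    "ana": [1, 1, -1],
    "ben": [1, 1, -1],
    "cam": [1, -1, 1],
    "dee": [1, -1, 1],
}
clusters = cluster_participants(votes)
common_ground = consensus_comments(votes, clusters)
```

In the toy example, the two camps disagree on comments 1 and 2, so only comment 0 survives as common ground — the kind of cross-group consensus the platform is designed to highlight, and the reason divisive disinformation struggles to gain traction there.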
Join complements Pol.is by ensuring that those who are not tech-savvy can still engage in discussions about public issues. By capturing a wide range of perspectives, Join helps ensure that the narrative includes informed and diverse voices. This inclusivity is crucial to prevent disinformation from exploiting gaps left by under-represented groups.
Taiwan also employs real-time verification tools like its FactCheck Center and Cofacts chatbot. They emphasize transparency and accountability and feature open-source databases, public access to verification methods and clear documentation of sources. These tools build public trust by allowing users to check suspicious content almost instantly and by making verification visible and understandable.
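The lookup pattern behind such tools — a user forwards a suspicious message, and it is matched against an open database of previously reviewed claims — can be sketched as follows. This is a simplified stand-in, not Cofacts' actual API or data model; the database, normalization step and function names are assumptions for illustration.

```python
# Hedged sketch of a crowd-sourced verification lookup: suspicious
# messages are normalized, hashed and matched against an open database
# of claims that volunteers have already reviewed. All names here are
# illustrative; Cofacts' real system is more elaborate.

import hashlib

# Toy open database: message digest -> crowd-sourced verdict with source.
FACTCHECK_DB = {}

def normalize(message):
    """Collapse case and whitespace so trivially edited copies still match."""
    return " ".join(message.lower().split())

def message_key(message):
    return hashlib.sha256(normalize(message).encode("utf-8")).hexdigest()

def record_verdict(message, verdict, source):
    """A volunteer reviewer files a verdict with a documented source."""
    FACTCHECK_DB[message_key(message)] = {"verdict": verdict, "source": source}

def check_message(message):
    """Return the crowd-sourced verdict, or None if not yet reviewed."""
    return FACTCHECK_DB.get(message_key(message))

# A reviewed claim is found even with different casing and spacing;
# an unreviewed message returns None, signalling "not yet checked".
record_verdict("Drinking bleach cures flu", "false", "health-authority page")
hit = check_message("  drinking BLEACH cures flu ")
miss = check_message("Election moved to next month")
```

The transparency the article highlights maps onto this sketch directly: the database is open, the matching method is documented, and every verdict carries its source, so users can see not just *that* a claim was rated but *why*.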
Canada’s Advisory Council on Artificial Intelligence is uniquely positioned to promote and test similar initiatives. The council’s AI experts, academics, industry leaders and community representatives understand both the technological landscape and the diverse needs of the public. Their expertise allows them to create platforms that are not only technically robust but also user-friendly and inclusive.
Some may argue that digital platforms alone cannot effectively combat the rapid spread of AI-driven disinformation, especially in a polarized digital environment where falsehoods often spread faster than truth. I agree.
We need an immediate and multi-faceted approach to combat deception and distortion. There is no single magic formula. However, by fostering transparency, citizen participation and real-time fact-checking, these platforms can significantly reduce the impact of disinformation.
This article first appeared on Policy Options and is republished here under a Creative Commons license.