
In the age of AI, is doubt becoming a political strategy?

French President Emmanuel Macron disembarks a plane with his wife Brigitte Macron in Hanoi, Vietnam

When video footage of French President Emmanuel Macron appearing to have an altercation with his wife Brigitte circulated online last week, the Élysée Palace initially suggested the footage could have been created using AI.

The footage was later confirmed to be real.

As references to AI become more common in political discourse, what happens when democratic governments begin using it to question the authenticity of real events?


Last Sunday the Associated Press (AP) captured footage showing French President Emmanuel Macron stepping off a plane in Vietnam, followed closely by his wife, Brigitte Macron.

As the doors open, she appears to push him in the face with both hands. In the video, President Macron looks momentarily startled before quickly regaining his composure and waving through the plane's open doorway.

With Brigitte Macron mostly hidden by the plane, it’s hard to know the full context or what preceded the push — but the clip still spread widely online.

As the clip gained traction, journalists covering the president sought answers.

According to a French political journalist who spoke to RTÉ, a senior adviser to President Macron initially suggested to reporters asking questions about the push that the video may have been AI-generated.

The explanation was offered before any formal verification had taken place and, according to those briefed, was the first line of response to a potentially sensitive story.

However, the Élysée Palace later acknowledged that the footage was genuine and described the incident as a private interaction between the couple.

But this clarification only came after the AP published the video in its entirety, which ruled out the suggestion that it was generated using AI.

The response from the senior adviser also came just weeks after Macron was the target of a viral disinformation campaign in which he was falsely accused of handling a bag of cocaine during a diplomatic meeting. Some suggest that incident may have influenced the palace's instinct to invoke AI as an explanation.

While debate about the Vietnam plane video, and what may have prompted the push from Brigitte, has continued throughout the week, less attention has been paid to the early suggestion that AI might have been involved.

That offhand remark, reported widely in the hours after the clip emerged, raises broader questions about how some democracies are beginning to invoke AI in moments of uncertainty, according to some experts.

Dr Tetyana Lokot, an associate professor at Dublin City University who researches digital governance and state media strategies, says moments like this can erode trust in democratic institutions.

"It's not just the deepfakes themselves that undermines people's trust in the media or in, political officials or leaders but also very often the shorthand is like, 'oh, this is a deepfake.’ It becomes harder for people to distinguish between claims of credibility or how to verify something," Dr Lokot said.

It also raises a tougher question, according to Dr Lokot. If a democratic government casts doubt on real footage, what happens when an authoritarian regime does the same, and people can’t tell the difference?

"It almost amplifies this effect of like ‘we don't really know who to trust,’ which I think is a much bigger problem. It basically undermines trust in the democratic process."

Dr Lokot also notes that beyond questions of public trust, AI is increasingly being used not just as a threat to guard against, but as a way for governments to shape narratives and reassert a sense of control during moments of uncertainty.

"When you have a situation where you don’t feel like you’re in control, which you could argue this was one such situation, you can fall back on the myth of AI as a very powerful technology that’s very easily appropriated," Dr Lokot said.

"The key concern [for governments] is: ‘how do we make sure that we're in control?’" she added.

Dr Tetyana Lokot, an associate professor at Dublin City University

Others say that while governments may reach for AI as a way to reassert control, sometimes it’s just a sign of how wired we’ve become to question everything, especially when AI is involved.

Claire Wardle, a professor at Cornell University in New York and a leading expert on misinformation, says the Macron case may also reflect a more instinctive response, a symptom of how easily doubt creeps in when AI is part of the conversation.

"What I don't know about this... is did they [the Élysée Palace] know and they were trying to cover it up, or did they just go, ‘oh, there's no way that she would appear like that, so it must be a deepfake,’" Ms Wardle said.

"It’s just a horrible reminder to advisers. Never say something is a deepfake until you know it’s verified," Ms Wardle added.

She adds that in a climate where trust in institutions is already at a low point, even a throwaway remark can deepen public suspicion.

"We already know that we're in trouble here and people are not trusting politicians. They believe they’re being lied to, which in some countries they increasingly are. This becomes just another way they can do it: by telling us it’s a deepfake when it’s not."

Ms Wardle also says the confusion around the Macron clip taps into a deeper problem that researchers like her have been warning about for years — the risk that AI doesn’t just create fake content, but also gives people cover to dismiss real events.

"Photography came along and we were like, ‘we can hold people accountable.’ And then AI technology took that away. It’s broken the foundations upon which we stand."

And when those foundations crack, whether it’s the French president having an altercation with his wife or something more mundane, Ms Wardle says the result can be the same: a sense that nothing can be trusted.

"Whether it’s frivolous Instagram posts, or it’s French politicians, or war crimes. In a very short space of time, the foundation we’ve relied on to understand reality has disappeared. And that’s what’s so terrifying."