Deepfake of purported Putin declaring martial law fits disturbing pattern

The Putin deepfake is just the latest example of false video and audio targeting world leaders and their constituents.

Sometimes it can be easy for people who aren’t steeped in conversations about emerging technology to think these developments have no bearing on their everyday lives. People unfamiliar with Big Tech jargon can feel as though such discussions are taking place miles above their heads.

But then, in my experience, something terrifying tends to happen that brings these conversations down to earth, and people at the “ground level” are forced to confront them in new ways.

Conversations about algorithms fit this description. Ever since the 2016 presidential election and its controversies over social media being weaponized by nefarious political actors, public discussion about the things we see on social media, including how and why algorithms surface them, has become more common.

The same appears to be happening with “deepfakes,” the term for doctored video or audio clips fabricated to make it appear that a particular incident occurred.

A deepfake that aired on Russian TV on Monday appeared to depict Russian President Vladimir Putin declaring martial law as he continues to oversee Russia’s invasion of Ukraine.

It’s unclear how the deepfake came to be. The Russian public broadcaster whose networks aired the sham speech said its radio and TV channels had been illegally interrupted, according to The New York Times. The Kremlin said the incident was the result of a “hack.”

The quality of this particular deepfake wasn’t great, but that doesn't really matter. We all know someone vulnerable to being duped by even the shoddiest of disinformation. 

The alarming incident fits a pattern of apparent scare tactics designed to distress large groups of people by using fake messages purportedly delivered by their political leaders. 

Right-wing conspiracy theorists in the United States earlier this year spread a deepfake on their social media channels appearing to show President Joe Biden announcing he was reinstating the military draft and planning to send Americans to fight for Ukraine.

(Relatedly: Check out my colleague Zeeshan Aleem's column about the GOP's disturbing embrace of AI-generated advertising.)

And if you’re a frequent reader of The ReidOut Blog, you may remember I covered a viral deepfake of Ukrainian President Volodymyr Zelenskyy that was spread during the early months of Russia’s invasion last year. This deepfake purported to feature Zelenskyy urging his soldiers to surrender. 

The recurrence of these deepfake incidents — and the certainty that deepfake technology will improve over time — is why some officials have tried to pass laws to curb the proliferation of deepfakes. In recent years, several states, including California, Texas and Virginia, have passed measures banning malicious pornographic deepfakes or political deepfakes. Minnesota also appears poised to enact similar legislation.

Often, it seems the world is barreling toward a frightening future of unregulated tech, a future in which manipulated audio and video are used to assault our shared concept of reality.

Worldwide, we’re going to need much more oversight of the deepfake industry if we truly want to pump the brakes. And you don't have to take my word for it: Listen to this deepfake version of Meta CEO Mark Zuckerberg (aka #FakeZuck).