
How deepfakes will transform geopolitics
In 2024, one billion people around the world will go to the polls for national elections. From the US presidential election to the war in Ukraine, we’re entering an era of deepfake geopolitics, and experts are concerned about the impact on elections and on public perception of the truth.
This week, we’re exploring deepfakes and state-sponsored disinformation campaigns, what it means for the future of geopolitical conflict, and what we can do about it.
Three months, three notable deepfakes in the US
- March: Deepfake images of Donald Trump being arrested spread across Twitter (ironically, one image was created by Eliot Higgins, the founder of the open-source investigative outlet Bellingcat, who was experimenting with generative AI visualization tools and expected only a few people to see it). One recent study found that false news spreads faster than real news.
- April: The Republican National Committee released a 30-second dystopian ad, built entirely with AI, which predicted—via fake videos—that re-electing President Joe Biden would lead to China invading Taiwan, the collapse of the economy, and the closing of San Francisco due to the fentanyl crisis.
- May: Just a few days ago, images of an explosion at the Pentagon went viral on social media, but it turned out to be a deepfake.
Coming to a screen near you
Deception for geopolitical gain has been around since the Trojan horse. Deepfakes, however, are a particular form of disinformation that has emerged recently due to advances in technology that generate believable audio, video, and text intended to deceive.
Generative AI tools like Midjourney and OpenAI’s ChatGPT are being used by hundreds of millions each month to generate new content (ChatGPT is the fastest-growing consumer application in history), but they are also the tools used to create deepfakes.
Henry Ajder, an independent AI expert, told WIRED, “To create a really high-quality deepfake still requires a fair degree of expertise, as well as post-production expertise to touch up the output the AI generates. Video is really the next frontier in generative AI.”
Fake vids, real geopolitics
Even if deepfake videos aren’t perfect, they’re already being used to shape geopolitics. The war in Ukraine could have gone very differently had Ukrainian soldiers believed the deepfake video from March 2022 of President Zelenskyy calling on his Ukrainian soldiers to lay down their arms.
The video was quickly identified as a deepfake and taken down from social media: Zelenskyy’s accent was off, and both the audio and video showed signs of doctoring.
It may only be a matter of time before deepfakes are used to escalate conflict between China and Taiwan (Taiwan receives more fake news online than any other country in the world, according to the Digital Society Project). In February, The New York Times reported on the first known instance of a state-aligned disinformation campaign built on deepfake video: the Chinese government created entirely fake broadcaster personas to advance pro-China views, with both voice and image 100% computer-generated.
How we can respond
Some believe that within a few years, up to 90 percent of online content could be synthetically generated. Generative AI has the potential to democratize access to creative tools and expand economic livelihoods for creators and entrepreneurs, but the sheer volume of synthetic media could also erode trust in video and audio recordings, and in the news more generally. That is why we’ll need to leverage a whole suite of solutions to fight back:
- More fact-checkers: In Taiwan, a group of fact-checking nonprofits uses tools developed by tech companies to find and debunk disinformation.
- Partnerships with tech firms: Google has trained more than 100 government officials, legislative staff, and campaign staff in Taiwan on how to use tools to detect deepfakes and disinformation.
- Cryptographic signatures: The Coalition for Content Provenance and Authenticity (C2PA) is an open technical standard that gives publishers, creators, and consumers the ability to trace the origin of different types of media. It’s being used to create a signature on a piece of media to prove its legitimacy.
- Watermarking: Embedding a digital watermark in a video can help people trace its origin.
- Regulation: Governments have been slow to respond to deepfakes, but earlier this year China rolled out rules requiring deepfakes to have the subject’s consent and include digital signatures or watermarks. In the US, both Texas and California have laws banning certain deepfakes.
- Media literacy for the public: Developing the public’s media literacy so they can tell truth from fiction is critical. MIT has created a free online course for the public.
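To make the cryptographic-signature idea concrete, here is a minimal sketch in Python of signing and verifying a media file’s bytes. It uses an HMAC over a SHA-256 hash with a shared key, which is an illustrative simplification: the actual C2PA standard uses certificate-based signatures and embeds a provenance manifest inside the file itself, and the function names here are hypothetical.

```python
import hashlib
import hmac

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Hash the media and produce an HMAC signature over that hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time.

    Any change to the media bytes changes the hash, so the
    signature no longer matches and verification fails.
    """
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, signature)

# Usage: a tampered copy of the media fails verification.
key = b"publisher-secret-key"   # hypothetical publisher key
original = b"...video bytes..."
sig = sign_media(original, key)
print(verify_media(original, key, sig))              # True
print(verify_media(original + b"edit", key, sig))    # False
```

The design point this illustrates is that the signature binds to the exact bytes of the media: a consumer who trusts the publisher’s key can detect any post-publication alteration, which is the core guarantee provenance standards aim to provide.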
As the number of deepfakes continues to grow, so will the number of tools and approaches to detect and regulate them. The development of responsible technology can match the development of technology used to mislead, but it will require equipping citizens, journalists, and lawmakers with the tools they need to stay ahead of the curve.