Battling Disinformation

May 27 • 4 min read

The Pope did not wear Balenciaga. Donald Trump didn’t get arrested in New York City. And no, Hillary Clinton did not recently endorse Ron DeSantis.

And remember seeing Mark Zuckerberg, some years ago, brag about having “total control of billions of people’s stolen data”? That, too, never really happened. It was made to look like it did, on video, by people using a form of artificial intelligence to create a “deepfake”: a fabricated image or video of an event that never occurred, used to spoof a celebrity or, worse, to spin the truth and maliciously disinform.

In recent years, deepfake videos have been relatively easy to spot and quickly discredit, mostly because the lip-syncing falls short or because what’s being shown strains credibility. Some deepfakes, like the popular series of @deeptomcruise videos shared on TikTok, have even become, for some, an acceptable new form of entertainment.

But now deepfake technology is becoming more sophisticated and more dangerous, and it is putting convincing fakes within almost anyone’s reach.

Thanks to the launch this past spring of a variety of “generative AI” tools, such as Midjourney and DALL-E, deepfakes are becoming far cheaper to make and quite a bit harder to spot. According to Reuters, cloning a voice once required about $10,000 in server and AI-training costs; now startups have begun offering it for a few dollars.

Consider, too, the image that won a Sony World Photography Award in April. Photographer Boris Eldagsen refused to accept the prize at the award ceremony, revealing that the image had been created by AI. Eldagsen said he chose to be a “cheeky monkey” by submitting it, hoping to spark a debate about the use of AI in the industry.

According to DeepMedia, a company building tools to detect synthetic media, three times as many video deepfakes and eight times as many voice deepfakes have been posted online so far this year as in the same period of 2022. The company predicts that more than 500,000 video and voice deepfakes will be shared on social media sites globally this year.

So what’s ahead? Fasten your seatbelts. In recent weeks, creatives and technology companies alike have begun warning that as we move closer to the 2024 election season, when democracy will be tested again, deepfakes are likely to proliferate sharply, and the stakes couldn’t be higher.

“We are in the middle of a growth of authoritarianism globally, a decline in trust and in mainstream media, (and) pervasive mis- and dis-information. We need a globally inclusive response to this broader phenomenon of generative AI and synthetic media.”

Sam Gregory, Executive Director, witness.org

“It’s going to be very difficult for voters to distinguish the real from the fake, and you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” Darrell West, a senior fellow at the Brookings Institution told Reuters. “…There could be things that drop right before the election that nobody has a chance to take down.”

How to fight back?

There are several “fake news detection” websites set up to help. MIT’s Detect Fakes is a short quiz that asks users to compare two videos and decide which is real. Microsoft’s Spot the Deepfake is a 10-question quiz that trains users to detect telltale signs such as mismatched shoes or earrings, or eye movements that don’t sync.

Other new detection tools are also emerging, including free versions such as Content at Scale and GPTZero, as well as paid versions such as Sensity. Tools designed to spot embedded markers in AI-generated images, for example, look for unusual patterns in how the pixels are arranged, including in their sharpness and contrast.
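To make that idea concrete, here is a toy sketch in Python (using numpy and Pillow) of what a naive pixel-statistics check might look like. To be clear, this does not reflect how any of the tools named above actually work; the filename, the Laplacian-variance heuristic, and the thresholds are all illustrative assumptions.

```python
# Toy illustration of pixel-statistics screening: compute a rough sharpness
# score (variance of a Laplacian filter response) and a global contrast score
# (standard deviation of gray levels). Real detectors are far more
# sophisticated; every threshold here is made up for demonstration.
import numpy as np
from PIL import Image

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a simple 3x3 Laplacian response; a crude sharpness score."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]], dtype=np.float64)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Valid-mode convolution done with shifted slices (no SciPy dependency).
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

def image_stats(path: str) -> dict:
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return {
        "sharpness": laplacian_variance(gray),
        "contrast": float(gray.std()),  # global RMS contrast
    }

if __name__ == "__main__":
    stats = image_stats("example.jpg")  # hypothetical input file
    print(stats)
    # Atypical statistics *might* merit a closer look, but a screen like
    # this is only a starting point for human review, never proof.
    if stats["sharpness"] < 50 or stats["contrast"] < 20:  # arbitrary cutoffs
        print("Image statistics look atypical; inspect further.")
```

Even in a real system, a check like this would only flag candidates for human review; the hard work of verification still falls to the journalists and fact-checkers described below.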

But deepfake detection tools are still early in development, and it’s not yet certain which of them to trust, says Sam Gregory, executive director of witness.org, who specializes in deepfake detection. Mya Zepp, with IJNet’s Disarming Disinformation project, says it’s essential that journalists and creatives committed to sorting fact from fiction have access to reliable tools, so they can more easily spot what’s real, question what might not be, and explain what’s fake to their audiences and stakeholders. Most critical, says Professor Lilian Edwards of Newcastle University in the U.K., a specialist in Internet law, is to “stem the potential chaos” of both real deepfakes and claims of deepfakes, which seriously erode public trust.

“The future doesn’t have to be one in which anything can be called a deepfake, anyone can claim something is manipulated, and trust is further corroded,” Edwards recently told Guardian writer Ian Sample. “The problem may not be so much the faked reality as the fact that real reality becomes plausibly deniable.”
