Authored by Vanessa Gu of CGTN.
It used to be that seeing was believing. But deepfake technology, artificial intelligence (AI) that can manipulate images, videos and audio to show a person doing or saying something they never did in real life, is challenging that assumption and making deception easier than ever.
Deeptrace, a cybersecurity company that specializes in detecting AI-generated synthetic videos, reported a spike in the number of deepfake videos online from 7,964 in December 2018 to 14,678 in July 2019. That’s an 84 percent jump in less than a year.
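The 84 percent figure can be verified directly from the two counts Deeptrace reported; a minimal check in Python (the variable names are illustrative, the numbers come from the report cited above):

```python
# Deeptrace's reported counts of deepfake videos online
dec_2018 = 7_964   # December 2018
jul_2019 = 14_678  # July 2019

# Percentage increase over roughly seven months
increase = (jul_2019 - dec_2018) / dec_2018 * 100
print(f"{increase:.0f} percent")  # prints "84 percent"
```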
Analyzing videos from video hosting sites, community forums and deepfake apps, the company said 96 percent were pornographic in nature, often with the computer-generated face of a celebrity replacing that of a real-life adult actor in a sexually explicit scene. It found that actresses from Western countries were the most frequently targeted individuals, with South Korean pop singers the second and third most targeted.
The more elusive fake news
Given the proliferation and accessibility of deepfake technology, it is no surprise there has been a spike in the number of deepfake videos circulating online. Earlier this year, the Chinese app Zao went viral for its ability to superimpose users' selfies onto celebrities in movie scenes. The app captures a series of selfies in which users make various facial expressions, then digitally transfers their likeness onto an actor in a movie scene.
While most deepfake apps and software are intended mainly for entertainment, the technology can have severe political ramifications. In Malaysia, sex video clips allegedly involving Economic Affairs Minister Azmin Ali and former Santubong PKR Youth chief Haziq Abdullah Abdul Aziz were leaked through the messaging platform WhatsApp. The government has said the videos are deepfakes, but cybersecurity experts have yet to conclusively determine whether they were doctored.
Governments around the world are rushing to keep up with the burgeoning threat of AI deepfakes. In the U.S., California has enacted a law banning the distribution of deepfake videos within 60 days of an election. A second law allows California residents to sue anyone who distributes pornographic deepfake content using their likeness without their consent.
Tech giants take on deepfakes
However, there are questions as to how effective legislation will be, given the difficulty of spotting deepfakes in the first place. While there are systems to authenticate a video or image at the point of capture, the rapid evolution of deepfake technology has created an arms race between deepfake creators and those trying to detect them.
“It’s a cat-and-mouse game. If I design detection for deepfakes, I’m giving the attacker a new discriminator to test against,” Siddharth Garg, an assistant professor of computer engineering at New York University’s Tandon School, told Reuters.
Facebook came under fire this year when an altered video of U.S. House Speaker Nancy Pelosi slurring and tripping through a speech was distributed on the platform. Its Chief Executive Mark Zuckerberg admitted Facebook was too slow in flagging the video as false. “It took a while for our system to flag the video and for our fact checkers to rate it as false… and during that time it got more distribution than our policies should have allowed,” Zuckerberg said at a conference in Aspen, Colorado.
In the lead-up to the 2020 U.S. presidential election, tech giants like Facebook, Twitter and Google are eager to clamp down on deepfake videos to prevent a repeat of the fake news wildfire of the 2016 election.
Google last month released a large database of visual deepfakes that can serve as a benchmark for determining whether a video has been altered. The company produced the database in the hope that it will help researchers build tools to detect and remove such videos. Meanwhile, Facebook is teaming up with Microsoft, the Partnership on AI coalition and academics from several universities to launch a contest to better detect deepfakes.