Adobe Is Leading The Charge Against The Growing Epidemic Of Deepfake Videos
The reality of deepfakes – AI-driven human image synthesis used to fabricate a person’s appearance or speech in video – is becoming so widespread and alarming that some of the biggest names in technology are teaming up with the New York Times to combat the problem.
With advances in editing tools and AI, fake videos – used for purposes ranging from political manipulation to good old revenge porn – are becoming more prominent and certainly more convincing. “It will soon be possible to make convincing videos showing anyone saying anything and photos of things that never happened,” according to Axios.
As a result, Twitter, Adobe and the New York Times are now proposing a collaborative effort to make clear who made a photo or video, and what changes have been made to it along the way.
Adobe is hoping to implement a system that allows publishers to attach secure attribution data to content. The company could build the technology into its own tools, but it wants it to be an open standard that others can adopt as well. Adobe showed a prototype of the tool earlier this week at its MAX conference in Los Angeles.
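Adobe has not published the details of its system, but the general idea – bundling content with signed metadata about who made it and how it was edited – can be sketched in a few lines. The function names, the metadata fields, and the use of a shared-secret HMAC are illustrative assumptions only; a real attribution standard would use asymmetric signatures tied to a publisher’s verified identity.

```python
import hashlib
import hmac
import json

# Hypothetical publisher key for illustration; a real system would use an
# asymmetric key pair, not a shared secret.
PUBLISHER_KEY = b"example-secret-key"

def attach_attribution(content: bytes, author: str, edits: list) -> dict:
    """Bundle content with a signed record of its origin and edit history."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "author": author,
        "edits": edits,  # e.g. ["cropped", "color-corrected"]
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PUBLISHER_KEY, payload, "sha256").hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    """Recompute the hash and signature; altering either pixels or metadata fails."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A reader’s tool could then check a downloaded image against its attribution record: if the content was retouched after signing, or the listed author was swapped, verification fails.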
The companies are joined by a startup called Truepic, which also aims to create a “secure path” from the moment a photo or video is captured that can then be used to verify its authenticity.
Axios’ Kaveh Waddell said the idea “…solves a small but important layer of the online trust crisis. This would allow a reader to verify that something came from Axios — but if they are skeptical of Axios to begin with, that won’t matter. Verification that isn’t easily accessible threatens to bifurcate online information into ‘trusted content’ from those who have the resources to verify it and an easily dismissed information underclass.”
Many companies are looking to prove authenticity using blockchains – decentralized lists of transactions that can’t be altered after the fact. An alternative idea involves a database held by a single company. Adobe says it has not yet finalized which mechanism it will use.
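The tamper-evident property both approaches rely on can be illustrated with a minimal hash chain, the core structure behind a blockchain: each record embeds the hash of the one before it, so changing any earlier entry breaks every later link. This sketch is a generic illustration, not Adobe’s or Truepic’s actual design.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash before the first record

def chain_append(log: list, entry: str) -> list:
    """Append an edit record whose hash covers the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
    log.append({
        "prev": prev,
        "entry": entry,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return log

def chain_valid(log: list) -> bool:
    """Walk the chain, recomputing each hash; any retroactive edit is detected."""
    prev = GENESIS
    for rec in log:
        body = json.dumps({"prev": rec["prev"], "entry": rec["entry"]},
                          sort_keys=True)
        if rec["prev"] != prev:
            return False
        if rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True
```

In a decentralized design many parties hold copies of such a log, so no single operator can quietly rewrite history; in the single-database alternative, readers must instead trust the company that holds it.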
Adobe general counsel Dana Rao said: “When it comes to the problem of deepfakes, we think the answer really is around ‘knowledge is power’ and transparency. We feel if we give people information about who and what to trust, we think they will have the ability to make good choices.”
New York Times’ head of R&D Marc Lavallee commented: “Discerning trusted news on the internet is one of the biggest challenges news consumers face today. Combating misinformation will require the entire ecosystem — creators, publishers and platforms — to work together.”
Finally, Twitter trust and safety head Del Harvey said: “Serving and enhancing global public conversation is our core mission at Twitter. Everyone has a role to play in information quality and media literacy.”
Adobe will be hosting a summit at its headquarters next month to continue the discussion. “We do look at this as a shared responsibility,” Rao concluded.