“Alfonzo Macias” looks unremarkable at first glance — bearded, bespectacled, with a short widow’s peak. But his strangely distorted glasses and the dissolving background behind him hint at a discomforting truth: Mr Macias never existed.
Undetectable to the naked eye, the uncannily human face is in fact the creation of an algorithm — one used by pro-Trump media outlet TheBL to give an identity to one of the many fake Facebook accounts that it uses to drive traffic to its website.
While less attention-grabbing than the viral deepfake videos that have manipulated the speech and actions of politicians and celebrities to popular effect in recent years, static artificial intelligence-generated faces are becoming an increasingly common tool for misinformation, experts say.
Instead of making real people appear to say and do things they have not, the technique works by generating entirely “new” people from scratch.
Already, fake faces have been identified in bot campaigns from China and Russia, as well as in rightwing online media outlets and purportedly legitimate businesses. Their proliferation has led to concerns that the technology could represent a more ubiquitous and pressing threat than deepfakes, as online platforms grapple with a rising tide of misinformation ahead of the US election.
“A year ago, this was a novelty,” tweeted Ben Nimmo, director of investigations at social media intelligence group Graphika. “Now it feels like every operation we analyse tries this at least once.”
The face race
Like deepfakes, AI-generated faces are created using a technology known as GANs, or generative adversarial networks. One network generates images, while a second tries to distinguish them from photographs of real faces; that feedback forces the generator to improve until the discriminator can no longer tell its synthetic images from the real thing.
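The adversarial loop can be sketched in miniature. The toy below is illustrative only, nothing like StyleGAN2: a linear "generator" learns to mimic a one-dimensional Gaussian (mean 4) by fooling a logistic-regression "discriminator", with gradients derived by hand. All hyperparameters are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, c = 0.1, 0.0    # discriminator parameters: d(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0    # generator parameters:     g(z) = a*z + b
lr_d, lr_g, batch = 0.1, 0.01, 64

for _ in range(3000):
    x_real = rng.normal(4.0, 1.0, batch)   # samples from the "real" data
    z = rng.normal(0.0, 1.0, batch)        # generator input noise
    x_fake = a * z + b

    # Discriminator ascends log d(real) + log(1 - d(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr_d * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator ascends log d(fake) (the "non-saturating" objective):
    # it is rewarded whenever the discriminator is fooled
    d_fake = sigmoid(w * (a * z + b) + c)
    gx = (1 - d_fake) * w                  # gradient w.r.t. each fake sample
    a += lr_g * np.mean(gx * z)
    b += lr_g * np.mean(gx)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(fakes.mean()), 1))  # generated mean is pushed toward the real mean of 4
```

The same tug-of-war, scaled up to deep convolutional networks and millions of photographs, is what produces faces such as "Alfonzo Macias".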
Digital renderings of fictional humans have had a growing presence online in recent years, with stars such as the avatar influencer Lil Miquela drawing in vast followings on Instagram and Twitter. But what sets GAN-generated faces apart is their photorealism — the level of detail that gives a strange lifelikeness to the characters.
“The most recent GAN models [such as Nvidia’s popular StyleGAN2] can now be used to create highly realistic synthetic images of human faces, down to the minuscule details — in particular, skin and hair,” said Siwei Lyu, a professor of computer science at the University at Albany, State University of New York.
ThisPersonDoesNotExist, a website that creates a StyleGAN2 face each time it is refreshed, demonstrates how convincing such images can be. Nor is the technique limited to human faces: dozens of variants generate everything from cars to cats.
While concerns over AI-powered misinformation had focused largely on political deepfakes, a substantial case was yet to materialise, said Henry Ajder, a researcher who specialises in deepfakes and synthetic media. “There hasn’t been the kind of [Donald] Trump waving the nuclear red button around.”
However, instances of GAN-generated fake faces used for deception have been appearing since last June, when the Associated Press identified an account on LinkedIn masquerading as a think-tank employee.
Larger-scale use of the technique was first identified in December, when Graphika and the Atlantic Council’s Digital Forensic Research Lab released a report on a network of over 900 pages, groups and accounts linked to the rightwing news outlet Epoch Media Group. “They used these fake faces to bolster their Facebook presence and deliver their messages to a wider audience,” said Max Rizzuto, research associate at the DFR Lab.
Meanwhile nation states have also spotted the technology’s potential, with Graphika discovering dozens of GAN-generated faces used in campaigns linked to China and Russia. In the case of China, GAN-generated images were used as profile pictures in a Facebook campaign, with fake accounts pushing pro-Beijing talking points on subjects including Taiwan, the South China Sea and Indonesia.
By contrast, the Russian campaigns had used fake faces to create the personas of fictional editors behind divisive political news outlets.
Giorgio Patrini, chief executive of deepfake detection platform Sensity, said GAN-generated faces were also making an appearance in the corporate world, with examples including a software company that used fake faces for client testimonials and a marketing company that used the technology to generate photos of its “team”.
‘This is actually a fabrication’
The first step towards combating the risk of GAN-generated faces was spreading awareness of their existence, said Mr Rizzuto. “Once you tell these people that it is actually a fabrication, you can see this evolved sense that all humans have to spot abnormalities in an image.”
Despite the remarkable progress behind StyleGAN2, there were a number of tell-tale signs of a fake, he said — for instance, a subject’s head might be tilted while their nose and teeth remained straight. The algorithm can also struggle when incorporating background objects and other people, sometimes creating inadvertently unpleasant spectacles.
Another potential giveaway noted by Graphika is that the eyes of GAN-generated faces all appear in the same place within the image, regardless of which way the “subject” is facing.
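That alignment quirk can be turned into a crude screening heuristic: across a batch of suspect profile photos, locate the eyes and check whether they sit at nearly identical pixel coordinates. The sketch below is purely illustrative — it uses made-up landmark data rather than a real face-landmark detector, and the pixel threshold is an assumption, not a published figure.

```python
import numpy as np

def eye_alignment_score(eye_coords):
    """Mean per-axis standard deviation of eye positions across images.

    eye_coords has shape (n_images, 2, 2): (left/right eye, x/y) in pixels.
    Faces from the same aligned GAN model tend to score near zero.
    """
    coords = np.asarray(eye_coords, dtype=float)
    return float(np.std(coords, axis=0).mean())

# Hypothetical landmark data for illustration (not from a real detector):
# GAN-style batch -- eyes pinned to almost the same pixels in every image.
gan_batch = (np.array([[[330, 395], [690, 395]]] * 8)
             + np.random.default_rng(1).normal(0, 1.5, (8, 2, 2)))
# Organic batch -- eyes move with pose, crop and framing.
real_batch = np.random.default_rng(2).normal([[300, 380], [650, 400]],
                                             40.0, (8, 2, 2))

THRESHOLD = 10.0  # pixels; an illustrative cut-off
print(eye_alignment_score(gan_batch) < THRESHOLD)   # tightly clustered: suspicious
print(eye_alignment_score(real_batch) < THRESHOLD)  # spread out: passes
```

A real pipeline would feed the score landmarks from a face-detection library rather than synthetic coordinates, but the underlying signal — unnaturally consistent eye placement — is the same one Graphika describes.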
Meanwhile, researchers, government bodies and tech companies are building and improving models to detect fake faces. Prof Lyu was among the authors of a paper on one such technique, which studied the images of objects reflected in the eyes of subjects in order to distinguish real faces from fakes.
The field was constantly evolving, said Mr Rizzuto, pointing to deepfake research by Samsung last year, which turned the Mona Lisa into a realistic talking head. He said the technology could one day be applied to create more convincing fake profiles, producing pictures of the same persona from a variety of angles and with a range of expressions.
“The potential capacity to deceive is kind of outweighed by the amount of labour that would take to pull it off [right now],” he said. “In the near future, I would expect to see that . . . diminish considerably.”