A shocking dossier intended to detonate a bomb under Joe Biden’s presidential campaign was defused after a researcher spotted its author was a computer-generated deepfake.
A document penned by Typhoon Investigations began circulating in right-wing circles in September and alleged compromising ties between the candidate’s son, Hunter Biden, and China.
But “Martin Aspen”, the document’s purported author, isn’t real. His likeness was produced by a generative adversarial network (GAN), a branch of artificial intelligence, and the report’s allegations were baseless.
Disinformation researchers have warned that deepfake personas like Martin Aspen pose a threat to democracy, though up until now the threat has been minimal. We’ve seen convincing examples of Trump and Obama deepfakes, though neither was used for nefarious political purposes.
The Martin Aspen incident is something else — if political fakery is really on the rise, how do we protect ourselves?
There are tell-tale signs when a neural network has produced a fake image
First, it’s helpful to understand how these images are created.
In a GAN, two neural networks compete against each other: a generator produces candidate images, while a discriminator tries to tell those fakes apart from real photographs. Each network learns from the other’s successes and failures, and over many rounds of this contest the generator’s output becomes harder and harder to distinguish from the real thing.
GANs have become very good at creating lifelike images of people — but they’re not infallible. Check out this weird “dog ball” generated by a trio of researchers in 2019:
But GANs have improved significantly, to the extent where the technology can generate fairly convincing human faces:
“While these generative adversarial networks can be really good, and they learn from their own ‘mistakes’ so they get better over time, there are certain contextual things they cannot understand,” said Agnes Venema, a Marie Curie research fellow, working on a project at the Romanian National Intelligence Academy and at the Department of Information Policy and Governance of the University of Malta.
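To make the adversarial idea concrete, here is a deliberately tiny sketch in plain Python, not a real image GAN: the “real” data is a single number, the generator is one learnable parameter, and the discriminator is a one-input logistic classifier. All the names and values are invented for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

REAL = 4.0   # the "real" data the generator must learn to imitate
LR = 0.05    # learning rate for both players

a, c = 0.0, 0.0   # discriminator: d(x) = sigmoid(a*x + c)
w = 0.0           # generator: simply outputs the learnable value w

for _ in range(2000):
    # Discriminator step: push d(REAL) toward 1 and d(w) toward 0.
    d_real = sigmoid(a * REAL + c)
    d_fake = sigmoid(a * w + c)
    grad_a = -(1 - d_real) * REAL + d_fake * w
    grad_c = -(1 - d_real) + d_fake
    a -= LR * grad_a
    c -= LR * grad_c

    # Generator step: push d(w) toward 1 (non-saturating GAN loss),
    # i.e. learn to fool the current discriminator.
    d_fake = sigmoid(a * w + c)
    grad_w = -(1 - d_fake) * a
    w -= LR * grad_w

print(w)  # the generated value drifts toward REAL as the contest runs
```

The same back-and-forth, scaled up to deep convolutional networks trained on millions of photos, is what produces faces like “Martin Aspen”.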
Here’s how to spot when an image isn’t exactly a real person.
Background details can be telling
“Key giveaways for GAN-created faces tend to be vague, out of focus backgrounds, or weird textures,” said Elise Thomas, the researcher at the Australian Strategic Policy Institute who first outed Aspen as an AI fraud.
“Sometimes they look like they’re borrowed from other things,” she added. “Like a shirt which looks like it has the texture of the plant.” Aspen’s odd green clothes were a dead giveaway.
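One way to put a number on “out of focus” is the variance of a Laplacian filter’s response: sharp texture produces strong edge responses, while smooth or blurred regions produce weak ones. A minimal sketch in plain Python, with tiny hand-made “images” standing in for a real photo (production tools would run this over actual pixels, e.g. with OpenCV):

```python
def laplacian_variance(img):
    """Variance of a 3x3 Laplacian response; low values suggest blur."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# Sharp, high-contrast texture (checkerboard) vs. a smooth gradient.
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
smooth = [[x * 10 for x in range(8)] for y in range(8)]

print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

Comparing the score of the background against the face region is one rough way to flag the uniformly vague backdrops GANs tend to produce.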
It’s all in the eyes
The key tell that Aspen was the product of computer code, rather than a real person, was simple once you zoomed into the eyes. “You do sometimes see the irregular irises, as the Martin Aspen picture had,” said Thomas.
The irises get close to being realistic, but often bleed or blur in a way that isn’t natural. In the case of the faked image of Martin Aspen, there’s a second pupil in one iris, which is only visible when you zoom in and analyze the image in detail.
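The “second pupil” artifact can be framed as a simple blob-counting check: threshold the iris region and count the connected dark patches — a real eye should have exactly one. A toy sketch on hand-made 5×5 grayscale patches (the patches and threshold are invented for illustration; a real detector would first locate the eyes in the photo):

```python
def count_dark_blobs(patch, threshold=50):
    """Count 4-connected regions of pixels darker than `threshold`."""
    h, w = len(patch), len(patch[0])
    seen = set()
    blobs = 0
    for y in range(h):
        for x in range(w):
            if patch[y][x] < threshold and (y, x) not in seen:
                blobs += 1
                stack = [(y, x)]  # flood-fill this dark region
                while stack:
                    cy, cx = stack.pop()
                    if (cy, cx) in seen:
                        continue
                    seen.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and patch[ny][nx] < threshold):
                            stack.append((ny, nx))
    return blobs

# 0 = dark pupil pixel, 200 = lighter iris.
real_eye = [[200]*5,
            [200, 0, 0, 200, 200],
            [200, 0, 0, 200, 200],
            [200]*5,
            [200]*5]
fake_eye = [[200]*5,
            [200, 0, 200, 0, 200],  # two separate dark patches
            [200]*5,
            [200]*5,
            [200]*5]

print(count_dark_blobs(real_eye), count_dark_blobs(fake_eye))  # 1 2
```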
Check the ears, too
Computers don’t have ears, and so when confronted with the curious mix of cartilage, bone and skin, they struggle to …
Source:: Business Insider