One evening earlier this year, my eight-year-old son Jack asked me what a deepfake was. He pointed at the iPad and told me he had heard someone talking about it over lunch at school. I explained that deepfakes are videos which use technology to make a person appear to say or do something that they didn’t say or do. He was not particularly fazed by this and went on to imagine things that he might get a fake President Trump to say. I found that alarming. In 2020, children will have to be taught new digital skills around trust.
Being able to tell what’s real and what’s not has always been hard. We have long been used to Photoshop and computer-generated imagery: National Geographic manipulated an image of the Egyptian pyramids so it would fit better on its cover, and Tom Hanks appeared to meet President Kennedy in the film Forrest Gump.
But the reason deepfakes are more concerning is not the technical sophistication per se – it is the human factor. People see what they want to see and believe what they want to believe. Our desirability bias means we tend to believe first, then look for evidence that supports the belief. It’s a deeply rooted flaw in how we process information.
Deepfakes exacerbate the problem. In 2020, audio and video that distort reality will become even easier to produce. And that will herald a new era of deception on a scale, and of a nature, different from what we’ve seen before.
A more profound issue is how deepfakes will give disingenuous politicians and crooks carte blanche to deny the validity of real footage. When the Access Hollywood audio recordings emerged a few weeks before the 2016 US presidential election, in which candidate Trump famously bragged that he could “grab women by the pussy”, he offered a qualified apology for his remarks, before later suggesting that the clip was fake. When anything can be faked, it becomes much easier for the guilty to deny the authenticity of anything.
The American law professors Danielle Citron and Robert Chesney call this problem the “Liar’s Dividend”: when journalists try to establish that footage is real, the act of verification can backfire, because it lends credence to the claim that the footage might be fake. That credence is the dividend the liar collects.
The greatest trust threat for the next generation isn’t being deceived by deepfakes – in fact, I’m certain that in a couple of years, Jack will surpass me at spotting them. My worry is whether, amidst a constant stream of misinformation, he will care about figuring out what’s real. The danger is that we will regard almost all information as untrustworthy, a state of mind that Aviv Ovadya, a media researcher and founder of the Thoughtful Technology Project, calls “reality apathy”.
In 2020, we will find ourselves in a cat-and-mouse game of deception and detection, and we will need to teach our children and ourselves how to separate the factual from the fake – and to care about doing so.
Rachel Botsman is the author of Who Can You Trust? and a Trust Fellow at Oxford University’s Saïd Business School
This article was originally published by WIRED UK