Malicious rumors are a weapon of political warfare. The Kremlin adeptly uses them to erode trust and sow divisions. But what we have seen so far—fake news sites, the use of stolen, twisted information, swarms of pretend social-media accounts and so forth—is just the start. Next-generation tactics will be far worse. They will involve audio and video that have not just been edited to deceive, but invented outright.
Worries have been growing for months. This summer, The Economist published a story called “Fake news: you ain’t seen nothing yet,” highlighting a YouTube video in which the French musician Françoise Hardy purportedly discusses President Donald Trump’s inauguration. She looks only 20 years old; she is actually 73. And the words she “speaks” are actually those of Trump’s adviser, Kellyanne Conway. The “recording” never happened: computer software had analyzed and reworked previously published material.
That video was monochrome and grainy. But the technology has already leapt ahead. Nvidia, a company that specializes in graphics processing, has just published a paper showing how its software can turn daytime scenes into night, and winter ones into summer (it can also turn pictures of cats into wild animals).
This is going to turn modern life upside down. We are quite used to forgeries—but of documents, not people. It will not be long before anyone with a sufficiently powerful computer and the right software can produce a video of any politician saying anything. How will the world react to footage of Vladimir Putin confessing that the invasion of Ukraine was a mistake—or issuing an ultimatum to the Baltic states? How will we know whether these images and sounds are real? And if the Kremlin then denies that Putin said anything of the kind, how will we know whether the denial is truthful? Indeed, perhaps the footage of the spokesman issuing the denial is itself just a computer-generated forgery.
Take a voyage a little further into the future, and it will be increasingly difficult to know whether a phone conversation is with a human being. Maybe you are talking to a computer program that perfectly synthesizes your interlocutor’s voice, in response to whatever you say.
The big casualty here is trust. We believe, more or less, the tinny sounds and flickering images that electronic devices bring us. Once manipulation becomes cheap and ubiquitous, the impact will be initially disruptive, and then corrosive. It will be easy to start riots and wars, and to destroy reputations. Many people will stop believing anything they consume electronically: news will be no different from entertainment. A music video and a clip of a public figure saying or doing something outrageous will be rated solely on their capacity to divert, rather than for any factual content.
The big question is what happens after that. Daily life demands a reasonable level of certainty—about the date and the time of day, our whereabouts, what we are buying and selling, and whom we are dealing with. Long ago, all that happened solely based on physical proximity. We would tell the time by looking at the sun, or perhaps a nearby clock, orient ourselves from familiar landmarks, and trust only people we met face-to-face. It will be hard to go back to that era.
Our best ally here will be cryptography, which can create signatures and landmarks that allow us to start trusting our interlocutors and our surroundings. A video of Putin signed with his personal encryption key is the real thing. Anything else is suspicious. Technology enabled fake news. It may yet be its doom.
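The mechanism behind such a signature can be sketched in a few lines. What follows is a minimal illustration of the sign-and-verify idea, using textbook RSA with tiny, deliberately insecure parameters; real systems use vetted cryptographic libraries, keys of 2048 bits or more, or modern schemes such as Ed25519. The key values and the sample message are invented for the example.

```python
# Toy digital-signature sketch (textbook RSA, insecure parameters).
# Only the private key d can produce a signature; anyone holding the
# public key (e, n) can check it. Real deployments use vetted libraries.
import hashlib

# Hypothetical key pair: modulus n = p*q, public exponent e, private d.
p, q = 61, 53
n = p * q                           # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e (Python 3.8+)

def sign(message: bytes, d: int, n: int) -> int:
    """Hash the message, then transform the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int, e: int, n: int) -> bool:
    """Undo the transform with the public key and compare digests."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

video = b"official statement, 2017-12-12"
sig = sign(video, d, n)
print(verify(video, sig, e, n))              # True: signature checks out
print(verify(video, (sig + 1) % n, e, n))    # False: tampering detected
```

The point of the sketch is the asymmetry: forging a valid signature without the private key is computationally infeasible at real key sizes, so a signed video can be verified by anyone, while an unsigned or altered one fails the check.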
December 12, 2017