WASHINGTON – “Fake news” is about to get faker than ever thanks to artificial intelligence, which now allows people to compose entirely made-up videos that show politicians saying things they never said.
That’s the stark message that experts – including a former Defense Department official who now teaches at the University at Buffalo – brought to a House Intelligence Committee hearing Thursday.
The lies that Russia spewed on social media before the 2016 election were just the start, witnesses warned at the hearing on the national security threat posed by “deepfake” technology. They said the world is entering a new era of politics in which the public might not be able to tell the truth from fiction in future campaigns, unless government and the social media giants take action soon.
“There's an old adage that says that a lie can go halfway around the world before the truth can get its shoes on,” said David Doermann, a professor and director of the Artificial Intelligence Institute at UB. “And that’s true.”
There’s nothing new about lies going viral on social media. But what’s new, Doermann and other experts said, is the sophistication with which those lies can now be concocted.
Artificial intelligence quite literally can put words in the mouths of politicians who never said them, they said. And there are currently no effective laws or best practices in place to keep those fake videos from spreading.
Doermann monitored the growth of deepfake technology while serving for five years as the director of a Defense Advanced Research Projects Agency project aimed at combating the phenomenon.
“What was unexpected was the speed at which this manipulation technology would evolve,” said Doermann, who joined UB last year.
Editing software now exists that allows people to slice and dice images and words together into a seamless image of someone saying or doing something outrageous. It’s only a matter of time, witnesses at the hearing said, before foreign elements – perhaps in Russia, perhaps in China – start concocting fake videos aimed at influencing U.S. elections.
“It's likely to get much worse before it gets much better,” Doermann said.
It’s already bad enough in India, where a deepfake video was used not to influence an election, but to intimidate a female journalist who was covering government corruption.
“Her face was morphed onto pornography, and that first day, it goes viral, it's on every social media site, it’s on WhatsApp,” said Danielle Citron, a professor of law at the University of Maryland and the author of the book “Hate Crimes in Cyberspace.”
The journalist soon found herself the target of rape threats and had to go offline for several months, Citron said.
What can be done about such realistic and dangerous hoaxes? Social media companies, governments and individuals can all combat them, witnesses at the hearing said.
Doermann suggested that social media companies might want to impose some sort of delay before videos are posted online, in hopes of ferreting out the hoaxes.
“We need to continue to put pressure on social media to realize that the way that their platforms are being misused is unacceptable,” Doermann said. “They must do all they can to address today's issues and not allow things to get worse.”
Witnesses also proposed legislation that would weaken the immunity that social media companies now enjoy when fake videos appear on their platforms, as well as other changes in the law.
“Congress should implement legislation prohibiting U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content,” said Clint Watts, distinguished research fellow at the Foreign Policy Research Institute.
But the partisan dialogue between the committee’s chairman, Democratic Rep. Adam Schiff, and the top Republican on the panel, Rep. Devin Nunes, seemed to indicate that passing any legislation on deepfake technology could be very difficult.
Schiff stressed that action must be taken soon, before deepfake videos wreak havoc in the 2020 presidential election.
“We got a preview of what that might look like recently when a doctored video of House Speaker Nancy Pelosi went viral on Facebook, receiving millions of views in the span of 48 hours,” Schiff said.
And that wasn’t even a deepfake video, but rather a crude manual manipulation.
“One does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media and the public struggling to discern what is real and what is fake,” Schiff said.
Schiff said Congress needs to develop legislation to combat deepfake videos. But when it was his turn to speak, Nunes began with a partisan jab at the much-discussed, never-corroborated “dossier” that surfaced in 2016 and allegedly showed that the Republican candidate, Donald Trump, engaged in lewd behavior.
“I join you in your concern about deep fakes and want to add to that fake news, fake dossiers and everything else that we have in politics,” Nunes told Schiff. Both lawmakers hail from California.
With Congress in the very early stages of dealing with deepfake technology, witnesses at the hearing said that everyone who uses social media will have to be vigilant to avoid spreading made-up news and videos.
“The people that share this stuff are part of the problem, even if they don’t know it,” Doermann said.