Deepfakes are solvable—but don’t forget that shallowfakes are already pervasive

The technology industry has a unique opportunity to tackle “deepfakes”—fake audio and video created using artificial intelligence—before they become a widespread problem, according to human rights campaigner Sam Gregory.

But, he warns, major companies are still a very long way from tackling the pervasive and more damaging issue of cruder “shallowfake” misinformation.

Gregory is a program manager at Witness, which focuses on the use of video in human rights—either by activists and victims to expose abuses, or by authoritarian regimes to suppress dissent. Speaking on Monday to an audience at EmTech Digital, an event organized by MIT Technology Review, he described the current state of deepfakes as “the calm before the storm.”

“Malicious synthetic media are not yet widespread in usage, tools have not yet gone mobile, they haven’t been productized,” said Gregory. This moment, he suggested, presents an unusual opportunity for the creators of deepfake technology to establish ways to combat malicious uses before bad actors are able to deploy it widely.

“We can be proactive and pragmatic in addressing this threat to the public sphere and our information ecosystem,” he said. “We can prepare, not panic.”

While deepfakes may still be some way from the mainstream, there is already a problematic flood of misinformation that has not been stemmed. Fake information today generally does not rely on AI or complex technology. Rather, simple tricks like mislabeling content to discredit activists or spread false information can be devastatingly effective, sometimes even resulting in deadly violence, as happened in Myanmar.

“By these ‘shallowfakes’ I mean the tens of thousands of videos circulated with malicious intent worldwide right now—crafted not with sophisticated AI, but often simply relabeled and re-uploaded, claiming an event in one place has just happened in another,” Gregory said.

For example, he said, one video of a person being burned alive has been reused and re-attributed by actors in Ivory Coast, South Sudan, Kenya, and Burma, “each time inciting violence.”

Another threat, said Gregory, is the growing idea that we cannot trust anything we see, which is “in most cases simply untrue.” Subscribing to this notion “is a boon to authoritarians and totalitarians worldwide.”

“An alarmist narrative only enhances the real dangers we face: plausible deniability and the collapse of trust,” he added.

Mark Latonero, human rights lead at Data & Society, a nonprofit research institute that studies the social implications of data-centric technologies, agreed that technology companies should be doing more to tackle such issues. While Microsoft, Google, Twitter, and others have employees focused on human rights, he said, there is much more they should be doing before they deploy technologies—not after.

“Now is really the time for companies, researchers, and others to build these very strong connections to civil society, and the different country offices where your products might launch,” he said. “Engage with the people who are closest to the issues in these countries, and build those alliances now, so when something does go wrong—and it will—we can start to have the foundation for collaboration and knowledge exchange.”
