
Opinion

Fake Pictures Are Worth Much More Than a Thousand Words

Jim Acosta’s temporary removal from the White House Press Corps in November 2018 brought to public attention a much more serious issue: deepfakes, an emerging technological reality wherein any video that we assume to be a true representation of reality could have been altered, or even wholly faked.

Dov Greenbaum | 18:41, 10.01.19
In November 2018, the White House was rocked by a relatively benign scandal. CNN reporter Jim Acosta was temporarily removed from the White House Press Corps for allegedly assaulting a White House intern during a press conference. To prove the assault, the White House released a short clip of the press briefing showing the interaction between Acosta and the intern. The release was immediately countered by claims that the video had been doctored. The White House admitted only that the video was sped up, but insisted it was otherwise unaltered.


The incident was quickly litigated in a U.S. District Court in D.C., and the White House backed down, restoring Acosta’s credentials. While the issue at stake might seem at first glance case-specific, the incident actually points to a much greater concern brewing: technological innovations have resulted in an emerging reality wherein any video that we assume to be a true representation of reality could have been altered, or even wholly faked.

Deepfakes. Photo: Shutterstock

This rapidly evolving technology, which is based on artificial intelligence (AI) computing, has recently become available to a wider audience. Colloquially referred to as deepfakes, the technology effectively grants anyone with a computer and access to open source software, and with no particular technical skill, the ability to create realistic videos of people saying and doing things that they did not say or do.

The problem is exacerbated by the easy availability of cheap computer processing power and storage. Academic research in this area is also easily accessible nowadays; much of it employs Generative Adversarial Networks (GANs), artificial intelligence algorithms used in unsupervised machine learning, and GAN-like techniques that are rapidly advancing the field, seemingly without much concern for the potential repercussions. The situation is further worsened by similar deep-voice software that can clone a person’s voice from just seconds of original audio, giving perpetrators the ability to both look and sound like their intended target.
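
To make the adversarial idea behind GANs concrete, here is a minimal, hypothetical sketch in PyTorch: a generator learns to produce samples that a discriminator can no longer distinguish from real data. It illustrates the general technique, not any particular deepfake tool; the toy data, network sizes, and training settings are illustrative assumptions only.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained against each other on a toy two-dimensional dataset.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # toy sizes, standing in for real image dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    # "Real" data: a fixed Gaussian blob standing in for genuine footage.
    real = torch.randn(128, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(128, latent_dim))

    # 1. Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(128, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(128, 1))
    loss_d.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(128, 1))
    loss_g.backward()
    opt_g.step()
```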

Deepfakes can put words in mouths, put borrowed bodies on heads, or put borrowed heads on bodies, all with increasing sophistication and believability. With each new iteration of the relevant technologies, the fakes become even more convincing, and potentially more dangerous.

Like all new and exciting technologies, deepfakes have both legitimate and illegitimate uses. The technology can, for example, convincingly provide anonymous cover for those who value their privacy and do not want to be identified. It also allows for the creation of informative political satire and enjoyable parody. In some instances, playing around with this software is just good clean fun.

But consider the political repercussions if a deepfake were used to coerce or misinform the public by portraying a politician saying something inflammatory for either domestic or international consumption; the financial repercussions if a Wall Street tycoon were portrayed as saying something about a particular stock or market trend; the personal repercussions if a private citizen were portrayed doing something unbecoming or illegal; or the general havoc and chaos that might materialize if an official were presented as announcing an attack or a pandemic.

A related but currently more complicated technology, announced in late 2018, showcased the capability of creating completely made-up human faces that are effectively indistinguishable from the genuine article. Such a technology can be used to supplant the real faces of people in videos, or to create fake social media accounts that further spread fake news.

The legal concerns that arise with the uses of these technologies, both malicious and benign, are substantial. Consider the intellectual property and tort law repercussions related to the misappropriation of a person’s likeness, for example by portraying them in a film in which they did not actually appear, or by using them to sell a product they did not intend to endorse. Such an activity could, depending on the particular jurisdiction, infringe on that individual’s right of privacy, their right to control their image and persona within the public sphere.

In some instances, the legal concerns associated with the deepfake are content-agnostic: regardless of what the deepfake is promoting, the technology will lead to increasing copyright infringements of the source material, particularly if the deepfake becomes a meme and other forgeries are piled on.

In other instances, depending on how maliciously the video associates the offended individual with its content, the actual content of the deepfake could further exacerbate the offense and could also lead to claims of defamation or the intentional infliction of emotional distress.

There are also much broader legal concerns. The mere knowledge that such videos exist, and their seeming authenticity, could cause irreparable harm to the judicial system as we know it, particularly when it comes to the reliability of video evidence in court: we might no longer be able to prove that an individual said or did something based on purported video or audio evidence of that action. Or, at the very least, such evidence could be called into question without anyone having to prove whether it is fake.

So what can we do in light of the new fake reality?

A proposed, though crude, solution might simply be to look for unnatural blinking; many of the fake videos are produced by AIs trained on publicly available images, which, more often than not, do not include pictures of people with their eyes closed.
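
As a rough illustration of how such a blink check might work (an assumption about one common approach, not the specific method used by any given research group), one can compute a per-frame "eye aspect ratio" from eye landmarks produced by a separate facial landmark detector and flag clips in which the eyes essentially never close. The landmark layout and the 0.2 threshold below are conventional but hypothetical choices.

```python
# Sketch of the blink heuristic: the eye aspect ratio (EAR) drops sharply
# when an eye closes, so a clip whose EAR never dips below a threshold
# suggests the subject never blinks.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2), the six (x, y) landmarks around one eye,
    assumed to come from an external facial landmark detector."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def looks_suspicious(ear_per_frame: list[float], threshold: float = 0.2) -> bool:
    """Flag a clip in which the eye aspect ratio never drops below the
    blink threshold, i.e. the subject appears never to blink."""
    return all(ear > threshold for ear in ear_per_frame)
```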

Nevertheless, AIs will eventually become capable of re-creating realistic blinks as well as hyper-realistic videos. At that point, a longer-term solution might include, at least for the political concerns raised above, the insertion of human-imperceptible digital watermarks into all official and network broadcast videos. Ongoing research suggests that these watermarks can be used to ascertain whether a video has been manipulated or altered, and to localize that tampering. Perhaps criminal penalties could also be applied to those who knowingly peddle deepfakes, whether in a courtroom or online with malicious intent (granted, with slippery-slope exceptions for parody).
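
As a simplified stand-in for that idea (not an actual imperceptible watermark, which would be embedded in the pixels themselves, nor any specific system from the research mentioned above), the sketch below has a broadcaster publish a keyed fingerprint for every frame, so that a verifier can later check a clip frame by frame and localize exactly which frames no longer match. The key handling and frame representation are assumptions for illustration.

```python
# Per-frame integrity check: sign each frame at publication time, then
# verify a clip frame by frame to localize any tampering.
import hmac
import hashlib

def sign_frames(frames: list[bytes], key: bytes) -> list[str]:
    """Produce one keyed fingerprint (HMAC-SHA256) per frame at publication time."""
    return [hmac.new(key, frame, hashlib.sha256).hexdigest() for frame in frames]

def find_tampered_frames(frames: list[bytes], manifest: list[str], key: bytes) -> list[int]:
    """Return the indices of frames whose fingerprints no longer match the manifest."""
    return [
        i for i, (frame, expected) in enumerate(zip(frames, manifest))
        if not hmac.compare_digest(
            hmac.new(key, frame, hashlib.sha256).hexdigest(), expected)
    ]
```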


Of course, as in all areas of security, the ongoing arms race between security measures and hackers may negate many of the technical solutions soon after their implementation. Perhaps the best solution simply requires a social paradigm shift regarding the legitimacy of video as a reliable representation of reality. Most of us already don’t rely on CNN and other networks for our news anyway.

Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.

