In our article, we explore copyright, tort, and even Criminal Code actions as potential, if sometimes imperfect, remedies. We note that deepfakes, impressive and game-changing though they are, are likely overkill for manipulating the public. One certainly would not need complex computer algorithms to fake the sort of video that routinely serves as evidence or makes the news.
Think back to almost any security footage you have ever seen in news coverage of an incident. Its fidelity is hardly impressive. It's often grainy or poorly angled, and usually only vaguely resembles the individuals in question.
While a deepfake might convincingly place a face or other characteristics into a video, simply using camera angles, poor lighting, film grain, or other techniques can get the job done. In fact, we've seen recent examples of speech synthesis made to seem more human-like by deliberately interjecting faults such as ums, ahs, and other pauses.
For an alternative example, a recent viral video purportedly showed a female law student pouring bleach onto men's crotches on the Russian subway to deter them from the microaggression of manspreading, that is, men sitting with their legs splayed too widely apart. The video triggered the expected mix of positive and negative reactions across the political spectrum. Reports later emerged that the video had been staged with the specific intent of provoking a backlash against feminism and furthering social division in Western countries. No AI technology was needed to fake the video, just some paid actors and a hot-button issue that pits people against each other. While political, it certainly did not target Canadian elections in any concrete way.
Deepfake videos do not present a unique problem; they are another facet of a very old one. That problem is certainly worthy of consideration, but we have two main concerns about any judicial or legislative response to deepfake videos.
The first is overspecification, or overreaction. We have long lived, in the realm of photography, with the threat that deepfakes now pose to video. I'm no visual effects wizard, but when I was an articling student at my law firm more than a decade ago, as part of our tradition of roasting partners at our holiday parties, I very convincingly manipulated a photograph of the rapper Eminem, replacing his face with that of one of our senior lawyers. Most knew it was a joke, but one person did ask me how I got the partner to pose. Thankfully, he did not feel that his reputation was greatly harmed, and I survived unscathed.
Yes, there will come a time when clear video is no longer sacred, and an AI-assisted representation of a person's likeness will be convincingly falsified in a newsworthy way. We've seen academic examples of this already, so legislators can and should ensure that existing remedies allow the state and victims to pursue malicious deepfake videos.
There are a number of remedies already available, many of which are discussed in our article. In a future of digitally manipulable video, however, the difference between a computer simulation and the filming of an actual physical person may be a matter of content-creator preference, so it may well be appropriate to review legal remedies, criminal offences, and legislation to ensure that simulations are just as actionable as physical imaging.
Our second concern is that any court or government action may misplace responsibility by burdening or attacking the wrong target. Pursuing a civil remedy through the courts, particularly over the borderless Internet, often places a heavy burden on the victim of a deepfake, whether that is a woman victimized by deepfake revenge pornography or a politician victimized by deepfake controversy. It's a laborious, slow, and expensive process. Governments should not leave the remedy entirely to the realm of victim-pursued litigation.
Canada does have experience intervening in Internet activity, with varying degrees of success. Our privacy laws and anti-spam laws have protected Canadians, and sometimes burdened platforms, but in the cybersecurity race among malicious actors, platforms, and users, we can't lose sight of two key facts.
First, intermediaries, networks, social media providers, and media outlets will always be attacked by malicious actors, just as a bank or a house will always be a target for thieves. It should not be forgotten that these platforms are also victims of the malicious falsehoods spread through them, just as much as those whose information is stolen or whose identities are falsified.
Second, as Dr. Wardle alluded to, the continued susceptibility of individuals to fraud, fake news, and cyber-attacks speaks to the fact that humans are not always rational actors. More than artificial intelligence, it is all too human intelligence, with its confirmation bias, pattern-seeking heuristics, and other cognitive shortfalls and distortions, that will perpetuate the spread of misinformation.
For those reasons, perhaps even more than rules or laws that ineffectively target anonymous or extraterritorial bad actors, or that unduly burden legitimate actors at Canadian borders, in our view governments' responses must dedicate sufficient resources to education, digital and news literacy, and skeptical thinking.
Thanks very much for having us.