Thank you.
I am grateful for the opportunity today to share with the Standing Committee on Access to Information, Privacy and Ethics my thoughts on misinformation and disinformation based on artificial intelligence.
The past few years have seen impressive advances in the capabilities of generative artificial intelligence, starting with the generation of images, speech and video. More recently, these advances have extended to natural language processing, which the public witnessed with the release of OpenAI's ChatGPT model.
Since the end of 2022, nearly two years ago, this latest advance has brought us into an unprecedented technological reality, one in which it is becoming increasingly complex for the average citizen to determine whether they are conversing with a human or a machine when they interact with these models. This state of affairs, by the way, is commonly known in computer science as “passing the Turing test”: We can no longer distinguish between a human and a machine through a text interaction, and so the boundaries between human and artificial conversations are getting more blurred as these systems become more powerful and advanced with each release.
All of this is controlled by a handful of companies—all foreign—that have the required financial and technical resources. We're talking about over $100 million to train the latest models—and growing—so it's going to be billions pretty soon.
When analyzing the progress and acceleration of AI trends, we see that AI capabilities don't seem to be about to plateau or slow down. Between 2018 and today, on average every year, the “training compute” required to train these systems has quadrupled; the efficiency with which they exploit data has increased by 30%, in other words, they don't need as much data to achieve the same quality of answers; algorithmic efficiency has tripled, in other words, they are able to do the same computation faster; and investments in AI have also been rising exponentially, increasing by over 30% per year, averaging $100 billion in the last few years and growing quickly towards a trillion.
There was a recent study carried out in Switzerland that I think is very important to the discussion of this committee. It showed that GPT-4, the latest version you can find online, has superior persuasive skills to humans in written form. In other words, it can convince somebody to change their mind better than a human can.
What's interesting, and maybe scary as well, is that this advantage of the machine over humans is particularly strong when the AI has access to the user's Facebook page, because that allows the AI to personalize the dialogue. That is with today's models, so you can expect future generations of models to become even stronger, potentially superhuman in their persuasive abilities, and in ways that can disrupt our democracies. They could be much stronger than what we've seen with deepfakes and static media, because now we're talking about personalized, interactive connections between AI and people.
I trust that most large organizations that develop these models make some efforts to ensure that they are not used for malicious purposes, but there are currently no regulations forcing them to do so anywhere in the world—well, I guess China is leading on this—and models whose weights are released openly, as Meta/Facebook does, can easily be modified by malicious individuals or groups.
For example, such modified models could be stronger at persuasion, more helpful for building bombs, and more willing to provide information that helps terrorists or other bad actors perpetrate all kinds of nefarious actions. In the absence of a regulatory framework and mitigation measures, the deployment of such malicious capabilities would certainly have many harmful consequences for our democracy.
To minimize these pitfalls, the government needs to do a few urgent things. We need to pass Bill C-27, in particular to label AI-generated content. We need privacy-preserving authentication of social media users so they can be brought to justice if they violate rules. We need to register the generative AI platforms so governments can track what they're doing and enforce labelling and watermarking.
We need to inform and educate Canadians about these dangers to inoculate them with examples of disinformation and deepfakes.
Thank you for this opportunity to share my perspectives. This is an important exercise. Artificial intelligence has the potential to generate considerable social and economic benefits, but only if we govern it wisely rather than endure it and hope for the best. I often ask myself: will we be up to the scale of this challenge?
Thank you.