There are three recommendations we made specifically on AI that would help with that issue. One was mandating privacy impact assessments whenever you have a high-impact AI system. In doing that, as an organization you would need to ask what the risk to privacy is. What is the risk of these types of deepfakes? How are you mitigating that? There are some proposed provisions in the AIDA, the Artificial Intelligence and Data Act, that would do that as well.
We also recommended greater transparency for AI decisions. If a decision is made about you, you can ask for an explanation. If you see something strange, like a video of you, and you ask that question, you should get that explanation.
Finally, we recommended collaboration among regulators wherever we can collaborate. I've just launched, with the Competition Bureau and the chair of the CRTC, a digital regulators forum, but there are limits on what we can do. We can't collaborate on investigations, for example. I can do that with the FTC in the U.S. and with other countries, but I can't do it in Canada. That's a gap that would be easily fixed, and, in my view, it should be fixed.