In my experience, when people try to develop general regulations for all of AI, all algorithms, or all technology, it never ends up being quite appropriate to the task.
I agree with Mr. Bengio in the sense that certain types of international regulation, for example, would be focused on automated killer systems, and there is already an extensive process under way on this issue in Geneva and in other parts of the world, which I think is extremely important.
There is also the question of whether Canada itself wants to become a state with protections equivalent to the GDPR. That, I think, is a relevant consideration that would considerably improve both flows of data and the protection of privacy.
I think all other areas need to be looked at in a sector-specific way. If we're talking about elections, for example, AI and other automated systems will often exploit existing weaknesses in regulatory environments. So how can we ensure, for example, that campaign finance laws are improved in specific contexts, and improved in a way that accounts for automation? When we're talking about the media sector and related issues, how can we ensure that our existing laws adapt to and reflect AI?
I think if we build on what we already have, rather than developing a new cross-sectoral rule for all of AI and all algorithms, we may do a better job.
I think that also applies at the international level, where it's very much a case of building on and developing what we already have, whether it relates to dual-use controls, to media, or to challenges around elections. There are already existing instruments there, and I think building on them is more effective than a one-size-fits-all AI treaty.