So, my question becomes this: If we look at Bill C-59, for example, where you're giving CSE defensive and offensive capabilities—and part of that is proactively shutting down malware that might be...or an IP, or things like that—is there concern about escalation and where the line is drawn?
Part of this study.... The problem is that we're all laypeople, or most of us anyway—I won't speak for everyone—when it comes to these things. My understanding of AI—because I've heard this too—is that it isn't what popular culture makes it out to be. Does that mean that, when AI is employed to exercise some of the capabilities the law has conferred on different agencies, the AI keeps adjusting on its own? How much human involvement is there in those adjustments? If the line on the rules of engagement is that blurry, is there a concern that an AI learning how to shut something down could produce graver consequences than originally intended, with the system essentially evolving on its own? I don't want to get lost. I don't know what the proper jargon is there, but....