Thank you for that. It's very helpful testimony.
Mr. Harris, I want to ask you a question similar to Mr. Généreux's.
I similarly had the experience, listening to you, of feeling like I was in a horror movie, a sci-fi novel or some intersection of the two. I know that you're raising these risks and potential harms as very real things, so I don't want to take that lightly, but it is quite scary to hear.
I want to ask you a bit of an ethical or philosophical question. You talked about mitigating the risks, and about a blanket ban on, or explicitly forbidding, certain types of AI or advanced AI systems. One question that occurs to me when we're dealing with, essentially, advanced AI is whether it is surpassing human intelligence. I think that's what I'm hearing. You talked about superhuman capabilities and power-seeking behaviours as being real risks.
I'm interested in how we develop an ethical and/or legal framework. I think that is a core challenge in this work, and one I'm grappling with. A lot of our ethical and legal concepts rely on things like reasonably foreseeable futures and concepts of duty, most of which depend on humans' ability to anticipate what the outcomes might be, given our past experience.
You talked about how some of our national security assumptions have been invalidated. Are some of our ethical and legal assumptions also being invalidated by the advancement of AI? How do human beings create a system or a set of guidelines for something that is actually beyond our intelligence?
It's a tough question.