They are exactly the risks you've described. The recent Grok scandal, for example, reveals the broader underlying problems in how these AI systems are developed. Grok is not just an image model. It's a general-purpose AI system that can do many things: write code, make plans, and generate pictures, including the horrible images we saw in that scandal. These are systems that not even their own creators fully understand internally or know how to control. That is the fundamental issue, and as we scale these systems, and as these companies keep investing hundreds of billions of dollars to make them smarter and more capable across all tasks, we will keep facing it over and over, right up to the point at which we reach superintelligence.
Obviously, the solutions to some of these harms differ in the immediate term, but the underlying problem is the same, and I don't believe it should be one or the other. I believe current harms should be dealt with through existing legislation by applying liability—
