It would be if no one is harmed.
It's really difficult to address that. I think that, first, we need to try. We need to recognize that just leaving it to the free market is probably not going to produce the outcome we want to see.
There's an amazing resource called the AI Incident Database. I don't know if you've seen it. It tracks the different types of harms that exist. I'd love for that data to be compiled so we understand the harms better and can articulate them in more common terms.
It's a difficult question to answer in the absence of any of these measures being in place. I think a requirement to collect data through a commissioner's office, where those use cases would be reported, is important.