I think I've shared repeatedly that I don't think AI is one monolithic thing. I do think that it needs to be broken down into sector-specific regulation.
I think what AIDA does is provide a framework that is then dependent on other types of sector-specific regulation. There is no contesting that how this was done is problematic. There needs to be more public consultation. I was really happy to see in the amendments that at least it speaks to what was heard and then how that's being addressed.
I think if we just put that aside—the process is for you to debate—it's very important to have regulation of AI systems. Through many interventions with civil society organizations, I've seen and experienced the harms that are occurring. I don't think that leaving it up to self-regulation, with companies saying, “We're doing the best we can do,” is going to prompt the appropriate behaviour. I think we need legislators to set those rules.
We need to be able to set the homework, too. We can't say, “You go and write your test, and then you mark it yourself.” I think it's very important that we as civil society organizations—in combination with industry, government, and academics—write what those tests are, meaning the standards I'm talking about, and then use them to assess industry.