Many of my colleagues who conceptualize privacy legislation think about its substantive provisions, the "you can do this and you can do that" rules, but also about privacy law as a process. The idea is to get organizations that collect and use personal data to build governance and accountability around how they do so: thinking very carefully about what the uses of the data are, documenting them, and having checks in place.
In my earlier comment, I talked about the overall framework of the legislation and its interacting components. This is where I believe a stronger process orientation would help in both laws. In the CPPA, that's the data protection impact assessment provision, which would then interact with the enforcement provisions. In AIDA, it's making clear which uses are beyond the pale and will be forbidden, and then calibrating the legislation to the level of risk. Right now, AIDA really governs only systems that are high-risk, and we don't know what those are because the criteria are not there.
The European law, which I think is weak and could be improved, nonetheless governs all AI systems presumptively. Even low-risk systems, where the people developing them have actually shown that the risk is low, are subject to requirements. I think that's a key safeguard. If we're going to create legislation that is durable over the long term, it needs to assess that entire risk environment and capture it in the legislative package.