In the EU's AI Act, they have chosen to take a risk-based approach, essentially distinguishing between high-risk and low-risk AI. The idea is that high-risk AI will have to go through a specific, more burdensome authorization process, whereas low-risk AI can essentially sail through.
High-risk AI could be, for example, AI that's being used in situations where there is a risk of harm to a person—to their livelihood, their freedom or their health. That's essentially one approach that's being taken, and we will see how it works in practice.
Another interesting example is Germany, where the relevant policy has nothing to do with AI at all and has been in place for a while. Germany essentially mandates worker consultation whenever any kind of technology is introduced into the workplace and affects workers' working conditions. Since it's not a piece of AI-specific policy, I think it will be interesting to see how that existing system is able to respond to the introduction of AI.
Those are two examples of how governments across the world are dealing with this challenge.