Fortunately, the law is based on principles. So we're able to apply those principles to organizations that use and disclose data. That's what allows us to investigate TikTok and ChatGPT.
That said, there are shortcomings: we don't have the power to issue orders or fines.
When organizations are making huge profits from data, that gap really matters. It may not have been an issue before, when companies weren't making so much money from data, but now they are.
So there have to be fines. We need to be more proactive, and we need greater transparency. Explaining decisions made by algorithms, by artificial intelligence, obviously wasn't an issue before. We can regulate this with principles, but certain things become a little more technical. I think that, when it comes to artificial intelligence and algorithmic decisions, our requirements need to be broad enough that they still apply five years from now, ten years from now, to ChatGPT's successors. These requirements must be reinforced.