You have to find a balance between the law and the contract. Today, the contractual mode is predominant when it comes to deploying artificial intelligence in applications. When you click the button at the end of the contract on Facebook, you either accept it or you don't. You don't have time to read it.
If you look at the content of these contracts, you see that they contain totally unacceptable elements that should not be there, and I'll take Facebook as an example. We examined Facebook's terms of use a little. That company gives itself the authority to obtain your information through third-party applications.
Whether or not you are online using Facebook, whether or not you have registered with them, the company has given itself the right to go and get information about you from other applications. That type of thing is entirely possible through the use of the contract form. If, as a user, you accept that, well, too bad for you. That kind of contract should be regulated by law. That is precisely where a balance needs to be found. It isn't easy, but it is the government's job to find that balance between what should be in a contract between a service provider and a user, and what should be in the law.
What is the priority? There are a lot of things that need to be done, but I think that in order to protect the public, your main and most serious priority should be the use that is made of the data. The fact that you like the colour blue isn't important in itself, but if one day you no longer like it, an algorithm may conclude, for instance, that you have a mental health problem or a disease you don't know you have, and that will be much more troublesome.