That's an interesting question. I'm going to give a different example: the Clearview AI situation. There, a private sector company created a facial recognition database from scraped data, and that database was then used by the RCMP. The Privacy Commissioner has already said that a government actor cannot make legitimate use of data that was collected illegitimately. That relationship is always interesting, and it's an important one.
One of the challenges is that, on the one hand, you want to facilitate the use of data for socially beneficial purposes. The private sector is collecting vast quantities of data. There are legitimate questions in some cases about the quality of the consent, the quality of the collection practices and the kinds of data that are collected. In this case, we're looking at mobility data, which are very sensitive, but there are lots of other very sensitive data as well.
It becomes really important to think about that massive amount of data being collected under all sorts of privacy policies, which we don't have the time or even the skills to read and understand completely. Those data may become a product that is then sold to government, as well as to other actors, for their analytics. If there are flaws in that collection and the data are sold, you carry those flaws and those issues over into the subsequent uses of those data.
That relationship is tremendously important.