I think we have to hold public discussions and be transparent, and there should be obligations to be transparent.
The phenomenon you're describing has accelerated even more with artificial intelligence. We may think we know how our personal information will be used by a given entity. However, do we really know what anyone can conclude about us based on that information? What inferences can be drawn? Sometimes postal codes or tastes in music, for example, can help someone deduce a person's sexual orientation, income level and so on. People don't know all that.
I recommended that Bill C‑27 provide for a transparency obligation so that, when a decision is reached about individuals with the help of artificial intelligence, they can request an explanation in every case. However, the current version of the bill provides that a general account need be provided only in cases where the decision would have a significant impact on the individuals concerned. I recommended that that restriction be deleted because, for the moment, I think it's better to encourage more transparency rather than less.
We have to try to find engaging ways to explain this. Part of my mandate is to try to acquire tools for that purpose. We provide a lot of information on our website, and we try to explain it all as best we can, but I think we can do better.
We also have to talk about children, because I think the message has to be adapted to suit the audience.