The question is a bit broad, but I'll try to contextualize it.
When we're talking specifically about generative AI tools, the concern for me, from the data privacy perspective, would be Canadians going to websites like ChatGPT. They will type their private and personal information into the window without realizing that they are actually consenting to that data being used for future training. They don't know whether that content will later be printed out or spit out in somebody else's results. I think that would be one form of concern.
The other form of concern, of course, is social media platforms relying on AI tools to detect harmful content, just because of the scale of the problem. Earlier this year I was looking at some of the transparency report charts from Meta, which showed that around 65% of the content classified as harassment and bullying was removed automatically. There's still a significant percentage, around 35%, that users had to report before the platform acted on it. From that perspective, AI is important for flagging some of that problematic content, because platforms will never have enough human content moderators or fact-checkers to review it all.
When we look at AI, I think we have to differentiate the kind of use case we're actually talking about.