That's such a great question. Thank you so much for asking it.
Absolutely, AI is a threat that looms over all vulnerable groups, including exiled dissidents. Women and sexual minorities are particularly concerned. The people I talk to worry about deepfakes and about the use of AI to run smear and disinformation campaigns against them.
I want to stress the importance of recognizing that platforms in Canada, for instance, don't share transparency reports with the government. Maybe this committee can take a lead on this and request that these platforms share transparency reports on their content moderation personnel, because most of these attacks are being carried out in foreign languages. As we've seen in the transparency reports these platforms submitted to the EU, there aren't enough personnel to review this harmful content, especially as the platforms increasingly automate their responses through algorithms. There's no real coordination or work with the targeted communities, and no sense of accountability. There is no means for a target coming under threat to report it to the platform and request an immediate review of the online content.
I spoke to a Chinese dissident who was subjected to a massive disinformation campaign across platforms, and the content is still there today. Some platforms responded to requests made by politicians in her country of residence; others simply left the content up. Still others, when she reported to them time after time, asked her to provide evidence that it was a state-sponsored attack.
Even the forensic work the platform could do is put on the shoulders of victims: take screenshots, compile all of this evidence and then send everything to us. Because a machine is reviewing the report, whether it's ChatGPT or another generative AI-powered model, the AI is not convinced unless the information is presented in the way it was coded to handle, and it simply asks for more. The process has been very traumatizing and draining for the victims.
The other thing is that we are all aware of the risk of AI being used in spyware and in the carrying out of sophisticated—