Thank you for the opportunity.
I just want to return to the question about content and harmful information that can appear on social media or elsewhere.
This is not necessarily to comment on how the bill or Canada's domestic law would manage that issue. Our emphasis is simply that, if this bill intends to mitigate or prevent harm caused by or through AI systems, there should be an explicit recognition that misinformation and disinformation can cause the types of harms listed in the definition of “harm” under proposed subsection 5(1) of the bill.
What immediately comes to mind are physical and psychological harms. With respect to the humanitarian assistance community, misinformation and disinformation can prevent or disrupt the provision of life-saving humanitarian assistance. Of course, in certain contexts harm may also be caused through social media platforms, for example through active child soldier recruitment or threats of violence intended to terrorize civilian populations, and so on.
We simply want to ensure that the bill clarifies the risks that can arise from misinformation and disinformation, and that those risks are included among the things you are trying to regulate, mitigate or prevent.
Thank you.