Good afternoon, everyone. Thank you for the invitation to appear before you.
Catherine and I will be focusing solely on part 3 of Bill C-27.
We are representatives of the International Committee of the Red Cross and the Canadian Red Cross. Our organizations work to minimize the suffering of victims of armed conflict, and we work with governments to ensure respect for the laws that regulate armed conflict.
We appear before you today to emphasize that, in regulating AI, you need to consider how AI is, can and will be used in armed conflict and to ensure that it does not contribute to unlawful harms.
Today, we are observing in real time that privately made AI systems developed and designed for civilian use are finding their way onto battlefields, whether adapted by militaries, armed groups or civilians. We are particularly concerned with the use of AI that can result in death, injury and other serious harms. This includes the use of AI in misinformation and disinformation campaigns and how they can disrupt and interfere with humanitarian operations. Artificial intelligence allows harmful information to be generated and spread at a scope and scale never before imagined, with real-world dangers for civilians in armed conflict as well as those who work in these contexts.
To address these concerns, we recommend that the bill require that all Canadian-made AI systems used in armed conflict be designed to comply with international humanitarian law, in accordance with Canada's pre-existing legal obligations. International humanitarian law, or IHL, is the body of international law that places limits on how warring parties may fight each other in armed conflict and, importantly, provides protections to civilians and others no longer participating in hostilities.
To ensure IHL compliance, it will also be critical that the bill include language that preserves effective human control and judgment in the use of AI that could have serious consequences for human life in situations of armed conflict; that the bill ensure AI systems are traded in compliance with Canada's export control obligations; and that the bill clearly regulate AI systems used in misinformation and disinformation campaigns, with language ensuring that the definition of “harm” in proposed subsection 5(1) includes the types of harm AI systems may cause through the creation and spread of misinformation and disinformation.