Our second major concern is that the bill includes exemptions, as you already know, for the Minister of National Defence, the director of CSIS, the chief of the CSE and other government positions. While the bill's focus is on preventing harm by private industry, it offers you a critical opportunity to reduce the risks of AI even further by providing clarity and certainty that AI use by those who are currently exempted remains regulated by pre-existing laws.
Novel AI capabilities can produce unpredictable effects and operate with a lack of transparency that is extremely dangerous for civilians and other victims of war, so the legal uncertainty created by the current bill places many people at much higher risk, in our opinion. The opportunity to make these changes should not be missed, and we believe your silence must not be misinterpreted as suggesting that government use of AI in armed conflict is unregulated.
We also recommend that the private sector's design of AI be in line with pre-existing legal obligations, including international human rights law and international humanitarian law. We further recommend that the bill be amended to provide legislative clarity to government actors and that it be explicit, as Jonathan mentioned, about compliance with export control obligations and pre-existing legal obligations.
You will find those proposals in our written submission.
In conclusion, we trust that your goal is to ensure the use of AI enables rather than impedes the protection of civilians during times of armed conflict and ensures the provision of humanitarian assistance.
As you contemplate how best to regulate AI, we ask that the law that is put in place help to prevent AI from resulting in unlawful harm in armed conflict, knowing that AI systems, whether designed by the private or the public sector, might appear on the battlefield in unexpected and unintended ways, whether used by militaries, armed groups or civilians.
To achieve the bill's purpose of preventing the harms and risks that AI can cause, we believe that the bill must better incorporate Canada's pre-existing obligations under international law, including humanitarian law, and a human-centred ethical approach to AI.
Thank you.