Thank you for that.
The only thing that I would add is that some of the ICRC's concerns, while primarily focused on armed conflict, are transferable and translatable to other situations where AI is being used to assist people in making decisions. Some of those concerns are, as you mentioned, about the bias in the data being used. There's also concern around user bias. Do the users know what the system is supposed to be used for? Will they become overreliant on that system to the point of setting aside their own human judgment?
We have concerns, as you may know, around lack of transparency in AI systems, which can have serious consequences, and around lack of predictability, not knowing exactly why the AI system produces the output it does. Particularly important in situations of armed conflict, though not only there, is that artificial intelligence systems can produce results at a speed that, if they have certain autonomous features, outpaces human decision-making.
These are all things of particular relevance in situations of armed conflict, but I think you can imagine that they would also be relevant outside of armed conflict, whether in the Canadian context or in any other domestic context.
Thank you.