My ability to answer this question is limited, because I have no knowledge of the specific artificial intelligence technology IRCC is using. I may not be able to give a specific answer, but wherever artificial intelligence technology is involved, it is important to have an independent body of experts that oversees the technology and the algorithm. That body should be independent of the user, that is, the organization deploying the artificial intelligence system.
If we can have an independent body of experts that oversees the design of this technology and the development of the algorithm, to ensure that the data used to train it does not feed bias, discrimination and racism into the algorithm, that would be very important.
Let me quickly turn to the Chinook case, which is a very good example.
Okay. I see from the MP that my time is up.