Based on internal documents obtained, IRCC is now using artificial intelligence to automate visa refusals, which means we have a case of block processing of visa applications. Steven made reference to this earlier: the lack of individuality in this area. Applications are not treated individually; they are treated as though they were homogeneous, which is often not the case.
My major concern with IRCC's use of this technology is that training an AI algorithm requires huge amounts of data, and IRCC inevitably draws on its historical records. The problem is that, historically, the data collected appears to be biased against a particular group of people or a particular continent. When you use that data to train an AI algorithm, the algorithm simply regurgitates those biases. This time it is even worse, because the bias becomes much more difficult to identify.
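To illustrate the mechanism I am describing, here is a minimal, hypothetical sketch in Python. It does not reflect IRCC's actual system, features, or data; the merit score, region flag, and biased historical labels are all invented for illustration. It shows only that a standard model fitted to historically biased decisions reproduces the bias for identical applicants.

```python
# Hypothetical sketch: a model trained on biased historical decisions
# regurgitates that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a merit score and a region flag
# (1 = region historical decisions were biased against, 0 = elsewhere).
merit = rng.normal(0, 1, n)
region = rng.integers(0, 2, n)

# Biased historical labels: approval depended on merit, but applicants
# from the flagged region were penalized regardless of merit.
logit = 1.5 * merit - 2.0 * region
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history, with region available as a feature.
X = np.column_stack([merit, region])
model = LogisticRegression().fit(X, approved)

# Score two applicants with identical merit but different regions.
same_merit = [[0.5, 0], [0.5, 1]]
p_other, p_flagged = model.predict_proba(same_merit)[:, 1]
print(f"P(approve | merit=0.5, other region):   {p_other:.2f}")
print(f"P(approve | merit=0.5, flagged region): {p_flagged:.2f}")

# The model assigns a lower approval probability to the flagged region
# despite identical merit: the historical bias is now embedded in the
# technology, where it is harder to see than in individual decisions.
```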
The problem I am trying to highlight is that if IRCC uses these data to train AI algorithms, which I believe it is doing now, without adequately addressing the bias issue, the problem identified in the Pollara report will be embedded into the technology itself and will become even more difficult to identify.
This will continue to perpetuate the discrimination we have highlighted against people from sub-Saharan Africa, especially with study visa applications coming from that part of the world. That is of great concern to us at ASI Canada.