If the person writing the code holds racist beliefs, those beliefs can be embedded in the system, and we cannot account for every belief system. I think proper training procedures would help up to a certain point. Human oversight is necessary for AI systems, but we also need a fair audit process to address these issues.
Also, IRCC data and research need to be published publicly. Data for the different VACs around the world were published up until 2016. After 2016, we had to file a special request to find out how many refusals were issued at different VACs.
Yes, we need a proper training mechanism for the people who deal with AI outcomes. And to monitor that oversight itself, we need measures and a system in place.