Yes, absolutely.
There has to be human oversight for AI algorithms. Somebody has to be part of the processing piece. We can't leave AI to do the work independently, because these systems don't function as accurately as we would like.
On top of that, the refugee experience is so diverse across so many different communities, and visa refusals based on a broad set of criteria can have serious consequences when certain nuances aren't taken into account. For example, if the criteria prioritize mothers and children before young men, AI can't understand where vulnerabilities actually exist. In some instances, particularly in conflict contexts, young men are targeted significantly for their ethnic background or for other aspects of their identity.
I think leaving it to AI independently leaves a lot of room for error and widens the window for bias.