Thank you so much.
My name is Petra Molnar. I'm a lawyer and an anthropologist. Today I would like to share with you a few reflections from my work on the human rights impacts of technologies such as facial recognition used in immigration and border management.
Facial recognition technology underpins many of the types of technological experiments that we are seeing in the migration and border space, technologies that introduce biometric mass surveillance into refugee camps, immigration detention proceedings and airports. However, when trying to understand the impacts of various migration management and border technologies, e.g., AI lie detectors, biometric mass surveillance and various automated decision-making tools, it is important to consider the broader ecosystem in which these technologies develop. It is an ecosystem that is increasingly replete with the criminalization of migration, anti-migrant sentiments, and border practices leading to thousands of deaths, which we see not only in Europe but also at the U.S.-Mexico border, and most recently at the U.S.-Canada border, where a family froze to death in Manitoba.
Since 2018 I have monitored and visited borders all around the world, most recently the U.S.-Mexico frontier and the Ukrainian border during the ongoing occupation. Borders easily become testing grounds for new technologies, because migration and border enforcement already make up an opaque and discretionary decision-making space, one where life-changing decisions are rendered by decision-makers with little oversight and accountability in a system of vast power differentials between those affected by technology and those wielding it.
Perhaps a real-world example would be instructive here to illustrate just how far-reaching the impacts of technologies used for migration management can be. A few weeks ago, I was in the Sonoran Desert at the U.S.-Mexico border to see first-hand the impacts of technologies that are being tested out. These technological experiments include various automated and AI-powered surveillance towers sweeping the desert. Facial recognition and biometric mass surveillance, and even recently announced “robodogs”—like my barking dog in the background—are now joining the global arsenal of border enforcement technologies.
The future is not just more technology, however; it is more death. Thousands of people have already perished making dangerous crossings. These are people like Mr. Alvarado, a young husband and father from Central America whose memorial site we visited. Indeed, surveillance and smart border technologies have been proven not to deter people from making dangerous crossings. Instead, people have been forced to change their routes towards less inhabited terrain, leading to loss of life.
Again, in the opaque and discretionary world of border enforcement and immigration decision-making, structures that are underpinned by intersecting systemic racism and historical discrimination against people migrating, technology's impacts on people's human rights are very real. As other witnesses have already said, we already know that facial recognition is highly discriminatory against black and brown faces and that algorithmic decision-making often relies on biased datasets that render biased results.
For me, one of the most visceral examples of the far-reaching impacts of facial recognition is the increasing appetite for AI polygraphs, or lie detectors, used at the border. The EU has been experimenting with a now derided system called iBorderCtrl. Canada has tested a similar system called AVATAR. These polygraphs use facial and emotional recognition technologies to reportedly discern whether a person is lying when presented with a series of questions at a border crossing. However, how can an AI lie detector deal with differences in cross-cultural communication when a person, due to religious or ethnic differences, may be reticent to make eye contact, or may just be nervous? What about the impact of trauma on memory, or the fact that we know that we do not recollect information in a linear way? Human decision-makers already have issues with these complex factors.
At the end of the day, this conversation isn't really about just technology. It's about broader questions. It's about questions around which communities get to participate in conversations around proposed innovation, and which groups of people become testing grounds for border technologies. Why does the private sector get to determine, time and again, what we innovate on and why, often through problematic public-private partnerships that states are increasingly keen to enter into in today's global AI arms race? Whose priorities really matter when we choose to create AI-powered lie detectors at the border instead of using AI to identify racist border guards?
In my work, based on years of on-the-ground research and hundreds of conversations with people who are themselves at the sharpest edges of technological experimentation at the border, it is clear that the current lack of global governance around high-risk technologies creates a perfect laboratory for high-risk experiments, making people on the move, migrants and refugees, a testing ground.
Currently, very little regulation of FRT exists in Canada and internationally. However, the European Union's recently proposed regulation on AI demonstrates a regional recognition that technologies used for migration management need to be strictly regulated, with ongoing discussions around an outright ban on biometric mass surveillance, high-risk facial recognition and AI lie detectors. Canada should also take a leading role globally. We should introduce similar governance mechanisms that recognize the far-reaching human rights impacts of high-risk technologies and ban the high-risk use of FRT in migration and at the border.
We desperately need more regulation, oversight and accountability mechanisms for border tech used by states like Canada.