Evidence of meeting #11 of the Standing Committee on Access to Information, Privacy and Ethics, held March 21, 2022, in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Cynthia Khoo, Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan, Managing Partner, INQ Law
Ana Brandusescu, Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen, Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar, Lawyer, Refugee Law Lab, York University

12:35 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

What changes in our policy do you think are needed to ensure that a human rights lens is a part of our procurement process?

12:35 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

I think one is that when we write—researchers, investigative journalists, whoever—we need you to hear us, because we're at a point where our open government isn't really open; we still have to file access to information requests to find all this information. So the government now knows that Palantir has caused human rights abuses or is linked to them. The list is growing—it's at around 105 companies now—and the government should take Palantir off the list. That's one simple step, but it's also then to think about who can commit to the AIA, what AI really means and who has input into the AIA. If it's just other companies that are engaged when an AIA is published, what does that say to the rest of Canada—not just the Canadian public, but affected groups, digital rights organizations and civil society bodies? Where are we in the conversation?

12:40 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

This is important work.

Mr. Chair, through you to Ms. Brandusescu, how concerned should we be about corporate capture in the government's policy development for regulating AI and facial recognition in Canada? With that in mind, can you describe who is setting the Canadian policy framework for AI and what the consequences are?

12:40 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

We should be really concerned.

My next four years of research as a Ph.D. student will be on the privatization of the state, specifically with these technologies. I think this will only get bigger. As Ms. Molnar mentioned, public-private partnerships are a key point in the procuring, developing, deploying and using of these technologies. We need to make sure that we are in line with the Treasury Board, which hosts the whole responsible AI suite, but also look at others like Public Services and Procurement Canada, which really holds a lot of the cards here but is rarely in these discussions. It's always the Treasury Board or the OPC that is in the conversation. I never see the procurement people, but they really are a key component of this conversation.

12:40 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

With that, we'll move to Mr. Kurek for four minutes.

12:40 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you very much.

Just before I get into my questions, knowing that we are short on time here, I would invite all of the witnesses, if there are things you didn't have a chance to address, to please feel free to send that information to this committee. These are big questions with technical answers, and two, three or four minutes is certainly not enough time to see them appropriately addressed.

I certainly see one of the biggest challenges in addressing this as being the sheer pace at which the space of artificial intelligence and facial recognition is evolving. I mentioned in the previous round, with the conflict in Europe right now, the implications of this for how militaries use some of this technology. The Geneva Conventions, for example, deal with bombs dropped from planes, but there's a whole new space that has opened up.

Ms. Brandusescu, as this technology is being developed—in terms of research, government, private corporations, and the testing and understanding of its impacts on society—do you have suggestions for a path forward for this committee that would ensure an appropriate understanding of what this means for Canadian citizens, given that we are facing a world where AI and facial recognition are becoming more and more a part of our daily lives?

12:40 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Again, I think we can push back on tech inevitability and say no to some of this technology, but that also requires funding and resources for education around these technologies. A lot of these contracts are made behind closed doors. In industry-government relationships, the public-private partnerships sometimes involve universities and labs, but always with a private-interest focus. The drive is to fund these technologies, to build them and then to use them, without thinking about the consequences. Very little money, time or resources go into dealing with the mess these technologies create and the harm they cause.

We need to make sure there's a balance there, and to step back and reconsider what we mean by innovation when we fund it, especially as taxpayers. We need to really branch out. Right now, I would say that innovation work has been captured specifically by tech innovations designed to develop and deploy these technologies first and ask questions later. We can see how much harm they have caused, and yet here we are, still debating this.

I don't want us to have another Clearview AI case, so what do we do? Transparency around free-trial software is really important, because that goes beyond FRT. It extends to all the AI systems and technologies that the government uses. Nobody sees that information anywhere. If we can get that information out there, especially for law enforcement and national security, they won't be able to use those excuses about protecting trade secrets....

We need to go beyond that. Again, if we want to build trust with the government, we need to have that level of transparency to know even what they are procuring and using so that we can ask better questions.

12:45 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

With that, we will go to Mr. Bains for four minutes.

12:45 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Thank you, Mr. Chair.

Thank you to our witnesses for joining us today. All of you, along with our previous panel, have highlighted the considerable challenges we're facing here.

Ms. Thomasen, recently you participated in drafting some comments on the Toronto Police Services Board's proposed policy on AI technologies. The first recommendation is as follows:

Any implementation of AI technologies by law enforcement needs to begin from the assumption that it cannot reliably anticipate all the effects of those technologies on policing or policed communities and act accordingly in light of these impacts.

I'm interested to know how often, in your view, governments should be reviewing the effects of AI technology used in policing.

12:45 p.m.

Kristen Thomasen

Often. I know that under the draft policy that was ultimately adopted by the TPSB, reviews will take place annually, which I think is a positive. Because of the way technology progresses and the quantity of data that can be collected and utilized even over the course of a year, I actually think that, in a perfect world, annual reviews would not be enough in practice. Of course, reviews and audits take resources and time, and I recognize that there are some practical limitations there.

But that's one police force in Canada. There are other police forces that we already know are using algorithmic policing technologies and are not engaging in these reviews, at least not to the extent that we are aware of publicly. There isn't necessarily the public oversight or transparency available.

So I think the TPSB policy is a positive step forward, but even then it's not enough; there's still a lot that could be done. To the extent that the federal government could be involved in establishing some form of guidelines, and then of course oversight for federal-level police forces, that would be a positive step.

12:45 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Were your recommendations satisfactorily incorporated into the final version of the policy by the TPS?

12:45 p.m.

Kristen Thomasen

I think the final policy did incorporate a number of recommendations that were made—there were a number of parties who contributed recommendations to that process—but there were still some weaknesses in the policy. In my view, the policy still very much treats algorithmic policing technologies as inevitable, as a net benefit so long as we can mitigate some of the risks. I think what you've been hearing from the witnesses today, including me, is that this is not the right framework from which to approach this technology, given the considerable harms that can be enacted through these technologies and the social context into which they're introduced.

One aspect of that policy process that was discussed but not formalized was the creation of an independent expert panel drawing on expertise from a range of different areas, not simply technical expertise. That didn't come to fruition, although there's still some conversation around it. I do think that's a step that could also be helpful at the federal level, to provide some additional guidance and governance around not just facial recognition but all forms of algorithmic policing technologies.

12:45 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

I'm also in British Columbia—my questions are coming to you from Richmond. I want to know whether there is anything you've looked at and studied with the law enforcement agencies here in B.C.

12:45 p.m.

Kristen Thomasen

Well, I would flag that the Vancouver police force uses algorithmic policing technologies and would stand to benefit from looking at some of the processes that the Toronto Police Services Board has engaged in. To engage in that process on a federal and provincial level would be much more helpful, I think, than simply on a city or municipal police force level, because TPSB actually recognizes the—

12:45 p.m.

Conservative

The Chair Conservative Pat Kelly

Ms. Thomasen, I'm sorry. I'm going to have to move to the next round.

12:45 p.m.

Kristen Thomasen

No problem. I'll happily provide some submissions.

12:50 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you very much.

We'll now go to Monsieur Villemure.

12:50 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

How much time do I have left, Mr. Chair?

12:50 p.m.

Conservative

The Chair Conservative Pat Kelly

You have two and a half minutes.

12:50 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Okay. Thank you very much.

Ms. Brandusescu, I'll turn to you again.

When you first spoke, you mentioned the Palantir company. I don't know if my colleagues know this, but on social media, Palantir presents itself as a very nice company and gives itself a very positive image.

At the same time, we know that projects like Gotham and Apollo are war projects, in a way. Palantir is a company that basically serves the military sector; it uses military technology to observe society. I therefore conclude that the words “ethics” and “Palantir” shouldn't be used in the same sentence.

I'd like you to clarify your thoughts on Palantir. I'd also like you to provide us with a list of the 105 companies you mentioned a little earlier and tell us what we should focus on to better understand the problem.

For now, I'll let you talk about Palantir.

12:50 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Thank you for the question, and I'll gladly answer. I love that you stated that “ethics” and “Palantir” are not synonyms, because that is correct.

As I already stated, Palantir is a tech data analytics company, and hence the problem is with the way "AI" is defined by the federal government. The definition is really broad, and I think it's important for me to note it in this meeting. The Treasury Board defines "artificial intelligence" as "Information technology"—which is IT—"that performs tasks that would ordinarily require biological brain power to accomplish, such as making sense of spoken language, learning behaviours or solving problems."

This is how Palantir managed to get on this list, which I will gladly share with you. The problem with Palantir is that it's actually really loved by governments all around the world, but it is getting some pushback right now from the EU—although it is involved in the GAIA-X project.

They were largely funded and created by Peter Thiel and others, and there are many conflict of interest cases even within that governance.

The problem is that they're still there. Clearview AI is also still there, although Canada, through the OPC, has made a direct statement about having them out of the country, so to speak—though that's questionable. They're still scraping the web.

With Palantir, they really do data governance around the world. The reason they are dangerous is that even though everyone knows they're not ethical, and some people think they're cool, they're still hired by law enforcement and—

12:50 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you, Ms. Brandusescu. I'm going to have to go to Mr. Green. We went a little bit over time there, but that's excellent information.

We move on now to Mr. Green for two and a half minutes.

12:50 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you, Mr. Chair. My last set of questions will be directed through you to Ms. Molnar, who referenced what I suggest are the dystopian prospects of "robodogs" and drones increasingly being utilized alongside AI and facial recognition at border crossings.

Can you explain how the existing power imbalances between the state and people crossing borders, especially refugees, can be further exploited by the use of AI and facial recognition?

12:50 p.m.

Lawyer, Refugee Law Lab, York University

Dr. Petra Molnar

Thank you so much. Ultimately, it comes down to the power imbalances, as you say, in this context. We're already dealing with an opaque and discretionary decision-making system in which, when humans are making really complex decisions, it's often difficult to know why particular decisions are rendered and what we can do if mistakes are made. Now imagine that we start augmenting or replacing human decision-makers with automated decision-making and increased surveillance. It basically muddies the already very discretionary space of immigration and refugee processing and decision-making.

It all falls along historical lines of power and privilege, and oftentimes we're talking about communities that already have less access to justice and an inability, for example, to challenge mistakes that have really far-reaching implications.

12:50 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

I want to get a bit more specific. In your report, “Bots at the Gate”, you state:

For persons in need of protection under section 97(1) of the Immigration and Refugee Protection Act, error or bias in determining their application may expose them to the threat of torture, cruel and inhumane treatment or punishment, or a risk to their life.

Do we have a legal or moral obligation to ensure that the refugee process prioritizes the safety and security of the individual, and to remove any technology or practices that increase the risk of error?