Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session (December 7, 2023). (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Ana Brandusescu, AI Governance Researcher, McGill University, As an Individual
Alexandre Shee, Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie, Partner, Digital Public
Ashley Casovan, Managing Director, AI Governance Center, International Association of Privacy Professionals

4:50 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

I agree with you. Moreover, we were told on Tuesday that the third world war will be technological.

To avoid potential abuses, should we still have something like what is about to be implemented in Europe and around the world?

4:55 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

Thank you.

To build on Bianca's point, I think we need to regulate AI. We need to slow down. We can't move fast and break things with regulation. Again, AI is being regulated, but it's private regulation. It's self-regulation, and that's not working. Mr. Shee already said that in his first five minutes.

We need something different. We need it to be like the EU's approach in that it must cover both the public and the private sector, and it cannot be centralized. I insist on that, because there's too much at stake to keep all of the power in one agency. I will also say that it can't just be the OPC. It cannot just be the Privacy Commissioner, because AI is more than privacy. AI is also about privatization.

What we see right now is the risk of regulatory capture, because every time there's a new summit, as in the U.K. at Bletchley Park, the major governments, including ours, get together and announce collaborations with a top firm. Now we have the usual suspects—Amazon, Google and Microsoft—and then the new kids on the block, but it cannot be that.

Again, this isn't about perfection at all; it's that the process to get here was one and a half years of almost no public consultation, participation or understanding, even when, as Bianca said, we do have specific examples of harms over and over again. We do need to make sure that AI is regulated. We can use our imagination to do that with law.

4:55 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

You have 30 seconds, Ashley.

4:55 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I think I've shared repeatedly that I don't think AI is one monolithic thing. I do think that it needs to be broken down into sector-specific regulation.

I think what AIDA does is provide a framework that is then dependent on other types of sector-specific regulation. There is no contesting that how this was done is problematic. There needs to be more public consultation. I was really happy to see that the amendments at least speak to what was heard and how that's being addressed.

I think if we just put that aside—the process is for you guys to debate—it's very important to have regulation of AI systems. Through a lot of interventions with civil society organizations, I've seen and experienced harms that are occurring. I don't think that having no rules, or just leaving it up to self-regulation from companies to say, “We're doing the best we can do,” is going to prompt the appropriate behaviour. I think legislators need that.

We need to be able to set the homework, too. We can't say, “You go and write your test, and then you mark it yourself.” I think it's very important that we as civil society organizations, in combination with industry and with government and academics, write what those tests are, the standards that I'm talking about, and then use that to assess industry.

4:55 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you very much.

4:55 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Généreux.

Mr. Turnbull, the floor is yours.

4:55 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thanks, Chair.

Thanks to all of the witnesses for being here today. We have a great juxtaposition of perspectives, and we've been hearing a diverse cross-section of views throughout this study.

I think we can all admit that this is a very big and important piece of legislation that is complex and challenging for all of us, both as legislators and as.... I'm not sure that any one stakeholder has the full view on how this should move forward. I think it's good to have conversations like this that are push-and-pull. There are lots of challenges here. I appreciate that.

I wanted to just say, first off, that this bill was initiated due to recommendations from the minister's AI advisory committee, which consisted of industry experts. The Facebook whistle-blower was also part of the context that led to this work.

I'd also say that, from my perspective, there were consultations with over 300 stakeholders, including universities, institutes, companies, industry groups, associations, privacy experts and consumer protection groups. I think there are some other categories, but those are the ones that I can see. I have the list here. It has been provided publicly and to committee members.

I would also say, in terms of the way that parliamentary practice goes, that usually amendments aren't provided in advance, during a study where you hear from witnesses. The government has provided the amendments in advance. We've also heard from some witnesses.

There are varying perspectives on what the process should look like. We've heard from some witnesses that tabling a framework piece of legislation was a good way to get something on the Order Paper and then undertake a lot of consultation to inform amendments to that. Some people feel like that process is very justified.

I just wanted to make those statements off the hop.

Ms. Casovan, we've heard the point that you made, about balancing innovation and protection, from some other witnesses. What I've heard is that having responsible guardrails for AI will allow people to benefit from it while protecting them at the same time. I know that's a challenge. Like any legislation that we work on, it is a balancing act that we're constantly confronting.

Could you speak to how we will know if we get that balance right, from your perspective?

5 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

It would be if no one is harmed.

It's really difficult to address that. I think that, first, we need to try. We need to recognize that just leaving it to the free market is probably not going to result in the conclusion we want to see.

There's an amazing resource called the AI Incident Database. I don't know if you've seen it. It tracks different types of harms that exist. I'd love for that to be compiled so that we understand the harms better and can articulate in more common terms what they are.

It's a difficult question to answer in the absence of having any of these in place. I think the requirement for collecting data through a commissioner's office that would have those use cases reported is important.

5 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Ms. Casovan, from your perspective, are we moving too fast on this legislation and this work?

We heard from quite a few witnesses earlier this week that we're in fact behind and we need to move faster. That's what I've been hearing a lot from stakeholders. Some would maybe disagree with that.

What would you say?

5 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

With all respect to my fellow witnesses, I think we're moving way too slowly on this.

5 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

I understand the gravity of this. There are many different risks of harm, and it's hard to understand those without contextualizing them. I think Ms. Wylie made that point quite well. I heard her points, which essentially seem to lean towards a really decentralized approach to this, whereas the approach we're opting to take is to have a very central piece of legislation that is going to regulate all activity to some degree. Obviously, that will need to evolve and change. We know that the pace at which AI is evolving is so quick that it's hard to keep up.

What is your perspective? It's a tough question to answer.

5 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I think there are two key points here.

One is that we really need to have one point of accountability. There's a lot of interoperability between different types of AI systems, so knowing exactly.... If it's an automated vehicle, it might be very clear that this is going to fall into transportation, but if it's a health care system, it might have issues related to consumer protection or it might have issues related to the health and safety of somebody. Breaking those apart is difficult, so what I think this bill does is require those different types of regulators and regulations to work hand in hand with each other.

There are also gaps.

Maybe, third, I would add—as I said in my opening statement—the professionalization of individuals who would be responsible and accountable for the governance of these systems. You would then have some consistency across all of these different regulations.

5:05 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

It's interesting, because it's not uncommon these days for us to talk about big, overarching issues and to want to take an all-of-government, all-of-economy or all-of-society approach. I think most people understand that governments have to integrate across ministries and really tackle these problems together. We see that with the fight against climate change.

However, a lot of the legislation still sits within ministerial accountability and falls within a minister's mandate and role. I think it's not uncommon to have central legislation that sits in one ministry but still impacts work right across government ministries. I think that's what we might see in this process.

Is that what you're hoping to see?

5:05 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

The requirement of harmonization across different ministries, I think, is really important. I would also flag the requirement of harmonization within Canada—interprovincially, as well as between the provincial, national and local levels of government—which I think is quite important.

Also, this bill, as we know from the amendments, addresses international harmonization, with Canada playing a crucial role with the EU—which we've heard a lot about today—but we haven't talked about the U.S. executive order and its implications.

5:05 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Chair, I think I'm out of time, but thanks for your leniency.

5:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

I now give the floor to Mr. Williams for five minutes.

5:05 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you very much, Chair.

Ashley, I want to follow up with you on a couple of things.

This has been a great discussion, by the way, especially on AIDA today.

We talk about the value of public and private data, especially for AIDA, and about where this bill exempts it. Right now, under this bill, DND, CSIS and CSE are exempt from AIDA, and there's provision for any federal or provincial department or agency to be exempted via regulation. That means the entire federal government and Crown corporations are exempt.

When we talk about AIDA as a whole in this bill, in your opinion, is it right that we've exempted all of the public government from AIDA as a whole?

5:05 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

That's why we worked on the Directive on Automated Decision-Making at the Treasury Board Secretariat. That's the purview of the management systems that Treasury Board is responsible and accountable for. Should that be raised to the level of an act, similar to how we have PIPEDA and the Privacy Act governing how public sector services work? Yes.

One thing I would like to see is alignment of requirements between AIDA and the directive, or a subsequent type of policy that would come out from TBS recognizing that automated decision-making systems aren't the only types of AI.

One of the things the directive doesn't address—it's out of scope—is national security systems, as you mentioned, so I do think that additional provisions would need to be made for that.

5:05 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

I guess the premise of this.... Just for everyone listening right now, the first part of Bill C-27 does not cover the public sector, but to the point that you brought up, we have the Privacy Act, which, it could be argued, we should have been studying at the exact same time. The point I'm making is that there is nothing out there, especially not in AIDA, that addresses AI in the public sector, and we've talked a lot about that.

I'm trying to get a better handle on your recommendation. Should this have been included with AIDA right now, or is this a whole other act that you're looking at that we should have included with this?

5:05 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

The Directive on Automated Decision-Making does, though, oversee the government's use of AI systems.

One additional thing is that we should, again, ensure alignment between these two, because most government departments aren't actually developing their own AI systems; they're purchasing them. I think ensuring that procurement rules align with AIDA is quite important.

5:05 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

However, privacy, and an act that would govern AI data and AI as a whole, would certainly cover that. Procurement would only look at other areas, like the Investment Canada Act or other acts.

It's interesting to me that that's not in there. I think that is a glaring hole that I've just noticed today.

I want to switch to either Ms. Wylie or Ms. Brandusescu.

I focus a lot on obstacles to competition. We look at the big, bossy conglomerates that exist within the system.

Ms. Wylie, you made an interesting comment that this seems to be going forward only for industry, because capital is looking for a place to go. The examples you gave are that it seems to be benefiting Amazon, Microsoft and Google. They're big, bossy conglomerates. They're huge companies that are only looking to get bigger, and obviously to benefit from this.

When it comes to competition, as the industry committee, we want small, scrappy competitors and companies to be able to enter the space, and we want to ensure that they can compete in the market.

I agree with your arguments on where we are with AIDA. Let's talk about what it would look like if we started anew. How do we create competition? Where do we start in terms of making sure that we get all the players into the discussions, not just the big ones but some of the smaller ones as well?

5:10 p.m.

Partner, Digital Public

Bianca Wylie

I have just two comments on this.

One, it's partially why, if we had proper public engagement and started from the beginning, you'd have to map the infrastructural assets that make up artificial intelligence. There is no AI without big tech, full stop. You can't spin it up in your garage. You can't go and run your little software company just because code is available to you. That's not how this industry works. This is what I mean: I'm concerned about the lack of homework that has been done to make sure we're starting from a place of material, physical, infrastructural reality and how it relates to this industry. That's one thing.

The second thing I want to say, which relates back to the conversation we were having about centralization and decentralization, is that the Canadian government does not have much clout in terms of telling the heart of this infrastructure what it can and can't do. When we think about privacy legislation, if we start up here with an umbrella called “privacy” and then look at how that works in different sectors, we might know what that looks like sector to sector. If our umbrella is called “artificial intelligence”, it's artificial intelligence what? What exactly are we trying to do if our umbrella is called “artificial intelligence”? Are we trying to use it everywhere?

I just want to keep returning us to the fact that we're having a conversation within a frame that does not track to the reality of how this industry is set up, nor how our pre-existing legislation is set up.

I just want to say something about how small companies might come in on this. The start-ups are hoping no one is going to ask about their two- or three-year revenues, because all start-ups have to do is show scale. That's how the venture capital industry works: you just have to show that your thing is getting big; you don't have to show that it's making money. That's how similar it is to a casino.

That's why I think building out this sector without looking at the consequences for the rest of our whole economy is also a grave error.

5:10 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

To add to Bianca's point, I want to take us back four years, to when Element AI was heavily invested in by both the public and the private sector. It's a case that we just do not speak about anymore in Canada and Quebec. This goes to Bianca's point about who owns the infrastructure and who owns the data centre versus the datasets. Again, without big tech there may not be AI, but I would argue that without the military there would be no AI, because that's where it comes from, like most technology.

Element AI was a darling of Canada. In the end, the space that we had in the regulatory framework for competition did not allow it to survive. What happened? It was acquired by ServiceNow, a Silicon Valley company that does, frankly, worker surveillance.

When we move on to this new ideation, I would like to know exactly what more shared prosperity in competition looks like across SMEs and big companies. I would like us to reflect on the failures of AI in Canada within the industry space, see where we went wrong, and ask what happened to the massive amount of funding and government spending to prop up our industry, with all the AI research expertise we have and all of the centres of excellence. We should reflect on this before we even go and ideate on how competition should look. We should reflect on what happened, especially with Element AI.

5:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

You're out of time, Mr. Williams.

Before I turn to Mr. Gaheer, I'll allow myself one small question.

Ms. Brandusescu, you just mentioned something we've never heard so far on the committee. You said there would be no AI without the military. Would you mind explaining that?

5:15 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

Certainly. I've heard witnesses talk over and over again about scale, but not about violence at scale. That's what we see: how AI is being used in the military. We have to go back to something I spoke about when Parliament did a study on facial recognition technology—that is, companies that are defence contractors, which have now spun themselves up as AI and data analytics firms. A famous one is Palantir. You may know of them.

Palantir is interesting, because it started in defence, but now it's everywhere. The NHS in the U.K. just gave them a contract worth millions of dollars, despite so much opposition to it. Palantir promised that the U.K. government would be in charge of the people's data, but in the end that is not so. We have past examples of Palantir abusing human rights. Let's bring that into context. For example, an Amnesty U.S.A. study showed how, in the U.S., the government planned mass arrests of nearly 700 people and “the separation of children from their parents...causing irreparable harm”.

I'll go back to the military. What does this mean? The military is the biggest funder of AI. We see rapid, escalating killing at scale. When we are racing to move forward with making more AI, making it faster, and creating faster regulation just so we can justify to ourselves that we use it, we are not thinking about what should be banned or what should be decommissioned—