Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session.

Witnesses

Ana Brandusescu  AI Governance Researcher, McGill University, As an Individual
Alexandre Shee  Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie  Partner, Digital Public
Ashley Casovan  Managing Director, AI Governance Center, International Association of Privacy Professionals

4:10 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

I'm interested in your comments. Some of the notes I made include “digital slavery” and your concerns about that.

How do you think the impacts of AI on work should be regulated in Canada?

4:10 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

There are two aspects to consider.

The first one is how it's impacting work today in Canada and beyond. That's the first element. Then, how will it impact society and the place of work, going forward?

If we think about today, we see there are millions of people who actually work behind the scenes in AI systems to make them operate effectively. They are not protected under this law, nor are they protected under any legislation that's coming out on AI; therefore, there's an opportunity to legislate the AI supply chain for what it is, a supply chain with millions of people working on it.

In the second phase—the impact on workers going forward—there are a lot of unknowns around what will happen to workers and how their work will be influenced.

One of the advantages of the Global Partnership on Artificial Intelligence is that we have representatives from academia, industry and worker unions, as well as governments. The statement that was put out was essentially that we need to put in place studies on the impact of AI on future work. We need to invest in retraining. We need to invest in making sure we're transitioning some roles. We need to be aware, even most recently with the advent of generative AI, that there already are economic impacts on low-skilled workers, who will need to be retrained and given other opportunities.

The future of work needs that, and the Global Partnership on AI has a policy brief that is available online.

4:10 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

I think I have about 15 seconds left.

You provide an overview of regional and national initiatives. Which countries have the most robust approaches? Would you recommend amendments to the artificial intelligence legislation that we have here?

4:10 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

The first amendment that I would recommend is to have a disclosure on the supply chain to ensure that we can decide on the usage of ethical AI that does not have forced labour or child labour in it. Right now the leading jurisdiction is the EU, where we see that companies we're working with actually have, in practice, higher standards than anywhere in the world, and they are forcing disclosure mechanisms in the private sector.

I would say that's where we should be looking right now: to the EU for legislation.

4:10 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you, Mr. Chair.

4:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Van Bynen.

Mr. Lemire, you have the floor.

4:10 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

I'd like to thank all the witnesses.

I'll start with Ms. Casovan.

Ms. Casovan, during your time in the Government of Canada, you led the development of the first‑ever artificial intelligence policy, namely, the directive on automated decision‑making. This directive imposes a number of requirements on the federal government's use of technologies that assist or replace the judgment of a human decision‑maker, including the use of machine learning and predictive analytics. These include the requirement to provide notice when an automated decision‑making system is being used, as well as recourse mechanisms for those who wish to challenge administrative decisions.

In your opinion, should this type of notice or recourse provision be included in the Artificial Intelligence and Data Act?

4:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I believe this type of notification is required.

One thing that we did with the directive on automated decision systems was recognize that there are multiple different types of contexts in which these systems are being used and that those have different types of categories of harms. If you have a reference in the legislation like appendix C in the directive, then you'll see that there are different requirements that exist for those different types of harms.

One of the challenges we had when looking to implement it was that people were looking for the acceptable standards or the bar that they'd need to meet. Unfortunately, that wasn't developed. That's what needs to happen now in order to address some of the concerns that you've raised—notification and other types of documentation requirements. That type of additional context is required through additional regulations that support the broader framework of AIDA, and then you need to look at what you do in those contexts for different degrees and categorizations of risk.

4:15 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

In the case of a remedy, who should the consumer turn to if they want to challenge an automated decision‑making process or seek clarification?

4:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

When consumers are looking to make a challenge, again, not only do they need the notification in order to understand that an AI system is even being used, but they should also have appropriate recourse for that. Again, these are different types of mitigation measures that will be context-specific and that should be included based on what the type of system is—again, following subsequent rules that should be made.

4:15 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

As I understand it, the directive requires an algorithmic impact assessment for each automated decision-making system. Based on various specific criteria, this assessment leads to a classification ranging from level 1, the lowest impact, to level 4, the highest impact. The results of that assessment must be made public and updated if there are any changes to the functionality or scope of the system.

Why is it important that automated decision‑making systems undergo an algorithmic impact assessment?

4:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

The key issue that we were trying to address is not to over-regulate or create more oversight than is required. We want there to be this balance of innovation in using these systems and also protection of the people who are using them. By breaking it down and recognizing that different types of issues and harms occur with the different types of systems, we wanted to address the effort that is required to ensure that appropriate mitigation measures are put in place.

4:15 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Can you give us some examples of some of the criteria used to determine the level of impact of each system? Would it be a good idea to add this type of requirement to Bill C‑27?

4:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I would love to see that. I think we see that in the amendments, with the different types of classes. We have the seven classes of potential impacts. I think there's recognition in that. There are different levels of harm that can exist within that. I would definitely recommend adding something almost like a matrix—to say that you have these different types of impacts that could occur in hiring or health, and these are the different types of harms that could exist, so therefore these are the mitigations needed. Most importantly, it's about matching that to industry-developed standards.

One thing that Bianca was referencing—and other witnesses have too—is the need for increased public participation in this process. Standards development processes do allow for that and accommodate that. That's why I think this is really important.

Again, it's recognizing that we're not going to be able to put everything in black and white in any sort of legislation. Having people trained to understand what those harms are, and how to look for them and mitigate them, is the point of having somebody responsible, like a chief AI officer.

4:15 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

One of the criteria in the algorithmic impact assessment is the level of impact on the rights not only of individuals but also of communities. We have heard the call from marginalized communities that Bill C‑27 must go beyond individualized harms and include harms that disproportionately affect certain groups.

Can you explain to us why we need to change some individualized language and ensure that the government directive will be as specific and inclusive as possible?

4:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

Different types of mitigation, as you're mentioning, depend on the use of the system. Both the technology and the context within which it's being used will change. The harms will change, from an individual to a group to the organization itself. Therefore, first of all, it's understanding what the harms are.

The work I did at the Responsible AI Institute really built on the work I did at the Treasury Board: defining what the scope of a system is and putting something like a certification mark on it, like a Good Housekeeping seal or a LEED symbol. That type of acknowledgement would require you to be able to identify what those harms are, first and foremost, and therefore identify the different types of criteria or controls you would need to go through in order to mitigate them for the individual, the group or the organization.

4:20 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you very much.

4:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

You're welcome.

4:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Lemire.

Mr. Masse, the floor is yours.

4:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Maybe I'll start with Mr. Shee, because he's virtual.

There have been suggestions, not only by this panel but others as well, that we scrap this and start all over. I'm wondering if you have an opinion with regard to the content related to the Privacy Commissioner. If there is a separation of the two major aspects of the bill here, would you support continuation of the privacy work or should that be potentially looked at as well?

Then I'll go to the witnesses in person.

4:20 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

I would say that the AI act in itself is extremely important. As was mentioned by other witnesses today, AI systems already have an impact on people's lives, and their development is just increasing in speed. I would be very favourable to seeing legislation that at least sets the base framework. From there, we can move forward.

Right now the legislation is not complete. It needs work and it needs to have additional amendments to ensure that it touches the whole AI supply chain and protects people throughout, both while it's being built and when it's being deployed.

4:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

I'll move to Ms. Casovan, please, and then across the table.

Again, what I'm looking for is this: If we do end up not getting enough fixes to the AI component, and that starts over or is delayed, should we continue to progress with the privacy and the potential changes that are suggested there?

4:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I'm a huge fan of the fact that this bill has.... I know that some people have said it's a bolt-on, as was announced today, but I think it's an important bolt-on. If AIDA does not continue, there are privacy implications and consumer protection implications in relation to the use of AI.

I would like to see at least those two components strengthened.

4:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you.

I'll go to our next witness, please.

4:20 p.m.

Partner, Digital Public

Bianca Wylie

I'm not going to respond to that. I'm not well placed to comment on the privacy pieces of the bill.