Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Leahy  Chief Executive Officer, Conjecture Ltd.
Alfour  Chief Technology Officer, Conjecture Ltd.
Piovesan  Managing Partner, INQ Law

12:50 p.m.

Managing Partner, INQ Law

Carole Piovesan

What I meant by that is that Canada has put together a responsible AI brand through its co-founding of the Global Partnership on AI and its active involvement in the ISO AI standards—the 42000 series. In so doing, Canada has had an important role to play in establishing what responsible AI looks like. I think we should continue to do that not only by selling Canadian technologies globally, but by proving that our companies have established certain security safeguards that build trust in Canadian technology.

12:50 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

That is interesting, because two of the witnesses who appeared earlier told the committee that ultimately, incentives drive businesses to develop artificial intelligence at a very fast pace for financial reasons. We drew a bit of a parallel with social media. We recognize that they have very adverse effects, especially on young people and on our children’s mental health. The witnesses noted that after technology has been developed, companies distance themselves from blame if users misuse it.

You have spoken about putting up structures and ensuring Canadian businesses develop artificial intelligence properly. If businesses were held accountable for the harmful consequences of artificial intelligence developed purely for profit, do you think this might discourage businesses from developing technologies that, while lucrative, offer no benefit to society?

12:50 p.m.

Managing Partner, INQ Law

Carole Piovesan

It depends on how we decide to hold a company accountable for something that it may not foresee.

I think there are certain disincentives you can put in place. There are also incentives we can put in place by providing guardrails and showing what “good” looks like. I think that might be a really effective approach to supporting our companies.

That's not to say that I'm opposed to anything punitive. That's not the point. It's just that we have to understand what we're regulating and what tools we're using to regulate and to enforce a regulation. It has to be tailored.

12:50 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

I would tend to agree with you. I like the comparison with social media or cigarettes. For a long time, people were told cigarettes were good and then they came to understand they were not, and measures were put in place to restrict advertising. I have a feeling that social media currently benefits from a big legal vacuum. Our children are hooked on social media and these companies have not faced any negative consequences.

I think we can learn from that and maybe direct the development of artificial intelligence to ensure we are not justifying the idea of profit at all cost.

Do you agree with that?

12:50 p.m.

Managing Partner, INQ Law

Carole Piovesan

I do, and I think there are certain areas where we could be far more active in ensuring transparency and disclosure with AI, particularly where you're looking at the use of public-facing chatbots that can be confusing to the user. There are notices we could put out to the public to make it much clearer.

Again, we need to be targeted and we need to understand the use of the general purpose technology.

12:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Hardy.

Ms. Lapointe, you will be sharing your time with Mrs. Church. You have five minutes.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you very much, Mr. Chair.

Thank you for joining us, Ms. Piovesan. Your remarks about what we need to focus on have been very interesting.

Last week, we heard from a witness, Antoine Guilmain, who helped us better understand the protection of personal information. He stated that there are many laws and that we should identify any gaps in existing ones before creating new ones. You alluded to that earlier as well.

I know we had Bill C‑27, which died on the Order Paper, but what would you recommend to close these gaps?

12:55 p.m.

Managing Partner, INQ Law

Carole Piovesan

I agree with that. The recommendation is to conduct a more comprehensive study, maybe through regulators, as with the U.K. model, feeding information back into a more central body to understand where there are specific gaps in the application of the regulation in the context of artificial intelligence.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you.

Leslie Church Liberal Toronto—St. Paul's, ON

Hello, Ms. Piovesan.

My question is about the principles- and values-based approach that you've talked about.

What are the best practices that you've seen in the U.K. or Singapore around enforcement and transparency? How do we ensure that the standards in place and the principles that govern the framework are taken to heart, adopted and adequately enforced? Also, how do we ensure that AI systems are transparent to the agencies or the government enforcing the principles?

12:55 p.m.

Managing Partner, INQ Law

Carole Piovesan

The examples we've seen—primarily in Singapore and, to a lesser extent, in the U.K.—are through the consultative and collaborative process between market participants, if you will, and the applicable regulator. It's done on an iterative and ongoing basis to see how the application of those principles is occurring and where there are deficiencies or challenges in the application of those principles.

As we become more and more aware of where the risks are materializing, we will start to see a greater emphasis on regulation, with stronger enforcement. It may also be that for regulators, we will need to augment certain enforcement capabilities so they are able to exercise whatever jurisdiction they have in the application of their particular sector or body of regulation and AI.

Leslie Church Liberal Toronto—St. Paul's, ON

Given the commercial sensitivities that are no doubt present in the sector, how do we ensure that companies are fully transparent and compliant?

12:55 p.m.

Managing Partner, INQ Law

Carole Piovesan

I think it will be a challenge to ensure full transparency, since many AI companies are relying on trade secrets as a mechanism for IP protection. That was embedded in the earlier AIDA model. To the extent that we proceed with a body such as a data and AI commissioner, that might be a role we look to augment within that particular office.

Leslie Church Liberal Toronto—St. Paul's, ON

One of the groups I've met with is the Kids Help Phone—this is more in the context of some of the chatbots and public access to AI currently—and they've talked about a standard of care existing.

Do you think there is an approach we could take to ensure that as AI develops and is publicly utilized, we can put guardrails in place to create a standard of care, particularly in situations where we're dealing with harms that children could be facing?

12:55 p.m.

Managing Partner, INQ Law

Carole Piovesan

I think we will start to see a change in the applicable standard of care, whether it applies to children, to the health care sector or to the auto sector. We will start to see a changing standard of care that is applicable to and conscious of the use of AI in a particular case.

12:55 p.m.

Conservative

The Chair Conservative John Brassard

You have 50 seconds.

Leslie Church Liberal Toronto—St. Paul's, ON

Would you say there is a sufficient duty of care by companies that are innovating in this area right now under Canadian law?

1 p.m.

Managing Partner, INQ Law

Carole Piovesan

It's all very context-specific. In certain sectors, there are existing frameworks that are working quite effectively to stem irresponsible investment in AI in the use context. Whether or not they're experimenting internally is one thing, but how it's ultimately used is another. I think in certain contexts there are effective safeguards. I also think that many sectors are adapting, with concern that the safeguards aren't good enough.

Look at health care. Health Canada came out with “Software as a Medical Device” to help us better understand how we can assess the risk of medical devices. That's important. The development of AI in health care is one thing, but we need that kind of tailored approach to ensure that when it gets to bedside, it meets certain standards, and we have to know what those standards are.

1 p.m.

Liberal

Leslie Church Liberal Toronto—St. Paul's, ON

Thank you so much.

1 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Church.

Ms. Piovesan, I want to thank you on behalf of the committee for appearing today. I appreciate your input.

I have a couple of things for the committee. In case you haven't seen it yet—it was distributed among committee members—we heard from the office of the AI minister that he will not be appearing for this study. That's in the digital binder. Also, as a reminder to committee members, the Ethics Commissioner will be appearing on Monday of next week.

That's it for today. Thank you, everyone.

The meeting is adjourned.