Evidence of meeting #108 for Industry, Science and Technology in the 44th Parliament, 1st Session.

Witnesses

Ignacio Cofone, Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual
Catherine Régis, Full Professor, Université de Montréal, As an Individual
Elissa Strome, Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research
Yoshua Bengio, Scientific Director, Mila - Quebec Artificial Intelligence Institute

11:05 a.m.

The Chair Liberal Joël Lightbound

I call the meeting to order.

Good morning one and all. Welcome to meeting number 108 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming its study of Bill C-27, an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.

Today's witnesses are all joining us by video conference. We have with us Ignacio Cofone, Canada research chair in artificial intelligence law and data governance at McGill University; Catherine Régis, full professor at Université de Montréal; Elissa Strome, executive director of pan-Canadian AI strategy at the Canadian Institute for Advanced Research; and Yoshua Bengio, scientific director at Mila - Quebec Artificial Intelligence Institute.

Welcome and thank you all for being with us.

Since we are already a bit behind schedule, I'm going to turn the floor right over to you, Mr. Cofone. You have five minutes for your opening statement.

11:05 a.m.

Ignacio Cofone Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Thank you very much, Mr. Chair.

Good morning, everyone, and thank you for the invitation to share with the committee my thoughts on Bill C-27.

I'm appearing today in my personal capacity. The Chair has already introduced me, so I'll skip that part and simply say that it is crucial that Canada have a legal framework that fosters the enormous benefits of AI and data while preventing its population from becoming collateral damage.

I'm happy to share my broad thoughts on the act, but today I want to focus on three important opportunities for improvement while maintaining the general characteristics and approach of the act as proposed. I have one recommendation for AIDA, one for the CPPA and one for both.

My first recommendation is that AIDA needs an improved definition of “harms”. AIDA is an accountability framework, and the effectiveness of any accountability framework depends on what it is that we hold entities accountable for. AIDA currently recognizes property, economic, physical and psychological harms, but for it to be helpful and comprehensive, we need to go one step further.

Consider the harms to democracy that were imposed during the Cambridge Analytica scandal and consider the meaningful but diffuse and invisible harms that are inflicted every day through intentional misinformation that polarizes voters. Consider the misrepresentation of minorities that disempowers them. These go unrecognized by the current definition of “harms”.

AIDA needs two changes to recognize intangible harms beyond individual psychological ones: It needs to recognize harms to groups, such as harms to democracy, as AI harms often affect communities rather than discrete individuals, and it also needs to recognize dignitary harms, like those stemming from misrepresentation and the growth of systemic inequalities through automated means.

I therefore urge the committee to amend subsection 5(1) of AIDA to incorporate these intangible harms to individuals and to communities. I would be happy to propose suggested language.

This fuller account of harms would bring Canada up to international standards, such as the EU AI Act, which considers harms to “public interest”, to “rights protected” by EU law, to a “plurality of persons” and to people in a “vulnerable position”. Doing so would also better align with AI ethics frameworks, such as the Montreal declaration for responsible AI, the Toronto declaration and the Asilomar AI principles. It would also increase consistency within Canadian law, as the directive on automated decision-making repeatedly refers to “individuals or communities”.

My second recommendation is that the CPPA must recognize inferences as personal information. We live in a world where things as sensitive and dangerous as our sexuality, our ethnicity or our political affiliation can be inferred from things as inoffensive as our Spotify listens, our coffee orders or our text messages, and those are just some of the inferences that we know about.

Inferences can be harmful even when they are incorrect. TransUnion, the credit rating agency, for example, was sued in the United States a couple of years ago for mistakenly inferring that hundreds of people were terrorists. By supercharging inferences, AI has transformed the privacy landscape.

We cannot afford a privacy statute that focuses on disclosed information and builds a back door into our privacy law that strips it of its power to create meaningful protection in today's inferential economy. The CPPA doesn't rule out inferences being personal information, but it doesn't incorporate them explicitly. It should. I urge the committee to amend the definition of personal information in the CPPA to say that “'personal information' means disclosed or inferred information about an identifiable individual or group”.

This change would also increase consistency within Canadian law, as the Office of the Privacy Commissioner has repeatedly stated that inferences should be personal information, and also with international standards, as foreign data protection authorities emphasize the importance of inferences for privacy law. The California attorney general has also stated that inferences should be personal information for the purposes of privacy law.

My third, briefer recommendation follows from the first two: reforming enforcement. As AI and data continue to seep into more aspects of our social and economic lives, one regulator with limited resources and personnel will not be able to keep an eye on everything. It will need to prioritize. If we don't want the other harms to fall through the cracks, both parts of the act need a combined public and private enforcement system, taking inspiration from the GDPR, so that we have an agency that issues fines without preventing the court system from compensating for tangible and intangible harm done to individuals and groups.

We have also submitted a brief elaborating on the suggestions outlined here.

I'd be happy to address any questions or elaborate on anything.

Thank you very much for your time.

11:10 a.m.

The Chair Liberal Joël Lightbound

Thank you very much.

Ms. Régis, you may now go ahead. You have five minutes for your opening statement.

11:10 a.m.

Professor Catherine Régis Full Professor, Université de Montréal, As an Individual

Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to comment on the AI portion of Bill C-27.

I am a full professor in the faculty of law at Université de Montréal. I am also the Canada research chair in collaborative culture in health law and policy, as well as the Canada-CIFAR chair in AI, affiliated with Mila. From January 2022 to December 2023, I co-chaired the Working Group on Responsible AI for the Global Partnership on AI.

The first point I want to make is to reaffirm not only the importance, but also the urgency of creating a better legal framework for AI, as proposed in Bill C-27. That has been my view for the past five years, and I am now more convinced than ever, given the dizzying pace of recent developments in AI, which you are all familiar with.

We need legal tools that are binding. They must clearly set out our expectations, values and requirements in relation to AI, at the national level. During the citizen consultations that culminated in the development of the Montréal Declaration for a Responsible Development of Artificial Intelligence, the first need identified was for an appropriate legal framework that would enable the development of trusted AI technologies.

As you probably know, that trend has spread across the world, the most obvious example definitely being the European Union's efforts. As of last week, the EU is now one step closer to adopting a regulatory framework for AI.

In addition to these national requirements, the global discussions around AI and the resulting decisions will have repercussions for every country. In fact, the idea of creating a dedicated AI authority is being discussed.

In order to ensure that Canadian values and interests are taken into account in the international space, Canada has to be able to influence the discussions and decisions. Setting out a national vision with strong and clear standards is vital to playing a credible, meaningful and influential role in the global governance of AI.

That said, I think Bill C-27 could still use some improvements. I will focus on two of them today.

The first improvement is to make the artificial intelligence and data commissioner more independent. Although recent amendments have resulted in improvements, the commissioner is still very much tied to Innovation, Science and Economic Development Canada. To avoid any conflict of interest, real or apparent, the government should create more of a wall between the two entities. This would address any tensions that might arise between the government's role as a funder on one hand, and its role as a watchdog on the other.

Possible solutions include creating an office of the artificial intelligence commissioner that is totally independent of the department, and empowering the commissioner to impose administrative monetary penalties or require that corrective actions be taken under the accountability framework. In addition, the commissioner could be asked to recommend new or improved regulations informed by their experience as a watchdog, mainly through the annual public report.

Other measures could also be taken. Once the legislation is passed, for instance, the government could give the commissioner the financial and institutional resources, as well as the qualified staff necessary to successfully carry out the duties of the commissioner. Making sure that the commissioner has the means to achieve their objectives is really important. Another possibility is to create a mechanism whereby the public could report issues directly to the commissioner. That would establish a relationship between the two.

The second major improvement that's needed, as I see it, is to further strengthen the crucial role that human rights can play in analyzing the risks and impacts of AI systems. The bill specifically mentions the importance of taking human rights into account in defining the classes of high-impact AI systems. However, the importance of then incorporating consideration of those rights in companies' assessments, which could include an analysis of the risks of harm and adverse effects, is not quite so clear.

I would also recommend adding specific language to address the need to conduct impact assessments for human rights in relation to individuals or groups of individuals who may be affected by high-impact AI systems. A portion of those assessments could also be made public. These are sometimes called human rights impact assessments.

The Council of Europe, the European Union with its AI legislation, and even the United Nations Educational, Scientific and Cultural Organization are working on similar tools, so exploring the possibility of sharing expertise would be worthwhile.

This second recommendation is fundamental. While the AI race is very real, there can be no winner in a race to violate human rights. The legislation must make that clear.

Thank you.

11:15 a.m.

The Chair Liberal Joël Lightbound

Thank you.

Ms. Strome, go ahead.

11:15 a.m.

Dr. Elissa Strome Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Thank you, Mr. Chair.

Hello. My name is Elissa Strome. I am the executive director of the pan-Canadian AI strategy at the Canadian Institute for Advanced Research, CIFAR.

Thank you for the opportunity to meet with the committee today.

CIFAR is a Canadian-based global research organization that brings together brilliant people across disciplines and borders to address some of the most pressing problems facing science and humanity. Our programs span all areas of human discovery.

CIFAR's focus on pushing scientific boundaries allowed us to recognize the promise of an idea that Geoffrey Hinton came to us with in 2004—to build a new CIFAR research program that would advance the concept of artificial neural networks. At the time, this concept was unpopular, and it was difficult to find funding to pursue it.

Twenty years later, this CIFAR program continues to put Canada on the global stage of leading-edge AI research and counts Professor Hinton, Professor Yoshua Bengio—who is here with us today—Professor Richard Sutton at the University of Alberta and many other leading researchers as members.

Due to this early foresight and our deep relationships, in 2017, CIFAR was asked to lead the pan-Canadian AI strategy. We continue to work with our many partners across the country and across sectors to build a robust and interconnected AI ecosystem around the central hubs of our three national AI institutes: Amii in Edmonton, Mila in Montreal and the Vector Institute in Toronto. There are now more than 140,000 people working in the highly skilled field of AI across the country.

However, while the pan-Canadian AI strategy has delivered on its initial promise to build a deep pool of AI talent and a robust ecosystem, Canada has not kept pace in its regulatory approaches and infrastructure. I will highlight three priorities for the work of this committee and ongoing efforts.

First is speed. We cannot delay the work of AI regulation. Canada must move quickly to advance our regulatory bodies and processes and to work collaboratively, at an international level, to ensure that Canada's responsible AI framework is coordinated with those of our partners. We must also understand that regulation will not hinder innovation but will enhance it, providing greater stability and ensuring interoperability and competitiveness of Canadian-led AI products and services on the global stage.

Second is flexibility. The approach we take must be able to adapt to a fast-changing technology and global context. So much is at stake, with the potential for AI to be incorporated into virtually every type of business or service. As the artificial intelligence and data act reflects, these uses can have a high impact. This means we must take an inclusive approach to this work across all sectors, with ongoing public engagement to ensure citizen buy-in, in parallel with the development and refinement of these regulations.

We also must understand that AI is not contained within borders. This is why we must have systems for monitoring and adapting to the global context. We must also adapt to the advances and potentially unanticipated uses and capabilities of the technology. This is where collaboration with our global partners will continue to be key and will call upon the strengths of Canada's research community, not only in ways to advance AI safety but also in the ethical and legal frameworks that must guide it.

Third is investment. Canada must make significant investments in the infrastructure, systems and qualified personnel needed to meaningfully regulate AI when it is used in high-impact systems. We were glad to see this defined in the amendments to the act.

Just like those in the U.S. and the U.K., our governments must staff up with the expertise to understand the technology and its impacts.

For Canada to remain a leader in advancing responsible AI, Canadian companies and public sector institutions must also have access to the funding and computing power they need to stay at the leading edge of AI. Again, the U.S., the U.K. and other G7 countries have a head start on us, having already pledged deep investments in computing infrastructure to support their AI ecosystems, and Canada must do the same.

I won't pretend that this work won't be resource-intensive; it will be. However, we are at an inflection point in the evolution of artificial intelligence, and if we get regulation right, Canada and the world can benefit from its immense potential.

To conclude, Canada has tremendous strengths in our research excellence, deep talent pool and rich, interconnected ecosystem. However, we must act smartly and decisively now. Getting our regulatory framework, infrastructure and systems right will be critical to Canada's continued success as a global AI leader.

I look forward to the committee's questions and to the comments from my fellow witnesses.

Thank you.

11:20 a.m.

The Chair Liberal Joël Lightbound

Thank you very much.

It is now Mr. Bengio's turn.

11:20 a.m.

Yoshua Bengio Scientific Director, Mila - Quebec Artificial Intelligence Institute

Thank you, Mr. Chair.

Good morning.

First, I want to say how much I appreciate this opportunity to meet with the committee.

My name is Yoshua Bengio, and I am a full professor at Université de Montréal, as well as the founder and scientific director of Mila - Quebec Artificial Intelligence Institute. Here's a fun fact: I recently became the most cited computer scientist in the world.

Over the past year I've had the privilege of sharing my perspective on AI in a number of important international forums, including the U.S. Senate; the first global AI Safety Summit; an advisory board to the UN Secretary-General; and the U.K. Frontier AI Taskforce; in addition to the work I'm doing here in Canada in co-chairing the advisory committee on AI for the government.

In recent years, the pace of AI advancement has accelerated to such a degree that I and many leaders in the field of AI have revised downwards our estimates of when human levels of broad cognitive competence, also known as AGI, will be achieved—in other words, when we will have machines that are as smart as humans at a cognitive level.

This was previously thought to be decades or even centuries away. I now believe, with many of my colleagues, including Geoff Hinton, that superhuman AI could be developed in the next two decades, and even possibly in the next few years.

If we look at the low end of those estimates, we're not ready. The prospect of such an early emergence of human-level AI is extremely worrying.

As discussed in the international forums I mentioned, without adequate guardrails, the current AI trajectory poses serious risks of major societal harms even before AGI is reached.

To be clear, progress in AI has opened exciting opportunities for numerous beneficial applications that have motivated me for many years, yet it is urgent to establish the necessary guardrails to foster innovation while mitigating risks and harms.

With that in mind, we urgently need agile AI legislation. I think this bill does that and moves in the right direction, but initial requirements must be put in place even before the consultations to develop the more comprehensive regulatory framework are completed. With the current approach, it would take something like two years before enforcement would be possible.

I therefore support AIDA broadly and would like to formulate recommendations to this committee on ways to strengthen its capacity to meaningfully protect Canadians. They are laid out in detail in my submission, but there are three things that I would like to highlight.

The first is the urgency to adopt legislation.

Upcoming advances are likely to be disruptive, and the timeline for them is very uncertain. In this situation, an imperfect law whose regulations can be adapted later is better than no law and better than postponing a law for too long. We would do best to move forward with AIDA's framework and rely on agile regulatory systems that can be adapted as the technology evolves.

Also, because of the urgency, the law should include initial provisions that will apply as soon as it is adopted to ensure the public's protection while the regulatory framework is being developed.

What would we do as an initial step? I'm talking about a registry.

Developers of systems beyond a certain level of capability should report to the government and provide information about their safety and security measures, as well as safety assessments. A regulator would be able to use that information to form best-in-class requirements for future permits to continue developing and deploying these advanced systems. This would put the burden of demonstrating safety on the developers with the billions required to build these advanced systems, rather than on taxpayers.

Second, another important point to add to the law is that national security risks and societal threats should be listed among the high-impact categories. Examples of harmful capabilities include being easily adapted to help bad actors design dangerous cyber-attacks and weapons, deceiving and manipulating as well as or better than humans, and finding ways to self-replicate in spite of contrary programming instructions.

Finally, my last main point concerns the need for pre-deployment requirements. Developers should be required to register their system and demonstrate its safety and security even before the system is fully trained and developed, and before deployment. We need to address and target the risks that emerge earlier in an AI system's life cycle, which the current law doesn't seem to do.

In conclusion, I welcome the committee's questions and look forward to hearing what my fellow witnesses have to say. All of their comments thus far have been quite interesting.

At this point, I would like to thank you for having this important conversation.

11:25 a.m.

The Chair Liberal Joël Lightbound

Thank you very much.

To start our conversation, I will yield the floor to MP Perkins for six minutes.

11:25 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you, Mr. Chair.

Thank you, witnesses, for being here as we continue our study of this very important piece of legislation, and for some very interesting opening testimony.

Originally this bill proposed legislating and regulating only what it called “high-impact systems”, which would not be defined in the law but would be defined in the regulation at some future date.

Is it Mr. Bengio or Dr. Bengio?

11:25 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Either way is fine.

11:25 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

Dr. Bengio, we now have two added definitions in the draft amendments that Minister Champagne has made to the bill. The amendments add a definition, in a schedule, of “high impact”. They also add a new category, which is specifically machine learning, with a third being general purpose. Is “general purpose” getting too broad in terms of the power?

It strikes me that much of the AI that will be used in business involves business processes that are not attached to individuals, the Internet or that kind of thing. There's a company in my riding that's trying to train an AI system to identify the difference between a scallop and a surf clam. To me, that's not something that is high impact. It may be for their business, but at the end of the day, it's just business efficiency. It has and will have a general purpose application, if I'm reading it right.

Does the bill go too far with the general purpose provision?

11:25 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

No. I think it's very important to cover the general purpose AI systems in particular, because they could be the most dangerous if misused. This is the place where there is also the most uncertainty about the harms that could follow from these systems.

I think that having a law that says more oversight is necessary for these general purpose systems will also be an encouragement for developers to create more specialized systems. In fact, in most applications in business and science or medicine, we want a system that's very specialized on one particular kind of question we care about. Until recently, these were the only kinds of AI systems that we knew how to build. General purpose systems like the large language models can be specialized and turned into something specific that doesn't know everything about the world and only knows some specific questions, in which case they become much more innocuous.

11:25 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you.

Mr. Cofone, I have a question around your discussion about groups and larger harms.

Some witnesses, going back to the beginning of our study of this bill, from Jim Balsillie on, talked about the fact that the bill fails to deal with group harms and group risks to privacy as they relate to artificial intelligence. Could you expand on that a little more? What would you see as needing to be added?

You mentioned proposed subsection 5(1) of AIDA. Can you share with us a little more about what you had in mind?

11:25 a.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

Of course. The directive on automated decision-making explicitly recognizes that harms can be done to individuals or communities, but when it defines harm in proposed subsection 5(1), AIDA has repeated references to individuals for harm to property and for economic, physical and psychological harm.

The thing is that harms in AIDA are, by their nature, often diffuse. Oftentimes they are harms to groups, not to individuals. For a good example of this, think of AI bias, which is covered in proposed subsection 5(2), not in 5(1). If you have an automated system that allocates employment, for example, and it is biased, it is very difficult to know whether a particular individual got or didn't get the job because of that bias. It is easier to see that the system may be biased against a certain group.

The same goes for representation issues in AI. An individual would have difficulty in proving harm under the act, but the harm is very real for a particular group. The same is true of misinformation. The same is true of some types of systemic discrimination that may not be captured by the current definition of bias in the act.

What I would find concerning is that by regulating a technology that is more likely to affect groups rather than individuals under a harm definition that specifically targets individuals, we may be leaving out most of what we want to cover.

11:30 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

I very much look forward to getting your draft amendment on that and taking a look at it. Thank you.

11:30 a.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

Thank you.

11:30 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

I would like to ask this of perhaps all of the witnesses, maybe starting with Ms. Strome.

We've had a great debate about Dr. Bengio's view that having an imperfect bill is better than not having a bill. The challenge for parliamentarians lies in two aspects of that.

One, I never like passing an imperfect bill, especially one as important as this. I don't think there's any merit in sort of saying that we're number one because we got our first bill through. The way Parliament works is that it's five to 10 years before legislation comes back.

I also don't like giving the department a blank cheque to basically not have to come back to Parliament on an overall public policy framework for how we're going to govern this. This bill lacks that. It just deals with the specifics of high-impact, general purpose and machine-learning systems. It doesn't speak to the overall picture the way the Canada Health Act does in setting out its five principles.

What are the five principles of AI, such as transparency and that kind of thing? The bill doesn't speak to that, and it governs all AI. I think that's an issue going forward. I also think that it's an issue to give the bureaucracy, while maintaining flexibility, total control over future development without having to seek approval from Parliament.

I would like to ask all of the witnesses about the five things, four things or three things that are high-level philosophies about how we should govern AI in Canada, which this bill does not seem to define.

I'll start with Ms. Strome, and then we'll go from there.

11:30 a.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Just to make sure that I understand correctly, are you asking us to zero in on areas that the bill doesn't currently address?

11:30 a.m.

Rick Perkins Conservative South Shore—St. Margarets, NS

No. It's sort of the high-level idea that all AI, when a user is interacting with it, needs to be transparent.

What similar types of philosophies, setting aside whether a system is high-impact, machine learning or general purpose, should govern all of this in the act but are missing from the bill?

11:30 a.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Absolutely.

There's a broad international consensus about what constitutes safe and trustworthy AI. Whether it's the OECD principles or the Montreal declaration, many organizations have arrived at a shared understanding of what constitutes responsible AI.

These principles include having fairness as a primary concern. That ensures that AI delivers recommendations that treat people fairly and equitably and that there's no discrimination and no bias.

Another principle is accountability, which means ensuring that AI systems and developers of AI systems are accountable for the impacts of the technologies that they are developing.

Transparency is one that you mentioned. That ensures that we understand and have the opportunity to interrogate AI systems and models and get a better understanding of how they arrive at the decisions and recommendations they produce.

Privacy is a principle that is very deeply interconnected with the bill that's before you today. Those questions are deeply intertwined with AI as well, to ensure that the fundamental principles and rights of privacy are also protected.

11:35 a.m.

The Chair Liberal Joël Lightbound

Thank you very much, Madam Strome.

Mr. Perkins, hopefully another MP will pick up where you left off. We're way over time.

Mr. Turnbull, you have the floor.

11:35 a.m.

Ryan Turnbull Liberal Whitby, ON

Thanks, Chair.

Thanks to all the witnesses for being here today. It seems that we have some really important testimony, so thank you for making the time. Thank you for lending your expertise to this important conversation.

I think we've all heard the phrase or the cliché that “perfection can be the enemy of the good”. I wonder if this is one of those instances.

We have a very fast-evolving AI space and lots of expertise here in Canada, but then we have people with differing opinions. Some people say that we should split the bill up and redo the AIDA portion. Others say that we need to move forward. In a lot of the opening testimony I heard from you today, the message was that speed is of the essence.

Mr. Bengio, maybe you can comment on whether you think that we should start over with AIDA and maybe comment on the importance of moving quickly.

11:35 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Yes, I mentioned urgency many times in my presentation because you have to understand AI not as the static thing it is now but as a trajectory driven by research and development, mostly in large companies but also in academia. As these systems become smarter and more powerful, their abilities are dual use, and that means more good and more harm can happen. The harm part is what we need government to protect us from.

In particular, going back to the question from Mr. Perkins, we need to make sure that one of the principles is that major harm, such as a national security threat, will not easily come from products that are considered legal and within the law. This is why the high-impact category and the different ways it could be spelled out are so important.

11:35 a.m.

Ryan Turnbull Liberal Whitby, ON

Thank you.

I'll stick with you, Mr. Bengio, for the moment. I want to also ask you what the risks are to Canadians if AI is not regulated sooner rather than later. You've mentioned the idea that there's more good and more harm that can be done, but in the absence of any regulation and any law, what are the potential harms you see?