Evidence of meeting #101 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Erica Ifill  Journalist and Founder of Podcast, Not In My Colour, As an Individual
Adrian Schauer  Founder and Chief Executive Officer, AlayaCare
Jérémie Harris  Co-Founder, Gladstone AI
Jennifer Quaid  Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual
Céline Castets-Renard  Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual
Jean-François Gagné  AI Strategic Advisor, As an Individual
George E. Lafond  Strategic Development Advisor, As an Individual
Stephen Kukucha  Chief Executive Officer, CERO Technologies
Guy Ouimet  Engineer, Sustainable Development Technology Canada

3:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

I call the meeting to order.

Good afternoon everyone, and welcome to meeting No. 101 of the House of Commons Standing Committee on Industry and Technology.

Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I’d like to welcome our witnesses today. Mr. Jean-François Gagné, an AI strategic advisor, will have an opportunity to give his opening address when he joins us a little later. We also have with us Ms. Erica Ifill, a journalist and founder of the podcast Not In My Colour, and, from AlayaCare, Mr. Adrian Schauer, its founder and chief executive officer.

I want to thank you, Mr. Schauer, for making yourself available again today. I know we had some technical difficulties before, but the headset looks fine this afternoon. Thanks for being here again.

Thank you, Madam Clerk, for the help, as well.

We have, from AltaML Inc., Nicole Janssen, co-founder and chief executive officer; and from Gladstone AI, we have Jérémie Harris.

And last, we will have Jennifer Quaid, associate professor and vice-dean research, civil law section, Faculty of Law, University of Ottawa, along with Céline Castets-Renard, full law professor, Faculty of Civil Law, University of Ottawa.

As we have several witnesses, we will begin the discussion immediately. Each of you will have five minutes for an opening statement.

Madame Ifill, the floor is yours.

3:50 p.m.

Erica Ifill Journalist and Founder of Podcast, Not In My Colour, As an Individual

Good afternoon to the industry and technology committee, as well as to their assistants and to whoever may be in the room.

I am here today to talk about part 3 of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Part 3 is the Artificial Intelligence and Data Act.

First, there are some issues, some challenges, with this bill, especially with respect to its societal and public effects.

Number one, when this bill was crafted, there was very little public oversight. There were no public consultations, and there are no publicly accessible records accounting for how these meetings were conducted by the government's AI advisory council, nor which points were raised.

Public consultations are important, as they allow a variety of stakeholders to exchange ideas and develop innovative policy that reflects the needs and concerns of affected communities. As I raised in The Globe and Mail, the lack of meaningful public consultation, especially with Black, Indigenous, people of colour, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations, is echoed by AIDA's failure to acknowledge AI's characteristic of systemic bias, including racism, sexism and heteronormativity.

The second problem with AIDA is the need for proper public oversight.

The proposed artificial intelligence and data commissioner is set to be a senior public servant designated by the Minister of Innovation, Science and Industry and, therefore, is not independent of the minister and cannot make independent public-facing decisions. Moreover, at the discretion of the minister, the commissioner may be delegated the “power, duty” and “function” to administer and enforce AIDA. In other words, the commissioner is not afforded the powers to enforce AIDA in an independent manner, as their powers depend on the minister's discretion.

Number three is the human rights aspect of AIDA.

First of all, how it defines “harm” is so specific, siloed and individualized that the legislation is effectively toothless. According to this bill:

harm means

(a) physical or psychological harm to an individual;

(b) damage to an individual's property; or

(c) economic loss to an individual.

That's quite inadequate when talking about systemic harm that goes beyond the individual and affects some communities. I wrote the following in The Globe and Mail:

“While on the surface, the bill seems to include provisions for mitigating harm,” [as said by] Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa's Centre for Law, Technology and Society, “[that] language focuses [only] on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kinds of...systems.”

Racial bias is also a problem for artificial intelligence systems, especially those used in the criminal justice system, and it is one of the greatest risks.

A 2019 federal study in the United States showed that Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

A study from the U.K. showed that the facial recognition technology the study tested performed the worst when recognizing Black faces, especially Black women's faces. These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology also disproportionately affects Black and Indigenous protesters in many ways.

From a privacy perspective, algorithmic systems raise issues in their very construction, because building them requires the collection and processing of vast amounts of personal information, which can be highly invasive. The reidentification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk.

There are deleterious impacts or risks stemming from the use of technology concerning people's financial situations or physical and/or psychological well-being. The primary issue here is that a significant amount and type of personal information can be gathered that is used to surveil and socially sort, or profile, individuals and communities, as well as forecast and influence their behaviour. Predictive policing does this.

In conclusion, algorithmic systems can also be used in the public sector context to assess a person's ability to receive social services, such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status, geographic location, as well as other data points analyzed.

3:55 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Schauer, please begin.

3:55 p.m.

Adrian Schauer Founder and Chief Executive Officer, AlayaCare

Thank you.

I think this will be an interesting perspective side-by-side with Erica's.

I'm the founder and CEO of AlayaCare, a home care software company. We deliver our solutions both to private sector providers and to public sector health authorities.

In the machine learning domain, we have all sorts of risk models we deliver. One of the things that you can imagine our ultimately building up to is a model that, on the basis of an assessment and patient data, will help at a population health level determine where the health system's resources get optimally allocated. In that use case, it's definitely a high-impact system.

I really like two things about the framework in this bill. One is that you're looking to adhere to international standards. As a developer of software looking to generate value in our society, we can't have a thousand fiefdoms. Let me start with a thanks for that. The second thing I really appreciate is your segmentation of the actors between the people who generate the AI models, those who develop them into useful products, and those who operate them in public. I think that's a very useful framework.

On the question of bias, I think it raises some interesting questions. I think we have to be very careful about legislating against bias in the right way. In developing the model, really the only difference between a linear regression—think of what you might do in Excel—and an AI model is the black box aspect. Yes, if you're trying to figure out how to allocate health system resources, you probably don't want to put certain elements that could be bigoted into your model, because that's not how a society wants to be allocating health resources. With a machine learning model, you're going to feed a bunch of data into a black box and out comes a prediction or an optimization. Then you can imagine all sorts of biases creeping in. It might be, for example, that the model concludes that a certain identity, say left-handed people, can actually get by with a bit less home care and still stay out of the hospital.... That wouldn't be programmed into the algorithm, but it could certainly be an output of the algorithm.

I think what we need to be careful of is assigning the right accountability to the right actor in the framework. I think the model developers need to demonstrate a degree of care in the selection of the training data. To return to the previous example—and I can say this with some certainty—the reason that the facial recognition model doesn't perform as well for Indigenous communities is that it just wasn't fed enough training data from that particular group. When you're developing the AI model, you need to take care, and demonstrate that you've taken care, to use a representative training set that's not biased.

When you develop an algorithm and put it on the market, I think providing as much transparency as possible to the people who will use it is definitely something that we should endeavour to do. Then, in the use of that algorithm and of its output, you have a representative training set and the right caveats. I think we have to be careful not to bring inappropriate accountability back to the model developers. That's my concern. Otherwise, you're going to be pitting usefulness against potential frameworks for bias.

What I think we have to be careful about with this legislation is to not disproportionately shift societal concerns about how resources should be allocated—you name the use case—onto the tool developer, but rather to situate them appropriately with the user of the tool.

That's my perspective on the bill.

4 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Schauer.

I will now give the floor to Jérémie Harris, of Gladstone AI, for five minutes.

4 p.m.

Jérémie Harris Co-Founder, Gladstone AI

Thank you and good afternoon, Mr. Chair and members of the committee.

I'm here on behalf of Gladstone AI, which is an AI safety company that I co-founded. We collaborate with researchers at all the world's top AI labs, including OpenAI and partners in the U.S. national security community, to develop solutions to pressing problems in advanced AI safety.

Today's AI systems can write software programs nearly autonomously, so they can write malware. They can generate voice clones of regular people using just a few seconds of recorded audio, so they can automate and scale unprecedented identity theft campaigns. They can guide inexperienced users through the process of synthesizing controlled chemical compounds. They can write human-like text and generate photorealistic images that can power, and have powered, unprecedented and large-scale election interference operations.

These capabilities, by the way, have essentially emerged without warning over the last 24 months. Things have transformed in that time. In the process, they have invalidated key security assumptions baked into the strategies, policies and plans of governments around the world.

This is going to get worse, and fast. If current techniques continue to work, the equation behind AI progress has become dead simple: Money goes in, in the form of computing power, and IQ points come out. There is no known way to predict what capabilities will emerge as AI systems are scaled up using more computing power. In fact, when OpenAI researchers used an unprecedented amount of computing power to build GPT-4, their latest system, even they had no idea it would develop the ability to deceive human beings or autonomously uncover cyber exploits, yet it did.

We work with researchers at the world's top AI labs on problems in advanced AI safety. It's no exaggeration to say that the water cooler conversations among the frontier AI safety community frame near-future AI as a weapon of mass destruction. It's WMD-like and WMD-enabling technology. Public and private frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years. Our own research suggests this is a reasonable assessment.

Beyond weaponization, evidence also suggests that, as advanced AI approaches superhuman general capabilities, it may become uncontrollable and display what are known as “power-seeking behaviours”. These include AIs preventing themselves from being shut off, establishing control over their environment and even self-improving. Today's most advanced AI systems may already be displaying early signs of this behaviour. Power-seeking is a well-established risk class. It's backed by empirical and theoretical studies by leading AI researchers published at the world's top AI conferences. Most of the safety researchers I deal with on a day-to-day basis at frontier labs consider power-seeking by advanced AI to be a significant source of global catastrophic risk.

All of which is to say that, if we anchor legislation on the risk profile of current AI systems, we will very likely fail what will turn out to be the single greatest test of technology governance we have ever faced. The challenge AIDA must take on is mitigating risk in a world where, if current trends simply continue, the average Canadian will have access to WMD-like tools, and in which the very development of AI systems may introduce catastrophic risks.

By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today. I don't know what capabilities will exist. As I mentioned earlier, no one can know. However, when I talk to frontier AI researchers, the predictions I hear suggest that WMD-scale risk is absolutely on the table on that time horizon. AIDA needs to be designed with that level of risk in mind.

To rise to this challenge, we believe AIDA should be amended. Our top three recommendations are as follows.

First, AIDA must explicitly ban systems that introduce extreme risks. Because AI systems above a certain level of capability are likely to introduce WMD-level risks, there should exist a capability level, and therefore a level of computing power, above which model development is simply forbidden, unless and until developers can prove their models will not have certain dangerous capabilities.

Second, AIDA must address open source development of dangerously powerful AI models. In its current form, on my reading, AIDA would allow me to train an AI model that can automatically design and execute crippling malware attacks and publish it for anyone to freely download. If it's illegal to publish instructions on how to make bioweapons or nuclear bombs, it should be illegal to publish AI models that can be downloaded and used by anyone to generate those same instructions for a few hundred bucks.

Finally, AIDA should explicitly address the research and development phase of the AI life cycle. This is very important. From the moment the development process begins, powerful AI models become tempting targets for theft by nation-state and other actors. As models gain more capabilities and context awareness during the development process, loss of control and accidents become greater risks as well. Developers should bear responsibility for ensuring the safe development of their systems, as well as their safe deployment.

AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities.

Our full recommendations are included in my written submission, and I look forward to taking your questions. Thank you.

4:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Harris.

Over to you, Professor Quaid.

4:05 p.m.

Dr. Jennifer Quaid Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology, I am very pleased to be here once again, this time to talk about Bill C‑27.

I am grateful to be able to share my time with my colleague Céline Castets-Renard, who is online and who is the university research chair in responsible AI in a global context. As one of the preeminent legal experts on artificial intelligence in Canada and in the world, she is very familiar with what is happening elsewhere, particularly in the EU and the U.S. She also leads a SSHRC-funded research project on AI governance in Canada, of which I am part. The project is directed squarely at the question you are grappling with today in considering this bill, which is how to create a system that is consistent with the broad strokes of what major peer jurisdictions, such as Europe, the U.K. and the U.S., are doing while nevertheless ensuring that we remain true to our values and to the foundations of our legal and institutional environment. In short, we have to create a bill that's going to work here, and our comments are directed at that; at least, my part is. Professor Castets-Renard will speak more specifically about the details of the bill as it relates to regulating artificial intelligence.

Our joint message to you is simple. We believe firmly that Bill C-27 is an important and positive step in the process of developing solid governance to encourage and promote responsible AI. Moreover, it is vital and urgent that Canada establish a legal framework to support responsible AI governance. Ethical guidelines have their place, but they are complementary to and not a substitute for hard rules and binding enforceable norms.

Thus, our goal is to provide you with constructive feedback and recommendations to help ready the bill for enactment. To that end, we have submitted a written brief, in English and in French, that highlights the areas that we think would benefit from clarification or greater precision prior to enactment.

This does not mean that further improvements are not desirable. Indeed, we would say they are. It's only that we understand that time is of the essence, and we have to focus on what is achievable now, because delay is just not an option.

In this opening statement, we will draw your attention to a subset of what we discuss in the brief. I will briefly touch on four items before I turn it over to my colleague, Professor Castets-Renard.

First, it is important to identify who is responsible for what aspects of the development, deployment and putting on the market of AI systems. This matters for determining liability, especially of organizations and business entities. Done right, it can help enforcers gather evidence and assess facts. Done poorly, it may create structural immunity from accountability by making it impossible to find the evidence needed to prove violations of the law.

I would also add that the current conception of accountability is based on state action only, and I wonder whether we should also consider private rights of action. Those are being explored in other areas, including, I might add, in Bill C-59, which has amendments to the Competition Act.

Second, we need to use care in crafting the obligations and duties of those involved in the AI value chain. Regulations should be drafted with a view to what indicators can be used to measure and assess compliance. Especially in the context of regulatory liability and administrative sanctions, courts will look to what regulators demand of industry players as the baseline for deciding what qualifies as due diligence and what can be expected of a reasonably prudent person in the circumstances.

While proof of regulatory compliance usually falls on the business that invokes it, it is important that investigators and prosecutors be able to scrutinize claims. This requires metrics and indicators that are independently verifiable and that are based on robust research. In the context of AI, its opacity and the difficulty for outsiders to understand the capability and risks of AI systems makes it even more important that we establish norms.

Third, reporting obligations should be mandatory and not ad hoc. At present, the act contemplates the power of the AI and data commissioner to demand information. Ad hoc requests to examine compliance are insufficient. Rather, the default should be reporting at regular intervals, with standard information requirements. The provision of information allows regulators to gain an understanding of what is happening at the research level and at the deployment and marketing level at a pace that is incremental, even if one can say that the development of AI is exponential.

This builds institutional knowledge and capacity by enabling regulators and enforcers to distinguish between situations that require enforcement and those that do not. That seems to be the crux of the matter. Everyone wants to know when it's right to intervene and when we should let things evolve. It also allows for organic development of new regulations as new trends and developments occur.

I would be happy to talk about some examples. We don't have to reinvent the wheel here.

Finally, the enforcement and implementation of the AI act as well as the continual development of new regulations must be supported by an independent, robust institutional structure with sufficient resources.

The proposed AI data commissioner cannot accomplish this on their own. While not a perfect analogy—and I know some people here know that I'm the competition expert—I believe that the creation of an agency not unlike the Competition Bureau would be a model to consider. It's not perfect. The bureau is a good example because it combines enforcement of all types—criminal, regulatory, administrative and civil—with education, public outreach, policy development and now digital intelligence. It has a highly specialized workforce trained in the relevant disciplines it needs to draw on to discharge its mandate. It also represents Canada’s interests in multilateral fora and collaborates actively with peer jurisdictions. It matters, I think, to have that for AI.

I am now going to turn it over for the remaining time to my colleague Professor Castets-Renard.

Thank you.

4:15 p.m.

Madam Céline Castets-Renard Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual

Thank you very much, Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology.

I would also like to thank my colleague, Professor Jennifer Quaid, for sharing her time with me.

I'm going to restrict my address to three general comments. I'll begin by saying that I believe artificial intelligence regulation is absolutely essential today, for three primary reasons. First of all, the significance and scope of the current risks are already well documented. Some of the witnesses here have already discussed current risks, such as discrimination, and future and existential risks. It's absolutely essential today to consider the impact of artificial intelligence, in particular its impact on fundamental rights, including privacy, non-discrimination, protecting the presumption of innocence and, of course, the observance of procedural guarantees for transparency and accountability, particularly in connection with public administration.

Artificial intelligence regulation is also needed because the technologies are being deployed very quickly and the systems are being further developed and deployed in all facets of our professional and personal lives. Right now, they can be deployed without any restrictions because they are not specifically regulated. That became obvious when ChatGPT hit the marketplace.

Canada has certainly developed a Canada-wide artificial intelligence strategy over a number of years now, and the time has now come to protect these investments and to provide legal protection for companies. That does not mean allowing things to run their course, but rather providing a straightforward and understandable framework for the obligations that would apply throughout the entire accountability chain.

The second general comment I would like to make is that these regulations must be compatible with international law. Several initiatives are already under way in Canada, which is certainly not the only country to want to regulate artificial intelligence. I'm thinking in particular, internationally speaking, of the various initiatives being taken by the Organisation for Economic Co‑operation and Development, the Council of Europe and, in particular, the European Union and its artificial intelligence bill, which should be receiving political approval tomorrow as part of the inter-institutional trialogue negotiations between the Council of the European Union, the European Parliament and the European Commission. The agreement has reached its final phase, after two years of discussion. President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence also needs to be given consideration, along with the technical standards developed by the National Institute of Standards and Technology and the International Organization for Standardization.

My final general comment is about how to regulate artificial intelligence. The bill before us is not perfect, but the fact that it is risk-based is good, even though it needs strengthening. By this I mean considering risks that are now considered unacceptable, and which are not necessarily existential risks, but risks that we can already identify today, such as the widespread use of facial recognition. Also worth considering is a better definition of the risks to high-impact systems.

We'd like to point out and praise the amendments made by the minister, Mr. Champagne, before your committee a few weeks ago. In fact, the following remarks, and our brief, are based on these amendments. It was pointed out earlier that not only individual risks have to be taken into account, but also collective risks to fundamental rights, including systemic risks.

I'd like to add that it's absolutely essential, as the minister's amendments suggest, to consider the general use of artificial intelligence separately, whether in terms of systems or foundational models. We will return to this later.

I believe that a compliance-based approach that reflects the recently introduced amendments should be adopted, and it is fully compatible with the approach adopted by the European Union.

When all is said and done, the approach should be as comprehensive as possible, and I believe that the field of application of Bill C‑27 is too narrow at the moment and essentially focused on the private sector. It should be extended to the public sector and there should be discussions and collaboration with the provinces in their fields of expertise, along with a form of co‑operative federalism.

Thank you for your attention. We'll be happy to discuss these matters with you.

4:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Gagné, you have the floor.

4:20 p.m.

Jean-François Gagné AI Strategic Advisor, As an Individual

Thank you very much.

I'm pleased to be here to testify as an individual.

I'm a strategic advisor in artificial intelligence. I've spent my entire career using AI technology, which became available in the early 2000s. I worked in operational research, artificial intelligence, and applied mathematics. I developed tools and software that have been used around the world. In 2016, I founded Element AI and was the company's president until it was sold to ServiceNow in 2021.

I have frequently collaborated internationally. For two years, I was the co‑chair of the working group on innovation and marketing for the Global Partnership on Artificial Intelligence. I also represented Canada on the European Commission's high-level expert group on artificial intelligence. Canada was the only country to have participated that was not in the European Union. I co‑chaired the drafting of the main deliverable on regulation and investment for trustworthy artificial intelligence.

I was involved in many events held by the Organization for Economic Co‑operation and Development and the Institute of Electrical and Electronics Engineers, in addition to many other international contributions. I was also a member of federal sectoral economic strategy tables for digital industries.

Despite Canada's track record in artificial intelligence research, and its undeniable contribution to basic research, it has gradually been losing its leadership role. It's important to be aware of the fact that we are no longer in the forefront. Our researchers now have limited resources. Conducting research and understanding what is happening in this field today is extremely expensive, and many innovations will emerge in the private sector. It's a fact. Much of the work being published by researchers has been done in collaboration with foreign firms, because that's how they can get access to the resources needed to train models and conduct tests, so that they can continue to publish and come up with new ideas.

Canada has always been somewhat less competitive than the United States, and although things have not got worse, they haven't improved. For a technology as essential as artificial intelligence, which I like to compare literally to energy, we're talking about intelligence, know-how and capabilities. It's a technology that is already being deployed in every industry and every sphere of life. Absolutely no corner of society is unaffected by it.

What I would like to underscore is the importance of not treating artificial intelligence homogeneously, just as the various regulations and statutes for oil, natural gas and electricity are not so treated. I could even start breaking it down into all the subsidiary aspects of production for each of these resources. It's very difficult to treat artificial intelligence in the same way for each of its applications. Everything is moving forward very quickly and it's highly complex, and when you put all the facts together, it feels overwhelming. That, unfortunately, is what we hear all too often in the media. We've been here for quite a while and we've already heard words like "fear" and "advancement". There has also been talk of uncertainty about the future.

So, to return to the subject at hand, yes, it's absolutely urgent to take action. I am in no way hinting that measures ought not to be taken, but they ought to be appropriate for the situation now facing us.

We are facing a rapidly evolving, complex situation that affects every sphere of society. It's important to avoid adopting a single, straightforward and overly forceful response. What would happen if we took that kind of approach? We would perhaps protect ourselves, but it would certainly prevent us from seizing opportunities and promoting the kind of economic development and productivity growth that would enrich the whole country. That's simply a fact. We can't deal with every single potential situation, because it would be too complex.

If we try to do everything and cover all aspects, our regulations will be too vague, ineffective and misunderstood. The economic outcome of vague regulation—you know this better than I do—will be that investments will not flow in. If consequences are unclear or definitions are left until later, companies will simply invest elsewhere. This is a highly mobile digital field. Many Canadians build and train models in the United States, beyond the reach of the rules that apply to our companies and our universities. It's important to be aware of that.

I believe that these are the key elements. They are central to our deliberations about how to write the rules, and in particular how they will be fine-tuned. They will also guide the effort required to do the work properly and come up with a clear and accurate regulatory framework that promotes investment. With a framework like that, we'll know exactly what we are going to get if we make such and such an investment, and we'll understand exactly what it will cost to provide transparency, to publish data and to check that the data have been anonymized.

That would enable organizations to invest as much as they and we want. If we are clear, organizations will be able to do the computations and decide whether or not to invest in Canada and deploy their services here. It will then be up to us to determine whether the bar has been set too high and whether the criteria are overly restrictive.

Vague regulations would guarantee that nothing happens. Companies will simply go elsewhere, because it's too easy to do so. Various other elements are on my list, and I will summarize them. Please excuse me for not having done so prior to my presentation. I will send the committee all the details and recommendations with respect to the adjustments that should be made.

In this regulatory framework, I believe that transparency will be very important if there is to be a climate of trust. It's important to ensure that users of the technology are aware that they are interacting with it. Some questions and subjects arise in all industries. It's important to be able to know what we are getting.

I'm talking about the underlying principles: stating what services we can access, their parameters and their specifications. If a service changes or its model is updated, that would enable us to assess the repercussions of using it. There are also all the other principles that would ensure people are not being manipulated and that require compliance with ethical and other issues. These are fundamental principles that must be part of the regulatory framework.

One of my most serious concerns is the lack of specificity and the possibility that the law would be too broad in scope. I learned a lesson from my participation in what led to the European Union's artificial intelligence law. Europe tried to come up with exhaustive legislative measures that attempted to include almost everything. However, many of the recommendations made by the committee at the time focused on the need to work with industry, the need for accuracy and avoiding a piece of legislation that tried to cover everything.

Of course, something new always comes up. It could be generative artificial intelligence or the next generation of artificial intelligence as applied to cybersecurity, health and all aspects of the economy, services and our lives. There's always something that has to be amended or altered.

My view is that caution is needed in this respect, as well as an extremely surgical approach leading to regulations specific to each industry sector, drawn up with that sector's assistance, the automobile sector for instance.

4:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Gagné.

That concludes the statements from the witnesses. We are now going to begin the first round of questions.

Mr. Généreux, you have the floor for six minutes.

4:30 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Chair.

Thanks to all the witnesses.

Earlier, Mr. Harris, I had the impression I was in a movie in which a parliamentary committee was conducting a study on an artificial intelligence bill. You were telling the people on this committee that the third world war was about to arrive and that it would be technological, by which I mean that no weapons of any kind would be used. Listening to you today, I felt like swearing, but unfortunately, I couldn't.

My greatest frustration, and I don't think I'm alone around this table in feeling that way, is that the bill before us includes a series of elements underpinned by three principles: privacy, the tribunal and artificial intelligence. However, according to the testimony we heard today, artificial intelligence should have been dealt with in a separate bill.

We are being told that there have already been major advances in artificial intelligence since the start of our study, including the signing of a memorandum of understanding in England. Some countries decided to introduce a voluntary code while awaiting the adoption of various bills.

Ms. Castets-Renard, you spoke about a trilogue that would address certain issues. You are no doubt talking about Europe. Mr. Gagné, you also spoke earlier about measures proposed in reports you submitted to the European Union. Are you talking about the same thing? I'm not sure I've understood properly.

4:30 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

I will let Ms. Castets-Renard take that one, because she's the expert in European law.

4:30 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

I'm trying to understand whether there's a link between the work done by Mr. Gagné and the European Union, and the bill that could possibly be adopted tomorrow.

4:30 p.m.

Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual

Prof. Céline Castets-Renard

I can't speak on behalf of Mr. Gagné because I'm not exactly sure what he was involved in, but I think he took part in the work on ethics done by the group of experts that preceded the proposed European Union regulations.

What I'm talking about is the proposed regulation published in 2021 by the European Commission, and afterwards adopted by the Council of the European Union in December 2022 and by the European Parliament in June 2023. In Europe, legislation is decided by three partners, or co-legislators. In the case under discussion, the three partners have to agree on the same wording, because as things stand now, each has adopted a different version. Since the summer, and particularly since September, this trilogue has been under way, and there has indeed been debate among the representatives of these three institutions. There is going to be a very important meeting tomorrow, the fifth of its kind. It is therefore possible that there might be political agreement on the wording, which in any event must be adopted before the European elections in June 2024.

4:30 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

So, Ms. Castets-Renard, a bill could be passed before 2024 if the text is adopted or agreed to by the three parties tomorrow.

Mr. Gagné, you and Mr. Harris spoke about how quickly artificial intelligence, technology, and research and development were moving forward. Everyone is aware of that. Earlier, Ms. Castets-Renard referred to the amendments introduced by the minister, whose actual content we don't really know because we haven't yet had an opportunity to read them. What's your view of this bill compared to what is being done elsewhere in the world?

This question is also for Ms. Quaid.

4:35 p.m.

AI Strategic Advisor, As an Individual

Jean-François Gagné

This bill seems to be on a track not unlike the one in Europe, by which I mean that there is an attempt being made to come up with legislation on artificial intelligence. However, in my address, I suggested that you think about the scope of this legislation and the effort required to get there.

The United Kingdom also has a bill in the works and over 280 people are working full time on it, which indicates the scale of the task. As Canada is going through a process similar to the one in Europe, I believe it would be a good idea, in view of Canada's resources—

4:35 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Don't be afraid to say it plainly, Mr. Gagné.

4:35 p.m.

AI Strategic Advisor, As an Individual

Jean-François Gagné

—and our role in this, to share the work. For example, when people talk about self-driving cars, that's artificial intelligence. Smart cities, that's artificial intelligence. All these sectors need to look into the impact of artificial intelligence on privacy. There are cameras in cars, and these vehicles involve risks. For example, how can you certify that a car is self-driving and fully automated? How would that work in a parking lot where cars interact? What will the rules of the game have to be? What data could be shared? I used cars as an example, but I could go on for quite a while.

What I mean is that it's a good idea to come up with a framework and principles. There are certain basic principles for the protection of privacy and personal information, as well as data anonymization. Everything you've been working on in some parts of the bill is, I believe, extremely useful, because it's a specific subject.

But artificial intelligence is not a specific subject. It's a technology that has many uses. That's my point of view.

4:35 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Gagné.

Ms. Quaid, do you have anything you'd like to add, briefly?

4:35 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

I'd like to point out that it's not necessary to set sectoral regulations or frameworks in opposition to general regulations. I think the danger is mixing too many things up when the emphasis should be on what's on the table, which is a general framework. That does not exclude other frameworks, at the provincial level for example.

We are lagging behind. Insofar as this is something that affects every sector, there will have to be legislation more specifically suited to certain sectors. However, this doesn't exclude a general framework or set sectoral rules in opposition to it. Europe has certainly gone in that direction. It has overall regulations and sectoral regulations, including for transportation.

The United Kingdom has decided not to introduce legislation and will continue with a voluntary framework. Without wishing to speak on behalf of Ms. Castets-Renard, who knows much more about it than I do, I can say that that's the wrong thing to do. I believe we need regulation and a framework.

4:35 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

We signed it, at least.

We signed that agreement.