Digital Charter Implementation Act, 2022

An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Status

In committee (House), as of April 24, 2023


Summary

This is from the published bill. The Library of Parliament has also written a full legislative summary of the bill.

Part 1 enacts the Consumer Privacy Protection Act to govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities. In consequence, it repeals Part 1 of the Personal Information Protection and Electronic Documents Act and changes the short title of that Act to the Electronic Documents Act. It also makes consequential and related amendments to other Acts.
Part 2 enacts the Personal Information and Data Protection Tribunal Act, which establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act and to impose penalties for the contravention of certain provisions of that Act. It also makes a related amendment to the Administrative Tribunals Support Service of Canada Act.
Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.


Votes

April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Diane Poitras President, Commission d'accès à l'information du Québec

Thank you, Mr. Chair.

I'd like to thank all the members of the committee for inviting me to participate in this study.

As you know, Quebec has undertaken a major reform of its privacy laws to make them more responsive to the new challenges posed by the current digital and technological environment. An Act to modernize legislative provisions as regards the protection of personal information, better known as Bill 25, was passed in September 2021. Its provisions have come into force or will come into force gradually over a three‑year period.

The changes made by Bill 25 can be grouped into three categories. The first involves new obligations for provincial businesses, organizations and political parties. The second contains new rights for citizens. Lastly, the third includes new powers for the Commission d'accès à l'information du Québec.

Among the new obligations for businesses is the addition of an accountability principle for the personal information they hold. This requires each company to designate a privacy officer and to establish governance policies and practices. When a confidentiality incident occurs, businesses are also subject to new obligations, which are similar to those found in Bill C‑27.

Bill 25 also introduces enhanced transparency obligations about what companies do with personal information.

To give citizens greater control over their information, new consent requirements are provided, such as obtaining express consent when the information is sensitive. To be valid, consent must also meet certain conditions: it must be requested in simple and clear terms, for each of the purposes pursued, and separately from any other information.

The legislation also provides for measures to prevent privacy breaches, such as the requirement to conduct a privacy impact assessment at the design stage of products or technological systems that involve personal information. This type of assessment must also be carried out before personal information is shared outside Quebec, to ensure that it is adequately protected.

If an organization collects personal information by offering a product or a technology service, the privacy settings must, by default, provide the highest level of protection.

The act also provides a framework for the collection and use of particularly sensitive information and certain situations with a higher potential for intrusion, such as profiling, geolocation, biometrics, and information about minors.

New rights for individuals include the right to be forgotten, the right to portability of information and certain rights when a fully automated decision is made about a person by an AI system.

Finally, the commission is being given new powers. It's the organization responsible for overseeing the enforcement of laws relating to access to documents and the protection of personal information, and for promoting those rights in Quebec. It has had order‑making powers since its inception. It may also, on the authorization of a judge, initiate a criminal prosecution for an offence under the acts it is responsible for overseeing.

Bill 25 significantly increased the amount of penalties that can be imposed and lengthened the time frame for such prosecutions.

The commission now also has the authority to impose administrative monetary penalties of up to several million dollars. It can adopt guidelines, and it has enhanced investigative powers.

Bill C‑27 has similar objectives to those that motivated the reform in Quebec. For businesses, the consistency of the rules in the various jurisdictions in which they operate helps to reduce their regulatory burden.

The adoption of similar and interoperable rules facilitates the essential work of collaboration among the various control authorities across the country, and internationally as well. Ultimately, it also ensures respect for people's fundamental rights and increases their confidence in the digital economy and in the use of new technologies such as artificial intelligence, which in turn promotes responsible innovation.

In closing, I would like to point out that a collective, non‑partisan, transparent and inclusive reflection on the framework for artificial intelligence has taken place in recent months in Quebec. More than 200 experts, including the commission, looked at six topics, and a call for public contributions complemented that thinking. The preliminary direction of this work was discussed at a public forum last month.

Recommendations on regulating artificial intelligence will be submitted to the Government of Quebec by the end of the year.

Thank you. I look forward to your questions.

The Chair Liberal Joël Lightbound

Good afternoon, everyone.

Welcome to meeting No. 104 of the House of Commons Standing Committee on Industry and Technology.

Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders. Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

First of all, I'd like to welcome our witnesses. At the same time, I'd like to offer our apologies for the brief delay caused by a vote in the House of Commons.

We welcome Diane Poitras, president of the Commission d'accès à l'information du Québec. Thank you very much for being with us, Mrs. Poitras.

We also have, from the Office of the Information and Privacy Commissioner of Alberta, Diane McLeod, information and privacy commissioner, joining us by video conference. Thanks for being here.

Madame McLeod is accompanied by Cara-Lynn Stelmack, assistant commissioner of case management, and Sebastian Paauwe, manager of innovation and technology engagement. Both are appearing by video conference.

Lastly, we have Michael McEvoy, information and privacy commissioner for the Province of British Columbia.

Thank you to the three of you for joining us today. We have until 5 p.m. Without further ado, I will cede the floor.

I'll give you the floor, Mrs. Poitras. You have five minutes for your opening remarks.

Thank you.

Ryan Williams Conservative Bay of Quinte, ON

I guess the premise of this.... Just for everyone listening right now, the first part of Bill C-27 does not cover the public sector, but to the point that you brought up, we have the Privacy Act, which, it could be argued, we should have been studying at the exact same time. The point I'm making is that there is nothing out there that exists, especially not in AIDA, that addresses AI in the public sector, and we've talked a lot about that.

I'm trying to get a better handle on your recommendation. Should this have been included with AIDA right now, or is this a whole other act that you're looking at that we should have included with this?

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Chair.

Thank you to all the witnesses.

As they say in Quebec, I am “sur le cul”.

I don't know if you know what that means. It means “I'm on my ass.”

I don't know if that translates into that.

I apologize to the interpreters.

Ms. Wylie, you're giving us a particularly interesting lesson.

Bill C‑27 has been on the table for almost two years. It has been evaluated. It was created by public servants, obviously, in Ottawa. Some politicians have done some work to try to put in place legislation to address a problem that, in your view, doesn't really exist. In fact, you are saying that all the legislation we need already exists. We simply have to proceed sector by sector to correct the elements related to artificial intelligence.

At the committee, we have heard from people. Over the past few years, we have conducted studies on blockchain, the automotive industry, the right to repair, and so on.

Today, you are telling us that what we are doing is not working at all. You are telling us to take back the studies we have conducted and the existing legislation and to correct what will affect artificial intelligence, because it is already in all these sectors, let's face it.

My question is still for you, Ms. Wylie, but I would also like to know what Ms. Brandusescu and Ms. Casovan think of your position.

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

One of the criteria in the algorithmic impact assessment is the level of impact on the rights not only of individuals but also of communities. We have heard the call from marginalized communities that Bill C‑27 must go beyond individualized harms and include harms that disproportionately affect certain groups.

Can you explain to us why we need to change some individualized language and ensure that the government directive will be as specific and inclusive as possible?

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Can you give us some examples of some of the criteria used to determine the level of impact of each system? Would it be a good idea to add this type of requirement to Bill C‑27?

Ashley Casovan Managing Director, AI Governance Center, International Association of Privacy Professionals

Thank you for inviting me here to participate in this important study, specifically to discuss AIDA, a component of the digital charter implementation act.

I am here today in my capacity as the managing director of IAPP's AI governance centre. IAPP is a global, non-profit, policy-neutral organization dedicated to the professionalization of the privacy and AI governance workforces. For context, we have 82,000 members located in 150 countries and over 300 employees. Our policy neutrality is rooted in the idea that no matter what the rules are, we need people to do the work of putting them into practice. This is why we make one exception to our neutrality: We advocate for the professionalization of our field.

My position at IAPP builds on nearly a decade-long effort to establish responsible and meaningful policy and standards for data and AI. Previously, I served as executive director for the Responsible Artificial Intelligence Institute. Prior to that, I worked at the Treasury Board Secretariat, leading the first version of the directive on automated decision-making systems, which I am now happy to see included in the amendments to this bill. I also serve as co-chair for the Standards Council of Canada's AI and data standards collaborative, and I contribute to various national and international AI governance efforts. As such, I am happy to address any questions you may have about AIDA in my personal capacity.

While I have always had a strong interest in ensuring technology is built and governed in the best interests of society, on a personal note, I am now a new mom to seven-month-old twins. This experience has raised new questions for me about raising children in an AI-enabled society. Will their safety be compromised if we post photos of them on social media? Are the surveillance technologies commonly used at day cares compromising their privacy?

With this, I believe providing safeguards for AI is more imperative than ever. Recent market research has demonstrated that the AI market size has doubled since 2021 and is expected to grow from around $200 billion in 2023 to nearly $2 trillion by 2030. This demonstrates not only the potential impact of AI on society but also the pace at which it is growing.

This committee has heard from various experts about challenges related to the increased adoption of AI and, as a result, improvements that could be made to AIDA. While the recently tabled amendments address some of these concerns, the reality is that the general adoption of AI is still new and these technologies are being used in diverse and innovative ways in almost every sector. Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Even if it accurately reflects the current state of AI development, it is hard to create a single long-lasting framework that will remain relevant as these technologies continue to change rapidly.

One way of retaining relevance when governing complex technologies is through standards, which is already reflected in AIDA. The inclusion of future agreed-upon standards and assurance mechanisms seems likely, in my experience, to help AIDA remain agile as AI evolves. To complement this concept, one additional safeguard being considered in similar policy discussions around the world is the provision of an AI officer or designated AI governance role. We feel the inclusion of such a role could both improve AIDA and help to ensure that its objectives will be implemented, given the dynamic nature of AI. Ensuring appropriate training and capabilities of these individuals will address some of the concerns raised through this review process, specifically about what compliance will look like, given the use of AI in different contexts and with different degrees of impacts.

This concept is aligned with international trends and requirements in other industries, such as privacy and cybersecurity. Privacy law in British Columbia and Quebec includes the provision of a responsible privacy officer to effectively oversee implementation of privacy policy. Additionally, we see recognition of the important role people play in the recent AI executive order in the United States. It requires each agency to designate a chief artificial intelligence officer, who shall hold primary responsibility for managing their agency's use of AI. A similar approach was proposed in a recent private member's bill in the U.K. on the regulation of AI, which would require any business that develops, deploys or uses AI to designate an AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business.

History has shown that when professionalization is not sufficiently prioritized, a daunting expertise gap can emerge. As an example, ISC2's 2022 cybersecurity workforce study discusses the growing cyber-workforce gap. According to the report, there are 4.7 million cybersecurity professionals globally, but there is still a gap of 3.4 million cybersecurity workers required to address enterprise needs. We believe that without a concerted effort to upskill professionals in parallel fields, we will face a similar shortfall in AI governance and a dearth of professionals to implement AI responsibly in line with Bill C-27 and other legislative objectives.

Finally, in a recent survey that we conducted at IAPP on AI governance, 74% of respondents identified that they are currently using AI or intend to within the next 12 months. However, 33% of respondents cited a lack of professional training and certification for AI governance professionals, and 31% cited a lack of qualified AI governance professionals as key challenges to the effective rollout and operation of AI governance programs.

Legislative recognition and incentivization of the need for knowledgeable professionals would help ensure organizations resource their AI governance programs effectively to do the work.

In sum, we believe that rules for AI will emerge. Perhaps, more importantly, we need professionals to put those rules into practice. History has shown that early investment in a professionalized workforce pays dividends later. To this end, as part of our written submission, we will provide potential legislative text to be included in AIDA, for your consideration.

Thank you for your time. I am happy to answer any questions you might have.

Bianca Wylie Partner, Digital Public

My name is Bianca Wylie. I work in public interest digital governance as a partner at Digital Public. I've worked at both a tech start-up and a multinational. I've also worked in the design, development and support of public consultations for governments and government agencies.

Thank you for the opportunity to speak with you today about AIDA. As far as amendments go, my suggestion would be to wholesale strike AIDA from Bill C-27. Let's not minimize either the feasibility of this amendment or the strong case before us to do so. I'm here to hold this committee accountable for the false sense that something is better than nothing on this file. It's not, and you're the ones standing between the Canadian public and further legitimizing this undertaking, which is making a mockery of democracy and the legislative process.

AIDA is a complexity ratchet. It's a nonsensical construct detached from reality. It's building increasingly intricate castles of legislation in the sky. It's thinking about AI that is detached from operations, from deployment and from context. ISED's work on AIDA highlights how open to hijacking our democratic norms are when you wave around a shiny orb of innovation and technology.

As Dr. Lucy Suchman writes, “AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is.” I hope you might refuse to continue a charade that has had spectacular carriage through the House of Commons on the back of this socio-psychological phenomenon of assuming that someone else knows what's going on here.

This committee has continued to support a minister basically legislating on the fly. How are we writing laws like this? What is the quality control at the Department of Justice? Is it just that we'll do this on the fly when it's tech, as though this is some kind of thoughtful, adaptive approach to law? No. The process of AIDA reflects the very meaning of law becoming nothing more than a political prop.

The case to pause AIDA and reroute it to a new and separate process begins at its beginning. If we want to regulate artificial intelligence, we have to have a coherent “why”. We have never received a coherent why for AIDA from this government. Have you, as members of this committee, received an adequate backstory procedurally on AIDA? Who created the urgency? How was it drafted, and from what perspective? What work was done inside government to think about this issue across existing government mandates?

If we were to take this bill out to the general public for thoughtful discussion, a process that ISED actively avoided doing, it would fall apart under the scrutiny. There is use of AI in a medical setting versus use on a manufacturing production floor versus use in an educational setting versus use in a restaurant versus use to plan bus routes versus use to identify water pollution versus use in a day care—I could do this all day. All of these create real potential harms and benefits. Instead of having those conversations, we're carrying some kind of delusion that we can control and categorize how something as generic as advanced computational statistics, which is what AI is, will be used in reality, in deployment, in context. The people who can help us have those conversations are not, and have never been, in these rooms.

AIDA was created by a highly insular, extremely small circle of people—tiny. When there is no high-order friction in a policy conversation, we're talking to ourselves. Taking public engagement on AI seriously would force rigour. By getting away with this emergency and urgency narrative, ISED is diverting all of us from the grounded, contextual thinking that has also been an omission in both privacy and data protection thought. That thinking, as seen again in AIDA, continues to deepen and solidify power asymmetries. We're making the same mistake again for a third time.

This is a “keep things exactly the same, only faster” bill. If this bill were law tomorrow, nothing substantial would happen, which is exactly the point. It's an abstract piece of theatre, disconnected from Canada's geopolitical economic location and from the irrational exuberance of a venture capital and investment community. This law is riding on the back of investor enthusiasm for an industry that has not even proven its business model out. On top of that, it's an industry that is highly dependent on the private infrastructures of a handful of U.S. companies.

Thank you.

The Chair Liberal Joël Lightbound

Colleagues, I call this meeting to order.

Welcome to meeting No. 102 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I'd like to welcome our witnesses this afternoon. With us is Ana Brandusescu, AI governance researcher with McGill University.

Good afternoon, Ms. Brandusescu.

I would also like to welcome Alexandre Shee, industry expert and incoming co‑chair of Future of Work, Global Partnership on Artificial Intelligence.

Good afternoon, Mr. Shee.

From Digital Public, we have Bianca Wylie.

Thank you for being with us, Ms. Wylie.

Lastly, from the International Association of Privacy Professionals, we have Ashley Casovan, managing director of the AI Governance Centre.

I'd like to thank you, too, Ms. Casovan.

Without further ado, I will yield the floor for five minutes to Ms. Brandusescu.

December 5th, 2023 / 5:25 p.m.



Céline Castets-Renard Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual

In Canada and the provinces, the use of facial recognition generally, and in particular its use by law enforcement agencies, is not specifically regulated. Of course, without a legal framework, it becomes a matter of trial and error. As was demonstrated in the Clearview AI case, we know from a reliable source that facial recognition was used by several law enforcement agencies in Canada, including the Royal Canadian Mounted Police.

When there is no legal framework, things become problematic: practices develop without any restrictions. That's why people might, on the one hand, fear a legal framework, because its existence means the technology has been accepted and recognized. On the other hand, it would be naïve to imagine that the technology will not be used; it cannot be stopped, and it may well offer many advantages in police investigations.

It's always a matter of striking the right balance between reaping the benefits of AI and avoiding its risks. More specifically, a law on the use of facial recognition should ideally incorporate the principles of necessity and proportionality. For example, limits could be placed on when and where the technology can be used, for specific purposes or certain types of major investigations. The use of the technology would have to be authorized by a judicial or administrative authority. Legal frameworks are possible; there are examples elsewhere and in other fields. It is certainly among the things that need to be dealt with.

I would add that Bill C‑27 is not directly related to this subject, because what we are dealing with here is regulating international and interprovincial trade. It has nothing to do with the use of AI in the public sector. We can, in due course, regulate companies that sell these facial recognition AI products and systems to the police, but not their use by the police. It's also important to ask about the scope of the regulation that is to be adopted for AI, which will no doubt extend beyond Bill C‑27.

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

Ms. Castets-Renard, I heard you yesterday on Radio-Canada as I was headed to Ottawa, and the topic was really interesting. You were talking about the things that could go wrong with artificial intelligence as a result of its use by law enforcement authorities, particularly in connection with facial recognition. What I understood from the case that occurred in Ireland was that the use of artificial intelligence could, for instance, place the presumption of innocence at risk.

Are current Canadian laws sufficiently advanced to protect against potential social problems? Bill C‑27 may not be the solution. How can we plan for or protect ourselves from these problems, which are probably imminent?

Not only that, but the use of artificial intelligence in political face-saving endeavours might well lead to other restrictions. That's what happened, I understand. Is that right?

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Finally, I have one more quick question.

I, like many of us here around this table, have children. What do we need to consider for children with respect to AI?

Is there anything specific we can be doing on the AI aspect of Bill C-27 to ensure that we do whatever we possibly can to protect the innocence of kids?

December 5th, 2023 / 5 p.m.



Céline Castets-Renard Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual

Thank you.

I'd like to add something about Bill C‑27. A risk-based approach would avoid treating all artificial intelligence systems in the same way, or placing the same obligations on them. Other options include the high-impact concept, and the amendments introduced by the minister, Mr. Champagne, explain what this concept means in seven different sectors of activity.

I therefore don't think it's fair to say that it would be applied everywhere, on everyone, and haphazardly. It's possible to discuss how it's going to be applied in seven different activity sectors. Some, no doubt, would say that doesn't go far enough, but it is certainly not a law that will lack specifics, because the amendments specify the details.

To return to what was said earlier, it also means that there can be a comprehensive approach with general principles, and a separate approach for each sector or field. That's what the European Union has done with its amendments. That's why statutes being adopted in other countries need to be considered.

As for what was said about the United Kingdom earlier, Canada has signed a policy declaration which has no legal or binding value. It's a very general text that adds nothing to what we have already said about the ethics of artificial intelligence. It definitely does not prevent Canada from following its own path, as the United States did when it issued its executive order right before the summit in England. The Americans were not willing to wait for England to take the lead.

Those are the details I wanted to add.

December 5th, 2023 / 4:55 p.m.



Jean-François Gagné AI Strategic Advisor, As an Individual

I think these are good guideposts. An enormous amount of work was done by the international community to understand the issues. I think that many of the things I was reading about in Bill C‑27 and the amendments are valid, and I could identify which portions were intended to cover health or a specific aspect of biotechnology. I could really tell. However, it seems to want to cover all industries in Canada, from the smallest to the biggest. What's really needed is to think carefully about them, make adjustments and, where there are specific situations, work with those sectors, while concurrently protecting people and being careful not to hinder innovation.

That's really my greatest concern. I have friends who are entrepreneurs, I'm an entrepreneur myself, and reading this worried me. It's already difficult to innovate and try to stand out from the crowd. If it becomes even more expensive to develop and launch products, it would make things more complicated.

Madam Céline Castets-Renard Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual

Thank you very much, Mr. Chair, vice-chairs and members of the Standing Committee on Industry and Technology.

I would also like to thank my colleague, Professor Jennifer Quaid, for sharing her time with me.

I'm going to restrict my address to three general comments. I'll begin by saying that I believe artificial intelligence regulation is absolutely essential today, for three primary reasons. First of all, the significance and scope of the current risks are already well documented. Some of the witnesses here have already discussed current risks, such as discrimination, as well as future and existential risks. It's absolutely essential today to consider the impact of artificial intelligence, in particular its impact on fundamental rights, including privacy, non-discrimination, protection of the presumption of innocence and, of course, the observance of procedural guarantees for transparency and accountability, particularly in connection with public administration.

Artificial intelligence regulation is also needed because the technologies are being deployed very quickly and the systems are being further developed and deployed in all facets of our professional and personal lives. Right now, they can be deployed without any restrictions because they are not specifically regulated. That became obvious when ChatGPT hit the marketplace.

Canada has certainly developed a Canada-wide artificial intelligence strategy over a number of years now, and the time has now come to protect these investments and to provide legal protection for companies. That does not mean allowing things to run their course, but rather providing a straightforward and understandable framework for the obligations that would apply throughout the entire accountability chain.

The second general comment I would like to make is that these regulations must be compatible with international law. Several initiatives are already under way in Canada, which is certainly not the only country that wants to regulate artificial intelligence. I'm thinking in particular, internationally speaking, of the various initiatives being taken by the Organisation for Economic Co‑operation and Development, the Council of Europe and, in particular, the European Union and its artificial intelligence bill, which should be receiving political approval tomorrow as part of the inter-institutional trilogue negotiations between the Council of the European Union, the European Parliament and the European Commission. The agreement has reached its final phase after two years of discussion. President Biden's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence also needs to be given consideration, along with the technical standards developed by the National Institute of Standards and Technology and the International Organization for Standardization.

My final general comment is about how to regulate artificial intelligence. The bill before us is not perfect, but the fact that it is risk-based is good, even though it needs strengthening. By this I mean addressing risks that are now deemed unacceptable, which are not necessarily existential risks but risks we can already identify today, such as the widespread use of facial recognition. Also worth considering is a better definition of the risks posed by high-impact systems.

We'd like to point out and praise the amendments made by the minister, Mr. Champagne, before your committee a few weeks ago. In fact, the following remarks, and our brief, are based on these amendments. It was pointed out earlier that not only individual risks have to be taken into account, but also collective risks to fundamental rights, including systemic risks.

I'd like to add that it's absolutely essential, as the minister's amendments suggest, to consider the general use of artificial intelligence separately, whether in terms of systems or foundational models. We will return to this later.

I believe that a compliance-based approach that reflects the recently introduced amendments should be adopted, and it is fully compatible with the approach adopted by the European Union.

When all is said and done, the approach should be as comprehensive as possible. I believe that the field of application of Bill C‑27 is too narrow at the moment, being essentially focused on the private sector. It should be extended to the public sector, and there should be discussions and collaboration with the provinces in their fields of jurisdiction, along with a form of co‑operative federalism.

Thank you for your attention. We'll be happy to discuss these matters with you.