Digital Charter Implementation Act, 2022

An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Status

In committee (House), as of April 24, 2023

Summary

This is from the published bill. The Library of Parliament has also written a full legislative summary of the bill.

Part 1 enacts the Consumer Privacy Protection Act to govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities. In consequence, it repeals Part 1 of the Personal Information Protection and Electronic Documents Act and changes the short title of that Act to the Electronic Documents Act. It also makes consequential and related amendments to other Acts.
Part 2 enacts the Personal Information and Data Protection Tribunal Act, which establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act and to impose penalties for the contravention of certain provisions of that Act. It also makes a related amendment to the Administrative Tribunals Support Service of Canada Act.
Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.

Votes

April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

We are nearly at the end of the testimony and of this witness panel's appearance.

I am trying to form an opinion on what I heard. I am just trying to see where you stand on regulation so that I can think about it constructively.

I must admit I am a little confused.

On the one hand, when we asked all of you if this requires regulation, the answer was yes. When we asked you if quick action is needed, the answer was yes.

On the other hand, when we got into the details, you told us that Bill C‑27 is inadequate. It tries to do too much and touches on too many aspects. Then you told us, in effect, that a lot of legislation would need changes. I noted down which ones we discussed today: the Canada Health Act, the Canada Elections Act, the Personal Information Protection and Electronic Documents Act, the Criminal Code, the Copyright Act, the Patent Act, as well as measures specifically targeting advertising aimed at children. These kinds of changes would require endless legislative work, especially in the kind of Parliament we're sitting in today. In the end, that would leave us with no regulation at all.

Furthermore, I think that if we presented a bill to you that changed all of that legislation at the same time, you would probably tell us we were back to the same problem we started with in Bill C‑27; it all boils down to the same thing.

If I understand correctly, it’s a matter of public relations and strategy, among other things.

I have the bad habit of being very direct. I will therefore ask you the following question: Isn’t this a rather clever way of telling us that you don’t want any regulation?

Take all the time left to answer my question.

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Chair.

I thank all the witnesses for being here with us.

Last Monday, when he appeared before this committee via videoconference, Mr. Bengio told us we had to move Bill C‑27 forward quickly, because in a decade or even within two years, robots as smart as humans could make decisions.

In today’s La Presse, an article on digital life shows that in 2019, during the pandemic, your four respective companies and Apple created nearly 1 million jobs. Since then, especially over the last two years, over 125,000 of them were cut, and it’s not over.

Are these employees, who created tools through artificial intelligence, now paying for it by having their jobs eliminated? Is this the start of a significant reduction in the number of employees?

I own an SME. As we speak, in the field of communications, tools like ChatGPT can create websites in five minutes. Obviously, it doesn’t take me five minutes to do it. One must adapt to today’s reality.

In the future, will artificial intelligence help us to create more jobs or fewer jobs in the field of information technology?

In fact, Ms. Craig, you talked about research and development. I think Ms. Curran did too.

Could Bill C‑27 undermine research and development in Canada if it sets out rules for artificial intelligence that are too strict?

My questions are for everyone. You may answer one after the other if you like.

February 7th, 2024 / 6:15 p.m.

Rachel Curran Head of Public Policy, Canada, Meta Platforms Inc.

I'd agree with that.

The issue you've raised around deepfakes, both video and audio, would not be addressed by Bill C-27, at least not anytime soon. I know that the government made an announcement—I think today—around the issue of deepfakes and an intent to deal with them. They could be dealt with very easily through an amendment to the Criminal Code or existing legislation.

It's the same thing around election disinformation. If that's a harm committee members are concerned about, that can be addressed through a quick amendment to the Canada Elections Act. There's even the Copyright Act on issues of creator rights. The use of material in the context of AI development that impacts creator rights can be dealt with through the Copyright Act as well.

There are existing statutes. We advocated previously for a sectoral approach to AI regulation because of this, but those could all be dealt with very quickly. They won't be dealt with in the context of Bill C-27 quickly.

Amanda Craig Senior Director of Public Policy, Office of Responsible AI, Microsoft

Thank you, Mr. Chair and committee members for the opportunity to testify.

At Microsoft, we believe in the immense opportunity that AI presents to contribute to Canada's growth and to deliver prosperity to Canadians. To truly realize AI's potential and to improve people's lives, we must effectively address the very real challenges and risks of using AI without appropriate safeguards. That's why we have championed the need for regulation that navigates the complexity of AI to strengthen safety and to safeguard privacy and civil liberties.

Canada has been a leader in putting forward a framework for AI, and there are positive aspects of the legislative framework that provide a helpful foundation going forward. However, as it currently stands, Bill C-27 applies the rules and requirements too broadly. It regulates both low-risk and high-risk AI systems in a similar way without adjusting requirements according to risk, and it includes criminal penalties as part of the enforcement regime.

Not all risk is created equal. Intuitively we know that, but it can be difficult to determine risk levels and adjust for them. In our view, the set of rules and requirements in the AIDA should apply to AI systems and uses where the level of risk is high. For example, the AIDA applies the same rules and regulatory obligations to a high-risk system, such as AI that is used to determine whether to approve a mortgage, and to a low-risk system, such as AI that is used to optimize package delivery routes.

Applying the rules and requirements too broadly has several implications. Businesses in Canada, including small and medium-sized businesses, will need to focus on resource-intensive assessments and third party audits even for low-risk, general purpose systems, rather than focusing on where the risk is highest or on developing new safety systems. A restaurant chain and its AI system for inventory management and food waste reduction will be subject to the same requirements as facial recognition technology. This will spread thin the time, money, talent and resources of Canadian businesses, and it will potentially mean finite resources are not sufficiently focused on the highest risk.

Canada's approach is also out of step with that of some of its largest trading partners, including the U.S., the EU, the U.K., Japan and others. In fact, the Canadian law firm Osler has published a comparison of the AIDA with the EU's AI Act, which I'll be happy to submit to the committee. The comparison includes 11 examples where Canada has gone further than the EU, creating a set of unique requirements for businesses operating in Canada.

Going further than the EU does not mean that Canadians will be better protected from the risks of AI. It means that businesses in Canada that are already using lower-risk AI systems could face a more onerous regime than anywhere in the world. Instead, Canadians will be better protected with more targeted regulation. By ensuring that the AIDA is risk-based and provides clarity and certainty on compliance, Canada can set a new standard for AI regulation.

We firmly believe that with the right amendments, it is possible to strike the right balance in the AIDA. You can achieve the crucial objective of reducing harm and protecting Canadians, and you can enable businesses in Canada to be more confident in adopting AI, which will provide enormous benefits for productivity, innovation and competitiveness.

In conclusion, we would recommend, first, better scoping of what is truly high-impact AI. Second, we recommend distinguishing the levels of risk of AI systems and defining requirements according to that level of risk. Third and finally, we recommend rethinking enforcement, including the use of criminal penalties, which is unlike any other jurisdiction in the OECD. This would also ensure that Canada's approach is interoperable with what other global leaders, such as the EU, the U.K. and the U.S., are doing.

We are happy to provide this committee with a written submission detailing our recommendations.

Thank you, Mr. Chair. We look forward to your questions.

Rachel Curran Head of Public Policy, Canada, Meta Platforms Inc.

Thank you, Mr. Chair.

My name is Rachel Curran and I'm the head of public policy for Meta in Canada. It's a pleasure to address the committee this afternoon.

Meta supports risk-based, technology-neutral approaches to the regulation of artificial intelligence. We believe it's important for governments to work together to set common standards and governance models for AI. It's this approach that will enable the economic and social opportunities of an open science approach to AI and also bolster Canadian competitiveness.

Meta has been at the forefront of the development of artificial intelligence for more than a decade. We can talk about that later during this hearing. This innovation has allowed us to connect billions of people and generate real value for small businesses. For our community, AI is what helps people discover and engage with the content they care about. For the millions of businesses, particularly small businesses, that use our platforms, our AI-powered tools make an advertiser's job easier. That's a real game-changer for small and medium-sized businesses that are looking to reach customers who are interested in their products.

In addition, Meta's fundamental AI research team has taken an open approach to AI research, pioneering breakthroughs across a range of industries and sectors. In 2017 we launched our AI research lab in Montreal to contribute to the Canadian AI ecosystem. Today, Meta's global research efforts are led by Dr. Joelle Pineau, a world-leading Canadian researcher and a professor at McGill University. She is the one who heads up Meta's global AI research efforts.

Our Canadian team of researchers has worked on some of the biggest breakthroughs in AI, from developing more diverse and inclusive AI models to improving health care accessibility and patient care, which have benefited communities in Canada and abroad. This work is shared openly with the greater research community, a commitment to open science and a level of transparency that helps Meta set the highest standards of quality and responsibility and ultimately build better AI solutions.

We applaud Canada's leadership on the development of smart regulation and guardrails for AI development, particularly through its leadership on the Global Partnership on AI and the G7 process. We strongly support the work of this committee, of course, and the initial aim of Bill C-27, which is to ensure that AI is developed and deployed responsibly while also ensuring that global regulatory frameworks are aligned, maintaining Canada's status as a world leader in AI innovation and research.

We think AI is advancing so quickly that measures focused on specific technologies could soon become irrelevant and hinder innovation. As we look to the future, we hope that the government will consider a truly risk-based and outcome-focused approach that will be future-proof. In that regard, we would flag a few specific concerns with Bill C-27.

First, one proposed amendment from the minister to this bill would classify content moderation or prioritization systems as “high-impact”. We respectfully disagree that these systems are inherently high risk as defined in the legislation, and suggest that the regulation of risks associated with content that Canadians see online would be better dealt with in pending online harms legislation.

Similarly, we think the proposed regime for general purpose AI is not appropriately tailored to risk and more closely resembles the requirements for truly high-impact systems. We suggest that the obligations for general purpose AI should be harmonized with international frameworks, such as the ongoing G7 Hiroshima process, which I referenced earlier, the White House voluntary commitments and OECD work on AI governance.

Lastly, we'd flag the audit and access powers contemplated by Bill C-27. We think they are at odds with existing frameworks—for example, with the approach by other signatories of the Bletchley Declaration arising out of the recent U.K. AI safety summit. That includes the U.S. and the U.K. Again, we'd encourage Canada to pursue an approach that preserves privacy and is consistent with global standards.

Members, we believe that Meta is uniquely poised to solve some of AI's biggest problems by weaving our learnings from our world-leading research into products that billions of people and businesses can benefit from while continuing to contribute to Canada's vibrant, world-leading AI ecosystem.

We look forward to working with this committee and to answering your questions.

Thank you.

Jeanette Patell Director, Government Affairs and Public Policy, Google Canada

Good afternoon, Chair and members of the committee. My name is Jeanette Patell and I am the director of government affairs and public policy for Google in Ottawa. I am joined remotely by my colleagues Tulsee Doshi and Will DeVries. Tulsee is a director and head of product in responsible AI at Google. Will is a director on our privacy legal team and advises the company on global privacy laws and data protection compliance. We appreciate the invitation to appear today and to contribute to your consideration of Bill C-27.

As the committee knows, this is important legislation, and important legislation to get right.

Today, we will present a few remarks on the Consumer Privacy Protection Act and the Artificial Intelligence and Data Act. We will be very happy to answer your questions.

We will be submitting our brief to this committee shortly. It will also set out our recommendations on aspects of the bill that could be improved to ensure better outcomes for businesses, innovators and Canadian consumers.

When Canadians use our services, they are trusting us with their information. This is a responsibility that we take very seriously at Google, and we protect user privacy with industry-leading security infrastructure, responsible data practices and easy-to-use privacy tools that put our users in control.

Google has long championed smart, interoperable and adaptable data protection regulations—rules that will protect privacy rights, enhance trust in the digital ecosystem and enable responsible innovation. We support the government's efforts to modernize Canada's privacy and data protection regulatory framework and to codify important rights and obligations.

We also believe the CPPA would benefit from further consideration and targeted amendments in certain areas. For example, we agree with others, like the Canadian Chamber of Commerce, that consent provisions should be both clarified and tailored to more consequential activities. We also highlight the need for a consistent federal definition of “minors” and clearer protections for minors' rights and freedoms. Improvements to these areas would maintain and enhance Canadian privacy protections, make it easier for businesses to operate across Canada and the world and enable continued innovation throughout the economy.

Turning to the artificial intelligence and data act, as our CEO has said, “AI is too important not to regulate, and too important not to regulate well.” We are encouraged to see governments around the world developing policy frameworks for these new technologies, and we're deeply engaged in supporting these efforts to maximize AI's benefits while minimizing its risks.

Google has been working on AI for a long time, including at our sites in Montreal and Toronto, and in 2017 we reoriented to be an AI-first company. Today AI powers Google search, translate, maps and other services Canadians use every day. We're also using AI to help solve societal issues, from forecasting floods to improving screenings of diseases like breast cancer. Since 2018, our work with these technologies has been guided by our responsible AI principles, which are supported by a robust governance structure and review process. My colleague Tulsee has been at the centre of this work.

Canada has an exceptional opportunity to leverage its investments in basic research in artificial intelligence. This committee's work will contribute to a legislative framework that puts solid public protections in place while harnessing AI's economic and societal benefits.

We welcome the government's efforts to establish the right guardrails around AI, and we share some of the concerns that others have raised with this committee. We believe the bill can be thoughtfully amended in ways that support the government's objectives without hindering AI's development and use.

There is no one-size-fits-all approach to regulating AI. AI is a multi-purpose technology that takes many forms and spans a wide range of risk profiles. A regulatory framework for these technologies should recognize the vast range of beneficial uses and should weigh the opportunity costs of not developing or deploying AI systems. It should also tailor obligations to the magnitude and likelihood of harm specific to particular use cases. We believe the AIDA should establish a risk-based and proportionate approach tailored to specific applications and focused on ensuring global interoperability via widely accepted compliance tools such as international standards.

We hope to continue to work with the Canadian government, as we have with governments around the world, to build thoughtful, smart regulations that protect Canadians and capture this once-in-a-generation opportunity to strengthen our economy, position Canadian innovators for success on the global stage and drive transformational scientific breakthroughs.

Thank you again for the invitation to appear. We look forward to answering your questions and continuing this important conversation.

Nicole Foster Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Thank you for your invitation.

It's a privilege to be here as the committee conducts its study of the AI and data act within Bill C-27.

AWS has a strong presence in and commitment to Canada. We have two infrastructure regions here, in Montreal and Calgary, to support our Canadian customers, and we plan to invest nearly $25 billion in this digital infrastructure by 2037.

Globally, more than 100,000 organizations of all sizes are using AWS AI and machine-learning services. They include Canadian start-ups, national newspapers, professional sports organizations, federally regulated financial institutions, retailers, public institutions and more.

Specifically, AWS offers a set of capabilities across three layers of the technology stack. At the bottom layer is the AI infrastructure layer. We offer our own high-performance custom chips, as well as other computing options. At the middle layer, we provide the broadest selection of foundation models on which organizations build generative AI applications. This includes both Amazon-built models and those from other leading providers, such as Cohere—a Canadian company—Anthropic, AI21, Meta—who's here today—and Stability AI. At the top layer of the stack, we offer generative AI applications and services.

AWS continually invests in the responsible development and deployment of AI. We dedicate efforts to help customers innovate and implement necessary safeguards. Our efforts towards safe, secure and responsible AI are grounded in a deep collaboration with the global community, including in work to establish international technical standards. We applaud the Standards Council of Canada's continued leadership here.

We are excited about how AI will continue to grow and transform how we live and work. At the same time, we're also keenly aware of the potential risks and challenges. We support government's efforts to put in place effective, risk-based regulatory frameworks while also allowing for continued innovation and a practical application of the technology.

I'm pleased to share some thoughts on the approach Bill C-27 proposes.

First, AI regulations must account for the multiple stakeholders involved in the development and use of AI systems. Given that the AI value chain is complex, recent clarification from the minister that helps define rules for AI developers and deployers is a positive development. Developers are those who make available general purpose AI systems or services, and deployers are those who implement or deploy those AI systems.

Second, success in deploying responsible AI is often very use case- and context-specific. Regulation needs to differentiate between higher- and lower-risk systems. Trying to regulate all applications with the same approach is impractical and can inadvertently stifle innovation.

Because the risks associated with AI are dependent on context, regulations will be most effective when they target specific high-risk uses of the technology. While Bill C-27 acknowledges a conceptual differentiation between high- and low-impact applications of AI, we are concerned that, even with the additional clarifications, the definition of “high impact” is still too ambiguous, capturing a number of use cases that would be unnecessarily subject to costly and burdensome compliance requirements.

As a quick example, the use of AI by a peace officer is deemed high impact. Is it still high impact if that use amounts to autocorrect when filling out a traffic violation? Laws and regulations must clearly differentiate between high-risk applications and those that pose little or no risk. This is a core principle that we have to get right. We should be very careful about imposing regulatory burdens on low-risk AI applications that can potentially provide much-needed productivity boosts to Canadian companies both big and small.

Third, criminal enforcement provisions of this bill could have a particularly chilling effect on innovation, even more so if the requirements are not tailored to risk and not drafted clearly.

Finally, Bill C-27 should ensure it is interoperable with other regulatory regimes. The AI policy world has changed and progressed quite quickly since Bill C-27 was first introduced in 2022. Many of Canada's most important trading partners, including the U.S., the U.K., Japan and Australia, have since outlined very different decentralized regulatory approaches, where AI regulations and risk mitigation are to be managed by regulators closest to the use cases. While it's commendable that the government has revised its initial approach following feedback from stakeholders, it should give itself the time necessary to get its approach right.

Matthew Hatfield Executive Director, OpenMedia

Hi there. I'm Matt Hatfield, and I'm the executive director of OpenMedia, a grassroots community of 230,000 people in Canada who work together for an open, accessible and surveillance-free Internet. I'm joining you from the unceded territory of the Sto:lo, Tsleil-Waututh, Squamish and Musqueam nations.

I’d like to ask us all a question: What does cybersecurity mean to you as an individual, as a family member and as a citizen? For me, and for many people across Canada, our cybersecurity is inseparable from our privacy, as so much of our everyday lives is conducted online—much more so since COVID—and none of us feel secure with the thought of being spied on in our everyday lives, whether by hackers, hostile states or our own government. For most Canadians, our cybersecurity is very much about that sense of personal security.

The draft of Bill C-26 you have in front of you threatens that security. It poses enormous risks to our personal privacy, without basic accountability and oversight to ensure that the people given these powers don't abuse them against us. You must fix this.

Exhibit A is proposed section 15.2 of the Telecommunications Act, which grants the government the power to order telcos “to do anything or refrain from doing anything”. There are no limits here, no tests for necessity, proportionality and reasonableness, and no requirement for consultation. The government could use these powers to order telcos to break the encryption we need to keep ourselves safe from hackers, fraudsters and thieves. They could even use these powers to disconnect ordinary people indefinitely from the Internet, maybe because our smart toaster or an old phone we gave our kids gets hijacked by a hostile botnet. Without a requirement that these orders be proportional or time-limited, these are real risks.

It gets worse. The government would be allowed to keep even the existence of these orders—never mind their content—top secret indefinitely, and even if these orders are challenged by judicial review, the minister could bring secret evidence before secret hearings, which flies in the face of basic judicial transparency.

There's no excuse for this. Our close allies in Australia and the U.K. have shown how cybersecurity can be strengthened without compromising fundamental rights. Why do Canadians deserve lesser protections?

All this comes when Parliament is working on strengthening our privacy laws through Bill C-27. I have to ask, does one hand of our government even know what the other is working on?

We recognize that there are very real problems, though, that Bill C-26 is trying to solve. When we read the government's stated objectives, we're on board. Should we protect the digital infrastructure? Sure. Should we remove risky equipment from hostile states? Of course. Should we force big banks and telcos to better protect their customers? Of course. However, we can fulfill these objectives without sacrificing our rights or balanced, effective governance. Let's talk about how.

First, the government's new powers must be constrained. Robust necessity, proportionality and reasonableness tests are an absolute must. Unbreakable encryption is the fundamental baseline that all of our personal privacy depends on, so there must be an absolute prohibition on the government using these powers to break encryption.

Second, privacy rights must be entrenched. Personal information must be clearly defined as confidential and forbidden from being shared with foreign states, which are not subject to Bill C-26's checks and balances.

Third, the government must not be allowed to conceal the use of its new powers under a permanent veil of secrecy.

Fourth, when the use of those powers is challenged in court, there must be no secret evidence. Special advocates should be appointed to ensure all evidence is duly tested.

Fifth, any information the Communications Security Establishment obtains about Canadians under Bill C-26 should be used exclusively for the defensive cybersecurity part of its mandate. I hope you all remember that NSIRA, the body explicitly established by Parliament to oversee CSE, has complained for years about CSE not being accountable to it. Knowing how difficult it has proved to keep CSE accountable for its existing powers, please don't grant it broad new powers without tight and clear use and reporting mechanisms.

As other people have said, when cybersecurity works, it's a team sport. It requires buy-in from all of us. We all have to be on team Canada, and we all have to trust in the regulatory framework that governs it. There's zero chance of that happening with Bill C-26 as is. Adequate transparency, proportionality and independent verification are the necessary baseline that this bill has to earn for it to work.

We're going to be delivering a petition signed by nearly 10,000 Canadians to you shortly, folks who are calling for that baseline protection. We urge you to listen to these voters and adopt the amendments package that civil society has suggested to you to get this legislation where it needs to be.

Thanks. I look forward to your questions.

The Chair Liberal Joël Lightbound

What I propose is to seek additional resources with the clerk for extra meetings, on top of what we already have on Bill C-27 next week. With those resources, we would invite the CEOs of the telcos, and we would also invite the minister to come and testify as part of this telco study.

The Chair Liberal Joël Lightbound

Okay. I can definitely see....

If there's consensus around the room to say that we'll start this study on telecoms earlier than planned if we have the additional resources, and we'll keep Bill C-27 as planned.... The clerk is here by my side, so we'll be looking for additional resources.

There is still a motion before this committee, though. I don't know how colleagues want to proceed with this motion or if there's an agreement that we just start the telecoms study earlier.

I'm looking at Ryan and Brian.

Ryan, I'll yield the floor to you.

Ryan Turnbull Liberal Whitby, ON

No one is disagreeing with the fact that cellphone companies should be called before the committee and questioned about any planned increases. I think we've all agreed to that. That's actually in the subcommittee report. I think it's more substantive. It already includes the CEOs of Telus and Quebecor Media, etc. It includes all the CEOs of all the companies that have been mentioned. It also includes a focus on increased customer cellphone bills, so any.... It's already there.

I think we've already agreed to do this work, so I still can't understand the rationale for an additional motion that just bumps it up. If you're asking for additional committee resources to start that component of the broader study earlier, okay, that's fine, but then isn't it subject to committee resources? If we've asked for additional resources to study Bill C-27, why shouldn't that be the first priority, which is what we agreed to?

We've already agreed to that. We've already had that debate and that conversation. We agreed to what's in the subcommittee report, so why is this now...? Even though we've already agreed to it, somehow it's now an even higher priority because you just decided it in the last week or so.

It doesn't make sense to me when we've already agreed to do a broader study. We've agreed to call all the witnesses. We've agreed to focus on cellphone prices and bills and we've agreed that it can be the first priority in that broader study. We've also agreed to a report of findings and recommendations back to the House.

I just can't understand what the.... In a way, isn't this a redundant motion? We've already done this.

Isn't there some rule in the Standing Orders that a motion has to be substantively different in order for it to be considered? This doesn't seem different at all. I don't see anything that's different here. I really can't understand the rationale for this, other than a bit of grandstanding.

The Chair Liberal Joël Lightbound

Okay.

I'll turn it over to Mr. Turnbull.

There's a small thing to keep in mind in terms of scheduling. We have witnesses lined up for Bill C-27 on February 12 and 14. Should this motion be adopted, I would suggest we try to seek additional resources so as to not undo the great work that our clerk has done to get these witnesses before the committee. That's just something to keep in mind.

Go ahead, Mr. Turnbull.

The Chair Liberal Joël Lightbound

I'm looking around the room to see if there are more interventions.

It is true that in the steering committee we did agree to start the telco study on the 26th and to finish the Bill C-27 witnesses before we adjourn for the constituency week in February.

I'll let Mr. Williams speak to his motion.

February 5th, 2024 / 12:10 p.m.



Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

Not directly.

The pan-Canadian AI strategy at its inception was really designed to advance Canada's leadership in AI research, training and innovation. It really focused on building a deep pool of talented individuals with AI expertise across the country and fostering very rich, robust, dynamic AI ecosystems in our three centres in Toronto, Montreal and Edmonton. That was the foundation of the strategy.

As the strategy evolved over the years, we saw additional investments in budget 2021 to focus on advancing the responsible development, deployment and adoption of AI, as well as thinking about those opportunities to work collaboratively and internationally on things like standards, etc.

Indirectly, I would say that the pan-Canadian AI strategy has at least been engaged in the development of the AI and data act through several channels. One is through the AI advisory council that Professor Bengio mentioned earlier. He's the co-chair of that council. We have several leaders across the AI ecosystem who are participants and members on that council. I'm also a member on that council. The AI and data act and Bill C-27 have been discussed at that council.

Second—

February 5th, 2024 / noon



Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I'll let my colleagues answer some of those questions. However, I would like to clarify something I proposed in my remarks and in writing. It has to do with setting a criterion based on the size of the systems in terms of computing power, with the current threshold above which a system would have to be registered being 10^26 operations. That would be the same as in the United States, and it would bring us up to the same level of oversight as the Americans.

This criterion isn't currently set out in Bill C‑27. I would suggest that we adopt that as a starting point, but then allow the regulator to look at the science and misuse to adjust the criteria for what is a potentially dangerous and high‑impact system. We can start right away with the same thing as in the United States.

In Europe, they've adopted more or less the same system, which is also based on computing power. Right now, it's a simple, agreed-upon criterion that we can use to distinguish the potentially risky systems in the high-impact category from the 99.9% of AI systems that pose no national security risk.
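To make the threshold Dr. Bengio describes concrete, here is a minimal sketch of how such a compute-based registration test could work. This is my own illustration, not anything in Bill C-27: the 10^26 figure follows the U.S. executive order threshold he cites, the 6 × parameters × tokens estimate is the common rule of thumb for dense transformer training compute, and all names here are hypothetical.

```python
# Hypothetical sketch of a compute-threshold registration check.
# Assumptions (not from the bill): a 1e26-operation threshold, and the
# common ~6 ops per parameter per training token estimate for total compute.

REGISTRATION_THRESHOLD = 1e26  # total training operations, per the U.S. figure cited


def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Rough total training operations: about 6 per parameter per token."""
    return 6.0 * n_params * n_tokens


def must_register(n_params: float, n_tokens: float,
                  threshold: float = REGISTRATION_THRESHOLD) -> bool:
    """Would a training run of this size cross the registration threshold?"""
    return estimated_training_ops(n_params, n_tokens) >= threshold


# A 70-billion-parameter model trained on 15 trillion tokens (~6.3e24 ops)
# stays well under the threshold:
print(must_register(70e9, 15e12))

# A 2-trillion-parameter model trained on 100 trillion tokens (~1.2e27 ops)
# would cross it:
print(must_register(2e12, 100e12))
```

The point of such a criterion, as the testimony notes, is that it is simple and measurable up front, while leaving a regulator free to adjust the threshold as the science and observed misuse evolve.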