Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

5:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

We'll try to figure out with the technical staff what's happening. We'll suspend for a minute.

5:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Good afternoon, everyone.

I call this meeting to order.

Welcome to meeting number 109 of the House of Commons Standing Committee on Industry and Technology.

Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I’d like to welcome our witnesses today. From Amazon Web Services, we have Ms. Nicole Foster, director of global artificial intelligence and Canada public policy.

From Google Canada, we have Ms. Jeanette Patell, director of government affairs and public policy. Also from Google, and participating by videoconference, we have Mr. Will DeVries, director of privacy legal, as well as Ms. Tulsee Doshi, director of product management.

From Meta, we have Ms. Rachel Curran, head of public policy for Canada.

From Microsoft—

5:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

On a point of order, Mr. Chair, we're not getting interpretation.

5:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Mr. Masse, maybe you haven't selected the proper channel, because it appears that members here in the room are getting interpretation.

5:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

Yes, I did. I've confirmed this with my staff, who are also online.

5:20 p.m.

A voice

The witnesses are shaking their heads.

5:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

They're not getting interpretation either.

It just so happens that we always have technical problems when we have technology giants with us for some reason. I don't know why.

5:20 p.m.

Voices

Oh, oh!

5:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Please excuse these technological inconveniences.

I will therefore continue the introductions.

From Microsoft, we have Ms. Amanda Craig, senior director of public policy, Office of Responsible AI, as well as Mr. John Weigelt, national technology officer.

We thank all of you for being here today.

You all have five minutes for your opening statements.

We'll start with Amazon and Madam Foster.

5:20 p.m.

Nicole Foster Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Thank you for your invitation.

It's a privilege to be here as the committee conducts its study of the AI and data act within Bill C-27.

AWS has a strong presence in and commitment to Canada. We have two infrastructure regions here, in Montreal and Calgary, to support our Canadian customers, and we plan to invest nearly $25 billion in this digital infrastructure by 2037.

Globally, more than 100,000 organizations of all sizes are using AWS AI and machine-learning services. They include Canadian start-ups, national newspapers, professional sports organizations, federally regulated financial institutions, retailers, public institutions and more.

Specifically, AWS offers a set of capabilities across three layers of the technology stack. At the bottom layer is the AI infrastructure layer. We offer our own high-performance custom chips, as well as other computing options. At the middle layer, we provide the broadest selection of foundation models on which organizations build generative AI applications. This includes both Amazon-built models and those from other leading providers, such as Cohere—a Canadian company—Anthropic, AI21, Meta—who's here today—and Stability AI. At the top layer of the stack, we offer generative AI applications and services.

AWS continually invests in the responsible development and deployment of AI. We dedicate efforts to help customers innovate and implement necessary safeguards. Our efforts towards safe, secure and responsible AI are grounded in a deep collaboration with the global community, including in work to establish international technical standards. We applaud the Standards Council of Canada's continued leadership here.

We are excited about how AI will continue to grow and transform how we live and work. At the same time, we're also keenly aware of the potential risks and challenges. We support government's efforts to put in place effective, risk-based regulatory frameworks while also allowing for continued innovation and a practical application of the technology.

I'm pleased to share some thoughts on the approach Bill C-27 proposes.

First, AI regulations must account for the multiple stakeholders involved in the development and use of AI systems. Given that the AI value chain is complex, recent clarification from the minister that helps define rules for AI developers and deployers is a positive development. Developers are those who make available general purpose AI systems or services, and deployers are those who implement or deploy those AI systems.

Second, success in deploying responsible AI is often very use case- and context-specific. Regulation needs to differentiate between higher- and lower-risk systems. Trying to regulate all applications with the same approach is impractical and can inadvertently stifle innovation.

Because the risks associated with AI are dependent on context, regulations will be most effective when they target specific high-risk uses of the technology. While Bill C-27 acknowledges a conceptual differentiation between high- and low-impact applications of AI, we are concerned that, even with the additional clarifications, the definition of “high impact” is still too ambiguous, capturing a number of use cases that would be unnecessarily subject to costly and burdensome compliance requirements.

As a quick example, the use of AI by a peace officer is deemed high impact. Is it still high impact if it involves nothing more than autocorrect when filling out a traffic violation? Laws and regulations must clearly differentiate between high-risk applications and those that pose little or no risk. This is a core principle that we have to get right. We should be very careful about imposing regulatory burdens on low-risk AI applications that can potentially provide much-needed productivity boosts to Canadian companies both big and small.

Third, criminal enforcement provisions of this bill could have a particularly chilling effect on innovation, even more so if the requirements are not tailored to risk and not drafted clearly.

Finally, Bill C-27 should ensure it is interoperable with other regulatory regimes. The AI policy world has changed and progressed quite quickly since Bill C-27 was first introduced in 2022. Many of Canada's most important trading partners, including the U.S., the U.K., Japan and Australia, have since outlined very different decentralized regulatory approaches, where AI regulations and risk mitigation are to be managed by regulators closest to the use cases. While it's commendable that the government has revised its initial approach following feedback from stakeholders, it should give itself the time necessary to get its approach right.

5:20 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

Leveraging emerging international norms and technical standards will ensure that Canada's regulatory regime can be interoperable with those of other leading economies and trading partners. Ultimately, this will help enable global growth for Canada's AI champions. In the meantime, we can and should address specific harms, like the risk of deepfakes for election disinformation, by reviewing existing legislation and crafting specific amendments where needed.

We are committed to sharing our knowledge and expertise with policy-makers as they move forward to promote the responsible use of AI. Thank you so much for the opportunity to be here today.

5:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

I now yield the floor to Google for five minutes.

5:25 p.m.

Jeanette Patell Director, Government Affairs and Public Policy, Google Canada

Good afternoon, Chair and members of the committee. My name is Jeanette Patell and I am the director of government affairs and public policy for Google in Ottawa. I am joined remotely by my colleagues Tulsee Doshi and Will DeVries. Tulsee is a director and head of product in responsible AI at Google. Will is a director on our privacy legal team and advises the company on global privacy laws and data protection compliance. We appreciate the invitation to appear today and to contribute to your consideration of Bill C-27.

As the committee knows, this is important legislation, and important legislation to get right.

Today, we will present a few remarks on the Consumer Privacy Protection Act and the Artificial Intelligence and Data Act. We will be very happy to answer your questions.

We will be submitting a brief to this committee shortly. In it, we outline aspects of the bill that could be improved to ensure better outcomes for businesses, innovators and Canadian consumers.

When Canadians use our services, they are trusting us with their information. This is a responsibility that we take very seriously at Google, and we protect user privacy with industry-leading security infrastructure, responsible data practices and easy-to-use privacy tools that put our users in control.

Google has long championed smart, interoperable and adaptable data protection regulations—rules that will protect privacy rights, enhance trust in the digital ecosystem and enable responsible innovation. We support the government's efforts to modernize Canada's privacy and data protection regulatory framework and to codify important rights and obligations.

We also believe the CPPA would benefit from further consideration and targeted amendments in certain areas. For example, we agree with others, like the Canadian Chamber of Commerce, that consent provisions should be both clarified and tailored to more consequential activities. We also highlight the need for a consistent federal definition of “minors” and clearer protections for minors' rights and freedoms. Improvements to these areas would maintain and enhance Canadian privacy protections, make it easier for businesses to operate across Canada and the world and enable continued innovation throughout the economy.

Turning to the artificial intelligence and data act, as our CEO has said, “AI is too important not to regulate, and too important not to regulate well.” We are encouraged to see governments around the world developing policy frameworks for these new technologies, and we're deeply engaged in supporting these efforts to maximize AI's benefits while minimizing its risks.

Google has been working on AI for a long time, including at our sites in Montreal and Toronto, and in 2017 we reoriented to be an AI-first company. Today AI powers Google search, translate, maps and other services Canadians use every day. We're also using AI to help solve societal issues, from forecasting floods to improving screenings of diseases like breast cancer. Since 2018, our work with these technologies has been guided by our responsible AI principles, which are supported by a robust governance structure and review process. My colleague Tulsee has been at the centre of this work.

Canada has an exceptional opportunity to build on its investments in basic research in artificial intelligence. This committee can help develop a legislative framework that establishes solid public protections while harnessing AI's economic and societal benefits.

We welcome the government's efforts to establish the right guardrails around AI, and we share some of the concerns that others have raised with this committee. We believe the bill can be thoughtfully amended in ways that support the government's objectives without hindering AI's development and use.

There is no one-size-fits-all approach to regulating AI. AI is a multi-purpose technology that takes many forms and spans a wide range of risk profiles. A regulatory framework for these technologies should recognize the vast range of beneficial uses and should weigh the opportunity costs of not developing or deploying AI systems. It should also tailor obligations to the magnitude and likelihood of harm specific to particular use cases. We believe the AIDA should establish a risk-based and proportionate approach tailored to specific applications and focused on ensuring global interoperability via widely accepted compliance tools such as international standards.

We hope to continue to work with the Canadian government, as we have with governments around the world, to build thoughtful, smart regulations that protect Canadians and capture this once-in-a-generation opportunity to strengthen our economy, position Canadian innovators for success on the global stage and drive transformational scientific breakthroughs.

Thank you again for the invitation to appear. We look forward to answering your questions and continuing this important conversation.

5:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Ms. Patell.

I now give the floor to Ms. Curran, from Meta.

5:25 p.m.

Rachel Curran Head of Public Policy, Canada, Meta Platforms Inc.

Thank you, Mr. Chair.

My name is Rachel Curran and I'm the head of public policy for Meta in Canada. It's a pleasure to address the committee this afternoon.

Meta supports risk-based, technology-neutral approaches to the regulation of artificial intelligence. We believe it's important for governments to work together to set common standards and governance models for AI. It's this approach that will enable the economic and social opportunities of an open science approach to AI and also bolster Canadian competitiveness.

Meta has been at the forefront of the development of artificial intelligence for more than a decade. We can talk about that later during this hearing. This innovation has allowed us to connect billions of people and generate real value for small businesses. For our community, AI is what helps people discover and engage with the content they care about. For the millions of businesses, particularly small businesses, that use our platforms, our AI-powered tools make an advertiser's job easier. That's a real game-changer for small and medium-sized businesses that are looking to reach customers who are interested in their products.

In addition, Meta's fundamental AI research team has taken an open approach to AI research, pioneering breakthroughs across a range of industries and sectors. In 2017 we launched our AI research lab in Montreal to contribute to the Canadian AI ecosystem. Today, Meta's global AI research efforts are led by Dr. Joelle Pineau, a world-leading Canadian researcher and a professor at McGill University.

Our Canadian team of researchers has worked on some of the biggest breakthroughs in AI, from developing more diverse and inclusive AI models to improving health care accessibility and patient care, which have benefited communities in Canada and abroad. This work is shared openly with the greater research community, a commitment to open science and a level of transparency that helps Meta set the highest standards of quality and responsibility and ultimately build better AI solutions.

We applaud Canada's leadership on the development of smart regulation and guardrails for AI development, particularly through its leadership on the Global Partnership on AI and the G7 process. We strongly support the work of this committee, of course, and the initial aim of Bill C-27, which is to ensure that AI is developed and deployed responsibly while also ensuring that global regulatory frameworks are aligned, maintaining Canada's status as a world leader in AI innovation and research.

We think AI is advancing so quickly that measures focused on specific technologies could soon become irrelevant and hinder innovation. As we look to the future, we hope that the government will consider a truly risk-based and outcome-focused approach that will be future-proof. In that regard, we would flag a few specific concerns with Bill C-27.

First, one proposed amendment from the minister to this bill would classify content moderation or prioritization systems as “high-impact”. We respectfully disagree that these systems are inherently high risk as defined in the legislation, and suggest that the regulation of risks associated with content that Canadians see online would be better dealt with in pending online harms legislation.

Similarly, we think the proposed regime for general purpose AI is not appropriately tailored to risk and more closely resembles the requirements for truly high-impact systems. We suggest that the obligations for general purpose AI should be harmonized with international frameworks, such as the ongoing G7 Hiroshima process, which I referenced earlier, the White House voluntary commitments and OECD work on AI governance.

Lastly, we'd flag the audit and access powers contemplated by Bill C-27. We think they are at odds with existing frameworks, such as the approach taken by other signatories of the Bletchley Declaration arising out of the recent U.K. AI Safety Summit, including the U.S. and the U.K. Again, we'd encourage Canada to pursue an approach that preserves privacy and is consistent with global standards.

Members, we believe that Meta is uniquely poised to solve some of AI's biggest problems by weaving our learnings from our world-leading research into products that billions of people and businesses can benefit from while continuing to contribute to Canada's vibrant, world-leading AI ecosystem.

We look forward to working with this committee and to answering your questions.

Thank you.

5:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

I now give the floor to Ms. Craig, from Microsoft.

5:30 p.m.

Amanda Craig Senior Director of Public Policy, Office of Responsible AI, Microsoft

Thank you, Mr. Chair and committee members, for the opportunity to testify.

At Microsoft, we believe in the immense opportunity that AI presents to contribute to Canada's growth and to deliver prosperity to Canadians. To truly realize AI's potential and to improve people's lives, we must effectively address the very real challenges and risks of using AI without appropriate safeguards. That's why we have championed the need for regulation that navigates the complexity of AI to strengthen safety and to safeguard privacy and civil liberties.

Canada has been a leader in putting forward a framework for AI, and there are positive aspects of the legislative framework that provide a helpful foundation going forward. However, as it currently stands, Bill C-27 applies the rules and requirements too broadly. It regulates both low-risk and high-risk AI systems in a similar way without adjusting requirements according to risk, and it includes criminal penalties as part of the enforcement regime.

Not all risk is created equal. Intuitively we know that, but it can be difficult to determine risk levels and adjust for them. In our view, the set of rules and requirements in the AIDA should apply to AI systems used where the level of risk is high. For example, the AIDA applies the same rules and regulatory obligations to a high-risk system, such as AI that is used to determine whether to approve a mortgage, and to a low-risk system, such as AI that is used to optimize package delivery routes.

Applying the rules and requirements too broadly has several implications. Businesses in Canada, including small and medium-sized businesses, will need to focus on resource-intensive assessments and third party audits even for low-risk, general purpose systems, rather than focusing on where the risk is highest or on developing new safety systems. A restaurant chain and its AI system for inventory management and food waste reduction will be subject to the same requirements as facial recognition technology. This will spread thin the time, money, talent and resources of Canadian businesses. It will potentially mean finite resources are not sufficiently focused on the highest risk.

Canada's approach is also out of step with that of some of its largest trading partners, including the U.S., the EU, the U.K., Japan and others. In fact, the Canadian law firm Osler has published a comparison of the AIDA with the EU's AI Act, which I'll be happy to submit to the committee. The comparison includes 11 examples where Canada has gone further than the EU, creating a set of unique requirements for businesses operating in Canada.

Going further than the EU does not mean that Canadians will be better protected from the risks of AI. It means that businesses in Canada that are already using lower-risk AI systems could face a more onerous regime than anywhere in the world. Instead, Canadians will be better protected with more targeted regulation. By ensuring that the AIDA is risk-based and provides clarity and certainty on compliance, Canada can set a new standard for AI regulation.

We firmly believe that with the right amendments, it is possible to strike the right balance in the AIDA. You can achieve the crucial objective of reducing harm and protecting Canadians, and you can enable businesses in Canada to be more confident in adopting AI, which will provide enormous benefits for productivity, innovation and competitiveness.

In conclusion, we would recommend, first, better scoping of what is truly high-impact AI. Second, we recommend distinguishing the levels of risk of AI systems and defining requirements according to that level of risk. Third and finally, we recommend rethinking enforcement, including the use of criminal penalties, which is unlike any other jurisdiction in the OECD. This would also ensure that Canada's approach is interoperable with what other global leaders, such as the EU, the U.K. and the U.S., are doing.

We are happy to provide this committee with a written submission detailing our recommendations.

Thank you, Mr. Chair. We look forward to your questions.

5:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

To start the conversation, I'll yield the floor to MP Perkins for six minutes.

5:35 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you, Mr. Chair.

Thank you, witnesses.

I'm going to start at a fairly high level, because we have the world leaders in AI before us today.

It's a pretty special committee meeting among a whole series of special meetings we've had on this bill, but we have Amazon, Google, Meta-Facebook and Microsoft before us. You guys are the world leaders and are putting the most money in everywhere.

This bill was tabled almost two years ago. Just give a yes or no: Were any of your companies consulted on this bill before the bill was tabled?

Please go one at a time.

5:35 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

No, we were not consulted before the bill was tabled.

5:35 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

No, we were not consulted.

5:35 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

No, not before the bill was tabled.

5:35 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

To my knowledge, we were not consulted.

5:35 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

I find it shocking, frankly, that you wouldn't have been consulted. You've had meetings, I assume, since the bill was tabled, because the minister claims 300 meetings, although most of them were with academics and think tanks. In those meetings, did you propose any specific amendments to the bill?

I'll go through this one at a time in the same order.

5:35 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

Further to my remarks, we did advocate that there was a need for greater clarity for developers and deployers in defining their responsibilities in the act—