Thank you for your invitation.
It's a privilege to be here as the committee conducts its study of the artificial intelligence and data act within Bill C-27.
AWS has a strong presence in and commitment to Canada. We have two infrastructure regions here, in Montreal and Calgary, to support our Canadian customers, and we plan to invest nearly $25 billion in this digital infrastructure by 2037.
Globally, more than 100,000 organizations of all sizes are using AWS AI and machine-learning services. They include Canadian start-ups, national newspapers, professional sports organizations, federally regulated financial institutions, retailers, public institutions and more.
Specifically, AWS offers a set of capabilities across three layers of the technology stack. At the bottom is the AI infrastructure layer, where we offer our own high-performance custom chips, as well as other computing options. At the middle layer, we provide the broadest selection of foundation models on which organizations build generative AI applications. This includes both Amazon-built models and those from other leading providers, such as Cohere—a Canadian company—Anthropic, AI21, Meta—who's here today—and Stability AI. At the top layer of the stack, we offer generative AI applications and services.
AWS continually invests in the responsible development and deployment of AI. We dedicate efforts to help customers innovate and implement necessary safeguards. Our efforts towards safe, secure and responsible AI are grounded in a deep collaboration with the global community, including in work to establish international technical standards. We applaud the Standards Council of Canada's continued leadership here.
We are excited about how AI will continue to grow and transform how we live and work. At the same time, we're also keenly aware of the potential risks and challenges. We support government's efforts to put in place effective, risk-based regulatory frameworks while also allowing for continued innovation and a practical application of the technology.
I'm pleased to share some thoughts on the approach Bill C-27 proposes.
First, AI regulations must account for the multiple stakeholders involved in the development and use of AI systems. Because the AI value chain is complex, the minister's recent clarification helping to define rules for AI developers and deployers is a positive development. Developers are those who make available general-purpose AI systems or services, and deployers are those who implement or deploy those AI systems.
Second, success in deploying responsible AI is often very use case- and context-specific. Regulation needs to differentiate between higher- and lower-risk systems. Trying to regulate all applications with the same approach is impractical and can inadvertently stifle innovation.
Because the risks associated with AI are dependent on context, regulations will be most effective when they target specific high-risk uses of the technology. While Bill C-27 acknowledges a conceptual differentiation between high- and low-impact applications of AI, we are concerned that, even with the additional clarifications, the definition of “high impact” is still too ambiguous, capturing a number of use cases that would be unnecessarily subject to costly and burdensome compliance requirements.
As a quick example, the use of AI by a peace officer is deemed high impact. Is it still high impact if it includes the use of autocorrect when filling out a traffic violation? Laws and regulations must clearly differentiate between high-risk applications and those that pose little or no risk. This is a core principle that we have to get right. We should be very careful about imposing regulatory burdens on low-risk AI applications that could provide much-needed productivity boosts to Canadian companies both big and small.
Third, the criminal enforcement provisions of this bill could have a particularly chilling effect on innovation, even more so if the requirements are not tailored to risk and are not drafted clearly.
Finally, Bill C-27 should ensure it is interoperable with other regulatory regimes. The AI policy world has changed and progressed quite quickly since Bill C-27 was first introduced in 2022. Many of Canada's most important trading partners, including the U.S., the U.K., Japan and Australia, have since outlined very different decentralized regulatory approaches, where AI regulations and risk mitigation are to be managed by regulators closest to the use cases. While it's commendable that the government has revised its initial approach following feedback from stakeholders, it should give itself the time necessary to get its approach right.