Thank you, Mr. Chair and committee members, for the opportunity to testify.
At Microsoft, we believe AI presents an immense opportunity to contribute to Canada's growth and to deliver prosperity to Canadians. To truly realize AI's potential and to improve people's lives, we must effectively address the very real challenges and risks of using AI without appropriate safeguards. That's why we have championed the need for regulation that navigates the complexity of AI to strengthen safety and to safeguard privacy and civil liberties.
Canada has been a leader in putting forward a framework for AI, and there are positive aspects of that framework that provide a helpful foundation going forward. However, as it currently stands, Bill C-27 applies its rules and requirements too broadly. It regulates low-risk and high-risk AI systems in much the same way, without adjusting requirements according to risk, and it includes criminal penalties as part of the enforcement regime.
Not all risk is created equal. Intuitively we know that, but it can be difficult to determine risk levels and adjust for them. In our view, the rules and requirements in the AIDA should apply to AI systems and uses where the level of risk is high. As it stands, however, the AIDA applies the same rules and regulatory obligations to a high-risk system, such as AI used to determine whether to approve a mortgage, and to a low-risk system, such as AI used to optimize package delivery routes.
Applying the rules and requirements too broadly has several implications. Businesses in Canada, including small and medium-sized businesses, will need to devote themselves to resource-intensive assessments and third-party audits even for low-risk, general-purpose systems, rather than focusing on where the risk is highest or on developing new safety systems. A restaurant chain's AI system for inventory management and food waste reduction will be subject to the same requirements as facial recognition technology. This will stretch the time, money, talent and resources of Canadian businesses thin, and it could mean that finite resources are not sufficiently focused on the highest risks.
Canada's approach is also out of step with that of some of its largest trading partners, including the U.S., the EU, the U.K., Japan and others. In fact, the Canadian law firm Osler has published a comparison of the AIDA with the EU's AI Act, which I'll be happy to submit to the committee. The comparison includes 11 examples where Canada has gone further than the EU, creating a set of unique requirements for businesses operating in Canada.
Going further than the EU does not mean that Canadians will be better protected from the risks of AI. It means that businesses in Canada already using lower-risk AI systems could face a more onerous regime than anywhere else in the world. Canadians will instead be better protected by more targeted regulation. By ensuring that the AIDA is risk-based and provides clarity and certainty on compliance, Canada can set a new standard for AI regulation.
We firmly believe that, with the right amendments, it is possible to strike the right balance in the AIDA. You can achieve the crucial objective of reducing harm and protecting Canadians while enabling businesses in Canada to adopt AI with greater confidence, which will deliver enormous benefits for productivity, innovation and competitiveness.
In conclusion, we would recommend, first, better scoping of what is truly high-impact AI. Second, we recommend distinguishing the levels of risk of AI systems and defining requirements according to each level of risk. Third and finally, we recommend rethinking enforcement, including the use of criminal penalties, an approach found in no other jurisdiction in the OECD. These changes would also ensure that Canada's approach is interoperable with those of other global leaders, such as the EU, the U.K. and the U.S.
We are happy to provide this committee with a written submission detailing our recommendations.
Thank you, Mr. Chair. We look forward to your questions.