Thank you for inviting me here to participate in this important study, specifically to discuss AIDA, the artificial intelligence and data act, a component of the Digital Charter Implementation Act.
I am here today in my capacity as the managing director of IAPP's AI governance centre. IAPP is a global, non-profit, policy-neutral organization dedicated to the professionalization of the privacy and AI governance workforces. For context, we have 82,000 members located in 150 countries and over 300 employees. Our policy neutrality is rooted in the idea that no matter what the rules are, we need people to do the work of putting them into practice. This is why we make one exception to our neutrality: We advocate for the professionalization of our field.
My position at IAPP builds on nearly a decade-long effort to establish responsible and meaningful policy and standards for data and AI. Previously, I served as executive director for the Responsible Artificial Intelligence Institute. Prior to that, I worked at the Treasury Board Secretariat, leading the first version of the directive on automated decision-making systems, which I am now happy to see included in the amendments to this bill. I also serve as co-chair for the Standards Council of Canada's AI and data standards collaborative, and I contribute to various national and international AI governance efforts. As such, I am happy to address any questions you may have about AIDA in my personal capacity.
While I have always had a strong interest in ensuring technology is built and governed in the best interests of society, on a personal note, I am now a new mom to seven-month-old twins. This experience has brought up new questions for me about raising children in an AI-enabled society. Will their safety be compromised if we post photos of them on social media? Are the surveillance technologies commonly used at day cares compromising their privacy?
With this, I believe providing safeguards for AI is now more imperative than ever. Recent market research has demonstrated that the AI market size has doubled since 2021 and is expected to grow from around $200 billion in 2023 to nearly $2 trillion by 2030. This demonstrates not only the potential impact of AI on society but also the pace at which it is growing.
This committee has heard from various experts about challenges related to the increased adoption of AI and, as a result, improvements that could be made to AIDA. While the recently tabled amendments address some of these concerns, the reality is that the general adoption of AI is still new and these technologies are being used in diverse and innovative ways in almost every sector. Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Even if it accurately reflects the current state of AI development, it is hard to create a single long-lasting framework that will remain relevant as these technologies continue to change rapidly.
One way of retaining relevance when governing complex technologies is through standards, which is already reflected in AIDA. The inclusion of future agreed-upon standards and assurance mechanisms will, in my experience, help AIDA remain agile as AI evolves. To complement this concept, one additional safeguard being considered in similar policy discussions around the world is the provision of an AI officer or designated AI governance role. We feel the inclusion of such a role could both improve AIDA and help to ensure that its objectives will be implemented, given the dynamic nature of AI. Ensuring the appropriate training and capabilities of these individuals will address some of the concerns raised through this review process, specifically about what compliance will look like, given the use of AI in different contexts and with different degrees of impact.
This concept is aligned with international trends and requirements in other industries, such as privacy and cybersecurity. Privacy law in British Columbia and Quebec includes the provision of a responsible privacy officer to effectively oversee implementation of privacy policy. Additionally, we see recognition of the important role people play in the recent AI executive order in the United States. It requires each agency to designate a chief artificial intelligence officer, who shall hold primary responsibility for managing their agency's use of AI. A similar approach was proposed in a recent private member's bill in the U.K. on the regulation of AI, which would require any business that develops, deploys or uses AI to designate an AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business.
History has shown that when professionalization is not sufficiently prioritized, a daunting expertise gap can emerge. As an example, ISC2's 2022 cybersecurity workforce study discusses the growing cyber-workforce gap. According to the report, there are 4.7 million cybersecurity professionals globally, but there is still a gap of 3.4 million cybersecurity workers required to address enterprise needs. We believe that without a concerted effort to upskill professionals in parallel fields, we will face a similar shortfall in AI governance and a dearth of professionals to implement AI responsibly in line with Bill C-27 and other legislative objectives.
Finally, in a recent survey that we conducted at IAPP on AI governance, 74% of respondents reported that they are currently using AI or intend to within the next 12 months. However, 33% of respondents cited a lack of professional training and certification for AI governance professionals, and 31% cited a lack of qualified AI governance professionals, as key challenges to the effective rollout and operation of AI governance programs.
Legislative recognition and incentivization of the need for knowledgeable professionals would help ensure organizations resource their AI governance programs effectively to do the work.
In sum, we believe that rules for AI will emerge. Perhaps more importantly, we need professionals to put those rules into practice. History has shown that early investment in a professionalized workforce pays dividends later. To this end, as part of our written submission, we will provide potential legislative text to be included in AIDA for your consideration.
Thank you for your time. I am happy to answer any questions you might have.