Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Ana Brandusescu  AI Governance Researcher, McGill University, As an Individual
Alexandre Shee  Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie  Partner, Digital Public
Ashley Casovan  Managing Director, AI Governance Center, International Association of Privacy Professionals

3:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Colleagues, I call this meeting to order.

Welcome to meeting No. 102 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I'd like to welcome our witnesses this afternoon. With us is Ana Brandusescu, AI governance researcher with McGill University.

Good afternoon, Ms. Brandusescu.

I would also like to welcome Alexandre Shee, industry expert and incoming co‑chair of Future of Work, Global Partnership on Artificial Intelligence.

Good afternoon, Mr. Shee.

From Digital Public, we have Bianca Wylie.

Thank you for being with us, Ms. Wylie.

Lastly, from the International Association of Privacy Professionals, we have Ashley Casovan, managing director of the AI Governance Center.

Thank you as well for being with us, Ms. Casovan.

Without further ado, I will yield the floor for five minutes to Ms. Brandusescu.

3:35 p.m.

Ana Brandusescu AI Governance Researcher, McGill University, As an Individual

Good afternoon. Thank you for having me here today.

My name is Ana Brandusescu. I research the governance of AI technologies in government.

In my brief, co-authored with public participation and AI expert Dr. Renee Sieber, we argue that the AIDA is a missed opportunity for shared prosperity. Shared prosperity is an economic concept where the benefits of innovation are distributed equitably among all segments of society. Innovation is taken out of the hands of the few—in this case, the AI industry—and put in the hands of the many.

Today, I will present four problems and three recommendations from our brief.

The first problem is that AIDA implies but does not ensure shared prosperity. The preamble of the bill states, “Whereas trust in the digital and data-driven economy is key to ensuring its growth and fostering a more inclusive and prosperous Canada”. However, what we see is a concentration of wealth in the AI industry, especially for big tech companies, which does not guarantee that the prosperity will trickle down to Canadians. Being “data-driven” can just as easily equal mass data surveillance and more opportunities to monetize data.

Trust, too, can be easily conflated in Canada with social acceptance of AI, telling people over and over that AI is invariably good. You may have heard the phrase “show, don't tell”. Repeating that AI is beneficial will not convince marginalized people who are subject to AI harms, such as false arrests. AI harms are extensively covered by the Canadian parliamentary study titled “Facial Recognition Technology and the Growing Power of Artificial Intelligence”.

The second problem is the AIDA's centralization of power in ISED and the Minister of Industry. The current set-up is prone to regulatory capture. We cannot trust ISED—an agency placed in the position of both promoting and regulating AI, with no independent oversight for the AIDA—to ensure shared prosperity. These dual roles, as the experience of nuclear regulatory agencies shows, are often incompatible, so ISED will inevitably favour commercial interests over accountability in AI development.

The third problem is that public consultation is absent. To date, there has been no demonstrable public consultation on AIDA. Tech policy expert Christelle Tessono and many others have raised this concern in their briefs and in articles. ISED's consultation process thus far has been selective. Many civil society and labour organizations were largely excluded from consultation on the drafting of the AIDA.

The fourth problem is that the AIDA does not include workers' rights. Workers in Canada and globally cannot share in the prosperity when the conditions under which they develop AI systems include workplace surveillance and mental health crises. Researchers have extensively documented the exploitative nature of AI systems development for data workers. For instance, the work takes a huge toll on their mental health, even leading to suicide.

In 2018, I learned from digital governance expert Nanjira Sambuli about Sama, a Silicon Valley company that works for big tech and hires data workers all over the world, including in Kenya. The work performed under the contracts Sama held with Facebook/Meta and OpenAI has been found to traumatize workers.

We have also seen many cases of IP theft from creators, as AI governance expert Blair Attard-Frost has written about in their brief on generative AI.

To share in the prosperity promised by AI, we propose three recommendations.

First, we need a redraft of the AIDA outside of ISED to ensure public and private sector accountability. Multiple departments and agencies that are already involved in work on responsible AI need to co-create the AIDA for the private and the public sector and prevent the use of harmful technologies. This version of the AIDA would hold companies like Palantir, as well as national security and law enforcement agencies, accountable.

Second, we need AI legislation to incorporate robust workers' rights. Worker protection means unions, lawsuits and safe spaces for whistle-blowers. Kenyan data workers unionized and sued Meta due to the company's exploitative working conditions. The Supreme Court ruled in their favour. Canada can follow the lead of the Kenyan government in listening to its workers.

Similarly, in the actors' union strike, American workers stopped production companies from unilaterally deciding when AI could and could not be used, showing that workers can indeed drive regulation. Beyond unions and strikes, workers need safe and confidential channels to report harms. That is why whistle-blower protection is essential to workers' rights and responsible AI.

Third and lastly, we need meaningful public participation. Government has a responsibility to protect its people and ensure shared prosperity. A strong legislative framework demands meaningful public participation. Participation will actually drive innovation, not slow it down, because the public will tell us what's right for Canada.

Thank you.

3:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Ms. Brandusescu.

I'll now give the floor to Mr. Shee for five minutes.

Go ahead, Mr. Shee.

December 7th, 2023 / 3:40 p.m.

Alexandre Shee Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Thank you, members of the committee, for the opportunity to speak with you today.

My name is Alexandre Shee. I'm the incoming co-chair of the future of work working group of the Global Partnership on AI, of which Canada is a member state. I'm an executive at a multinational AI company, a lawyer in good standing and an investor and adviser to AI companies, as well as the proud father of two boys.

Today, I'll speak exclusively on part 3 of the bill, which is the artificial intelligence and data act, as well as the recently proposed amendments.

I believe we should pass the act. However, it needs significant amendments beyond those currently proposed. In fact, the act fails to address a key portion of the AI supply chain—data collection, annotation and engineering—which represents 80% of the work done in AI. That 80% of the work is done manually, by humans.

Failing to require disclosures on the AI supply chain will lead to bias, low-quality AI models and privacy issues. More importantly, it will lead to the violation of the human rights of millions of people on a daily basis.

Recent amendments have addressed some of the deficiencies in the act by including certain steps in the AI supply chain, as well as requiring the preservation of records of the data used. However, the law does not consider the AI development process as a supply chain, with millions of people involved in powering AI systems. No disclosure mechanism is put in place to ensure that Canadians are able to make informed decisions on the AI systems they choose: to know that they're fair and high quality, and that they respect human rights.

If I unpack that statement, there are three takeaways that I hope to leave you with. The first is that the act as drafted does not regulate the largest portion of AI systems: data collection, annotation and engineering. The second is that failing to address this fails to protect human rights for millions of people, including vulnerable Canadians. In turn, this leads to low-quality artificial intelligence systems. The third is that the act can help protect those involved in the AI supply chain and empower people to choose high-quality and fair artificial intelligence solutions if it is enacted with disclosure requirements.

Let me dive deeper into these three points, with additional detail on why these considerations are relevant for the future iteration of the act.

Self-regulation in the AI supply chain is not working. The lack of a regulatory framework and disclosures of the data collection, annotation and engineering aspects of the AI supply chain is having a negative impact on millions of lives today. These people are mostly in the global south, but they also include vulnerable Canadians.

There is currently a race to the bottom, meaning that basic human rights are being disregarded to diminish costs. In a recent well-documented investigative journalism piece featured in Wired magazine, entitled “Underage Workers Are Training AI” and published on November 15, 2023, a 15-year-old Pakistani child describes working on tasks that pay as little as one cent to train AI models. Even in higher-paying jobs, the amount of time he needs to spend doing unpaid research means that he needs to work between five and six hours to complete an hour of real-time work—all to earn two dollars. He is quoted as saying, “It’s digital slavery”. His statement echoes similar reporting by journalists, in-depth studies of the AI supply chain by academics from around the world, and the work of international organizations such as the Global Partnership on Artificial Intelligence.

However, while these abuses are well documented, they are currently part of the back end of the AI development process, and Canadian firms, consumers and governments interacting with AI systems do not have a mechanism to make informed choices about abuse-free systems. Requiring disclosures—and eventually banning certain practices—will help to avoid a race to the bottom in the data enrichment and validation industry, and enable Canadians to have better, safer AI that does not violate human rights.

If we borrow from Bill S-211, Canada’s recently passed “modern slavery act”, we see that creating disclosure obligations helps foster more resilient supply chains and offers Canadians products free from forced or child labour.

Transparent and accountable supply chains have helped respect human rights in countless industries, including the garment industry, the diamond industry and agriculture, to name only a few. The information requirements in the act could include information on data enrichment and specifically how data is collected and/or labelled, a general description of labelling instructions and whether it was done using identifiable employees or contractors, procurement practices that include human rights standards, and validating that steps have been taken so that no child or forced labour was used in the process.

Companies already prepare instructions for all aspects of the AI supply chain. The disclosure would formalize what is already common practice. Furthermore, there are options in the AI supply chain that create high-quality jobs that respect human rights. The Canadian government should immediately require these disclosures as part of its own procurement processes of AI systems.

Having a disclosure mechanism would also be a complement to the audit authority bestowed on the minister under the act. Creating equivalent reporting obligations on the AI supply chain would augment the current law and ensure that quality, transparency and respect of human rights are part of AI development. It would allow Canadians to benefit from innovative solutions that are better, safer and aligned with our values.

I hope you will consider the proposal today. You can have a positive impact on millions of lives.

Thank you.

3:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Shee.

I'll now yield the floor to Ms. Wylie for five minutes.

3:45 p.m.

Bianca Wylie Partner, Digital Public

My name is Bianca Wylie. I work in public interest digital governance as a partner at Digital Public. I've worked at both a tech start-up and a multinational. I've also worked in the design, development and support of public consultations for governments and government agencies.

Thank you for the opportunity to speak with you today about AIDA. As far as amendments go, my suggestion would be to wholesale strike AIDA from Bill C-27. Let's not minimize either the feasibility of this amendment or the strong case before us to do so. I'm here to hold this committee accountable for the false sense that something is better than nothing on this file. It's not, and you're the ones standing between the Canadian public and further legitimizing this undertaking, which is making a mockery of democracy and the legislative process.

AIDA is a complexity ratchet. It's a nonsensical construct detached from reality. It's building increasingly intricate castles of legislation in the sky. It's thinking about AI that is detached from operations, from deployment and from context. ISED's work on AIDA highlights how open to hijacking our democratic norms are when you wave around a shiny orb of innovation and technology.

As Dr. Lucy Suchman writes, “AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is.” I hope you might refuse to continue a charade that has had spectacular carriage through the House of Commons on the back of this socio-psychological phenomenon of assuming that someone else knows what's going on here.

This committee has continued to support a minister basically legislating on the fly. How are we writing laws like this? What is the quality control at the Department of Justice? Is it just that we'll do this on the fly when it's tech, as though this is some kind of thoughtful, adaptive approach to law? No. The process of AIDA reflects the very meaning of law becoming nothing more than a political prop.

The case to pause AIDA and reroute it to a new and separate process begins at its beginning. If we want to regulate artificial intelligence, we have to have a coherent “why”. We have never received a coherent why for AIDA from this government. Have you, as members of this committee, received an adequate backstory procedurally on AIDA? Who created the urgency? How was it drafted, and from what perspective? What work was done inside government to think about this issue across existing government mandates?

If we were to take this bill out to the general public for thoughtful discussion, a process that ISED actively avoided doing, it would fall apart under the scrutiny. There is use of AI in a medical setting versus use on a manufacturing production floor versus use in an educational setting versus use in a restaurant versus use to plan bus routes versus use to identify water pollution versus use in a day care—I could do this all day. All of these create real potential harms and benefits. Instead of having those conversations, we're carrying some kind of delusion that we can control and categorize how something as generic as advanced computational statistics, which is what AI is, will be used in reality, in deployment, in context. The people who can help us have those conversations are not, and have never been, in these rooms.

AIDA was created by a highly insular, extremely small circle of people—tiny. When there is no high-order friction in a policy conversation, we're talking to ourselves. Taking public engagement on AI seriously would force rigour. By getting away with this emergency and urgency narrative, ISED is diverting all of us from the grounded, contextual thinking that has also been an omission in both privacy and data protection thought. That thinking, as seen again in AIDA, continues to deepen and solidify power asymmetries. We're making the same mistake again for a third time.

This is a “keep things exactly the same, only faster” bill. If this bill were law tomorrow, nothing substantial would happen, which is exactly the point. It's an abstract piece of theatre, disconnected from Canada's geopolitical economic location and from the irrational exuberance of a venture capital and investment community. This law is riding on the back of investor enthusiasm for an industry that has not even proven its business model out. On top of that, it's an industry that is highly dependent on the private infrastructures of a handful of U.S. companies.

Thank you.

3:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

I'll now give the floor to Ms. Casovan for five minutes.

3:50 p.m.

Ashley Casovan Managing Director, AI Governance Center, International Association of Privacy Professionals

Thank you for inviting me here to participate in this important study, specifically to discuss AIDA, a component of the digital charter implementation act.

I am here today in my capacity as the managing director of IAPP's AI governance centre. IAPP is a global, non-profit, policy-neutral organization dedicated to the professionalization of the privacy and AI governance workforces. For context, we have 82,000 members located in 150 countries and over 300 employees. Our policy neutrality is rooted in the idea that no matter what the rules are, we need people to do the work of putting them into practice. This is why we make one exception to our neutrality: We advocate for the professionalization of our field.

My position at IAPP builds on nearly a decade-long effort to establish responsible and meaningful policy and standards for data and AI. Previously, I served as executive director for the Responsible Artificial Intelligence Institute. Prior to that, I worked at the Treasury Board Secretariat, leading the first version of the Directive on Automated Decision-Making, which I am now happy to see included in the amendments to this bill. I also serve as co-chair for the Standards Council of Canada's AI and data standards collaborative, and I contribute to various national and international AI governance efforts. As such, I am happy to address any questions you may have about AIDA in my personal capacity.

While I have always had a strong interest in ensuring technology is built and governed in the best interests of society, on a personal note, I am now a new mom to seven-month-old twins. This experience has brought up new questions for me about raising children in an AI-enabled society. Will their safety be compromised if we post photos of them on social media? Are the surveillance technologies commonly used at day cares compromising their privacy?

With this, I believe providing safeguards for AI is now more imperative than ever. Recent market research has demonstrated that the AI market size has doubled since 2021 and is expected to grow from around $200 billion in 2023 to nearly $2 trillion in 2030. This demonstrates not only the potential impact of AI on society but also the pace at which it is growing.

This committee has heard from various experts about challenges related to the increased adoption of AI and, as a result, improvements that could be made to AIDA. While the recently tabled amendments address some of these concerns, the reality is that the general adoption of AI is still new and these technologies are being used in diverse and innovative ways in almost every sector. Creating perfect legislation that will address all the potential impacts of AI in one bill is difficult. Even if it accurately reflects the current state of AI development, it is hard to create a single long-lasting framework that will remain relevant as these technologies continue to change rapidly.

One way of retaining relevance when governing complex technologies is through standards, an approach already reflected in AIDA. The inclusion of future agreed-upon standards and assurance mechanisms seems likely, in my experience, to help AIDA remain agile as AI evolves. To complement this concept, one additional safeguard being considered in similar policy discussions around the world is the provision of an AI officer or designated AI governance role. We feel the inclusion of such a role could both improve AIDA and help to ensure that its objectives are implemented, given the dynamic nature of AI. Ensuring the appropriate training and capabilities of these individuals will address some of the concerns raised through this review process, specifically about what compliance will look like given the use of AI in different contexts and with different degrees of impact.

This concept is aligned with international trends and requirements in other industries, such as privacy and cybersecurity. Privacy law in British Columbia and Quebec includes the provision of a responsible privacy officer to effectively oversee implementation of privacy policy. Additionally, we see recognition of the important role people play in the recent AI executive order in the United States. It requires each agency to designate a chief artificial intelligence officer, who shall hold primary responsibility for managing their agency's use of AI. A similar approach was proposed in a recent private member's bill in the U.K. on the regulation of AI, which would require any business that develops, deploys or uses AI to designate an AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business.

History has shown that when professionalization is not sufficiently prioritized, a daunting expertise gap can emerge. As an example, ISC2's 2022 cybersecurity workforce study discusses the growing cyber-workforce gap. According to the report, there are 4.7 million cybersecurity professionals globally, but there is still a gap of 3.4 million cybersecurity workers required to address enterprise needs. We believe that without a concerted effort to upskill professionals in parallel fields, we will face a similar shortfall in AI governance and a dearth of professionals to implement AI responsibly in line with Bill C-27 and other legislative objectives.

Finally, in a recent survey that we conducted at IAPP on AI governance, 74% of respondents identified that they are currently using AI or intend to within the next 12 months. However, 33% of respondents cited a lack of professional training and certification for AI governance professionals, and 31% cited a lack of qualified AI governance professionals as key challenges to the effective rollout and operation of AI governance programs.

Legislative recognition and incentivization of the need for knowledgeable professionals would help ensure organizations resource their AI governance programs effectively to do the work.

In sum, we believe that rules for AI will emerge. Perhaps more importantly, we need professionals to put those rules into practice. History has shown that early investment in a professionalized workforce pays dividends later. To this end, as part of our written submission, we will provide potential legislative text to be included in AIDA, for your consideration.

Thank you for your time. I am happy to answer any questions you might have.

4 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

To start the discussion, I'll yield the floor to MP Perkins, for six minutes.

4 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you, Mr. Chair.

Ms. Wylie, the minister talked a lot about 300 consultations after he tabled the bill, not before. Looking at the list that he provided after we asked for it, I see that 28 were with academics and 216 were basically with big business and not really with people who are impacted, so it was sort of the converted talking to the converted.

I'd like you to talk a little more, if you could, to expand on your belief about why you think a proper consultation, with this bill defeated and reintroduced in a new format, would produce a better result.

4 p.m.

Partner, Digital Public

Bianca Wylie

Certainly. Thank you.

I think, even with academics, they're not working in operations. The reason I listed the examples I gave is that I think AI starts to make sense when we talk about it in a specific context: as mentioned, in manufacturing, in health care, in dentists' offices. We could go through all of society here. We need to talk about people who are working in those spaces, not general specialists.

This is what I mean. Even among the critics, people have a vested interest in going way down into the complexity instead of zooming out and asking why we are doing this. What are we trying to accomplish? The answers to those questions are going to be very different per sector. What looks beneficial and what looks harmful differ completely from sector to sector.

I think that's why we need to restart the conversation from the point of what we are trying to do here, and then we can talk about how we would do it. You can't start the “how” before you get your “why” clear.

4 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

What this bill outlines—and it was a bolt-on to a previously failed privacy bill—is driven by trying to imitate what's going on in Europe, but it basically says that we're going to legislate the highest level of harms in AI. It has already failed to define AI well; the minister has already had to revise the definition.

Are the highest risks or harms the only potential harms out there, and what are all the levels? There are various levels of AI that can impact people besides the highest level that they're legislating.

4 p.m.

Partner, Digital Public

Bianca Wylie

Absolutely.

There are two things on this point. One of them is that harm is always contextual. Something can seem absolutely safe in one setting, say, the data collection your doctor has, and then you turn around and someone else has it, and it's dangerous. Harm is never absent context and use, ever, so I would argue that structural categorization is incorrect.

The reason we look to Europe all the time and ask what Europe is doing.... I know it's appealing to say that what they are doing over there may be thoughtful, but geopolitically, from an economic perspective, they want their own Google, Amazon and Microsoft. When you gin up all this complexity, you protect your national industry. This is a way to enable the economy to grow, based on domestic rules.

There is, then, that broad harmonization conversation you're hearing. How well has that worked to date globally with data protection law? It has not. It has not worked with privacy either.

Those are the two pieces of a response to that.

4 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

We've had a lot of discussion here about the first two parts of the bill, about whether or not privacy is a fundamental human right and whether or not this bill, in spite of the late-stage, eleventh-hour conversion of the minister in recognizing that, still has a lot of exceptions in it that give the paramount authority to business to override the fundamental right.

In the AIDA bill, there's no mention of human rights, personal privacy or anything else, but there is mention of creating a super ministry of undefined power and undefined regulation at ISED to rule it all. What's an alternative to having one major Ottawa super agency that thinks it can rule the entire AI world in Canada? What's the alternative?

4:05 p.m.

Partner, Digital Public

Bianca Wylie

There is at least one alternative, which is why I keep going back.... The groundwork, the homework for this bill was not done. Even before you go out to the public, you go within the government and ask if this is the problem we're seeing in banking, in health care and in automobiles. We start from there, and then we think, “What do we do next from an adaptive perspective?”

We don't reinvent the world in the name of artificial intelligence. It's disrespectful to the existing status of the government, of democracy and of accountability. I think you at least start there. When things don't fall into there, then let's get specific and regulate. Let's get specific and talk about accountability. We don't start building the world around artificial intelligence here and ignore everything else that came before.

4:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Should any future legislation outline all the levels of artificial intelligence as we know them, from the repetitive-task work that gets done in a business right through to computer efficiency?

4:05 p.m.

Partner, Digital Public

Bianca Wylie

I genuinely don't think this is the right approach from a structural question perspective, because artificial intelligence, if we break it down, is pattern matching and advanced statistics. We didn't regulate mathematics. We didn't regulate statistics. We didn't regulate databases. We didn't regulate general software. I don't think the software industry did badly without general regulation.

It's just to say—

4:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

That reminds me of a question I've been meaning to ask and haven't been able to ask anyone yet.

We have not yet regulated any level of computing power in the world, but we are here trying to. Why?

4:05 p.m.

Partner, Digital Public

Bianca Wylie

It's industry. Capital is looking for the next place to go. I'm only saying this because the business model isn't even proven yet. Do you know who knows they're making money? It's Google, Microsoft and Amazon. For every other start-up that is riding on the back of those companies, let's talk about where they are in two years. We're legislating for that context, which is novel and has not arrived yet, and that's the driving feature here. Make it make sense, please.

4:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

I know that—

4:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, MP Perkins.

I'll now yield the floor to MP Van Bynen for six minutes.

4:05 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you very much, Mr. Chair.

One thing that I'm enjoying very much about this committee is the divergent perspectives that we're hearing, the level of engagement and the level of intelligence in approaching the issue.

The reality is that the genie is out of the bottle. My concern is that we're not going to go back to where we were before.

My first question is for Ms. Casovan.

In April 2023, you and 75 other researchers co-signed a letter calling on the government to move forward with the artificial intelligence and data act and saying that further postponing the act would be out of sync with the speed at which technology is being developed. Is your position the same today as when you co-signed that letter?

4:05 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I would note that I did that in my former capacity as the executive director of the Responsible AI Institute. I still continue to serve on the board of the Responsible AI Institute, so I'll share this in that capacity, given the policy neutrality of my current position.

That said, yes, I definitely do believe that. As you mentioned, the genie is out of the bottle. I have a lot of respect for Bianca and her perspective. Ana spoke to the harms and the challenges that these systems create, and I do think there is a fundamental delta between AI technologies and other technologies we have previously regulated sector by sector. I agree that we need to look at existing legislation and figure out how AI impacts those industries: instead of having legislation that is specific to AI, figure out how we augment what exists and how that complements this work. However, that still leaves a lot of systems and contexts that don't get resolved through that process.

4:05 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you.

That turns me over to Mr. Shee.

According to the website, the Global Partnership on Artificial Intelligence is “a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities.” It includes 29 countries.

How could the work of the Global Partnership on Artificial Intelligence working group provide a framework for the implementation of the laws that will be governing artificial intelligence, such as the artificial intelligence and data act?

4:10 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

It's an excellent question.

The purpose of the group is to bring world-renowned experts and policy-makers together around the table to actually think about the practical applications of artificial intelligence.

One of the artifacts that recently came out of the future of work working group was 10 policy recommendations on what we, together with the International Labour Organization, have identified as the “great unknown”: the idea that 8% of the working population will be impacted in an unknown way by artificial intelligence going forward, and that there is an opportunity to act.

It's an incredible organization that brings together stakeholders from around the world. We discuss, in very practical terms, how to apply legislation. It would be very open to continuing to be consulted in this process, and it can help give concrete examples of how AI can be built responsibly and benefit humanity.