Evidence of meeting #89 for Human Resources, Skills and Social Development and the Status of Persons with Disabilities in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

James Bessen  Professor, Technology & Policy Research Initiative, Boston University, As an Individual
Angus Lockhart  Senior Policy Analyst, The Dais at Toronto Metropolitan University
Olivier Carrière  Executive Assistant to the Quebec Director, Unifor
David Autor  Ford Professor, Massachusetts Institute of Technology, As an Individual
Gillian Hadfield  Chair and Director, Schwartz Reisman Institute for Technology and Society, As an Individual
Théo Lepage-Richer  Social Sciences and Humanities Research Council and Fonds de recherche du Québec Postdoctoral Fellow, University of Toronto, As an Individual
Nicole Janssen  Co-Founder, AltaML Inc.
Jacques Maziade  Committee Clerk

11:45 a.m.

Professor, Technology & Policy Research Initiative, Boston University, As an Individual

James Bessen

Yes. AI can make recommendations about what to trim, but it can't do the trimming itself. Obviously, trimming the process requires legal and regulatory approval. It's a good idea.

11:45 a.m.

Conservative

Scott Aitchison Conservative Parry Sound—Muskoka, ON

Thank you.

Mr. Lockhart, I will ask you to just provide some comment here on the same question, if you wouldn't mind.

11:45 a.m.

Senior Policy Analyst, The Dais at Toronto Metropolitan University

Angus Lockhart

I would say two things. The first is that when we switch from talking about AI use in private workplaces to AI use by government, there are a lot of different questions that this raises and a lot of different issues that come about. In the private sector, a lot of the time we get to just focus on productivity, but in the public sector there's a lot more to consider than productivity. You can't just talk about making the process faster, because I think there's an important equity concern here even when it comes to housing applications. Handing over to an AI tool any kind of judgment on that makes for a real challenge.

The second is more on the topic of using AI to cut down on regulations. I think you're going to really run into a challenge there, because there are real social considerations, as opposed to just productivity or efficiency considerations, that go into that kind of regulation system. It seems to me that it's probably better done and left to humans and human decision-making for now.

11:45 a.m.

Conservative

Scott Aitchison Conservative Parry Sound—Muskoka, ON

I will throw it to you, Mr. Carrière, as well, if you're interested in commenting on that, sir.

11:45 a.m.

Executive Assistant to the Quebec Director, Unifor

Olivier Carrière

I will refrain from answering that particular question about the use of artificial intelligence and housing issues. I don’t think I have anything new to add.

However, I will reiterate that we need to learn more about these tools. The way to better understand them is to talk about them, to provide a framework that forces employers to explain to their employees what they want to do, the goal they’re trying to achieve, the changes that will be made to their workplace and the repercussions on people’s autonomy.

In a context where augmented work will occur, that’s terrific. In a context where we’re only getting diminished work results, it’s problematic. It all begins with knowledge. We need to know what we’re dealing with. We don’t even know whether we’re dealing with algorithmic tools for automated decisions or semi-automated decisions or whether they’re symbolic algorithms or machine learning algorithms. Those are things we simply don’t know. Workers don’t know if the algorithmic tool is capable of thinking for itself or if it’s just following a decision tree.

We’re a long way from understanding. We need to develop mechanisms to learn more. Once we do…

11:50 a.m.

Liberal

The Chair Liberal Bobby Morrissey

Thank you, Mr. Carrière.

Mr. Kusmierczyk, go ahead for five minutes, please.

11:50 a.m.

Liberal

Irek Kusmierczyk Liberal Windsor—Tecumseh, ON

Thank you so much, Mr. Chair. I have a question for Mr. Carrière.

You know, Liberals believe in the power of the bargaining table. That's why we introduced Bill C-58, which will ban the use of replacement workers. That's what differentiates us from the Conservative Party: We believe in the power of the bargaining table and we're putting forward the ban on replacement workers.

Are you able to comment? Have you already seen the spectre of AI being part of discussions at the bargaining table? Are you currently seeing negotiations with employers? Are you seeing AI being raised in those bargaining discussions? I'm not sure how much time you've spent at those bargaining tables, but can you tell us a little about whether it's part and parcel of those discussions already?

11:50 a.m.

Executive Assistant to the Quebec Director, Unifor

Olivier Carrière

Thank you for the question.

Presently, this is not something that’s openly and clearly discussed at the bargaining table. We aren’t discussing it. For example, recently, the St. Lawrence Seaway was closed for eight days. Could an algorithmic management tool one day manage the locks remotely? Very likely. Will this lead to job losses? Quite possibly. Is this being discussed at the bargaining table? No, it’s not on the table at all. There is no disclosure.

It’s like asking workers to use up all their bargaining capital, an expression we use. Instead of seeking to improve their working conditions, they’d be asked to use all their bargaining capital to ask for transparency about artificial intelligence. That’s not something workers are interested in. Employers are not disclosing how such tools are being integrated, or what their future impact will be. There’s a huge demand on workers’ participation to populate the databases of these tools and to correct the margins of error, but they’re not told how this will affect their jobs or the evolution of their jobs.

So the dialogue is non-existent. We have to start somewhere. Of course, the bargaining table is a start, but for all the sectors that are not represented, there have to be mechanisms in place for that dialogue to take place.

11:50 a.m.

Liberal

Irek Kusmierczyk Liberal Windsor—Tecumseh, ON

I appreciate that response. I know that Unifor, even back in 2017, was hosting conferences and meetings on AI and on technology, so you're definitely not new to this issue; you're very much forward-looking.

I want to ask if there is dialogue between unions. For example, are there conversations between Unifor and, let's say, the UFCW in the food-processing and food-picking sector? Are there conversations with other unions—you mentioned, for example, ports—to have that discussion? Are there conversations taking place between unions, as well, regarding the concerns about AI?

11:55 a.m.

Executive Assistant to the Quebec Director, Unifor

Olivier Carrière

Yes, there are plenty of conversations between the groups, because unions are sharing what little knowledge they’ve acquired. We realize that all of this is in its infancy. Certain aspects of technology were introduced 15 years ago, and today, with the advent of artificial intelligence, they’re taking on incredible dimensions.

Unions, not just American and Canadian unions, but international unions too, are exchanging best practices or examples of framework measures that could be included in collective agreements or in legislation.

So there are discussions, but the observation remains the same: our knowledge on this subject is in its infancy. We know nothing. This dialogue needs to take place with employers to devise solutions. The aim is not to limit or reduce the effect of AI-related technologies, but to ensure that they represent a positive addition to the workplace, rather than the opposite.

11:55 a.m.

Liberal

The Chair Liberal Bobby Morrissey

Thank you, Mr. Kusmierczyk and Monsieur Carrière.

Ms. Chabot, you have two and a half minutes.

11:55 a.m.

Bloc

Louise Chabot Bloc Thérèse-De Blainville, QC

Thank you, Chair.

Mr. Carrière, I’d like to ask you a question about the employer-employee relationship.

When an algorithm that has built a decision tree is used to perform a function, what happens if something goes wrong? Who’s the boss in such a situation? I think this changes the employer-employee relationship.

I’m quite surprised to see that currently, there isn’t more upstream dialogue about what’s going on. At the same time, I’m not surprised either. If we take the concrete example of Bell, what does this mean for a worker?

11:55 a.m.

Executive Assistant to the Quebec Director, Unifor

Olivier Carrière

Bell Canada uses a tremendous amount of data and conducts extensive monitoring in all types of jobs. Everything is recorded. Every activity is recorded in a computer. Every action taken and every gesture made by a worker is known. It’s the same for technicians on the road and people working on the networks. Everything is analyzed and everything is known.

People’s performance is managed on the basis of targets to be achieved. Those are determined by the outcome of data analysis. If a technician is told that it takes 25 minutes to connect a line, but he in fact takes 35 minutes to make the connection, he will be penalized. The vagaries of weather, for example, are not anticipated by the algorithm. The technician will be told that he’s doing a bad job because he’s not meeting the targets set by the algorithm. That’s where we stand now.

Has the manager’s judgment been substituted by a ready-made solution from an algorithm? The answer is yes, and has been for quite some time. Again, this is an unknown for us, because we don’t really measure what it takes into account. When we ask the employer to share the criteria used for their management tool, we don’t get an answer, because it’s so specific. We’re not given the information.

The manager is being replaced by an algorithmic management tool. At the end of the day, what is the basis for challenging the decision? This is where the question you raised, Ms. Chabot, is significant. You can’t go before an arbitrator or the courts and ask an algorithmic management tool why it made this decision rather than another. That’s why I mentioned earlier that we need to give ourselves the necessary means to correct the effects of algorithmic management decisions. This is the impression we get from people in the field. Managers today pass on messages, but all the tasks that involve judging a worker’s performance are carried out by this tool.

11:55 a.m.

Liberal

The Chair Liberal Bobby Morrissey

Thank you, Mr. Carrière.

Madam Zarrillo will conclude this....

Noon

NDP

Bonita Zarrillo NDP Port Moody—Coquitlam, BC

Thank you.

I'm going to ask Mr. Carrière.... Hopefully, we can keep it to about a minute, because I would also like to ask Mr. Lockhart about equity.

Thank you so much, Monsieur Carrière, for bringing back the humanity part of this discussion. We are a committee that has “human resources” at the beginning of its title.

I want to revisit something. The CLC—the Canadian Labour Congress—testified in front of this committee and recommended an advisory council on artificial intelligence.

I'm wondering whether you agree with this recommendation—that the federal government should have an advisory council that looks at the impacts on human resources. If so, who should be on that advisory council? Who should be represented?

Noon

Executive Assistant to the Quebec Director, Unifor

Olivier Carrière

This is an interesting first step. You certainly have to start with a consultation structure. Employers definitely need to be involved in the process, as well as unions and all the worker associations.

We need plain language. Simple language is needed. This is something that seems so complicated to us that we need scientists and other people to explain the impact of these changes. We also need to reassure workers. The fear is that the machine will replace the individual. When we can’t see what’s going on, the work becomes dehumanizing.

The unions need to be at the bargaining table, but all the workers' associations also need to be at the bargaining table. We'll need plain language in order to fully understand the challenges.

Noon

NDP

Bonita Zarrillo NDP Port Moody—Coquitlam, BC

Thank you so much.

Mr. Lockhart, I want to revisit equity.

Again, this committee also looks at persons with disabilities.

I'm wondering whether you could share a bit about the work and discussions happening in your organization around what equity needs to look like in relation to AI.

Noon

Senior Policy Analyst, The Dais at Toronto Metropolitan University

Angus Lockhart

AI has the potential both to promote equity and to harm it.

If we look specifically at persons with disabilities, there are examples in which AI has been used to improve the capacity of people with disabilities to operate in a workplace. There is a café that recently opened in Tokyo that uses robots to help increase the motor function of people with disabilities in order to help them fully operate within that workplace.

At the same time, if you don't take an equity lens when you're implementing artificial intelligence, those marginalized groups—people with disabilities and other groups like them—are going to be the first people harmed by the introduction of AI in the workplace.

You have to start from a place of asking how AI can help uplift and increase the participation of everyone, and use that as your framework, instead of starting with, “We have AI. What can we get rid of with it?”

Noon

Liberal

The Chair Liberal Bobby Morrissey

Thank you, Madam Zarrillo. You're a little over.

I want to thank the witnesses for appearing for this first hour on the AI study.

With that, we will suspend for a few moments while we bring in our second panel of witnesses. We'll suspend for a few minutes.

12:05 p.m.

Liberal

The Chair Liberal Bobby Morrissey

I call the meeting back to order.

Members, we'll reconvene the committee as the witnesses, now all appearing virtually, have been sound tested. I've been told their sound is fine.

We will begin with opening statements. I would ask everybody to keep their time within the five minutes or less, because there are four of you.

We'll start with Mr. Autor for five minutes or less, please.

12:05 p.m.

Prof. David Autor Ford Professor, Massachusetts Institute of Technology, As an Individual

That's perfect. Good afternoon.

12:05 p.m.

Liberal

The Chair Liberal Bobby Morrissey

You're the first one who showed up on my list. That's why you're going first, Mr. Autor.

12:05 p.m.

Ford Professor, Massachusetts Institute of Technology, As an Individual

Prof. David Autor

Thank you for having me. My name is David Autor, and I am the Ford professor of economics at the MIT Department of Economics, and also co-director of the MIT “shaping the future of work” initiative. I am honoured to speak with you today about my research on artificial intelligence and the future of work, and I apologize for my cold.

AI presents obvious threats to workers and the labour force. While machines of the past could only automate routine tasks with clear rules, AI can quickly adapt to problems that require creativity and judgment. It seems reasonable to worry that AI will suddenly make huge swaths of human work redundant. I believe these concerns are somewhat misplaced, however. Strong demand for labour has persisted throughout past periods of technical change, like the industrial or computing revolutions, and all signs point to growing labour scarcity, not the opposite, in most industrialized countries, including Canada.

Instead, the important question to ask is how AI will impact the value of human expertise, by which I mean the skills and judgment in specific domains like medicine, teaching and software development, or modern crafts such as electrical work or plumbing. Will new technologies augment the value of human expertise, or will they make human judgment valueless?

In industrialized economies, expertise is the primary source of labour’s market value. Consider the jobs of air traffic controllers in comparison with crossing guards, both of whom have the job of protecting lives by preventing vehicle collisions. Air traffic controllers in the U.S. are paid four times more than crossing guards. Why? It's because they have scarce expertise, painstakingly acquired and necessary for their important work. The value of that expertise is augmented by tools: Without GPS, radar and two-way radio, an air traffic controller is basically a person in a field staring at the sky. Crossing guards provide a similarly valuable social service, but most able-bodied adults can serve as crossing guards without formal training and without any expertise, and this virtually guarantees low wages.

While technology makes air traffic controllers' expertise valuable, it can also make human expertise redundant. London cab drivers used to train for years, memorizing all the streets of London. GPS made this expertise economically irrelevant. It's no longer necessary. You might ask, why isn't all expertise eventually made superfluous by automation? The answer is that human expertise remains relevant because its domain expands with social needs. Jobs like software developers, laparoscopic surgeons and hospice careworkers emerged only when technological or social innovations made them necessary. In fact, my co-authors and I estimate that around 60% of all jobs that people do in the U.S. today didn’t exist in 1940. Technology and other social forces can just as readily create opportunities for high-quality work as they can automate it.

I believe that AI can create novel opportunities for non-college workers—low and middle-educated workers. With the support of AI tools, these workers could perform tasks that had previously required more costly training and highly specific knowledge. For example, medical professionals with less training than doctors could tackle more complicated tasks with the assistance of AI. In the U.S., in part due to technological innovations such as software that prevents the dispensing of harmful drug interactions, nurse practitioners have proven effective at tasks formerly reserved for doctors with five more years of medical education. AI could push this further, helping workers with less training deliver high-quality care. This is not to say that AI makes expertise irrelevant. It's just the opposite: AI can enable valuable expertise to go further. AI tools enable less experienced programmers to write better code faster. They help awkward writers to produce more fluid prose.

This positive future of which I'm speaking is not guaranteed. We must make collective decisions to build it. For example, China has made substantial investments in AI technology, in part to create the most effective surveillance and censorship systems in human history. This is not a preordained consequence of AI, although it depends on it, but it's a result of a particular vision of how to use this new tool. Similarly, it is far from inevitable that AI will automate all of our jobs. That's a vision that many AI pioneers are pursuing. I think this would be a mistake. To shape this protean technology, AI, to constructive ends, political leaders must work with industry, NGOs, labourers and universities to build a future in which machines work in service of minds.

Let me end by saying what government can do. I don't claim to have complete answers here, but let me say a couple of things. First, governments should germinate and fund human-complementary AI research. The current path of private sector development has a bias towards automation. Government can correct this by supporting the development of worker-augmenting AI in industries like health care, education or skilled crafts work.

Second, I would prioritize protections for workers. Using AI for undue surveillance, for high-stakes decisions like hiring and firing, or to appropriate workers' creative works without compensation should be disallowed. Empowering workers to collectively bargain and including them in rule-making is a critical step.

I'm also concerned about AI safety. I think governments are comparatively well equipped to regulate safety.

Let me end by saying that rather than asking, “What will AI do to us?”, we should ask, “What do we want AI to do for us?” Answering that question thoughtfully and acting decisively will help us build a future that we all will want to inhabit and that we will want our children to inherit.

Thank you very much. I welcome your questions.

12:10 p.m.

Liberal

The Chair Liberal Bobby Morrissey

Thank you, Mr. Autor.

Now we have Ms. Hadfield for five minutes, please.

12:10 p.m.

Professor Gillian Hadfield Chair and Director, Schwartz Reisman Institute for Technology and Society, As an Individual

Thank you very much. Good afternoon.

My name is Gillian Hadfield. I'm a professor of law and of strategic management at the University of Toronto, where I hold the Schwartz Reisman chair in technology and society and the Canada CIFAR AI chair at the Vector Institute for Artificial Intelligence. I'm appearing in a personal capacity.

Thank you for this opportunity to speak to you on this subject of such critical importance.

I want to highlight four key aspects of the impacts of AI on the labour market.

First, AI is a general-purpose technology that is likely to transform almost all aspects of our economy and our society.

Second, the latest advances in AI can be adopted relatively quickly, but Canadian businesses to date have been slow to adopt AI.

Third, current AI systems are rapidly evolving to perform highly sophisticated tasks, meaning that high-income and high-education occupations may face the greatest exposure to this latest round of automation.

Fourth, the profound impacts of AI across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized.

Let me go through each of these in a little more detail.

First, AI is a general-purpose technology. This means it will transform almost all aspects of our economy and society, similar to the impact of the steam engine or information technology. For example, publicly available large language models such as generative pretrained transformers, GPTs, demonstrate the potential for AI to radically reshape the nature of work. These systems are designed to understand and generate human-like text, including computer code, on a massive scale, increasingly to reason and problem-solve, facilitating an almost unlimited range of applications.

Second, the latest advances in AI can be adopted relatively quickly. ChatGPT's swift integration into everyday applications over the last year demonstrates this and suggests that the most recent strides in AI can be implemented relatively quickly, outpacing the adoption rates seen with earlier iterations of this technology. This presents an opportunity for Canadian business and policy-makers to boost productivity and economic growth; however, the committee should take note that Canada has to date been slow to adopt AI. According to a study by Statistics Canada, only 3.7% of companies were using AI at the end of 2021. Studies conducted by IBM and the OECD also suggest that Canada lags behind other economies according to AI adoption metrics.

Third, AI systems are rapidly evolving to perform highly sophisticated and complex tasks. Specifically, AI is being fine-tuned in sector-specific software applications. A notable instance from my own field is CoCounsel, an LLM system built on top of GPT-4 that functions as an AI legal assistant for tasks such as legal research, writing and document analysis. CoCounsel has managed to achieve a higher score on the American uniform bar exam than the average test taker—in fact, higher than 90% of test takers. It is also designed to address inherent risks such as AI hallucinations.

Other examples beyond LLM systems include things like AlphaFold, which has solved the protein folding problem, described by a leading computational biologist as the first time an AI system has solved a major scientific problem. These advancements mean that AI can be harnessed more safely and effectively, particularly in sensitive and cognitively complex domains like law, science and health care.

In one study, OpenAI researchers found that GPT exposure was higher at the higher income and education levels. That's something for us to take into account, thinking about how this would look different than in previous innovations.

This brings me to my final and crucial point. The profound impacts that AI will have across our economy and society demand regulatory shifts to ensure that the full benefits of AI can be realized. Our current legal and regulatory frameworks were designed for a pre-AI era and may restrict innovative and productive uses of AI in workplaces. To harness the benefits of AI, we must update these frameworks to address the unique challenges and opportunities that AI presents. Furthermore, given that AI is a rapidly developing technology, effective governance demands that policy-makers move quickly to adopt an AI-enabling regulatory posture that seeks to properly regulate risks, as we do with all other economic activities, while supporting innovation and investment.

In conclusion, we stand at the cusp of a transformative era, and we should be acting to ensure that the benefits of AI are realized equitably and responsibly.

Thank you.