Evidence of meeting #29 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Michael Geist, Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual
Colin Bennett, Professor Emeritus, University of Victoria, As an Individual
Yoshua Bengio, Full Professor, Université de Montréal, As an Individual
Ali Dehghantanha, Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph
Carys Craig, Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual
Wendy Cukier, Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

Carys Craig Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual

Thank you, Chair and members of the committee.

My name is Carys Craig. I'm a full professor at Osgoode Hall Law School at York University, where my teaching and research focus on copyright, technology and the public interest. I've published widely on the AI challenge to copyright law, so I'm grateful for the opportunity to share my views with you here today.

In my short time, I want to make three points about copyright protection that I think are relevant to this committee's work. First, I think it's vital to distinguish copyright law from AI regulation. Second, copyright law must not obstruct AI research, development and training in Canada. Third, Canada must continue to refuse copyright protection to AI-generated works.

First, I think there's obviously an understandable concern about the effects of generative AI on creative workers, our cultural industries and our information ecosystem, but I'm going to urge the committee to be cautious about including expanded copyright protections as part of an AI regulatory package to address these concerns. Copyright exists to encourage the creation and dissemination of works, to reward authors and to foster a vibrant public domain. It is technology-neutral. It is not designed to govern technology risks or to restrain technological developments, and it should not be pressed into that service now.

The real risks of AI—from bias and misinformation to deepfakes and privacy violations to labour displacement and corporate consolidation—demand dedicated, fit-for-purpose regulatory responses. Expanding copyright control risks distorting foundational copyright principles while failing to address, or indeed worsening, the harms themselves. This is what I've called running into the AI copyright trap. It's mistakenly turning to copyright as a catch-all—or, for some, a windfall—in response to the threats posed by generative AI.

My second point concerns AI training. Some have called for compulsory licensing for copyrighted works that are used in training data, backstopped by owners' rights to opt in or out. I understand the impulse, but the consequences of this approach would, I think, be deeply harmful.

Under the current law, first, it's not clear that training AI on copyright works even implicates the rights of copyright owners. When a system is trained, it translates expressive content into statistical patterns. It turns the meaning into math. This is a technical, intermediate, non-public use to extract information that copyright does not protect. Even if copyright extends to this data extraction and analysis process, most text and data mining is likely lawful without permission or licence under Canada's fair dealing provisions, as interpreted by the Supreme Court of Canada. If the committee is interested in supporting AI research and innovation in Canada, the real problem is legal uncertainty, not illegality.

Requiring licences for AI training would create a pay-to-play system regulated by private actors. The wealthiest corporations could afford access to the vast data troves required, but academic researchers, non-profits, start-ups and SMEs would be shut out, and this would concentrate AI development even further in the hands of big-tech incumbents, which I think is what we're trying to prevent. It would also incentivize secrecy, reduce the diversity of AI systems, exacerbate bias and be practically impossible to administer effectively, as the EU's implementation efforts already reveal.

If copyright reform is required, it should be to confirm that text and data mining for informational analysis does not constitute infringement. This was the original INDU recommendation in the 2019 Copyright Act review, and it remains, I think, the best way to support a healthy AI ecosystem in Canada. It would most likely align with emerging U.S. fair use jurisprudence, but it would also give us the significant advantage of legal clarity. I think Canada's focus here should be on good data governance, not propping up private control of data in a way that's going to send AI development offshore while Canadian creators gain little, if anything.

My third and final point concerns AI outputs. The most effective thing copyright can do to protect human creators is to maintain the position that copyright requires a human author, while AI-generated content is unprotected in the public domain. That is the correct result. It protects the role of human creators in the creative industries, whereas granting rights in AI outputs would be an unnecessary, misplaced incentive that could further chill human creativity.

In closing, I just want to emphasize that copyright law, at its best, serves human creativity and the public interest. It exists because we value what human beings create, share and learn from each other. We cannot allow it to become a tool for controlling technology, a bargaining chip for corporate licensing deals or a vehicle for granting monopoly rights over information or machine-generated content. I urge the committee to keep copyright's principled limits and its practical consequences in view. There are many more apt solutions to the risks posed by AI systems.

Thank you.

The Chair Liberal Ben Carr

Thank you very much.

Professor Cukier, you have five minutes.

Wendy Cukier Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

Thanks so much.

It's really a privilege to be here among such learned and smart people. I will try to supplement what has already been said.

I'm a professor of entrepreneurship and innovation at Toronto Metropolitan University. I was also the vice-president of research and innovation, so I'm very invested in and committed to issues around the commercialization of technology in Canada. At the same time, I'm part of a number of big studies that are focused on responsible use, and I think we've heard a lot about the risks associated with artificial intelligence that have to be taken seriously.

I previously submitted a brief to the AI task force, and I'm happy to provide it to this committee. It reinforced a lot of the points that have already been made about infrastructure development, about sovereignty and the limits of sovereignty, about the urgent need for a regulatory framework for increased risk and to create some measure of certainty, and about the importance of balancing risks and rewards.

What I want to focus on today, though, because I didn't hear anybody talking about them, are the issues around adoption, government as a model user, bias in AI, and skills. I'll try to be brief.

The AI paradox in Canada is that we have a Nobel Prize winner in the development of the technology, yet if you look at us in comparison with other OECD countries, we're laggards in terms of adoption. There are many reasons we can point to in order to explain that, but one of the most important is that we are a country of small and medium-sized enterprises.

We hear a lot about what large corporations are doing. Think about your ridings and who the big employers are. It's not just large companies. Large companies in Canada account for about 10% of private sector employment. What they do is important, but so is what the SMEs do. They provide 90% of the employment, and I think they are often left out of these discussions.

When I talk about SMEs, I'm not just talking about AI start-ups; I'm talking about family businesses in agriculture, in manufacturing, in retail and so on. We really have to grapple with the fact that SMEs in Canada need support in order to grow, address productivity and innovate.

A lot of the focus on AI adoption is around job displacement. That will happen, without question, but that's more likely to happen in large corporations that are using AI to lay people off. Small companies can punch above their weight and can look much bigger than they are if they use AI tools correctly. When we talk about AI tools, we're not just talking about machine learning; we're talking about simple, off-the-shelf services, generative AI and so on. That's one point I would like to emphasize.

Government has a role as a model user. We learned this with the early days of the Internet. Government can do a lot to advance opportunities for start-ups in this space, and I think we see signs that they're moving in that direction.

We have to focus on human capital. There is a preoccupation with science, technology, engineering and math, and they're absolutely critical. We need deep AI skills, and it would be nice to have another Nobel Prize winner, but science, technology, engineering and math are what you need to create AI tools.

We need a lot of other skills to advance innovation, and Canada continually makes the mistake of confusing invention with innovation. Innovation is about doing things differently. That means we need lawyers, ethicists and people who understand consumer behaviour, organizational behaviour and markets.

Our biggest barrier to innovation in this country, in my view—I'm biased because I'm in a business school—is the lack of attention to markets and to who is going to use the stuff, and for what purposes. While deep AI skills are critical and AI literacy for everyone is important—because all jobs will be affected and all of us need to be protected—AI skills for innovation, where we take people who understand their businesses and processes and give them the tools to use AI responsibly, are where I see one of the biggest gaps.

The final thing I'll say, because I am from the Diversity Institute, is that we need to double down on ensuring that AI is not reinforcing bias in the use of biased data and the use of homogeneous teams. We need to ensure that AI is not reinforcing the digital divide we currently see, based on income, geography, indigeneity and gender. We need to be using AI responsibly and inclusively.

I'll stop there. Thank you.

5 p.m.

The Chair Liberal Ben Carr

Thank you very much, everybody, for your opening testimony.

Mr. Guglielmin, the floor will be yours for six minutes.

5 p.m.

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Thank you, Chair, and thank you to all the witnesses for your opening testimonies.

Mr. Dehghantanha, at our committee meetings, we heard testimony that AI has fundamentally changed the nature of cyber-attacks, moving from tools that would assist hackers in their operations to tools and systems that can now autonomously build attack plans, create multiple strategies, troubleshoot, and find workarounds when their initial attempts fail. We even heard an example of AI rewriting its own code to avoid shutdown. We also heard about a third-party foreign actor that was able to breach over 100 million data points belonging to citizens of Mexico without a human directing each and every step.

This obviously raises a wide variety of potential national security concerns, and I know that your research sits exactly at this intersection. In plain terms, how close are we to a scenario in which an AI-powered attack could compromise Canadian infrastructure, the financial system or other parts of the national security apparatus?

5:05 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

AI is currently used in both defence and offence. On the offence side, as you mentioned, it provides capabilities that we have never seen in threat actors before. It significantly reduces what we call the “mean time to respond”, which is the amount of time that a cybersecurity professional has to respond to an incident. It used to take hours for hackers to meet their objective. These days, it is minutes. If we are not detecting and stopping adversaries in a very short time, they will meet their objectives, as in some of your examples.

The question is, how far are adversaries from building capabilities that can be deployed at a scale targeting all critical infrastructure or all critical services? I can say that they are not that far. We are seeing them in the wild, testing these tools against whatever research organizations and infrastructure are available.

At the same time, we are actively working with many partners in Canada to build up their skills. My main concern always lies with small and medium-sized businesses, especially in the less protected sectors, such as agri-food. They don't have enough investment in security, and they are widely distributed. An attack that is automated with AI could impact this infrastructure significantly.

When we talk about cybersecurity attacks, everyone thinks about the financial organizations. They would definitely be the first target, but they have tools and techniques to stop these attacks early. When you go down the food chain into other critical sectors—I mentioned agri-food, and health care is another example—you see that the capacity to respond in these sectors is very limited. We don't even try to build defensive capabilities at the scale that could defend against these AI-based attacks.

5:05 p.m.

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Professor, we've also heard from Anthropic's former safety lead that in perhaps six to 18 months, there are going to be AI models that are capable of long-range strategic attacks.

I remember a story that we were told at this committee about a robotic dog that had, essentially, a kill switch—a button on a wall. It was able to reprogram itself, because it knew that this switch would turn it off. We also heard of scenarios in which agentic AI agents were deployed and then started mining cryptocurrency without being given that instruction.

Given all of this and the timelines that we're being presented with, what can government here in Canada do today to better prepare us for a sophisticated AI national security breach?

5:05 p.m.

Ali Dehghantanha

What I am saying is that in Canada, we are putting a lot of focus on checkboxes—tools that check the AI before it is deployed in the real world. What we are missing is what happens after the AI is deployed. Who is going to monitor it? Who is going to be accountable for that? Who is going to contain the AI's skills?

You gave some examples of AI learning new skills in the field, which creates complications and new adversarial capabilities. You mentioned the timeline of six to 18 months, and it could be much shorter. I would say that what should be done in a very short time is to invest in building that control plane, a layer that would sit between the AI application and the foundational models and try to control it. Give control back to the owner, to the human, to the operator—whatever we want to call it. That control plane is currently the missing layer in AI adoption.

We are not going to slow the adoption of AI. We will still let the applications be built, but the control plane should be built alongside them, and we need to have regulations and rules around that.

5:05 p.m.

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

If AI significantly lowers the barrier to entry for cyber-attacks, are we entering a world where less sophisticated actors will be able to deploy this technology and cyber-attacks against national security infrastructure will actually ramp up?

5:10 p.m.

Ali Dehghantanha

What we are observing in the field is mostly that sophisticated adversaries now have much better tools and much better capabilities that they can deploy in a much shorter time, and that they are now going after the targets that smaller adversaries used to pursue. That is what is actually happening with AI.

What is happening, I would say, is that the age of lone attackers or a small group of attackers being successful in attacking our infrastructure is gone. Most of those infrastructures are being targeted by advanced attackers that already have that automation at scale.

That makes me more worried, because a few years ago, if I was talking to a farmer, I wouldn't even have thought that an adversary from Russia would target them. These days they may, because everything is automated. The same ransomware, the same malware that AI is now dropping could end up on, say, a dairy farm. You would never have seen that in the past.

5:10 p.m.

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Thank you very much.

The Chair Liberal Ben Carr

Thank you.

Mr. Ma, the floor is yours for six minutes, please.

Michael Ma Liberal Markham—Unionville, ON

Thank you to all of the witnesses.

My first question is for Dr. Cukier.

Following up on your last point about the digital divide and bias, we all know that AI basically functions on large data, whether that's language or biological data—and everything else.

You mentioned the digital divide. Certain parts of the world have, unfortunately, less input into and therefore less interaction with the AI model. Eventually, we're going to see a skewed model in which certain ethnic groups or certain geographical representations are lacking in that environment. If we depend on AI to make decisions legally or medically and so forth, that's going to create a further digital divide and a more unjust environment globally. Can you speak to that a bit more, please?

5:10 p.m.

Wendy Cukier

Sure.

The issue of the digital divide, as many of you know, is not new at all. What we learned during COVID—because people talked about adaptation during COVID with rapid digitization and everything moving online—was that, for example, indigenous people in rural and remote communities have less access. Most people know about that, but did you know that 42% of racialized children in the city of Toronto were doing their homework on iPhones because they didn't have access to high-speed Internet, computers and so on?

You can take those principles and understand that the digital divide is not just about physical access to broadband. It's about affordability. It's about devices. It's about skills. I may be the only boomer in the room, and I'll tell you that I am much more vulnerable to the misuse of AI because I answer my phone and think it's a human, or I look at a video and think it's real. We need a sophisticated understanding of, first of all, the dimensions of the digital divide, and then the ways in which artificial intelligence applications, in all of their manifestations, for good and for evil, will have an impact on it.

The interesting thing that came out in a recent survey we did with Environics is that the gap between men and women in the use of AI tools—not the developers—is much smaller than we would see with other technologies. Indigenous people are using AI tools more than others in the population. Immigrants are using AI tools more than others and so on. It's interesting, because to me this signals that there are ways in which AI can bridge some of these gaps.

I referred to the discipline differences. In some ways, AI is the English major's revenge. You don't need coding or a background in computer science to be able to build tools. That's something we have to really pay attention to when we're thinking about our national AI strategy. We need a responsible “AI for all” approach, in my view.

Does that answer the question?

Michael Ma Liberal Markham—Unionville, ON

Thank you.

My follow-up question relates to that as well.

You have talked about racial bias as well, such as in HR and health care. I know from observing over the last couple of years that HR departments and recruitment agencies use AI for screening, so there must be some inherent biases built in. Can you talk a bit more about that? How do we ensure a much more inclusive environment?

5:15 p.m.

Wendy Cukier

That's a really good point, and it's not just racial bias. It's gender bias as well. Even tech firms have been stymied in their efforts to level the playing field.

There are two things here, and one is garbage in, garbage out. The data that you use, if it's biased, is going to replicate bias. The second is making sure you have diverse teams that are sensitive to these issues.

A third thing is disclosure. I think disclosure is absolutely critical.

A fourth thing is “human in the loop”. I'm working with a number of public sector organizations, for example, that are experimenting with large-scale AI tools. One of the critical things is to do experiments where you compare the results you get from the AI-enabled processes to what you would get with humans. Then you try to figure out if AI is amplifying bias or reducing bias, because sometimes it will cut out the bias that's associated with “we play golf together” or “I went to Queen's University”.

There are huge opportunities for good and evil, in my view, but transparency, human in the loop and inclusion are fundamental principles.

Michael Ma Liberal Markham—Unionville, ON

Great. Thank you very much.

My next questions are for Professor Craig.

The Chair Liberal Ben Carr

Mr. Ma, we're at time.

Michael Ma Liberal Markham—Unionville, ON

Okay. Time flies.

The Chair Liberal Ben Carr

It sure does. You might have a chance to come back toward the end.

Mr. Ste‑Marie, the floor is yours for six minutes.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

Welcome to the three witnesses. Thank you for being here and for your presentations.

Ms. Cukier, in your presentation, you said that it is important for small and medium-sized businesses to embrace artificial intelligence technology in order to increase their productivity. That seems to come with some challenges.

Here is my first question. In your view, what are the obstacles preventing small and medium-sized businesses from incorporating artificial intelligence into their activities? Are they financial, technical, cultural or regulatory?

5:15 p.m.

Wendy Cukier

Yes, it's all of those things.

We know, especially post-COVID and especially since the trade wars, that small and medium-sized enterprises are struggling. They have narrow margins. Most small and medium-sized enterprises in Canada have fewer than five people, so the person who's doing the technology development is also doing payroll and taking out the garbage. They lack the in-house skills, capacity and so on. That's one piece.

It's also the investment, for sure, although sometimes the barriers to entry are not that large. Unfortunately, when we talk about technology, people who love the technology talk about the technology, not about what it's good for. We need more use cases that show simple applications very clearly. For example, you can take a stack of expenses and receipts and turn them into a spreadsheet in five minutes instead of six hours. We need concrete, simple examples. We have them; it's just that they're not widely shared.

We've done research in Quebec as well as across the country, and often, small businesses have a short-term horizon rather than a long-term horizon. For government programs aimed at advancing technology adoption that target SMEs, the benefits have to outweigh the costs. By this I mean that if you're giving someone a small amount of money to implement technology, you need to make sure they can get it without a lot of trouble. You can still introduce accountability and have audits to make sure they did what they said they were going to do, and you can do evaluations, but if you're trying to incentivize upskilling, technology adoption or infrastructure investments, it's important to make access easy.

The other thing we need to think about, because we're going to spend a lot of money on major projects and infrastructure, is how we can leverage those investments to provide opportunities for small and medium-sized enterprises to modernize, upskill, re-skill and so on. I think there are some clever ways we can get more bang for our investment.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much. That is very clear and very complete. I am grateful to you.

I have another question, which is still about the use of artificial intelligence to increase productivity in small and medium-sized businesses, and large businesses too. In your view, which sectors have the greatest potential for adopting this technology? What could the government do to help those sectors adopt artificial intelligence?

5:20 p.m.

Wendy Cukier

Honestly, it's a question of short term versus long term. I don't think any sector will not be affected. When I think of priority sectors from an economic point of view—they've already been mentioned—they're things like manufacturing, energy and construction infrastructure. There are interesting physical AI adoption opportunities, but they are also typically highly capital-intensive.

Where are our pain points in Canada? They're in health care. There are huge opportunities in health care, if we can manage the risks. Agriculture was already mentioned. Building self-sufficiency in agriculture is about not only large-scale farms but also vertical gardens and all kinds of things.

We need sectoral strategies for AI adoption. We need to recognize that in almost all sectors, except maybe finance, IT and some manufacturing, SMEs are at the core.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

I have a question on another matter: protecting personal information. You can tell me whether you are uncomfortable answering, given that we are changing the subject.

In your view, what role should artificial intelligence play in managing, retaining and disclosing information in federal institutions and organizations with links to the government?