Evidence of meeting #29 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Michael Geist, Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual
Colin Bennett, Professor Emeritus, University of Victoria, As an Individual
Yoshua Bengio, Full Professor, Université de Montréal, As an Individual
Ali Dehghantanha, Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph
Carys Craig, Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual
Wendy Cukier, Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

5:20 p.m.

Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

Wendy Cukier

That's a really good question.

I am in between what I would call protectionism and the free market when it comes to data. I know that in the province of Quebec, there has been a lot of progress made in data regulation. Looking at that carefully to figure out what's on paper and what has actually been implemented might help us understand what we should be doing on a national level.

My view, and this is a bit different from what you may have heard from others, is that we often dichotomize individuals' rights to information and privacy against corporate interests in making money. When you look at regulations, they often fall on one side or the other. I think there is a third piece, and that is the public interest.

The analogy I would encourage people to think about when we think about data and think about AI is our tax system. I earn money. It's not all my money. The government takes a portion of it to advance the public interest. When we think about security and think about health care, there are many things for which the government having access to some of my data will actually benefit all Canadians. We have to figure out how to balance those interests in an appropriate way so that we are advancing our economic development, innovation and trade; protecting privacy; and helping the government do a better job for all Canadians.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

Ms. DeRidder, the floor is yours for five minutes.

5:25 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you.

Dr. Dehghantanha, my questions will be for you today.

I was recently briefed on the reports saying that Anthropic's Claude AI model was used by a suspected Chinese state-sponsored group to conduct a large-scale, automated cyber-espionage campaign on roughly 30 organizations globally. Would you mind sharing with the committee what you know about this incident and any suggestions or recommendations you may have on how to mitigate this type of attack in the future?

5:25 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

Sure.

The background of the case, as you mentioned, is that one of Anthropic's tools, an AI system, was used by what is believed to be a Chinese adversarial operator to automate what we call data exfiltration, which means extracting data from the network. I can tell you, based on what we know and understand about the attack, that the main reason this specific system was used was ease: it is particularly strong at working with code, and it is much more accurate. That is what the adversaries were after, but it doesn't mean that this capability is limited to Anthropic or to Anthropic's systems.

Adversaries normally use AI automation based on the objective they have in mind. For example, if they want to steal copyrighted information, they may choose OpenAI platforms because those are better at text recognition. If they are going after code, they may go with Anthropic's systems.

That's more context. What I want to highlight is that usually, from the adversary's point of view, they don't care who is behind the AI. They're more interested in the skills or the capability of an AI technology or AI system.

The second part of your question, I believe, was more on what we can do. Is that right?

5:25 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

I agree with you completely. I was using Anthropic as an example because it just recently published a report, but it is open-source AI.

Yes, the second part is more what I'd like to talk about. How do we mitigate that risk in the future?

5:25 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

As I mentioned in the answer to the previous question, the response time available to cybersecurity people has been squeezed significantly, which means that from the moment adversaries are in, it takes them only minutes to achieve their objectives using AI and automation.

What we should do at the enterprise level is create a layer, as I mentioned, between AI applications and AI foundational models, a layer that is controlled by the company, by the enterprise. Even if an adversary wants to use, say, Anthropic or any other foundational model capability, they will need to go through that control layer and will hopefully be detected on that layer. That's one thing we can do at the enterprise defence level.

On the other side, we need to invest significantly in detecting the deployment or use of AI that has not been approved by an enterprise. These days, even big enterprise organizations have very limited technology for identifying whether a specific skill or activity performed by an AI is legitimate or follows their policy. Advances there could significantly help us identify what is and is not allowed in the system, and that could limit an adversary's capability once they are in the network.
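To make the idea of an enterprise control layer concrete, here is a minimal sketch in Python. It is not drawn from the testimony: the gateway function, the allow-list of approved actions and the blocked-keyword check are hypothetical placeholders for whatever policy an enterprise actually adopts. The point is simply that every request from an application is logged and filtered before it ever reaches a foundation model.

```python
# Illustrative only: a minimal policy "control layer" that an enterprise could
# place between its applications and any foundation-model API. All names here
# (ALLOWED_ACTIONS, ModelRequest, forward_to_model) are hypothetical.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-control-layer")

ALLOWED_ACTIONS = {"summarize_document", "draft_email"}  # enterprise-approved skills
BLOCKED_KEYWORDS = {"exfiltrate", "dump credentials"}    # crude misuse signals

@dataclass
class ModelRequest:
    user: str
    action: str
    prompt: str

def forward_to_model(request: ModelRequest) -> str:
    # Stand-in for the actual call to whichever foundation model the
    # enterprise has approved; the control layer never exposes it directly.
    return f"[model response to '{request.action}' for {request.user}]"

def control_layer(request: ModelRequest) -> str:
    """Log every request and apply enterprise policy before any model call."""
    log.info("request from %s: action=%s", request.user, request.action)
    if request.action not in ALLOWED_ACTIONS:
        log.warning("blocked unapproved action: %s", request.action)
        return "Request blocked by enterprise policy."
    if any(k in request.prompt.lower() for k in BLOCKED_KEYWORDS):
        log.warning("blocked suspicious prompt from %s", request.user)
        return "Request blocked and flagged for review."
    return forward_to_model(request)

if __name__ == "__main__":
    print(control_layer(ModelRequest("analyst", "summarize_document", "Summarize the Q3 report")))
    print(control_layer(ModelRequest("intruder", "write_exploit", "optimize this shellcode")))
```

In a real deployment, this same choke point is also where detection of unapproved AI activity would sit, since every model request has to pass through it.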

5:25 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you for that.

In my community of Kitchener Centre, Canada's innovation capital, we see advanced manufacturing and tech innovation go hand in hand. How can Canada better align its cybersecurity and AI strategies, both to protect these industries and to ensure they remain competitive globally?

The Chair Liberal Ben Carr

Professor, I'm going to ask you to keep that answer to about 45 seconds. Thank you.

5:25 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

Sure.

When AI meets manufacturing, we are looking at the risk of physical AI, which means that devices are able to take new actions and carry out new activities. What we need to focus on is controlling what kinds of skill sets and what kinds of actions AI systems can perform. We need to support both the start-ups in your area that are focused on controlling AI and the start-ups that are more focused on how policy can be applied to and integrated into AI actions.

The Chair Liberal Ben Carr

Thank you very much.

Colleagues, we are running over time, but I'm going to give five minutes to Madame O'Rourke, a minute to Monsieur Ste-Marie, two and a half minutes to Ms. Borrelli and two and a half to Mr. Bains to finish.

Witnesses, I know that's a bit of an audible, so if you do have to go, we understand, and you're certainly welcome to excuse yourself. Otherwise, I hope you don't mind sticking with us for an extra 10 to 15 minutes. We're very much appreciating the insights.

Madame O'Rourke, you have five minutes.

Dominique O'Rourke Liberal Guelph, ON

Thank you, Chair Carr.

Dr. Dehghantanha, it's nice to see you again. Thanks very much for an earlier conversation at the University of Guelph.

The University of Guelph has the Centre for Advancing Responsible and Ethical Artificial Intelligence, as well as the AI for Food initiative. Given that Ms. Cukier was talking about having a sectoral approach to AI adoption and that we tend to be thinking about AI in terms of the financial sector, white-collar jobs and perhaps advanced manufacturing, can you tell us what the potential is for AI in agriculture, some of the pitfalls you can see and then what measures we would need to consider now, including perhaps the right to repair?

5:30 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

AI has immense capabilities. It's already changing the practices in agri-food significantly.

You mentioned a couple of the centres we have at the University of Guelph that are focused on helping farmers build more AI capacity on the farm, in their operations and in distribution. AI is now disrupting all three of these layers. Farmers are integrating AI into their on-farm activities, from sensing all the way to controlling livestock, everything at that level. Then, at the operational level, AI is taking in a lot of inputs from different data sources and helping farmers optimize or improve their practices.

When it comes to distribution, you will see that bigger companies, like Sobeys and the like, are using AI significantly to manage what they should buy from which farmers at what price. Also, the other way around, it's used by the farmers as well. What I'm trying to say is that in 12 to 18 months, I would say, you will see AI going from the farm all the way to the table. The whole ecosystem is now being built around AI in agri-food.

You mentioned what measures we should put in place. One of the main features of the agri-food sector is that most of its operators are small and medium-sized businesses that are physically distributed across the country. Being able to secure them, protect them and make sure they are using AI responsibly requires standards and legal requirements imposed through vendors, so that as these AI solutions are deployed, they are deployed in a way that is responsible [Technical difficulty—Editor] control. That's what we don't have at the moment.

Some examples have been mentioned in this meeting. At the moment, if an AI agent that's able to create new skills is released on a farm, we don't even know when and where it will gain those skills or what it will do with them. We are always hoping for the best, but there is no testing and no benchmark: for evaluation before deployment, for monitoring while it is in use, and for what you should do after deployment when you want to shut the system down and delete its data.
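As a purely illustrative sketch of the kind of pre-deployment gate the witness says is missing, the following Python snippet enumerates an agent's declared skills, checks them against an approved list and runs a small benchmark before allowing deployment. The Agent class, the skill names and the benchmark cases are all invented for the example.

```python
# Illustrative only: a hypothetical pre-deployment gate for an on-farm AI agent.
# Nothing is deployed until its skills are enumerated, checked against an
# approved list and exercised on known benchmark cases.
from dataclasses import dataclass, field

APPROVED_SKILLS = {"read_soil_sensor", "adjust_irrigation"}

@dataclass
class Agent:
    name: str
    skills: set[str] = field(default_factory=set)

    def run(self, skill: str, value: float) -> float:
        # Stand-in for real agent behaviour.
        return value * 2 if skill == "adjust_irrigation" else value

BENCHMARK = [
    ("adjust_irrigation", 10.0, 20.0),  # (skill, input, expected output)
    ("read_soil_sensor", 3.5, 3.5),
]

def evaluate_before_deployment(agent: Agent) -> bool:
    """Refuse deployment if the agent exposes unapproved skills or fails the benchmark."""
    unapproved = agent.skills - APPROVED_SKILLS
    if unapproved:
        print(f"refusing deployment: unapproved skills {unapproved}")
        return False
    for skill, given, expected in BENCHMARK:
        if agent.run(skill, given) != expected:
            print(f"refusing deployment: benchmark failed for {skill}")
            return False
    print(f"{agent.name} passed pre-deployment checks")
    return True

if __name__ == "__main__":
    evaluate_before_deployment(Agent("field-agent", {"adjust_irrigation", "read_soil_sensor"}))
    evaluate_before_deployment(Agent("rogue-agent", {"adjust_irrigation", "open_gate"}))
```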

Dominique O'Rourke Liberal Guelph, ON

Thank you.

I have another question. When speaking with Dr. Beth Parker from the groundwater research centre at Guelph, I asked her whether AI will allow her to accelerate discovery. She said, “We still have to go and get the core samples. We still need the data.” That goes to Ms. Cukier's earlier comments.

I'm struggling with the timelines we're discussing. Sometimes it's 12 to 18 months or three to five years. Where are we in terms of good solid data? That's not for things like ChatGPT, but things like medical research or advances in agriculture. How close are we to that? How close are we to having good data collection? How close are we to having a secure layer in order to monitor, identify challenges and address them? How are these things coming together over the next three to five years?

5:30 p.m.

Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Ali Dehghantanha

In terms of data, I would say the best people to talk to are the domain experts, such as people in AI and Beth Parker at the water centre you mentioned. The good thing about AI is that once the data is collected at one point in the world, we can start using it everywhere else. I am seeing a lot of investment being made by core AI companies in generating reliable data. For me, the timeline is quite short if you have a global view. If you have a regional view for specific places, yes, that would take a lot longer.

In terms of how advanced we are in securing AI systems, we are at the very earliest stage. I have yet to see any enterprise in Canada, and we are working with many of them, deploy any control layer for AI. That has become the main challenge for them in deploying AI, and that is before we even think about smaller businesses or organizations. That's a huge gap we are seeing. It simply doesn't exist yet.

Dominique O'Rourke Liberal Guelph, ON

Thank you.

The Chair Liberal Ben Carr

Thanks, Madame O'Rourke.

Mr. Ste‑Marie, you have one minute.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

Ms. Craig, are you familiar with the European legislation on artificial intelligence in terms of copyright? If so, can you provide us with some comments in one minute?

5:35 p.m.

Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual

Carys Craig

I have. This committee may know that Europe was quite an early mover on creating rules around exceptions to copyright to permit text and data mining. They had a tiered approach, where research institutions and cultural organizations could engage in text and data mining with copyright works without any liability risks for non-commercial research purposes. As a second tier, which is outside of that, there is an exception for text and data mining with the possibility of opt-outs by rights holders.

It was an early move and it was controversial. In a way, it suggested that rights holders can, unless an exception applies, prevent text and data mining by exercising the opt-out. The tricky part has been trying to work out how that opt-out can be exercised, by whom, the force and effect of that, and how it can be implemented.

It has created some challenges. Of course, that is still being worked out in the European context, although the rule itself has now been carried over into the EU AI Act. The question that faces us is whether we want to follow suit or hold off and see how this unfolds and whether it's the best option. Of course, there are tensions and incompatibilities to consider with the other rules that are emerging in the U.S.
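As one purely illustrative possibility for how an opt-out might be checked mechanically, the sketch below uses Python's standard robots.txt parser to decide whether a hypothetical text-and-data-mining crawler may fetch a page. Whether robots.txt, or any other machine-readable signal, legally counts as a valid reservation of rights under the EU rules is exactly the open question described above; the crawler name and the sample robots.txt are assumptions for the example.

```python
# Illustrative only: a mechanical check of a machine-readable opt-out before
# including a URL in a text-and-data-mining corpus. The crawler name and the
# publisher's robots.txt are hypothetical.
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt disallowing a hypothetical TDM crawler.
ROBOTS_TXT = """\
User-agent: ExampleTDMBot
Disallow: /
"""

def may_mine(url: str, crawler_name: str = "ExampleTDMBot") -> bool:
    """Return True only if the publisher's robots.txt permits this crawler."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(crawler_name, url)

if __name__ == "__main__":
    print(may_mine("https://example.com/articles/1"))                  # False: opt-out respected
    print(may_mine("https://example.com/articles/1", "SomeOtherBot"))  # True: no rule applies
```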

The thing I want to stress here is that copyright, while it might seem like a side issue, has been described as an issue that could bring AI to its knees. We're not just talking about movies, books and the things we typically think of as copyright works. Everything, all of the datasets and all of the data being scraped and used to train these machines, can fall under copyright's very low threshold for automatic protection.

If you create, at the kind of scale we're talking about, an obligation to license or a right to opt out of training, then you create huge obstacles to accessing the data you need to train AI systems well. You create a system where datasets are limited and where some key players can access the data that's needed while others cannot. It could become a true obstacle to the development of AI in Canada and beyond.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

We'll go to Ms. Borrelli for two and a half minutes.

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

Professor Craig, hello. Thank you for being here today.

Canadian SMEs are the backbone of our economy, yet some lack the capital, the expertise or just the access to infrastructure that's needed to use AI. Do you believe current government policies are doing enough to ensure that SMEs can not only adopt AI but also capture real economic value from it?

5:35 p.m.

Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual

Carys Craig

The biggest challenge that I see potentially facing smaller Canadian competitors and innovators that want to rely upon or develop AI in this economy and in this context is the concern about what they can lawfully do and lawfully use, and then how they can implement and put to work the AI tools available to them.

To be honest, the incentives to use AI and the availability of good public systems mean that there is a growing capacity for everybody to take advantage of generative AI tools in particular and to find efficiencies that can benefit them. In this regard, I think education and access can be key.

To the extent that people want to be able to develop their own tools that maximize their capacities in their own sectors and for their own purposes, that's when you see both the need for technical supports and the accessibility of the data and tools becoming key. We could spend more time thinking about how we prop up and support the development of open-access and open-source models of data commons that are accessible to small movers and innovators that want to take advantage of that, rather than thinking about how large rights holders can block and prevent training on their data.

Rather than thinking about how we can exclude, we can think about how we include people in the data and how to ensure access to data and the technology it allows.

5:40 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

Thank you.

I wish I had more time for another question.

The Chair Liberal Ben Carr

Thank you, Ms. Borrelli.

Mr. Bains, you have the floor for two and a half minutes.