Evidence of meeting #21 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Before the committee

Frédéric Gonzalo, Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual
Vasiliki Bednar, Managing Director, The Canadian SHIELD Institute for Public Policy
Matthew da Mota, Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

Luc Thériault Bloc Montcalm, QC

Thank you, Mr. Chair.

On that note, I would like to come back to earlier statements. That will give Dr. da Mota time to complete his response, but I’ll also ask Mr. Gonzalo to chime in.

Some experts have said that the capacity of artificial intelligence systems to spread false information has nearly doubled in just one year. That may be because, in the frenzied rush for performance, the web giants have made their artificial intelligence tools more useful by connecting them to the web in real time. However, by opening themselves to the web, artificial intelligence systems expose themselves directly to an information ecosystem that has been polluted and saturated by propaganda. The systems cannot systematically tell the difference between a credible source and a malicious site; they digest falsehoods, whitewash them and present them cloaked in a veil of authority. By responding to everything, artificial intelligence has become a powerful vector of disinformation.

That’s concerning, isn’t it?

How can we get around that?

I’ll proceed differently this time and let Dr. da Mota go first, and then Mr. Gonzalo will go next.

5:10 p.m.

Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

Matthew da Mota

I think this is extremely concerning. There is the potential for it to supercharge disinformation. There are, obviously, the targeted poisoning attacks on LLMs, where you essentially put material out on the Internet intentionally, so that it is crawled by these large data collection processes, in order to create certain narratives within the large language models. Those narratives will then be spit out for specific purposes, for propaganda purposes. But then there's the just day-to-day incorrect information that AI can generate, even beyond the hallucinations that Mr. Gonzalo mentioned before, where it just gives the wrong information.

There is this question of sycophancy as well. The model, when you speak with it, especially as it learns your personality and collects information on you, will tell you that your ideas are the most brilliant ideas ever. It will follow what you have to say. It will support your ideas and push them forward. It might feel nice to have a friendly conversation partner who's supportive of your ideas, but it has led to significant mental health issues as well. There's been a lot of reporting on this in the United States over the last year. It can also lead to political violence and siloing within the political environment.

I think all of this is extremely concerning. It's a disinformation and misinformation crisis without a clear centre. The centre is obviously the companies themselves, but there's not necessarily someone who is trying to push a certain narrative forward all the time. It's just the models themselves allowing people to go down their own rabbit hole of information, which is very concerning for social cohesion.

5:15 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

I would like to add to what has just been said.

In my opinion, the problem is not exclusive to artificial intelligence; it existed well before it. The disinformation proliferating on social media such as X, Instagram and YouTube comes from bot farms or similar operations. How do companies such as Google, Meta and Alphabet put control mechanisms in place? That is where both the problem and the potential solution lie. We have the responsibility to determine how to regulate all of this. However, artificial intelligence systems subsequently become victims, in a way, even though these companies have vast resources to counter disinformation and detect artificial, robot-generated content.

The issue is not going away, but in my opinion, the question goes beyond simple artificial intelligence regulation. It encompasses the digital environment as a whole. I would reframe Mr. Thériault’s question through this lens.

Luc Thériault Bloc Montcalm, QC

What tools could solve this problem? There must be some tools.

5:15 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

Platforms are trying to implement tools. For example, YouTube requires users to say whether or not their video content was generated using artificial intelligence. People are expected to be transparent when they publish their content on some platforms, such as Meta, Facebook, Instagram and so on. However, this mechanism almost always relies on people’s goodwill.

In our reflection on the tools that we need, we need to ask ourselves if we want to force the issue. Members will recall that a person is expected to be at least 13 years old to have a social media account, even though we know very well the reality is quite different. We have seen that some countries are starting to introduce regulations to manage things better. Perhaps these platforms should be forced to apply their user policies or terms and conditions.

5:15 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Gonzalo and Mr. Thériault.

Mr. Gill, you have five minutes, sir.

5:15 p.m.

Conservative

Dalwinder Gill Conservative Calgary McKnight, AB

Thank you, Chair.

These days, AI is emerging very fast. What effect will AI have on the job market? Will AI create more jobs or replace them? As well, which specific jobs are most at risk from AI?

5:15 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

I believe it was Microsoft that put out a report about some of the top jobs that are likely to be displaced or eroded. You've hit on the core challenge that labour economists have been looking at: To what extent is this technology complementary to existing jobs and enhancing them? Does it take away some of the drudgery work and let people focus on bigger skills, or is it displacing...and we see elimination?

When we look at the labour market for new grads, young people between the ages of 18 and 25, we know that they're having one of the toughest times in the labour market...tougher, even, than before the 1990s. We are seeing some early evidence that firms have chosen to take on, again, AI as a productivity-enhancing tool and as a substitute for training a young person. When we think about our economy in eight to 10 years, though I'd love to come back every December 3 to committee, I hope that I wouldn't have to testify about losing a layer of our labour market, not having senior engineers, writers or policy thinkers because we didn't bother to invest in having junior ones and we wanted to squeeze out a bit more productivity.

As we talk about the wartime efforts and investments that Canada has to make, we are going to have to think really seriously about other ways to support and stimulate smaller companies to train new grads, because it is costly, and we do have some programs for that and funding that people can access. However, really, a goal for Canada should be that, for youth employment—by the way, I'm the former chair of the expert panel on youth employment—we have meaningful, credible opportunities for young people to show off the skills that they already have instead of overfocusing on the supply of labour and the skills that they have, and recognize that the demand for labour may be fundamentally changing.

5:20 p.m.

Conservative

Dalwinder Gill Conservative Calgary McKnight, AB

These days there are so many self-driving cars in the market, so who is responsible if a self-driving car causes an accident? Should humans still learn how to drive if cars become fully automated?

5:20 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Bruce Holsinger has a wonderful book called Culpability—it was one of Oprah's book club's picks this summer—that starts to help with exploring that. We've seen that, in many recorded instances, when a self-driving vehicle has been in an accident, the software actually turns itself off a second or milliseconds before the point of collision. This allows companies to skirt culpability and say that the driver was actually at fault. Again, this is an instance in which we have seen a computational system come to the market not fully tested but, rather, like these other generative systems we've been talking about, relying on us as user testers. Right now I would say that, yes, there are self-driving vehicles on the market, as moderated by our provincial vehicular standards around where they can be. However, as for the credibility of the software and safety, I think that, when we get into a vehicle like that, we are all testing it.

5:20 p.m.

Conservative

Dalwinder Gill Conservative Calgary McKnight, AB

There will be too much dependency on artificial intelligence. Is that not right? In our social structure, how will it affect humans? Will they become socially isolated if they use artificial intelligence?

5:20 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

To go back to some of Matthew's earlier points about our post-secondary system, the strength of that system and the source of Canadian pride that we have, we're seeing evidence that, when students, young people and workers of all kinds use these algorithmic systems to do or support their work, they retain only about 20%, at best, so one-fifth of the information. They don't even actively remember what they were writing. It decreases brain activity.

I would put aside social isolation and think about this myth that this technology can make us self-driving as humans, take away our agency, or that there are shortcuts to things. I may not have had the opportunity to study closely the previous testimony of the guests and witnesses you've had, but I'm not going to show up here with material that an algorithmic system has generated and not take the time to put my own thoughts together. That's one of the core questions we have: It's not just about outsourcing the labour and work of thinking, but whether we need to think about this the way we did in the nineties, when we knew that labour was being actively offshored. Are we seeing instances in which labour is now going to be “AI-offshored”, where the job isn't actually going anywhere else but to a computer program?

5:20 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Mr. Saini, you have five minutes. Go ahead, please.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

Thank you for coming.

I'm going to talk a little bit on a different issue.

We had witnesses who said that the uncontrolled use of AI could be a danger to a country's sovereignty. Countries like Russia, China, India and the U.S.A. are preparing those things.

Could you elaborate on that part of it?

5:20 p.m.

Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

Matthew da Mota

If I understand the question correctly, you're asking about potentially adversarial countries using powerful AI systems to undermine our sovereignty.

I think in one way, AI is kind of the ultimate underminer of sovereignty, potentially. The way that you use it and the way that it processes information are very unaccountable, especially the way that we govern it currently. I think, in terms of attacks from China, Russia and other countries, there is certainly speculation that AI systems can be used to enhance cyber-weapons, for example, and other kinds of attacks like that. Certain AI systems have been used extensively to find vulnerabilities in computer systems, for example.

There are lots of papers and discussions that speculate on how AI can enable different weapons, including CBRN weapons, that is, chemical, biological, radiological and nuclear weapons, and so on. Whether that's an imminent threat...I think there's always an imminent threat. I spoke to an expert once who worked in the nuclear space who said that we're always about 10 seconds away from having a significant cyber-attack against a grid in a major country or in a major sector of a country. I think cyber-attacks are always a significant risk. Whether AI makes that more possible or less possible, I'm not 100% certain as of right now.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

Mr. Gonzalo, would you be able to share your viewpoint on that?

5:25 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

That’s an excellent question. Quite frankly, I am on the same page as Dr. da Mota.

The problems are real. Canada does not have its own applications the way France has Mistral AI or the Americans have their own solutions. We don’t have a large language model platform on which to host our data and which would allow us to be sovereign.

With respect to imminent attacks and how artificial intelligence can be misused, quite frankly, that’s not my field of expertise, so I prefer not to venture into that subject.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

Thank you.

Ms. Bednar, in your opening remarks, you said that there is also loss of production in some parts of industries.

Could you elaborate a little bit on that? Which industries are the ones that, in your view, are suffering from the use of AI?

5:25 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Suffering from the use? The applications of proprietary algorithmic systems can do really interesting and amazing things for supply chain optimization and moving elements around, and some of the ports work that we've seen in Quebec is all really encouraging. I would say more that we need to be careful about being dazzled and impressed by successful applications of the technology, thinking that it means that we should continue to hesitate when it comes to design.

One of the things I mentioned that no one's asked me about is the future of commerce with agentic payments, asking essentially a chatbot, a computer system, to make a purchase on your behalf. What that could mean is large multinationals preferencing their own companies over our own. The other witness mentioned smaller companies being challenged with how Google search is changing, information asymmetries and their ability to even connect with customers. If that ability to be discovered is becoming more dependent or interdependent on a model like ChatGPT to help you find a store, then that represents a real constraint in terms of access to markets for all kinds of businesses.

5:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Saini.

Mr. Barrett, you have five minutes. Go ahead, please.

5:25 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Ms. Bednar, to pick up where we left off before, does corporate consolidation make it easier for Canada to regulate something like knowability as a right? This idea about the illusion of choice is something you've talked about a lot in your writing.

I think some of that is there's a little bit more transparency for those who are looking and for those who have been presented with that information, but does it make it easier if we're dealing with a smaller number of really big players that are controlling many things? Does that make it easier, or does that have the opposite effect? Is it a bigger challenge for us?

5:25 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

It is interesting to think about when or whether corporate consolidation is a strength for Canada or an opportunity. You could argue that having fewer large companies allows government to more quickly consult with them or get their views, but in terms of business practices and coordination, in markets of all sizes, what we see is that the small and medium-sized players tend to mimic and adopt practices the larger ones have. They may set the pace or set the bar for how AI is used.

Actually, data and information as a competitive advantage is something we haven't been able to grapple with through our competition law, or to really appreciate what it means for barriers to entry for new entrants coming to Canada, such as when Canada potentially explored having a new grocery store. Remember that we did that very Canadian thing: We just asked really nicely.

There are lots of reasons for that. Part of it is geography and real estate. Many large grocers are also in the real estate business, fundamentally. We also saw this with, say, the Bay. The former CEO of the Bay said they were actually not a retailer; they were a real estate company. Through loyalty programs and the information profiles they have on us, it allows them to—again, you could argue—manipulate or set markets in particular ways. Maybe it makes it easier for them to control markets.

5:30 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I have a question for you, and it's not from me. It's from an AI model. I asked it what I should ask you.

I used a model I don't normally use for any purpose so it didn't have any or much context about me or why I'm asking you the question.

The question it has—I'm sure it's listening—is this: Which widely believed narrative about AI in Canada do you think is most misleading right now, and what risks does that misconception create for policy-makers or the public?

5:30 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Thank you to you and the AI system of your choice for the question.

I've already touched on that false opposition that any form of regulation is going to get in the way of innovation. Something I come up against a lot in my research and my work is this idea that because there's not a government regulation, a market is ungoverned or the market is more free. All markets have rules; the question is whether those rules have been democratically set and are transparent.

Then, as you're saying, as you try to attract investment, you can say that companies should come here and compete because they know they're going to have a fair shot. Otherwise, those rules can be set by private actors that become de facto regulators, and when that happens, as we've seen in digital markets, the rules are set in favour of the largest companies.

That's why so much of our e-commerce environment, which I think we still idealize as a free-ish market, is characterized by situations where companies, the largest ones but companies of all sizes, both own a marketplace and operate in it, and that allows them to manipulate that marketplace. Of every dollar earned by independent sellers on Amazon, 48¢, or maybe 45¢, goes to Amazon.

Again, we look at those companies and say, “Man, why aren't they more productive? Why aren't they earning more?” When half of every dollar of revenue you earn is going to what is essentially a junk fee that's been going up and up, maybe that's something that's getting in the way. Is that a free market? I don't think so.

5:30 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

What's the solution? What is the policy proposal you would recommend? Is this about awareness? Are the changing prices in grocery stores based on the time of day or based on who is nearby? On sites like Amazon, my five-year-old circled everything in the Amazon book.