Evidence of meeting #21 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Before the committee

Frédéric Gonzalo, Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual
Vasiliki Bednar, Managing Director, The Canadian SHIELD Institute for Public Policy
Matthew da Mota, Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

4:50 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

I call it fake news. It's fake music. It's fake sounds. It's fake.

4:50 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

Can I chime in?

4:50 p.m.

Conservative

The Chair Conservative John Brassard

You may, but briefly.

4:50 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

Algorithms do play a powerful role. We know that today, about 70% of the content consumed on Netflix comes from the platform’s recommendations. On Spotify, 50% to 60% of music shown is driven by playlists, your tastes and your listening habits.

Some education does come into play, but it’s important to recognize the role and the strength of these algorithms.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Well, there goes my music career after I retire. I was hoping to have a synthetic music career, but that may not work now.

You have six minutes, Mr. Thériault.

Luc Thériault Bloc Montcalm, QC

Thank you, Mr. Chair.

Even though I will start by referencing an article by Mr. Gonzalo, my question is for all witnesses and I would like each of them to chime in.

In a blog article, Mr. Gonzalo, you stated that this year, there is a significant increase in the use of artificial intelligence tools as search engines. You explain that last year, 5% of Canadians surveyed stated that their first instinct to stay informed is to use these tools, and that this figure now stands at 12%. This is a significant increase that once again confirms the penetration rate of artificial intelligence in our daily lives.

I have some concerns when I see such an increase, in particular when it comes to the numerous unavoidable biases of artificial intelligence. We need to ask a basic question: Who is responsible for biases in data, algorithms and the results? No one knows.

Bias in artificial intelligence refers to the appearance of skewed results caused by human prejudices that distort the training data or the underlying algorithms. These skewed results can have adverse consequences. Biases that are not dealt with harm people’s ability to participate in the economy and society. Biases reduce the accuracy of artificial intelligence, and by extension, its potential. They have an impact on all of society and on businesses. This can be something such as recommending politically biased content, which can replicate or perpetuate echo chambers. These impacts may also be felt in recruitment or in access to credit and loans, for example.

How can we ensure these biases don’t mislead people?

4:55 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

May I answer that question?

Luc Thériault Bloc Montcalm, QC

Yes, please go first, Mr. Gonzalo.

4:55 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

You have zeroed in on the issue of biases. There is also the issue of hallucinations. I would say that we have not yet come up with a response or solution to these two factors. We know that big artificial intelligence companies say they are solving these issues, but the challenge remains real.

In my opinion, the government can ensure these companies are compliant, so to speak, by forcing them to be transparent. It’s important to try and open up this black box. For now, there is no mechanism in place in that regard.

A study by the Blue Cross on travel intentions by Quebeckers and Canadians was released today. Over 3,000 Canadians were surveyed to find out where they were planning to go this winter, in Canada or abroad. The results showed people are increasingly using artificial intelligence tools for travel suggestions and for tips and tricks to save money while travelling.

The report you alluded to in the article I wrote was the DGTL study published by Léger in September. From one year to the next, consumers are making more use of artificial intelligence in their daily lives.

Obviously, Google is still the main online search engine, but did anyone know exactly how Google’s algorithm worked when ranking results? We never knew much more; there were just a few indicators. Artificial intelligence has put us in a field where we have sources, but we don’t know how the tool was trained.

This creates challenges for businesses, for example, as they don’t always understand why they are not recommended in search results. That poses a real challenge because instead of getting a list with hundreds of clickable links, you now get a mash-up answer with two or three suggestions for companies, businesses and organizations. Businesses are at risk if their name does not appear among these suggestions.

I don’t have an answer to that, unfortunately, but I think that it’s indeed a problem that must be dealt with.

4:55 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Thériault, I know that Dr. da Mota would also like to chime in.

Luc Thériault Bloc Montcalm, QC

Yes, of course.

4:55 p.m.

Conservative

The Chair Conservative John Brassard

Mr. da Mota, do you want to respond to that?

Matthew da Mota Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

This is an extremely concerning question that I've been working on for a few years—the question of how AI will impact research in general, especially Canadian research institutions. It's what we would call—and what we're working on under the term—“epistemic sovereignty”, which is the ability of a country or a community to control the knowledge environment and how knowledge is produced. That's an important question, not only for researchers in the sciences and humanities but also for people working in government and for businesses. How do you translate information into knowledge and then into action in the world?

This is a huge concern. We don't know how a lot of these models are trained exactly. We don't necessarily know what kind of data they're being trained on. There have been many examples of intentional insertion of certain types of data to skew results towards one narrative or another. These are all major concerns.

In terms of how we could govern this, we need to think first about what we want our knowledge environment to look like. This is what I would say across the board on what we're doing with AI. What do we actually want the results to look like? What are the long-term goals? Then, we come up with solutions based on that.

Part of that would be thinking about the kinds of monopolies that control our information environment and our knowledge environment. This is very obvious in the big-tech sector, but in the research sector, in particular, there are only a few companies—they're all multinationals; none of them are Canadian companies—that own the vast majority of academic copyright. They also are developing AI tools to access and process that information from that copyright.

This is what our entire research and education system is built on at the university level, and this is a major concern.

5 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Dr. da Mota.

Thank you, Mr. Thériault.

Mr. Gonzalo, I apologize for cutting your answer short earlier, but I noticed that someone else in the room wanted to contribute.

Mr. Cooper, you have five minutes. Go ahead, please.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Thank you, Mr. Chair.

Thank you to the witnesses.

I'm going to ask a fairly broad, high-level question to both witnesses. Other jurisdictions are a lot further ahead when it comes to regulation, and there's a vacuum here. In that sense, there's a debate, obviously, about the extent to which, in broad terms, regulation should be grounded in the precautionary principle versus approaches that rely on everything up to post-deployment monitoring.

We can look to the EU with its Artificial Intelligence Act, which has had a challenging rollout, arguably, in terms of being critiqued as overly burdensome, with overly high compliance costs. Arguably, Bill C-27, the Canadian model that never came to be, was more restrictive than the EU, insofar as the EU model, the EU act, has greater carve-outs. The U.K.'s regulatory framework is a little more flexible. Then there's the U.S. approach, and there are others. There are ranges there.

I'd be, in very broad terms, interested in your comments on some of the pros and cons of regulations imposed in other jurisdictions.

5 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Of course. I will turn it to Matthew.

I'll just share that something I've been noticing is the language around regulatory harmonization being used now. I think it's the new way we signal a kind of deregulation or lower regulatory environment. It's a way to suggest to Canada that because we don't have our own path forward we should continue to wait and to follow others.

But yes, there are other models that are instructive in various ways.

5 p.m.

Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

Matthew da Mota

Yes, I think the first thing I would say is about the idea that regulation kills innovation. I think there's a lot of evidence that shows the contrary, or at least shows that it's a far more complicated question than that.

I think in the EU AI Act context, some of the things that are prohibited are things like active subliminal or manipulative kinds of AI, biometric categorization by race, things that I think we mostly can agree are probably unacceptable. The fact that companies are saying that the burden is too high is a little concerning, because either they're developing tools that want to do these things or they're just trying to open up space to be able to do whatever they want.

In terms of pros and cons, I think in Canada in some ways we're behind the United States and other leading countries in terms of commercializing AI in the leading companies. We still have probably the best or one of the best research environments for AI and other sciences in general. I would say we can lead in many ways. I think a great pro of thinking about the right kind of regulation is that we could lead on developing the kind of AI that people actually want to use, the safe, useful AI that can be used across all different areas in very specific domains or more generally. I think that's a huge pro to any kind of regulation.

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Mr. Gonzalo.

5:05 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

I think it boils down to what I said earlier. I think we can’t be against regulation. On the contrary. The only thing that I would recommend, which I discuss with the businesses I work with, would be to adopt graduated regulations.

Many businesses make fairly basic use of generative artificial intelligence in general, whereas bigger organizations integrate artificial intelligence on a larger scale. Both types of businesses therefore do not use artificial intelligence in the same way. Unfortunately, there is a tendency to want to introduce uniform regulations that apply to all types of businesses. The only thing that I would recommend would be to tread carefully. I think it’s good to adopt a form of regulation, but it should not be applied too broadly.

5:05 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Cooper.

Thank you, Mr. Gonzalo.

Ms. Church, you have five minutes.

Go ahead, please.

Leslie Church Liberal Toronto—St. Paul's, ON

Thank you, Mr. Chair.

Thank you to the witnesses for being here.

Ms. Bednar, thank you for writing The Big Fix. I would consider that a must read. I just want to commend your book, which has an excellent public policy perspective on many of these issues.

I would like to ask you specifically about the concept of algorithmic pricing, because I think it is actually new for a lot of us. We are, as consumers, already familiar with examples of surge pricing or variable pricing when we purchase an airline ticket, for example. Why should we be more worried about algorithmic pricing? How is AI changing the way businesses set prices for consumers today?

Then I have a follow-up question. What are the ways then that we can help protect consumers, their privacy and their pocketbooks?

5:05 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

I think one reason we should care about algorithmic pricing is that it's a form of personalized pricing, which can be interpreted as inherently discriminatory. Yes, there are a lot of places in the economy where we've come to accept price volatility. We all might drive around to a different gas station because we can see that the price changes daily, but we can all see the same price.

With personalized pricing, each of us might see a different price for the same item. We're actually seeing that Target and Walmart in the U.S. have stopped, in some instances, even putting price labels on their shelves, saying they can't keep up with tariffs and all those other price changes. You then don't find out what the price is until you go to the checkout.

Loyalty programs are closed pricing ecosystems, where you and I might see and get a different discount. That's a different form of pricing designed to incentivize us to purchase certain things based on our past purchasing behaviour. It also means that, say, coupons—which we all used to get in the newspapers, so we could all get the same discount on our milk or diapers, be they for your baby or for yourself—could be accessed equally. That's changing.

You don't have to be a big company to do it. You don't have to be the biggest on the block. It is a practice that firms of all sizes have adopted, probably because we have these legislative rule vacuums. One of the more insidious ones I've come across is the Taco Bell app, which can start to infer or learn when your payday may be because of the cookies. Again, these are data-hungry surveillance environments. My gordita deal is more expensive every other Friday.

The people who end up being taken most advantage of.... Again, it's maybe at the margins. It may seem like small sums, but it really adds up. Back to what I said before, that it sucks—this sucks, too.

Back to that element of knowability, it's very difficult to discern when it happens. Years ago, Amazon stopped having prices on its holiday gift guide. Remember getting the Eaton's catalogue and folding pages or peeking at your mom's Victoria's Secret? There aren't prices now when it comes to the Amazon catalogue. You and I might see a different price based on the time of day, based on our geography or based on the devices we're using. That price is not to give us the best possible discount; it's to extract as much value as possible.

Leslie Church Liberal Toronto—St. Paul's, ON

I take from this that our legal frameworks right now are insufficient.

How do we get to the bottom of this? How do we make sure that a business isn't setting a personalized price on a discriminatory basis based on what they can infer, presumably, of my background, my financial situation, my geography and this whole constellation of data points that they presumably have access to now through AI?

5:10 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

A lot of it comes back to knowability. Of course, I'll defer to and look forward to the Competition Bureau's forthcoming study on algorithmic pricing. We did see with the RealPage case—which was studied more in the U.S. than here, where we said there wasn't enough evidence—that a software program was being used to drive up rents for apartment buildings. Again, it's a reminder that you don't have to be the largest firm to use software like this that could be collusive.

Canadians, I think, are still reeling from bread price-fixing. I think right now you can still get, like, $20 or $25 through a class action lawsuit or something. I'm going to have to google that.

Software systems and computer programs can allow this to happen. There are more models in the U.S., often at the state level. New York just introduced new legislation related to that kind of pricing that mostly has to do with disclosure and there have been other proposals to just ban it entirely.

You could argue there are instances where it's preferable or desirable, but again it's fundamentally an extractive process. It's not one that's really about rewarding your loyalty.

5:10 p.m.

Conservative

The Chair Conservative John Brassard

That's fascinating.

Thank you, Ms. Church.

Mr. Thériault, you have the floor for five minutes.