Evidence of meeting #21 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)


Before the committee

Frédéric Gonzalo, Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual
Vasiliki Bednar, Managing Director, The Canadian SHIELD Institute for Public Policy
Matthew da Mota, Senior Policy Researcher, The Canadian SHIELD Institute for Public Policy

4:30 p.m.

Conservative

The Chair Conservative John Brassard

I call this meeting to order.

Welcome to meeting number 21 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.

I would like to welcome our witnesses for today.

On Zoom, we have Frédéric Gonzalo, who is a consultant, speaker and trainer in digital marketing and artificial intelligence. Welcome.

We also have, from the Canadian SHIELD Institute for Public Policy, Vas Bednar, managing director.

Welcome back to committee. You were here a year ago today. We're celebrating an anniversary. Isn't that wonderful?

Dr. Matthew da Mota is the senior policy researcher at the Canadian SHIELD Institute. Welcome.

Before you begin, I'll note that I decided today to consolidate the three witnesses into a single panel. We have an hour and a half. If it's the will of the committee to go a little longer, we will have some extra time. As it stands right now, we're going to finish at roughly six o'clock.

Mr. Gonzalo, I'm going to start with you for up to five minutes to address the committee.

Go ahead, please.

Frédéric Gonzalo Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Thank you, Mr. Chair.

Good afternoon, members of the committee.

Thank you for inviting me to contribute to this important discussion on the challenges posed by artificial intelligence regulation.

For more than 30 years, I have been working with small and medium-sized organizations, particularly in the tourism, private education, culture and municipal services sectors in Quebec and internationally. These are often small businesses with fewer than 100 employees that want to adopt artificial intelligence to increase efficiency, but they don’t always know where to start, what to use and what risks to avoid.

My first observation is that regulatory uncertainty creates paralysis. SMEs don’t have legal teams or cybersecurity specialists. They want to do the right thing, but they don’t always have a concrete understanding of what is allowed, what is not recommended or what could lead to non-compliance. A framework that is too technical or rigid risks creating a digital divide between well-resourced organizations that can move forward and those that cannot.

My second observation is that there must be a balance between privacy and innovation. SMEs currently use tools like ChatGPT, Gemini or Canva AI without a full understanding of how their data is being processed. Policies change rapidly, interfaces evolve and it is difficult for SMEs to keep up. A set of simple and visual Canadian guidelines on consent, anonymization and data minimization tailored to small organizations would be extremely useful.

Third, digital literacy continues to be a big challenge. For the past few years, I have been providing artificial intelligence training to managers, municipal organizations, artists, restaurateurs and hoteliers. I have observed the same phenomenon everywhere: there is a real and immense enthusiasm, but people have limited practical knowledge. Employees use artificial intelligence in their personal lives, but rarely do so in a structured setting at work. Without training or support, artificial intelligence risks being misused or not used at all.

Fourth, the transformation of search engines into artificial intelligence engines has created a new challenge of digital discoverability. Businesses are now wondering how to be visible in ChatGPT, Perplexity or Gemini and how their content is cited or not cited by these platforms. The lack of transparency complicates matters for SMEs which simply want to exist in this evolving ecosystem.

Lastly, a proportionate compliance framework is needed. SMEs now mostly use artificial intelligence to write texts, respond to customers, automate administrative tasks or create visuals. These are low-risk uses. Regulations should therefore be tiered: heavy and strict for systems that have a societal impact, but simple, pragmatic and accessible for everyday use in small organizations.

In short, SMEs want to adopt artificial intelligence, but they don’t want to be left to their own devices. They need a clear framework, adequate support and tools that are tailored to their reality. Regulations must protect Canadians while allowing small organizations across the country to innovate, remain competitive and take full advantage of this technological revolution.

Thank you. I will be more than happy to answer your questions.

4:35 p.m.

Conservative

The Chair Conservative John Brassard

Thank you for your opening remarks, Mr. Gonzalo.

I now give the floor to Ms. Bednar.

Ms. Bednar, you have up to five minutes to address the committee. Please start.

Vasiliki Bednar Managing Director, The Canadian SHIELD Institute for Public Policy

Thank you very much, Mr. Chair and members of the committee.

By way of a brief introduction, I'm the managing director of The Canadian SHIELD Institute for public policy and co-author of The Big Fix: How Companies Capture Markets and Harm Canadians. My work focuses on market power, technology and economic sovereignty.

I'm joined today by my colleague, Dr. Matthew da Mota. His work explores how technologies shape information and knowledge environments, particularly AI and the implications for national security and sovereignty. He's also a leader in the AI standardization community in Canada. You heard that it's his first appearance at committee; I hope it will not be his last.

Canada has been talking seriously about AI regulation for the better part of a decade now; and yet, while we've been mostly debating privacy, consent and data collection frameworks, AI hasn't been waiting for us. It hasn't been waiting for businesses, either. The technologies are already being deployed, shaping markets and shaping culture and economic outcomes in real time.

Much of the regulatory conversation to date has treated AI primarily as a data governance problem. That focus is important, but it's no longer sufficient, because what we're now facing isn't speculative or hypothetical. It is a present-day deployment challenge. We're regulating live-use cases, and at least that's how we think we need to start approaching this.

Here is some of what we've been studying at SHIELD. There's AI-generated music and cultural production that cannot be reliably distinguished without disclosure. Earlier today at Little Victories, my coffee, I was surprised to learn, was sponsored by Spotify. I wonder why. There's algorithmic and personalized pricing in housing, groceries, ticketing, insurance and elsewhere. Autonomous and agentic payment systems are beginning to transact without direct human initiation. What does that mean for the future of e-commerce and the discoverability of businesses big and small?

None of these challenges map directly, neatly or perfectly on a simple privacy and consent framework. They're about market governance. They blend consumer protection, competition, labour and financial oversight. They're about how power is exercised through automated systems in everyday life. If we have a gap today as a country, it's mostly that we've been reluctant to take clear positions on how AI is already being used and how it should maybe be constrained in practice.

Let me just expand on those three more concrete live-use cases.

The first is culture and CanCon. You know that Canada recently updated its Canadian cultural guidelines, its framework, to say that AI-generated material does not count as CanCon, but we did not take the extra step of clarifying what AI-generated material should count as. What is it? How should it be labelled? How should human creators be protected in markets that are now saturated with synthetic output? We have a regulatory vacuum in one of the country's most sensitive sovereignty domains.

The second is algorithmic pricing. Automated pricing systems are shaping and reshaping rent, tickets, groceries, consumer credit—all sorts of places. The Competition Bureau's forthcoming study in this arena is a crucial step forward. The challenge here is not just price discrimination, but also the normalization of machine-optimized extraction from households at scale. We care about the cost of living in Canada. We have to care about this practice.

For the third one, I just want to point to payments and financial autonomy. As AI systems begin to initiate transactions autonomously, which is interesting from a consumer protection and competition standpoint, we need to ask whether existing Bank Act principles like fairness, non-discrimination, explainability and regulatory oversight apply. If machines are transacting, then the governance expectations have to follow that transaction—not the interface.

I'll also note one element of caution in the broader economic narrative. We're being told that AI will rescue our productivity rut if only adoption moves fast enough, yet the evidence there remains highly mixed. Many enterprise deployments fail. Some controlled studies show that productivity losses occur rather than the gains that have been promised.

Yes, AI may well transform parts of our economy, but it would be a mistake to predicate Canada's entire growth strategy on unproven assumptions. If we over-promise and then under-govern, the public's going to pay twice—once through disrupted labour markets and again through weakened consumer protections.

In closing, AI regulation cannot remain anchored primarily in upstream debates about data collection alone. We have to regulate the downstream power that is already observable, how systems shape and reshape prices, wages, transactions, culture, information and access to opportunity. The technology is at work, and the question before this committee is whether governance can catch up.

Thank you. We look forward to your questions.

4:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Bednar. I appreciate your opening statement.

We're going to start with our six-minute rounds of questions.

Mr. Barrett is going to kick things off.

Go ahead, Mike.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Ms. Bednar, from your perspective, as someone who studies digital market failures and governance, what is the single biggest structural weakness in Canada's current AI strategy, and what's the effect of that on public accountability and our economic sovereignty?

4:40 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Thank you for a wonderful and challenging question.

In terms of a big weakness overall, I think it's very obvious that we're treading so carefully on not wanting to infringe upon or impede innovation.

In 1999, the U.S. took an explicit policy position around permissionless innovation that Canada tacitly echoed. We said, “Let's step back. Let's take our hands off the wheel. Let's throw spaghetti at the wall.” Right now, most of the time, we're trying to scrape some of that tomato sauce off the wall. That's why it's been so challenging for us to bring forward a big tech accountability agenda.

Our biggest constraint is that tension between feeling like any market intervention around governance and guardrails is seen or interpreted as impeding innovation and subsequent growth.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

What should it look like? What should those guardrails look like?

Did you have the opportunity to see any of the previous committee hearings or any of the testimony from our most recent meeting, for example?

4:40 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

No; we looked a little bit at who was appearing, and into their companies and backgrounds.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

It's not required homework, though I do encourage all Canadians to regularly watch the proceedings of the Standing Committee on Access to Information, Privacy and Ethics.

4:45 p.m.


Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

However, the question I have posed to other witnesses is about the challenge, or the instinct, to regulate and to put up as many guardrails as we can to prevent the runaway freight train of AI superintelligence, on the assumption that everything will then be okay.

Of course, that has to be done in concert with peer countries or even with a global compact, but if you have any other actors—let's say, bad actors—who are the state sponsors, currently, of cyber-attacks on Canada, how are we able to balance regulation while also allowing ourselves to progress? We're going to need to deploy AI in some form, I would expect, to defend against AI weapons.

4:45 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

One thing we have historically tried to do in one piece of legislation is regulate both the composition of these systems and their application. You can view that as an opportunity to separate some of those thoughts, which is why we're putting forward the use of concrete use cases to understand where and how this technology is being disruptive or is deceiving people. Where do we not understand where it is and how it is distorting markets?

The second fundamental challenge for Canada is that, in our trade agreements, we're constrained through the digital chapter in CUSMA from doing what many people would want us to be able to do, such as, for instance, mandating data residency or auditing algorithms to even start to understand them. As we look forward to what we want to be able to do when it comes to interpreting, governing and having the right oversight or auditability of those algorithmic systems, we are currently unable to do that.

4:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

What would institutional reforms need to look like that would insulate our AI oversight as a country from political cycles, inconsistency or, let's just say, knowledge deficits at the political level?

For example, a minister responsible for artificial intelligence is a new thing, so what is the mandate of that minister? What's that ministry responsible for?

That's going to evolve, change, cycle in and, potentially, cycle out with changes in the ministry and in the federal cabinet. How do we insulate against the cyclical nature of the political element so that we have consistency and stable regs?

4:45 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

I wonder if you want to start with the principle of knowability when a system is being used or deployed or, for instance, when you interact with a chatbot in businesses and governments. It's very “Dude, where's my jetpack?” in terms of what we're going to get with AI.

We have a lot of chatbots. That's interesting and can save money on customer service. Put that aside. Should a chatbot be able to, frankly, masquerade as a human or deceive people into thinking it's one? It can be very confusing for people. When I think I'm chatting with Mark at Canadian Tire or something, it's a computer system.

When you're chatting with the chatbot from the Government of Canada, and you're asking it questions about the immigration system, you may think that you are speaking with an agent or something like that. Again, it's that principle. Right now, we lack knowability a lot of the time. That's why I brought up music. Synthetic audio makes it basically impossible for us to detect when you're hearing a fake song. I know that sucks.

4:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Thank you.

4:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you. I'm sure that will get recorded in the blues, “that sucks”. You can say it. It's all good.

Mr. Sari, you have the floor for six minutes.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

Thank you very much to the witnesses for being with us today. Their opening remarks were quite compelling and interesting and they truly align with this committee’s study, which is even more relevant at this critical juncture, when we need to protect Canadians and ensure that we do not hinder the growth of the digital economy in Canada. Canada is a pioneer in this field. That is a very important element.

Witnesses have mostly talked about culture and generative artificial intelligence and the creation of music or other forms of artistic or cultural content.

I have the following question with respect to putting in place control mechanisms. Should we have control mechanisms that govern the development of systems when it comes to learning, training systems and large language models, or LLMs, or should we have mechanisms to control use, since Canadians are currently using these systems?

When we talk about control mechanisms, what are we referring to? Are we talking about control in terms of personal behaviour or within a public organizational framework?

The question is for all of the witnesses.

First, is it feasible to control systems? If so, can you tell us how?

4:50 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Gonzalo can answer the first question.

4:50 p.m.

Consultant, Speaker, Trainer in Digital Marketing and Artificial Intelligence, As an Individual

Frédéric Gonzalo

That’s an excellent question.

I am not an expert on regulations, but I think that when it comes to global platforms, Canada has a role to play regarding control, which can be done at the user level.

It would be very difficult to see how you put in place control mechanisms with OpenAI, Anthropic or the other firms, such as Microsoft. It is not easy to control businesses. There have been attempts to do that with Google and Meta over the past few years. I think that was part of the old Bill C‑18. In an ideal scenario, is it something that we would want to do? Maybe, but I think feasibility will not be easy.

However, we can control its use. At least, it may be possible to narrow the parameters within which consumers, traders and the public can use these tools.

I alluded to that in my remarks: There is a need to define how far we are going to go and what is allowed. It is also important to educate people about what can or cannot be done or should not be done. I think that is where there would be a role to play.

That’s my take on this issue.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

I don't know who wants to address that, Ms. Bednar or Mr. da Mota.

4:50 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

I'll say that, with the application of generative elements—to bring it back to culture—we are also seeing that it's not something markets really want. iHeartRadio recently announced that it will not play any music that has a synthetic component or is synthetically generated. We saw during the Oscars that moviegoers were offended that someone had vocal coaching that was synthetic in the background of a movie.

We're starting to see, again, outside of more formal regulations, what markets and what people want and don't want. I do think, when it comes to the application of that material, that it's very important to pay attention, because we have a responsibility. Governments have a responsibility to do hard and difficult things.

That's why the government has also been studying copyright, AI and where that value is created. I know companies like OpenAI want us to think that it's very difficult to govern them, but it doesn't have to be that way.

Abdelhaq Sari Liberal Bourassa, QC

I’d like to continue the discussion on OpenAI, but I have another question about Quebec culture.

I really believe in raising awareness to address many societal challenges. It is important to educate Canadians about artificial intelligence so they can better understand it.

Some people don’t even realize that the music they are listening to has been generated using artificial intelligence, whether on Spotify, where the algorithms increasingly surface music generated by artificial intelligence, or on YouTube, for example.

Do you think increasing public awareness could be more effective than control?

4:50 p.m.

Managing Director, The Canadian SHIELD Institute for Public Policy

Vasiliki Bednar

Absolutely not. This isn't an education failure; that's impossible. It's intentional deceit. It's companies that want to extract value from real artists and musicians, companies that have already depreciated the payouts those artists receive and are now training computer systems. Calling it AI sometimes makes it a bit fancier than it is. They're actively training systems to take artists and real bands out of the equation altogether and earn more for themselves on this fake music.

I find it deeply offensive that we can be in elevators, at work or in a hotel room and listening to something that's frankly not real. It's just a bunch of sounds.

Abdelhaq Sari Liberal Bourassa, QC

I like your words “fake music”.

There's a new word you can use now—fake music. Do you call generative music fake music?