Thank you very much, Mr. Chair and members of the committee.
By way of a brief introduction, I'm the managing director of The Canadian SHIELD Institute for public policy and co-author of The Big Fix: How Companies Capture Markets and Harm Canadians. My work focuses on market power, technology and economic sovereignty.
I'm joined today by my colleague, Dr. Matthew da Mota. His work explores how technologies shape information and knowledge environments, particularly AI and the implications for national security and sovereignty. He's also a leader in the AI standardization community in Canada. You heard that it's his first appearance at committee; I hope it will not be his last.
Canada has been talking seriously about AI regulation for the better part of a decade now, and yet, while we've been mostly debating privacy, consent and data collection frameworks, AI hasn't been waiting for us. It hasn't been waiting for businesses, either. The technologies are already being deployed, shaping markets and shaping culture and economic outcomes in real time.
Much of the regulatory conversation to date has treated AI primarily as a data governance problem. That focus is important, but it's no longer sufficient, because what we're now facing isn't speculative or hypothetical. It is a present-day deployment challenge. We're regulating live use cases; at least, that's how we think we need to start approaching this.
Here is some of what we've been studying at SHIELD. There's AI-generated music and cultural production that cannot be reliably distinguished without disclosure. Earlier today at Little Victories, I was surprised to learn that my coffee was sponsored by Spotify. I wonder why. There's algorithmic and personalized pricing in housing, groceries, ticketing, insurance and elsewhere. Autonomous and agentic payment systems are beginning to transact without direct human initiation. What does that mean for the future of e-commerce and the discoverability of businesses big and small?
None of these challenges maps directly, neatly or perfectly onto a simple privacy and consent framework. They're about market governance. They blend consumer protection, competition, labour and financial oversight. They're about how power is exercised through automated systems in everyday life. If we have a gap today as a country, it's mostly that we've been reluctant to take clear positions on how AI is already being used and how it should maybe be constrained in practice.
Let me just expand on those three more concrete live use cases.
The first is culture and CanCon. You know that Canada recently updated its Canadian content framework to say that AI-generated material does not count as CanCon, but we did not take the extra step of clarifying what AI-generated material should count as. What is it? How should it be labelled? How should human creators be protected in markets that are now saturated with synthetic output? We have a regulatory vacuum in one of the country's most sensitive sovereignty domains.
The second is algorithmic pricing. Automated pricing systems are shaping and reshaping rent, tickets, groceries, consumer credit—all sorts of places. The Competition Bureau's forthcoming study in this arena is a crucial step forward. The challenge here is not just price discrimination, but also the normalization of machine-optimized extraction from households at scale. We care about the cost of living in Canada. We have to care about this practice.
For the third one, I just want to point to payments and financial autonomy. As AI systems begin to initiate transactions autonomously, which is interesting from a consumer protection and competition standpoint, we need to ask whether existing Bank Act principles like fairness, non-discrimination, explainability and regulatory oversight apply. If machines are transacting, then the governance expectations have to follow that transaction—not the interface.
I'll also note one element of caution in the broader economic narrative. We're being told that AI will rescue us from our productivity rut if only adoption moves fast enough, yet the evidence there remains highly mixed. Many enterprise deployments fail, and some controlled studies show productivity losses rather than the gains that have been promised.
Yes, AI may well transform parts of our economy, but it would be a mistake to predicate Canada's entire growth strategy on unproven assumptions. If we over-promise and then under-govern, the public's going to pay twice—once through disrupted labour markets and again through weakened consumer protections.
In closing, AI regulation cannot remain anchored primarily in upstream debates about data collection alone. We have to regulate the downstream power that is already observable: how systems shape and reshape prices, wages, transactions, culture, information and access to opportunity. The technology is at work, and the question before this committee is whether governance can catch up.
Thank you. We look forward to your questions.