Evidence of meeting #25 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.


Before the committee

David Krueger, Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual
Anthony Aguirre, Executive Director, Future of Life Institute
Max Tegmark, Professor, Future of Life Institute
Philippe Dufresne, Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

4:25 p.m.

Executive Director, Future of Life Institute

Anthony Aguirre

Canada can certainly stake out a position, in its own self-protection, that we should not be building superintelligence. I think an end to this race will actually require both the U.S. and China to realize that it's against their own self-interest to build superintelligent AI. As long as they believe that it will grant them power, they will want to pursue it, but this is not the case. Superintelligent AI will absorb power rather than grant it. If they realize this, then it is in their interest, as well as everybody else's, not to have it developed, and that's the foundation.

4:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Aguirre.

Mr. Krueger, I'm going to give you 10 seconds to respond, please.

4:25 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

Thank you.

We need negotiations for an international treaty to begin immediately. I want everyone here to talk to everyone they know who has any ability to make something like that happen and tell them exactly that: Tell them what you've heard here from all of us and the previous experts.

4:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, sir.

Mr. Hardy, you have the floor for two and a half minutes.

4:25 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you very much.

Once again, I’ll address my question to Mr. Krueger, because the question deserves analysis.

We’re seeing many companies investing in artificial intelligence in a bid to replace their employees and to make more profits. I raised this question in committee last week, and it prompted some valuable answers. People have questions.

When big tech companies invest in artificial intelligence, how do they measure their return on investment? Do they actually have good returns or ultimately, is it more expensive for them to manage artificial intelligence and ensure the work is done properly than to have a well-trained employee to do the same job?

I’d like to hear your thoughts on that.

4:30 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

At present, there are going to be cases where it's more expensive and cases where it's less expensive. One important thing to recognize is that AI doesn't have to do the job better if it can do it much cheaper. We might see a replacement of competent humans with less competent AI at scale, simply because it's cheaper, which I think would be a bad outcome.

On the other hand, I think we have to think about where this is all headed, because it's going there very fast. I think within a few years we will see AI that is an extremely competitive replacement for most human labour. That is the premise on which these investments are being made. The massive investments to build AI are justified by the belief that this will, or at least might, lead to the creation of something like the superintelligence we've been describing here.

4:30 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Businesses don’t invest without looking at key performance indicators, or KPIs, and rates of return. We always hear about the future, but are they measuring performance now? Are they seeing any returns, or would you say that, for now, they don’t have any measurements when it comes to their investments in this new technology?

4:30 p.m.

Conservative

The Chair Conservative John Brassard

Answer in 35 seconds or less, please.

4:30 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

I'm sure they're measuring some things. Historically, Silicon Valley companies care more about growth, dominating a market and addicting customers before they even try to make a profit. Amazon famously didn't make a profit for roughly 10 years. I don't know what they're looking at right now, but I don't think that is what is driving the investments. If you take seriously the possibility of AGI and superintelligence in a few years, which the investors and especially the people building it do, then the investment is certainly justified, except that this is also incredibly dangerous and shouldn't be happening. If it kills everybody, it will kill the investors, kill the people who own these companies, and kill everybody in this room. It won't matter if we've created a bunch of good businesses, as Sam Altman says. It won't matter if we've cured cancer, and so on.

4:30 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Krueger. I appreciate that.

Mr. Sari, you have the floor for two and a half minutes.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

I’d like to thank the witnesses for their very insightful remarks.

I only have two and a half minutes to ask questions on a very complex issue, which I feel very strongly about.

We all agree about the risks you have outlined today. Now, the question is how to intervene. To do so, it’s important first to specify the field or type of artificial intelligence in question. Here, we’re not talking about generative artificial intelligence, and we are not necessarily talking about superintelligence or agentic artificial intelligence. One thing that deeply concerns me is the concentration of cognitive capabilities in artificial intelligence systems, because those capabilities drive learning in general, a kind of universal “cognitive capacity”.

I’d really like one of you to answer the following question: How can we talk about actual human oversight or government oversight when artificial intelligence systems learn, evolve and make decisions at a pace that is faster than our collective ability to understand and challenge these decisions?

4:30 p.m.

Conservative

The Chair Conservative John Brassard

Who wants to take that question in just over a minute?

Mr. Tegmark, can I start with you?

4:30 p.m.

Professor, Future of Life Institute

Max Tegmark

Yes.

What you're so eloquently describing is the digital gain-of-function research, also known as recursive self-improvement, where AI makes better AI which makes better AI.

Again, it's easy to deal with. We've already dealt with it in biology here in America: we banned gain-of-function research, and there is very strong opposition to it now, given the possibility that it might have caused the COVID pandemic. We should similarly ban digital AI gain-of-function research. It's a no-brainer, yet right now a number of companies in America are explicitly doubling down on this kind of digital gain-of-function research because they're unregulated.

4:35 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Tegmark.

Mr. Krueger or Mr. Aguirre, do you have anything to add in 15 or 20 seconds or less on that question?

4:35 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

Yes.

I think it's important to emphasize that this issue does need to be tackled internationally. It also relates to what Anthony and Max said about tool AI. I think that's a great place to aim for, but I do think we may need to do something pretty drastic in the immediate future to be able to monitor and enforce an international agreement to stop this race. Then, once we've gotten control of the situation, we can think about how we want to proceed to develop beneficial tool AI—

4:35 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Krueger. I'm going to have to cut it off there. We're at the end of the hour.

On behalf of the committee, I want to thank all three of you for participating in this discussion. Thank you.

We are going to suspend for a couple of minutes while we change over to our second-hour panel with the Privacy Commissioner.

4:40 p.m.

Conservative

The Chair Conservative John Brassard

I'm going to call the meeting back to order for our second hour as we return to studying the challenges posed by artificial intelligence and its regulation.

I want to welcome for the second hour today, from the Offices of the Information and Privacy Commissioners of Canada, Mr. Philippe Dufresne, who is the Privacy Commissioner of Canada—it's always good to have you back with us, sir—and Marc Chénier, who is the deputy commissioner and senior general counsel.

Mr. Dufresne, you have up to five minutes to address the committee. Go ahead, please.

Philippe Dufresne Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

Thank you very much, Mr. Chair.

Members of the committee, thank you for the invitation to appear as part of your study on the challenges posed by artificial intelligence and its regulation.

Addressing the privacy impacts of the fast-moving pace of technological advancement is one of my strategic priorities. AI has also been a significant focus of my domestic, international and cross-regulatory work over the last few years, given its rapid and broad adoption by individuals and organizations in Canada and globally.

Privacy is an important and timely issue for Canada. As more and more personal data is being collected, used and shared, data protection becomes increasingly significant for Canadians and Canadian organizations.

The protection of personal information is particularly important in the context of AI, as personal information can be used to train and operate those systems. Recently, I announced an expanded investigation into the social media platform X and its Grok chatbot. The investigation will examine the emerging phenomenon of AI being used to create deepfakes, which can present significant risks to Canadians, including children.

I expect that the results of this investigation, as well as my ongoing investigation into OpenAI, will help to inform privacy and policy direction with respect to AI, and help individuals and organizations to use and deploy these technologies safely and responsibly, and with appropriate protections for personal data.

Investigations by the Office of the Privacy Commissioner in the past two years have demonstrated how Canadian law is able to address major privacy issues that can have serious impacts on individuals.

For example, my investigation into Aylo, which operates Pornhub and other pornographic websites, addressed non-consensual sharing of intimate images. My joint investigation with my U.K. counterpart into the 23andMe breach examined an incident that impacted the highly sensitive personal information of seven million customers, including more than 300,000 Canadians.

Last fall, I announced the result of my investigation with my provincial counterparts into TikTok, which highlighted the importance of protecting children's privacy online. Because of our investigation, the company has implemented, and continues to implement, improvements to its privacy practices in the best interest of its users, especially children.

Technologies such as AI can bring economic, social and public interest benefits. The value of this innovation will be maximized when it is accompanied by trust.

A survey conducted by the Office of the Privacy Commissioner last year found that a significant majority of Canadians are concerned about how their personal information is collected and used—including 83% indicating concern about their privacy when using artificial intelligence tools. Many have taken actions to protect themselves and most indicated that they are less willing to share their personal information with organizations compared to five years ago.

This further underscores the strategic advantage for organizations to develop and deploy AI and other technologies in a responsible, privacy-preserving manner. It is key for developers and providers of AI to embed privacy in the design, conception, operation and management of their products and services and to consider the unique impact that these tools have on children, as well as on groups that have historically experienced discrimination or bias.

Organizations that use AI should be transparent about this use and accountable for any AI-generated decisions about an individual, such as whether to grant someone a loan or a job.

As technologies continue to evolve rapidly, and become increasingly integrated into personal and professional lives, it is our collective role as regulators and policy-makers to ensure that privacy is protected for current and future generations. Canada’s privacy laws must be able to meet this challenge, and to do so will require modernization.

With respect to AI, my recommended amendments to Canada's federal privacy laws include recognizing privacy as a fundamental right, as well as establishing requirements to implement privacy by design and to conduct privacy impact assessments for high-impact data processing.

Personal information is at the heart of artificial intelligence, and therefore, privacy legislation should, in my view, be at the heart of AI regulation.

Thank you. I look forward to your questions.

4:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Dufresne.

We're going to start with Mr. Barrett for six minutes. I'm going to keep it tight on time.

Mr. Barrett, go ahead.

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Tell me about what have been called Chinese spy cars. The Premier of Ontario, Mr. Ford, has expressed serious concerns about plans for the Canadian market to accept nearly 50,000 vehicles manufactured by companies like BYD. What has your office looked at so far with respect to those vehicles and that claim, informed or otherwise, by Mr. Ford?

4:45 p.m.

Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

Philippe Dufresne

I have heard those statements, and we are monitoring the situation generally with respect to connected vehicles. In fact, this year we launched our contribution program on the theme of connected devices, and we're looking forward to finding out more about the types of connections and the types of data that are collected by cars and other devices.

In terms of the Chinese angle, we are not looking at this specifically. However, in the context of our TikTok investigation, one of the elements we highlighted in our conclusions was that when data leaves Canadian jurisdiction and there is a risk that other governments can access it, this is something Canadians should know about. It should be transparent. In our TikTok report findings, we requested, and the organization agreed, to make this explicit.

4:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

On TikTok, do you have a recommendation for Canadians on whether or not they should use the app?

4:45 p.m.

Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

Philippe Dufresne

Our recommendation to Canadians is that they should ask questions about this or any app. Frankly, in any situation where their personal information is being sought, they should be asking, “Why do you need it? What will you do with it? Who will you share it with?” In the TikTok investigation, we address these head-on, looking at what the organization is telling Canadians.

It's one thing for citizens to ask questions. Organizations have a big responsibility to be proactive in this transparency. In the TikTok case, we found that the information wasn't clear enough for adults, and it was certainly not clear enough for children, who are a huge part of that market.

The questions should be about the use, the sharing, the purposes, where it's going and who can have access to it. Canadians should ask more of these questions, but organizations should proactively take responsibility for making that information easy to find.

4:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Have you seen a change in the proactive information offered by TikTok to Canadians since you made those recommendations?

4:45 p.m.

Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

Philippe Dufresne

I would say yes, because we are working with them to monitor the implementation of our recommendations. The company had a six-month period to put them in place. A big one was better tools to keep underage children off the platform altogether. Others had to do with transparency, consent and information.

They've implemented a number of those, and they have until March to complete the rest. We're going to be monitoring that to make sure that happens.