Digital Charter Implementation Act, 2022

An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Sponsor

Status

In committee (House), as of April 24, 2023

Subscribe to a feed (what's a feed?) of speeches and votes in the House related to Bill C-27.

Summary

This is from the published bill. The Library of Parliament has also written a full legislative summary of the bill.

Part 1 enacts the Consumer Privacy Protection Act to govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities. In consequence, it repeals Part 1 of the Personal Information Protection and Electronic Documents Act and changes the short title of that Act to the Electronic Documents Act . It also makes consequential and related amendments to other Acts.
Part 2 enacts the Personal Information and Data Protection Tribunal Act , which establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act and to impose penalties for the contravention of certain provisions of that Act. It also makes a related amendment to the Administrative Tribunals Support Service of Canada Act .
Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.

Elsewhere

All sorts of information on this bill is available at LEGISinfo, an excellent resource from the Library of Parliament. You can also read the full text of the bill.

Votes

April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts
April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Dr. Jennifer Quaid Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Mr. Chair. vice-chairs and members of the Standing Committee on Industry and Technology, I am very pleased to be here once again, this time to talk about Bill C‑27.

I am grateful to be able to share my time with my colleague Céline Castets-Renard, who is online and who is the university research chair in responsible AI in a global context. As one of the preeminent legal experts on artificial intelligence in Canada and in the world, she is very familiar with what is happening elsewhere, particularly in the EU and the U.S. She also leads a SSHRC-funded research project on AI governance in Canada, of which I am part. The project is directed squarely at the question you are grappling with today in considering this bill, which is how to create a system that is consistent with the broad strokes of what major peer jurisdictions, such as Europe, the U.K. and the U.S., are doing while nevertheless ensuring that we remain true to our values and to the foundations of our legal and institutional environment. In short, we have to create a bill that's going to work here, and our comments are directed at that; at least, my part is. Professor Castets-Renard will speak more specifically about the details of the bill as it relates to regulating artificial intelligence.

Our joint message to you is simple. We believe firmly that Bill C-27 is an important and positive step in the process of developing solid governance to encourage and promote responsible AI. Moreover, it is vital and urgent that Canada establish a legal framework to support responsible AI governance. Ethical guidelines have their place, but they are complementary to and not a substitute for hard rules and binding enforceable norms.

Thus, our goal is to provide you with constructive feedback and recommendations to help ready the bill for enactment. To that end, we have submitted a written brief, in English and in French, that highlights the areas that we think would benefit from clarification or greater precision prior to enactment.

This does not mean that further improvements are not desirable. Indeed, we would say they are. It's only that we understand that time is of the essence, and we have to focus on what is achievable now, because delay is just not an option.

In this opening statement, we will draw your attention to a subset of what we discuss in the brief. I will briefly touch on four items before I turn it over to my colleague, Professor Castets-Renard.

First, it is important to identify who is responsible for what aspects of the development, deployment and putting on the market of AI systems. This matters for determining liability, especially of organizations and business entities. Done right, it can help enforcers gather evidence and assess facts. Done poorly, it may create structural immunity from accountability by making it impossible to find the evidence needed to prove violations of the law.

I would also add that the current conception of accountability is based on state action only, and I wonder whether we should also consider private rights of action. Those are being explored in other areas, including, I might add, in Bill C-59, which has amendments to the Competition Act.

Second, we need to use care in crafting the obligations and duties of those involved in the AI value chain. Regulations should be drafted with a view to what indicators can be used to measure and assess compliance. Especially in the context of regulatory liability and administrative sanctions, courts will look to what regulators demand of industry players as the baseline for deciding what qualifies as due diligence and what can be expected of a reasonably prudent person in the circumstances.

While proof of regulatory compliance usually falls on the business that invokes it, it is important that investigators and prosecutors be able to scrutinize claims. This requires metrics and indicators that are independently verifiable and that are based on robust research. In the context of AI, its opacity and the difficulty for outsiders to understand the capability and risks of AI systems makes it even more important that we establish norms.

Third, reporting obligations should be mandatory and not ad hoc. At present, the act contemplates the power of the AI and data commissioner to demand information. Ad hoc requests to examine compliance are insufficient. Rather, the default should be regular reporting at regular intervals, with standard information requirements. The provision of information allows regulators to gain an understanding of what is happening at the research level and at the deployment and marketing level at a pace that is incremental, even if one can say that the development of AI is exponential.

This builds institutional knowledge and capacity by enabling regulators and enforcers to distinguish between situations that require enforcement and those that do not. That seems to be the crux of the matter. Everyone wants to know when it's right to intervene and when we should let things evolve. It also allows for organic development of new regulations as new trends and developments occur.

I would be happy to talk about some examples. We don't have to reinvent the wheel here.

Finally, the enforcement and implementation of the AI act as well as the continual development of new regulations must be supported by an independent, robust institutional structure with sufficient resources.

The proposed AI data commissioner cannot accomplish this on their own. While not a perfect analogy—and I know some people here know that I'm the competition expert—I believe that the creation of an agency not unlike the Competition Bureau would be a model to consider. It's not perfect. The bureau is a good example because it combines enforcement of all types—criminal, regulatory, administrative and civil—with education, public outreach, policy development and now digital intelligence. It has a highly specialized workforce trained in the relevant disciplines it needs to draw on to discharge its mandate. It also represents Canada’s interests in multilateral fora and collaborates actively with peer jurisdictions. It matters, I think, to have that for AI.

I am now going to turn it over for the remaining time to my colleague Professor Castets-Renard.

Thank you.

Erica Ifill Journalist and Founder of Podcast, Not In My Colour, As an Individual

Good afternoon to the industry and technology committee as well as a lot of their assistants and also to whoever may be in the room.

I am here today to talk about part 3 of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Part 3 is the Artificial Intelligence and Data Act.

Firstly, there are some issues, some challenges, with this bill, especially in accordance with societal effects and public effects.

Number one, when this bill was crafted, there was very little public oversight. There were no public consultations, and there are no publicly accessible records accounting for how these meetings were conducted by the government's AI advisory council, nor which points were raised.

Public consultations are important, as they allow a variety of stakeholders to exchange and develop innovative policy that reflects the needs and concerns of affected communities. As I raised in the Globe and Mail, the lack of meaningful public consultation, especially with Black, indigenous, people of colour, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations, is echoed by AIDA's failure to acknowledge AI's characteristic of systemic bias, including racism, sexism and heteronormativity.

The second problem with AIDA is the need for proper public oversight.

The proposed artificial intelligence and data commissioner is set to be a senior public servant designated by the Minister of Innovation, Science and Industry and, therefore, is not independent of the minister and cannot make independent public-facing decisions. Moreover, at the discretion of the minister, the commissioner may be delegated the “power, duty” and “function” to administer and enforce AIDA. In other words, the commissioner is not afforded the powers to enforce AIDA in an independent manner, as their powers depend on the minister's discretion.

Number three is the human rights aspect of AIDA.

First of all, how it defines “harm” is so specific, siloed and individualized that the legislation is effectively toothless. According to this bill:

harm means

(a) physical or psychological harm to an individual;

(b) damage to an individual's property; or

(c) economic loss to an individual.

That's quite inadequate when talking about systemic harm that goes beyond the individual and affects some communities. I wrote the following in The Globe and Mail:

“While on the surface, the bill seems to include provisions for mitigating harm,” [as said by] Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa's Centre for Law, Technology and Society, “[that] language focuses [only] on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kinds of...systems.”

Racial bias is also a problem for artificial intelligence systems, especially those used in the criminal justice system, and racial bias is one of the greatest risks.

A federal study was done in 2019 in the United States that showed that Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

A study from the U.K. showed that the facial recognition technology the study tested performed the worst when recognizing Black faces, especially Black women's faces. These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology also disproportionately affects Black and indigenous protesters in many ways.

From a privacy perspective, algorithmic systems raise issues of construction, because constructing them requires data collection and processing of vast amounts of personal information, which can be highly invasive. The reidentification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk.

There are deleterious impacts or risks stemming from the use of technology concerning people's financial situations or physical and/or psychological well-being. The primary issue here is that a significant amount and type of personal information can be gathered that is used to surveil and socially sort, or profile, individuals and communities, as well as forecast and influence their behaviour. Predictive policing does this.

In conclusion, algorithmic systems can also be used in the public sector context to assess a person's ability to receive social services, such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status, geographic location, as well as other data points analyzed.

The Chair Liberal Joël Lightbound

I call the meeting to order.

Good afternoon everyone, and welcome to meeting No. 101 of the House of Commons Standing Committee on Industry and Technology.

Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I’d like to welcome our witnesses today, Mr. Jean-François Gagné, an AI strategic advisor, who will be given an opportunity to give his opening address when he joins us a little later. We also have with us Ms. Erica Ifill, a journalist and founder of the Podcast Not In My Colour, and from AlayaCare, Mr. Adrian Schauer, its founder and chief executive officer.

I want to thank you, Mr. Schauer, for making yourself available again today. I know we had some technical difficulties before, but the headset looks fine this afternoon. Thanks for being here again.

Thank you, Madam Clerk, for the help, as well.

We have, from AltaML Inc., Nicole Janssen, co-founder and chief executive officer; and from Gladstone AI, we have Jérémie Harris.

And last, we will have Jennifer Quaid, associate professor and vice-dean research, civil law section, Faculty of Law, University of Ottawa along with with Céline Castets-Renard, full law professor, Faculty of Civil Law , University of Ottawa.

As we have several witnesses, we will begin the discussion immediately. Each of you will have five minutes for an opening statement. Mr. Gagné, please begin.

Madame Ifill, the floor is yours.

December 4th, 2023 / 5:50 p.m.


See context

Senior Policy Analyst, The Dais

Joe Masoodi

Thank you for the question.

There was a question on surveillance capitalism, which is a concept that was introduced by Shoshana Zuboff. It was introduced a couple of times during the hearings. The previous question was on what we can do to try to at least mitigate the impacts of surveillance capitalism, which was really initiated, if we look back, by Google. It was Google, through its machine-learning techniques, that facilitated that process. It was the inadequate regulatory and legal regimes that were in place that allowed that to happen.

If I were to provide some key recommendations or suggestions in terms of takeaways, I would say we need robust privacy laws. We've heard that over and over again. I'd like to emphasize that again. We need to have robust privacy measures in place, specifically in areas with regard to cross-border data transfers. I think Bill C-27 could use an area that specifically identifies cross-border data transfers as an area for robust protections.

December 4th, 2023 / 5:45 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

I would not support that.

I understand the intent behind the proposal. I think it's well-intentioned, and I considered it seriously, but I think it would have adverse effects that may not be what is intended.

The reality is that we need a privacy law that protects children by default. It shouldn't be the responsibility of a parent. There are mixed harms and benefits with these technologies, and I don't believe that parents or older generations are the ones who are always the best at navigating these technologies. I've seen lots of surveys from within the Privy Council Office itself that show young people are the ones who use these technologies; 30% of teens get their news from TikTok, and a lot of older generations don't use them at all. One concern I would have is that I wouldn't feel comfortable entrusting that responsibility to all parents, but that's just my personal view.

What I would say, though, is that I do believe children should be explicitly referenced as a vulnerable population within Bill C-27. I think it's unacceptable that children and youth, in particular, have been removed from Bill C-27 and are omitted. That was a deliberate intent by the Ministry of Industry. I have an internal brief that talks about the reasons behind that, and I'd be happy to share that with you.

Sam Andrey

I see a role for a digital regulator.

Currently, there's the idea of having an AI data regulator in Bill C-27, but it's an ISED department official. This, I think, is unacceptable, especially given that the minister will have the competing roles of championing the economic benefits of AI and regulating its risks. At a minimum, they should be appointed by the GIC. Ideally, it would be a parliamentary appointment that is separate.

I think you could task the same regulator with the online harms portfolio. It could be two, but that's a lot of digital regulators. That regulator would have the power to do audits and a forum on ombudsman-type functions to support individuals. They would also have a transparency function.

December 4th, 2023 / 5:30 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

I think Canada has an opportunity to reclaim a bit of the traditional role that we like to see Canada have, which is serving as a middle power with allied states.

Several ideas have been floated around creating safe dataflow zones that map onto the security alliances that already exist, like NATO for example. We already have a commitment to mutual defence with our NATO allies. It would seem logical that we might feel comfortable sharing our data, our personal information, with these allies in a free cross-border dataflow zone. There are opportunities for Canada to certainly create a niche role when it comes to regulation and the creation of regulatory frameworks for cross-border dataflows.

I think the more appalling concern that I have is with the state of the current law. The fact is that a lot of Canadian law, and certainly the priorities of legislators right now, is to create privacy law that applies only to the private sector. I think one of the real problems we've seen—and we saw this through the pandemic as well—is that we need robust privacy and data protection laws that also apply to government. I've been really upset at the fact that the artificial intelligence and data act does not apply to government actions, which is really concerning when you think about the deployment of AI technologies, AI-fueled and AI-driven technologies such as the ArriveCAN app.

I've also been really concerned about the fact that the priorities with Bill C-27 have not focused on government. To me, it's disturbing that this effort has been led by the industry portfolio and Bill C-27 would create new regulatory instruments that would be answerable to the Minister of Industry. It's really hard to say that we're approaching privacy from a human rights or law enforcement or national security perspective when the bodies we're creating are not truly independent. Not only are they not truly independent, but they're subservient to an industry portfolio whose mandate is to grow the economy.

December 4th, 2023 / 5:30 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

I think the word mirage accurately captures the current state of affairs.

I think informed consent, which is what all Canadian privacy laws are currently based on, doesn't serve the ends that we really need data protection and privacy law in this country to serve. The reality that Bill C-27 has perpetuated this—the idea that this instrument will still work and still serve its ends even with the legitimate business exceptions, even with the rules around implied consent—really won't take us to a place where we have robust privacy and data protection law in this country.

I think you need to fundamentally shift the paradigm so that possessing, retaining, using or disclosing personal information becomes a liability, as opposed to a profitable way to run a business, which is what we have let these ad exchanges/social media companies do.

December 4th, 2023 / 5:05 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

Thanks for the question.

I think it's really important to identify the TikTok representatives who spoke as lobbyists. They're registered lobbyists, and they do lobbyist work. I think it's important to talk about how a lot of the claims they made were very disingenuous. There are easy bypasses around a lot of the safety controls for children that they vaunted.

TikTok has been caught—to respond more directly to your question—engaging in all kinds of worrying conduct with respect to user data. There is public reporting that talks about TikTok accessing physical locations of journalists who are using the app, in order to track down their sources. That's in the public domain. There is public reporting about TikTok directing user data from the United States through China despite assurances otherwise, and there's a raft of other reporting.

There's internal government reporting from Canadian government actors like the Privy Council Office's intelligence assessment secretariat that identifies all kinds of other problems around the type of data and the persistent collection of data that occurs through the app. There are also materials that I've seen from the cyber-threat intelligence unit at the Canadian Forces intelligence command at the Department of National Defence that identify a series of concerning problems around censorship and so forth.

One of the really difficult issues here is that Canadian law is very permissive when it comes to data transfers. Even if you look at the proposed privacy legislation, Bill C-27, there's essentially nothing that would stop data transfers outside of Canada. Certainly, the privacy notice for TikTok states that by using TikTok you accept the terms and conditions, which are that the subsidiary TikTok can share that data with its corporate body, ByteDance, and Canadian law lets that happen. Even the proposed Canadian law would let that happen. Proposed section 19 and proposed subsection 11(1) of Bill C-27 specifically permit this type of data transfer.

Canadian data transfer law is essentially premised on the idea that organizations can send data to other organizations if they deem the protections are sufficient or adequate, as they would be in Canada. This approach is really different from the European approach, which is jurisdictionally grounded—country to country. You can't transfer data outside of a country unless you're satisfied that the protections would be essentially equivalent. There's a really big difference in Canadian data transfer law compared to the European data transfer law. Once data gets out of Canada, there's really no telling what happens to it. They don't take basic safeguards like you do.

For this meeting, I asked the chief information officer of the House of Commons where the data was being localized and processed for Zoom, which I would be using, and I was told—and I was very happy and impressed by this—that the data would be processed in Canada. Your in camera meetings are even more secure, so good on you. It's not for the users of TikTok.

December 4th, 2023 / 5 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

When you look at the resources that are available, they're not meeting the demand. In 2018, when Public Safety went through a cybersecurity update and threw a lot of money at the RCMP to get more serious about online cybercrime, that was when the initial announcement was made about NC3, the national cybercrime coordination centre.

I wrote about this three years ago and said that we were already waiting a long time to get this rollout happening, but fast-forward three years, and that reporting system is two years behind schedule. If you visit the website right now, it will tell you that the system is still in beta testing and that it accepts only 25 cybercrime complaints a day for the entire country, which is really low. In a series of access to information requests regarding the number of resources that were devoted in terms of personnel, I discovered that there are several provinces that don't have any cybercrime investigators, which is a really shocking statistic. Here in B.C., the third-largest province in the country, we have only four full-time people on the cybercrime team.

I believe these tools need to be rolled out more rapidly. There should be more transparency around them, and legislation should be crafted around what we're seeing, because these tools allow us to understand what types of harms are being perpetuated. There are all kinds of analyses you can run based on the reporting data that comes in, and NC3 shows that more than half the reports that go to NC3 are about ransomware. It's really interesting that Canadian legislation ignores ransomware, which is the biggest cybercrime threat we're facing.

One thing that's interesting to take into consideration when we talk about Bill C-27 is also Bill C-26, which would regulate things like ransomware for critical industries.

Sam Andrey

I would add that I agree with the premise of your question, that we are falling behind in some respects, though I think we have, as Dr. Laidlaw put it, second-mover advantage to learn from some of the lessons and some of the flawed legislation or approaches that have been passed in allied jurisdictions.

On AI regulations specifically, I think Canada is moving quickly as relates to the rest of the world, which I think is a good thing, but, yes, I would say we need to move more quickly, and Bill C-27 is part of that.

December 4th, 2023 / 4:55 p.m.


See context

Assistant Professor, Thompson Rivers University, As an Individual

Matt Malone

I'm happy to jump in.

I believe one of the problems that Canada faces is that we're not a large power and we're stuck between approaches to privacy and data protection among large powers that are diametrically opposed. Failing to act soon will lock us into one of those approaches. The Europeans have adopted a more restrictive approach. Ever since the drafting, passage and implementation of the GDPR, we've seen an array of restrictive measures, which are leading to things like data localization, stricter requirements around data transfers, and a robust equivalency test.

The United States is taking a diametrically opposed approach with its regulatory framework, in which it has not updated its privacy legislation, and there's no uniform privacy legislation in the United States. At the same time the U.S. is doing that, it's exporting, through trade treaties and governance bodies worldwide, a view of data governance and privacy that locks in what Canada can do.

Discussions about data transfers have to take into consideration the fact that the Canada-United States-Mexico agreement has a prohibition on restricting cross-border dataflows, and it has other restrictions that are relevant as well. The CPTPP has similar restrictions.

One of the problems with Canada's failure to act is that we're getting locked into one of these approaches. Unfortunately, we show no urgency around acting. The Privacy Act, which regulates government conduct, hasn't been updated in over 40 years. PIPEDA is well in need of a meaningful update, not just tweaks. I personally don't believe that Bill C-27 is the appropriate way to do that.

I'll let the other panellists chime in.

Sam Andrey

In our annual survey of online harms, we found that Canadians have very low trust in social media platforms, both to keep their data secure and to act in the best interests of the public, ranking well below other technology companies and other organizations of a variety of types. In fact, trust in TikTok, specifically, fell significantly last year, to last place. Only 7% of Canadians say that they have a high degree of trust in the platform, despite its rapid growth with nearly 30% of Canadians using the platform.

TikTok has been the subject of particular scrutiny, given its corporate structure. As was pointed out earlier in the committee, prior to 2019, TikTok's privacy policy was transparent in stating that it shares people's information “with any member or affiliate of [its] group” in China. This line was later updated to remove that specific location reference, but the sharing provision remains. That same provision is also in the privacy policy of WeChat, which is used by 6% of Canadians. As our colleague Mr. Malone has pointed out, it is true of many others.

Canada's current privacy law does not prohibit companies from transferring personal data to third parties or outside of Canada in this way. We think that there is an opportunity before parliamentarians to respond to these risks through the proposed Bill C-27. However, as it currently stands, Bill C-27 would, in some ways, allow for even easier data sharing to take place between corporate actors by eroding what limited consent provisions do exist. Proposed section 18 of the CPPA creates new, large carve-outs for companies to share data without either knowledge or consent through the inclusion of language like “business activities” and “legitimate interest”.

We don't think that it should be the exclusive responsibility of Canadians to educate and protect themselves online. We would propose that there be more precise requirements added to the bill to ensure that equivalent levels of protection are provided for data when it's transferred outside of Canada. We would also suggest requirements that near the EU's GDPR, to obtain explicit informed consent from Canadians for the transfer of their personal data to jurisdictions that do not provide equivalent levels of protection, providing information about both the specific countries involved and the specific data. While a lot of people have pointed out to this committee that there's consent fatigue, we, at least, think that transparency with respect to data transferred to countries outside of Canada is important.

We'll end by saying that Canadians overwhelmingly support such a change. A representative survey that we conducted found that 86% of Canadians support requirements to keep Canadians' data in Canada, with only 3% disagreeing.

Thanks for your time. We look forward to your questions.

Matt Malone Assistant Professor, Thompson Rivers University, As an Individual

Thank you, Mr. Chair.

My name is Matt Malone, and I am an assistant professor at Thompson Rivers University faculty of law in Kamloops. Today I am attending the meeting in a personal capacity.

I am going to use my opening remarks to share my thoughts using a case study, which is specifically regarding the selective ban of TikTok on government-issued devices that was announced in February 2023. As the committee might recall, that selective ban was accompanied by a statement about concerns relating to privacy and security.

These stated concerns do not explain several things. First of all, they do not explain why the government waited five months to act on the underlying intelligence brief that warned about TikTok's practices. Second, they do not explain why the government continues to buy advertising on TikTok itself. Finally, they do not explain why the government has ignored that TikTok is not the only app that retains user data in foreign jurisdictions and potentially shares it with foreign regimes.

As the Treasury Board Secretariat confirmed to me a couple of days before this hearing, none of the following apps are banned from download and use on government-issued devices: the Russian-affiliated VKontakte social media app, the Russian-affiliated Yandex app, and the Russian-affiliated Mail.ru app, as well as other social media apps, like Facebook, Instagram, Tinder, Snapchat, Bumble, Grindr, Truth Social, Gab and Discord, which was implicated in the 2022-23 Pentagon leaks and which Dr. Laidlaw noted does not have child safety protection measures in place.

As I recommended in a recent article—and as I'll take this opportunity to recommend again now to the President of the Treasury Board—I believe that a better privacy and security baseline would see the government ban all social media apps on government-issued devices, unless there is a strong business justification otherwise. It's crazy to me that the apps I just listed are not banned on government-issued devices. I also believe that the government should stop buying ads on all social media services.

Even with such bans in place, it is worth noting that federal privacy law places no meaningful constraints on data transfers to jurisdictions like Russia and China. An internal government brief that I obtained through the Access to Information Act notes that Bill C-27 and the proposed privacy legislation currently before Parliament avoided putting into that bill any new or European-style restrictions on the transfer of personal information across borders specifically out of deference to commercial interests. It's very telling that the privacy bill before Parliament is being stewarded by the industry portfolio in cabinet, not a portfolio in human rights, public safety or national security.

Like many social media apps, TikTok does deserve opprobrium for its privacy violations, data harvesting and narrative control practices, and for granting access to data despite assurances otherwise. Like other social media apps, it is a vector for online harm visited on young people. Its business model is focused on privacy-invasive, targeted advertising that exacerbates the mental health crisis affecting young people. The app's safety features for children are all easy to bypass.

Through various access to information requests, I have seen several internal briefings where Canadian government actors repeatedly identified these problems. I'm happy to talk about these.

However, it's important to note that the real culprit here is Canadian law, because it does not stop these practices for TikTok or any other social media service. As TikTok lobbyists appearing before this committee repeatedly underscored, TikTok's handling of Canadians' user data is governed by Canadian law. That's the problem. Canada's privacy laws fail to respect the rights and interests of individuals and collectives in the digital age. Enforcement is basically non-existent. At the federal level, the Office of the Privacy Commissioner has become skilled at making fanfare announcements about its investigations, but it is very slow at investigating, as I learned in my own complaint about the ArriveCAN app, which was ultimately sustained.

Law enforcement has struggled to adapt to the new digital landscape as well. The RCMP's national cybercrime and fraud reporting system, which this committee recently heard about in glowing terms as part of this study, is actually two years behind schedule and still in beta testing. Its website says that it accepts only 25 complaints per day nationwide.

To give members another illustrative example, as I learned in a recent access to information request, the RCMP's cybercrime investigative team has only eight employees in all of Alberta. Here in British Columbia, where there was a recent tragic sextortion case involving a young person that was carried out over social media, there are only four employees on the cybercrime investigation team for the entire province. There are none in Saskatchewan, Manitoba or any of the maritime provinces.

With privacy and data protection legislation that deprives citizens of meaningful protection, government funding priorities deeply out of alignment with stated values and actual needs, and gaps in law and policy that the government shows no urgency to fill, the federal government's policies and practices pose significant challenges to addressing the real types of harms that we are seeing perpetuated these days on social media.

To wrap up, I want to thank the committee for its unexpected invitation.

I also want to give a particular shout-out of appreciation to the MP for Mississauga—Erin Mills for her leadership on this very important issue. I've been very impressed with her work on this file.

I look forward to answering, to the best of my abilities, any questions that the committee members might have.

Thanks.

Matthew Green NDP Hamilton Centre, ON

Thank you very much, Mr. Chair.

I want to pick up on some of this, particularly around Bill C-27. I myself think that this portion of the bill would have been better dealt with here under an ethical framework rather than an industry one.

Dr. Laidlaw, can you maybe talk about the ethics of AI and why, from a legal framework, those considerations in terms of the legitimacy of democracy and the ways in which AI is undermining society would probably be best situated as a carve-out, as you just suggested?