She's an expert on Bill C-27.
Let me start by saying that I think, Mr. Schaan, they are linked. They're linked in that one requires the other. This is important, just so you understand that, because of what schedule 2 says.
Perhaps I can enlighten the Liberal members who aren't aware of what schedule 2 says. Schedule 2 allows the government to moderate the content Canadians can see online, and that's why these two are linked.
Let me quote directly from the amendment to schedule 2:
The use of an artificial intelligence system in
(a) moderating content that is found on an online communications platform, including a search engine or social media service; or
(b) prioritizing the presentation of such content.
To be clear, the government has given itself the ability, through this provision, which is linked to schedule 1 in the numbering, to regulate the design, function, presentation and use of AI systems on social media platforms as it relates to what content the government wants prioritized and moderated on social media platforms.
The minister's submission to the committee outlined that the purpose of the provision is to tackle bias in AI. All AI systems, by the way, have biases. The powers the regulation provides to ISED will allow it to go well beyond simply addressing bias in AI systems. ISED has already confirmed this.
Speaking at the business leaders' breakfast, hosted by McCarthy Tétrault advisers at the TD Bank tower in Toronto on November 7, 2023, Simon Kennedy, the deputy minister of ISED, told industry groups that the purpose of this provision in the minister's amendments to Bill C-27 is to tackle online misinformation. This could be accomplished through the minister's amendments to the AIDA, which are still very vague and provide ISED with an incredible amount of power, including, as Barry Sookman argued at this committee, the legal authority to moderate the online content available to Canadians. Importantly, the provisions of the AIDA with regard to content moderation, as they relate to high-impact AI systems, have very few safeguards and are incredibly vague.
As Barry Sookman highlighted in his written submission to the committee, the provisions outlined in Bill C-27 will extend to “AI systems that filter, rank, or recommend content on platforms such as social media, search engines, or any digital service that curates or moderates”—