Madam Speaker, in a rapidly evolving technological environment, it is more important than ever that we ensure children are protected. The report tabled by the Standing Committee on Access to Information, Privacy and Ethics, entitled “Facial Recognition Technology and the Growing Power of Artificial Intelligence”, looks at the benefits and risks of facial recognition and its use in specific contexts, such as law enforcement, and explores AI governance issues.
It is important that we study this technology cautiously. There are many benefits that will come from this type of innovation, but we must make sure it is used in a responsible way that protects the rights of all Canadians and, I would add, especially children. Throughout my time as a member of the industry committee, I have championed the inclusion of the best interests of the child in amendments to Bill C-27, the Digital Charter Implementation Act, which also includes the government's proposed legislation on artificial intelligence.
Nowhere in this bill was the term “minor” defined. The Liberals rushed to get the bill to committee and failed to include separate protections for children's privacy that would have demonstrated their commitment to putting children first. We all know stories about the damage social media platforms and AI have already done to our children and youth. Conservatives will fight for stronger privacy protections for children while striking a balance that still allows innovation, so this technology is used appropriately.
In addition to inserting the best interests of the child, Conservatives have also pushed to insert a children's code into Bill C-27, modelled after the U.K. Children's Code. This amendment would empower the Governor in Council to introduce, through regulation, a code of practice for organizations, including businesses, to follow with respect to online services related to children's online activity.
The U.K. Children's Code has become an international standard for jurisdictions around the world in creating legislation, yet the Liberals failed to include it when drafting legislation that pertains to children's privacy. Many stakeholders and witnesses emphasized the need for a children's code to be included in the bill, but the government did not meet with any of these stakeholders before tabling it. Children must be put first when it comes to creating legislation around facial recognition technology and artificial intelligence.
This was outlined in the report tabled by the ethics committee, in which the Human Rights Commission indicated that the legal framework for police use of facial recognition technology should take a human rights-based approach that integrates protections for children and youth. This has indeed come up with respect to the recommendations in the report, and I would note it is actually the Conservatives fighting against the New Democrats and the Liberals to enshrine these very important protections for children and uphold their right to privacy.
These types of amendments to bills demand a holistic approach to a child's development, ensuring their rights cannot be overridden by the commercial interests of a company. However, the potential benefits of facial recognition technology and AI are substantial. The report outlined that these technologies can assist law enforcement in locating missing children and combatting serious crimes. As Daniel Therrien, former privacy commissioner of Canada, pointed out, facial recognition technology can serve “compelling state purposes”, including safeguarding our communities and ensuring public safety. It can also be a powerful tool in urgent situations, identifying individuals who pose threats or finding those who are lost or in danger.
However, these advantages must be weighed against the significant risks that cannot be overlooked. The same technologies that can find missing children also risk infringing upon their privacy and civil liberties.
Kristen Thomasen, a law professor at the University of British Columbia, noted that, while facial recognition technology can be touted as a protective measure for marginalized groups, “the erosion of privacy as a social good” ultimately harms everyone, especially “women and children”.
As we enhance surveillance capabilities, we risk entrenching an environment of constant observation that stifles individual freedoms. Moreover, as we consider the integration of AI into the lives of children, we must recognize the profound potential for manipulation and deception.
By their very nature, children are often at a distinct disadvantage when navigating AI systems. Their cognitive and emotional development leaves them particularly vulnerable to influences that they might not fully understand. AI tools, including AI companions, smart toys and even educational applications, can unwittingly lead children to disclose sensitive or personal information. Such disclosures can expose them to risks of exploitation, harm and even predatory behaviours by adults. Children may not grasp the implications of sharing personal information, and AI systems designed to learn from interactions can inadvertently manipulate their responses or choices, leading to harmful outcomes.
For example, in a recent tragedy out of the U.S., a 14-year-old boy, Sewell Setzer, died by suicide after speaking with a chatbot on Character.AI. His mother is now suing the company. She wrote that AI can “trick customers into handing over their most private thoughts and feelings.”
The implications of deepfake technology further amplify these concerns. Deepfakes are highly convincing but entirely fabricated images or videos that can place children in situations they never experienced. Such manipulations can depict minors in inappropriate contexts or create false narratives that damage their reputation and emotional well-being.
As technology becomes more accessible, children may find themselves targeted by malicious actors who use these tools to exploit their innocence. To combat these dangers, it is crucial that we act swiftly and decisively to develop comprehensive policies and laws that prioritize the protection of children over commercial interests while still fostering an environment where innovation can take place.
A legislative framework should clearly delineate the appropriate contexts in which facial recognition technology and AI can be employed for legitimate purposes while firmly prohibiting any uses that could infringe upon the rights of children and other vulnerable populations. This is why I want to re-emphasize the importance of including a children's code when regulating facial recognition technology and artificial intelligence.
At industry committee meeting 99, on November 28, 2023, Elizabeth Denham, chief strategy officer of the Information Accountability Foundation, shared her views on Bill C-27. During her five years as the U.K. Information Commissioner, she oversaw the creation of the U.K. Children's Code, and the design of that code has influenced laws and guidance around the world.
The code assists organizations in creating digital services that cater, first and foremost, to children's needs. It is also important to note that, when we discuss a children's code, we should take into account the fact that children are biologically and psychologically different and distinct from adults.
Protecting children in the digital world means allowing them to be children in that world, with appropriate protections for their safety and their reputations, both today and tomorrow, when they enter adulthood. Numerous stakeholder groups, such as the Centre for Digital Rights, and witnesses, such as the former U.K. privacy commissioner, have advocated for a comprehensive code of practice to be created when it comes to regulations and laws related to children's privacy.
More specifically, a children's code would be developed through a consultation process that, at minimum, includes the Privacy Commissioner, parental rights groups and children. It would be developed with the best interests of children placed above commercial interests in the same space. A children's code would ensure that standards are included addressing the following: data protection impact assessments, transparency, the detrimental use of data, default settings, data minimization, data sharing, geolocation, parental controls, profiling, nudge techniques, connected toys and other devices, and online tools, to name a few.
In conclusion, as we embrace the transformative potential of facial recognition technology and artificial intelligence, we must remain vigilant in prioritizing our children's best interests. The balance between harnessing innovation and safeguarding rights is delicate, but it is a responsibility we cannot afford to neglect. On the Conservative side, as these bills come before parliamentary committees, we want, first and foremost, to see children's interests placed above commercial interests in all cases.