Mr. Speaker, with regard to part (a), Immigration, Refugees and Citizenship Canada, IRCC, uses advanced analytics, AA; artificial intelligence, AI; and other non-AI-based automated decision support systems to identify routine applications for streamlined processing and make positive decisions on these applications, as well as to perform other functions, such as the sorting of applications based on common characteristics and flagging potential risk factors that may then be investigated by an officer.
This enables IRCC to automate some processing steps for routine applications. By leveraging technology, IRCC is able to direct officer resources toward more complex applications and increase the efficiency of our processing.
These systems do not use opaque AI, do not automatically learn or adjust on their own and are not used to refuse any applications, recommend refusing applications or deny entry to Canada. IRCC does not use any external generative AI tools, such as ChatGPT, in support of decision-making on client applications. IRCC officers remain central to immigration processing and continue to exercise their authority and discretion in decision-making.
With regard to part (b), IRCC is aware that the use of AI in the processing of applications raises concerns related to bias, transparency, privacy, accuracy and reliability.
At this time, none of IRCC’s automated decision support tools, including those developed with AA and AI, can refuse an application or recommend a refusal to an officer. All final decisions to refuse applications are made by officers after an individualized assessment of the file. Officers are trained on IRCC’s automated decision support systems to ensure they understand that the absence of an automated approval does not constitute a recommendation to refuse an application.
To address AI concerns or issues, IRCC follows the Treasury Board of Canada Secretariat, TBS, directives and conducts algorithmic impact assessments, AIAs, for all relevant automated processes and tools that play a role in administrative decision-making, whether these systems use AI or not. The AIA is a Government of Canada governance process intended to assist in assessing the risks and mitigating the potential negative impacts of automated decision-making systems.
The department has developed detailed guidance, which includes a policy playbook on automated support for decision-making, to help consider how these technologies can be used responsibly, effectively and efficiently. IRCC has also established an internal governance framework to ensure that AI support tools go through a rigorous review and approval process, which includes coordination with legal experts, policy experts and privacy experts.
Furthermore, IRCC has endorsed its comprehensive AI strategy, which is being finalized for publication in the coming months. This strategy outlines the department’s responsible approach to AI adoption and places a significant emphasis on implementing strong AI governance while integrating new policies, guidelines and best practices.
With regard to part (c), IRCC uses AI beyond application processing. IRCC is piloting AI for fraud detection, for triaging client emails to provide faster replies and for aiding research and policy development. IRCC also uses AI-powered computer vision to help validate identities, to crop passport photos and to prevent cheating on the online citizenship test. IRCC uses natural language processing to categorize client enquiries, freeing officers to focus on client support; this technology also powers QUAID, a chatbot that handles general enquiries with pre-set responses. Lastly, IRCC has opened up public generative AI tools, such as ChatGPT and Copilot, for employees to use for personal productivity tasks, such as drafting emails, drafting presentations and translating text, and has provided employees with guidance aligned with the TBS policy on the use of generative AI for personal productivity.
In alignment with the Privacy Act and the Access to Information Act, IRCC has drafted internal guidance on the use of generative artificial intelligence in application processing. Employees have been clearly informed that entering personal, sensitive, classified or protected information into external AI tools is non-compliant with the Privacy Act and IRCC’s approach. Additionally, IRCC is educating employees on and following TBS’s newly published “Generative AI in your daily work” directive, found at https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/generative-ai-your-daily-work.html, which describes how generative AI should and should not be used.