Thank you for your kind invitation to appear before the committee and discuss Bill C-27.
I'm a professor of marketing at the University of Toronto, where I hold the Rotman chair in artificial intelligence and health care. My research focuses on the economics of information technology, including several papers on privacy regulation and on artificial intelligence.
Canada is a leader in AI research. Many of the core technologies underlying the recent excitement about AI were developed right here at Canadian universities. At the same time, our productivity is lagging. My research has shown that AI and related data-focused tools are particularly promising technologies for accelerating innovation, productivity and economic growth. In my view, a big worry for the Canadian economy going forward is that we will not have enough AI, and so our standard of living, including our ability to fund health care and education, will stagnate. It would be a shame if Canada's research success did not lead to applications that increase Canadian prosperity.
This act is a careful attempt to ensure that Canadians benefit from AI and related data-focused technologies while protecting privacy and reducing the potential for these technologies to harm individuals.
Next, I'll provide specific comments on AI regulation in part 3 and on privacy regulation in part 1. I have five specific comments on the artificial intelligence and data act.
First, the act correctly recognizes that there is always a human, or a team of humans, behind decisions enabled by AI. In part 1 of the AI act, proposed subsection 5(2) is commendable for noting that “a person is responsible for an artificial intelligence system”. Proposed sections 7 through 9 make these responsibilities clear. In my experience, such clarity about the role of humans in AI systems is both unusual and welcome.
Second, the act constructively defines explainability and transparency in part 1 of the AI act, proposed sections 11 and 12. By making clear how and why a high-impact system is being used, rather than focusing on the inner workings of the algorithm, these sections will provide useful information without forcing potentially misleading oversimplifications of how the algorithms work.
Third, while the details of the act itself implicitly recognize the role of AI in Canadian prosperity, the preamble to the AI and data act does not acknowledge that technological progress is fundamental to our prosperity; instead, it focuses only on regulation and harms.
Fourth, there are two sections of the act that might create incentives not to adopt beneficial AI, because liability is not explicitly benchmarked against some human performance level of bias and safety.
In part 1 of the AI act, proposed subsection 5(1) examines bias. The bias definition suggests that any bias would be prohibited. AI systems will almost surely be imperfect, because they're likely to be trained on imperfect and biased human decisions. Therefore, this definition of biased output incentivizes the continued use of biased human decision-making processes over potentially less biased but auditable AI-supported decisions.
In part 2 of the AI act, proposed paragraph 39(a) examines physical and psychological harm or damage to property. As with bias, the benchmark seems to be perfection. For example, autonomous vehicles will almost surely cause serious physical harm and substantial property damage, because vehicles are dangerous. If the autonomous vehicle system, however, generates much less harm than the current human driving systems, then it would be beneficial to enable its adoption.
The fifth comment on the AI and data act is about the definition of an AI system in proposed section 2 of the AI act: “the use of a genetic algorithm, a neural network, machine learning or other technique in order to generate content or make decisions, recommendations, or predictions.” This definition is overly broad. It includes regression analysis and could even be interpreted to include the calculation of averages. For example, if an employer receives thousands of applications for a job, calculates the average score on some standardized test and uses that score to autonomously select above-average applications to be sent to a human resources worker for further examination, that scoring rule would be an AI system, as I understand it, under the current definition.
I have two specific comments about the consumer privacy protection act.
First, the purpose of the act in proposed section 5 clearly lays out the often competing goals of protecting privacy while facilitating economic activity. While I do understand the wishful thinking that there would be no trade-offs between privacy and innovation, research has consistently documented such trade-offs. Privacy is not free, but it is valuable. Individuals care about their privacy. In protecting privacy, this act will require companies to rely on legal expertise for interpretation. Such expertise is readily available to large, established companies, but costly for small businesses and start-ups. In the commissioner's implementation, some direction to reduce any unnecessary burden on small businesses and start-ups would be constructive.
Second, proposed subsection 15(5) makes the cost of an audit payable by the person audited, even if the Privacy Commissioner does not bring a successful case. This creates a large burden on small and new businesses that are audited unnecessarily.
To conclude, while I have specific suggestions to clarify the language of the act, in my view Bill C-27 is a careful attempt to ensure that Canadians benefit from AI and related data-focused technologies while protecting privacy and reducing the potential of these technologies to harm individuals.
Thank you for this opportunity to discuss my research. I look forward to hearing your questions.