Thank you, Chair and members of the committee.
My name is Carys Craig. I'm a full professor at Osgoode Hall Law School at York University, where my teaching and research focus on copyright, technology and the public interest. I've published widely on the AI challenge to copyright law, so I'm grateful for the opportunity to share my views with you here today.
In my short time, I want to make three points about copyright protection that I think are relevant to this committee's work. First, I think it's vital to distinguish copyright law from AI regulation. Second, copyright law must not obstruct AI research, development and training in Canada. Third, Canada must continue to refuse copyright protection to AI-generated works.
First, I think there's obviously an understandable concern about the effects of generative AI on creative workers, our cultural industries and our information ecosystem, but I'm going to urge the committee to be cautious about including expanded copyright protections as part of an AI regulatory package to address these concerns. Copyright exists to encourage the creation and dissemination of works, to reward authors and to foster a vibrant public domain. It is technology-neutral. It is not designed to govern technology risks or to restrain technological developments, and it should not be pressed into that service now.
The real risks of AI—from bias and misinformation to deepfakes and privacy violations to labour displacement and corporate consolidation—demand dedicated, fit-for-purpose regulatory responses. Expanding copyright control risks distorting foundational copyright principles while failing to address, or indeed worsening, the harms themselves. This is what I've called running into the AI copyright trap. It's mistakenly turning to copyright as a catch-all—or, for some, a windfall—in response to the threats posed by generative AI.
My second point concerns AI training. Some have called for compulsory licensing for copyrighted works that are used in training data, backstopped by owners' rights to opt in or out. I understand the impulse, but the consequences of this approach would, I think, be deeply harmful.
Under the current law, first, it's not clear that training AI on copyright works even implicates the rights of copyright owners. When a system is trained, it translates expressive content into statistical patterns. It turns the meaning into math. This is a technical, intermediate, non-public use to extract information that copyright does not protect. Even if copyright extends to this data extraction and analysis process, most text and data mining is likely lawful without permission or licence under Canada's fair dealing provisions, as interpreted by the Supreme Court of Canada. If the committee is interested in supporting AI research and innovation in Canada, the real problem is legal uncertainty, not illegality.
Requiring licences for AI training would create a pay-to-play system regulated by private actors. The wealthiest corporations could afford access to the vast data troves required, but academic researchers, non-profits, start-ups and SMEs would be shut out, and this would concentrate AI development even further in the hands of big-tech incumbents, which I think is what we're trying to prevent. It would also incentivize secrecy, reduce the diversity of AI systems, exacerbate bias and be practically impossible to administer effectively, as the EU's implementation efforts already reveal.
If copyright reform is required, it should be to confirm that text and data mining for informational analysis does not constitute infringement. This was the original INDU recommendation in the 2019 Copyright Act review, and it remains, I think, the best way to support a healthy AI ecosystem in Canada. It would most likely align with emerging U.S. fair use jurisprudence, but it would also give us the significant advantage of legal clarity. I think Canada's focus here should be on good data governance, not propping up private control of data in a way that's going to send AI development offshore while Canadian creators gain little, if anything.
My third and final point concerns AI outputs. The most effective thing copyright can do to protect human creators is to maintain the position that copyright requires a human author, while AI-generated content is unprotected in the public domain. That is the correct result. It protects the role of human creators in the creative industries, whereas granting rights in AI outputs would be an unnecessary, misplaced incentive that could further chill human creativity.
In closing, I just want to emphasize that copyright law, at its best, serves human creativity and the public interest. It exists because we value what human beings create, share and learn from each other. We cannot allow it to become a tool for controlling technology, a bargaining chip for corporate licensing deals or a vehicle for granting monopoly rights over information or machine-generated content. I urge the committee to keep copyright's principled limits and its practical consequences in view. There are many more apt solutions to the risks posed by AI systems.
Thank you.
