Well, be careful of the contrast between algorithms very broadly and learning models narrowly. There is the open-source work they are doing, as in the Facebook case, and how that locks you into needing their tools. So those are open, or sort of open, but the algorithms that manipulate our children or do the other forms of biasing are long-standing and have been around since the beginning of the surveillance capitalism model some 20 years ago.
I think AIDA's job is a broad one, and LLMs are a subset of that. Again, you received notice, and it was mentioned in previous testimony, that First Nations haven't been consulted on this and are going to contest it in the courts, and there are many other aspects of civil society to consider. This is complex, multi-faceted stuff, and the consequences are high. There are incompatibilities with what certain provinces are doing and questions of who trumps whom, and it looks as though the federal legislation trumps the provincial. This is a place where you have to get it right in a complex zone.
So, yes, LLMs are tricky, and Canada's approach to this, which I commented on, goes beyond AIDA. You cannot think of this stuff independently of computing power and sovereign infrastructure, and of how we're going to approach those properly to be a sovereign, safe and prosperous country. If you're in for a penny, you're in for a pound.