Thanks for the question.
Technically speaking, the bill itself doesn't address it. What we do have now, as you know, is essentially a memo from the minister highlighting that they've begun to identify what they intend to include among the high-impact systems. As we just heard from Professor Krishnamurthy, the approach is to seek to regulate those and establish a number of regulatory frameworks around those provisions.
There is generally a consensus that it's appropriate and necessary to have rule sets, particularly where there are concerns around bias coming out of AI systems. Think of the use of AI in labour markets for hiring. Think about it in the health sector, in the financial sector and in law enforcement. There are a lot of places where we can easily identify potential risks, potential harms and the like. That's where much of the discussion has been.
Oddly, at least in terms of the list that has been provided, search engine algorithms and social media algorithms are included here as well. Unquestionably, we need algorithmic transparency with respect to these companies. We need to identify ways to deal with the potential harms coming out of this, such as anti-competitive behaviour in search results, which is clearly an issue that is raising some significant concerns. However, I find very puzzling the notion that we would treat that as a high-impact system in the same way we would treat law enforcement's use of AI or health uses. I'm not aware that anyone else anywhere in the world has seen fit to do that as they work through some of these questions.