Thank you.
I think this will be an interesting perspective side-by-side with Erica's.
I'm the founder and CEO of AlayaCare, a home care software company. We deliver our solutions to both private sector providers and public sector health authorities.
In the machine learning domain, we deliver all sorts of risk models. One of the things you can imagine us ultimately building up to is a model that, on the basis of an assessment and patient data, will help determine, at a population health level, where the health system's resources get optimally allocated. In that use case, it's definitely a high-impact system.
I really like two things about the framework in this bill. One is that you're looking to adhere to international standards. As a developer of software looking to generate value in our society, we can't have a thousand fiefdoms, so let me start with a thank-you for that. The second thing I really appreciate is your segmentation of the actors into the people who generate the AI models, those who develop them into useful products, and those who operate them in public. I think that's a very useful framework.
On the question of bias, I think this bill raises some interesting issues, and we have to be very careful about legislating against bias in the right way. In developing the model, really the only difference between a linear regression—think of what you might do in Excel—and an AI model is the black box aspect. Yes, if you're trying to figure out how to allocate health system resources, you probably don't want to build elements that could be bigoted into your model, because that's not how a society wants to be allocating health resources. With a machine learning model, though, you're going to feed a bunch of data into a black box and out comes a prediction or an optimization. Then you can imagine all sorts of biases creeping in. The model might find, for example, that people of a certain identity, say left-handed people, can actually get by with a bit less home care and still stay out of the hospital. That wouldn't be programmed into the algorithm, but it could certainly be an output of the algorithm.
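To make that concrete, here is a minimal sketch using entirely hypothetical data, showing one way a disparity that nobody programmed in can come out of a model anyway, in this case through a proxy feature that happens to track group membership. This is purely illustrative and is not a description of our own models.

```python
# Minimal sketch (hypothetical data): bias can emerge as a model *output*
# even when no protected attribute is ever used as an input feature.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute (never given to the model).
group = rng.integers(0, 2, size=n)            # 0 or 1

# A proxy feature that happens to correlate with group membership,
# e.g. a region code encoded numerically.
region = group + rng.normal(0, 0.3, size=n)

# "True" care need is identical across both groups...
need = 10 + rng.normal(0, 1, size=n)

# ...but the historical allocations used as training labels were lower for group 1.
historical_hours = need - 2.0 * group + rng.normal(0, 0.5, size=n)

# Fit an ordinary least-squares model on the proxy feature only.
X = np.column_stack([np.ones(n), region])
coef, *_ = np.linalg.lstsq(X, historical_hours, rcond=None)
predicted = X @ coef

# The model reproduces the historical disparity without ever "seeing" group.
for g in (0, 1):
    print(f"group {g}: mean predicted hours = {predicted[group == g].mean():.2f}")
```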
I think what we need to be careful of is assigning the right accountability to the right actor in the framework. The model developers need to demonstrate a degree of care in the selection of the training data. To go back to the previous example—and I can say this with some certainty—the reason that the facial recognition model doesn't perform as well for Indigenous communities is that it just wasn't fed enough training data from that particular group. When you're developing the AI model, you need to take care, and demonstrate that you've taken care, to build a representative training set that isn't biased.
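Along the same lines, here is a minimal sketch of the kind of basic representativeness check a model developer might run on a training set. The group labels, population shares, and tolerance threshold are all hypothetical; real assessments would go well beyond a count like this.

```python
# Minimal sketch (hypothetical labels and thresholds): flag groups whose share
# of the training data falls well below their share of the reference population.
from collections import Counter

def representation_report(group_labels, population_shares, tolerance=0.5):
    """Compare each group's share of the training data against its share of
    the reference population and flag large shortfalls."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total
        report[group] = {
            "train_share": round(train_share, 3),
            "population_share": pop_share,
            "under_represented": train_share < tolerance * pop_share,
        }
    return report

# Hypothetical example: group "C" makes up 5% of the population but only 1%
# of the training data, so it gets flagged as under-represented.
labels = ["A"] * 700 + ["B"] * 290 + ["C"] * 10
print(representation_report(labels, {"A": 0.60, "B": 0.35, "C": 0.05}))
```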
When you develop an algorithm and put it on the market, I think providing as much transparency as possible to the people who will use it is definitely something we should endeavour to do. Then, in the use of that algorithm and of its output, provided you have a representative training set and the right caveats, I think we have to be careful not to bring inappropriate accountability back to the model developers. That's my concern. Otherwise, you're going to be pitting usefulness against potential frameworks for bias.
What I think we have to be careful about with this legislation is not to disproportionately shift societal concerns about how resources should be allocated—you name the use case—onto the tool developer; those concerns should sit appropriately with the user of the tool.
That's my perspective on the bill.