That's one of the burning questions we have in the digital service space writ large, not just in Canada, but frankly around the world. It is a question of values. Different countries will automate different things according to their values framework. We want to make sure that Canada automates some of its services based on our values framework. It is a continuous conversation.
Currently, the directives we're putting in place look at certain elements: for example, making sure that we don't have a black box making a decision on behalf of a human. We also know that the decision patterns of these algorithms change over time, and that matters, because how do we attest that the Government of Canada remains responsible for a service if an algorithm is delivering it?
We also have directives in there where, according to the level of severity, you may have to conduct an internal peer review of the automation you're working on. That's something we're considering as we put these directives in place. It's all part of the algorithmic impact assessment tool that we talked about previously, which we've developed collaboratively with partners around the world.
Those are some of the examples.
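To make the severity idea concrete, here is a minimal sketch of how an impact assessment score might gate the required safeguards. The thresholds, level names, and safeguard ladder below are hypothetical illustrations of the approach, not the actual Government of Canada tool.

```python
# Hypothetical sketch: mapping an algorithmic impact assessment score
# to a severity level, and that level to cumulative safeguards.
# All thresholds and safeguard names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Assessment:
    """Answers to an impact questionnaire, reduced to a raw score."""
    raw_score: int  # e.g., summed weights of questionnaire answers, 0-100


def impact_level(assessment: Assessment) -> int:
    """Bucket a raw score into an impact level from 1 (low) to 4 (high)."""
    thresholds = [25, 50, 75]  # illustrative cut points
    return 1 + sum(assessment.raw_score > t for t in thresholds)


def required_safeguards(level: int) -> list[str]:
    """Safeguards scale with severity; each level adds to the ones below."""
    ladder = [
        ["plain-language notice that automation is used"],
        ["internal peer review of the automated system"],
        ["human review of each decision before it is final"],
        ["external peer review and a published audit"],
    ]
    # Requirements are cumulative: level N includes every lower tier.
    return [item for tier in ladder[:level] for item in tier]


if __name__ == "__main__":
    a = Assessment(raw_score=62)
    level = impact_level(a)          # 62 exceeds 25 and 50 -> level 3
    print(f"Impact level: {level}")
    for s in required_safeguards(level):
        print(f" - {s}")
```

The design point is that the review burden is decided by the assessed impact, not by the team building the system, so a higher-severity automation cannot opt out of peer review.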
I would also point to data. We could have the most unbiased code ever and still have bad, biased data. We've seen examples in the private sector, from Amazon's recruitment tool to others, where the data was biased and therefore the service and the algorithm became biased.
It's not just a question of the technology; it's actually a question of the data holdings that we currently have as well. Those are all things that we're looking at.
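To illustrate that point, here is a small sketch showing how a perfectly neutral learning rule, trained on skewed historical hiring records, faithfully reproduces the skew. The records and keywords are invented purely to demonstrate the mechanism.

```python
# Minimal illustration that neutral code can still produce biased
# results when the training data is biased. The data below is made up.

from collections import Counter

# Historical hiring records: (keyword_in_resume, hired). Suppose past
# reviewers favoured resumes mentioning "rugby", a proxy correlated
# with one demographic group rather than with job performance.
history = (
    [("rugby", True)] * 80 + [("rugby", False)] * 20 +
    [("chess", True)] * 30 + [("chess", False)] * 70
)

# A seemingly "unbiased" learning rule: score each keyword by how often
# such resumes were hired in the past. No group attribute is ever used.
counts: dict[str, Counter] = {}
for keyword, hired in history:
    counts.setdefault(keyword, Counter())[hired] += 1

model = {kw: c[True] / (c[True] + c[False]) for kw, c in counts.items()}

# The code contains no explicit bias, yet it reproduces the historical
# skew: "rugby" resumes score 0.80, "chess" resumes 0.30.
for kw, rate in model.items():
    print(f"{kw}: recommend with score {rate:.2f}")
```

Nothing in the rule itself discriminates; the disparity comes entirely from the data holdings it was trained on, which is why we have to examine the data as well as the technology.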
It's not a perfect solution, but it's something we're going to have to iterate on quite frequently.