Let me jump in.
You have to accept that there is a transparency aspect to this. I'll use an example. In the public sector at the moment, and this is very recent for the Government of Canada, there is a questionnaire that any department employing automated decision-making needs to fill out. It's some 80 questions. Based on the answers to those questions, they're assigned a level 1, level 2, level 3 or level 4 in terms of risk.
Then there are measures that need to be taken, including some additional notice requirements. They have to engage experts to peer review the work. But in the initial impact assessment itself, there are questions about the purpose of the automated decision-making they intend to employ and the impact it's likely to have on particular areas, such as individual rights, the environment or the economy.
We could argue about its generality and whether it could be improved, but it does seem to provide a transparency mechanism, in that it requires disclosure of the purpose of the algorithm and potentially its inputs, its benefits and costs, and the potential externalities and risks. Then, depending on the outcome of that assessment, additional accountability mechanisms can apply.
If you haven't looked at it yet, my question would be this: if and when you do take a look at the Canadian model for the public sector in more detail, is that something you could transpose to the private sector and treat more like a securities filing? That is, you would say, “this is going to be required for private sector companies above a certain threshold, and if there is any non-compliance, where material terms are excluded purposefully or negligently, then there are penalties.” Would that be sufficient to meet at least a baseline of transparency and accountability generally, before we get into sector-specific regulations?