I tend to say that there is no silver bullet for the appropriate governance of these systems, but risk assessments can be a very good starting point.
They're very good at catching problems at the pre-deployment or procurement stage. Their shortcoming is that they're only as good as the people or organizations that complete them, so they do require a certain level of expertise and, potentially, training: essentially, people who are aware of potential ethical issues and can flag them up while actually going through the questionnaire.
We've seen that with other sorts of impact assessments, such as privacy impact assessments, environmental impact assessments and, now, data protection impact assessments in Europe. There really has to be a renewed focus on the training and expertise of the people who will be filling them out.
They are useful in a pre-deployment sense, but, as I was suggesting before with biases, problems can emerge after a system has been designed. We can test a system in the design and training phases and say that it seems fair, non-discriminatory and unbiased, but that doesn't mean problems won't then emerge once the system is used in the wild.
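To make that concrete, here is a minimal sketch of the kind of pre-deployment test I have in mind: a demographic parity check comparing positive-decision rates across groups on held-out data. The function name, the synthetic data and the 0.05 threshold are all illustrative assumptions, not taken from any particular assessment framework.

```python
# A minimal sketch of a pre-deployment bias test, assuming binary decisions
# and a single protected attribute; all names and the 0.05 threshold are
# illustrative assumptions, not from any specific framework.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-decision rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical held-out test data: the model's decisions plus group labels.
rng = np.random.default_rng(seed=0)
y_pred = rng.integers(0, 2, size=1000)   # binary decisions from the model
group = rng.integers(0, 2, size=1000)    # protected attribute (0 or 1)

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:
    print("Flag for review before deployment.")
```

A check like this passing in the design phase is exactly the kind of result that can stop holding once the deployed system sees real inputs, which is why it cannot be the last test that is ever run.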
Any impact assessment approach also has to be complemented by in-process monitoring and post-hoc assessment of the decisions that were made, along with very clear auditing standards for what information needs to be retained and what sorts of tests need to be carried out after the fact, again to check for things like bias.
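Below is an equally minimal sketch of what that retention-plus-audit loop might look like: each production decision is logged with its group attribute, and an auditor later recomputes positive-decision rates per group from the retained records. The log format and field names here are assumptions for illustration only.

```python
# A minimal sketch of decision logging and an after-the-fact bias audit,
# assuming each production decision is recorded with the fields the audit
# needs; the log format and field names are illustrative assumptions.
import json
from datetime import datetime, timezone

AUDIT_LOG = "decisions.log"

def log_decision(record, path=AUDIT_LOG):
    """Retain each decision, with its group attribute, for later auditing."""
    record["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def audit_positive_rates(path=AUDIT_LOG):
    """Recompute positive-decision rates per group from the retained log."""
    counts, positives = {}, {}
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            g = rec["group"]
            counts[g] = counts.get(g, 0) + 1
            positives[g] = positives.get(g, 0) + rec["decision"]
    return {g: positives[g] / counts[g] for g in counts}

# Example: log two in-production decisions, then audit the retained record.
log_decision({"group": "A", "decision": 1})
log_decision({"group": "B", "decision": 0})
print(audit_positive_rates())
```

The design point is the separation of the two steps: the logging standard fixes what must be retained at decision time, so that the audit can be run, and re-run, long after the fact without having had to anticipate every question in advance.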