That's a first step. We need clarity on how these processes are being put in place by companies, such as insurance companies, that use data to make decisions about people. We need some form of access to that. It's understandable that they might want some secrecy, but government officials should be able to look into how they do it and make sure it agrees with the principles we put into law or regulations. That doesn't mean the system needs to explain every decision in detail, because that's probably not reasonable, but it's really important that they document, for example, what kind of data was used, where it came from, how the data was used to train the system, and under what objective it was trained. That way an expert can look at it and say, for example, that it's fine, or that there is a potential issue of bias and discrimination and maybe you should run such-and-such a test to verify that there isn't; if there is an issue, then you should use one of the state-of-the-art techniques for mitigating the problem.
On April 30th, 2019.