Sure. Actually, we know a lot of things about how that result is obtained. We know that it's obtained as a consequence of optimizing some objectives, for example minimizing the prediction error on the large dataset, and that tells us a lot about what the system is trying to achieve. Once the system is built, we can also measure how well it achieves that objective and how many errors it makes on new cases on average. There are many other things you can do to analyze those systems before they are even put in the hands of users.
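To make that concrete, here is a minimal sketch of the two steps described above: fitting a model by minimizing a prediction-error objective, then measuring its average error on new cases it never saw. The dataset, model, and library (scikit-learn) are illustrative assumptions, not anything referred to in the interview.

```python
# Minimal sketch: a model is obtained by minimizing prediction error on a
# training set, and its average error on new cases is estimated on held-out data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "the large dataset" (illustrative only).
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Training = optimizing an objective (here, regularized log-loss).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Average error on new cases, measured on data the model never saw.
error_rate = 1.0 - model.score(X_test, y_test)
print(f"Average error on new cases: {error_rate:.3f}")
```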
It's not really a black box. People call it a black box, but in fact it's very easy to look into it. The problem is that those systems are very complex and they're not completely designed by humans. Humans design how they learn, but what they learn in detail is something they come up with by themselves. Those systems learn how to find solutions to problems. We can look at how they learn, but what they learn is something that takes much more effort to figure out. You can look at all of the numbers that are being computed; there is nothing hidden. It's not black, it's just very complex. It's not a black box. It's a complex box.
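As a rough illustration of "nothing hidden, just very complex": in a tiny network, every parameter and every intermediate number can be printed and inspected; the difficulty in a real system is only the quantity. The weights below are random placeholders, not learned parameters of any system discussed here.

```python
# Sketch: every number a network computes is inspectable; nothing is hidden,
# there are just very many of them. Weights are random stand-ins (illustrative).
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(20, 64)), np.zeros(64)   # layer 1 parameters
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)     # layer 2 parameters

x = rng.normal(size=(1, 20))          # one input case
h = np.maximum(0, x @ W1 + b1)        # hidden activations: all visible
score = h @ W2 + b2                   # output score: also visible

print("number of parameters:", W1.size + b1.size + W2.size + b2.size)
print("hidden activations:", h.round(2))   # every intermediate value is available
print("output:", score.item())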
There are things that we can do very easily. For example, once the system is trained and we look at a particular case where it's making a decision, it's very easy to find out which of the input variables were most relevant and how they influenced the answer. There are things that can be done to highlight that, to give a little bit of explanation about its decisions.
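One simple version of that per-case analysis, sketched for a linear model where each input variable's contribution to a decision is just its coefficient times its value; richer models would use gradient- or Shapley-style attributions, but the idea is the same. The data, model, and feature names below are illustrative assumptions.

```python
# Sketch: for one particular case, which input variables were most relevant
# and how they influenced the answer (linear-model attribution, illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

case = X[0]                              # one decision we want to explain
contributions = model.coef_[0] * case    # per-feature push toward the positive class
top = np.argsort(np.abs(contributions))[::-1][:5]   # five most relevant inputs

for i in top:
    direction = "towards" if contributions[i] > 0 else "against"
    print(f"feature {i}: value {case[i]:+.2f}, pushes {direction} the positive "
          f"class (contribution {contributions[i]:+.2f})")
```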