There are models, particularly in the EU directives around data privacy, that focus more on bringing human decision-making into the loop. Where a decision is made that affects someone's life chances, for example, there needs to be some sort of human element in the determination of the result.
Again, this adds a certain level of accountability or transparency. Even where neither you nor I, or perhaps even a computer scientist, could actually explain how the algorithm came to the conclusion that you belong to a particular group, or that certain information should or should not reach you, we can still have some other form of explanation about what is being taken into account in determining what information we see and why a particular decision is being made about us. This is becoming more and more important as we move toward machine-made decision-making in all kinds of arenas.
I think people and countries are beginning to think about ways to put the "public" in public values and public discourse back into decision-making in this sphere, which, although it is largely privately controlled, is really a public infrastructure, in the sense that people increasingly need access to it for work, for social life, and for education. It's about righting the balance between decisions made from a private-sector perspective—not for nefarious reasons, but for profit, because that's what these companies are in business to do—and re-injecting public conversation and public discourse around what's happening: what kinds of decisions are being made, how people are being profiled, and how they're being categorized. I think this is a really important start.