We already try to provide that information to our users, in the context of the recitation of websites I mentioned in my account, where underneath that you can see a record of your location history and your search history. You can also see how we've made decisions about what advertising you should see, based on broad categorizations derived from your search behaviour and the ads you click on within our properties.
Rather than discussing algorithmic transparency, we need to focus on the outcome of that process. Does that outcome demand intervention, or does it demand supervision? You have to have a measure of the levels of harm, and a sense of whether you're seeing outcomes that are detrimental to the individual.
It's difficult to say that algorithmic transparency, in being able to see inside the box and see the gears, will reveal anything. In many cases the inputs coming through the algorithm change on a near-instantaneous basis, producing immediate results. Understanding the information that's being collected, which is already a requirement under PIPEDA, and then understanding the outcomes, is more relevant to the challenge we're trying to face: the individual user's understanding of their interaction with the box and the system, and how that interaction influences the information presented to them.