The first thing—and this is a really straightforward example that I was speaking about before—is that, in the same way that you require transparency for donations and spending, you should require transparency for the use of information and advertising. When a party puts out an ad online, it should have to report that. It should also have to report who it is going to.
Personally, I think that companies should do that, too. I don't see why not. I think it would be healthy for people to be able to scrutinize the advertising markets online, in general.
The other thing is that we have to understand that this is not always going to be a data issue. As algorithms and artificial intelligence develop, it will not always be clear whether there was consent to an inference, for example. I will give you a tangible example. Your cousin joins a genetic-profiling company, like 23andMe. That company is later acquired by an insurance company, or some other kind of company that looks at that genetic profile and infers, based on your relationship as their cousin, that you have a 95% chance of having a particular type of breast cancer, and then denies you health insurance. This might not be as applicable in Canada, but it absolutely is in the United States.
Here, there was consent to the actual data, because the data was the genetic profile of your cousin, who consented to that use. However, the resulting behaviour or action applies to you, and you didn't know it was happening and didn't consent to it.
Currently, it's difficult to say whether that information was about you. Was that an inference about you? When we're looking at artificial intelligence, we're looking at memories, understandings, behaviours, and inferences.
In the law, when we regulate people's behaviour, there is a component about what's in their heads, but there is also another component, which is their conduct. Taking a step back and looking not just at the issue of data and consent, but at what counts as acceptable behaviour for AI in general, is a really healthy mindset for people to have, I think.
These are decision-making machines, so we should be regulating how they can make decisions. This is really important, because as society moves forward, all of this information is going to be connected. What you do with your toaster may affect what your office computer does later down the road, or it may affect the price you're offered when you walk into Starbucks.
There are real issues that aren't to do with consent but with the ultimate impact on behaviour. That's a broader mindset.
The third thing is that when you look at technology—whether you're a Canadian, an American, a Brit, or whoever—the Internet is here to stay. You do not have a choice. You have to use Google. You have to use social media. You cannot get a job anymore if you refuse to use the Internet. This means that the issue of consent is somewhat moot. It's a false choice to say that if you don't want to be electrocuted, don't use electricity. In the same way, it's a false choice to say that if you don't want to participate in the modern economy, don't use data collection platforms.
We should be looking at these platforms as a utility in the same way that we would look at electricity, water, or roads as a utility, rather than as an entity where people or consumers are “consenting”.
The fourth thing is that there should be rules on reasonable expectations. When I joined Facebook in 2007, it did not have facial profiling algorithms. I put all of my photos onto Facebook, and I consented to “analysis of the data that I put on”, but that technology did not yet exist. Facebook then created facial recognition algorithms that read my face. Was that reasonably expected at the time? For a lot of people, it probably was not. There is very little regulation around something that is very unique to technology, which is the rapid development of new things.
Having some sort of rule or principle about reasonable expectation.... You might have consented to some platforms several years ago, but if something new happened, was that reasonably expected? If the answer is “no”, then maybe it shouldn't be allowed.