Thank you very much for inviting me to speak to you today.
I will make my remarks in English, but I will be happy to answer questions in either English or French.
I'd like to begin by saying that I think it is very important that more attention be given to data protection and privacy in relation to the activities of social media companies. I do find it somewhat ironic that the committee's mandate was framed in terms of studying the efforts and measures taken by social media companies to protect the personal information of Canadians. It's a bit like studying the efforts made by foxes to protect the lives of chickens.
I note that to the extent that Google, Facebook and other social media companies attempt to protect the personal information of Canadians, these efforts have been shaped by data protection law. The adequacy of our data protection legislation must therefore be a focus of attention.
The amendments from the first five-year review in 2006 have yet to make it through Parliament; the second five-year review is already late in getting under way. These should be matters for concern, particularly since the data protection environment has changed substantially since the law was first enacted.
The current law is particularly weak with respect to enforcement. The commissioner has no order-making powers and lacks the ability to impose fines or other penalties in the case of particularly egregious conduct.
The focus on social media and privacy, in my view, has two broad aspects. The first relates to how individuals use these tools to communicate amongst themselves. In this regard we hear concerns about employers accessing Facebook pages, people posting the personal information of other people online, criminals exploiting Facebook information, and so on. These are concerns about the information that individuals have chosen to share, the consequences of that sharing, and the norms that should govern this new mode of interpersonal exchange.
The second aspect, and the one on which I'll focus my attention, is the role of these companies in harvesting or in facilitating the harvesting of massive amounts of information about us in order to track our online activity, consumption habits, and even patterns of movement. In this respect, attention given to large corporations such as Facebook and Google is important, but there are also many other players in the digital environment who are engaging in these practices.
The business models of social media companies are generally highly dependent on the personal data of their users. Social networking, search engines, email, and many other services are offered to us for free. By hosting our content and tracking our activities, these services are able to extract a significant volume of personal data. The nature and quality of this data is constantly enhanced by new innovations. For example, information about the location and movements of individuals is highly coveted personal information. More and more individuals carry location-enabled smartphones and use these devices for social networking and other online activities. Even web browsers are now location-enabled, so information about our location is routinely gathered in the course of ordinary Internet activities.
The point is that more and more data of increasingly varied kinds is being sought, collected, used, and disclosed. This data is compiled, matched, and mined in order to profile consumers for various purposes, including targeted behavioural marketing. In some cases, this data may be shared with third-party advertisers, with application developers, or with related companies. Even where the data is de-identified, its fine-textured nature may still leave individuals identifiable, as companies such as AOL and Netflix have learned the hard way.
Individuals may also still be identifiable from detailed profile information. The substantial volumes of information gathered about us make us highly vulnerable to data security breaches of all kinds. It has become very difficult to protect our personal data, particularly in contexts where privacy preferences are set once, and often by default, and the service is one that we use daily or even multiple times each day. Facebook or a search engine would be examples of such services.
It's often difficult to determine what information is being collected, how it's being shared and with whom. Privacy policies are often too long, too unclear, and too remote for anyone to actually read and understand. We now enter into a myriad of transactions every day and there simply isn't time or energy to properly manage our data. It's a bit like walking through a swamp and being surrounded by a cloud of mosquitoes. To avoid being bitten we can swat away; we can even use insect repellents or other devices, but in the end we're inevitably going to be bitten—often multiple times.
It's also becoming increasingly difficult to avoid entering this swamp. People use social media to keep family and friends close regardless of how far apart they live or because the social network communities have become a part of how their own peer groups communicate and interact. Increasingly, businesses, schools, and even governments are developing presences in social media, which give even more impetus to individuals to participate in these environments. Traditional information content providers are also moving to the Internet and to Facebook and Twitter, and are encouraging their readers, listeners, and viewers to access their news and other information online and in interactive formats. These tools are rapidly replacing traditional modes of communication.
To date, our main protection from the exploitation of our personal information in these contexts has been data protection law. Data protection laws are premised on the need to balance the privacy interests of consumers with the needs of businesses to collect and use personal data, but in the time since PIPEDA was enacted, this need has become a voracious hunger for more and more data, retained for longer and longer periods of time. The need for data has shifted from the information required to complete particular transactions or to maintain client relationships to a demand for data as a resource to be exploited. This shift risks gutting the consent model on which the legislation is based. This new paradigm deserves special attention and may require different legal norms and approaches.
Under the traditional data protection model, the goal was to enable consumers to make informed choices about their personal data. In the big data context, informed choices are very difficult to make. Beyond this, there is an element of servitude that is deeply disturbing. Nancy Obermeyer uses the term “volunteered geoslavery” to describe a context where location-enabled devices report on our movements to any number of companies without us necessarily being aware of this constant stream of data. She makes the point that equipping individuals with sensors that report on their activities leaves them vulnerable to dominance and exploitation—yet this is a growing reality in our everyday lives. Going beyond the simple collection of data, social networking services encourage users to make these sites the hub of their daily activities and communications.
Our personal data is a resource that businesses, large and small, regularly exploit. The data is used to profile us so as to define our consumption habits, to determine our suitability for insurance or other services, or to apply price discrimination in the delivery of wares or services. We become data subjects in the fullest sense of the word. There are few transactions or activities that do not leave a data trail.
The case demonstrates how the provision of personal data is overlooked as an element of the contract between the company and the individual. Instead, it is treated as a matter governed by tangential privacy policies. This lack of transparency about the quid pro quo leaves consumers solely responsible for managing their personal information.
Concerns that excessive amounts of personal information are being collected can then be met by assertions that people simply don't care about privacy. To regard the sharing of personal data as part of a consumer contract for services, by contrast, places both competition law and consumer protection concerns much more squarely in the forefront. In my view, it is time to explicitly address these concerns.
Another social harm potentially posed by big data is, of course, discrimination. Oscar Gandy has written about this in his most recent book. We understand how racial profiling leads to injustice in the application of criminal laws. Profiling, whether it's based on race, sex, sexual orientation, religion, ethnicity, socio-economic status or other grounds, is a growing concern in how we are offered goods or services. Through big data, corporations develop profiles of our tastes and consumption habits. They channel these back to us in targeted advertising, recommendations, and special promotions. When we search for goods or services, we are presented first with those things that we are believed to want.
We are told that profiling is good because it means that we don't have to be inundated with marketing material for products or services that are of little interest. Yet there is also a flip side to profiling. It can be used to characterize individuals as unworthy of special discounts or promotional prices, unsuitable for credit or insurance, uninteresting as a market for particular kinds of products and services. Profiling can and will exclude some and privilege others.
I have argued that big data alters the data protection paradigm and that social networking services, along with many other free Internet services, are major players in this regard. To conclude my remarks, I would like to focus on the following key points.
First, the collection, use, and disclosure of personal information is no longer simply an issue of privacy, but also raises issues of consumer protection, competition law, and human rights, among others.
Second, the nature and volume of personal information collected from social media sites and other free Internet services goes well beyond transaction information and relates to the activities, relationships, preferences, interests, and location of individuals.
Third, data protection law reform is overdue and may now require a reconsideration or modification of the consent-based approach, particularly in contexts where personal data is treated as a resource and personal data collection extends to movements, activities, and interests.
Fourth, changes to PIPEDA should include greater powers of enforcement for data protection norms, which might include order-making powers and the power to levy fines or impose penalties in the case of egregious or repeated transgressions.
Those are my comments. Thank you very much.