Information & Ethics Committee on May 31st, 2012
Evidence of meeting #42 for Access to Information, Privacy and Ethics in the 41st Parliament, 1st Session.
The Chair Pierre-Luc Dusseault
Good morning, everyone. Welcome to the second meeting of the committee in which we will hear from witnesses who will be speaking to our study on privacy and social media.
It is my pleasure to welcome Ms. Scassa, Mr. Geist and Ms. Steeves, all three from the University of Ottawa. They will each have 10 minutes in which to make a presentation. Then there will be a time for questions and answers about the presentations.
Without further delay, I will give the floor to whomever wishes to start.
Go ahead, Ms. Scassa.
Teresa Scassa Canada Research Chair Information Law, Faculty of Law, Common Law Section, University of Ottawa
Thank you very much for inviting me to speak to you today.
I will make my remarks in English, but I will be happy to answer questions in either English or French.
I'd like to begin by saying that I think it is very important that more attention be given to data protection and privacy in relation to the activities of social media companies. I do find it somewhat ironic that the committee's mandate was framed in terms of studying the efforts and measures taken by social media companies to protect the personal information of Canadians. It's a bit like studying the efforts made by foxes to protect the lives of chickens.
I note that to the extent that Google, Facebook and other social media companies attempt to protect the personal information of Canadians, these efforts have been shaped by data protection law. The adequacy of our data protection legislation must therefore be a focus of attention.
The amendments from the first five-year review in 2006 have yet to make it through Parliament; the second five-year review is already late in getting under way. These should be matters for concern, particularly since the data protection environment has changed substantially since the law was first enacted.
The current law is particularly weak with respect to enforcement. The commissioner has no order-making powers and lacks the ability to impose fines or other penalties in the case of particularly egregious conduct.
The focus on social media and privacy, in my view, has two broad aspects. The first relates to how individuals use these tools to communicate amongst themselves. In this regard we hear concerns about employers accessing Facebook pages, people posting the personal information of other people online, criminals exploiting Facebook information, and so on. These are concerns about the information that individuals have chosen to share, the consequences of that sharing, and the norms that should govern this new mode of interpersonal exchange.
The second aspect, and the one on which I'll focus my attention, is the role of these companies in harvesting or in facilitating the harvesting of massive amounts of information about us in order to track our online activity, consumption habits, and even patterns of movement. In this respect, attention given to large corporations such as Facebook and Google is important, but there are also many other players in the digital environment who are engaging in these practices.
The business models of social media companies are generally highly dependent on the personal data of their users. In fact, social networking, search engines, email and many other services are offered to us for free. By hosting our content and tracking our activities, these services are able to extract a significant volume of personal data. The nature and quality of this data is constantly enhanced by new innovations. For example, information about the location and movements of individuals is highly coveted personal information. More and more individuals carry with them location-enabled smart phones and they use these devices for social networking and other online activities. Even computer browsers are now location-enabled, and thus information about our location is routinely gathered in the course of ordinary Internet activities.
The point is that more and more data of increasingly varied kinds is being sought, collected, used, and disclosed. This data is compiled, matched, and mined in order to profile consumers for various purposes including targeted behavioural marketing. In some cases, this data may be shared with third party advertisers, with application developers, or with related companies. Even where the data is de-identified, its fine-textured nature may still leave individuals identifiable, as companies such as AOL and Netflix have learned the hard way.
Individuals may also still be identifiable from detailed profile information. The substantial volumes of information gathered about us make us highly vulnerable to data security breaches of all kinds. It's become very difficult to protect our personal data, particularly in contexts where privacy preferences are set once, and often by default, and the service is one that we use daily or even multiple times each day. Facebook or a search engine would be examples of such services.
It's often difficult to determine what information is being collected, how it's being shared and with whom. Privacy policies are often too long, too unclear, and too remote for anyone to actually read and understand. We now enter into a myriad of transactions every day and there simply isn't time or energy to properly manage our data. It's a bit like walking through a swamp and being surrounded by a cloud of mosquitoes. To avoid being bitten we can swat away; we can even use insect repellents or other devices, but in the end we're inevitably going to be bitten—often multiple times.
It's also becoming increasingly difficult to avoid entering this swamp. People use social media to keep family and friends close regardless of how far apart they live or because the social network communities have become a part of how their own peer groups communicate and interact. Increasingly, businesses, schools, and even governments are developing presences in social media, which give even more impetus to individuals to participate in these environments. Traditional information content providers are also moving to the Internet and to Facebook and Twitter, and are encouraging their readers, listeners, and viewers to access their news and other information online and in interactive formats. These tools are rapidly replacing traditional modes of communication.
To date, our main protection from the exploitation of our personal information in these contexts has been data protection law. Data protection laws are premised on the need to balance the privacy interests of consumers with the needs of businesses to collect and use personal data, but in the time since PIPEDA was enacted, this need has become a voracious hunger for more and more data, retained for longer and longer periods of time. The need for data has shifted from the information required to complete particular transactions or to maintain client relationships to a demand for data as a resource to be exploited. This shift risks gutting the consent model on which the legislation is based. This new paradigm deserves special attention and may require different legal norms and approaches.
Under the traditional data protection model, the goal was to enable consumers to make informed choices about their personal data. In the big data context, informed choices are very difficult to make. Beyond this, there is an element of servitude that is deeply disturbing. Nancy Obermeyer uses the term “volunteered geoslavery” to describe a context where location-enabled devices report on our movements to any number of companies without us necessarily being aware of this constant stream of data. She makes the point that equipping individuals with sensors that report on their activities leaves them vulnerable to dominance and exploitation—yet this is a growing reality in our everyday lives. Going beyond the simple collection of data, social networking services encourage users to make these sites the hub of their daily activities and communications.
Our personal data is a resource that businesses, large and small, regularly exploit. The data is used to profile us so as to define our consumption habits, to determine our suitability for insurance or other services, or to apply price discrimination in the delivery of wares or services. We become data subjects in the fullest sense of the word. There are few transactions or activities that do not leave a data trail.
This situation demonstrates how the provision of personal data is overlooked as an element of the contract between the company and the individual. It is treated instead as a matter governed by tangential privacy policies. This lack of transparency regarding the quid pro quo makes managing their personal information the consumer's sole responsibility.
Concerns that excessive amounts of personal information are being collected can then be met by assertions that people simply don't care about privacy. To regard the sharing of personal data as part of a consumer contract for services, by contrast, places both competition law and consumer protection concerns much more squarely in the forefront. In my view, it is time to explicitly address these concerns.
Another social harm potentially posed by big data is, of course, discrimination. Oscar Gandy has written about this in his most recent book. We understand how racial profiling leads to injustice in the application of criminal laws. Profiling, whether it's based on race, sex, sexual orientation, religion, ethnicity, socio-economic status or other grounds, is a growing concern in how we are offered goods or services. Through big data, corporations develop profiles of our tastes and consumption habits. They channel these back to us in targeted advertising, recommendations, and special promotions. When we search for goods or services, we are presented first with those things that we are believed to want.
We are told that profiling is good because it means that we don't have to be inundated with marketing material for products or services that are of little interest. Yet there is also a flip side to profiling. It can be used to characterize individuals as unworthy of special discounts or promotional prices, unsuitable for credit or insurance, uninteresting as a market for particular kinds of products and services. Profiling can and will exclude some and privilege others.
I have argued that big data alters the data protection paradigm and that social networking services, along with many other free Internet services, are major players in this regard. To conclude my remarks, I would like to focus on the following key points.
First, the collection, use, and disclosure of personal information is no longer simply an issue of privacy, but also raises issues of consumer protection, competition law, and human rights, among others.
Second, the nature and volume of personal information collected from social media sites and other free Internet services goes well beyond transaction information and relates to the activities, relationships, preferences, interests, and location of individuals.
Third, data protection law reform is overdue and may now require a reconsideration or modification of the consent-based approach, particularly in contexts where personal data is treated as a resource and personal data collection extends to movements, activities, and interests.
Fourth, changes to PIPEDA should include greater powers of enforcement for data protection norms, which might include order-making powers and the power to levy fines or impose penalties in the case of egregious or repeated transgressions.
Those are my comments. Thank you very much.
The Chair Pierre-Luc Dusseault
Thank you very much.
Mr. Geist, you have 10 minutes.
Dr. Michael Geist Canada Research Chair, Internet and E-commerce Law, University of Ottawa, As an Individual
Thank you very much.
Good morning. My name is Michael Geist. I am a law professor at the University of Ottawa, where I hold the Canada research chair in Internet and e-commerce law. I was a member of the national Task Force on Spam, and I currently serve on the Privacy Commissioner of Canada's expert advisory committee, but I appear before this committee today in a personal capacity representing only my own views.
My opening comments will identify several areas for potential government action, but I want to provide a bit of context with three key caveats.
First, which I think may be stating the obvious, is that social media is an enormously important and positive development. The number of users is staggering and its role as a key source for communication, community, and political activity grows by the day. The opportunities presented by social media should be embraced, not demonized, in my view, and government should be actively working to ensure that it incorporates social media into its policy consultation processes.
Second, Canada has played a leadership role, to a certain extent, in the use and regulation of social media. The Privacy Commissioner of Canada was the first to conduct a major privacy investigation into Facebook and has led on other issues with respect to social media and Internet companies.
Third, while we have had some influence through those investigations, Canada has not led in creating the social media services used by millions around the world. I believe that the failure to articulate and implement a national digital economy strategy comes back to haunt us in these circumstances, where the ability to place an unmistakable Canadian stamp on social media is undermined by the policy failures that have done little to encourage the development of Canadian e-commerce and social media.
With those caveats, what is there to be done? I'd like to focus on four areas of interest.
First, I think we need to finish what we've started.
The government has introduced and even passed legislation that can be helpful in addressing some of the concerns that arise from social media, yet these initiatives have stalled short of the finish line. Anti-spam legislation, for example, received royal assent in 2010, yet has still not taken effect as final regulations have not been approved. In fact, Industry Canada officials now indicate that it could be well into 2013 before the regulations take effect. Given the amount of work that went into this legislation, I find it shocking that it has been left in limbo.
Moreover, Bill C-12, the PIPEDA reform bill that seeks changes arising from the 2006 privacy review continues to lag in the House of Commons, with there frankly seeming to be no interest in moving forward with the bill. Indeed, I'd argue that the bill is even now outdated, and a full PIPEDA review to address emerging concerns such as order-making power—as you just heard—and damages, and tougher security breach requirements than those found in the bill is needed. In fact, the Bill C-12 security breach reporting rules are primarily bark with little bite, given the absence of penalties for failure to comply.
Successive governments have promised a digital economy strategy for years and have failed to deliver. The strategy has come to be known as the “Penske file”, a reference to the Seinfeld episode that involves working on an imaginary file. While other countries are now years into implementing their strategies, in Canada we still lag behind.
I think it also should be noted that these issues must increasingly be addressed in concert with the provinces. The line between federal and provincial jurisdiction on many of these issues is blurry, and legal challenges against federal legislation are a real possibility. Work is needed to begin to develop minimum standards that can be implemented at the provincial level, should federal leadership be challenged in the courts by companies seeking to circumvent their privacy obligations.
Second, the devil is in the defaults. In many respects, social media and Internet companies are the most powerful decision-makers when it comes to privacy choices. As my colleague Professor Ian Kerr says, the devil is in the defaults. In other words, the choices made by leading social media companies with respect to default privacy settings are the de facto privacy choice for millions of users. Given the increasing pressure to generate revenues, we can expect that those default choices are going to change in more aggressive ways to make use of user data.
There are examples of companies that are doing good work in this area. Twitter recently implemented do-not-track options that won plaudits from the Federal Trade Commission in the United States. Google offers its users transparency tools so they can obtain detailed information about what information is collected, some of the ways Google uses it, and how they can modify some of their privacy choices. The company has also been transparent about law enforcement requests for information and copyright takedown demands.
There needs to be continued work on these defaults, as well as initiatives to provide users with greater information and transparency, and steps to ensure that companies live by their privacy commitments.
Third is the issue of lawful access. The introduction of Bill C-30 brought with it an avalanche of public outrage and concern over proposed Internet surveillance legislation. While much of the focus was on mandatory warrantless disclosure of subscriber information by telecom service providers, the potential for social media and big data Internet sites to serve much the same purpose cannot be overlooked.
A recent investigation by the Privacy Commissioner of Canada into Nexopia, a Canadian social network, identified hundreds of law-enforcement requests for customer name and address information, frequently for accounts that should have been deleted months earlier. Social media, as we've heard, generates a treasure trove of personal information that must enjoy full privacy protection and court oversight before disclosure. Indeed, documents that I recently obtained under access to information indicate that Public Safety is thinking about how these rules are applied to social media sites and services. I believe that Bill C-30 needs to go back to the drawing board to effectively account for these privacy concerns.
Fourth is the question of new legal issues, of which Professor Scassa has identified a number. I would argue that while much can be done to use or augment existing rules, social media and Internet sites do raise some unique issues that may require targeted responses. In the interest of time I would like to quickly identify two.
First is the issue of “do not track”. As you may know, cookies can be used to trace the web-browsing habits of users, including when they visit third-party sites. For example, Facebook inserts a cookie on user browsers that traces your activity as you surf the Internet. Any site with nothing more than a Facebook “like” button, as found on Conservative, NDP, and Liberal websites, means that Facebook records a visit to that site and retains that information for months. A growing number of sites, including Yahoo, AOL, and Twitter, respect the functionality found in Firefox browsers that allows users to choose not to be tracked. Google has said it will implement similar technology in its Chrome browser.
However, many sites have been slow to adopt the do not track option, and Facebook has thus far declined to do so. Given the failure of the industry to self-regulate, it is appropriate for government to step in with stronger measures to ensure that this form of user choice is implemented and respected.
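The do-not-track mechanism described here is, at bottom, a single HTTP request header that sites may choose to honour. As a rough illustration (a hypothetical sketch, not any real site's implementation), a server that respects the setting would check the `DNT` request header before setting a tracking cookie:

```python
# Hypothetical sketch of how a server that honours "do not track" decides
# whether to set a tracking cookie. Browsers such as Firefox send the
# header "DNT: 1" when the user has opted out of tracking.

def should_set_tracking_cookie(request_headers: dict) -> bool:
    """Return False when the browser sends DNT: 1, i.e. the user opted out."""
    dnt = request_headers.get("DNT", "").strip()
    return dnt != "1"

# A user who enabled the do-not-track preference in the browser:
opted_out = {"Host": "example.com", "DNT": "1"}
# A user whose browser sends no preference at all:
no_preference = {"Host": "example.com"}

print(should_set_tracking_cookie(opted_out))     # False: honour the opt-out
print(should_set_tracking_cookie(no_preference)) # True: default behaviour
```

The point of the policy debate is precisely that this check is voluntary: nothing in the protocol forces a site to call anything like `should_set_tracking_cookie`, which is why self-regulation has stalled.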
Second is the growing problem of social media misuse. For example, in recent months there has been an increasing number of stories of employers requiring employees to provide their Facebook user ID and password as a condition of a job interview. Seeking the same information with direct questions would typically be prohibited, so this is used to circumvent long-standing standards and principles within employment law. In response, the State of Maryland recently passed a law banning employers from requiring employees or job applicants to provide access to their personal digital and social media accounts. Several other states in the United States are working on similar legislation, and I believe that Canada should follow suit.
Thanks very much for your attention.
The Chair Pierre-Luc Dusseault
Thank you, Mr. Geist.
Now we move to our last witness for today. Ms. Steeves, you have 10 minutes.
Dr. Valerie Steeves Associate Professor, Department of Criminology, University of Ottawa
Thank you very much.
I'm the principal investigator of MediaSmart's Young Canadians in a Wired World research project. We've been collecting data about young people's experiences of online privacy for the past 12 years, which coincidentally means that we've been collecting data throughout the lifetime of PIPEDA. Over that time, we've tracked significant shifts that, I would suggest, provide important context for the work the committee has set out for itself. So I'd like to start my comments with a brief discussion of these shifts and then leave you with four specific recommendations.
In 2000, when PIPEDA came into force, the idea behind the legislation was that it would develop infrastructural mechanisms that would encourage people to have trust in e-commerce so that they would participate in this new form of wealth creation. When it came into force, we sat down and talked to parents and kids. The parents we talked to were very enthusiastic about this project. They had a lot of faith that the Internet was going to bring a lot of benefits to their children, and they felt that the companies that were developing these technologies were giving their kids tools to help them deepen their educational experience and also to help prepare them for the marketplace of the future.
They also told us that they trusted their kids to exercise good judgment when they went online. They weren't going to watch them all the time; they'd be in the background. They figured that their kids would make a few mistakes, but that when the kids got into trouble, they would come and say, “Hey, I need some help”. When we asked them if they would consider monitoring their children when they were online, they all told us, “Oh, no, that would be a breach of the trust between me and my child. If I did that, I would be invading my kid's privacy, so I would not do that”.
For their part, the kids we talked to in 2000 described the Internet as a completely private space. Adults couldn't even find it, let alone control it. They weren't worried about online privacy in 2000 because they were convinced that they had total anonymity when they were online. Interestingly enough, when they were deciding where to go when they were on the web, they looked for corporate brands because they felt that the companies that owned these brands were trustworthy. They were friends; they could trust them.
By 2004, for parents, certainly, the Internet had gone from being a panacea to a source of family conflict. They were aware their kids could release personal information online. They knew this was problematic. They had strict rules in the house, “Just don't do it”, but they spent an awful lot of time limiting, managing, and fighting over their kids' online activities.
The kids we talked to in 2004 had fully integrated online technologies into their personal lives, which I think underlines Professor Geist's introductory comments about the benefits of social media. These kids use this media and continue to use it to try on different identities, to deepen their connections to their real-world friends, and to follow their own interests. In 2004 they sometimes still did this anonymously, but most of the time they wanted to identify themselves because, contrary to popular opinion, they weren't talking to strangers. They were talking to the kids they went to school with and they needed to identify themselves so they could find their friends when they were online.
Even though they knew they could be watched, and they knew they were on so-called public media, online privacy was still incredibly important to these kids. I would urge you to be very cautious about any claim that kids don't care about privacy because they post their lives on Facebook. Anyone who says that just hasn't taken the time to talk to kids; they care deeply about online privacy. It was becoming a growing concern for them in 2004, and in a follow-up survey of 5,500 Canadian school kids, about half of the kids we surveyed were beginning to notice that ads were popping up and were built into the places they went online.
Fast-forward to 2011. Now parents tell us that because kids go online through multiple points of entry and devices (laptops, computer labs, library networks, iPods, smart phones, iPads, gaming consoles), it's becoming increasingly difficult to supervise their kids' online activities. They also told us this was highly problematic because they felt they needed to supervise more, since releasing personal information is now simply taken for granted: you go online because that's what you're expected to do. They were angry at online companies because, from their point of view, these companies were encouraging their kids to disclose everything in order to make a profit. This resentment and lack of trust is a significant shift from 2000, when high-tech companies were seen to be building a future in which kids would be empowered through technology.
Over that same time period, corporate sites, especially those sites targeting children, shifted from talking about privacy to talking about safety. It makes sense from the corporation's point of view, because when I'm talking about privacy, I'm the privacy risk because I'm collecting your information. If I'm talking about safety, I can tell you and your parents not to worry, because I'm keeping an eye out, I'm watching your child and I'll keep them safe.
Interestingly enough, almost all of the parents we talked to in 2011 were overwhelmed by this discourse of online danger. In fact, the sense of fear was so strong that they argued that good parents can no longer trust their kids and can no longer exercise the benign neglect that was so common in 2000. And again, many of them blamed online corporations. As one Toronto parent said, “I really resent the fear that these companies have instilled in people.” All the parents said they were not even sure what the dangers were; all they knew was that they were very afraid. They don't want to spy on their kids because that will hurt their relationship with them, but if they have to do it to keep them safe, they will spy on them.
For their part, the kids knew it. They told us that the unregulated private space they so enjoyed in 2000 and 2004 is now fully monitored, and they know it's fully monitored by parents, by schools, by their own peers, and by the corporations that own the sites they visit.
This puts kids in a very uncomfortable position, precisely because network technologies are so embedded into their social interactions. Interestingly enough, too, the kids said that all they needed was space to talk to their friends. They want parents and adults in the background, but they need privacy if they're to get the benefits of social interaction.
A number of them started talking about getting off Facebook and getting off their cell phones, because they were under so much surveillance. Interestingly enough, they all reported that the surveillance they experienced from all of those different people eroded the relationships of trust that are essential to their getting the help they need when they need it.
They're also beginning to question what will happen now that employers and police can get access to their Facebook profiles. They are also beginning to worry about what they called “the creepy people in the corporation who are watching them”. When you hear kids talk about creeps, creeping or being creepy, pay attention. That means somebody has overstepped the norms that are associated with exposure and has invaded their privacy.
It was particularly difficult when corporations did this, because when the 40-year-old creep sent you a message on Facebook, you simply blocked or un-friended him. The kids said “We can't do that with the corporations, because they own the sites we're on”. They also felt that privacy policies were written in totally incomprehensible language on purpose, precisely so companies wouldn't have to reveal what they were doing with their information.
Although kids still tend to congregate on corporate sites, like Facebook and YouTube, they no longer see online corporations as friendly or trustworthy. I think that's particularly important to keep in mind, because PIPEDA was designed to create that level of trust.
What can we do about it? How can we make it better? I have four suggestions for you.
First, I suggest that we need to increase the transparency of the business plans behind these sites. In 1999, when a number of us appeared before your predecessor committee, the government talked about PIPEDA being a floor and not a ceiling. As soon as it was passed, it quickly became the ceiling.
I would suggest that there is a great deal of empirical evidence out there that the consent mechanisms we rely on, namely user license agreements and privacy policies, are not being drafted to inform individuals so they can make choices about what they disclose; they're being drafted to protect the organization collecting the information from litigation.
In addition, it's becoming increasingly difficult to discover how that information is being used. I want to give you two very quick examples to illustrate that.
In 2000, I did a lot of research on a site called Neopets, which allows kids to create an online pet. They have to earn points on that site in order to buy their pet products and they would earn points by filling out market surveys.
In 2000, kids were asked to fill out a survey on breakfast food, for example, and in that context they were asked additional questions, like: How much money do your parents make? Do you have a big house? How many cars are in your family? What kind of cars do your parents drive? They were also asked to pick, from a list of 60 items, the things they might be interested in. The list included things like beer, liquor, cigars, cigarettes, and gambling. That information was then used to embed advertising into the site to encourage certain kinds of consumption.
I have some idea of the business plan behind that site. Since that time, because of concerns raised both in Canada and the United States, those practices have become far less transparent. I can now get access to that kind of information only by snail mail, and only if I guarantee to them that I am a corporation. As a researcher, as a parent, and as a concerned citizen, I'm out of luck. I can't tell you what they're doing with the information.
It also has become much more difficult to see how this information is being used. Collection no longer occurs right in front of you. It occurs in the background. I got a friend request from Facebook, though I've never had a Facebook account. I have no relationship with this company. It said there was somebody called Melissa that I might want to be friends with, so I should join their network. I've never had a relationship with them, but they managed to track me to my daughter, even though we don't have the same last name, and even though she's never had a Facebook account either. I didn't release that information. I have no relationship with that company, and yet it is able to try to manipulate my behaviour through some business use that is non-transparent.
Second, I urge the committee to look not just at the use of personal information—
The Chair Pierre-Luc Dusseault
Could I ask you to wrap up quickly by providing your last two recommendations?
Associate Professor, Department of Criminology, University of Ottawa
Okay, I'm almost done.
I urge the committee to look also at the uses of aggregate data, because aggregate data is what drives the profiles that Professor Scassa was talking about. Personal information isn't the only privacy problem that we face online.
Third, I would suggest that section 3 of PIPEDA gives you an opportunity to find out about the purposes for which personal information and aggregate data are used by corporations.
Fourth, when we have these discussions the conclusion is often that we need more education. After working in privacy education for the last 18 years, I would suggest it is time that we started to take digital literacy education seriously. Right now, because the government is not supporting it, you're leaving it by default to corporations. We need to support public-interest organizations so they can provide people with the information they need to make intelligent choices and informed decisions on the Internet.
Thank you very much.
The Chair Pierre-Luc Dusseault
Thank you very much. My thanks to the three witnesses.
We now move to a 10-minute question and answer period.
Mr. Angus, you may start.
Charlie Angus Timmins—James Bay, ON
Thank you very much, all three. This has been a fascinating discussion.
The New Democratic Party sees incredible democratic potential and social development possibilities through new media. The question is how to strike the balance. There are some disturbing elements that are happening in the world of Web 2.0, and we need to be careful. This is the thing. We don't want to overreach and interfere, but we want to ensure that protection is there.
Madam Scassa, I wanted to begin with the issue that you raised with PIPEDA, because this is our front line of defence. Our Privacy Commissioner has pointed out that Canada is falling farther and farther behind. We are becoming a laggard in basic privacy protections. I'm concerned about the breach reporting requirements. It seems that the potential rewrite of PIPEDA would allow the companies quite a bit of leeway in deciding whether or not to tell someone that their personal privacy has been breached. They talk about a serious risk. That is a pretty high bar. I can't imagine any company ever willingly telling their consumers that somebody has been hacking their data.
Do we need to have mandatory reporting? Would administrative monetary penalties ensure that these companies take the protection of personal data seriously?
Canada Research Chair Information Law, Faculty of Law, Common Law Section, University of Ottawa
With respect to those two aspects, monetary penalties and data security breach notification, the concern with putting some boundaries on the obligation to report security breaches is that breaches are, unfortunately, so common and of so many different varieties that, if there were an automatic mandatory obligation to report every one, consumers would quickly become overwhelmed and even more distrustful of what corporations are doing with their data. There can be all kinds of issues or problems. Some middle ground was sought whereby only the more serious breaches, those that really posed a risk to consumers or individuals, would have to be reported.
There are different ways that you can do that. You can leave it to the corporation to determine the seriousness of the breach and whether or not they should be reporting it. Or you can have an obligation that corporations report breaches to the Privacy Commissioner and then decide, in consultation with the Privacy Commissioner, what steps should be taken to notify consumers. There can be a range of different types of notification or different types of responses.
I'm somewhat sympathetic to the concern about overwhelming consumers with information about breaches, but at the same time, I think there are ways to do it that won't leave the decision-making entirely in the hands of companies to determine when a breach presents a serious risk of harm.
The concept of serious risk of harm is a difficult one as well, just because it may not always be easy to assess what amounts to a serious risk of harm for individuals. I think that's going to be a difficult threshold.
As for administrative penalties, I think they would be an important weapon in the arsenal of the Privacy Commissioner. Not only does an administrative penalty impose a sanction on companies, which can be important in signalling that there has been a lapse in behaviour that is problematic and needs to be addressed, but it also has a public shaming dimension. One of the concerns frequently expressed about PIPEDA is that the commissioner has taken a very soft approach to dealing with corporations and doesn't name names, particularly in the context of most complaints, so not enough information is provided.
Charlie Angus Timmins—James Bay, ON
We're definitely feeling the need to allow the Privacy Commissioner to be the adjudicator, because an individual could certainly panic over any manner of breach without necessarily knowing the extent of it. The Privacy Commissioner certainly represents the public interest and has the ability to adjudicate.
Mr. Geist, I'm interested in this idea of our falling behind. Canada was a world leader in digital development. Six or seven years ago we had some of the highest rates of penetration, access, and speed. Now we look at the OECD standards and we're in the bottom third of the pack. We're looking at a paucity of vision of where we need to go with a broader digital strategy, in terms of democratic involvement, consumer rights, and economic initiative. Could you explain to us what your concerns are?
Canada Research Chair, Internet and E-commerce Law, University of Ottawa, As an Individual
Well, sure. This is an issue that I think most of our peer countries, most of the developed world, have identified as absolutely crucial to future long-term innovation and economic prosperity, as well as integral to what our education system, entertainment, and culture look like. It plays a role in so many different ways.
I think, as we've seen over the last number of months, whether through Bill C-30, or SOPA in the United States, or ACTA in Europe, the reality is that there's a very important political and participatory dimension here as well.
Most other countries have developed digital economy strategies, focusing on everything from ensuring widespread access, to bridging the digital divide in terms of basic access to computers, to the digital literacy and skills that Professor Steeves talked about, to policies that create the right framework for businesses to start here and grow here. In Canada, by contrast, we've seen virtually nothing on that front.
In fact, there was a perfectly good consultation on this a couple of years ago when Industry Minister Clement was minister. There was a lot of feedback on it. We've seen a lot of other countries that have provided models we could look to, and yet there has been no digital economy strategy put forward. Few legislative initiatives have been put forward. I mentioned one, the anti-spam bill, but, 18 months after the bill received royal assent, there is still no finalization of the regulations themselves.
We've seen things like CAP, the community access program, eliminated during the most recent budget at the very time when there was at least an opportunity to look for some private sector leadership. In the United States you've got efforts between the government and large ISPs to provide low-cost computers, and low-cost broadband connectivity to ensure that the poorer parts of our society have access. We don't have any of that taking place in Canada.
So when you come into committee and you start asking what are some of the big policy issues we have to grapple with, part of the problem is that you've got virtually no leading Canadian companies embedding the kinds of Canadian values we're talking about in what they do.
You've got little leverage in trying to ensure compliance, because all of these companies are located outside of the jurisdiction. While that's not to say that you can't do anything—we've seen that there are some measures that can be taken—we'd be on far stronger ground, frankly, if we'd just get on with this issue of trying to set out a framework for the future.
The Chair Pierre-Luc Dusseault
Thank you, Mr. Geist.
Your time is up, Mr. Angus.
Mr. Del Mastro has the floor for seven minutes.
Dean Del Mastro Peterborough, ON
Thank you, Mr. Chair.
Thank you to the witnesses. All the witnesses have given very interesting presentations today.
The Privacy Commissioner appeared on Tuesday and said that big data is the currency that Canadians are freely giving away without really understanding what it is that they're providing.
Professor Scassa, I believe you talked about companies harvesting information about us. I believe that was the term you used.
Professor Geist, you talked about default privacy settings and the devil being in the details.
Ms. Steeves, I'm actually very surprised that you could receive a friend suggestion through your daughter, who has a different last name, when you've never been on Facebook.