Thank you for this opportunity to speak with you today.
Over the past several years my research has focused on some of the social media sites most popular among children, from online communities like Neopets to virtual worlds like Club Penguin. These sites don't look much like Facebook, but they nonetheless allow for many of the same types of social interactions and activities we identify as characteristic of social media.
Privacy issues are of enormous relevance within these environments. The research shows that since the very early days of the World Wide Web, kids' privacy rights have been infringed upon for commercial purposes within certain online social forums. This is far more common than most of the other risks associated with kids online. It's also something that in other countries has led directly to the establishment of child-specific privacy legislation. The key example here is the U.S. Children's Online Privacy Protection Act, or COPPA, which was initially created in response to the then-growing practice of soliciting names and addresses from children in order to direct-market to them.
Today the type of data collected from kids and the purposes for which it's used have both expanded significantly. The article that was circulated to you in advance of my appearance here today describes this shift in detail, explaining industry trends toward data mining, in which children's conversations, behaviours, and ideas can become fodder for market research and product development.
In my work in this area, I have observed that within social media forums, when children are considered at all, concern for their rights often plays second fiddle to narrowly defined notions of risk. Children are still far more often seen as potential victims or, conversely, as potential criminals in the online environment. As such, the emphasis is placed on protecting them from strangers, from each other, and from themselves, rather than on supporting and empowering them as citizens.
This tendency has greatly shaped the way social media companies address child users. The first and most common response has been simply to ban children under the age of 13 from participating in social media sites. This was the strategy found until very recently on Facebook, and it remains common across other popular social media as well. Although some children can, and often do, bypass these bans (by lying about their age, for instance), a formalized age restriction still has a deep impact on how and where children use social media. It also serves to deflect some of the public and regulatory scrutiny that can attach to sites that openly allow or invite children to participate.
While in some cases age restrictions may very well be appropriate (and there are many sites where they would be), in others the no-children-allowed approach has more to do with avoiding the risks and complications that kids might bring than with the actual content or activities that unfold there. The result is that younger children are frequently barred from participating fully and inclusively in online culture, and from reaping many of the benefits and opportunities that social media presents, simply because accommodating them has been deemed too much work, too expensive, or simply too risky.
Another increasingly common response is the creation of tightly controlled, child-specific social media: social networking sites, virtual worlds, and online communities designed and targeted specifically to children, usually under the age of 13. In my research I've found that in many of these cases the emphasis on risk has put privacy front and centre. Privacy concerns are quite apparent at the level of design. They surface in legal documents such as privacy policies and terms of use, and they appear in the marketing of the sites themselves.
However, a number of areas are in dire need of improvement. As mentioned, there is continued evidence that children's online interactions are being surveilled and data-mined, most often without the full knowledge or consent of the kids involved, or that of their parents and guardians. Kids are regularly asked to consent to these kinds of activities through the privacy policies and terms of use they must accept in order to participate. Yet even on sites designed and targeted to younger children, these documents are long and extremely complex. They describe a wide variety of data collection activities and include a number of terms that are inappropriate, and even inapplicable, for children to agree to.
This raises important questions about informed consent, an issue that's particularly pressing when the users are young children with widely varying literacy levels and emerging capacities for understanding complex legal relationships. Best practice would be to provide a child-friendly version of both of these documents to ensure that children and their parents know exactly what they're agreeing to. While there are definitely some really great examples of this practice out there, overall very few sites for kids bother to do it. When they do, the child-friendly versions are rarely comprehensive: most don't explain the full reasons for collecting user data, or they describe only those items that present the social media company in a positive light.
The misrepresentation of children's privacy as a matter of online safety is also an increasingly prevalent trend. Now, don't get me wrong here. A broader consideration of how rules and design features aimed at protecting children's privacy rights might also offer protection from online predators and bullies has some very real benefits for children's safety and for their enjoyment of social media. But so far, in many cases this dual function has been realized in ways that work primarily to obscure the underlying commercial practices that privacy policies are actually meant to address. When children's privacy is reframed as predominantly a matter of online safety, defined in these cases as safety from other users, the more mundane and less obviously risky threats to children's privacy, such as corporate surveillance and invasive market research, are sidelined.
A related emerging trend is to commercialize the safety features themselves, as I discovered in a recent study of kids' virtual worlds. Some kids' virtual worlds come with a “safe chat” version, where chat between users is limited to selecting preconstructed sentences from a drop-down menu. In one case, the “safe chat” version limited kids' options to a mere 323 different phrases, 45 of which were cross-promotional and 30 of which promoted third-party ads. As you might have guessed, none of these phrases were in the least bit negative. Kids could chat about how much they loved the brand but were prohibited, by design, from saying anything critical about it.
Among the many potentially negative effects this can have on children is the impact on their rights. These examples reveal that an unfortunate trade-off is taking place: limited approaches to children's privacy and safety can place undue restrictions on children's other rights, such as the right to freedom of expression or the right to participate freely in cultural life.
Now, it's important to note that what I've described here are general trends, found mostly in commercial social media sites that are popular among children. Not all social media companies follow these practices. There are, in fact, a number of Canadian companies that have come up with some pretty brilliant alternative strategies for balancing kids' privacy, safety, self-expression, and cultural participation. There is potential for real leadership here, but these individual, small-scale, ethical, rights-based approaches currently lack the regulatory and government support they would need to develop into widespread industry practice.
In the time I have left, I'd like to outline four key takeaways, or recommendations.
First, there is a clear and growing need for child-specific regulation on the collection, management, and use of children's data. In crafting it, however, we'll need to avoid repeating the mistakes that have plagued previous attempts such as COPPA in the U.S., which resulted in kids losing access to certain very important social spaces, in widespread lying about ages, or both. We'll also need to expand this regulation in ways that better reflect current and emerging online data collection practices.
Second, we need a much clearer articulation of the ethics of informed consent where children of various ages are involved.
Third, we need to strive for a better balance between children's privacy rights and their other rights, such as freedom of expression and the right to participate in cultural life, both within our discussions of these issues and within regulations, whether amended or new.
Last, we need to establish clearer leadership and stronger enforcement of these child-specific rules, which would include acknowledging and supporting the innovative, ethical, rights-based examples that certain independent and small Canadian social media companies are already working to build.
I look forward to discussing these issues further with you during the question period.
Thank you.