Thank you very much, Mr. Chair.
Members of the committee, thank you for the invitation to appear today to speak to you on such an important subject.
We, meaning Google and I, haven't had the opportunity to appear before this committee in quite a while, so I'd like to take a few brief moments to tell you about Google in Canada.
In 2002, Google opened its doors in Toronto. It was one of our first offices outside the United States. After 15 years of growth, we now have more than 1,000 Googlers working across four offices: in Toronto, Kitchener-Waterloo, Montreal, and right here in Ottawa. We are excited about Canada. We are excited about the way we've been able to build world-class engineering teams that work on products used by billions of people every day.
Those products are being built in the four offices I just mentioned, and they are being used to map northern communities, to make national parks more accessible to all, and to make our morning commute as painless as possible.
We are also increasingly working with Canada's community of artificial intelligence and machine learning researchers in both Toronto and Montreal. Canada, as we all know, is a world leader in this field, and the opportunity for scientific breakthrough, practical innovation in consumer and business products, and industry-wide growth bodes well for the Canadian economy.
I will turn to the subject under discussion today, PIPEDA. I've been in this field for more than 10 years, and I've always debated how to pronounce it, so I'm glad to hear that there's a mixture.
As a principles-based privacy framework, PIPEDA is as relevant today as when it was first introduced. The broad principles that underpin privacy and data protection regulation have held fast through many cycles of technological change. We expect that the same will hold true as we see mobile devices gain in popularity and as machine learning gains wider use.
Of course, the specific application of these privacy principles will change and evolve, as it always has. At Google, we believe that data-driven innovation is compatible with a commitment to privacy. Our commitment focuses on four elements.
The first is choice. We provide users with meaningful privacy choices throughout the lifespan of their Google account: when creating their account, as they use our services, and when they abandon or delete their account.
The second is transparency. We help users make good privacy decisions by making it easy to see what data Google collects to power the personalization of their services and the advertising they may see.
The third is control. We provide our users with powerful, meaningful privacy controls, ensuring that they are experiencing Google on their own terms.
Finally, and I would say importantly, comes security. We invest heavily in keeping users' data accessible to them and only to them.
At Google we know that there is no “one size fits all” approach to protecting user privacy. Privacy means different things to different people, and we want to help our users to feel comfortable and confident about the information they share with us, even as they interact with our products on desktop, tablet, phone, or home devices.
We place value on being upfront and transparent with our users and on speaking to them about privacy in clear language that they understand. In 2015, we introduced a site, privacy.google.com, that answers some of our users' biggest questions, such as what data Google holds or collects and what we do with that data. We've also made users' settings easier to find, understand, and manage by putting them all together in one place called My Account.
I want to underline that while I may simply be listing websites and URLs, the effort put into experimentation and user experience design to make them useful has been a decade-long investment and process of refinement.
We're not stopping there. We continue to innovate and to improve users' access to and control over their account data. For example, we are giving users unprecedented transparency through a site called My Activity, where they can see and manage the information used by Google services.
How are they reacting? The My Account site had 1.6 billion unique visitors in 2016, and importantly, given how we all use devices and access the Internet nowadays, more than 50% of that traffic came from mobile devices. Users have questions about their privacy and their security, and they're getting answers relatively easily on a device that is really quite small.
With a focus on data security and access control, reasonable user awareness and empowerment, and data portability, we—both Google and the industry writ large—can ensure both privacy and innovation. It's the misuse of data, not its collection, that should concern us most. Let's consider the application of machine learning and the use of algorithms.
These techniques are already deployed in many features that Google's users know and love, such as Google Translate, spell-checking, and spam filtering, and within products such as Gmail.
Those of you who use our email products may be familiar with something called Smart Reply, which is generated by machine learning and uses our latest neural nets to suggest short responses relevant to an incoming email, like “sure, I'll jump on that” or “that looks good to me”. People use it for 10% of all replies in our mobile mail products, so the next time you see one of those responses, you'll know it might not be entirely genuine.
Google Home, a stand-alone device that provides access to our services, is also screenless and voice-controlled. We had to think of a new way to deliver our privacy notice to users, so we designed a specific sign-up and consent flow for this product within the Google Home mobile app, and we make users aware that they can access their privacy controls through their Google account. You've had conversations around this sort of subject in your previous meetings, and it is truly a complex area.
At Google, we feel well positioned as we transition to a new era of computing in which people will experience computing more naturally and seamlessly in the context of their lives, powered by intelligent assistance and the cloud. This transition is as significant as the move over the last decade from desktops to mobile devices.
I'll just touch on two specific points that came up in your previous meetings, and we can follow up in the questions, if you like. You've heard from several witnesses about the challenges of maintaining children's privacy online. We are acutely aware that all our users need to understand the technology they use every day, and we invest in making information available to parents. Through tools like the Safety Center, Family Help Centers, and in-product notifications, we work to provide parents and families with the information they need to guide decisions about their children's use of technology and to make their own choices regarding their children's online activity. We have built features into our Family Link app, which at the moment is only available in the United States, and our YouTube Kids app to enable parents to decide what is right for their family. The goal is to give kids an experience, guided by their parents, in which they can build the skills to engage in smart and responsible online practices as they grow as individuals.
Finally, you've asked previous witnesses, and you've heard from Ms. Bourne-Tyson, about Europe's right to be forgotten.
Information-finding services like search engines are critical for sifting through the vast amount of information online. Many have likened the ruling by the Court of Justice of the European Union to removing cards from a library card catalogue but leaving the books on the shelf. However, on the Internet there are no shelves to browse, no way to walk through the stacks and follow the alphabet to the information you seek. Decisions to delist URLs can affect users' access to media properties, past decisions by public figures, and information about many other topics.
Of course, we at Google understand that there are instances where it's appropriate to remove content from search results, for example when it has been deemed illegal under local laws. Our products have well-established systems for users to flag content that violates our policies. Authorities may also submit requests to locally block content that is deemed illegal under local laws, including laws about privacy. We have worked hard to be a responsible actor. A crucial aspect of this responsibility, one that has already been mentioned today, is balancing privacy with other values, specifically the right to free expression.
While the CJEU may have established a right to be forgotten in Europe under European laws, it is important to note that freedom of expression is a broadly recognized, and passionately defended, right here in Canada and across the Americas. Any framework that has such significant implications for the freedom of expression must be accompanied by transparency, accountability, and recourse mechanisms. And any discussion of the possible application of a right to be forgotten in Canada should recognize and address the complex dialogue around this issue that continues to exist today in Europe.
Thank you for this time, and I look forward to your questions.