Evidence of meeting #122 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was users.

A video is available from Parliament.

On the agenda

MPs speaking

Also speaking

Colin McKay  Head, Public Policy and Government Relations, Google Canada

12:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

You mentioned data portability in your comments in passing. We've heard that might be an answer to competition antitrust issues. We've heard that it's certainly a component of the GDPR.

Others coming before us have recently talked about not only portability but interoperability. Would you support a change of law here in Canada to require data portability and interoperability?

12:15 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

I mentioned the data transfer project in my opening remarks. The data transfer project is a solid attempt to start addressing that challenge.

We've long had both the data liberation front and then data takeout tools that allow our users to export their information so they can be used in a different service. The data transfer project is in concert with Facebook and Microsoft and Twitter and hopefully others. It's to enable you in effect to transfer your information from Google services to another service almost seamlessly.

12:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

You're familiar with the Honest Ads Act in the U.S.?

12:15 p.m.

Head, Public Policy and Government Relations, Google Canada

12:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

That's probably a pretty limited way of tackling the problem. We've heard other testimony here to suggest that we should go further and that there should be more than just a searchable database, which is now going to be required for elections in some modest fashion, but not for other advertising; we've heard recommendations for real-time ad disclosure about engagement metrics, the number of dollars spent, and the specific criteria by which individuals were targeted.

You have made a number of steps absent of the law requiring you to do so. Google has taken a number of precautions and changed, as you were talking about, third party applications.

Are you looking at providing a real time ad disclosure, or do we have to legislate?

12:15 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

Do you mean real time ad disclosure to the level of detail you have just described for political advertising?

12:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

It wouldn't be just political advertising—

12:15 p.m.

Head, Public Policy and Government Relations, Google Canada

12:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

It would absolutely be for political advertising, but perhaps for other advertising as well.

12:20 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

I think the process is iterative, and it's incredibly complex.

12:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

So perhaps legislation is needed.

This is a really tricky problem. When Dr. Ben Scott was before us, he recommended that big publishers.... Google is not categorized as a publisher today and they are not a broadcaster today, so they don't have content control obligations in the same way. We know algorithms have replaced editors to a significant degree.

We heard some testimony that, for example, on YouTube, Google has profited significantly from showing advertising based on Alex Jones and InfoWars videos that Google, in fact, has recommended. They have recommended these videos millions and millions of times.

If that's the case, is Google not in some way then responsible for the misinformation being spread, and shouldn't there be some level of disgorgement then? If you're making money off that misinformation, should it not be disgorged?

12:20 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

You have pretty clear policies and guidelines in place that we apply to content that's available on YouTube and elsewhere, which we've applied stringently, and we're continuing to tighten the enforcement.

12:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Who flagged that problem for you? YouTube recommended Alex Jones' videos millions of times. Did you finally say, “Oh, we should stop doing this?” Did someone bring it to your attention?

12:20 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

It's a multisided process. We have automated review. We also have human review. Then we have YouTube users who flag specific content.

12:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

You can see, though, that if a company isn't proactively preventing this from happening in the first place, lawmakers like ourselves think we have to fix the problem if the companies are not going to act in the public interest themselves.

12:20 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

You mentioned a specific person and a specific site. In that instance and in many other instances, there are specific pieces of content that could be found objectionable, yet not violate our policy guidelines or not violate law within a jurisdiction, or there could be. We're trying to balance that law and intervene where necessary, but it's not a yes/no binary option around whether someone has access to our services.

12:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

I understand.

This is my last question.

Maybe we need a better process whereby Google's not policing content and not being asked to police content. If Google has profited from hateful content, and you have the engagement metrics to know how many eyeballs have seen it and you have the internal measurement to know how much you have profited through advertising for that content, why should you be allowed to profit from that content? Once it's deemed to be hateful and we know the engagement metrics, pay the money back. Disgorge.

12:20 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

Ideally the content comes down very quickly, and in fact what happens as soon as we enter into that review process is we stop any revenue generation based on that content. It's not a question of a review process taking too long and the content continuing to accrue revenue on the content; that process stops. Sometimes we actually eliminate compensation for content producers altogether.

12:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

If any companies profit over a certain level, there should be some level of penalty, perhaps.

Thanks for listening.

12:20 p.m.

Conservative

The Chair Conservative Bob Zimmer

No worries. You will have time for more questions coming up.

Next up, for five minutes, is Mr. Clement.

12:20 p.m.

Conservative

Tony Clement Conservative Parry Sound—Muskoka, ON

Thank you.

Colin, it's good to see you. I'm a little bit of a visitor to this committee, but since I'm here and you're here, I thought we'd proceed on that basis.

You may have covered this at your previous encounter, but I wanted to go through the hacking incidents that have occurred, and particularly what we saw on the U.S. presidential campaign. Maybe you've covered this ground.

I don't even know whether it was Gmail or some other service, but my knowledge of the hack on the Democratic national campaign, the Hillary Clinton campaign, is that it was a kind of sad story.

They had rules and so on, and just one human error—and unless we're all taken over by robots tomorrow, human error is going to continue—created the opportunity for the Russians to gain access to every single email of Hillary Clinton's national campaign director. That was on the basis of a Bitly. In a case like that, what these hackers like to do, if they're phishing or spear phishing, is give you a sense of urgency. If you don't do something right away, if you don't click right away, your credit card is going to be compromised or access to your bank account will be compromised or what have you.

There was a Bitly attached to the email that went to John Podesta. He rationally flipped it to his director of IT in the Hillary Clinton campaign, asking if it was for real, if it was legitimate, which was the right thing to do.

The director of IT in the campaign figured out that this was wrong, it was suspect, and flipped the email back—but he forgot a word. He said, “This is legit” rather than saying, “This is not legit.” Then as soon as Podesta saw that, he clicked on the Bitly, and the rest is part of the history books now.

I'm just asking a question. Based on that, we know there is human error. You can have all the systems in the world, but human error does take place. How do we...? Maybe it's a combination of education and better systems, and maybe there's AI involved. I wanted your take on this, because we're all coming up to an election campaign and we're all susceptible to hacks. I'd be very surprised if there were no hacking attempts in the next federal election campaign in Canada.

Let me get your side of this issue.

12:25 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

This is certainly something that we recognize because of the billions of users we have, particularly starting in Gmail. We've attacked this problem in multiple ways over the years.

To start, in 2010, we were notifying Gmail users that we were seeing attempts to access their account, attempts to try to crack their account by force or to send them spoof emails that would force them to make a decision much like yours. We built on those notifications security protections that now give you a notification if we're seeing an attempt to access your account from an unusual area or an unusual geography, so that if someone outside your normal space or even you while travelling log in to your account from elsewhere, you'll get a notification either on your account or on your phone if you've enabled two-factor authentication. We've forced the implementation of two-factor authentication across most of our products so that someone can't just hack into your account by virtue of having the account name and the password. You now need a physical token of some kind.

However, we also recognize that you can force your way into a system through brute force. Jigsaw, which is an Alphabet company, has developed a service called Shield, which is available to non-profits, political parties, and others, to help them counter denial-of-service attacks, where there is that brute force attempt to cause a security system to fail.

As well, earlier this year we put in advanced privacy protection, particularly for elected officials, so they could put in place security controls that we have within the company that not only require two-factor authentication but also place specific restrictions on unusual log-in attempts and attempts to access information within your Google account services. You are forced to provide additional verification. It's an inconvenience for the user, but it also provides more surety of mind that you have the security protection that allows you to identify those sorts of flagrant attempts.

For the general user, I mentioned Safe Browsing in my remarks. Safe Browsing is developed specifically for that concern. When people have clicked on a link and they use the Chrome browser to go to a page, we can see if that move to a page causes unusual behaviour, such as immediately hitting the back button, trying to leave that page, or shutting down their browser altogether. Over billions and billions of interactions, we can recognize the pages that are generating suspicious or harmful content and are causing our users to behave that way. We then generate an API that we share with Microsoft and Firefox that allows them to flag those URLs as well, so that when you actually click on a link to those pages, you get a red warning box that tells you that it's a security threat and most times will not let you advance to the page. Therefore, we're taking insights from behaviour to try to eliminate the concern as well.

12:25 p.m.

Conservative

Tony Clement Conservative Parry Sound—Muskoka, ON

Are you using AI, then, more and more for security purposes?

12:25 p.m.

Head, Public Policy and Government Relations, Google Canada

Colin McKay

We have that integrated into the systems around our search. Certainly we're using machine learning in our analysis of the behaviours and analysis of the content as well.

12:25 p.m.

Conservative

Tony Clement Conservative Parry Sound—Muskoka, ON

Thank you.