It's hard to know. Sometimes I stare at the screen, and I'm not really sure who should go first or who should go second.
I will address the substance of this challenge of addressing misinformation online in a moment, but I think it is incumbent on all of us to be very wary—and I'm sure that's not what you intend, sir—of what others may interpret as potentially some form of censorship of what people can say. That's something we're very mindful of. We have taken an approach to misinformation that's a little bit different. I'm not sure that we want to be watching over our users—and I don't think users would want that—saying that we authorize them to say this and we don't authorize them to say something else.
What we do is work to reduce the spread of misinformation on Facebook. We do this in three ways, which I think are important to understanding what we've learned from the past few years.
The first thing, as it turns out, is that the majority of fake pages and accounts that are set up are actually motivated by economic incentives. These are people who create a website somewhere with very little content—probably very poor, low-quality, spammy content. They plaster the site with ads and then share links on platforms like Twitter, Facebook, and any other social media platform. It's clickbait, designed to get you to see a very salacious headline and click through. As soon as you click through to the website, they monetize.
We've done a number of things to ensure that they can no longer do that. First, we are using artificial intelligence to identify sites that are actually of this nature, and we downrank them or prevent certain content from being shared as spam.
We are also ensuring that you can't spoof domains on our platform. If you are pretending to be a legitimate website, with a domain very close to The Globe and Mail or The New York Times, that is no longer possible thanks to our technical measures. We are also ensuring, from a technical standpoint, that you can no longer use Facebook ads to monetize your website.
The second thing we're doing addresses the fake accounts that are set up to sow division, as you say, or to be mischievous in nature, and that are not financially motivated. We are using artificial intelligence to identify patterns in these fake accounts and then take them down. As I said earlier, in Q1 we disabled about 583 million fake accounts. In the lead-up to the French and German elections, we took down tens of thousands of accounts that we proactively detected as fake.
Then, of course, the last thing I should stress, which is very important here, is that we are putting in tremendous resources and are already implementing all these measures directly on the platform. At the end of the day, the final and ultimate backstop is to ensure that when people come across certain content online, whether on Facebook or anywhere else, they have the critical digital literacy skills to understand that it may not be authentic or high-quality information. That's where the partnerships we have, such as with MediaSmarts on digital news literacy, are hoping to make a difference. I think public awareness campaigns are actually quite important. That would be the final element of what we're trying to do.