Thank you very much, sir, for that question.
Obviously, I think, just to roll back a bit, looking at what happened in the U.S. presidential election, we were clearly slow to react to this. We were slow to get on top of it. I want to assure you that we're now putting in all of our efforts to address this challenge head on.
I'm not familiar with the particular example you gave. But if I may, I can describe in general terms how we think about the challenge of misinformation. Upon study and research of this phenomenon, there are two things that we've identified. One is the sort of classic clickbait, low-quality content misinformation. These people may not have a particular political objective, but they're going to put stuff online; they're going to try to put stuff on Facebook. The intent is to have people click through to a site where they're publishing very low-quality, potentially fake information, and then to monetize that traffic.
A lot of this turns out to be financially motivated. What we're trying to do, using new technologies like machine learning, the artificial intelligence that we talked about earlier, is to identify this kind of behaviour and, through those signals, prevent these actors from using Facebook ads, effectively drying up the financial incentive to cause mischief.
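As an aside, the kind of text classification described here can be illustrated with a toy example. The sketch below is a minimal naive-Bayes-style scorer over headline text; it is purely illustrative, uses a made-up training set, and does not represent Facebook's actual detection systems, which rely on many behavioural and financial signals beyond headline wording.

```python
from collections import Counter
import math

def tokenize(text):
    # Crude whitespace tokenizer; real systems use far richer features.
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs, label in {"clickbait", "normal"}."""
    counts = {"clickbait": Counter(), "normal": Counter()}
    totals = {"clickbait": 0, "normal": 0}
    for text, label in examples:
        toks = tokenize(text)
        counts[label].update(toks)
        totals[label] += len(toks)
    vocab = set(counts["clickbait"]) | set(counts["normal"])
    return counts, totals, vocab

def score(model, text):
    # Log-odds of "clickbait" vs "normal" with add-one smoothing;
    # positive scores lean clickbait, negative lean normal.
    counts, totals, vocab = model
    v = len(vocab)
    log_odds = 0.0
    for tok in tokenize(text):
        p_cb = (counts["clickbait"][tok] + 1) / (totals["clickbait"] + v)
        p_n = (counts["normal"][tok] + 1) / (totals["normal"] + v)
        log_odds += math.log(p_cb / p_n)
    return log_odds

# Entirely fabricated toy training data, for illustration only.
examples = [
    ("you won't believe what happened next", "clickbait"),
    ("doctors hate this one weird trick", "clickbait"),
    ("shocking secret they don't want you to know", "clickbait"),
    ("city council approves new transit budget", "normal"),
    ("quarterly earnings report released today", "normal"),
    ("researchers publish study on sleep patterns", "normal"),
]
model = train(examples)
```

In a production setting, a score like this would be only one signal among many feeding a policy decision such as restricting access to the ads system.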