Okay. I didn't have that information.
On the cannabis example, it's a relatively fast-moving space, and we're trying to adjust our internal systems to differentiate between the promotion and use of cannabis in an illegal context, especially in the international arena, and what's happening in Canada, where it's now legal. With our advertising systems, as you might imagine, we're learning internally, through our algorithms, through manual review, and through flagging by our users, which content is now allowed and which isn't.
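To make that jurisdiction-dependent review concrete, here is a minimal sketch of how an advertising system might route a cannabis ad by the advertiser's target market. The policy table, country codes, and function names are illustrative assumptions, not Google's actual advertising rules.

```python
# A minimal sketch, assuming a per-jurisdiction policy table.
# Policy values and the manual-review routing are hypothetical.

CANNABIS_AD_POLICY = {
    "CA": "allowed_with_restrictions",  # recreational cannabis legal in Canada since 2018
    "US": "disallowed",                 # illegal at the US federal level
}

def review_cannabis_ad(target_country: str) -> str:
    """Route a cannabis ad based on the jurisdiction it targets."""
    policy = CANNABIS_AD_POLICY.get(target_country, "disallowed")  # default: block
    if policy == "allowed_with_restrictions":
        # A human reviewer checks that the ad meets the local restrictions.
        return "send_to_manual_review"
    return "reject"

print(review_cannabis_ad("CA"))  # send_to_manual_review
print(review_cannabis_ad("US"))  # reject
```

The default-deny lookup reflects the testimony's point: in any market where legality is unclear, the system errs toward blocking and relies on manual review where promotion may be permitted.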
It's an iterative process that combines technology and human intervention, and that's especially important in sensitive areas. In the case of violent or extremist content, once we've had an opportunity to recognize that a video or audio clip is objectionable and illegal, we can use that same content ID flagging system to identify it as soon as it's uploaded and make it unavailable, and in some instances we shut down the account.
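As a rough illustration of that upload-time screening, here is a minimal sketch of a fingerprint-based flagging system, not YouTube's actual Content ID implementation. The byte-hash fingerprint, strike threshold, and class names are all assumptions for the example; a real system would use perceptual audio/video fingerprints that also match re-encoded copies.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class FlaggingSystem:
    known_bad: set = field(default_factory=set)   # fingerprints of reviewed, flagged clips
    strikes: dict = field(default_factory=dict)   # account -> number of matched uploads

    @staticmethod
    def fingerprint(media_bytes: bytes) -> str:
        # Stand-in for a perceptual fingerprint: a byte hash only catches
        # exact copies, whereas production systems match altered variants too.
        return hashlib.sha256(media_bytes).hexdigest()

    def flag(self, media_bytes: bytes) -> None:
        """Record a clip once human review finds it objectionable and illegal."""
        self.known_bad.add(self.fingerprint(media_bytes))

    def screen_upload(self, account: str, media_bytes: bytes) -> str:
        """Check a new upload at ingest, before it becomes publicly available."""
        if self.fingerprint(media_bytes) in self.known_bad:
            self.strikes[account] = self.strikes.get(account, 0) + 1
            if self.strikes[account] >= 3:          # threshold is illustrative
                return "account_terminated"
            return "made_unavailable"
        return "published"

system = FlaggingSystem()
system.flag(b"...bytes of a reviewed extremist clip...")
print(system.screen_upload("uploader42", b"...bytes of a reviewed extremist clip..."))
# -> made_unavailable
```

The key property the testimony describes is the feedback loop: a single human decision about one clip (`flag`) is enough for the automated check (`screen_upload`) to block every subsequent re-upload of the same material.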