Thank you for your question, sir.
Deplatforming works. There's plenty of evidence that deplatforming limits the spread of terrorist content on platforms, but it's not enough on its own. To effectively prevent terrorist abuse of online platforms, we need to accept two things. First, there will always be some content that falls into a grey zone and is not subject to removal, and these groups walk that line very carefully.
Second, there will always be some spaces on tech platforms that are not subject to moderation. I've mentioned "search" a few times now, and that's a great example here. Search engines don't prevent you from entering anything you'd like into the search box. That search box is a great moment to intervene with someone who is actively searching for terrorist content.
For these kinds of cases, in addition to moderation efforts, we need to be thinking about how we deliver safer alternatives to users who might be at risk of getting involved in violence. You can delete the account or the video, but that person still exists in the community around us.
Thank you.