Conversation AI is still being created and is not a finished product, but here's an example of this issue around removing content that I've been able to witness firsthand. When we have child sexual abuse imagery—and we scrub for that on all our platforms—we then reach out to NCMEC, the National Center for Missing & Exploited Children, and we red-flag that content with them. If it is confirmed to be child sexual abuse imagery, that content is removed.
On December 7th, 2016.