I have worked in tech for the last 10 years. I was at Google and Facebook before, always in this field, and I have always been very skeptical about just using algorithms. Algorithms won't necessarily lead to more abuse or violence on the platforms, but if you rely only on algorithms to provide support to users, you can cause a lot of collateral damage. You may have certain accounts and certain activities flagged by the algorithm that are not abusive and that you need to review manually.
I'll give you a perfect example. We started seeing abuse on hashtags on Twitter (a hashtag is a mechanism for having a conversation on the platform around a specific topic), and an example would be #stopIslam. We immediately thought there must be hate speech within this hashtag. When we started looking at the data (by the way, the Dangerous Speech Project helped us, and The Washington Post did a great article on this), we found that the majority of the tweets were actually positive tweets. It was people saying, “This hashtag is atrocious. You should never say this.” Or take the word “bitch”. When we started automating our processes, we were looking at the word “bitch”, pardon my not-French, and we realized there is a whole demographic that uses it as a way to say hi. Most of our systems nearly collapsed because we were looking at all this content that was not abusive.
What we have to think about in government and in these companies is whether these measures are proportional. If you were to rely just on algorithms, would it be proportional to be looking at people's accounts without there being any reports or any abusive activity? That's why I would always advocate for algorithms plus manual review in order to automate that support.