You make an excellent point. If people don't feel safe coming to Twitter, they won't use it, so it's in our best interests to do a better job on these issues.
Recently we've changed our approach. Twitter handles content moderation differently than other platforms. Previously, the process relied on people reporting content to us, with our human reviewers acting on those reports. We are now depending more on proprietary technology: 38% of accounts actioned are surfaced to us by that technology—flagged for our reviewers—whereas previously none were. We plan to make further investments in proprietary technology.