This reminds me of the argument that the answer to gun violence is more guns on the streets. There's a certain logic to the idea that you could control misbehaving algorithms with policing algorithms, but to me the real answer isn't more technology to patch the holes in the existing system; the real answer is oversight and transparency. We need to better understand how the algorithms are working, and we need to understand where the vulnerabilities for weaponization lie.
In markets that have grown large and powerful and have a strong impact on the public interest, we have a long history of auditing businesses, whether through health and safety rules for the restaurant sector or third-party review for pharmaceuticals. We do it not to prove that they're misbehaving intentionally, but to ensure that the products they bring to market don't have unintended consequences. I think ultimately we're heading toward a similar system of oversight and review for algorithms that can be weaponized, to ensure they don't have strong negative effects.