One of the concerns we had with the original iteration of this bill was that it would have mandated platforms to monitor essentially all content being uploaded. It no longer does that, but we're concerned that it doesn't stop them from doing it either; it doesn't block that. The reason is that if platforms were to monitor at that scale, they would by default have to rely almost entirely on algorithmic decision-making, and the bill doesn't address that. As we said, only transparency is included in the bill. It would almost by default result in overmoderation: platforms would have to lean towards taking content down and dealing with it later, rather than addressing it narrowly.
In some cases, such as child sexual abuse material, there are hashes and similar tools that can be used to identify specific, known pieces of content, which would avoid having to monitor all online content. What's missing from this bill is an obligation for platforms not to engage in that broader monitoring activity.
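[To illustrate the distinction being drawn here, the following is a minimal sketch, in Python, of hash matching against a list of known items; the list contents and function names are placeholders invented for illustration. Real systems such as Microsoft's PhotoDNA typically use perceptual hashes that tolerate minor edits, rather than the exact cryptographic hash shown here.]

```python
import hashlib

# Hypothetical list of hashes of known illegal content, as would be
# supplied by a trusted database (the entry below is a placeholder).
KNOWN_HASHES = {
    hashlib.sha256(b"placeholder known item").hexdigest(),
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True only if the upload exactly matches a listed hash.

    Nothing else about the upload is inspected or classified, which is
    the contrast with general monitoring of all content.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

if __name__ == "__main__":
    upload = b"example file contents"
    if matches_known_content(upload):
        print("Flagged: matches a known hash")
    else:
        print("No match; content is not otherwise scanned")
```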