Yes. This was a paper that used a really interesting data set of misinformation narratives that we had identified and reported to the platforms. We then assessed whether or not they took action against the content we reported, which researchers had checked to confirm it actually was a misinformation narrative. Some of the differences in enforcement came down to really simple technical issues. For example, if we reported a narrative on a certain date, content published before that date wasn't necessarily labeled retroactively, but content posted going forward was. We also noticed differences across the different kinds of media. If content was screen-grabbed, cropped, or edited slightly, the automated detection tools didn't always do a great job of recognizing it as the same misinformation narrative. You have to remember that a lot of this kind of content takedown is handled by automated systems.
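To make that detection point concrete, here is a minimal sketch, in Python, of why near-duplicate matching can miss screen grabs and crops. It uses a simple perceptual "average hash": an untouched repost hashes identically to the original, but editing the image flips bits, so an exact-match rule misses altered copies unless a distance tolerance is allowed. Everything here is an illustrative assumption (the synthetic image, the 8x8 hash, the 10-bit threshold); it is not the tooling any platform actually runs.

```python
from PIL import Image

def average_hash(img, size=8):
    """64-bit average hash: grayscale, shrink to size x size, threshold at the mean."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of bits on which two hashes disagree."""
    return bin(a ^ b).count("1")

# Synthetic stand-in for a flagged post image (an assumption, just to keep
# the sketch self-contained; any image file would do).
original = Image.new("RGB", (400, 300))
for x in range(400):
    for y in range(300):
        original.putpixel((x, y), ((x * 7) % 256, (y * 5) % 256, (x + y) % 256))

reference = average_hash(original)

# A byte-identical repost hashes the same; a crop (like a screen grab that
# loses the margins) shifts the downsampled grid, so some bits flip.
repost = original.copy()
cropped = original.crop((30, 20, 370, 280)).resize((400, 300), Image.LANCZOS)

print("repost distance: ", hamming(reference, average_hash(repost)))   # 0
print("cropped distance:", hamming(reference, average_hash(cropped)))  # typically > 0

# An exact-match rule (distance == 0) catches only the untouched repost.
# Catching edited copies needs a tolerance; 10 bits here is an illustrative
# assumption, not a real platform setting.
THRESHOLD = 10
for name, img in [("repost", repost), ("cropped", cropped)]:
    d = hamming(reference, average_hash(img))
    print(f"{name}: exact-match {'hit' if d == 0 else 'miss'}, "
          f"thresholded {'hit' if d <= THRESHOLD else 'miss'}")
```

The tradeoff is the familiar one: a looser threshold catches more edited copies but risks flagging unrelated images, which is one reason automated enforcement of slightly altered media is hard to get right.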
So there were a couple of problems there, but for the most part, we saw action taken on about 70% of the reported content. A lot of the decisions not to enforce were relatively arbitrary, stemming from these small technical problems or weaknesses in the automated systems that I think could easily be fixed.