I guess the difference with this one that we're looking at, and I'm going to go back to some earlier testimony, is that with this FRT (facial recognition technology) system, error rates of up to 35% have been identified when identifying, for instance, Black females versus white females.
When it comes to that identification, you stated in past testimony that you have a human who reviews that data. Is that still the case? And your testimony, which I'd just like you to confirm, was that the technology you're using was the least biased. Is that correct?