Yes, that's the thing: it's a kind of tragedy of the commons problem. When you and I make individual decisions in individual situations, we think we're fine because we've agreed to what we've done, but the implications of what we've done, our choices, can become part of what aggregates. It's that algorithmic sort of aggregation.
I'll give you an example from Latanya Sweeney's research. She did research in the United States showing that searches for black-sounding names were more likely than searches for white-sounding names to be accompanied by pop-up advertising for services that let you get a criminal record check. The advertising itself reflected embedded prejudice.
Then the question became, how did that happen? The search engine said, “Well, it's not us; we didn't program in a prejudice.” They said it must be that the algorithm was reflecting societal prejudice: in the databases being searched, more people were likely searching for a criminal record check on a black-sounding name than on a white-sounding name, so they put it back as a reflection of consumers.
Part of the answer is that we won't necessarily know, but it's a powerful indicator of how it can happen, whether or not.... The algorithm curates our aggregate bias and our aggregate discrimination and feeds it back to us in ways that obviously have disparate impacts on members of marginalized communities, impacts that are not felt by members of the majority. It's complicated.
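To make that feedback mechanism concrete, here is a minimal sketch of a click-through-optimizing ad server. Everything in it is an illustrative assumption: the group labels, the click rates, and the update rule are invented for the example and are not taken from Sweeney's study or from any real ad platform. The point is only that no prejudiced rule appears anywhere in the code; the disparity emerges from optimizing on the aggregate behaviour the system observes.

```python
import random

# Hypothetical sketch: an ad server that adjusts how often it shows a
# "criminal record check" ad to two groups, based only on observed clicks.
# All names and numbers below are assumptions for illustration.

random.seed(0)

GROUPS = ["group_A", "group_B"]           # stand-ins for two name categories
CLICK_BIAS = {"group_A": 0.10,             # assumed baseline click rate
              "group_B": 0.14}             # assumed higher click rate, standing in
                                           # for users' aggregate prejudice

# The server starts with no preference: equal chance of showing the ad.
show_prob = {g: 0.5 for g in GROUPS}

def serve_and_learn(rounds=50_000, lr=0.0005):
    """Serve ads, observe clicks, and nudge the show probability up on a
    click and slightly down on a non-click -- a crude CTR-maximizing update."""
    shown = {g: 0 for g in GROUPS}
    clicked = {g: 0 for g in GROUPS}
    for _ in range(rounds):
        for g in GROUPS:
            if random.random() < show_prob[g]:
                shown[g] += 1
                if random.random() < CLICK_BIAS[g]:
                    clicked[g] += 1
                    show_prob[g] = min(1.0, show_prob[g] + lr)
                else:
                    show_prob[g] = max(0.0, show_prob[g] - lr * 0.12)
    return shown, clicked

shown, clicked = serve_and_learn()
for g in GROUPS:
    ctr = clicked[g] / shown[g] if shown[g] else 0.0
    print(f"{g}: shown {shown[g]} times, CTR {ctr:.3f}, "
          f"final show probability {show_prob[g]:.2f}")
```

Run it and the group with the marginally higher aggregate click rate ends up shown the ad far more often, even though the code never encodes a preference for either group; it only curates the bias it is fed and feeds it back.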