There are many ways this could materialize into harm for some groups. For example, predictive policing is one way artificial intelligence is used to predict criminal activity, but the training data used is historical. If you're using historical or certain types of data to train the AI system, you're going to get a compounding effect whereby neighbourhoods that are already overpoliced become even more policed.
Another way it comes about is in hiring. Hiring agencies have used AI to search for candidates for executive positions. Unfortunately, a lot of that data is also historical, which means there's a bias against women, because traditionally women haven't held those positions.
These are very real consequences at scale, and I think the scale and the speed at which this could happen are very concerning. I believe the Edmonton police recently used a system that uses DNA to predict the facial features of a suspect in a sexual assault, and what it came up with was a 14-year-old Black boy. That's the other thing: this adultification of Black boys is another way AI manipulates what we see and whom we consider victims and perpetrators.
I think the problem has a lot to do with the training data, but also with the systems themselves. I'm not sure the right questions have been asked or the right assumptions made in creating the model itself.