Thank you very much for that question. I'm just trying to compile all my thoughts, because there are many issues that could fall under the umbrella you set out.
The first point I would make is that you're absolutely right in tracing that line. That's something we heard from a lot of the racial justice activists we talked to in the research for our report. For them, this is just 21st-century state violence. It used to be done with pen and paper, and now it's done with computers and algorithms.
We're trying to move away from the term “predictive policing” just because, by this point, it's more of a marketing term and suggests a lot more certainty than the technology can really promise, but we still use it because it's been popularized and it's what people know. One way to highlight the racial justice history behind it is to ask whether this would still be a problem if the technology worked perfectly. Our answer would be to look at what it's being used for. It's used for break and enters and so-called street crime and property crime. You will only ever catch a particular type of person if you're looking at a particular type of crime.
There's this great satirical project in New York that makes a very compelling point. They published something they called the “white collar” crime heat map, which is essentially a crime heat map that focuses only on the financial district of downtown Manhattan. So why are there not venture capitalists rushing to fund a start-up to create that type of predictive policing? It's because even if it worked perfectly, it would still only enure to the benefit and detriment of particular social groups that fall along historical lines of systemic oppression.
The second point is that I'm really happy you brought up the “zooming out” contextualization of these technologies, because I believe in the next panel you will be talking to Professor Kristen Thomasen, who is a colleague of mine. I would highly encourage you to pay attention to her comments, because she primarily focuses on situating these technologies in the broader context of their being a socio-technical system, and on how you can't look at them divorced from the history that they're in. Even in Brazil, there was a rising strand of work within the algorithmic accountability field that looked at the idea of critical algorithmic accountability or critical AI. They looked at what it would look like to decolonize artificial intelligence studies, for example, or to centre these historically marginalized groups even among the data scientists and the people who are working on these issues themselves.
I think I had one or two other thoughts, but maybe I'll stop there for now.