Certainly. I've heard witnesses talk over and over again about scale, but not about violence at scale. That's what we see in how AI is being used in the military. We have to go back to something I spoke about when Parliament did a study on facial recognition technology: companies that are defence contractors have now spun themselves up as AI and data analytics firms. A famous one is Palantir. You may know of them.
Palantir is interesting, because it started in defence, but now it's everywhere. The NHS in the U.K. just gave the company a contract worth millions of dollars, despite enormous opposition to it. Palantir promised that the U.K. government would remain in charge of people's data, but in the end that is not the case. We have past examples of Palantir's involvement in human rights abuses, so let's bring that into context. For example, an Amnesty U.S.A. study showed how, in the U.S., the government planned mass arrests of nearly 700 people and “the separation of children from their parents...causing irreparable harm”.
I'll go back to the military. What does this mean? The military is the biggest funder of AI, and what we see is rapidly escalating killing at scale. When we race forward with building more AI, making it faster and writing faster regulation just so we can justify to ourselves that we use it, we are not thinking about what should be banned and what should be decommissioned—