Thank you for the question, and I'll gladly answer. I love that you stated that “ethics” and “Palantir” are not synonyms, because that is correct.
As I already stated, Palantir is a tech data analytics company, and that points to the problem with the way “AI” is defined by the federal government. The definition is very broad, and I think it's important for me to note in this meeting what it is. The Treasury Board defines “artificial intelligence” as “Information technology”—which is IT—“that performs tasks that would ordinarily require biological brain power to accomplish, such as making sense of spoken language, learning behaviours or solving problems.”
This is how Palantir managed to get on this list, which I will gladly share with you. The problem with Palantir is that it's actually very popular with governments all around the world, although it is getting some pushback right now from the EU, even while it is involved in the GAIA-X project.
They were largely created and funded by Peter Thiel and others, and there are many conflict-of-interest cases even within that governance.
The problem is that they're still there. Clearview AI is also still there, although Canada, through the Office of the Privacy Commissioner, has made a direct statement about having them out of the country, so to speak, though that's questionable. They're still scraping the web.
With Palantir, they really do data governance around the world. The reason they are dangerous is that even though everyone knows they're not ethical, and some people think they're cool, they're still hired by law enforcement and—