Again, I think we can push back on tech inevitability, and we can say no to some of this technology, but that also requires funding and resources for education around these technologies. A lot of these contracts are made behind closed doors. In industry-government relationships, the public-private partnerships sometimes involve universities and labs, but the focus is always on private interests. The goal is to fund these technologies, build them, and then use them, without thinking about the consequences. Very little money, time, or resources go into dealing with the mess and the harm these technologies create.
We need to make sure there's a balance there, and we need to reconsider what we mean by innovation when we fund it, especially as taxpayers. We need to really branch out. Right now, I would say that innovation work has been captured by tech innovations specifically designed to develop and deploy these technologies first and ask questions later. We can see how much harm they have caused, and yet here we are, still debating this.
I don't want us to have another Clearview AI case, so what do we do? Transparency around free-trial software is really important, because that goes beyond FRTs. It extends to all the AI systems and technologies that the government uses. Nobody sees that information anywhere. We need to get that information out there, especially for law enforcement and national security, which will otherwise use those excuses to say they're protecting trade secrets....
We need to go beyond that. Again, if we want to build trust in government, we need that level of transparency, so we know what agencies are even procuring and using, and so we can ask better questions.