One of the problems is that these systems tend to misidentify not only racialized people but also non-binary people. There are cases of self-driving cars having trouble recognizing women. When these technologies start to affect a significant proportion of the population without some sort of accountability measure, we're looking at a very damaging fragmentation of society on an economic level, on a social level, and in ways that would fracture our politics. I think that can be minimized, to be honest.
One of the things I would like to see with AIDA is that it be its own bill. I personally think it should be spun off so that we can look at these things more clearly, because, as it stands right now, there is nothing to.... For example, if you go for a loan and an AI predicts that your loan should be rejected because of a variety of factors, or perhaps factors attributed to you because of race, gender, class, geographical location, religion or language, all those things.... If we're going to build these systems, we have to protect people from their negative impacts, especially when those impacts happen at scale and especially when they happen within government agencies.
I think one of the problems with this bill is that a lot of government agencies, especially in national security and law enforcement, will be exempt. Those are some of the areas—you think of immigration too—where you will see large uses of AI.
On education, I would say that a lot of it over time should have come from journalists and journalism. We should have had a more robust field of technology journalism that could inform all of us and look into these issues with AI and tech writ large.