I think this is something that is not receiving sufficient attention, to be quite honest.
Satellite systems are increasingly dependent on AI for various functions. Some of it is simply data processing: Earth observation satellites are gathering enormous amounts of data, and AI is used to process some of that data. However, if that output is then feeding into military decision-making, we need to know that there is sufficient trust in those automated decision-making models.
A lot of space traffic management is also dependent on AI. Again, there are over 10,000 operational satellites and well over 130 million pieces of debris, some of which, as Dr. Byers said, is too small for us to even track. AI is increasingly used as a space traffic management tool to have satellites perform collision avoidance manoeuvres and to try to predict where collisions might occur.
All of this is really necessary because of the speeds at which objects move in space, the amount of data we're using and our dependence on satellites. However, I don't think there is sufficient understanding of the risks that brings, particularly if AI is going to feed into military decision-making: for targeting, for navigating one's own troops on land, at sea or in the air, for communications, and for understanding the movements of adversaries. AI is becoming more and more a part of that decision-making chain. When it's built into satellite systems, we don't have enough opportunities, I think, or enough proactive mechanisms for those working on the AI systems and those who are space experts to bring those two worlds together. There are high risks, I think.