Yes, I mentioned urgency many times in my presentation because you have to understand AI not as a static thing, where we are now, but as a trajectory driven by research and development, mostly in large companies but also in academia. As these systems become smarter and more powerful, their capabilities are dual use, which means more good and more harm can happen. The harm is what we need government to protect us from.
In particular, going back to the question from Mr. Perkins, we need to make sure that one of the principles is that major harm, such as a national security threat, cannot easily arise from products that are legal and within the law. This is why the high-impact category, and the different ways it could be spelled out, is so important.