That's a great question.
The simple way to view this is that all of the diverse applications of AI should be subject to the same approach we take with every other powerful industry: it's the company's job to innovate and to demonstrate to independent, government-appointed experts that the benefits outweigh the harms.
I would start, rather politically, with child safety, because that is so politically salient and winnable right now. In America, about 95% of Republicans and Democrats agree that this has to happen. I call it the Bernie-to-Bannon coalition, and I think we're likely to see some legislation this year here in the U.S.
Once this precedent is set that we're going to treat AI like any other industry, we can add to the list of safety standards not only that these products must not greatly increase suicide risk in kids, but also national security requirements. For example, you can't sell things that can teach terrorists to make bioweapons. You can't release things that could be used to overthrow the government, as we heard from Professor Aguirre and Professor Krueger. It all flows naturally from this simple approach of just treating AI companies like other companies.
I want to add one more thing. If this business about loss of control sounds strange, it's a very obvious idea that goes back to Alan Turing in 1951: if you build a bunch of robots that are vastly smarter than all humans, then of course they can build robot factories and make new robots. This is very much what companies are trying to do now.
Also, because they can make more robots, they meet the definition of being a species. Go down to your nearest zoo and ask yourself who is in the cages right now. Which species is it? Is it the humans? No, it's not. Why not? It's because we are the smartest species on Earth. What we're basically saying is, let's keep it that way. Let's not let companies sell something—
