We have to worry about technology that already exists and can be used to create deepfakes of various kinds, imitating people: their voices, their visual appearance, and their movements. And I think we need to start preparing for tools on the horizon that could arrive within six months or so.
Again, AI is not a static thing. It keeps getting better as researchers and companies come up with new ways of training these systems that make them more competent.
I'm going to go into a slightly technical point here, which is that once one of these very large systems, which can cost over $100 million to train, has been built, it is fairly cheap to take it (especially if it is open source) and do a little more work to make it really good at one particular task. This is called fine-tuning.
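To give a concrete picture, here is a minimal sketch of what fine-tuning can look like in practice, assuming the Hugging Face transformers, peft, and datasets libraries; the model name, dataset file, and hyperparameters are illustrative placeholders, not any specific real-world setup.

```python
# Minimal fine-tuning sketch using LoRA adapters, assuming the Hugging Face
# "transformers", "peft", and "datasets" libraries. The base model name,
# the dataset file "task_examples.json", and all hyperparameters are
# hypothetical placeholders for illustration.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"  # a large open-weight base model (placeholder)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a small set of adapter weights on top of the frozen base model,
# which is why adapting a very expensive pretrained system to one narrow task
# is comparatively cheap.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"]))
model.print_trainable_parameters()  # typically well under 1% of all weights

# A small task-specific dataset is all that is needed (hypothetical file).
data = load_dataset("json", data_files="task_examples.json", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    # For causal LM training, this collator derives labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point of the sketch is the scale asymmetry: the base model took enormous resources to train, but the adapter step runs on a handful of GPUs with a small dataset.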
You could imagine, for example, that the Russians might take Facebook's LLaMA, deploy it on social media to interact with people and see how well it works, and then use that interaction data to make the system even better at convincing people to change their political opinions on some subject.
As I said earlier, there are already studies showing that GPT-4, as it stands, is slightly better than humans at changing people's opinions, especially when it has access to your Facebook page. And things could get a lot worse without any new scientific breakthrough, just with a bit of engineering of the kind such actors could easily do.
What that would mean is that they could unleash bots talking to potentially millions of people at the same time, trying to change their opinions. It's a kind of technology we haven't seen before, or maybe it's already happening and we're not aware of it. It could be a game-changer for elections, in a bad way.