Maybe the shortest-term concern, and one that was a priority for the experts consulted by the World Economic Forum just a few weeks ago, is disinformation. An example is the current use of AI deepfakes to imitate people, mimicking their voices, rendering their movements in video, and interacting with others through text and dialogue in ways that can fool a social media user and sway their views on political questions.
There's real concern about the use of AI in politically oriented ways that go against the principles of our democracy. That's a short-term thing.
The one I would say is next, perhaps a year or two away, is the threat of these advanced AI systems being used for cyber-attacks. In programming ability, these systems have been making rapid progress in recent years, and that progress is expected to continue even faster than in any other capability, because we can generate an essentially unlimited amount of training data for it, just as in playing the game of Go. When these systems become strong enough to defeat our current cyber-defences and compromise our industrial digital infrastructure, we are in trouble, especially if they fall into the wrong hands. We need to secure those systems. One of the things the Biden executive order insisted on is that these large systems be secured to minimize those risks.
Then there are other risks people talk about, such as AI helping bad actors develop new weapons or gain expertise they would not otherwise have. All of this calls for legislation as quickly as possible to make sure we minimize those risks.