I'd like to thank you for asking me this question, because I don't think we look far enough into the future.
We have to understand that human intelligence is not some absolute ceiling. It's almost certain that we'll be able to build machines that surpass us in many areas. We can't know for sure whether that will happen in a few years or a few decades, but we need to be prepared for it.
What I find perhaps most worrying is that whoever controls these systems will hold immense power, whether states, companies, or others. I raise this because today we're already concerned with protecting democracy. We're going to have to put safeguards in place so that too much power isn't concentrated in one place, whether in the hands of a single person, a company executive, some other organization, or even a government. The more capable these systems become, the more important the question of governance will be.
It's a bit like creating entities, or a new species, whose intelligence might surpass our own. That is a very dangerous thing. We need to keep control over this to ensure that artificial intelligence remains a tool, and not something that competes with humans. That may sound far off in the future, but people at companies like OpenAI and Anthropic think it could happen in as little as five years. So we need to start worrying about it today.