I think the automated drones, or what are termed LAWS—lethal autonomous weapons systems—are definitely areas where further focus is required. I would also say that what's been mentioned here about the spread or proliferation of surveillance and AI technologies that can be misused by authoritarian governments is another area where there is an urgent need to look more closely.
Then, of course, you have whole sectors that have been mentioned by this committee already—media, hate-speech-related issues and issues related to elections. I think we have a considerable number of automated technical systems changing the way the battleground works and how existing debates take place.
There's a real need to take a step back, as was mentioned and discussed before, in the context of AI potentially being able to solve or fix hate speech. I don't think we should expect that any automated system will be able to correctly identify content in a way that would prevent hate speech, or that would deal with these issues and create a healthy climate for debate. Instead, I think we need a broad set of tools. It's precisely about not relying on just humans or on fully automated technical solutions, but instead developing a wide toolkit of approaches that design and create spaces of debate that we can be proud of, rather than getting stuck in a situation where we say, “Ah, we have this fancy AI system that will fix it for you.”