I think right now we live in a situation where these decisions are overwhelmingly made by private companies, and almost none of them are made by democratically elected governments. That is a problem for citizens, for rights, and for governance. It poses a considerable challenge, but that doesn't mean it's impossible to address. Whether it's trade in these technologies and where you choose to export them; the development of the technology, and which applications you focus on; or research and research funding, what you prioritize and what you ensure gets developed, I do think there is an opportunity for moral leadership, which I think is the right word here.
But to be perfectly blunt, there aren't many countries in the world seriously trying to develop artificial intelligence in a way that is positive for their citizens and grounded in human rights. Many are discussing it and trying, but a lot of the time they're saying, “Ah, but we're not quite sure. Would it hurt economic development? We're not quite sure whether some of our companies might run into mild issues here or there.”
I think there is a need to be willing, and to have the strength, to take that stand. It's also important because if no countries in the world are willing to do that, we're in a very difficult spot. The European General Data Protection Regulation (GDPR) offers a perspective on what can be done about data. But artificial intelligence and algorithms raise a whole new set of issues and challenges, where I think further leadership will be required to reach a footing grounded in human rights, one that benefits all citizens.