Thank you very much for the opportunity to speak here. I really appreciate the standing committee dealing with these issues. My name is Ben Wagner. I'm with the Privacy and Sustainable Computing Lab in Vienna.
We've been working closely on these issues for some time, specifically trying to understand how to safeguard human rights in a world where artificial intelligence and algorithms are becoming extremely common. This has included helping prepare Global Affairs Canada for the G7 last year. It was a great pleasure to work with colleagues there like Tara Denham, Jennifer Jeppsson and Marketa Geislerova.
The results produced there are, I think, quite relevant for this committee as well. You have the Charlevoix common vision for the future of artificial intelligence. Related to that, last year we were also working—this is now in a Council of Europe context—on a study on the human rights dimensions of algorithms, which I also think would be extremely helpful, especially if you're discussing studies and common challenges. Many of the common challenges you're discussing are already mentioned in these G7 documents and in the statements developed by the Council of Europe.
To come back to a more general understanding of why this is important, artificial intelligence or AI is frequently thought of as some unusual or new thing. I think it's important to acknowledge that this is not a new and unusual technology. Artificial intelligence is here right now and is present in many existing applications that are being used.
It's increasingly permeating life-worlds, and it will soon be difficult to live in the modern world without having AI touch your life on a daily basis. Its deep embedding in societies of course poses considerable challenges, but also opportunities. When we look specifically at the ethical and regulatory dimensions, as I believe this committee is doing, it's extremely important to ensure that all citizens have access to the opportunities of these technologies and that those opportunities are not limited to just a select few.
With regard to how that can be done, there is a variety of challenges and issues. One of the most common is whether we talk about an ethical framework or a more regulatory governance framework. I think it's important that they not be played off against each other. Ethical frameworks have their place. They're extremely important and extremely valuable, but of course they can't override or prevent governance frameworks from functioning. Indeed, it would be difficult if they could. But if they function in parallel in a useful and sustainable manner, that can be quite effective.
The same is true even if you take a more governance-oriented, human rights-based framework. It's very common in these contexts for different human rights to be played off against each other: the right to freedom of expression is seen as more important than the right to privacy; the right to privacy is seen as more important than the right to free assembly, and so on. It's very important that in developing standards and frameworks in this context, we always consider all human rights, and that human rights be the basic foundation for how we think about algorithms and artificial intelligence.
If you look at the Charlevoix documents that were developed last summer, you'll also note a considerable focus on human-centric artificial intelligence. While that's an extremely important design component, I think it's also important to acknowledge that human-centric focuses alone are not enough. At the same time, while we're seeing an increasing number of automated systems, lots of actors who are developing automated systems are not willing to admit how they're actually developing them or what exact elements are part of these systems.
It's often joked that some of the most frequently used examples in the start-up business plans of artificial intelligence are closer to Mechanical Turk—that is to say human labour—than to actual advanced artificial intelligence systems. This human labour often gets lost on the way or fails to be acknowledged.
This is also relevant in the context of extra-legal frameworks that are frequently applied when we talk about ethical frameworks, that is, frameworks that don't govern in the way that the rule of law can. I think we need to be extremely careful there about the extent to which frameworks like this actually come to replace or override the rule of law. That concern applies specifically to many of the conversations we're seeing right now. I'm sure you will have heard about Google's AI board, which was recently created and then shut down within the space of just a week or two.
You'll notice, on the one hand, a great push by some actors to try to be more ethical. But an ethical framework alone is not enough, and the actors realize this, given the heavy criticism you see. Again, this isn't to say that ethics isn't important or necessary, but that ethics needs to be done right if it's going to have a meaningful impact. That means there's a strong role for the public sector as well. We can't allow ethics washing. We can't allow ethics shopping. We can't allow a lowering of the bar for the standards that we already have.
As I'm sure you are aware, the existing standards in many areas of public governance—when we're talking about existing norms related to how we govern technology and how we govern the activities of corporations, if you look at the business and human rights framework of the United Nations, for example—are already relatively weak. In some areas, there's a danger that these ethical principles will even go below existing business and human rights standards.
At the same time, to strike a more positive note, there is an extremely important role for the public sector here, and I would specifically commend the work of Michael Karlin, who has done some fantastic work on algorithmic impact assessments for the Government of Canada. It's an important measure that shows how Canada is taking a lead and demonstrating what is possible with these algorithmic impact assessments.
At the same time, when you look at the recent accusations now that Facebook has been breaking Canadian privacy laws, we have a serious issue related to implementation. Specifically, these breaches that have been of concern to numerous Canadian privacy regulators do raise a question. Can we just focus on the public sector alone and can the public sector alone lead the way, or do we need to take similar considerations for, at the very least, large, powerful private sector companies? Because in the world we live in right now, whether you're talking about opening a bank account, posting something on Facebook, talking to a friend online or even getting a pizza delivery, algorithms and AI are part of every step that takes place in that context.
Unless we're willing to limit the agency of these algorithms, they become democratically relevant: they increasingly begin to dominate us. This is not a Terminator-like scenario where we need to be scared that the robots will come and take over the world.
It's rather that, through these technologies, a lot of power becomes concentrated in the hands of very few human beings. These are precisely the types of situations that democratic institutions, such as the parliamentary committee hearing this topic right now, were built to deal with: to ensure that the power of the few is spread to the many; to ensure that access to AI and its benefits, and to the foundational promise that technology can make people's lives better, both inside Canada and beyond, is available to every human being; and to ensure that basic human rights provide the core foundation for how we develop and think about technology in the future.
Thank you very much for listening. I look forward to answering any questions you might have.