Hello. My expertise is in computer science. I've been a pioneer of deep learning, the area that has transformed AI from something happening in universities into something that now plays a major economic role and attracts billions of dollars of investment in industry.
In spite of this remarkable progress, it's also important to realize that current AI systems are very far from human-level AI. In many ways they are weak. They don't understand human context, of course. They don't understand moral values. They don't understand much at all, but they can be very good at a particular task, and that can be very useful economically. We have to be aware of these limitations.
For example, if we consider the application of these tools in the military, a system that makes the decision to kill a person doesn't have the moral context a human has, a context that might lead a human to disobey the order. There's a red line here, which the UN Secretary-General has talked about, that we shouldn't be crossing.
Going back to AI and Canada's role, the interesting thing is that we've played a very important role in the development of the recent science of AI, and we are clearly recognized as a scientific leader. We are also playing a growing role on the economic side. Canada is still dwarfed by Silicon Valley, of course, but our tech industry around AI is growing very rapidly, and because of our scientific strength we have a chance to become not just a consumer of AI but also a producer. That means Canadian companies are getting involved, which is important to keep in mind as well.
What's important, in addition to scientific leadership and our growing economic leadership in AI, is moral leadership, and here Canada has a chance to play a crucial role in the world. We have already been noticed for this. In particular, I want to mention the Montreal Declaration for Responsible Development of AI, to which I contributed and which is really about ethical principles.
Ten principles have been articulated, each with a number of subprinciples. This effort is interesting and different from other attempts to formalize the ethical and social aspects of AI because, in addition to experts in AI and scholars in the social sciences and humanities, ordinary people also had a chance to provide feedback. The declaration was modified thanks to that feedback: citizens attended workshops in libraries, for example, where they could discuss the issues presented in the declaration.
In general, for the future, I think it's good to keep in mind that we have to keep ordinary people in the loop. We have to educate them so they understand the issues, because these are decisions we will make collectively, and it's important that ordinary people understand them.
When I give talks about AI, the biggest concerns I hear are often about the effect of AI on automation and jobs. Clearly, governments need to think about that, and that thinking must be done quite a bit ahead of the changes that are coming. If you think about, say, changing the education system to adapt to a new wave of people who might lose their jobs in the next decade, those changes can take years, even a decade, to have a real impact, so it's important to start early. The same is true if we decide to change our social safety net to adapt to these potentially rapid changes in the job market. These things should be tackled fairly soon.
I have another example of a short-term concern. I talked about military applications. It would be really good if Canada played more of a leadership role in the discussions currently taking place around the UN on the military use of AI and so-called “killer drones,” which can use computer vision to recognize people and target them.
There's already a large coalition of countries expressing concern and working on drafting an international ban. Even if not all countries sign on to such an international treaty, including major countries such as the U.S., China or Russia, I think Canada can play an important role. A good example is what we did in the nineties with anti-personnel mines and the treaty that was signed in Canada. That really had an impact. Even though countries such as the U.S. didn't sign it, the social stigma the ban attached to anti-personnel mines has meant that companies have gradually stopped building them.
Another area of ethical concern has to do with bias and discrimination, something that matters a great deal to Canadian values. I think it's also an area where governments can step in to make sure there's a level playing field between companies.
Right now, companies can choose to use one approach, or no approach at all, to tackle the potential issues of bias and discrimination in the use of AI, issues that come mostly from the data those systems are trained on. But there is a trade-off between using these techniques and, say, the profitability or the predictive accuracy of the systems. Without regulation, the more ethical companies will lose market share to companies that don't hold themselves to such high standards, so it's important to make sure all these companies play on the same level.
Another interesting example is the use of AI, not necessarily in Canada but in other countries, because these systems can be used to track where people are, again using cameras all over the place. Such surveillance systems, for example, are currently being sold by China to some authoritarian countries, and we are probably going to see more of that in the future. It's something that is ethically questionable. We need to decide whether we want to just not think about it or to have some sort of regulation to make sure that these potentially unethical uses are not something our companies will be engaged in.
Another area that's interesting for government to think about is advertising. As AI gradually becomes more powerful, it can influence people's minds more effectively. By using the information a company has on a particular user, a particular person, advertising can be targeted in a way that influences our decisions much more than older forms of advertising could. If you think about things like political advertising, this could be a real issue, but even in other areas, where that type of advertising can influence our behaviour in ways that are not good for us, with respect to our health, for example, we have to be careful.
Finally, related again to targeted advertising is the use of AI in social networks. We've seen the issues with Cambridge Analytica and Facebook, but I think there's a more general issue about how governments should set the rules of the game to minimize this kind of influence through targeted messages. It's not necessarily advertising, but effectively somebody is paying to influence people's minds in ways that might not align with what they really think or what's in their best interests.
Related to social networks is the question of data. A lot of the data being used by companies like Google and Facebook, of course, comes from users. Right now, users sign a consent form that allows those companies to do essentially whatever they want with that data.
A single user has no real bargaining power against those companies, so various organizations, particularly in the U.K., have been thinking about ways to restore some balance between the power of these large companies and the users who provide the data. One such idea is the data trust, which I encourage the Canadian government to consider as a legal mechanism: users could aggregate (think of it like a union) and negotiate contracts that are aligned with their values and interests.