You talked about how the impact of AI at the moment is not so concrete; it's more theoretical. I think we are at an important point between the theoretical and the realized. In our survey, for example, when we asked workers about the impact on job quality and asked employers about the impact on job quantity, we were really asking about what has happened already. We found that changes were already happening in the manufacturing and financial industries. Yes, absolutely, much of the impact is theoretical, but I think it's important to note that there are changes happening already.
There's an important point here about ethics: AI does raise important ethical questions. Within the workplace, though, we already have tools for dealing with some of the issues AI raises. For instance, we can talk about the ethical risk of the biases that AI can introduce, but in the workplace we have anti-discrimination legislation. Framing things in terms of ethics can make them sound a little more theoretical, but I think we do have real tools already at our disposal to deal with some of these more theoretical, more ethical aspects of AI.
What's an action that we can take now? I think we can start thinking about how we want to use AI in our society and in our workplaces, and what we consider acceptable and good uses of AI. That can be a starting point for legislation around AI.
Just because something can be done, just because it is technically feasible, doesn't mean that we have to follow through with it. Whether it's society, unions, or businesses deciding this, perhaps all together we can already start to talk about what kind of society we would like to have and what AI's role in that society should be.
Thank you.