Thank you, Mr. Chair.
The fundamental problem with algorithmic management is that we have no information. There's no framework for any of this. There seems to be a wish to pass this problem on to unions and employers, but unions can't be the solution for managing artificial intelligence in the workplace when we know that the unionization rate is around 15% in the private sector. This will require a regulatory framework deployed by every level of government.
Nothing is known. No doubt the clauses in collective agreements relating to technological change were used to address artificial intelligence issues, and that was a mistake. It was a mistake because, often, the triggers for technological change clauses are related to job losses or potential job losses. Unfortunately, that doesn't address issues related to artificial intelligence, which raises a multitude of situations that don't involve job loss.
We hear about artificial intelligence as if it’s something positive that will lighten the load on workers. Unfortunately, there’s a downside, such as reduced autonomy and increasingly intrusive surveillance. Workers are constantly being monitored, since algorithms need data to do their jobs. We don’t know how this data is stored, how it’s analyzed or how it’s reused. The ability to collect data is not regulated. We therefore need to regulate data and what is done with it, but above all we need to regulate and mandate dialogue between employers and employees to understand the whole issue of explainability and transparency. There isn’t any.
For years now, we’ve been using tools that make decisions on behalf of workers, but they haven’t been presented as algorithmic management or artificial intelligence tools. They were simply described as new tools. For example, at Bell Canada, there’s the Blueprint tool for customer service staff. When speaking with a customer, workers are required to follow a decision tree that tells them what to do based on the customer’s stated problems. The employee’s judgment is completely removed from the process. What’s more, the employee must enter data into the tool to ensure that the various interpretation scenarios are effective and appropriate for the customer.
This is done in various industries, such as transportation, where algorithms make decisions for truckers, whether it's about the best route or the best driving practice to use. This completely eliminates the individual's judgment and their ability to drive their vehicle as they see fit. They are required to follow the tool's instructions; they are the ones being managed.
The Organisation for Economic Co-operation and Development, or OECD, has laid down four principles: artificial intelligence must be oriented towards sustainable development, it must be human-centred, it must be transparent and explainable, and the system must be robust and accountable. At present, we have none of those things, because there's no disclosure obligation. In our view, this is the first step that needs to be taken: knowing the tools, understanding their effects and then implementing solutions that truly capture the efficiency or added value that technological tools bring to the company.
We're in a period marked by a shortage of workers. It is simply untrue that we're going to transform a customer service operator into someone who will program or manage algorithmic tools. In any case, in Quebec, there's currently a shortage of 9,000 to 10,000 workers in the IT sector, and our workers can't fill that gap. It's a kind of vicious circle that has to stop, and stopping it has to start with the implementation of mandatory disclosure or mandatory dialogue between employers and their employees.
Thank you very much.