The approach currently used in Canada to assess and anticipate the risks and impacts of this technology relies mainly on self-assessment. The proposed Artificial Intelligence and Data Act promotes the idea that we should create a model under which businesses govern themselves, taking certain parameters into account while keeping the effects on their operations as limited as possible.
One of the problems I see with this approach is that AI is deployed across a very wide variety of sectors. At some point, these assessment tools need to be tailored to each sector and industry in which AI is deployed, so that they properly reflect the reality of the workers and users whose quality of life, work and well-being are directly affected by this technology.
One of the first ideas that comes to mind is that dedicated risk assessment tools should be developed for different industries. In fact, at all levels of government, there are already specific frameworks for assessing environmental, financial, social or human impacts. However, we do not see the same degree of precision in the evaluation of this technology when it is deployed.
Off the top of my head, I would say that we need to develop more specific analytical tools.