I love this question; I spend a lot of time researching it.
What you'll find in research on automation is a lot of use of the word "exposure". Workers are exposed, or tasks are exposed, to AI. There is not much commitment to what "exposure" means, because it can cut both ways. Some workers are freed up by technology to do other things that complement AI, so they become more productive and more valuable with AI. In the extreme case, where most of a worker's tasks are automated by AI, that worker can be completely substituted for, and that is a negative outcome for the worker.
Moving forward, I think we need to be more specific than just saying that a worker or a task is exposed. The way to do that is to get data on how skill sets shift in response to the introduction of AI. In a dream world, when a new tool is introduced, we would have data that reflects what every worker is doing all the time.
Of course, there are a lot of privacy concerns with that, but for the sake of conversation, let's just imagine that world. We would have very good information on what changes when a worker is introduced to a new tool. You can even imagine having these little natural experiments, where there's randomization in who does and does not have access to a technology. You could start to get at the causal impact of technology shifts.
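The logic of such a natural experiment can be sketched in a few lines of code. This is a purely illustrative simulation, not anything from the brief: the function names, the productivity measure, and the true effect size of 5.0 are all invented for the example. The point is only that when access to a tool is randomized, a simple difference in mean outcomes between workers with and without access estimates the tool's average causal effect.

```python
import random
import statistics

def simulate_rollout(n_workers=1000, baseline=50.0, effect=5.0, noise=3.0, seed=7):
    """Simulate a randomized tool rollout: each worker gets access to the
    new tool with probability 0.5, and we record a (hypothetical)
    productivity outcome for each worker."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_workers):
        treated = rng.random() < 0.5
        # Treated workers get the true effect added; everyone gets noise.
        outcome = baseline + (effect if treated else 0.0) + rng.gauss(0, noise)
        data.append((treated, outcome))
    return data

def difference_in_means(data):
    """With randomized access, the gap in mean outcomes between workers
    with and without the tool is an unbiased estimate of its average
    causal effect."""
    treated = [y for t, y in data if t]
    control = [y for t, y in data if not t]
    return statistics.mean(treated) - statistics.mean(control)

data = simulate_rollout()
estimate = difference_in_means(data)
print(round(estimate, 2))  # should land near the true effect of 5.0
```

Without the randomization, workers who adopt the tool might simply be the ones who were already more productive, and this simple comparison would be biased; that is exactly why the randomized access matters.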
That would be the ideal. I think there are some things that are a few steps away from the ideal that would also be very useful.
I'm much more familiar with the labour statistics we get in the U.S. than in Canada. Those of you who read my brief probably picked that up very quickly.
Very important labour dynamics, like job separation rates or unemployment, are not typically reported by industry, firm or job title. Measuring those concepts at a more granular level would get much closer to the consequences of shifts in skill demand and would allow for more proactive policy interventions—not just for AI, but for any labour disruption moving forward.
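To make the granularity point concrete, here is a minimal sketch of what disaggregation would look like. The records, job titles, and counts are entirely made up for illustration; the calculation simply divides separations by employment within each job title instead of reporting one economy-wide rate.

```python
from collections import defaultdict

# Hypothetical monthly panel: (job_title, separated_before_next_month)
records = [
    ("machinist", False),
    ("machinist", True),
    ("machinist", False),
    ("paralegal", True),
    ("paralegal", True),
    ("paralegal", False),
    ("paralegal", False),
]

def separation_rate_by_title(records):
    """Separation rate = separations / employed, computed per job title
    rather than only in the aggregate."""
    employed = defaultdict(int)
    separated = defaultdict(int)
    for title, did_separate in records:
        employed[title] += 1
        if did_separate:
            separated[title] += 1
    return {title: separated[title] / employed[title] for title in employed}

rates = separation_rate_by_title(records)
print(rates)  # e.g. machinist 1/3, paralegal 2/4 in this toy panel
```

An aggregate rate over this toy panel would be 3/7 and would hide the fact that one occupation is separating at a much higher rate than the other, which is precisely the kind of signal a proactive policy response would need.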