Thank you. I hope you can hear me well; if not, ask me, and I will speak up.
I'm a labour economist at the University of Guelph, and I think I'm here because my specialization is in program evaluation. Most of my work looks at how to evaluate all these active labour market policies: what the right methodology is, what the literature says, what the conclusions are, and how we can interpret all of these findings.
First of all, I want to congratulate whoever put out the report. It's an excellently written report, and it's public. I'm going to use it in class. I wish the media would use it more often, because there are many misconceptions going around. It has lots of numbers, which I love.
I think I'm here for the part about the employment benefits and support measures evaluation. There has been a medium- to long-term evaluation of outcomes over five years for people who have gone through these programs.
Now, I have to say that I'm lost in acronyms. Maybe you're better than I am with the acronyms—I don't know—because although we all define the same concepts, the literature has one name and each country has its own. So I'm going to try to be less confusing.
Again, I'm going to talk about the evaluation of EBSM, the employment benefits and support measures. As the name says, they split into two parts: the employment benefits and the support measures.
Employment benefits are a bit more expensive. They refer to cash that we pay directly for individuals to go to training, for targeted wage subsidies, or for creating jobs specifically for the individuals who come to these programs. On the support measures—the other part of the EBSM—I'm only going to talk about employment assistance services, which the literature also calls job search assistance.
Let me mention from the get-go that when we look at these evaluations, we as economists tend to focus on the efficiency goal. There is also an equity goal: perhaps these programs are in place to help the neediest, who would otherwise have access to no other types of services. While we acknowledge that, when we look at the hard numbers we focus mostly on the efficiency side, so we tend to ignore the equity part, although the other speakers have addressed it well, and we tend to ask how much these programs are worth—what the bang for the buck is, if you want.
This long-term evaluation found very large impacts for the skill development programs. These programs are for unemployed people who are on benefits and can qualify for training. We look at the impact on four types of outcomes: their earnings, their probability of being employed, their probability of being on EI, and the amount of EI benefits they claim, from one up to five years after they have gone through the program.
We looked at the impact of the program in the early 2000s. What the evaluation found was very large impacts for the skills training programs, I think a bit larger than the literature finds, and for a couple of reasons.
One is that maybe the methodology is somewhat geared—and if I have time, I may explain why—towards finding higher impacts. That's one possibility, and it casts doubt on the large impacts that were found.
Another possibility is that we looked at long-term impacts. Most evaluations look at a year or at most two years after the training happens, but here we go up to five years. There is an emerging literature that shows that longer-term impacts could be higher for people who have gone through these training programs.
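To make concrete what "impact" means here, the following is a minimal sketch, with entirely made-up numbers rather than figures from the report, of the difference-in-differences logic that non-experimental evaluations of this kind often rely on: compare the change in an outcome for participants against the change for a matched comparison group of similar non-participants.

```python
# Difference-in-differences sketch: the program impact is the participants'
# change in the outcome minus the comparison group's change, which nets out
# trends that would have happened anyway.

def did_impact(pre_treated, post_treated, pre_comparison, post_comparison):
    """Difference-in-differences estimate of the program impact."""
    treated_change = post_treated - pre_treated
    comparison_change = post_comparison - pre_comparison
    return treated_change - comparison_change

# Hypothetical average annual earnings: the year before the program
# versus five years after, for participants and a matched comparison group.
impact = did_impact(pre_treated=22_000, post_treated=30_000,
                    pre_comparison=23_000, post_comparison=26_000)
print(impact)  # 5000: participants gained $5,000 more than the comparison group
```

The same calculation applies to any of the four outcomes (earnings, employment probability, probability of being on EI, EI benefit amounts), estimated separately for each of the five post-program years.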
So let us go back to the expensive programs. Skill development, the training, seems to have a very large impact. On targeted wage subsidies and earnings supplements, the evidence is mixed. People seem to move on and off employment insurance in subsequent years. Maybe they learn how the system works; maybe being in a targeted wage subsidy program gives them enough labour market experience that they can claim benefits. That is a more mixed kind of evidence.
We have very poor evidence for self-employment programs, but the report acknowledges, and I agree, that we don't measure self-employment programs well, because we look at earnings outcomes, and the self-employed have other types of benefits: the way they file taxes, the way the tax incentives work, the way they build the business. We should look at the rate of success or failure of their self-employment businesses, because the outcomes we currently look at are not very relevant for them.
Concerning job creation partnerships, the [Inaudible--Editor] employment created jobs. I hope there is not a typo, because while the report didn't talk much about it, there were huge employment benefits in years four and five. If it's not a typo, I think we really need to look into it. If the program does have huge employment benefits, maybe it's even a contribution to the theoretical literature, because we tend to think that these job creation programs don't do too well. And it's true: if we look at the evaluation I am talking about, they don't do too well in the first year or the second year; they pick up in years four and five. If further evaluation shows that this is true, maybe we should put more money and more energy into these job creation partnerships, if the impact in years four and five truly is this high.
So those are the expensive ones. The cheaper one is the employment assistance services, the job search assistance programs where you just teach people how to write a resumé, how to dress for an interview, and what to say at the interview. It is the darling of all labour market programs because it's very cheap. It doesn't cost as much as retraining a worker in a new occupation: you just put people in a classroom or in one-on-one interventions and tell them how to behave at an interview, and it works. The impacts are modest, not huge, but they are very consistent across time, and the programs are easy to implement and easy to deliver.
So what has happened is that a lot of the provinces have switched their attention and focus to these employment assistance services, because they work and they are cheap. I don't want to put them down too much, but I think we have to be very careful here, because emerging evidence shows that, while they are effective, they are mostly displacement programs. They do not create new jobs, and they do not bring productivity benefits. You simply direct an individual to a job that could have been occupied by another equally qualified individual, and that other individual gets displaced from the job because they didn't come to this particular program. So yes, they are cheap and they seem to be effective, but they do not create new jobs and they do not improve productivity.
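The displacement argument can be sketched in a toy simulation (all numbers invented, not from any evaluation): when the number of vacancies is fixed, job search assistance reorders who gets hired without changing how many get hired.

```python
import random

# Toy displacement model: participants move to the front of the hiring
# queue, but the number of vacancies is fixed, so every job a participant
# wins is a job an equally qualified non-participant would have filled.

def simulate(n_workers=1_000, n_vacancies=400, n_treated=200, seed=0):
    treated = set(range(n_treated))          # workers who took the program
    workers = list(range(n_workers))
    random.Random(seed).shuffle(workers)     # otherwise-random hiring order
    # Participants jump ahead in the queue (stable sort keeps ties random).
    queue = sorted(workers, key=lambda w: w not in treated)
    hired = set(queue[:n_vacancies])
    treated_rate = len(hired & treated) / n_treated
    overall_rate = len(hired) / n_workers
    return treated_rate, overall_rate

treated_rate, overall_rate = simulate()
# Every participant ends up employed (treated_rate == 1.0), yet overall
# employment is still 400 out of 1,000 (overall_rate == 0.4): no new jobs.
```

An evaluation that compares participants to non-participants here would find a large positive employment impact, even though total employment and productivity are unchanged.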
So, again, it depends on what the government has in mind with all of these LMDA policies. If the goal is to increase productivity and make the Canadian labour force more productive, then I think we should be careful about the displacement effects, which nobody has measured in the Canadian context because they are hard to measure. But the literature seems to indicate that the skill development programs do build skills and do have an impact on productivity. Employers see that the skill is there and create new jobs to attract it. There is an extra layer that the evaluation does not address: the productivity effects and general equilibrium effects, which are more likely to be important for skill development than for the cheaper employment assistance services.
I'll just say one more thing. In terms of the methodology of the report, we worry that what we measure in this evaluation of the EBSM is a bit too high, because claimants can be selected into the different streams. Either they self-select or the case workers select them, because case workers are graded on managing for results, so there is an incentive for a case worker to take the best workers and assign them to treatment, because then those workers will be successful. The problem is that the workers who get assigned to these treatments, and for whom we measure the effects, might have been more successful regardless, because they are cream-skimmed; they are selected in this context.
That's why I think that all of these impacts are a bit too high. If we measured them properly, in a random assignment trial for instance, they would be a bit lower. But I still believe the impacts are there, and the point I take home from this is that long-term impacts are even higher and that all of these programs seem to work in the long term.
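The cream-skimming worry can also be illustrated with synthetic data (my own toy numbers): if case workers steer the most promising claimants into treatment, a naive treated-versus-untreated comparison overstates the true program impact.

```python
import random

# Selection-bias toy model: the "best" 20% of claimants (by an unobserved
# ability that drives earnings anyway) are assigned to treatment, so the
# naive earnings gap bundles the true effect with pre-existing ability.

def naive_vs_true(n=10_000, true_effect=1_000.0, seed=1):
    rng = random.Random(seed)
    ability = [rng.gauss(0, 5_000) for _ in range(n)]      # unobserved skill
    ranked = sorted(range(n), key=lambda i: ability[i], reverse=True)
    treated = set(ranked[: n // 5])                        # cream-skimmed 20%
    earnings = [30_000 + ability[i] + (true_effect if i in treated else 0.0)
                for i in range(n)]
    mean = lambda xs: sum(xs) / len(xs)
    naive_gap = (mean([earnings[i] for i in treated])
                 - mean([earnings[i] for i in range(n) if i not in treated]))
    return naive_gap, true_effect

naive_gap, true_effect = naive_vs_true()
# naive_gap lands far above the true $1,000 effect, purely because the
# treated group was already higher-ability before the program.
```

Random assignment breaks the link between ability and treatment, which is why an experiment would report something much closer to the true effect, and typically lower than the estimates discussed here.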
I'll stop here.