Thank you, committee members.
My name is Julien Larregue. I am an associate professor of sociology at Université Laval. I am speaking on my own behalf, but primarily as an expert on the scientific field. For several years I have been conducting research on evaluation internationally, not just in Canada. Since 2023, I have been leading a research project on the distribution of funding at the Social Sciences and Humanities Research Council, or SSHRC, and today I will focus on the results of that work. I would like to thank you for inviting me to appear, because it will allow me to highlight the impact of my work in my next grant application.
I'll start by echoing my colleague Yves Gingras: if by excellence we mean something that can be objectively established, then excellence does not exist. There are no criteria that define, in themselves, what constitutes good or bad research. In every discipline, competing views exist as to which theoretical current should be followed. We must therefore start from the premise that there is no single criterion for deciding what is excellent and what is not.
Determining who should receive money, and why, is a social and political choice. We have to accept that basic fact. However, that doesn't mean policy should be made blindly. Many of my academic colleagues have strong opinions on many things; we have heard them all morning, particularly on the subject of EDI, or equity, diversity and inclusion. But policies can also be data-driven, and that is what I will propose to you today.
How much money is distributed, how, why and through which system: all of this could be decided in a smarter, better-informed way if we first knew how the system works, how committees evaluate applications, what is required of them, and what the consequences are for the people who apply. Thanks to SSHRC's transparency and its willingness to move forward intelligently on this issue, my research team and I have had full access since 2023 to all grant applications submitted to the council over roughly the past 20 years, whether accepted or rejected. Through statistical analyses and interviews with evaluation committee members, who are professors, we have been able to identify the factors that influence an application's likelihood of being funded.
To give you a general idea, two criteria play a major role. The first is the number of grants an applicant has received in the past. The second, which we confirmed after testing the hypothesis that it played a major role, is the prestige and size of the applicant's university. A professor at McGill University, the University of Toronto or the University of British Columbia is much more likely to receive grants than, for example, a professor at the Université du Québec à Trois-Rivières or Wilfrid Laurier University. This is particularly true in certain disciplines: in economics or management, the weight of university hierarchy and prestige is especially strong, whereas in disciplines such as history or anthropology these factors matter less. That doesn't mean they never matter, only that they are not as central. These are not my opinions; they are the results of published work that you can consult and that I cited in the brief I submitted to you.
This example allows me to emphasize three things.
First, we know very well that the concentration of funds is not beneficial to research systems; this has been empirically documented. Yet, as I have just said, this concentration exists. It benefits a small group of universities that claim to be excellent, but, as I also said, excellence is a quality one claims only after having been designated excellent.
Obviously, one of the current problems is that the evaluation of grant applications is not anonymous. When committees receive the files, their members know who submitted each application and which university the applicant comes from. Yet it would be very easy to create a system in which the evaluation of the project is separated from the evaluation of the CV.
First, the project would be evaluated anonymously, without the evaluators knowing whether the applicant's home university is Toronto or Trois-Rivières. Second, the CV would be evaluated separately, to ensure that the applicant has the skills to carry out the project. This is a first experimental proposal that could move things forward and address the problems we have observed.
Next, it must be clearly understood that the general criteria adopted by funding agencies such as SSHRC are not applied uniformly by committees. Much depends on disciplinary cultures. As a result, committees do not operate or interpret the rules in the same way, because their conceptions of excellence and quality differ.
Finally, it would be naive to think that simply changing the rules at SSHRC or the Natural Sciences and Engineering Research Council, or NSERC, would change evaluation committees' practices. It is important to be aware of those practices.