Thank you, Madam Chair and members of the committee, for the invitation to discuss the impact of federal funding criteria on research excellence in Canada.
I am a scientist at the University of Ottawa Heart Institute and an associate professor at the University of Ottawa. I also co-chair DORA, the Declaration on Research Assessment, an international initiative that operates globally and across all disciplines. Our recommendations apply to funding agencies, academic institutions, journals, metrics providers and individual researchers. DORA advocates broader assessment criteria to acknowledge the diversity of researcher activities.
Our meeting today comes at a time when the criteria used to assess researchers in this country are shifting. Historically, decisions were based on quantitative metrics, such as the number of articles we published, the impact factor of the journals those articles appeared in and the amount of funding we brought in. Quantitative metrics are easy to calculate, which makes them convenient for assessing many people very quickly. Unfortunately, they're not evidence-based, they're not responsive to changes in the research ecosystem and they do little to advance the federal government's mission-driven goals.
The misuse of the journal impact factor, as well as the overemphasis on quantitative metrics, has created a culture in the research ecosystem of “publish or perish”. As researchers, we often feel that the surest or only pathway to success in our domain is through publishing more and doing more, with less emphasis on quality and more on quantity.
However, in Canada we're now seeing a principled shift away from these quantitative metrics and toward qualitative criteria that consider the broader impact of research. Canada's tri-agencies signed DORA in 2019 and have been working to implement its recommendations since then. This process is an evolution, not a revolution. In my view, Canada is becoming active on the global science policy stage with respect to the criteria used to assess researchers. The tri-agencies are actively involved in DORA's community of practice for funders, they have a leadership role in the Global Research Council's research assessment committee and, through SSHRC, they have joined RORI, the Research on Research Institute.
Concretely, as researchers, we see recent changes that have had a widespread and immediate impact on us. For example, CIHR has an entirely new framework that considers research excellence across eight domains, one of which is open science. The tri-agencies as a collective are implementing a new narrative CV, which is exactly what it sounds like: a descriptive report on what a researcher is doing, how they did it and why it had an impact. This is replacing the traditional CV, which was much more a list of outputs than a qualitative, nuanced assessment.
This new format requires researchers and reviewers alike to be trained in how to create these narrative CVs and how to adjudicate them appropriately. Otherwise, there's a concern that old habits and these legacy quantitative metrics will simply persist in written narrative form. Narrative CVs are part of the solution to assessing research appropriately. However, I'm concerned about how these reforms are being implemented in our country, because there's a gap between the strong science policy we're creating and the actual realities of what happens at committees. We need to ensure effective monitoring and implementation as we roll out these changes.
I have three final short points.
First, how the federal government chooses to assess research excellence directly impacts what research is done, how it is done and who does it.
Second, the tri-agencies' new definitions of research excellence are not always applied in practice when committees evaluate research. This again comes back to repeated implementation gaps between what we say we want to do and what actually happens.
Finally, even if we assume that the criteria used to assess excellence in this country, historically or presently, were appropriate, there is a series of issues with how funding is administered that prevents us from achieving that excellence efficiently. One example is the across-the-board budget cut applied to funded research projects.
There's also, in my view, incredibly limited grant monitoring. Once we receive funds based on the promises in our grant applications, there's very little monitoring to confirm that we, as researchers and as a federal government, are providing returns on that investment.
Thank you.