Thank you very much for the invitation to testify on the important issue of research excellence.
My name is Vincent Larivière, and I'm a professor of information sciences at the Université de Montréal. I'm also the UNESCO Chair on Open Science and the Quebec research chair on the discoverability of scientific content in French. I'm not representing the Université de Montréal today. I'm appearing as an individual, as an expert who has spent about 20 years studying the scientific community, and specifically the issue of research excellence and evaluation.
The first thing that's important to mention is the lack of consensus on what research excellence is. This can be seen virtually everywhere in the scientific community. Funding evaluation committees don't always agree on which projects are the most important. Journal editors and reviewers don't always agree on the quality of a paper.
Excellence in research is, in a way, the holy grail of the scientific world, but it remains quite difficult to define. There's a lot of subjectivity in all of this. It can be explained in a number of ways, but one thing is clear: Scientific excellence is multi-faceted. It can vary depending on the context. It can be the ingenuity of a method, the originality of a research issue, the quality of an argument's construction or the potential applications of a research project.
Because of this lack of consensus, evaluation committees often rely on quantifiable indicators, things that can be measured: the number of papers published in prestigious journals, the number of times they are cited, whether the person graduated from a prestigious university or whether they have gotten funding before. Indeed, one of the main criteria for getting funding is having already gotten it. Those quantifiable markers don't always reflect research excellence, but they make the evaluation much simpler. A dozen or so publications will always be more than five. A million dollars will always be more than $100,000. That way of evaluating scientists and their projects, often done implicitly, raises important questions for the Canadian scientific community.
Focusing on publication volume will promote certain works, but also certain themes that are more easily published. That contributes to an overproduction of papers, which shouldn't be confused with an overproduction of knowledge. Overproduction of papers contributes to noise and information overload, much of it of mediocre quality. Many Nobel Prize winners, including Peter Higgs, have said that they wouldn't have been able to make their discoveries in today's context of research evaluation.
I'd like to make three recommendations for improving research excellence in Canada.
The first one is quite complicated, but I think it's doable. The idea would be to enable funding agencies to experiment with peer review. Peer review is known to be imperfect, but many countries are experimenting with it, including Switzerland, Norway and the United Kingdom. We can't say that those countries are lagging behind in science. Those countries have taken the bull by the horns, recognized the biases currently associated with research evaluation and decided that they should try to find new ways to encourage excellence. As my colleague Julien Larrègue says, it's important for the results of those experiments to be available to the expert community.
The second recommendation is somewhat related to what my colleague Ms. Cobey said about the CVs that are evaluated by the various committees. Narrative CVs were recently put in place, which sounds like a good idea on the surface, but it isn't entirely clear how those CVs are going to be interpreted. They will, in fact, also be judged on their volume. I recently received a seven-page narrative CV that was longer than the application itself. We have absolutely no idea how committees are going to evaluate that. That has to be considered. Some countries have implemented a requirement for short, two-page CVs that don't focus on publication volume and instead highlight the publications that are most relevant to the project.
The third recommendation goes back to indicators. In Canada, there usually isn't an explicit request to provide indicators for evaluations. However, during evaluations, committee members often pull indicators out of nowhere. Obviously, committees are often sovereign, so there isn't much that can be done. I think there needs to be a ban on using those indicators in the evaluation committees of granting agencies. It isn't just a matter of not encouraging them; it's also about telling the committees that all of that is outside the scope of the evaluation.
Thank you, and I look forward to taking your questions.