Evidence of meeting #26 for Science and Research in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Arif Babul, Distinguished University Professor, As an Individual
Azim Shariff, Professor, University of British Columbia, As an Individual
Ivan Oransky, Executive Director, Center for Scientific Integrity Inc.
Bouchard, Dean, Faculty of Arts and Sciences, Université de Montréal, As an Individual
Triandafyllidou, Professor and Canada Excellence Research Chair in Migration and Integration, Toronto Metropolitan University, As an Individual
Maltais, President, Association francophone pour le savoir
Montreuil, Executive Director, Association francophone pour le savoir

The Chair Liberal Salma Zahid

I call this meeting to order.

Welcome to meeting number 26 of the Standing Committee on Science and Research. The committee is meeting to study governance and accountability of federal science policy and institutions.

I would like to make a few comments for the benefit of witnesses and members.

Please wait until I recognize you by name before speaking. For those participating by video conference, click on the microphone icon to activate your microphone and please mute yourself when you're not speaking. For those on Zoom, at the bottom of your screen, you can select the appropriate channel for interpretation: floor, English or French.

I will remind you that all comments should be addressed through the chair.

I would like to welcome our witnesses for the first panel. Joining us by video conference are Arif Babul, distinguished professor, University of Victoria; Azim Shariff, professor, University of British Columbia; and Dr. Ivan Oransky, executive director, Center for Scientific Integrity.

Welcome to all the witnesses.

All of you will have five minutes for your opening remarks. Then we will go to our rounds of questioning.

Professor Babul, we will start with you. You will have five minutes. Please go ahead.

Arif Babul Distinguished University Professor, As an Individual

Thank you very much for inviting me to contribute to your deliberations.

I would like to start by noting that my comments today are based on my experiences in the domain of natural sciences and to a lesser extent health sciences.

I would like to touch upon three issues.

First, there are roughly four distinct and different types of research that government funding agencies support: basic research, which is the search for new knowledge; applied research, which uses that knowledge to solve specific problems; engineering research, which transforms those solutions into usable systems; and innovation, which then deploys these systems into society. Each of these has different objectives and outcomes. It is important, therefore, that any governance or accountability effort ensures that each is evaluated using appropriate metrics.

There is, however, one common and important output: the training of highly qualified personnel. More than 50% of science Ph.D.s now work and drive innovation outside academia. They are in demand not only because they are technology- and data-savvy, but because they are trained to think creatively and solve complex problems.

Second, these four areas are an integral part of the discovery-to-innovation ecosystem. To start, I would like to emphasize the importance of sustaining discovery research, because innovation ultimately depends on it. Importantly, only the government is in a position to fund discovery. However, Canada's investment in research capacity has not kept pace with costs. The real purchasing power of an NSERC discovery grant, for example, has largely stagnated over the past two decades, while research expenses have risen substantially. Consequently, grants today support less research activity than they did 15 to 20 years ago.

Third, we must ensure that our discovery-to-innovation ecosystem operates on a level playing field and is guided by transparent evaluation procedures. On the whole, the current process is well regarded internationally, but there is room for improvement. This is where this committee can play a role by continuing to push for changes that further improve the system.

I would like to briefly touch upon four areas for your consideration.

First, governance must recognize that current evaluation systems can inherit and amplify past biases. There is substantial evidence that prestigious awards and recognition histories, all commonly used as indicators of excellence, have themselves historically reflected disparities related to gender, race and other factors. When these are used in grant evaluations, earlier inequities can cascade forward, affecting funding levels and future competitiveness. Governance frameworks must, therefore, examine whether the criteria and indicators used in assessment unintentionally reproduce structural barriers. The objective is not to weaken standards, but to ensure that Canada's research funding system identifies and supports genuine excellence rather than reinforcing historical patterns that may obscure it.

Second, external assessments of our research systems have noted that they tend to be risk-averse, favouring proposals with incremental outcomes over true innovation—high-risk innovation. From a governance perspective, this underscores the importance of mechanisms that can identify and mitigate such tendencies.

Third, fairness requires mechanisms to enhance transparency. Presently, applicants can only challenge procedural issues, not substantive assessment errors. However, evaluators are human, and mistakes do happen. Governance systems should consider structured appeal mechanisms, as well as additional transparency measures that would enhance accountability. In this regard, the European Research Council's approach is worth considering.

Additionally, securing participation of qualified independent international panellists, as has been done by the National Science Foundation in the U.S. for many years, would further strengthen the perception of impartiality. With virtual meetings now common, international participation is more feasible than ever.

To conclude, a strong science policy is not only about funding decisions; it is about ensuring that the system consistently identifies and enables excellence and innovation.

Thank you.

The Chair Liberal Salma Zahid

Thank you, Professor Babul.

Now we will proceed to Professor Shariff.

Please go ahead. You have five minutes for your opening remarks.

Azim Shariff Professor, University of British Columbia, As an Individual

Thank you, Madam Chair and members of the committee.

I am a professor and Canada 150 research chair of moral psychology at UBC. I'm not here to comment on the legal or financial feasibility of the proposed monitoring body. Instead, I'm here to describe the psychological factors involved in why such a body might be necessary, how it would be received and how it might affect the mission of truth-seeking in Canada.

One finding from my research is especially relevant. We find that the perception of politicization reduces trust in and support for institutions. Critically, this occurs even for people who share the institution's perceived political orientation. Even liberals distrust institutions they see as liberally biased, and even conservatives distrust institutions they see as conservatively biased. When an expertise-based institution like science is seen as politically aligned, trust doesn't just become polarized; it erodes across the board. When that happens, truth-seeking in Canada suffers.

That politicization can come from two directions. The first is external. When political power over institutions is used to advance contested political objectives or to manage how scientific findings are communicated, these signals cue people to interpret scientific research through their partisan lenses. Science, depending on one's political in-group, is seen as an ally or an enemy rather than as our shared common ground for truth. There is internal politicization as well. When institutions blur the line between empirical scholarship and political advocacy, or are perceived to be enforcing political conformity among their ranks, this too erodes trust in our institutions. Both of these processes have undermined science in the United States. My plea is that we avoid walking further down that path in Canada.

The reason this is so fragile is rooted in basic human psychology. We are all subject to motivated reasoning and other biases, but it's much easier to see these biases in others than ourselves, something called the bias blind spot. Scientists are just as guilty. Most academics do their jobs in good faith, but we are also overconfident in our ability to detect and correct our own biases.

When we talk mostly to politically like-minded colleagues, we are subject to another well-researched process; that's the law of group polarization. When most members in a group start out leaning in one direction, the group tends to drift towards greater extremity over time. Left unchecked, groups can drift a long way indeed. Being smart is no protection from this. In fact, because motivated reasoning relies on thinking, the smarter you are, the more powerful it can be.

Science has historically managed these human tendencies not by assuming that scientists are unbiased but by building in proper incentives, norms and guardrails—peer review, replication and an environment that encourages disagreement with any idea at any time by anyone, so long as they have the evidence. Science works not just because of the abilities of scientists but also because of the constraints on them. No one likes being scrutinized, but a thoughtfully designed monitoring and accountability body could be useful in protecting these structures, thereby ensuring that Canadian science is both effective and trusted across the political spectrum.

For such a body to strengthen rather than weaken trust among both scientists and the public, its design must carefully minimize both actual and perceived politicization. Any monitoring body should, first, like the Office of the Auditor General, be visibly insulated from day-to-day partisan motives. Mechanisms like multi-party appointments and fixed or staggered terms can help ensure that the body neither is, nor appears to be, politicized.

Second, the body should audit procedural fairness and integrity, not adjudicate the merits of particular research projects. It should not try to replace peer review. The body's outputs should emphasize aggregate trend-level reporting—patterns in funding outcomes or demographic and viewpoint diversity—rather than spotlighting individual grants. Your committee, your colleagues and the public will always be able to cherry-pick research programs that sound absurd. Some really are absurd. Others lead to medical revolutions like GLP-1 agonists. It's sometimes hard to know in advance which is which. If oversight becomes focused on anecdotes, it will sow antagonism between scientists and the government, and fuel the politicization that it should be trying to extinguish.

Finally, and most mundanely, its design should actively restrain itself from mission creep and administrative overload. In other countries, comparable bodies that have been good for ensuring accountability are broadly despised because of the workload they impose. That burdensome paperwork doesn't just cause frustration; it can also reshape incentives. Time and resources shift toward timid bureaucracy rather than scientific risk-taking.

The question is not whether science needs guardrails. It does. The trick is to design a system that neither denies nor amplifies biases but disciplines them. Any accountability body should be designed to manage politicization and strengthen science rather than the other way around.

Thank you.

The Chair Liberal Salma Zahid

Thank you, Mr. Shariff.

We will now proceed to Dr. Oransky.

Please go ahead. You will have five minutes for your opening remarks.

Ivan Oransky Executive Director, Center for Scientific Integrity Inc.

Thank you very much, Madam Chair and committee members, for the opportunity to speak today to this important issue.

I'm the executive director of the Center for Scientific Integrity, a non-profit organization based in New York that is perhaps best known for publishing Retraction Watch, a journalism outlet. In December 2024, I had the honour of speaking to this committee about threats to the scientific record brought on by academics gaming the system to inflate their number of publications, their citations and the other metrics by which they are judged.

That was an example of how government can accidentally erode scientific integrity by overrelying on these simple metrics. Today, you are considering how government can promote scientific integrity. As you may know, Canada's current system of oversight has been described by others as a patchwork that does not prioritize transparency, but this can be said of many countries' approaches, and I'd like to share our reporting with you and what we've learned over the last 16 years.

The U.S. was the first country to establish formal oversight of research misconduct, beginning in the late 1980s with the creation of what later became the Office of Research Integrity, or ORI, and of the Office of Inspector General, or OIG, at the National Science Foundation, or NSF. Their legal mandate is to ensure the integrity of federally funded research. The ORI covers Public Health Service-funded research, including NIH-funded research. The NSF's OIG covers science and engineering.

The ORI assesses investigative reports submitted by academic institutions and decides whether misconduct has occurred. Misconduct specifically means, in the federal definition, falsification, fabrication or plagiarism that is “committed intentionally, or knowingly, or recklessly”, and represents “a significant departure from accepted practices.”

Notably, while the ORI can investigate any researcher receiving government funding, it can do so only at the request of the researcher's academic institution, which creates a significant conflict of interest. Universities are generally reluctant to discuss, let alone properly investigate, misconduct. This, along with a lack of subpoena power, limits the ORI's reach. If findings of misconduct are made, sanctions can include mandating retraction, suspension of funding, and oversight of future research activity. In sharp contrast, the NSF OIG does have subpoena power.

A completely separate U.S. regulatory arm, the Food and Drug Administration, or FDA, can investigate and sanction clinical investigators conducting regulated research. The FDA's relatively toothy regulations permit the disqualification of investigators who repeatedly fail to comply with requirements or who submit false information. Debarment typically follows a misdemeanour or felony conviction, and debarred researchers are prohibited from working with anyone with an approved or pending drug product application. In rare cases, researchers who have committed severe misconduct while working with government funds have been forced to pay back those funds and have received lifetime debarments.

By contrast, Europe lacks an overarching federal authority analogous to the ORI, instead having a decentralized and heterogeneous regulatory landscape shaped largely by institutional autonomy. Germany has opted for a highly decentralized approach. To be eligible for funding from the German Research Foundation, commonly referred to as the DFG, institutions are required to establish internal structures capable of investigating and dealing with misconduct allegations. The DFG also maintains its own committee, which investigates allegations related to its funded research.

Separately and independently, the national German research ombudsman can also receive allegations and occupies an advisory role in cases requiring conflict reconciliation. A notable weakness of this system is that not all research is conducted in universities. There is little oversight of, for example, doctors undertaking clinical research.

Denmark, Norway and Sweden maintain the most formalized European oversight structures. All three countries have a centralized agency authorized to conduct misconduct investigations, although the primary burden of regulating integrity still falls on institutions.

The U.K. exemplifies a non-statutory, institution-centred model with no national investigative authority comparable to the ORI. Institutions there are expected to handle allegations internally, guided by the concordat to support research integrity and supported by research bodies such as the U.K. Research Integrity Office, which provides guidance but lacks the power to conduct independent investigations or impose sanctions. China has also recently introduced a comprehensive punishment framework for misconduct.

I thank you for your time, and I welcome the opportunity to expand on my comments during the Q and A with the committee.

The Chair Liberal Salma Zahid

Thank you.

Now we will proceed to our first round of questioning of six minutes each. We will start with MP Ho.

Please go ahead.

3:55 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

Thank you, Madam Chair.

My first set of questions is for Professor Shariff.

You mentioned in your opening statement the perception of political bias and the politicization of public institutions and how that may erode public trust in our institutions. Could you elaborate on that a bit more? Have you seen this trend getting worse in recent years? Can you provide a few examples of that?

3:55 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

Trust in science and in universities in Canada is relatively high. It's higher here than it is in the United States, so that's good. There is a difference between how much it's trusted by liberals and conservatives, with conservatives trusting it about 20 percentage points lower than liberals.

In the United States, we've seen it decline for both groups, but more sharply for conservatives. I think the United States represents a cautionary tale for the direction that things could go if both external and internal politicization factors are present.

Vincent Ho Conservative Richmond Hill South, ON

The last time you were at this committee, you spoke quite extensively on some of the negative consequences of EDI policies in granting research funding in Canada, and you listed a couple of examples. Do you think that plays any role in that trust?

I believe you mentioned the appointment to a CRC vacancy. Because the pool of candidates was rather limited and because of the Liberal policy of finding someone from only equity-seeking groups, it was virtually impossible to fill it. Do you think that plays into the perceived lack of trust?

3:55 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

There is some research, actually, that directly bears on that. A couple of political scientists at UBC ran an experiment where they presented the demographic quota policy related to the Canada research chairs. What they found was that for the group presented with the policy, it reduced trust in the research the university produces. It did that for both liberals and conservatives to almost exactly the same degree.

That is a good example of the fragility of trust and legitimacy and how they respond to different policies, so yes, for that particular policy, I think there is evidence that it erodes trust.

3:55 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

Do you mean EDI policies in terms of filling research—

3:55 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

I can't comment on all EDI policies; they cover a wide range of things. For the demographic quotas, people cue into issues of procedural fairness, which I think rubs people the wrong way across the political spectrum. I think that's what they were responding to for that policy. It might be different for other EDI policies.

3:55 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

I am referring to that specific policy.

To build on that, if someone were from an equity-seeking group and they were selected for one of these key positions, that person would now carry the weight of how they may have been selected because of this policy. Of course, the public view would potentially be that this person wasn't picked based on merit, but because of demographic factors.

Do you think that plays a role in the public's perception?

3:55 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

I'm not familiar with the research on that, and I apologize for that, because I imagine there is quite a bit of research on it. In my own experience, I wrote a note to the committee to say it affected me.

3:55 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

Why do you think that's the case? Do you find these policies are too restrictive? Do they exclude bodies of talent based on merit and therefore create those tensions?

3:55 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

For this particular policy, anything that shrinks your talent pool means you're less likely to get the best candidates. There is a way it compromises the broad, truth-seeking mission of hiring excellent scholars.

For that particular policy, there could be an erosion of trust from that, which we talked about. If those are the two primary goals—truth-seeking and trust—I think that particular policy has effects on both. Again, however, that's narrowly referring to just that policy.

4 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

On a slight change of topic, you mentioned increased politicization in your opening statement, and the example you used was the Auditor General. Could you elaborate a bit more about that and why that might be the case?

4 p.m.

Professor, University of British Columbia, As an Individual

Azim Shariff

What we've seen in the United States is an attempt to solve perceptions of internal politicization with external politicization. There's been a heavier hand of government intervention in micromanaging content aspects of academia in both what gets researched and what gets specific funding grants, as well as the curricula.

Doing it from the external side has very much exacerbated politicization and has eroded trust. It's created a lot of enmity between academics and the government. It's made people perceive more politicization rather than less. The solution to politicization is rarely more politicization.

4 p.m.

Liberal

The Chair Liberal Salma Zahid

Your time is up.

4 p.m.

Conservative

Vincent Ho Conservative Richmond Hill South, ON

Thank you.

4 p.m.

Liberal

The Chair Liberal Salma Zahid

We will now proceed to MP Noormohamed for six minutes.

Please go ahead.

4 p.m.

Liberal

Taleeb Noormohamed Liberal Vancouver Granville, BC

Thank you, Madam Chair.

I thought we were going to have a conversation today about ensuring that we measure research well, but we're back on the EDI thing. It's important to take a minute on that, Professor Shariff, because the implication in the question from my colleague seems to be that equity hires produce poorer research. In your experience, is that the case?

4 p.m.

Professor, University of British Columbia, As an Individual

4 p.m.

Liberal

Taleeb Noormohamed Liberal Vancouver Granville, BC

Thank you for that very clear answer.

Perhaps I can turn to Professor Babul.

On the same question of how the EDI awards ladder works and whether we are leaning in to give an award to people who deserve a chance because they're not good enough, is that really how the system works, as my friends opposite might want us to believe?