Good morning, Chair and committee members. Thank you for the opportunity to be here with you today as part of your review of Bill C-69, and the proposed impact assessment act in particular.
Briefly, by way of background, I am an assistant professor at the University of Calgary faculty of law. Prior to joining the law school in 2013, I spent almost six years as counsel at the Department of Fisheries and Oceans, where my practice included advising that department with respect to its environmental assessment responsibilities under both the previous Canadian Environmental Assessment Act and the current CEAA 2012.
I hold bachelor's degrees in science and law from the University of Saskatchewan, and a master of laws degree from the University of California at Berkeley. I have been an active participant in this reform process for the last two years, having filed submissions with both the expert panel and the government directly.
With the time I have, I will focus on what I believe to be some specific shortcomings in the bill as drafted, especially with respect to the role of science in impact assessment. I will not be tackling the IAA's general architecture in my opening remarks, but I am prepared to speak to it. As context, I think it's fair to describe the proposed impact assessment act as a kind of "CEAA 2012-plus": it has all the same parts as the previous act, except that certain parts have been expanded.
My comments will track my written submission to the committee, which I understand has been translated and provided to you. I also brought a small supplemental brief containing three figures. I don't know that I'll get to all of them, but I wanted to have them with me just in case, and to have them on the record for you.
As noted in part II of my brief, one of the more important themes to emerge in the context of the current reform process is that the science of impact assessment needs more rigour. In a 2015 piece in BCBusiness, for example, one professional biologist described these as dark days for his profession, citing instances of his professional opinion being heavily pressured and of his wording, results, and interpretations being changed.
The expert panel on environmental assessment heard this message loud and clear and concluded that stronger guidelines and standards are needed.
The government itself, in its 2017 discussion paper and in the various policy documents that have accompanied Bill C-69, also seems to understand this issue, yet Bill C-69 falls far short on this score. The terms "science" and "scientific" are mentioned only five times, and in no case are they given any real work to do.
I want to refer committee members to the first figure in my supplemental materials, which is a small triangle diagram that we came up with. The basic idea here is straightforward: science is foundational to the entire impact assessment exercise. Every step, and each subsequent step, whether the planning phase, the assessment itself, or decision-making, relies on scientific information. The flip side of this, of course, is that an error or flaw in the science has the potential to compromise the entire process.
As a starting point, Bill C-69 should be amended to include a “duty of scientific integrity” on those persons involved in the impact assessment process, which at a minimum would capture the principles of objectivity, thoroughness, and accuracy.
A further amendment should give the government the power to develop regulations to flesh out what this duty requires, including guidelines and standards for such things as the design, data collection, and analysis of baseline sampling, as well as monitoring during and after projects.
I want to reiterate here a point that has been made by others in their briefs. There is nothing new under the sun about having a duty of scientific integrity. References to scientific integrity can be found in numerous American environmental laws, regulations, and policies.
A second critical shortcoming with respect to science is the continued gap between the legislated contents of the public registry and the agency's internal project files.
To ensure transparency and open science, the registry provisions—and these are at clause 105 of the proposed impact assessment act—should match the provisions for the agency's internal files which are described at clause 106. The act should also make explicit that all scientific information submitted in the course of an impact assessment is presumptively public unless a request for confidentiality is made and granted pursuant to narrow terms. This would require an amendment to the current clause 107, which appears to create a presumption of confidentiality.
I really need to stress this point. I have never received any explanation, let alone a compelling one, as to why proponent data and models should not be readily available. I can understand that some data and models may be proprietary, but that does not mean they need to be confidential. It simply means their use would be governed by the Copyright Act, which of course includes "fair dealing" exceptions for academic and other public purposes.
My next set of recommendations has to do with mitigation measures and how they have been dealt with under both previous CEAAs.
Here it is important to recall the basic nature of this regime, and I'm referring to the 1992 act, CEAA 2012, and the current proposed impact assessment act. Like all of its predecessors, the IAA does not draw an environmental, or any other, bottom line. The whole regime boils down to the consideration of effects, which is then supposed to enable political accountability for project approval or refusal. Please keep this in mind as I discuss my next recommendations.
While a lot of attention falls on baseline studies—how we determine the state of the environment before a project proceeds—mitigation is also a critical aspect of the IA process. Mitigation includes the strategies that a proponent might implement to reduce known or anticipated adverse environmental impacts. It may come as a surprise to the committee, but there is actually a long and troubling history of proponents and EA panels relying on unproven mitigation measures to avoid concluding that a project will result in significant adverse effects.
In my view, this fundamentally undermines the assessment process and the public accountability it is intended to enable. Consequently, I recommend provisions aimed at ensuring that mitigation measures either be demonstrably effective or that their effectiveness be reasonably certain based on the best available science. Again, this would not mean that projects that cannot meet this threshold could not be approved; there is no bottom line set out in the IAA. It simply means Canadians would have a more honest and accurate assessment of a project's likely impacts.
I would also allow some reliance on mitigation measures whose effectiveness is uncertain, but only if a proponent commits to a structured process of learning, otherwise known as “adaptive management”. Here I refer committee members to page 5 of my brief. There is a figure at the top of the page indicating the adaptive management cycle. Adaptive management has a long history. You can pull up any number of joint review panel reports; last summer we did a quick word search, and 90% of the projects on the CEAA registry contained a reference to adaptive management.
The problem is that it's not actually being done. Adaptive management is a good idea in theory, but it's not being done in practice. To substantiate that, I looked at 18 projects in a recent research project at the University of Calgary. We looked at the environmental impact statements filed by proponents where they claimed to rely on adaptive management. Now I'm referring you to the second figure on page 5, which shows the percentage of completeness of the adaptive management cycle by project type. We found that whereas adaptive management is supposed to be a rigorous process for learning, one that requires the identification of objectives and indicators as well as planning and rigour, none of that work is being done. Proponents say they're going to do adaptive management as a way of convincing regulators that everything is going to be fine, but then they never do it.
At appendix A of my submission, I propose some basic language to ensure that adaptive management will actually be done where proponents say they will do it. I pause to note that I submitted a much more detailed set of provisions, three pages' worth, back to the government in August 2017. I've scaled those down considerably for you. They are about three-quarters of a page now. I'm hopeful that the committee will see their merit.
Alternatively, if the committee is not prepared to prescribe some process around adaptive management that would ensure its actual implementation, then I suggest that the IAA should be amended to explicitly bar reliance on it. As things currently stand, it is essentially being used as a smokescreen when proponents don't know how to deal with environmental effects.
Those are the main points I wanted to make. I have five or six remaining recommendations that I will cover briefly, and I'd be happy to discuss them further during questions and answers.
Recommendation five is a reflection of existing case law, and specifically the problem of what "consideration" means. A series of cases in the last couple of years have basically said that so long as there is some consideration of an environmental effect, the decision is not reviewable; there is no error of law. In other words, there would have to be no consideration whatsoever before a court would intervene. I think everyone would agree that this is not a very high threshold. I suggest that some kind of modifier needs to be added before the term "consideration" at, for instance, proposed sections 22 and 63 to ensure that the consideration is meaningful. Whether that modifier is "meaningful" or "robust", either would be useful.
I think the mandate provisions at proposed subsection 6(2) should be cross-referenced to specific process points in the IAA to make sure that the mandate is being followed. I am also concerned with the total jettisoning of the term "significance". I think that, overall, it's a reasonable instinct: framing our environmental assessment or impact assessment around a basic binary of significance versus non-significance would be problematic. At the same time, however, having nothing at all also creates real problems and the potential for ambiguity.
I see that my time is up, so I'll wrap up there.