Thank you.
Good afternoon, and thank you for this invitation to appear before the committee.
My name is Jonathan Histon, and I have been involved in the study of aviation human factors and aviation safety for just under 17 years. I currently lecture in aviation safety and aviation human factors in the commercial aviation management program at Western University in London, Ontario, and hold an adjunct appointment as an assistant professor in the department of systems design engineering at the University of Waterloo. In addition, throughout my career, I have consulted with airlines, air traffic control organizations, and equipment manufacturers, among others.
My expertise is in the area of human factors, or the relationship between human operators, technology, and system design. As director of the Humans in Complex Systems lab, I have led projects examining airspace design and its effect on complexity, UAV integration into controlled airspace, simulator use in air traffic controller training, and the use of flight data to identify emerging human factors challenges.
From my perspective, the philosophy behind the core of Canada's aviation safety approach, safety management systems, reflects what I understand to be best practice in the safety literature: a focus on continuous improvement, on data collection and data-driven decision-making, and on fostering a learning culture that understands that mistakes and errors will occur and that what matters most is how the system responds, mitigates, and corrects.
"System" is a key word. I think it is most valuable to see safety as an emergent property, something that emerges from the interactions between the many parts of a system. The task of getting an airplane or helicopter off the ground and to its destination safely is one that requires contributions from an immense range of talent: mechanics, controllers, ground crew, flight crew, and all the broader systems behind them.
The design of how all these parts interrelate and work together is critical to establishing effective defences that prevent catastrophic situations from occurring. Perhaps most importantly, a system perspective helps move attention away from errors made by individuals and directs it towards the broader context those individuals are operating in. It forces us to question how the system could be improved for the future.
I want to use my remaining time to briefly raise some key challenges that I see facing the industry. One of the critical challenges that many, if not all, organizations face is what the literature terms procedural drift, practical drift, or the normalization of deviance. In short, these terms capture the observation that how work is actually done is often quite different from how it should be done according to written procedure.
The difference is usually a consequence of the multiple pressures workers and managers face: time pressure, equipment malfunctions, poorly designed or repurposed equipment, and simply changing conditions. It's a complex problem, and I'm not here to offer easy solutions. Introducing more rules or stricter penalties with the well-intentioned goal of increasing accountability and deterrence runs the risk of creating adversarial relationships and an atmosphere of blame and cover-up, and it is not a long-term solution if the underlying pressures created by the system's design aren't addressed.
This leads me to a second challenge: preserving and enhancing the ability of organizations to observe their own performance through the collection and protection of safety data. Processes we've just heard about, such as confidential reporting systems, immunity protocols, and operational data analysis, can provide key insights, including helping to identify cases of normalization of deviance. But collecting such data also requires a great deal of trust, as you just heard: trust that the data collected won't be used for punishment or otherwise misused for non-safety purposes.
I think data collection can be particularly challenging in organizations that do not have the scale to support the processes that the big airlines, as one example, have. Confidential reporting systems are great in theory, but may not be all that confidential in a very small operation. Across all organizations, however, I firmly believe that the more an organization knows about where its vulnerabilities are, where mistakes are being made, the better that organization can adapt and respond.
A final challenge that I'll direct your attention towards is determining how to assess training needs in a rapidly changing technological environment. New technologies are coming, whether automated ground service vehicles at airports or new autopilot modes. A lot of things are going to change. New technologies and new forms of automation will change training needs and raise questions such as how proficient operators have to be in using the automation, and how proficient they have to be when the automation isn't available and they must operate without it. Are there legacy training requirements that are no longer appropriate, and has due consideration been given to the skill foundations built by that legacy training?