Evidence of meeting #147 for Public Safety and National Security in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Deborah Chang  Vice-President, Policy, HackerOne
Steve Waterhouse  Former Information Systems Security Officer, Department of National Defence, As an Individual
Jobert Abma  Founder, HackerOne
Ruby Sahota  Brampton North, Lib.

4:05 p.m.

Conservative

Pierre Paul-Hus Conservative Charlesbourg—Haute-Saint-Charles, QC

Right.

4:05 p.m.

Liberal

The Chair Liberal John McKay

You have 20 seconds left.

4:05 p.m.

Conservative

Pierre Paul-Hus Conservative Charlesbourg—Haute-Saint-Charles, QC

What is the best way to make people aware?

4:05 p.m.

Former Information Systems Security Officer, Department of National Defence, As an Individual

Steve Waterhouse

Financial institutions should invest a little more in training their customers. Training sessions should not be delivered as videos on the Internet, where it is easy to become distracted.

The training should be interactive so that we know whether the customers have fully understood. When training is done on screen, especially after some time, a customer can push the “pause” button and start again, which is great. If not, the customer can also decide to press the “play” button and go and do something else in the meantime. You never know whether they have understood the information. The box will be checked off, but you will never know whether the material has been absorbed properly.

4:05 p.m.

Liberal

The Chair Liberal John McKay

Thank you, Mr. Paul-Hus.

Mr. Dubé, you have seven minutes.

February 4th, 2019 / 4:05 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

Thank you, Mr. Chair.

I want to ask a question of the folks from HackerOne, since the example I will use is what's going on currently in the U.S.

The NSA does what they call a vulnerabilities equities process, or VEP. Probably about a year ago I was asked about this by a journalist, because our equivalent body here—not quite equivalent; it's not always exactly analogous—the CSE, doesn't have the same kind of transparent process.

I wonder if you could talk about whether that process—in your mind, given the work that you do—has been successful in achieving more transparency when the agencies themselves are discovering vulnerabilities in software that they could potentially use to glean all kinds of information on people?

4:05 p.m.

Founder, HackerOne

Jobert Abma

Yes, I'd be happy to take that.

The U.S. government has spent a lot of money and time in securing its own systems. Our data shows that after it established a transparent process to work with the hacker community, over 5,000 security vulnerabilities were identified, for which hundreds of thousands of dollars have been awarded as an additional incentive to those hackers to look into those systems.

The number of vulnerabilities discovered by the hacker community is much greater in volume than some of the vulnerabilities identified by the U.S. government itself. The fact that there are so many of them shows that working with hacker communities is the right thing to do to uncover more security vulnerabilities.

4:05 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

That's an interesting point, because it leads me to another question of mine, but let me back up for a minute. If law enforcement wants to unlock, say, an iPhone to obtain information that's on it, there's obviously a process in place to do so by obtaining a warrant. Obviously, it might vary in both our countries, but I think the spirit of it is similar enough that we can discuss it.

My question then becomes this. If a hacker wants to do good, let's say, as a white hat hacker, the hacker might look to the government thinking that they are doing the right thing by providing that information, but it doesn't necessarily then go back to the company, and people remain vulnerable because that agency might have an interest in keeping that vulnerability.

Do you think there should be some kind of law or regulation in place that takes the same kinds of checks and balances placed on the police when they obtain a warrant to unlock a phone, and applies those checks and balances to national security agencies as well? They would say that if you want to use a vulnerability, then you have to go through the same hoops required of law enforcement to protect people's privacy.

4:05 p.m.

Founder, HackerOne

Jobert Abma

It is an ethical dilemma that I think is very important to cover. The problem that we've seen so far is with governments buying zero-day vulnerabilities, meaning vulnerabilities that are not known to the vendor who is there to patch them. These are currently being used in warfare to extract information or intelligence that is currently unknown to them. By not disclosing them to the vendor, you're also putting your consumers or citizens at risk.

We believe that zero-day vulnerabilities should be reported to the vendor no matter what, but we're addressing that from a different side. We're addressing that by leveraging the hacker community to find the same vulnerabilities that either their government or criminals have found, which will then be disclosed to the vendor directly. That is our way of making sure that those vulnerabilities are becoming known to the vendor.

It would be amazing, in my opinion, if the government would also have a law like that, because I don't believe it is worth the risk for your own citizens. However, I think we're far away from having that today.

4:10 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

I appreciate that response.

My final question goes to you as well, Mr. Waterhouse.

This is the question of the role of the media, essentially, and the fact that some of these vulnerabilities get reported on. One example jumped to mind. I don't recall if I saw this in the news coverage or if someone just told me this anecdotally, so I could be wrong, but last week, when the vulnerability with FaceTime on iPhones and iPads was found, the individual who had unintentionally found the vulnerability was then asked by Apple to go through their process, almost as if the person were going for the bug bounty without being a hacker.

I'm just wondering about the cases where some of these vulnerabilities get found by accident and reported in the media. What impact does that have on how things play out, both for the vulnerability itself and also for the effort to try to fix it afterwards?

Mr. Waterhouse, you can comment on that.

4:10 p.m.

Former Information Systems Security Officer, Department of National Defence, As an Individual

Steve Waterhouse

It's something that will be ongoing for the rest of our lives. Software is so incomplete. We have billions of lines of code right now in all kinds of applications, especially operating systems, that it's virtually impossible to.... Because the competition is very strong in the market, companies just push out the software, incomplete as it is, and say they'll fix it as they go. This is one of the reasons we're getting these kinds of findings once in a while.

By an engineering analysis, people back at the company would say, well, nobody will think about doing that. But guess what? In the real world, we have people who are just doing whatever they seem interested in finding out. And, yes, by accident, they find these vulnerabilities, as we call them today. Should they be disclosed mandatorily? Of course.

The youngster and his parents went on to disclose it, and lawfully. They didn't want to exploit the situation; they just wanted to report it, and they even got turned down by the company.

Certainly I agree with you on this. There should be a law that says to a company that whenever someone comes to them, listen to that person, or whoever the party is who is bringing you the information, and act upon it promptly. If not, the company should be fined.

4:10 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

I think I have about 30 seconds left if you guys want to jump in there.

4:10 p.m.

Vice-President, Policy, HackerOne

Deborah Chang

I'd like to jump in. I think that fundamental to this discussion is privacy.

When I read about that situation.... Privacy, in our opinion, is a fundamental right, and your data is a fundamental right. A company should be strongly encouraged to protect one's right. In that case, I think the mom contacted Apple a couple of times. She was protecting her right, her son's right and her family's right. Not to have a VDP or a way to handle these issues infringes on one's privacy rights.

4:10 p.m.

Liberal

The Chair Liberal John McKay

Thank you, Mr. Dubé.

Mr. Picard, you have the floor for seven minutes.

4:10 p.m.

Liberal

Michel Picard Liberal Montarville, QC

HackerOne, when you submit your report on vulnerabilities to financial service providers, what kind of feedback do you get from your recommendations? Do they implement them right away, or do they evaluate the cost of implementation versus the cost of taking the risk not to?

4:10 p.m.

Founder, HackerOne

Jobert Abma

At the end of the day, a mature security organization is one in which every risk or vulnerability that is uncovered, regardless of its source, is given the investment needed to protect the organization from the reported threat.

We've seen many organizations, including financial organizations, that have put in different defences based on the vulnerabilities that have been reported to them, in order to eradicate entire vulnerability classes or to protect consumers against security threats. Two-factor authentication, for example, is often used when you sign in to a bank account.
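As a concrete illustration of the two-factor authentication mentioned above, here is a minimal sketch of how a time-based one-time password (TOTP, the kind generated by a phone's authenticator app and checked by a bank at sign-in) can be computed and verified. This is an editorial example following RFC 6238, not a description of any witness's system; the function names are illustrative.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: HOTP over a time counter."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted, t=None):
    """Constant-time comparison, so digit-by-digit matches don't leak."""
    return hmac.compare_digest(totp(secret, t), submitted)
```

Because the code depends on a shared secret and the current 30-second window, an attacker who steals only the password still cannot produce a valid second factor.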

The most common security vulnerabilities are usually pretty straightforward for companies to address, but especially with the data we have, we can help an organization to prioritize in-depth defences in order to better protect the organization in the long term.

4:15 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Waterhouse, on the subject of the products that financial services use, I would like to know whether their reported vulnerabilities are at the beginner level, unimportant in a practical sense, or whether the programs are now so sophisticated that, despite everything, the vulnerabilities still require an extremely high level of precision or expertise.

Where are we at the moment?

4:15 p.m.

Former Information Systems Security Officer, Department of National Defence, As an Individual

Steve Waterhouse

Mr. Picard, we see both extremes. Last week, for example, the personal information of 500 million customers of the biggest bank in India was exposed to the general public because the server that contained that information was not secured and had no password. A system within that megabank's network was vulnerable, plain and simple. It was a basic error; personally, I would call it a rookie mistake.

Today, Canadian financial institutions use the best equipment they can find. They have sufficient resources to afford it. The fact remains that these are commercial products, available to any company in the world and with the same kinds of vulnerabilities. So they need teams that are able to conduct checks and more checks, over and over again, in a constant cycle, in order to determine whether the system is still solid and valid. Most SMEs have systems installed for them; they say they have a firewall, they believe they are protected and they stop worrying. Unfortunately, some of that equipment is vulnerable. So the checking must be constant.

4:15 p.m.

Liberal

Michel Picard Liberal Montarville, QC

In terms of strategy, as a financial service provider I may choose to split my data as much as possible, to complicate things for anyone wanting to put the data together and create some intelligence out of it. Or, if I go to a concept of open banking, I can centralize everything on one server for performance and efficiency. Which seems to be the good strategy? We have both strategies on the table for discussion these days.

4:15 p.m.

Founder, HackerOne

Jobert Abma

One of the problems we've seen, especially with some of the more recent data breaches, is that centralization of data is becoming a problem. It makes it easier for the organization to protect itself against certain risks, because there is only one component it has to defend. The problem is that when things do go wrong, through either a misconfiguration on the organization's side, as happened in India, or negligence or a vulnerability in third party software, the consequences are usually too great to manage. Decentralization, on the other hand, is much harder to maintain. There are a lot more moving components, but from a data privacy perspective, it does look like the right way to go, the right strategy to take.

When you're talking about the insights that can be gained when it is a central system, the same thing can be achieved with multiple, decentralized systems: instead of using the data themselves, extract the insights from that data and use only those to give recommendations or do the data analysis that is required.

At the same time, a lot of organizations are moving to the cloud, which poses essentially the same problem that large organizations face once they have centralized a lot of their data. We are seeing an uptick in the number of breaches that happen because people are unaware of some of the consequences of putting data into a system that they don't fully understand. This also goes back to Mr. Waterhouse's point that consumers don't read the manuals, but sometimes organizations also don't understand the threat that they're putting themselves up against by moving into new territory.
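The idea of sharing derived insights rather than raw data, as described above, can be sketched in a few lines. This is an editorial illustration with hypothetical record fields, not a description of any witness's system: each decentralized store computes local aggregates, and only those aggregates, never customer records, reach the central analysis.

```python
from dataclasses import dataclass

@dataclass
class ShardSummary:
    """Aggregate figures a decentralized store is willing to share."""
    count: int
    total: float  # e.g. sum of transaction amounts (hypothetical field)

def summarize(records):
    """Runs locally on each store; the raw records never leave it."""
    amounts = [r["amount"] for r in records]
    return ShardSummary(count=len(amounts), total=sum(amounts))

def combined_average(summaries):
    """Central analysis works only with the shared aggregates."""
    n = sum(s.count for s in summaries)
    return sum(s.total for s in summaries) / n if n else 0.0
```

Combining the per-shard summaries reproduces the global average transaction amount, so the analysis survives decentralization even though no shard ever exposes an individual customer's records.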

4:15 p.m.

Liberal

Michel Picard Liberal Montarville, QC

I have just one minute left.

We have software that is vulnerable and we are beginning to look at the use of artificial intelligence to help us monitor what that software does not do well.

Should we put our trust in a system blindly? Artificial intelligence is still programmed by humans.

4:15 p.m.

Former Information Systems Security Officer, Department of National Defence, As an Individual

Steve Waterhouse

The situation is the same as for today's software, Mr. Picard. Programmers produce operating systems that are incomplete, to which we attach information-processing software that is itself incomplete.

You are asking me whether artificial intelligence will be better. I have been told that supposedly cutting-edge security measures to prevent that kind of behaviour do exist. However, I do not believe that the new software will be free from all flaws.

4:20 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Do you have a final word on that, HackerOne?

4:20 p.m.

Founder, HackerOne

Jobert Abma

Artificial intelligence, in my opinion, is a very important technology that we should leverage as much as possible. At the end of the day, we believe that where people work, people will make mistakes, and artificial intelligence alone is not going to protect us against these threats. Where artificial intelligence can be used to come up with defences that protect us better, we believe that is the right thing to do, but it is not a permanent fix or a permanent solution to protect oneself against security threats.

4:20 p.m.

Liberal

The Chair Liberal John McKay

Thank you, Monsieur Picard.

Mr. Motz, go ahead for five minutes.

Oh, hang on, there's some confusion here.

Mr. Paul-Hus.