Thank you. It's great that you came back.
Our first witness said something interesting. He spoke about self-regulation and noted that some of the industry players we have maintain standards while others don't. He said self-regulation worked very well as long as you had an enforcement mechanism.
I sometimes think my colleagues on the other side hear self-regulation as the market mantra. If that were the case, Somalia would be a centre of international innovation—but it's not, because they don't have the enforcement mechanisms to decide what is good activity and what is bad activity.
In our case it comes down to breach notification. That's one of the key bottom lines, I think. If my data is breached, it's not just what site I go to or what I'm interested in or where I play golf, but the fact that I use my credit card to buy stuff. If that data is breached, my security is at risk.
Under the rewrite that's being planned by this government, their language is interesting. They say it has to be a “real risk”—not a perceived risk, but a real risk—“of significant harm”. If I were a corporate lawyer, I'd say I wouldn't tell anybody that their data had been breached. “Significant harm” means what? Nobody's going to come and kill you.
It seems that the government is setting the bar so high that companies have an opt-out mechanism and are not going to report breaches even when it's credit card information or personal information, something that the cyber hackers would love. Do you think we need to clarify at what point a company has to inform you that the cyber hackers have been visiting your data?