Thank you.
My name is Robert Parker. I'm a retired partner with Deloitte & Touche. I first got involved in privacy in 1995, on an ISO privacy task force. Then, in 2000, I joined the newly formed Canada-U.S. privacy task force that developed the generally accepted privacy principles and, most recently, the privacy maturity model.
I started a privacy practice at Deloitte, and when I retired in 2005 the practice had 40 people: 15 full-time and 25 part-time.
As mentioned, I'm with Risk Masters International, LLC. We're a group of four retired partners, three in the United States and myself in Canada. We do risk management work, including privacy work, and we teach a privacy course in the United States dealing with U.S. health care privacy requirements.
I appreciate the opportunity to present some thoughts to the committee and I look forward to the discussion.
I've identified seven areas, and I realize that's a little more than the two that David identified. I would like to focus on just four of them.
I'm going to pass by privacy breach notification. I think we need to ramp up the breach notification requirements and rules, and to specify the obligations and rights of each party when a breach occurs. I dealt with this at a U.S.-based global corporation, looking at how they handled privacy breaches involving both electronic and hard-copy documents.
Meaningful and effective consent has been discussed in a number of the documents. The issues here seem to come down to front office versus back office. The Center for Democracy and Technology did a study showing a total disconnect between what you tick on a form or click on a website and what happens in the back office. In the back office, they have to change their databases to record that consent, change every application program that reads that database to test for the consent, and then act on it accordingly. That's a huge task, and a lot of organizations have just blown right past it. That's why there is a disconnect between what people consent to and what they are often given.
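To make that back-office problem concrete, here is a minimal sketch of what "recording consent and testing for it in every application" actually means. The schema, field names, and function are hypothetical, invented purely for illustration:

```python
import sqlite3

# Hypothetical schema: the consent ticked on the web form has to live
# alongside the customer record, where every downstream program can test it.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        marketing_consent INTEGER NOT NULL DEFAULT 0  -- 0 = no, 1 = yes
    )
""")
conn.execute("INSERT INTO customers (email, marketing_consent) VALUES (?, ?)",
             ("alice@example.com", 0))

def send_marketing_email(conn, customer_id: int) -> bool:
    """Every application that touches the data must repeat this test.
    Skipping it anywhere is exactly the front-office/back-office disconnect."""
    row = conn.execute("SELECT marketing_consent FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    if row is None or row[0] != 1:
        return False  # no recorded consent: do not act
    # ... actually send the email here ...
    return True
```

The point of the sketch is that the one tick box fans out into a schema change plus a guard clause in every program that reads the table, which is why so many organizations blow past it.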
The last one is the ownership of non-provided personal information. There was a court case in Ontario a few years ago; I'm not a lawyer, and it was a very narrow case, so it can't be taken as precedent, but it dealt with human tissue. The ruling said that human tissue, once taken from a person, belonged to the hospital and not to them. I think some clarification on non-consent issues like that would be helpful.
Of the four I want to talk about, the first is collection versus retention, use, and disclosure. With the change in society right now, we have a number of individuals, millennials and so on, who will give all of their information away. They post what they ate for breakfast on Facebook and they go on Twitter. They're very free with their information. They don't see some of the problems that other groups and demographics in society see. Perhaps the issue is not so much collection as retention, use, disclosure, and the security of that information.
In 2005, after the London subway bombings, investigators could go back six months through the CCTV footage and see whom a suspect had met with. They followed it all up and successfully identified a number of the perpetrators.
In Ontario, the initial ruling was that the TTC could keep its camera recordings for 72 hours. If they didn't need them after 72 hours.... I realize that all the legislation has a national security clause that would allow you to keep them longer, but a lot of organizations are collecting information and keeping it for a long time, and they're even expected to be able to go back years for an email or a piece of correspondence.
If we look at it that way, maybe collection is not the issue so much as retention, use, and disclosure, as well as how we secure that information and nail it down really tightly so it is not used in an inappropriate manner. That's the first big one: collection versus retention, use, and disclosure.
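As an illustration of what a hard retention limit like the TTC's 72-hour rule looks like in practice, here is a small sketch that purges records past their retention window. The table, columns, and legal-hold flag are hypothetical, and it assumes timestamps are stored as ISO 8601 UTC strings so they compare correctly as text:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(hours=72)  # the 72-hour rule from the example above

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE recordings (
        id INTEGER PRIMARY KEY,
        captured_at TEXT NOT NULL,        -- ISO 8601 UTC timestamp
        on_legal_hold INTEGER DEFAULT 0   -- e.g., a national-security hold
    )
""")

def purge_expired(conn) -> int:
    """Delete anything older than the retention window, unless it is
    flagged for a lawful exception. Returns the number of rows purged."""
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    cur = conn.execute(
        "DELETE FROM recordings WHERE captured_at < ? AND on_legal_hold = 0",
        (cutoff,))
    return cur.rowcount
```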
The second one is the Internet of things, where we're using the Internet Protocol to drive "things". They could be mechanical things or system things; it doesn't matter what they are.
I'll give a couple of examples. Your car, if it's newer, has an engine management module. That module records a lot of things, including acceleration rates, deceleration rates, how fast you were going, and so on. Is that personal information? Could your car tell on you? The mechanic can gain access to it, but so can the police. In fact, an insurance company in the United States is offering to lower your premiums if you give them access to it, on the theory that drivers who do so won't have jackrabbit starts, hard braking, and excessive speed. Is this personal information? That's one example.
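For a sense of what is at stake, here is a hypothetical sketch of the kind of record such a module logs and the kind of screen an insurer might run over it. The field names and thresholds are invented for illustration, not drawn from any actual manufacturer or insurer:

```python
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    """Illustrative telemetry record of the sort an engine module logs."""
    timestamp: str    # when the event occurred
    speed_kph: float  # how fast you were going
    accel_ms2: float  # acceleration rate (negative = braking)

def is_risky(event: DrivingEvent) -> bool:
    """Hypothetical insurer-style screen: jackrabbit starts, hard braking,
    or excessive speed. Thresholds are invented for illustration."""
    return (event.accel_ms2 > 3.5      # jackrabbit start
            or event.accel_ms2 < -6.0  # hard braking
            or event.speed_kph > 130)  # excessive speed
```

Once data in this shape is keyed to a vehicle, and therefore to its driver, the question of whether it is personal information answers itself.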
Your dashcam would be another example. Is that personal information? Can the police seize it? Do they need a court order, etc.? There's a whole lot coming out in this Internet of things, which I think we should take a look at when we look at the legislation.
The third one of the four is digital exhaust. "Digital exhaust" can be loosely defined as the residual data left over once a transaction is complete. You consummate an Internet transaction and there's all this digital exhaust: what time the transaction occurred, what happened here, what happened there, who was involved, what the mailing address was, all of that information. That can be resold, and certain people in the United States are reselling it. You might have seen the Federal Communications Commission issue over the weekend that dealt with part of that.
What we have here is this digital exhaust, this secondary information about the transaction. Is it yours? Does it belong to the organization that collected it? What rights do you have over its use, and particularly over its sale to other parties who would say, "These are your behavioural patterns", and use it in ways you perhaps would not want?
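To illustrate the imbalance, here is roughly what a single web purchase can leave behind. The transaction the customer thinks they made is one line; the exhaust is everything else. The fields are representative examples, not an exhaustive or authoritative list:

```python
# The transaction the customer thinks they made:
purchase = {"item": "toaster", "price": 39.99}

# The digital exhaust it leaves behind (representative fields):
exhaust = {
    "timestamp": "2017-03-27T14:02:11Z",   # what time the transaction occurred
    "client_ip": "203.0.113.42",           # who was involved, and roughly where
    "user_agent": "Mozilla/5.0 ...",       # device and browser fingerprint
    "referrer": "https://example.com/ad",  # which ad or link brought you in
    "shipping_address": "...",             # the mailing address
    "pages_viewed": ["/toasters", "/kettles", "/toasters/acme"],  # behaviour
}
# Resold and aggregated across sites, records like this become
# "these are your behavioural patterns".
```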
The fourth one is the adequacy and appropriateness of security. This ties back to the first topic: if we're going to collect and retain all this information, we have to nail it down. The problem is that we're building higher walls, thicker walls, and deeper and wider moats, and they aren't working. The bad guys still get in. There are still data breaches.
A couple of partners at PricewaterhouseCoopers in the United States suggest a paradigm shift: let everybody in. You know, "Keep your friends close but your enemies closer." You would build a profile of everybody who visited your website, look at what they did, and build an expectation model. Combine this with big data and you could create a profile of each of these people. If they went outside that profile, you could stop them right then and there.
Instead of a fortress mentality, we need a different paradigm, but that means we're collecting information about identifiable individuals and building profiles on each and every one of them. Is that something we want to do, or something we want PIPEDA to address? That's what's coming down the road: a new paradigm in how security is going to be done.
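The "let everybody in and watch them" model described here is essentially behavioural anomaly detection. A minimal sketch, with an invented threshold and no resemblance to any vendor's actual product, looks like this:

```python
from collections import defaultdict
from statistics import mean, stdev

class VisitorProfiler:
    """Builds an expectation model per visitor and flags deviations."""

    def __init__(self):
        # visitor id -> history of observed requests-per-minute
        self.history = defaultdict(list)

    def observe(self, visitor: str, rpm: float) -> None:
        self.history[visitor].append(rpm)

    def is_anomalous(self, visitor: str, rpm: float,
                     z_cutoff: float = 3.0) -> bool:
        """Flag activity more than z_cutoff standard deviations outside
        the visitor's own historical profile."""
        past = self.history[visitor]
        if len(past) < 10:
            return False  # not enough history to profile yet
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            return rpm != mu
        return abs(rpm - mu) / sigma > z_cutoff
```

The privacy question raised in the testimony is visible right in the data structure: the profile is keyed to an identifiable individual, so the security control is itself a collection of personal information.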
Those are the four key topics. At the end of this session, I'm also pleased to answer any questions on the three topics I passed over.
I will mention the generally accepted privacy principles. They were developed by the joint Canada-U.S. task force and, because Canada was involved, they're published in both official languages, so they're readily available, and I can get copies for the committee. The document has 10 principles and 72 criteria, and it's very prescriptive: it deals with breaches, notification, and so forth. Because it was so prescriptive, we went on to the privacy maturity model. The privacy maturity model takes the CMM, the capability maturity model developed at Carnegie Mellon for the U.S. Department of Defense, and applies it to privacy, describing the stages an organization goes through.... I can send that to you as well.
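For reference, the maturity model rates each criterion on the five CMM-style levels. The level names below follow the CMM; the roll-up scoring logic is a toy illustration only, not the model's actual assessment method:

```python
from statistics import mean

# The five CMM-style maturity levels used by the privacy maturity model.
LEVELS = {1: "ad hoc", 2: "repeatable", 3: "defined", 4: "managed", 5: "optimized"}

def overall_maturity(criteria_scores: dict) -> str:
    """Illustrative roll-up: average the per-criterion levels (1-5)."""
    return LEVELS[round(mean(criteria_scores.values()))]

# Example: three of the 72 criteria, each scored 1-5.
print(overall_maturity({"breach notification": 2,
                        "consent": 3,
                        "retention": 4}))  # -> "defined"
```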
Thank you for your time. I know I've used my 10 minutes and a few seconds, but I appreciate the opportunity. As you might feel, I'm passionate about privacy.