Thank you.
Good evening, everybody.
My name is David Eaves. I'm a lecturer here at Harvard University. I teach technology in government and digital transformation at the Harvard Kennedy School. That said, I was born and raised in Vancouver Quadra, so I know Ms. Murray, who may be in attendance. I used to live in her riding until a few years ago.
I have also been advising on and thinking about transformation for about 15 years now. In fact, I appeared twice before the ETHI committee to talk about open data and my framework around open data, open information and open dialogue. It kind of turned into the policy framework that I think is still broadly used to organize transparency in government.
Today I want to talk a little bit about digital transformation and its impact on privacy. Particularly, I'm concerned with issues of governance and trust. One thing the chair may wish to note, if he is so inclined, is that just today I published an article in Policy Options about lessons from Estonia. It deals with some of the governance issues that I think are particularly pressing, questions that need to be asked. If it is of interest, it might be worth translating so that the committee can share it with all its members.
First, I just want to level set about what we're actually talking about when we're talking about Estonia, and what Estonia has done that makes it unique and worth talking about. There are really three things I think the members need to take away about what Estonia has done.
The first is that it has created a set of what we would call canonical databases, where it stores information about its citizens—that is, where you live, what your driver's licence number is and so on. All of this information is stored in databases, but each kind of information is stored in a single, authoritative database: there is only one database for addresses, one for drivers' licences, and so on for each other kind of record.
The second is that the information in these databases is linked together because every citizen has a unique identifier. Everybody has their own number. That number gets attached to the information in those various databases, so it's easy to pull disparate information about a citizen together to get a very clear view of who that person is, and then to offer that information to different parts of government as they deliver services. This is a very different model from what you would find in most countries, including Canada, where these databases tend to be what my colleagues refer to as siloed. The information is actually stored in several places. It doesn't get shared. It's hard to get a full picture, it's hard to pull together all the information you have about someone, and that's why you have to keep collecting it over and over again.
Finally, the third big piece the Estonians have done is that they've gathered information, connected it to individuals through unique IDs and then made those databases—what I want to call “core infrastructure”—available to anybody who works in government, across all government agencies, so they can then leverage it to build new services or improve the services they already offer.
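To make that architecture concrete, here is a minimal sketch, purely illustrative and not Estonia's actual implementation, of canonical registries that are each authoritative for one kind of record and are linked by a single national identifier. All of the names and values are made up:

```python
# Purely illustrative sketch: canonical registries keyed by one national ID.
# The names, IDs and values here are hypothetical, not Estonia's real system.

from dataclasses import dataclass, field


@dataclass
class Registry:
    """One authoritative database per kind of record (addresses, licences, ...)."""
    name: str
    records: dict = field(default_factory=dict)  # national_id -> value

    def set(self, national_id: str, value: str) -> None:
        self.records[national_id] = value

    def get(self, national_id: str):
        return self.records.get(national_id)


# A single canonical registry for each kind of information.
addresses = Registry("addresses")
licences = Registry("drivers_licences")

# The same unique identifier links a citizen's records across registries.
addresses.set("38001085718", "123 Main Street, Tallinn")
licences.set("38001085718", "DL-4491-220")

# Any agency can assemble a full picture by joining on that one identifier,
# instead of collecting and storing its own copy of the data.
citizen_view = {
    "address": addresses.get("38001085718"),
    "licence": licences.get("38001085718"),
}
print(citizen_view)
```

The point of the sketch is simply that the join happens on one identifier; that single design choice is what produces both the convenience and the privacy concerns I discuss below.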
Those three innovations, for me, are at the core of what we're talking about, and if you don't understand them, it's very hard to talk about the innovations, the costs or the dangers that face us if we want to go down that path. That is why I wanted to start by level setting the committee on those core issues.
Why does this matter? Just speaking a little bit to my predecessor Amanda Clarke's point, once you have this infrastructure in place, it's much easier to innovate and build new services. The core promise that the Estonian government makes to its people is that, by law, it will only ever ask for a piece of information from you once. If, say, the Canada Revenue Agency asks for your address, that means that if you go to the passport office, they'll already have your address on file and you won't have to give it to them again. The advantage of this is that, as you're building services as a government, you don't have to re-collect and re-store all this information. You have it in a single place, so you can leverage it when you build a new service and not have to ask for that information again, nor do you have to build all the infrastructure in that service to store and manage that information.
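As a rough illustration of that "once only" promise, here is a small sketch, assuming a shared address registry like the one above. The function and registry names are hypothetical, not any government's actual interface:

```python
# Illustrative sketch of the "once only" principle: a service consults the
# shared registry first and only prompts the citizen if the data is missing.
# Registry contents and names are hypothetical.

def get_address(national_id: str, registry: dict, ask_citizen) -> str:
    """Return the citizen's address, collecting it from them at most once."""
    address = registry.get(national_id)
    if address is None:
        # Only if no agency has ever collected it do we ask the citizen,
        # then store it in the shared registry for every other service.
        address = ask_citizen("What is your current address?")
        registry[national_id] = address
    return address


# Example: the tax agency collected the address earlier...
shared_address_registry = {"38001085718": "123 Main Street, Tallinn"}

# ...so the passport office never has to ask for it again.
address_for_passport = get_address(
    "38001085718", shared_address_registry, ask_citizen=input
)
print(address_for_passport)
```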
There are three key questions I would really like the committee to think about.
The first is that, as you're thinking about privacy and information, I would love for you to be at least asking this question: What is the threat model that we're trying to protect ourselves against? There are predominantly two types of concerns people have about privacy, particularly in government. One is that they're worried about an external actor attacking the system and gaining access to the data that the government stores about people. This is typically a foreign power. The fear is that it will then use that information to undermine the government, or possibly even collapse confidence in government institutions and thereby cause people not to want to access information or not to trust the government.
The other core threat model that I hope a lot of time is actually spent thinking about is the internal threat model. I'm actually much more concerned about what my own government can do to me than about what a foreign government might do to me. I'm significantly more concerned about what my own government can do to me than about what a private actor might be able to do to me. This can range from a government engaged in broad surveillance down to relatively narrow abuses by individuals inside government.
I'm particularly concerned about, for example, an ex-husband using his access to government information to track where his former spouse is living and what she is doing. We certainly have ample history of that happening in all sorts of places, particularly in police forces, but in other places as well.
Even in small ways, this happens and comes up on our radar. People may remember that when Rob Ford went to the hospital, his records were illegally looked at by multiple people within the hospital records system, and relatively recently, two of those people were charged and fined. That type of access, what you can do with someone's personal information and the way you can share it as an internal actor, in some ways, concerns me more than what an external actor can do. Who we are worried about matters a lot here.
The second piece is that, while I am concerned about internal actors, this does not mean I want to create so many burdens around using these types of systems, or gaining access to them, that no one can actually work with them. I very much want to echo Professor Clarke's point that increasing security can be good, but if it comes at the cost of usability, you create a system that's highly secure but that no one can access or use. I have students here who work in the military who tell me their laptops take 45 minutes to boot because they have so much security on them. As a result, people don't tend to use those laptops. I'm not sure we want a system that's so secure that nobody will end up using it.
The third is that privacy is not actually absolute. We want some flexibility. I may not want you to be able to look at my health care records at any point, but if I'm dying in the street, as my colleague Jim Waldo says, I definitely want you to have access to my health care records, and I might not be in a place where I'm able to give you permission to do that. We need a system that, while secure, provides some flexibility.
My key recommendation on this particular piece is this: before any technical work happened on their systems, the Estonians did a lot of work to really update their privacy laws for the 21st century and, more importantly, to create systems of logs and audits, so that individual citizens could see who was accessing their data, pose questions about whether that access was legitimate, and challenge the authorities accordingly.
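To show what that kind of logging and auditing might look like in the simplest possible terms, here is a toy sketch, not the actual Estonian data-tracking system, in which every read of a citizen's data leaves an audit record the citizen can later inspect:

```python
# Toy sketch of the logging/audit idea: every read of a citizen's data is
# recorded, and the citizen can list who looked at it, and why, and dispute it.
# Names, IDs and fields are hypothetical.

from datetime import datetime, timezone

access_log: list = []


def read_record(registry: dict, national_id: str, official: str, reason: str):
    """Read a record AND leave an audit trail the citizen can inspect."""
    access_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "who": official,
        "whose_data": national_id,
        "registry": "addresses",
        "stated_reason": reason,
    })
    return registry.get(national_id)


def my_access_history(national_id: str) -> list:
    """What a citizen would see: every access to their data, with who and why."""
    return [entry for entry in access_log if entry["whose_data"] == national_id]


addresses = {"38001085718": "123 Main Street, Tallinn"}
read_record(addresses, "38001085718", official="clerk_042", reason="passport renewal")

# The citizen can now review this access and challenge it if it looks illegitimate.
for entry in my_access_history("38001085718"):
    print(entry["when"], entry["who"], entry["stated_reason"])
```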
The second thing that I'm particularly concerned about is whether building this type of infrastructure might break the social contract that government has with its citizens. This may be humorous to hear, but most people are quite comfortable giving information to their government because they believe their government does not have the capability to actually use that information to know very much about them. They're willing to hand information over because they don't actually think government has the competence to weave information together to create a story about them.
In the type of world that the Estonian government has created, this is simply not true anymore. The government's ability to pull together information about someone and actually really understand the totality of that person's life is vastly increased. Estonia has a very specific history and context that allowed that to happen. It's not clear to me that this exists in Canada, so I would strongly encourage the committee to do outreach to the Canadian public to understand how much comfort there is in the public for them to have that type of experience, what they want the government to know about them and what they want the government to be able to do with it.
The particularly large challenge I think you will have is that citizens will tell you they want two things simultaneously. They will want you to treat them as Amazon does, which means they will want you to recommend new services to them, and they will want customized experiences. They will not want to have to re-enter their information over and over again, but at the same time they will say, “Don't you dare use my data to figure out that I have not been filling out my tax forms correctly, or that I actually owe money to the government for some other reason, and I don't want you to invade my life in ways that will make me unhappy.” It's not clear to me that you can have one without the other, or if you can, it's going to require a fair amount of real thinking to get to that place. I don't think we've even begun to have the public conversation needed to engage and educate the public about how to get to that place and to gauge what their comfort levels are about such a possible future.
Finally, I'm very concerned about who's going to end up building—and more importantly, controlling—the infrastructure that Estonia has built: these database systems and the unique identifiers that come with them. I wrote a case recently about a similar system in India, and I went in thinking there was a way to build this infrastructure so as to prevent a future political actor, or some other future actor, from abusing it, and the short answer is that there is not. There is not going to be a technology solution to the types of privacy problems we're talking about. There may be technology that can help, but ultimately we're going to be relying on governance solutions. What is the governance that's going to protect the public from current actors and from future actors?
There are three futures that I can imagine for us. One is that we decide that building this infrastructure is simply too scary, that a government that knows this much is not one that we're comfortable with.
There's a second model, which is that we build it the way the Estonians did: highly distributed, so that different ministries own different parts of this core infrastructure and share their databases with other ministries. The dangerous piece about this is that I actually think the governance is in some ways quite weak; a ministry may be unwilling to cut off another ministry's access to data when it's doing something inappropriate, because it fears that ministry retaliating by cutting off its own access in turn.
Finally, the third option might be that we build it in a way that's highly centralized, where there may be new governance models around the central institution.
I'm almost done, sir.