Thank you, Mr. Chair.
I am Tristan Harris. It's a pleasure to be with you today. My background was originally as a Google design ethicist, and before that I was a technology entrepreneur. I had a start-up company that was acquired by Google.
I want to echo many of the comments that your other guests have made, but I also want to bring the perspective of how these products are designed in the first place. My friends in college started Instagram. Many of my friends worked at the early technology companies, and they come from a similar background.
What I want to avoid today is getting trapped in a game of whack-a-mole. There are literally trillions of pieces of content, bad actors, kinds of misinformation and deepfakes out there, and they all present a whack-a-mole game in which we constantly search for these things and are never going to be able to find them all.
What I'd like to do today is offer a diagnosis, really just my opinion, of the centre of the problem, which is that we have to recognize the limits of human thinking and action. E.O. Wilson, the great sociobiologist, said that the real problem of humanity is that we have paleolithic emotions, medieval institutions and god-like technology. This describes the situation we are in.
Technology is overwriting the limits of the human animal. We have a limited ability to hold a certain amount of information in our heads at the same time. We have a limited ability to discern the truth, so we rely on shortcuts, such as believing what other people say is true, or believing something because a person I trust said it. We have a limited ability to discern what is truthful using our own eyes, ears and senses. In the realm of deepfakes, if I can no longer trust my own eyes, ears and senses, then what can I trust?
Rather than getting distracted by hurricane Cambridge Analytica, hurricane addiction and hurricane deepfakes, what we really need to do is ask what the generator function is for all of these hurricanes. The generator function is basically a misalignment: technology is not designed to accommodate the human animal, in almost an ergonomic sense.
Think of ergonomics. I can hold a pair of scissors in my hand and use it a few times, and it will get the job done. However, if it's not geometrically aligned with the way my muscles work, it starts to stress the system. If it's highly geometrically misaligned, it causes enormous stress and can break the system.
Much like that, the human mind, our emotions and our ability to make sense of the world have a kind of ergonomic capacity. We have a situation where hundreds of millions of teenagers, for example, wake up in the morning, and the first thing they do after turning off their alarms is turn their phones over. They see photo after photo after photo of their friends having fun without them. This is a totally new experience for the hundreds of millions of teenage human animals who wake up this way every morning.
This ergonomically breaks our capacity to get an honest view of how much fun our friends are having. It's a distortion, and one that starts to bend and break our normal notions and our normal social construction of reality. That's what's happening in each of these different dimensions.
If you take a step back, the scale of influence that we're talking about is unique. This is a new form of psychological influence. Oftentimes what is brought up in this conversation is, “Well, we've always had media. We've always had propaganda. We've always had moral panic about how children use technology. We've always had moral panic about media.” What is distinctly new here? I want to offer four things that are distinctly new and unprecedented about this situation.
The first is the embeddedness and the scale. We have 2.2 billion human animals who are jacked into Facebook. That's about the number of followers of Christianity. We have 1.9 billion humans who are jacked into YouTube. That's about the number of followers of Islam. The average person checks his or her phone 80 times a day. Those are Apple's numbers, and they are conservative. Other numbers say that it's 150 times a day. From the moment people wake up in the morning and turn off their alarms to the moment they set their alarms and go to sleep, basically all these people are jacked in. The second you turn your phone over, thoughts start streaming into your mind that include, “I'm late for this meeting”, or “My friends are having fun without me.” All of these thoughts are generated by screens, and it's a form of psychological influence.
The first thing that's new here is the scale and the embeddedness. Unlike other forms of media, because we check these things all the time, they have really embedded themselves in our lives. They're much more like prosthetics than devices we use. That's the first characteristic.
The second characteristic that's different and new about this form of media is the social construction of reality. Other forms of media, such as television and radio, did not give you a view of what each of your friends' lives was like or of what the people around you believed. You had advertising that showed you a theoretical couple walking on a theoretical beach in Mexico, but not your exact friends walking on that specific beach, and not the highlight reels of all these other people's lives. The ability to socially construct reality, and especially the way we socially construct truth by looking at what a lot of other people are retweeting, is another new feature of this form of psychological manipulation.
The third feature that's different is artificial intelligence. These systems are increasingly designed to use AI to predict the perfect thing that will work on a person, to calculate the perfect thing to show you next. When you finish a YouTube video and the autoplay countdown runs five, four, three, two, one, you have just activated a supercomputer pointed at your brain. That supercomputer knows a lot more about how your brain works than you do, because it has seen two billion other human animals watch this video before. It knows that the perfect thing that got them to watch the next video was X, so it's going to show another video just like X to this human animal. That's a new level of asymmetry: self-optimizing AI systems.
The fourth distinctly new thing here is personalization. These channels are personalized. Unlike TV, radio or the propaganda of the past, these systems can provide two billion Truman Shows, two billion personalized forms of manipulation.
My background in coming to these questions is that I studied at the Persuasive Technology Lab at Stanford, which taught engineering students how to apply everything we knew about the field of persuasion, from Edward Bernays to clicker training for dogs to the way slot machines and casinos are designed, to figure out how you would use persuasion in technology if you wanted to influence people's attitudes, beliefs and behaviours. This was not a nefarious lab. The idea was, could we use this for good? Could you help people go out and get the exercise they wanted, and so on?
Ultimately, in the last class at the Persuasive Technology Lab, one of the groups imagined a future use case: what if you had a perfect profile of the unique features, the unique vulnerabilities, of the human being sitting in front of you, and knew exactly what would manipulate that person? For example, a person may respond strongly to appeals from authority, so that a summons from the Canadian government, or a name like Harvard, would be particularly persuasive to his or her specific mind. Or a person may be especially susceptible to the fact that all of his or her friends, or a certain pocket of friends, really believed something. By knowing people's specific vulnerabilities, you could tune persuasive messages in the future to perfectly manipulate the person sitting in front of you.
That class project was on the future of the ethics of persuasive technology, and it horrified me. That hypothetical experiment is basically what we live inside of every single day. It's also what was more popularly packaged up by Cambridge Analytica, where, by having the unique personality characteristics of the person you're influencing, you could perfectly target political messaging.
If you zoom out, it's really all about the same thing: the human mind, the human animal, is fundamentally vulnerable, and there are limits to our capacity. We have a choice. We either redesign and realign the way technology works to accommodate the limits of human sense-making and human choice-making, or we do not.
As a former magician, I can tell you that these limits are definitely real. What I hope to accomplish in today's meeting is to make the case that we have to bring technology back inside those limits. That's what we work on with our non-profit group, the Center for Humane Technology.