Thank you very much.
Mr. Chair, distinguished committee members, ladies and gentlemen, thank you very much for inviting me today.
I would like to speak to you about a subject that I have been studying for years as a researcher, a University of Calgary professor and a fellow of the Canadian Global Affairs Institute. It's artificial intelligence. I believe it's going to revolutionize everything, including cybersecurity and cyberwarfare, and a lot quicker than most people expected.
AI is all over the news right now, because of things like ChatGPT. I was teaching it 30 years ago, and many of my students who built backpropagation neural networks back then have gone on to do great things. Artificial intelligence now does everything from detecting tiny tumours on MRIs to helping cities optimize traffic signals.
There is a dark side to artificial intelligence, something we call adversarial AI. My fear is, like so many industries, our defence folks will embrace AI without fully understanding how it can be used against us.
You have probably heard about those snoopy information kiosks in Cadillac Fairview shopping malls. The company was chastised by the Privacy Commissioner for secretly collecting data on five million people, including their approximate age and gender. How did they know your gender? They made an educated guess using AI and facial recognition.
For 25 years, I ran a program for highly gifted high school students called Shad Valley Calgary. It culminated in a science fair where they showed off their work. One year, they built a neural network to predict your gender from body measurements, like hip to waist ratio. One corporate sponsor stopped at their booth and, yes, he was rather portly. They measured him, and they told him that with 84% probability he was female. That was actually a good thing, because those students realized that AI was just making informed guesses.
If you go to ChatGPT or similar programs, they will give you answers that don't have any percentages or degrees of uncertainty. They read like statements of fact. They can be dead wrong. I asked ChatGPT, “Is Danielle Smith intelligent?” It came back with, “I cannot accurately determine who you are referring to as Danielle Smith.” It does say that Justin Trudeau is widely considered to be intelligent. What's going on here?
I lifted the hood on the current free public version of ChatGPT. Its knowledge base ends at 2021. At that time, Danielle Smith was an unemployed talk radio host. She didn't rise to her current political prominence until 2022. I'm sure that ChatGPT's database will be updated, and its answer will be different in the future.
That's another problem. You can ask an AI bot the same question twice and get wildly different answers. It doesn't tell you why.
Don't get me wrong, I love AI and its potential upside. There are plenty of companies that will tell you all about that, since they have products to push. My mission is to make sure we look at the risks and apply this tool intelligently.
Here are three things to worry about as AI moves into national defence.
First is the source of training data. Most AI is trained on public domain data that might be inadequate. We've seen issues with facial recognition having trouble recognizing people of colour, because it was exposed mainly to white faces. In the defence industry, much of the most important data is not in the public domain.
Second is the lack of ethics in AI. We all remember Tay, the Microsoft chatbot that went off the rails and started spouting Nazi ideas and foul language and referred to feminism as a cult. Tay was just learning from people who interacted with it. Unfortunately, that's what it talked about.
Third, malicious actors can try to poison the database. A woman has been trying to rewrite the Wikipedia entry on Nazis to paint them in a favourable light. Way back in 2003, Democratic supporters linked the term “miserable failure” on Google to George W. Bush's official White House biography. When you did a Google search for “miserable failure”, up came the president's picture.
If you don't want your political profile to be linked to miserable failure or worse, you should heed what ChatGPT has to say about this very committee:
The Standing Committee on National Defence,
Within the House of Commons, its power immense.
A place where decisions are made with care,
For the safety and security of all to share.
With members from every party, they convene,
To review and assess, and to make things clean.
Wait a minute. To make things clean...? What does that even mean? Only ChatGPT knows for sure, and it's not telling.
Thank you very much.