Thank you for the invitation.
My name is Diana Inkpen. I'm a professor of computer science at the University of Ottawa. My research area is artificial intelligence, natural language processing and machine learning, with a focus on social media text processing. I'm going to offer a bit of what I know from a computer science perspective, although I'm not sure how much use it will be for this committee.
In our research, we look at individual messages or at groups of messages from certain users. For our methods, it's easier to have more than one message at a time, because there is more text, and therefore more information, to analyze.
I have looked at detecting cyberbullying messages to protect children while they are online, and at detecting signs of mental health problems or suicidal ideation. There are also some hate speech benchmarks that we have worked with in small projects. I haven't looked specifically at extremist messages, but I think the same kinds of methods and AI tools could be used.
Most of the time, our classifiers and automatic methods need to pick up on words and phrases associated with certain topics and with very strong negative emotions, for example. Most often, they learn from data. Besides classifying a text, a set of messages or a user, we can also summarize texts. We can find similar messages. We can identify bots and fake accounts, because the language they use is different and they behave differently.
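To give a rough idea of how such a classifier works, here is a minimal sketch in Python using the scikit-learn library. The messages, labels and model choices are invented for illustration; this is not our actual research system, which learns from much larger annotated datasets.

# A bag-of-words classifier that learns, from labelled examples,
# which words and phrases signal strongly negative or abusive messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system would learn from
# thousands of annotated social media messages.
messages = [
    "You are worthless and everyone hates you",
    "I will make you regret showing your face here",
    "Had a great time at the park with friends today",
    "Looking forward to the weekend, see you all soon",
]
labels = [1, 1, 0, 0]  # 1 = abusive / strongly negative, 0 = benign

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # single words and two-word phrases
    LogisticRegression(),
)
model.fit(messages, labels)

# Classify a new, unseen message.
print(model.predict(["nobody wants you here, just leave"]))

The same pipeline, trained with different labels, can be pointed at cyberbullying, hate speech or other categories of interest; only the annotated data changes.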
I am more concerned about the accuracy of these kinds of tools. We work in computer science to improve accuracy with the latest deep learning methods.
Still, even though accuracy is what we as computer scientists try to provide, these tools are not perfect. In my opinion, there will always be a need for humans in the loop, not only to take the tools' recommendations with a grain of salt but also to try to get an explanation of why the machine recommends what it does. We work on explainable language classifiers and so on, although this is still a developing research area, so it's not easy to get an explanation.
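For a simple linear classifier like the one sketched above, one rough form of explanation is to show which words and phrases the model weighs most heavily toward the negative class; deep learning models need more involved attribution methods, but the goal is the same, which is to show a human reviewer why a message was flagged. This continues the earlier sketch and assumes the same trained model object.

# List the features the linear classifier weighs most heavily
# toward the abusive / strongly negative class.
import numpy as np

vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]

feature_names = vectorizer.get_feature_names_out()
weights = classifier.coef_[0]

top = np.argsort(weights)[-5:][::-1]  # five strongest indicators
for i in top:
    print(f"{feature_names[i]}: weight {weights[i]:.3f}")

A human reviewer can then judge whether those indicators actually justify the machine's recommendation, which is exactly where the human in the loop comes in.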
Besides accuracy, of course, it's very important to use any AI tool in a strictly ethical way. I know the government is putting in place regulations on how AI tools may be used. My own focus is on increasing the accuracy of these kinds of decisions and their explainability.
I think about recent events, such as the protests and the trucker convoy. Maybe those users were already known to the relevant authorities, and their accounts could be automatically monitored to detect very specific extremist messages. If somebody, even an unknown user, is preparing a hate crime, they will probably post relevant messages that could be detected, and warnings could be raised.
To conclude, I want to say that AI tools could be useful for detecting extremism and dangerous ideologies, but only if they are customized properly in terms of accuracy and used carefully and ethically by the relevant authorities.
Thank you.