Thank you, Chair. It's always a pleasure to be on. It's my first time under your chairmanship, so it's a delight.
This is a topic that is very interesting to me. I see it as very similar to what I've seen with South Asian gang violence in Vancouver, where there's prevention, intervention and then enforcement.
Dr. Leuprecht said very clearly, I think, that on the enforcement side our government seems to be doing a decent job, especially CSIS, CSE and the RCMP, in making sure that this doesn't reach the level of violence. What concerns me is that even a small group can end up influencing a lot of people, and that influence is happening through algorithms.
My question is for Dr. Celestini from SFU. Welcome from my neck of the woods.
How do you regulate the algorithms used by social media platforms to prevent them from contributing to ideologically motivated violent extremist movements? What we're seeing is that people punch in a search once, perhaps because they have a question, and then they get bombarded with that theory or those extreme ideologies over and over again.
They may have initially just wondered whether it was true, but then they get so much information that they start believing it is true. I'm more concerned about the people who get influenced in that way than about those who are already hardened and extreme.
Is Dr. Celestini still on?