I can try to answer part of that.
The online environment is obviously fairly complex, and it's evolving. The important part to understand is when behaviour is inauthentic: when countries use various tools either to covertly amplify messages they know run counter to the interests of the countries they're targeting, or to covertly amplify their own interests.
There is inauthentic activity through things such as bots, but you also have to understand that these countries have cultivated a range of supportive actors within a target state who will amplify those messages. Sometimes inauthentic behaviour may actually look authentic, because it is being amplified by legitimate actors in that state.
It's all going to be made much more complex, as colleagues have mentioned, by AI, including things like deepfakes. On the front page of the Ottawa Citizen the other day there was a story about how images and videos are going to be manipulated in such a way that it will be very difficult to tell the difference between what is real and what is fake. Those kinds of tools, in combination with the kinds of amplification that can be done in an online environment, are going to make it particularly challenging, I think, for average citizens and also for national security and intelligence organizations to constrain and address that behaviour. It's something we have to pay particular attention to in the future.