Absolutely. I think, in general, automated content and bot accounts, which some of the big platforms group under the term “coordinated inauthentic behaviour”, have been tactics used by foreign governments and state actors for quite a number of years. It's a form of automation. The concern is that the sophistication of these tactics is growing and that AI, in particular generative AI, where text, video or images can be created more easily, is rapidly getting better and will make the detection efforts that platforms have ramped up over time less successful. That, I think, is the biggest concern. Doctored videos, deepfake images and even synthetic text are growing in frequency.
In terms of what to do about it, there have been proposals that any synthetic media should be labelled. Of course, there is legitimate use of synthetic media that is satirical or artistic. However, if we're concerned about the spread of misinformation, perhaps there should be a small label on these platforms informing the user that the content is synthetic. Where content is meant to mislead, the platforms could try to impose labels on these images and improve enforcement over time, eventually getting to the point of kicking off users who continue to post manipulated images without labels, for example.
In its most extreme form (and I'm sorry if I'm talking for too long), there have been suggestions that generative AI tools like ChatGPT could keep a log of their outputs, and that platforms could then check content against that log to add those labels automatically.
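[To make that logging idea concrete, here is a minimal sketch in Python of how such a system might work. The registry, its methods, and the tool name are all hypothetical, not part of any real platform's API; a real system would likely need robust perceptual hashing, since the exact hash used below breaks if the content is edited at all.]

```python
import hashlib
import time

class OutputRegistry:
    """Hypothetical shared log of generated content, keyed by content hash.

    Generative tools write to it; platforms read from it at upload time.
    """

    def __init__(self):
        self._log = {}  # content hash -> metadata about the generation event

    def record_output(self, content: bytes, tool_name: str) -> str:
        """Called by the generative tool each time it produces content."""
        digest = hashlib.sha256(content).hexdigest()
        self._log[digest] = {"tool": tool_name, "timestamp": time.time()}
        return digest

    def lookup(self, content: bytes):
        """Called by a platform to check uploaded content against the log."""
        return self._log.get(hashlib.sha256(content).hexdigest())


# Usage: the tool logs what it generates; the platform labels on a match.
registry = OutputRegistry()
generated = b"Example synthetic text produced by a generative model."
registry.record_output(generated, tool_name="some-generative-tool")

uploaded = generated  # content a user later posts to a platform
match = registry.lookup(uploaded)
if match:
    print(f"Label as AI-generated (logged by {match['tool']}).")
else:
    print("No log entry found; other detection methods would be needed.")
```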
Again, these are ideas people are putting forward to try to address the risks.