We can clearly foresee confusion between what's authentic and what is not. Right now, we're not at that stage. In fact, in a number of studies I've seen, when human participants were asked whether they could recognize deepfakes and other AI-generated artifacts, they still could.
We also see a time in the near future when it will be much harder to differentiate generative AI content from authentic content or works. I think that's where the next battle is. Some platforms are already exploring options that require content creators to disclose whether any generative AI tools were used to produce their content. That's an important step.
The next step is perhaps some kind of digital certification or watermark on the content, so that we actually know how it was created. There is nothing wrong with generative AI itself, but if the content it creates is used for malicious purposes, that is of course problematic.