Without commenting specifically on what came out, I can mention that generative artificial intelligence, which is taking up more and more space in the current conversation, can generate texts as plausible as those a human being would write. It certainly puts our democracy at risk, and it also puts people's interactions with different systems at risk. Will people be able to be sure that they are dealing with a human being? The answer is no.
You raise an extremely important question. There has to be a marker to determine whether something was produced by an AI system, as well as a way for the consumer, or the person interacting with the system, to know that they are speaking with a system based on artificial intelligence and not with a human being.
These are essential elements to protect our democracy from the misinformation that can emerge and that will grow exponentially with new systems. We are in the early days of artificial intelligence. We absolutely have to have ways of identifying artificial intelligence systems and of determining whether we are interacting with a system or with a person.