Is there more than one strategy that needs to be employed when we look at some of the real effects we're seeing from the use of this new, teenage generation of AI? Some of the most serious effects we've seen, of course, are news reports about language models counselling vulnerable people to take their own lives. We've also seen a rise in the creation of child sexual exploitation material and deepfakes using children. That is the worst kind of deepfake, but it's not the only kind. It's happening to public figures, to anyone who has a picture online, and really to anyone whose likeness can be described to a language model. These are some of the real-world consequences we're seeing today. You referenced the effects on the job market for youth seeking entry-level jobs.
What's the answer? Is it a series of measures that need to be taken? When we talk about deepfakes, is it a question of updating the criminal law so that individuals are held personally responsible for their actions in creating this unacceptable material, which is not intended to be protected by free speech or freedom of expression laws and goes well beyond that? It victimizes individuals. Is it instead that we need to pass laws making it incumbent on the tech companies to ensure the safeguards are in place? Is it both?
