Oh, yes, and it already has.
The large language models, for example, have been trained on datasets that include published works by writers around the world. I think there's a class-action lawsuit going on with writers like Stephen King saying, "Look, your tool that you're making billions of dollars from, with something like a $90-billion capital valuation, is piggybacking on my work, and it's completely uncompensated."
There's that kind of rip-off of intellectual property—absolutely.
Then, in terms of privacy, there are so many different ways that AI is going to interfere with people's privacy. If you just take generative AI, we've talked about ChatGPT. There's also Claude, and there's Bard from Google. All of these LLMs are out there.
These companies are trying to stay ahead of the issue by putting in ethical guardrails and so on, but what we haven't talked about is that there is going to be a grey market for large language models that are free of the constraints that governments and companies in the public eye are focused on. What do you do with that?
Ms. Chabot, it's very hard to do a moratorium on that kind of thing.