I was at a couple of conferences in the United States this summer as part of my Canada-U.S. work. Even some of the companies were talking about how they're trying to fix the ethnic and cultural biases in the data actually going into artificial intelligence and in how they're building their models. They admitted that there are major deficiencies.
I guess what you're saying is that the information that's now collected on an artist could then be replicated and used in biased representations across multimedia platforms for generations, while the person is still basically walking around out there.
It's similar to what you said, Mr. Rogers, with regard to the artist. I thought that was really interesting, because you're right. I was here for the copyright review. Part of it was that artists could literally hear themselves, because they're living longer. That can also become part of a person's legacy.
Just quickly, I know that we all sign contracts sometimes where we give away our privacy, and it's all a mishmash and so on. Is it the same in the industry? Do artists have to figure out what they're giving up from these long forms and everything else at the last moment? Is that kind of vulnerability out there?