I understand the gravity of this. There are many different risks of harm, and it's hard to understand them without context. I think Ms. Wylie made that point quite well. I heard her points, which essentially seem to lean towards a really decentralized approach, whereas the approach we're opting to take is to have a very central piece of legislation that will regulate all activity to some degree. Obviously, that will need to evolve and change. We know that AI is evolving so quickly that it's hard to keep up.
What is your perspective? It's a tough question to answer.