Thank you very much.
Actually, I wanted to pick up on that particular thought. It's one thing to moderate content when there is actual hate speech or something that is outright misogynistic. What you've been discussing today is more about the algorithms and the fact that the platform actually prioritizes toxic speech, the kind of speech that might not reach the threshold of hate speech but is still racist or carries underlying sexist messaging.
The difference with television, for instance, is that when you put on a commercial, everybody sees the same commercial. Obviously it has to be moderated to be something most people would want to see and consider acceptable. Compare that with, say, a piece of content with misogynistic undertones: imagine that when somebody clicked on it, it said, “The reason you got this is that you are a white male between the ages of 20 and 25 and you just broke up with your girlfriend.”
If they knew that, it would allow that person to think twice and ask, “Why am I getting this?”
Is this what we're talking about here? I ask because these are two different things. There's actual hate speech, and then there's the way in which all of these messages are being targeted at individuals, and that's a lot harder to regulate.