I'm sorry, I'm not. The request to have me appear was last minute. However, I am very familiar with the international efforts on this.
We are actually organizing a conference in Washington in two weeks' time to talk about global alignment on a set of standards by which we would analyze the effectiveness of any legislation. We've invited your colleagues to attend.
Can I just lay out very simply what those standards are? I think it will help to give you the insight you need as to whether or not the legislation you're proposing meets those standards.
The first is mandating safety by design. At the moment, companies can act in a profoundly negligent way in designing their systems. In the U.K. and the U.S., for example, there's a big push to ensure safety by design for children, but there is no reason why that should not be extended to adults as well.
Second is transparency. That means transparency about how the algorithms work, what their outputs are, and the underlying economics. Let's not forget that 98% of Meta's revenues come from advertising. There is a reason why content is structured the way it is: it's structured to maximize advertising opportunities. Understanding those economics is absolutely vital. Then there are the enforcement decisions. Why do they decide to take down one piece of content and not another, or to leave a piece of content up when they've taken down similar content? Understanding those enforcement decisions is equally important.
Third is accountability. Are there bodies setting the standards and also conducting independent analysis of the effectiveness of that work? That's the space where you're looking for public-private partnerships, because of course not all of that can be done by the state.
Finally, mechanisms for responsibility are needed, whether through civil litigation or criminal liability. When companies and their executives create negative externalities with a cost paid in lives, some of that cost should be borne economically by the companies themselves. Do you have proper mechanisms for responsibility that disincentivize the production of harm in the first instance?
That's our safety, transparency, accountability and responsibility platform, against which we're encouraging countries around the world to analyze their overall regulatory packages.