It's a really core challenge, especially when it comes to misinformation, as opposed to some other content that's more clearly illegal, say things like hate speech.
With respect to misinformation, yes, we have to be very careful, but I tend to favour a “more speech” approach rather than a censorship approach: building systems in which fact checkers add context to the things we see online, and in which things like deepfakes are labelled so that people know what they are. That's not to say there won't be manipulated imagery online, of course—there always has been—but people should know that what they're seeing is manipulated. I think that's a way to balance freedom of expression against the real harms that are happening with respect to disinformation.
There are other pieces about algorithmic propagation and the financial motives that we can get into, but I think, at its core, any legislation or regulation through the AI Act that tries to regulate speech needs to put freedom of expression at the forefront. Companies need to consider freedom of expression alongside the other aims.