Yes, but I also think that, in the context of misinformation and disinformation, and coordinated efforts to manipulate election processes, there's a much longer “kill chain”—in technical platform speak—around an information operation.
In order to get a deepfake onto the platform and have it go viral, you have to be able to create a fake account. In order to create the fake account, you have to deceive a lot of the internal systems within these platforms. So even though it has become easier to create and disseminate deepfake content, if we're talking about it in the context of an information operation, we still need to consider the broader life cycle that these operations have to go through.
A lot of the mitigation measures really have nothing to do with the AI side of things. They still rely much more on old-school IP detection and all of the other tricks platforms use to figure out whether this is a real person or a fake account.