The way I would make the metaphor effective is this: the kinds of calculations that pose these security risks depend on the specific problem, and, as was often said in the previous answers, they can potentially be vastly accelerated.
In any experiment, any new product development, and any new AI development, there is always a component dedicated to computation. And it is not just a single computation, where you push a button and a system comes out. Much of the experimentation, the hyperparameter optimization, and the trying of different model settings during training consumes vast amounts of energy and time.
There is some theoretical work suggesting that if we can convert some of these neural network training problems into quantum-computable problems, the same benefits you would get for decrypting an encrypted message could be applied to training a neural network. That would let you explore many possibilities simultaneously and vastly accelerate training time.
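The speaker does not name a specific algorithm, but the standard concrete example of this kind of quantum speedup is Grover's search, which finds a marked item among N candidates in roughly √N steps instead of the ~N a classical scan needs. The sketch below is a classical state-vector simulation of Grover's amplitude amplification, purely illustrative: the mapping of "items" to hyperparameter settings is my assumption, not something established in the discussion.

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Classically simulate Grover's amplitude amplification.

    Toy framing (an assumption for illustration): each item stands in for
    one hyperparameter setting, and the oracle marks the best one.
    """
    # Start in a uniform superposition over all items.
    amp = [1.0 / math.sqrt(n_items)] * n_items
    if iterations is None:
        # Optimal iteration count is about (pi / 4) * sqrt(N).
        iterations = int(math.floor(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked item's amplitude.
        amp[marked] = -amp[marked]
        # Diffusion: reflect every amplitude about the mean.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    # Measurement probabilities are the squared amplitudes.
    return [a * a for a in amp]

# 16 candidate settings, the "best" one at index 3: only 3 Grover
# iterations (vs ~8 classical queries on average) concentrate nearly
# all probability on the marked item.
probs = grover_search(16, marked=3)
```

The point of the toy is the query count: for N = 16 it takes 3 amplification rounds, and the gap widens as √N versus N for larger search spaces, which is the flavor of speedup the theoretical work alludes to.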
The emphasis from our original opening statement was that this was just one component of the greater health care application, but a substantial one. Some of those resources and calculations are accessible only to the largest, best-funded public institutions and private companies.
The ability to accelerate these kinds of computations by several orders of magnitude is going to make what was previously hard a little bit easier, and what was previously impossible, now possible.
Stepping outside the field of radiology for a second, neighboring areas like proteomics and genetics research involve even larger-scale analysis, and drug discovery and drug development involve problems like protein folding. Applications like those, which are extremely expensive and very difficult to perform now, may become much faster. This is going to enable a whole new generation of potential treatments and potential AI systems.