Actually, I found an article through a search engine that Dr. Wardle participated in, in Australia, which explains it very well. I would encourage people to go to their favourite search engine to find it.
Basically, it learns details from a series of images that are publicly available or obtained through other means. It learns details about the face and then uses deep-learning techniques, which are algorithmic rather than logic-based in nature, to learn how the face interacts as it moves. Then it transplants that face onto a victim.... If I were to take a video of Pablo here, and I had enough video pumped into the deepfake learning engine, I could put my face onto Pablo's and very convincingly make it look like Pablo is talking while I'm the one moving.
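To make the mechanics a little more concrete, the sketch below shows the shared-encoder, two-decoder idea behind the classic face-swap deepfakes the witness is describing: one encoder learns general facial structure from many images, each person gets their own decoder, and swapping decoders at inference time "transplants" one face onto the other person's movements. This is a minimal illustration assuming PyTorch; the network sizes, the names (such as decoder_pablo), and the random stand-in images are all illustrative assumptions rather than any specific tool's implementation.

```python
# Minimal sketch of an autoencoder-style face swap (illustrative only).
# A shared encoder compresses face crops; one decoder per identity learns
# to reconstruct that person's face. Decoding "my" frames with Pablo's
# decoder produces a face that moves like me but looks like Pablo.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a small latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 face crop for one specific identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_me = Decoder()     # trained on many images of "me" (the speaker)
decoder_pablo = Decoder()  # trained on many images of Pablo

params = list(encoder.parameters()) + list(decoder_me.parameters()) + list(decoder_pablo.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batches of face crops; a real system would use thousands of
# aligned frames harvested from publicly sourced photos and video.
faces_me = torch.rand(8, 3, 64, 64)
faces_pablo = torch.rand(8, 3, 64, 64)

for step in range(5):  # a real training run would take many thousands of steps
    opt.zero_grad()
    loss = loss_fn(decoder_me(encoder(faces_me)), faces_me) + \
           loss_fn(decoder_pablo(encoder(faces_pablo)), faces_pablo)
    loss.backward()
    opt.step()

# Inference: encode a frame of "me" talking, but decode it with Pablo's
# decoder, so the output follows my motion while wearing Pablo's face.
with torch.no_grad():
    swapped = decoder_pablo(encoder(faces_me[:1]))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The key design point this illustrates is that the "learning" is statistical pattern matching over many example images, which is why the quality of the fake scales with how much footage of the target can be collected.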