It depends largely on the specifics of the task. In a task in which passport staff who have been trained and have many years' experience in the job are asked to compare live faces presented in front of them against photographed identity documents similar to passports, we typically see error rates of around 10%. That means for every 10 comparisons that are made, one of them is made erroneously. I'm talking about a decision on whether there's a match or a mismatch between the photo and the live person.
As for computer-based systems, we have very little understanding of how most of them operate in realistic conditions. Many of the test results that are reported by vendors are based on idealized situations in which image quality is reliably good and the conditions under which the match is being conducted are tightly controlled. That ignores the noise and complexity of the real world. So we just don't know enough about that, in my view.