Thank you.
Yes, I would stand behind those recommendations.
The way risk management has been applied is very much connected to the deployment of these technologies, so I think examining that relationship is very important.
As these sorts of experiences tend to confirm for me again and again, there is a quick reaction to numbers suggesting that biometrics are simply good, full stop. Specifics are very rarely given about the precise issue these technologies will address at the border that is not currently being dealt with sufficiently. I think that's a significant issue.
The other issue is that the public in general, and particularly those who live in border communities, often tend not to be terribly favourable toward the use of these technologies.
In the NEXUS case, there is reasonable enrolment because of lifestyle. But studies have shown, including quite significant studies done on the Washington-B.C. border, that many people have chosen not to enrol in the NEXUS program and to stay in the lineups, simply because they have questions about the use of this technology and the way it can create this sort of “data double” issue, where a variety of pieces of data are linked together to create a persona that may or may not be a reasonable approximation of you.
Are we happy to make judgments on that basis? Not to mention that the data increasingly demonstrate that these are not infallible technologies, by any stretch of the imagination.
When the designers of AVATAR themselves say their technology does not meet their own threshold, I think that's rather significant. But in that case, it's not significant to those willing to buy into it.