Part of the Clearview AI issue was that we didn't have a proper assessment process, so we're putting one in place. We've held consultations on the board policy that covers AI/ML, and we're now drafting the procedure that will sit underneath that policy.
Essentially, it starts with a determination of what the benefit of the technology might be, which is what would drive us to even look at it. Then there is a set of flags tied to various risk factors, which we identified through the consultation we ran on the public policy. If a technology raises those flags, it moves into a separate process, ultimately going through a public consultation on that specific technology and a risk assessment to determine whether it should go forward.