Thank you so much.
We will be told that explainable AI is the computational solution that will let FRT go forward.
I want to argue that even though explainable AI is a growing field, it is actually adding more complexity, not less. This is because explanation is entirely audience dependent, and that audience is usually composed of computer scientists, not politicians.
Who gets to participate in that conversation, and who is left out, really matters. And explainable AI is not enough in any case, because the kind of neural network AI that underlies FRT can never be fully explained.
That is also part of our recommendation. In short, it is about getting to the core of what the technology is and understanding the black box. Having a technical solution to a very problematic technology does not mean we should use it to push forward and set aside consideration of a ban.