I highlighted in our presentation that there still needs to be human control over all artificial intelligence; it has to be enabled by human control. At the end of the day, there's a lot of fear of the unknown in this space, but I think that allowing industry to set the standards and, wherever there are market failures, creating the right legislative and regulatory frameworks to address those failures is going to be important.
What we as an industry want to see is an ongoing dialogue and a balanced approach to legislation and regulation. We don't say there is no need for it; we understand that our own standard-setting is not always the be-all and end-all. Sometimes there will be market failures that require legislative or regulatory action. What we're suggesting is that it has to be done through dialogue and be balanced, so that we don't impede innovation or our ability to do R&D in this field.