Here is a final question. You guys can comment on the other question, but I want to make sure I get this question in.
There has been some philosophical discussion about the global race for AI and machine learning, and some commentators have suggested that maybe we should take it one step at a time, because the research far outpaces any legislative ability, or any human comprehension of how to deal with the moral and ethical implications of AI.
Would you suggest that we should have some framework whereby, as we hit certain milestones in the progress of AI, we should take a step back and regroup to think about how we're going to manage the next phase of development?