Here is one other question I want to ask.
I'm sure you're all aware of the term “singularity”. The singularity can be construed as a science fiction scenario in which machines eventually become overlords that control human beings. Let's, however, take one step back from that point.
The premise of artificial intelligence and machine learning is that they can be smarter, more efficient and more capable than human thinking. Ultimately, though, there still has to be a human component. With technology as it stands, you can program it within a certain narrative, but there is also a human dimension that makes judgments as you go along. One example I've read about is that if you program autonomous cars to follow the speed limit, but human drivers don't always follow the speed limit, how do you compensate for that?
If we look at the singularity as the end point, how do we make sure that the human dimension is still involved? We want the advantages. We want the resources that AI and machine learning can provide us, but how do we make sure there is still a human component, so that decisions are being made in the human interest, or at least with human interests involved?
It's an easy question. Take 20 seconds.