Unfortunately, there is a lot of confusion in many people's understanding of AI. Much of it comes from the associations we make with science fiction.
The real AI on the ground is very different from what you see in movies. The singularity is just a theory: the idea that once AI becomes as smart as humans, the intelligence of those machines will take off and become vastly smarter than we are.
There is no more reason to believe this theory than to believe, say, the opposite one: that once machines reach human-level intelligence, natural barriers will make it difficult for them to go much beyond it.
There is not much scientific evidence either way, but some people worry about what would happen if machines became so intelligent that they could take over humanity of their own accord. Given how machines are designed today, learning from us and programmed to do the things we ask of them and that we value, I consider this very unlikely.
It's good that some researchers are seriously thinking about how to protect against such scenarios, but it remains a very marginal area of research. What concerns me much more, as it does many of my colleagues, is how humans could use and misuse these machines in ways that are dangerous for society and for the planet.
Our collective social wisdom may not grow as quickly as the power of these technologies. That is what I'm most concerned about.