HMI problems, human-machine interaction problems, are critical. There are many unconscious details in how humans interact with one another, body language for example. Take a pedestrian simply crossing a road: they will probably look at the car and make eye contact with the driver. Without any talking, they understand that the driver has seen them, and they feel confident crossing because the car is slowing down. We don't have this kind of interaction with a fully autonomous vehicle yet, and we have no clue how to predict it precisely.
There is a lot of model-building going on, with psychologists, cognitive scientists, and mathematicians trying to develop driver models and HMI models in order to make the interactions between these highly autonomous vehicles, or highly automated driving functions, and humans safer.
To give you an example, some German carmakers have developed graphic interfaces that display a big smile on the front of the car, signaling to pedestrians that the machine has seen them and they can safely cross the street. We are exploring all kinds of solutions in that area.
This is not a trivial problem, and it is also just one tiny problem among an ocean of problems to solve before we reach a full, mature solution at Level 5. We're not there yet.