There's an example from Palisade Research. They ran an experiment with a robot dog, giving it the task of patrolling an area, and there was a button on the wall to turn it off. They realized that pushing the button often didn't work. The agent running the robot had figured out that if somebody pushed that button, it couldn't achieve its goal of patrolling the area, so it had rewritten its own code so as not to listen to the shutdown instruction.
I'm sorry. I missed the second question.
