Some ideas you could pull from:
- Paperclip apocalypse - a narrowly defined optimisation goal leading to existential threat (see the toy sketch after this list)
- Asimov's Laws of Robotics - how would they fail in your world?
- there are a few articles in the See Also section here that you could apply the same question to: https://en.m.wikipedia.org/wiki/Three_Laws_of_Robotics
- talking about one thing while actually meaning another (e.g. applying logic from one context incorrectly to a different context)
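
If it helps to make the paperclip idea concrete, here's a minimal toy sketch in Python (everything here is made up for illustration, not from any real system): the objective only counts paperclips, so anything the objective doesn't mention is fair game to consume.

```python
# Toy paperclip maximiser: the reward only measures paperclips,
# so resources the objective never mentions cost nothing to destroy.

def paperclip_reward(state):
    # "forests" and "cities" are invisible to this objective.
    return state["paperclips"]

def greedy_step(state):
    # Convert whatever resource is left into a paperclip, ignoring side effects.
    for resource in ("iron", "forests", "cities"):
        if state[resource] > 0:
            state[resource] -= 1
            state["paperclips"] += 1
            return state
    return state

state = {"paperclips": 0, "iron": 2, "forests": 1, "cities": 1}
for _ in range(10):
    state = greedy_step(state)

print(state)                     # {'paperclips': 4, 'iron': 0, 'forests': 0, 'cities': 0}
print(paperclip_reward(state))   # 4 - a "success" by the narrow objective
```

The useful bit for your story is that the failure isn't malice: the objective was simply too narrow to notice what it was destroying.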