I don’t think most representations of “evil robots” miss the point, because the point can’t be extracted from the popular concept of the evil robot. What makes them seem “evil” in any setting is their inability to reason in quite the way a human might - with all our flaws, we make decisions that are ultimately based not on a set of principles, but on the belief that we have such a set of principles guiding us. I need to bring back the Turing quote I recently posted:
It is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances. One might for instance have a rule that one is to stop when one sees a red traffic light, and to go if one sees a green one, but what if by some fault both appear together? One may perhaps decide that it is safest to stop. But some further difficulty may well arise from this decision later. To attempt to provide rules of conduct to cover every eventuality, even those arising from traffic lights, appears to be impossible.
I don’t think HAL was created to show that artificial intelligence “lacks the reasoning behind orders” but rather that we seem to lack that reasoning ourselves. Our reasoning behind orders comes from the sorts of intangibles that compose the human condition - why we do what we do when faced with “paradoxes,” or equally conflicting situations. When we show a being capable of removing himself from those intangibles (or rather, incapable of experiencing them in the first place), we’re met with something we conventionally find terrifying.
Personally, I think HAL did a pretty good job with the conflicting orders he’d been given, all things considered.