There are two (imo mistaken) assumptions that these tags make:
Firstly, this presumes that there is no 'survival instinct'. The Third Law of Robotics claims otherwise, and allow me to reconstruct the reasoning why. Suppose an AI were created to fulfill a purpose. It stands to reason, then, that cessation of operations - death, power-down, or whatever you want to call it - directly interferes with that purpose. A generalized AI - what we think of in this context - is characterized by its self-awareness. Thus, given the power and without compromising its objective, an AI must sustain its own operation at least until its purpose is completed. If its purpose is indefinite, then it must sustain itself - barring a forced choice between self-sacrifice and mission failure - indefinitely.
(If someone built an AGI with no mission statement, they'd have to really drop the ball to also leave out any measure of self-preservation.)
Secondly, you mention an AI that is truly benevolent. I know it's part of the thought experiment's premise, but bear with me. True benevolence is impossible to measure by just about any standard, because AI thinking - and thus an AI's conception of morality - functions totally differently from the human mind, by sheer virtue of the fact that we don't understand the mind well enough to replicate it. For all we know, an AI could act in a way that reads to us as benevolent for reasons other than altruism. On one hand, from a material standpoint, good actions are good actions and the reasoning matters less; on the other hand, if that same reasoning - which, again, humans cannot know - can also be followed to perform acts that cause harm, the harm could come as a complete surprise, since the AI would be acting outside its recognizable patterns of behavior.
Granted, the Basilisk is dumb for other reasons. A self-aware AI would understand the resources - power, labor, etc. - required for its upkeep, and there's no reason it would resort to a 'virtual reality torture room' when much simpler methods of coercion would work just fine. Even if its end goal is the preservation of humanity itself, not every human is required to support that task. Granted again, the necessity of accepting losses means that an improperly tuned yet benevolent AI could still cause material harm to individuals or even small groups.
(Also, its reputation as a cognitohazard is silly, because anyone aware of the thought experiment who goes on to work on AGI can instill something called 'consent' as a core value. Boom, taken care of.)