“A paperclip maximizer is not making a computational error by having a preference order on outcomes that prefers outcomes with more paperclips in them. It is not standing from within your own preference framework and choosing blatantly mistaken acts, nor is it standing within your meta-preference framework and making mistakes about what to prefer. It is computing the answer to a different question than the question that you are asking when you ask, ‘What should I do?’ A paperclip maximizer just outputs the action leading to the greatest number of expected paperclips.”
— Eliezer Yudkowsky, “AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins,” Scientific American, March 1, 2016