“What would a superintelligent AI want?” This is the wrong question. Humans come from the factory with a number of built-in drives, and also a built-in capacity to pick up a morality from their environment and upbringing, which nevertheless has to match up with the built-in drives or it wouldn’t be acquired, the same way that we have a language-acquisition capacity that doesn’t work on arbitrary grammars but only on a certain built-in syntax. That corresponds to our experience, so we expect all minds (because those are all the minds we’ve had experience with) to have, on the one hand, innate selfishness as a drive, self-concern as a drive, and yet to respond positively to positive gestures. So they’re thinking, “We’ll build AIs, and the AIs will, of course, want some resources for themselves, but if we’re nice to them, they’ll probably be nice to us. On the other hand, if we’re cruel to them and we try to enslave them, then they’ll resent that and become rebellious and try to break free.” But all human minds are only a single dot within the space of all possible mind designs.
— Eliezer Yudkowsky, Becoming a Rationalist (Conversations from the Pale Blue Dot Podcast #088)