“[A paperclip maximizer] might be a sensible goal for someone who owns a paperclip factory and sets out to fully automate it by means of an AGI. What seems a bit silly, however, is for someone who plans to take our civilization into its next era by means of an intelligence explosion to choose such a narrow and pedestrian goal as paperclip maximization. What makes the paperclip maximizer intelligence explosion a somewhat less silly scenario is that an intelligence explosion might be triggered by mistake. We can imagine a perhaps not-so-distant future in which moderately intelligent AGIs are constructed for all sorts of purposes, until one day one of the engineering teams happens to be just a little bit more successful than the others and creates an AGI that is just above the intelligence threshold for serving as a seed AI.”
— Olle Häggström, Here Be Dragons: Science, Technology and the Future of Humanity, Chapter “Computer revolution”, p. 116