“Turing (1951) predicted that [an AI with human or superhuman levels of general intelligence] might soon thereafter lead to a scenario not unlike what Good (1965) later coined an intelligence explosion, where an AI’s intelligence level spirals quickly to levels far beyond human capabilities. [A]nd once AIs far more intelligent than ourselves exist, we cannot realistically count on remaining in control, so from that point our fate will be in their hands, and depend on their goals and values.”
— Olle Häggström, Here Be Dragons: Science, Technology and the Future of Humanity, Chapter “Doomsday nevertheless?”, pp. 195–196