“[P]eople who say, for example, that we’ll only be able to improve AI up to the human level because we’re human ourselves, and then we won’t be able to push an AI past that. I think that if this is how the universe looks in general, then we should also observe, e.g., diminishing returns on investment in hardware and software for computer chess past the human level, which we did not in fact observe.”
— Eliezer Yudkowsky, “AI Visionary Eliezer Yudkowsky on the Singularity, Bayesian Brains and Closet Goblins” (Scientific American, March 1, 2016)