“You can look at the resources we’re currently investing in [solving] global warming, how much we’re currently investing [in] marketing lipstick in New York, and how much we’re currently investing [in] solving the friendly AI problem, [and] it is very clear that the next marginal philanthropic investments should be going into friendly AI. We are rationalists, [w]e are not doing this because we wandered into it at random. We’re doing this because there has to be one cause in the world that has the single highest marginal return on investment in expected utility, and friendly AI is it. And if it were [the case] that it were not it, we would be off doing something else. Right now, this is where the maximum marginal return on investment is.”
— Eliezer Yudkowsky, Becoming a Rationalist (Conversations from the Pale Blue Dot Podcast #088)