Brian Tomasik
“[C]onsider what we would think if we were the tiny ones. Would we be okay with giants squishing us because they couldn’t bother to watch where they were stepping? [S]tuart Russell and Peter Norvig actually raise this point in Artificial Intelligence: A Modern Approach: ‘We can’t just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time. For example, if technology had allowed us to design a super-powerful AI agent in 1800 and endow it with the prevailing morals of the time, it would be fighting today to reestablish slavery and abolish women’s right to vote. On the other hand, if we build an AI agent today and tell it how to evolve its utility function, how can we assure that it won’t read that “Humans think it is moral to kill annoying insects, in part because insect brains are so primitive. But human brains are primitive compared to my powers, so it must be moral for me to kill humans.”’”

Brian Tomasik, Is Brain Size Morally Relevant? (Essays on Reducing Suffering)