“[T]he victims do not care about the agent’s inner thoughts, their evolution towards “being good”, possible resentments [or] indifference. This argument is even stronger for an agent that we create deliberately to act morally since all of us will be the potential victims and it does not help us if an AI has a good will or behaves according to certain rules if this leads to suffering. A sufficiently powerful artificial intelligence is like a mechanism or a force of nature and we do not care whether a thunderstorm has good intentions or behaves according to some rules as long as it does not harm us.”
— Caspar Oesterheld, Machine Ethics and Preference Utilitarianism (May 25, 2015)