“From a suffering-focused perspective, the main reason to be concerned about the risks from artificial intelligence is not the possibility of human extinction or the corresponding failure to build a flourishing, intergalactic civilization. Rather, it is the thought of misaligned or ‘ill-aligned’ AI as a powerful but morally indifferent optimization process which, in the pursuit of its goals, may transform galactic resources into (among other things) suffering.”

Lukas Gloor, Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention (2016)