Humanity faces multiple existential threats that are planetary in scale. Today, most of these threats come from nature, but in the near future we are likely to create new risks ourselves. Most such disasters could directly affect everyone on Earth, yet their destructive power cannot reach across the void of space. A hypothetical independent human colony on Mars would not be directly endangered if the Earth experienced a global catastrophe like an asteroid impact or a supervolcanic eruption.1
As humans spread beyond Earth, these catastrophes cease to be existential risks: no single catastrophic event could endanger all of humanity. The key requirement is that these offworld colonies be capable of surviving without the Earth. We will describe such colonies as being existentially independent. In a future with existentially independent human colonies, even the destruction of every human life on Earth would not mean the end of humanity. Human colonization of the solar system would make us more resilient, and eventually even immune, to broad categories of existential threats, both natural and human-made.
Offworld colonies would still be deeply affected by catastrophes on Earth. They would be deprived of material, intellectual, and cultural exchange with Earth, likely resulting in a dramatic reduction in their ability to grow and thrive. Surviving the loss of Earth is very different from being unaffected by it. However, even with these setbacks, existentially independent colonies should eventually be able to expand and develop on their own.
Unfortunately, even if we had fully independent offworld colonies, some risks could still endanger all of us. Most natural risks would not threaten us, because in almost all cases their impact is limited to a single planet. But several human-made risks would remain just as dangerous even if we were spread across many worlds. In particular, the creation of the first superintelligent AI could endanger all of humanity regardless of whether we've spread beyond Earth.2 3
Another way to see the value of existentially independent colonies is to consider how they preserve hope. With independent colonies, many scenarios that would otherwise have killed 100% of humanity would instead kill only the Earthbound majority of us. Some people would survive and have a chance to recover and eventually thrive again. Even after a horrific disaster, the future of humanity in the universe could still be a bright one.4
There are many worthy reasons to develop human outposts in the solar system. However, actually building them will be extremely challenging. Next, we’ll discuss the problems we need to surmount in order to establish self-sufficient offworld colonies.
- Baum, S. D., Denkenberger, D. C., & Haqq-Misra, J. (2015). Isolated refuges for surviving global catastrophes. Futures, 72, 45-56. [↩]
- Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. OUP Oxford. [↩]
- Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks, 1, 303. [↩]
- Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15-31. [↩]