The end of humanity is difficult to contemplate. Merely having a coherent discussion about existential risks requires that we arm ourselves with effective habits of thought and cultivate a resilient disposition. Let’s muster our courage and talk about a few of the main challenges we humans face when we try to confront the problem of existential risks.
First of all, risk assessment is hard even for systems that we’ve already built. Even though we can learn a great deal about the design and function of every part of a nuclear reactor, once those parts are assembled the result is a system of staggering complexity.
Risk assessment of future technologies is even harder. In most cases we have only the barest understanding of how these future systems might be designed. We don’t know what their constituent parts will be, let alone how they will interact with one another. Prospective technologies such as molecular nanotechnology, machine intelligence, and bioengineering involve many interacting parts that we do not yet understand. When many uncertain elements interact strongly, our uncertainty about the final result compounds. Prediction is hard, and we must remain deeply uncertain about the dangers these technologies may pose.
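The compounding of uncertainty can be illustrated with a toy Monte Carlo simulation. This is only a sketch under an assumed model: each uncertain element is represented as a multiplicative factor near 1.0 with a modest spread, and "strong interaction" is modeled as taking their product. The function name and parameters are illustrative, not drawn from any real risk model.

```python
import random

def simulate_outcome(n_factors, spread, trials=10000):
    """Toy model: each of n_factors is uncertain within +/- spread of
    its nominal value 1.0; the system outcome is their product (a
    stand-in for strongly interacting components). Returns the range
    of outcomes observed across the trials."""
    outcomes = []
    for _ in range(trials):
        result = 1.0
        for _ in range(n_factors):
            result *= random.uniform(1.0 - spread, 1.0 + spread)
        outcomes.append(result)
    return min(outcomes), max(outcomes)

# With one uncertain factor the outcome stays within +/- 30% of nominal;
# with ten interacting factors the range of plausible outcomes is far wider.
random.seed(0)
print(simulate_outcome(1, 0.3))
print(simulate_outcome(10, 0.3))
```

Even in this deliberately simple model, a handful of individually modest uncertainties combine into an outcome range spanning orders of magnitude, which is the intuition behind the claim above.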
Assessing risks gets even harder when we bring in a human element. Human beings are diverse, hard to predict, adaptable, and intelligent. If even one human intelligence is actively trying to change the outcome you are predicting, the uncertainty grows further still.1
Second, human thinking is imperfect. We make mistakes in many ways, but there are particular cognitive biases that make it very hard to think about existential risks.
Worse still, social science experiments have shown that merely knowing about biases typically isn’t enough to improve one’s performance. If we hope to really improve our thinking, we need to make profound changes to how we approach problems. This field is far too vast to even summarize here. It is the opinion of the authors that, at a minimum, thinking effectively requires that we be disciplined, thorough, and humble in scrutinizing ideas.2
Third, there are institutional and social factors that downplay the importance of existential risks. As one example, standard methods for “discounting” the future tend to downplay large gains or losses that take place more than a few years into the future. If someone is reasoning this way, even an existential risk might be so discounted that it never registers as a genuine danger for them. Living in the present is an important part of life, but an excessive focus on the present leads individuals, institutions, and society to pay too little attention to what the future may hold.
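The discounting effect described above is easy to see in a standard present-value calculation. The sketch below uses conventional exponential discounting; the 5% annual rate and the time horizons are illustrative assumptions, not claims about what rate anyone should use.

```python
def present_value(future_value, annual_rate, years):
    """Standard exponential discounting: the value today of a gain or
    loss occurring `years` from now, at a fixed annual discount rate."""
    return future_value / (1.0 + annual_rate) ** years

# A loss of 1,000,000 (arbitrary units) discounted at 5% per year:
for years in (10, 50, 200):
    print(years, round(present_value(1_000_000, 0.05, years), 2))
```

At a 5% rate, a loss two centuries away is discounted by a factor of roughly 17,000, so even a civilization-scale catastrophe can shrink to near nothing in a present-value calculation, which is how an existential risk can fail to register as a genuine danger.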
This list is incomplete, but it hopefully provides a useful starting point for thinking about existential risks. Digging deeper will lead you to cognitive biases pertaining to existential risks and the danger humanity poses to itself.
- This is related to the computer security concepts known as the security mindset and Schneier’s Law. The latter can be summarized as “Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis.” – Bruce Schneier [↩]
- For a short but impactful introduction to the field, see the Checklist of Rationality Habits from the Center for Applied Rationality. [↩]