The Survival of Humanity
By Lawrence Rifkin | September 13, 2013
An existential catastrophe would obliterate or severely limit the existence of all future humanity.
As defined by Nick Bostrom at Oxford University, an existential catastrophe is one that extinguishes Earth-originating intelligent life or permanently destroys a substantial part of its potential. As such, it must be considered a harm of unfathomable magnitude, far beyond a tragedy affecting those alive at the time. Because such risks jeopardize the entire future of humankind and conscious life, even relatively small probabilities, especially when viewed cumulatively over a long period of time, may become significant in the extreme. It would follow that if such risks are non-trivial, the importance of existential catastrophes dramatically eclipses most of the social and political issues that commonly ignite our passions and get our blood boiling today.
Ignoring global risks makes sense in some situations: if there is good scientific evidence that the risks are too remote to worry about, for example, or if the costs of prevention are simply prohibitive. But too often denial is based on ideological blinders (God will provide, the free market will provide, nature will provide), fatalism (“what will be will be”), apathy, or narrow-minded ignorance of the history of life on our planet. Some 99.9% of all species that have ever lived on Earth are extinct. Earth has endured repeated mass extinctions; the Permian-Triassic extinction event destroyed over 90% of all species. After the volcanic super-eruption at Toba, Indonesia about 70,000 years ago, the human species literally teetered on the brink of extinction, with, according to some estimates, only about five hundred reproducing human females remaining in the world. We cannot extrapolate from our very narrow time-frame perspective and conclude that future risks are negligible.
One would think that if we are mobilized to fight for issues that affect a relatively small number of people, we would have an even stronger moral and emotional motivation to prevent potential catastrophes that could kill or incapacitate the entire human population. But there are significant psychological barriers to overcome. People who would be emotionally crushed just hearing about a tortured child or animal may not register even the slightest emotional response when contemplating the idea that all human life may one day become extinct. As Eliezer Yudkowsky wrote, “The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking.”
There is also the psychological barrier of perceived powerlessness. Avoidance of difficult issues may be an understandable psychological defense mechanism, but the results can be maladaptive and catastrophic. There are significant ideological barriers to overcome as well. For some, a belief in an afterlife of any sort may provide false reassurance. Millennialism, the belief that the world as we know it will be destroyed and replaced with a perfect world, takes various forms, from Christian movements based on the Book of Revelation to Al Qaeda beliefs rooted in Islamic thought involving jihad. From certain theological perspectives, global catastrophes are brought about by divine agency as just punishment for our sins. A believer in such apocalyptic visions might judge such an event as, on balance, good.
On the whole, our evolved minds do not appear naturally predisposed to think long-term about risks on a global scale. From what we know about evolutionary history and mechanisms, this is completely understandable. Over our evolutionary history, those better able to dodge the lion were more likely to pass on their genes, while those focused on contemplating probabilities and defenses against gamma-ray bursts from outer space might have represented good news for the continuation of the lion’s genetic line, but not for their own. The fact that sustained global planning and action might not come naturally does not, by itself, make ignoring global risks the right thing to do. Perhaps ironically (since, after all, survival is biologically adaptive), it will take more than our evolved behavioral inclinations to confront potential catastrophic risks and make informed decisions. Obviously we need objective factual information and analyses. But to do the job, to overcome maladaptive cognitive biases and blinders, we’ll need much more. We’ll need a motivating, deep-felt sense of connection, value, and responsibility toward other conscious beings and the future of life.
Catastrophic risks
Risks to existence include both natural and human-caused catastrophes: infectious pandemic disease, asteroid impact, climate change catastrophe, global nuclear war, volcanic super-eruptions, potential risks from molecular manufacturing, and bioterrorism.
The risks of global pandemics should not be underestimated. The Black Death devastated Europe in the mid-14th century, killing roughly a third of the entire population. The consequences of a large asteroid impact on the course of life on Earth are also well established. About 65 million years ago a six-mile-wide asteroid plunged into Earth and exploded. Shock waves, earthquakes, tsunamis, volcanic eruptions, and dust blocking out the Sun resulted in mass extinctions, including that of the dinosaurs. Less commonly known is that volcanic super-eruptions causing climatic disaster occur even more frequently than asteroid disasters; there have been multiple volcanic super-eruptions over the last two million years. Another volcanic super-eruption could devastate world agriculture and lead to mass starvation. Almost all of the known 15 cataclysmic mass extinction events in Earth’s history appear to have been mediated by changes to the Earth’s climate and atmosphere, historical evidence that should give modern skeptics of climate change impacts at least some pause.
What can we do?
Here is a partial list of suggestions worthy of consideration. The idea is not to advocate some extreme survivalist or “Chicken Little” mentality, but rather to use reason, foresight, and judgment about how best to protect our future.
Create a larger worldwide stockpile of grains and other food reserves.
Support and prioritize global measures to detect, prevent, and halt emerging pandemic infectious diseases, such as the WHO’s Global Outbreak Alert and Response Network.
Invest in technologies to discover and deflect large asteroids and comets on a deadly collision course with our planet.
Consider banning the synthesis and public publication of the genome sequences of deadly microorganisms such as smallpox and the 1918 influenza virus, thereby reducing the risks of bioterrorism or accidental release.
Maintain stores in multiple locations of wild plant species, seed banks, and gene banks to safeguard genetic diversity.
Invest in space station research. Because the Sun’s eventual expansion will heat the planet, Earth will become uninhabitable for humans in about 1 to 1.5 billion years (and uninhabitable for all life several billion years after that). This is, understandably, almost too far off to contemplate. Nonetheless, our best (and possibly only) chance for survival in the very distant future may be to live in space or to colonize other planets or moons.
Create strains of agricultural species better able to withstand major environmental change and threats.
Continue to strive towards scientific accuracy in predicting climate change effects, and work towards renewable energy sources, sustainable use, technological solutions, and other measures to prevent potential climate catastrophes. Human-caused environmental changes that increase the risk of global pandemics deserve particular attention.
Develop appropriate oversight of new molecular manufacturing technologies.
Prioritize international cooperation to reduce nuclear proliferation, secure existing nuclear weapons, develop systems to minimize technological mishaps, and decrease the world’s nuclear armamentarium.
Maintain a well-chosen small number of people in a deep, well-protected refuge, with adequate supplies to last for years, as a buffer against human extinction from a range of causes. Genetically diverse international volunteers who live in such a bunker could be rotated, say, every two months. A similar Noah’s ark refuge could be established on a space station.
Work towards changing the social conditions that foster ideological absolutism.
Promote evidence-based thinking and reason at all levels of society.
Plan in detail to quickly produce and administer vaccines and other medical interventions during a pandemic.
The idea is not that we should do all these, but that the issue deserves our very highest consideration. When considering how much social, political, and economic priority to place on preventing a negative consequence, here is an idealized equation:
Priority = (Extent of harm × Risk) / Cost of effective prevention (in dollars and in wellbeing)
The higher the result of this idealized equation, calculated as best we can at a given time in history, the greater the priority for real action should be. The actual decisions about what to do, and whether to do it, depend on all three components of the equation. For global existential catastrophes, the “extent of harm” term would be astronomical. One can in this way question the relative amounts of print space, dollars, and time spent on comparatively trivial priorities. Of course, the other parts of the equation, the statistical likelihood of specific events and the costs of specific effective preventive actions, need to be taken into account before determining the idealized priority. Many of the above suggestions represent a small investment relative to the world’s GNP or total federal outlays. It would be frankly irresponsible, from a societal perspective, not to reduce preventable non-trivial risks that threaten existence itself.
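To make the arithmetic concrete, here is a minimal sketch of how the priority ratio behaves. All of the numbers are hypothetical, chosen purely for illustration; none are real risk estimates or costs.

```python
# Illustrative sketch of the idealized priority equation discussed above.
# All figures are hypothetical and chosen only to show how the ratio behaves.

def priority(extent_of_harm: float, risk: float, prevention_cost: float) -> float:
    """Idealized priority: (extent of harm x risk) / cost of effective prevention."""
    return (extent_of_harm * risk) / prevention_cost

# A familiar, moderate problem: fairly likely, limited harm, cheap to address.
ordinary = priority(extent_of_harm=1e6, risk=0.5, prevention_cost=1e8)

# An existential catastrophe: very unlikely in any given period, but the harm
# (all present and future lives) is many orders of magnitude larger.
existential = priority(extent_of_harm=1e16, risk=1e-3, prevention_cost=1e10)

print(f"ordinary risk priority:    {ordinary}")     # 0.005
print(f"existential risk priority: {existential}")  # 1000.0
```

Even with a probability hundreds of times smaller and a prevention cost a hundred times larger, the existential term dominates, because the extent of harm is so many orders of magnitude greater.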
Those from all sides of the political spectrum can probably agree that self-defense is a legitimate goal for a government; preventing global catastrophe is self-defense.
Technological progress is not inherently all-good or all-bad — technology in some of these scenarios might lead to unprecedented mass destruction, but in other scenarios technology might be humanity’s savior.
Apathy, absolutism, and fatalism must be avoided when considering whether it makes sense to prioritize global catastrophe prevention. Instead we should strive to apply our best evidence, judgment, and moral values. Whether or not our current set-point of concern and attention towards global catastrophic risks is appropriate is a question worthy of significant consideration. This is a social issue, whose goal is to prevent suffering and allow for life to flourish on the grandest scale. In short, the goal is to apply rationality and compassion to the deepest of human concerns.
We accept health insurance, car insurance, property insurance, life insurance, disability insurance, and dental insurance as societally appropriate and often worthwhile. Let’s add existence insurance to the mix.
The prevention of existential catastrophe may not be receiving enough attention, and it should be one of human civilization’s highest priorities. Consideration of risks in the form of the above equation and similar analyses is not cold rationality. It is rationality melded with the deepest possible morality and genuine compassion, a deeply felt concern for people, life, and the future of humanity. The total sum of all present and future conscious experience is what is potentially at stake. We owe our existence to the chains of generations that survived before us, and we who are alive now ought not to abandon the great epic that our ancestors struggled for millions of years, with their lives, to sustain.