Can biological threats wipe out humanity?

The five greatest threats to humanity

According to the futurologist Anders Sandberg, we urgently need to talk about real dangers, because if we don't deal with them, they could kill us.

This is not about current refugee debates, stock market crashes, exits from the EU, psychopathic US presidents or similar problems that we often call "crises" today. It is about truly existential dangers that could prevent our species from still existing in 1,000 or 10,000 years.

Sandberg, futurologist and neuroscientist at Oxford University, "transhumanist" and author, recently published an unsettling article on the topic on the science platform "The Conversation". In it, Sandberg describes how we have slowly lost sight of the real threats to humanity, but also how we could mitigate and perhaps even avert them. A first step: thinking about them.

"Humanity could become extinct!" - "Oh. Hm, okay."

"In the past there have been some who have tried to warn mankind of disasters, including Nostradamus, who tried to calculate the end of the world," writes Sandberg. Many other “mystics” speculated or fantasized about the extinction of mankind. While this was often dismissed as spinning, there were, on the other hand, few who were committed to protecting our species. "On the other hand, nobody could have done it with the existing means." Today, however, mankind is on, writes Sandberg. Nowadays, in the course of technical progress, we are far enough to correctly assess risks and prevent extinction.

But according to Sandberg, these risks have not yet been communicated and researched enough. Above all, they are underestimated.

"People have been talking about the apocalypse for millennia - but not about preventing it."

One of the main reasons is the accompanying feeling of powerlessness, a powerlessness that leads to dull fatalism: the belief that nothing can be saved anyway and that the downfall of mankind will simply have to happen at some point. "People are also pretty bad at doing something about problems that haven't arisen yet," writes Sandberg. This is due to the so-called "availability heuristic": our tendency to overestimate problems that we already know from examples we have experienced, and conversely to underestimate events that we cannot look back on.

"If humanity dies, that means at least the loss of all living individuals and their goals." But the real loss is greater than that: the extinction of humanity would mean the loss of the meaning of past generations, the life of all future generations (quite a few could be) and all the value that they could presumably create. If human traits such as consciousness and intelligence were lost, it could mean, among other things, that these values ​​would also be obliterated from the universe. “It's a big, moral reason to work hard to prevent existential threats from becoming a reality. We must not fail in this, ”writes Sandberg.

With this in mind, the scientist compiled a list of what he believes are currently the five greatest threats to the human species:

1. A nuclear war

So far, nuclear weapons have "only" been used twice in history - in Hiroshima and Nagasaki during World War II. And the stockpile of atomic bombs is at its lowest level since the Cold War. Still, it would be wrong to believe that nuclear war is impossible, writes Sandberg. "In fact, it is not at all unlikely."

The Cuban Missile Crisis almost turned into a nuclear war, and according to the researcher there is a good chance that such events will repeat themselves; this incident was only the best known in recent history. "Between the Soviet Union and the USA there was an almost constant string of dangerous decisions and near misses." Due to international tensions and dependencies, the risks have lessened somewhat today; nevertheless, they are still there.

A real nuclear war between two great powers would cost hundreds of millions of lives, directly or through the aftermath. "But that's not enough to represent an existential risk," writes Sandberg.

The dangers of fallout are often overestimated. They are locally serious, but viewed globally they are rather insignificant. The real threat would be the ensuing nuclear winter: the soot shot into the stratosphere would cool and dry the world for several years. "Modern climate simulations show that agriculture in large parts of the world would not be possible for years," writes Sandberg. If this scenario came to pass, billions would starve to death, and the few survivors would be plagued by unknown diseases. "The main problem is how the soot would behave in such a case. The outcomes could be very different. We don't have a good way of assessing this at the moment."

2. A biotechnological pandemic

"Natural pandemics have killed more people than wars," writes Sandberg. Nevertheless, such pandemics (like the plague) are not an existential threat: As a rule, many people are immune to the pathogens and the descendants of the survivors would be even more resistant to it. In addition, evolution eradicates parasites that want to wipe out their host - which is why syphilis, for example, changed from a malignant killer to a chronic disease.

Unfortunately, people today are able to make diseases significantly worse, writes Sandberg. "One of the most famous examples is the insertion of an additional gene into mousepox - the mouse version of smallpox: this makes it far more deadly and able to infect mice that have been vaccinated." Recent studies on avian flu have also shown that the contagiousness of a disease can be deliberately and drastically increased.

"Right now, the risk of someone deliberately releasing something really devastating is low." But as biotechnology gets better and cheaper, even smaller groups could deliberately make known diseases worse.

Most bioweapons research has been conducted by governments looking for something controllable, writes Sandberg; the extermination of humanity is of no military use. But there will always be people who want to do things simply because they can. Others have "higher goals": the "Aum sect" tried to hasten the apocalypse with biological weapons and attacked people in Japan with nerve gas.

"Some people think the earth would be better off without people."

"It looks as if the deaths from biological weapons and epidemics are subject to the law of potency," writes Sandberg. In most cases, the victims are few, but a few kill many. If you look at current figures, the risk of a global pandemic from bioterrorism seems to be very low. But that only applies to terrorists. Governments, on the other hand, have so far killed more people with bioweapons. Almost half a million people died as a result of Japanese biological weapons experiments in World War II. "As technology will become more powerful in the future, malicious pathogens will be easier to produce."

3. Superintelligence

"Intelligence is very powerful. With only a small edge in problem-solving skills and group coordination, humans were able to leave the other apes behind." Now, however, the continued existence of our intelligence may depend on human decisions rather than on evolution. Being intelligent is a real benefit for people and organizations, writes Sandberg. This is why so much effort goes into finding out how we can improve our individual and collective intelligence: from cognition-enhancing drugs to artificial intelligence.

The problem with this is that such intelligent entities ("information objects") are good at achieving their goals. But if those goals were poorly specified (programmed), they could use their power to pursue catastrophic and destructive ends.

"There is no reason to believe that intelligence itself is nice and moral."

In fact, it can already be shown that certain types of superintelligent systems - that is, machines whose intelligence is superior to humans in many or all areas - would not obey any moral rules. "Even more disturbing, in trying to convey human values to artificial intelligences we run into profound practical and philosophical problems. Human values are diffuse, complex things that we cannot express well. And even if we could, we might not understand all the consequences of what we wish for."

According to Sandberg, software-based intelligence could very quickly go from being a small everyday aid to "something terrifyingly powerful". The reason lies in the differences from biological (human) intelligence: artificial intelligence (AI) simply works faster - and even faster if it is run on ever faster computers. Parts of this intelligence could even be distributed across several computers, new versions could be tested "on the fly", updates installed and new algorithms integrated - all of which would give the AI a significant performance boost. If intelligent software became good enough to create better software itself, the result would be what is called an "intelligence explosion".
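
As a purely hypothetical illustration of why such self-improvement could compound so quickly, here is a toy calculation. The starting capability, the improvement rate and the assumption that better software also improves itself faster are arbitrary choices for demonstration, not a model proposed by Sandberg:

```python
# Toy model of recursive self-improvement.
# All numbers are arbitrary assumptions for illustration only.
capability = 1.0          # hypothetical starting capability (arbitrary units)
improvement_rate = 0.10   # assumed relative gain per improvement cycle

for cycle in range(1, 11):
    capability *= 1 + improvement_rate   # the software improves itself
    improvement_rate *= 1.2              # assumed: better software improves itself faster
    print(f"cycle {cycle:2d}: capability = {capability:7.2f}")
```

In this toy setup the growth is not merely exponential but accelerating, which is the intuition behind the term "intelligence explosion"; whether real systems would behave anything like this is, as Sandberg notes, unknown.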

And should that happen, there would be a shift in power between intelligent systems and the rest of the world. "This has a clear potential for catastrophe if those goals were programmed poorly or incorrectly."

The unusual thing about superintelligence is that we don't know whether such fast and powerful "intelligence explosions" are really possible: "Perhaps our current civilization as a whole is already improving itself at the fastest possible rate." But there are good reasons to believe that technology could move faster than today's society can handle. "We also have no idea how dangerous the various forms of superintelligence would be, or which strategies for containing them would actually work." According to Sandberg, it is therefore very difficult to make decisions about technologies and dangers that do not even exist yet - or about an artificial intelligence greater than our own.

4. Nanotechnology

"Nanotechnology" means control over matter with atomic or molecular precision. That in itself is not dangerous - on the contrary, it would be very valuable for most applications. The problem, according to Sandberg, is that, as with biotechnology, the potential for abuse grows with the level of development.

The big problem is not that nanomachines will at some point start reproducing themselves and eating everything around them. To do that, they would have to be designed very cleverly; it is difficult to get a machine to reproduce itself, and biology is naturally much better at it. "Maybe some madman could manage it, but there are far more subtle dangers lurking in this field."

"The most obvious risk is that atomically precise manufacturing is ideally suited for the quick and cheap production of weapons."

In a world where any government could "print" large quantities of autonomous or semi-autonomous weapons, an extreme - and unstable - arms race could ensue. "The urge to strike first before opponents grow too strong could become tempting."

Weapons could also be much smaller and more precise things than machine guns: an "intelligent poison", for example, that acts like a nerve agent but seeks out its victims selectively, or ubiquitous surveillance systems designed to keep the population obedient. "All of this would be possible without any problems," writes Sandberg. It would even be possible to deliberately modify nuclear and climate-engineering technology and let it proliferate uncontrollably. The existential risk of nanotechnology is difficult to assess, but it is a potential danger. "Simply because it could enable people to do whatever they dream of."

5. The great unknown

"Perhaps the most disturbing possibility is that there is something that is very deadly - and we have no idea about it yet." Whatever the threat, it could be something that is almost inevitable. We were not aware of these threats ourselves, but they could exist. "Just because something is unknown doesn't mean we can't think about it."

If you are wondering why climate change or meteor impacts are not on this list: "It is unlikely that climate change, however terrifying, will render the entire planet uninhabitable. Meteors could of course wipe us out in one fell swoop, but we would have to be very unlucky." The average mammal species survives for about a million years, so the natural background risk of extinction is roughly one in a million per year. The risk of humanity being wiped out by a nuclear war is much higher than that.
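
For context, that background rate can be spelled out in a line of arithmetic; the only input is the roughly one-million-year average species lifespan mentioned above, and the century-long horizon is just an arbitrary example:

```python
# Background extinction risk implied by an average species lifespan of ~1 million years.
species_lifespan_years = 1_000_000            # average mammal species lifespan (from the text)
annual_risk = 1 / species_lifespan_years      # roughly a one-in-a-million chance per year

risk_per_century = 1 - (1 - annual_risk) ** 100   # chance of extinction within 100 years
print(f"Annual background extinction risk: {annual_risk:.0e}")    # 1e-06
print(f"Background risk over a century: {risk_per_century:.4%}")  # about 0.01%
```

Any man-made risk noticeably larger than this baseline - as Sandberg argues nuclear war is - therefore dominates the natural one.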

"Ultimately, you don't have to be afraid just because something is possible and potentially dangerous." There are some risks we cannot mitigate, such as gamma-ray bursts caused by explosions in space. But once we realize that in other areas we can do something for our own good and for the future of humanity, priorities change. With hygiene, vaccines and antibiotics, for example, the plague went from being "God's punishment" to a matter of "poor hygiene," writes Sandberg. People learned - and changed their behavior and their attitude towards the subject.

And now?

The availability heuristic leads us to overestimate risks that are heavily represented in the media - and to underestimate those that have never occurred before, writes Sandberg. "If we want to still be around in a million years, we have to talk about these things." That is the scientist's message: we should become more aware, and be encouraged to think more fundamentally, on a larger scale. Of course, today's problems and crises need to be addressed. But we shouldn't forget that there are far greater dangers.

What exactly can we do? Break out of the "availability heuristic" pattern - and perhaps think about the generations that will not grow up for another thousand years. Which decisions in recent history were good ones? Which were dangerous? It takes a lot of time to understand the bigger picture. Above all, however, we need the courage to address the fundamental questions - and thereby start discussions that are important for our species. Sandberg's article could provide an impetus for this.