Does humanity underestimate the risk of its own extinction?
For once, this post will not describe a discovery recently published in a journal. Over the course of my reading these past few weeks, I have picked up a few puzzle pieces and noticed that they fit together rather well, as if there were a guiding thread behind them. It started at the end of 2011 in Durban, with the international community's latest failure to agree on limiting greenhouse gas (GHG) emissions. Then came the announcement, at the end of December, that researchers had created mutant avian influenza viruses capable of being transmitted more easily between humans. That announcement was followed, first, by a debate on whether it was wise to publish the methods by which the biologists had modified H5N1, and then by the more pragmatic question: could an ordinary terrorist easily reproduce the feat?
Then came another announcement, on January 12, more ritual but also more discreet: the Bulletin of the Atomic Scientists declared that the Doomsday Clock, which since 1947 has symbolically warned humanity when it takes steps toward its extinction and reassured it when it moves away from them, had been advanced one minute toward midnight. The clock now reads 11:55 pm, and this advance of the minute hand was justified by the lack of progress in limiting both nuclear proliferation and greenhouse gas emissions. The statement read: "The world community may be near a point of no return in its efforts to prevent disaster due to changes in Earth's atmosphere. The International Energy Agency predicts that unless societies begin, over the next five years, to develop alternatives to carbon-emitting energy technologies, the world is doomed to a warmer climate, to rising sea levels, to the disappearance of island nations and to increased ocean acidification." It is not without a certain irony that another piece of news, directly linked to this one, came out a few days ago; I mentioned it in passing in one of my weekly selections: not once in the past 300 million years have the oceans been as acidic as they are today. Despite its importance, the news did not seem to move anyone...
At the very moment when many books are being published on the theme "2012, the year of the end of the world predicted by the Mayan calendar" (I was amazed to see an entire table of such works at FNAC), people play at frightening themselves with what they know full well is nonsense, while the real reasons for worry are swept under the carpet. Hence the question in the title of this post: is humanity underestimating the risk of its own extinction, by failing to address the problems that threaten it, or by risking letting technologies of mass destruction fall into malicious hands? I obviously do not have the answer, and I leave it to everyone to think about. But to finish this post unlike any other, I wanted to mention the interview, in The Atlantic, with the Swedish philosopher Nick Bostrom, who teaches at the University of Oxford, where he directs the Future of Humanity Institute, and who is pictured at the top of this page.
With a background in physics, neuroscience and the philosophy of science, Nick Bostrom does not exactly fit the typical image of a philosopher. He has worked extensively on the concept of "existential risk", meaning a disaster scenario leading "either to the total destruction of all intelligent life on Earth, or to a permanent crippling of its development potential". In this interview, he is therefore not interested in the distant consequences of global warming but, considering that this twenty-first century will be crucial for humanity because of the rapid development of new technologies, in the risks the latter will present to us in the very near future: "In the short term," he says, "I think several developments in the fields of biotechnology and synthetic biology are quite disconcerting. We are acquiring the capacity to create modified pathogens, and the blueprints of several pathogenic organisms are in the public domain: you can download from the Internet the genetic sequence of the smallpox virus or that of the Spanish flu. So far, the ordinary citizen only sees their graphic representation on a computer screen, but we are also developing ever more efficient DNA-synthesizing machines, which can take one of these digital blueprints and manufacture real strands of RNA or DNA. Soon these machines will be powerful enough to recreate these viruses. So you already have a sort of foreseeable risk, and if you then start modifying these pathogens in various ways, you see a dangerous new frontier appear. In the longer term, I think that artificial intelligence, once it has acquired human and then superhuman capacities, will bring us into a zone of major risk. There are also various kinds of population control that concern me, things like surveillance and psychological manipulation through drugs."
When the reporter asks him why he estimates the risk of a major catastrophe at one or two in ten over the course of the century, which is a lot, Nick Bostrom replies: "I think what leads to that is the feeling that humans are developing these very powerful tools (...) and that there is a risk that something will go wrong. If you look back at nuclear weapons, you find that to build an atomic bomb you needed rare raw materials such as enriched uranium or plutonium, which are very difficult to obtain. But suppose there were a technique that allowed you to make a nuclear weapon by baking sand in a microwave oven, or something like that. If that had been the case, where would we be now? Presumably, once that discovery was made, civilization would have been doomed. Each time we make one of these discoveries, we put our hand into a big urn full of balls and draw a new one: so far we have drawn white and grey balls, but perhaps next time we will draw a black ball, a discovery that spells disaster. At the moment, we have no good way of putting a ball back in the urn if we do not like it. Once a discovery has been published, there is no way to 'unpublish' it."
Nick Bostrom is by no means opposed to technology; on the contrary, he is a strong supporter of transhumanism. He simply campaigns for us to keep control: control of our technologies, of our planet, of our future. Because the extinction of mankind is not the only risk we run. The other face of existential risk is the total disappearance of freedoms on a planetary scale: "One can imagine the scenario of a global totalitarian dystopia. Once again, it is linked to the possibility that we develop technologies that would make it much easier for oppressive regimes to eliminate dissidents or monitor their populations, so as to achieve a stable dictatorship, rather than the ones we have seen throughout history, which have eventually been overthrown." George Orwell and his 1984 are not far off.
Pierre Barthélémy
Source: http://passeurdesciences.blog.lemonde.f ... xtinction/