This year the Doomsday Clock was set at three minutes to midnight, two minutes closer to catastrophe than in 2014. The members of the Bulletin of the Atomic Scientists Science and Security Board cited unchecked climate change, nuclear weapons and emerging technological threats such as synthetic biology and artificial intelligence as the reasons we are teetering on the brink of global catastrophe.

According to the Bulletin, “world leaders have failed to act with the speed or on the scale required to protect citizens from potential catastrophe. These failures of political leadership endanger every person on Earth.”

Founded in 1945 by University of Chicago scientists who had helped develop the first atomic weapons in the Manhattan Project, the Bulletin of the Atomic Scientists created the Doomsday Clock two years later, to convey threats to humanity and the planet. The decision to move (or to leave in place) the minute hand of the Doomsday Clock is made every year by the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which includes 17 Nobel laureates. The Clock has become a universally recognised indicator of the world’s vulnerability to catastrophe from nuclear weapons, climate change, and emerging new technologies.

The Bulletin raises concerns about the lag between scientific advances in dual-use technologies, i.e. those that can be used for both beneficial and malicious ends, and the ability of civil society to control them. For example, the Bulletin discusses the possibility of synthetic biologists accidentally or deliberately creating dangerous pathogens. The Bulletin states that “in the age of synthetic biology and globalization, world governance must develop ways to react quickly and effectively to confront emerging disease and the possibility of bioterrorism.”

The Bulletin also raises concerns about other emerging technological challenges to civil society and international governance:

“It is clear from the recent hacking of major organizations and government facilities that cyber attacks constitute a threat with the potential to destabilize governmental and financial institutions and to serve as a medium for new escalations of international tensions. Meanwhile, advances in artificial intelligence have led a number of prominent individuals to express concern about human command and control capabilities in the field, on national and international scales, over coming decades.”

The Bulletin calls on the international community to strengthen existing institutions that regulate emerging technologies, and to create new forums for exploring risks and proposing controls in those areas of scientific and technological advance that have so far been subject to little, if any, societal oversight.

According to the Bulletin, “scientific advance can provide society with great benefits, but the potential for misuse of potent new technologies is real, unless government, scientific, and business leaders take appropriate steps to explore and address possible devastating consequences of those technologies early in their development.”

Last year, with the Doomsday Clock at five minutes to midnight, the members of the Science and Security Board concluded their assessment of the world security situation by writing: “We can manage our technology, or become victims of it. The choice is ours, and the Clock is ticking.”

This year the Board added that “the probability of global catastrophe is very high, and the actions needed to reduce the risks of disaster must be taken very soon.”

World Economic Forum echoes concerns

The Bulletin’s concerns were echoed in a World Economic Forum report released last week, which surveyed 900 experts, including researchers, politicians and business leaders.

According to the report, new technologies such as synthetic biology, nanotechnology and artificial intelligence carry largely unknown – but potentially huge – risks. “Synthetic biology commands tremendous and rising interest from both academia and industry,” said John Drzik of Marsh, a risk advisory firm, “but could be risky due to ‘error and terror’. The field is likely to grow dramatically but lacks oversight.”

Drzik believes we face similar hazards from nanotechnology, and that governments have not done enough to regulate these risks. “Risks are not fully understood, yet already we have 180 products on the shelves,” Drzik said. “Emerging technologies carry a higher risk because the pace of innovation is faster, and governments have not caught up with that,” he concluded.

It’s good to see such prominent global institutions echoing the concerns Friends of the Earth has been raising with politicians and regulators for a decade now. Hopefully their words won’t fall on deaf ears this time.