Commentary | 17 November 2023

Existential threats beyond the bomb: emerging disruptive technologies in the age of AI


Konrad Seifert | Co-CEO of the Simon Institute for Longterm Governance


Anemone Franz | Physician and Biosecurity/Emerging Disruptive Technologies Advisor


The world has seen astonishing progress in advanced technologies such as artificial intelligence (AI), robotics and synthetic biology in recent years. The extent and scale of this change has led some individuals and organisations, such as the World Economic Forum, to suggest we may be living through the “Fourth Industrial Revolution” or “The Age of AI”. The opportunities engendered by this transformation are vast. However, rapid technological change also has the potential to cause global catastrophes, and policymakers are increasingly paying attention to these “emerging disruptive technologies”.

To better understand these threats, we outline the evolution of the risk landscape around emerging disruptive technologies. Our understanding of emerging disruptive technologies stems from the example of nuclear weapons, widely seen as the earliest instance of a man-made global catastrophic threat. We draw parallels between the dangers posed by nuclear weapons and those posed by novel biotechnologies, as well as divergences in terms of accessibility. We also explore the broader challenge of governing emerging technologies: How can we pace technological development, governance, and adoption to avoid disruption? We suggest important interventions, including monitoring, research and development on safety technologies, emergency planning, legal liability frameworks, and staff training requirements.

The threat of mass destruction

The discovery of nuclear fission marked a pivotal moment in human history. In a single act, nuclear weapons enabled state actors to catastrophically disrupt entire nations and the international community, exemplified by the bombings of Hiroshima and Nagasaki. However, the global disruption did not just stem from the destruction itself. The dominant security paradigms were disrupted by the mere existence of nuclear weapons, dramatically reshaping the balance of power and, thereby, the nature of geopolitics.

Like nuclear fission, new technologies are usually developed by scientists striving to advance humanity: AI tools can make humans more productive, and DNA synthesis has revolutionised biotechnology, facilitating, for example, the creation of better medicines or agricultural products. However, both technologies could also be misused to develop maximally virulent and infectious pathogens.

The democratisation of access to technology

There are many parallels between the Cold War era and today, yet crucial differences exist. Notably, nuclear weapons were mainly wielded by national governments. Even today, the spread of nuclear weapons is limited as their production requires vast industrial capabilities. However, advances in biotechnologies and AI may put similarly destructive capabilities into the hands of small groups or even individuals.

The potential risks from leading tech companies cutting corners are immense: a provider might inadequately screen the DNA orders it synthesises, or a developer might fail to monitor the training runs of large AI models. The situation is not helped by the fact that current guidelines and levels of oversight are inadequate to track these companies’ activities. This has led us to the concerning situation whereby humanity’s very existence could depend on a handful of cutting-edge laboratories voluntarily adhering to best practices that have yet to be determined.

DNA synthesis devices: an example of emerging disruptive technologies

Recent developments in synthetic DNA production exemplify how emerging technologies continue to alter the risk landscape. Currently, the DNA synthesis market is highly centralised, and most providers voluntarily screen their customers and synthesis orders. However, a new generation of benchtop synthesis devices is likely to disrupt the status quo. Benchtop synthesisers allow labs to print their own DNA without relying on commercial providers, making it more difficult to monitor the production process for potential misuse. As the technology improves, allowing for precise DNA synthesis without special equipment, small groups or even individuals will be able to access capabilities once restricted to governments or sophisticated research labs. Combined with widely available and rapidly improving AI tools, such as large language models, and with robotics that automate synthesis steps, this could significantly lower the barrier for less sophisticated actors to engineer pathogens capable of causing pandemics worse than COVID-19.

What can we do?

It is crucial that policymakers learn from nuclear history and, instead of using these powerful tools to create novel weapons of mass destruction, leverage these technologies for the benefit of humanity. Clear steps must be taken to responsibly govern the increasing access to previously well-controlled knowledge, tools, and applications.

At its heart, the governance of emerging disruptive technologies boils down to ensuring that proper tools are in place to monitor any emerging threat before it gets the chance to escalate out of control.

  1. People in frontier research must operate within a framework that brings risks to their attention at each step of the development process and prompts mitigating actions. For example, researchers should have to justify their goals and methodology regarding dual-use risks when applying for grants or submitting their work to a journal.
  2. Organisational decision-making processes must be transparent to make sensible risk-benefit trade-offs. For example, internal safety testing processes and results should be made easily accessible to the relevant authorities to help facilitate compliance monitoring.
  3. Regulatory bodies must plan for emergencies and monitor developments to catch systemic shortcomings before they escalate. For example, laboratories should be expected to invest in ensuring they can deal with potential disasters, and this built-in surge capacity should be maintained through routine simulation exercises.

These suggestions require professionals to understand emerging technologies’ risks and take them seriously. Introducing mandatory ethics and safety courses in relevant technological study programs could help embed this new working culture within the field.

Ensuring that those monitoring the risks stemming from emerging disruptive technologies receive the same support as those devoted to nuclear non-proliferation would help to adjust institutional priorities for this new era. This could be implemented by requiring companies and funders to allocate one-third of their resources to research that exclusively advances safety in advanced technological industries, as defined by globally representative expert committees.

Like tracing uranium production, tracking the materials used in critical stages of the supply chains of different emerging disruptive technologies would help governments log who has access to these new tools and guard against their misuse. In the example of benchtop synthesisers, this could mean conducting customer background checks before selling a benchtop synthesis device, requiring user safety training and screening DNA orders.

To incentivise these changes, developers and distributors of tools and applications would also need to be made legally liable for the consequences of their products. To ensure that innovation isn’t stifled, a tiered liability structure could be implemented, linking the liability costs to the size and revenue of the company. This ensures that larger corporations like Amazon or Meta bear proportionate responsibility, while startups can innovate at a safe scale without facing prohibitive costs. Should certain innovations seem too risky for open experimentation, an international licensing regime and national regulators would need to maintain a healthy market within the bounds of globally acceptable risk-benefit trade-offs.

Regulation that prudently balances innovation with risk analysis is essential to improving society while averting catastrophe. To make such policy progress, technical expertise is needed at all governance levels, both national and international. Internal technical expertise is necessary to ensure that policy processes work with adequate threat models and continuously update in light of new developments.

At the international level, governments in advanced economies must act to ensure this expertise also empowers currently under-resourced governments. Emerging disruptive technologies affect everyone globally, and underrepresented communities need a seat at the decision-making table. This could be done by putting emerging disruptive technologies more firmly on the international agenda and by introducing training sessions for parliamentarians to ensure they understand this new era of risk.

In the 1940s, global powers rushed to create nuclear weapons. Many would argue that instead of weaponising nuclear technology, governments should have invested in developing it as a safe, abundant source of clean energy. Today, politicians must decide whether they want to develop the ecosystem for novel weapons of mass destruction, or for technology that helps us eliminate illness and hunger. The choice seems relatively straightforward.

The opinions articulated above represent the views of the author and do not necessarily reflect the position of the European Leadership Network or all of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image: Pixabay