Commentary | 27 June 2024

The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction

This article was originally published for the German Federal Foreign Office’s Artificial Intelligence and Weapons of Mass Destruction Conference 2024, held on the 28th of June, and can be read here. You can also read “The implications of AI in nuclear decision-making”, by ELN Policy Fellow Alice Saltini, who will be speaking on a panel at the conference.

Artificial intelligence (AI) is a catalyst for many trends that increase the salience of nuclear, biological or chemical weapons of mass destruction (WMD). AI can facilitate and speed up the development or manufacturing of WMD or precursor technologies. With AI assistance, those who currently lack the necessary knowledge to produce fissile materials or toxic substances can acquire WMD capabilities. AI itself is of proliferation concern. As an intangible technology, it spreads easily, and its diffusion is difficult to control through supply-side mechanisms, such as export controls. At the intersection of nuclear weapons and AI, there are concerns about rising risks of inadvertent or intentional nuclear weapons use, reduced crisis stability and new arms races.

To be sure, AI also has beneficial applications and can reduce WMD-related risks. AI can make transparency and verification instruments more effective and efficient because of its ability to process immense amounts of data and detect unusual patterns, which may indicate noncompliant behaviour. AI can also improve situational awareness in crisis situations.

While efforts to explore and exploit the military dimension of AI are moving ahead rapidly, these beneficial dimensions of the AI-WMD intersection remain under-researched and under-used.

The immediate challenge is to build guardrails around the integration of AI into the WMD sphere and to slow down the incorporation of AI into research, development, production, and planning for nuclear, biological and chemical weapons. Governments should identify risk mitigation measures and, at the same time, intensify their search for the best approaches to capitalise on the beneficial applications of AI in controlling WMD. Efforts to ensure that the international community is able “to govern this technology rather than let it govern us” have to address challenges at three levels of the AI-WMD intersection.

AI simplifies and accelerates the development and production of weapons of mass destruction

First, AI can facilitate the development of biological, chemical or nuclear weapons by making research, development and production faster and more efficient. This is true even for “old” technologies like fissile material production, which remains expensive and requires large-scale industrial facilities. AI can help to optimise uranium enrichment or plutonium separation, two key processes in any nuclear weapons programme.

The intersection of AI with chemistry and biochemistry is particularly worrying. The Director General of the Organisation for the Prohibition of Chemical Weapons (OPCW) has warned of “the potential risks that artificial intelligence-assisted chemistry may pose” to the Chemical Weapons Convention and of “the ease and speed with which novel routes to existing toxic compounds can be identified.” This creates serious new challenges for the control of toxic substances and their precursors.

Similar concerns exist with regard to biological weapons. Synthetic biology is in itself a dynamic field, but AI puts the development of novel chemical or biological agents through such technologies on steroids. Rather than going through lengthy and costly lab experiments, AI can “predict” the biological effects of known and even unknown agents. A much-cited paper by Filippa Lentzos and colleagues describes an experiment in which an AI, in less than six hours and running on a standard hardware configuration, “generated forty thousand molecules that scored within our desired threshold”, meaning that these agents were likely more toxic than publicly known chemical warfare agents.

AI lowers proliferation hurdles

Second, AI could ease illicit actors’ access to nuclear, biological and chemical weapons by giving advice on how to develop and produce WMD or relevant technologies “from scratch”.

To be sure, current commercial AI providers have instructed their AI models not to answer questions on how to build WMD or related technologies. But such limits will not remain impermeable. And in the future, the problem may be not so much preventing the misuse of existing AI models as the proliferation of AI models, or of the technologies that can be used to build them. Only a fraction of all spending on AI is invested in the safety and security of such models.

AI lowers the threshold of WMD use

Third, the integration of AI into the WMD sphere can also lower the threshold for the use of nuclear, biological or chemical weapons. All nuclear weapon states have begun to integrate AI into their nuclear command, control, communication and information (NC3I) infrastructure. The ability of AI models to analyse vast amounts of data at unprecedented speed can improve situational awareness and help warn, for example, of incoming nuclear attacks. But at the same time, AI may also be used to optimise military strike options. Because of the lack of transparency around AI integration, fears that adversaries may be intent on conducting a disarming strike with AI assistance can grow, setting up a race to the bottom in nuclear decision-making.

In a crisis situation, overreliance on AI systems that are unreliable or working with faulty data may create additional problems. Data may be incomplete or may have been manipulated. AI models themselves are not objective. These problems are structural and thus not easily fixed. A UNIDIR study, for example, found that “gender norms and bias can be introduced into machine learning throughout its life cycle”. Another inherent risk is that AI systems designed and trained for military uses are biased towards war-fighting rather than war avoidance, which would make de-escalation in a nuclear crisis much more difficult.

The consensus among nuclear weapon states that a human must always remain in the loop before a nuclear weapon is launched is important. But it remains a problem that understandings of what human control actually means may differ significantly.

Slow down!

It would be a fool’s errand to try to slow down AI’s development. But we need to decelerate AI’s convergence with the research, development, production, and military planning related to WMD. We must also prevent spillover from AI’s integration into the conventional military sphere into applications that could lead to nuclear, biological, and chemical weapons use.

Such deceleration and channelling strategies can build on some universal norms and prohibitions. But they will also have to be tailored to the specific regulatory frameworks, norms and patterns governing nuclear, biological and chemical weapons. The zero draft of the Pact for the Future, to be adopted at the September 2024 Summit of the Future, points in the right direction by suggesting a commitment by the international community “to developing norms, rules and principles on the design, development and use of military applications of artificial intelligence through a multilateral process, while also ensuring engagement with stakeholders from industry, academia, civil society and other sectors.”

Fortunately, efforts to improve AI governance on WMD do not need to start from scratch. At the global level, the prohibitions of biological and chemical weapons enshrined in the Biological and Chemical Weapons Conventions are all-encompassing: the general purpose criterion prohibits all chemical and biological agents that are not used peacefully, whether AI comes into play or not. But AI may test these prohibitions in various ways, including by merging biotechnology and chemistry “seamlessly” with other novel technologies. It is, therefore, essential that the OPCW monitors these developments closely.

International Humanitarian Law (IHL) implicitly establishes limits on the military application of AI by prohibiting the indiscriminate and disproportionate use of force in war. The Group of Governmental Experts (GGE) on Lethal Autonomous Weapons under the Convention on Certain Conventional Weapons (CCW) is doing important work by attempting to spell out what the IHL requirements mean for weapons that act without human control. These discussions will, mutatis mutandis, also be relevant for any nuclear, biological or chemical weapons that would be reliant on AI functionalities that reduce human control.

Shared concerns around the risks of AI and WMD have triggered a range of UN-based initiatives to promote norms around responsible use. The legal, ethical and humanitarian questions raised at the April 2024 Vienna Conference on Autonomous Weapons Systems are likely to inform debates and decisions around limits on AI integration into WMD development and employment, and particularly nuclear weapons use. After all, similar pressures to shorten decision times and improve the autonomy of weapons systems apply to nuclear as well as conventional weapons.

From a regulatory point of view, it is advantageous that the market for AI-related products is still highly concentrated around a few big players. It is positive that some of the countries with the largest AI companies are also investing in the development of norms around responsible use of AI. It is obvious that these companies have agency and, in some cases, probably more influence on politics than small states.

The Bletchley Declaration adopted at the November 2023 AI Safety Summit in the UK, for example, highlighted the “particular safety risks” that arise “at the ‘frontier’ of AI”. These could include risks that may “arise from potential intentional misuse or unintended issues of control relating to alignment with human intent”. The summits on Responsible Artificial Intelligence in the Military Domain (REAIM) are another “effort at coalition building around military AI” that could help to establish the rules of the game.

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, agreed on in Washington in September 2023, confirmed important principles that also apply to the WMD sphere, including the applicability of international law and the need to “implement appropriate safeguards to mitigate risks of failures in military AI capabilities.” One step in this direction would be for the nuclear weapon states to conduct so-called failsafe reviews that would aim to comprehensively evaluate how control of nuclear weapons can be ensured at all times, even when AI-based systems are incorporated.

All such efforts could and should be building blocks to be incorporated into a comprehensive governance approach. Yet the risk that AI increases the likelihood of nuclear weapons use is the most pressing. Artificial intelligence is not the only emerging and disruptive technology affecting international security: space warfare, cyber operations, hypersonic weapons, and quantum technologies all affect nuclear stability. It is, therefore, particularly important that nuclear weapon states build among themselves a better understanding of, and confidence about, the limits of AI integration into NC3I.

An understanding between China and the United States on guardrails around the military misuse of AI would be the single most important measure to slow down the AI race. The fact that Presidents Xi Jinping and Joe Biden agreed in November 2023 that “China and the United States have broad common interests”, including on artificial intelligence, and to intensify consultations on that and other issues, was a much-needed sign of hope, although China has since hesitated to actually engage in such talks.

Meanwhile, relevant nations can lead by example when considering the integration of AI into the WMD realm. This concerns, first of all, the nuclear weapon states, which can demonstrate responsible behaviour by pledging, for example, not to use AI to interfere with the nuclear command, control and communication systems of their adversaries. All states should also practise maximum transparency when conducting experiments around the use of AI for biodefence activities because such activities can easily be mistaken for offensive work. Finally, the German government’s pioneering role in looking at the impact of new and emerging technologies on arms control has to be recognised. Its Rethinking Arms Control conferences, including the most recent conference on AI and WMD on 28 June in Berlin with key contributors such as the Director General of the OPCW, are particularly important. Such meetings can systematically and consistently investigate the AI-WMD interplay in a dialogue between experts and practitioners. If they can agree on what guardrails and speed bumps are needed, an important step toward effective governance of AI in the WMD sphere will have been taken.

The opinions articulated above represent the views of the author(s) and do not necessarily reflect the position of the European Leadership Network or any of its members. The ELN’s aim is to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time.

Image credit: Free ai generated art image, public domain art CC0 photo. Mixed with Wikimedia Commons / Fastfission~commonswiki