Commentary | 25 April 2025

From nuclear stability to AI safety: Why nuclear policy experts must help shape AI’s future

Artificial intelligence (AI) is advancing at hypersonic speed. The effects of these developments are yet to be fully realised. Still, it is apparent that as AI systems become more capable, there is a heightened risk of these advancements exacerbating strategic instability, accelerating military decision-making timeframes, increasing the potency of cyberattacks, and compelling states to engage in a new arms race for AI supremacy. These are challenges that nuclear policy experts understand and are well-positioned to help govern. With most machine learning experts now believing that we are more likely than not to see AI systems with human-level intelligence by 2060, we must adapt to these developments. AI is not science fiction — it is a rapidly emerging strategic reality.

Like nuclear technologies, AI has dual-use capabilities. It holds promise for solving our most urgent global challenges — from climate change to pandemic prediction — yet it also risks becoming a catalyst for mass destruction: by increasing strategic uncertainty, through malicious misuse that enables the development of bioweapons and the spread of disinformation, or through loss of control leading to the unintentional use of nuclear weapons. There are important lessons to draw from the efforts of those involved in governing nuclear weapons, including norm-building, institution-building, and verification tools. The tools developed in the atomic age must now be adapted for the AI era.

Nuclear non-proliferation and disarmament experts are already engaging with AI in military contexts, but broader strategic engagement is needed. While some argue that nuclear governance models can’t be replicated like-for-like with AI, both technologies’ capacity to disrupt global security and cause mass devastation demands shared attention. Some AI experts have begun drawing directly on nuclear deterrence theories, such as the proposal for “Mutual Assured AI Malfunction” (MAIM), which suggests that the threat of pre-emptive sabotage will deter states from pursuing AI dominance. Whilst this analogy is provocative, it underscores the potential for AI catastrophe and demonstrates the need for international governance. We have already seen the destructive consequences of nuclear weapon use; we cannot afford to wait and see what destruction a MAIM catastrophe might cause.


The Bulletin of the Atomic Scientists’ Doomsday Clock counts AI, and its possible future application to nuclear weapons, among its existential risks. It is now 89 seconds to midnight. AI governance is required now — before the technology outpaces our ability to govern it. As the seconds tick closer to midnight, we must act before catastrophe compels us.

The dual-use dilemma — Why AI and nuclear risk feel so familiar:

Nuclear policy experts understand the complexities of dual-use technologies. Nuclear energy has powered progress toward net-zero goals and medical breakthroughs. Despite these contributions, the devastation of Hiroshima and Nagasaki remains etched in memory.

AI poses a similarly complicated risk landscape. Its benefits are seductive: increased economic prosperity, improved medical diagnostics, and greater food security, to name a few. Yet these societal promises are inseparable from their risks. AI can be used to help engineer chemical and biological weapons, disseminate disinformation, and increase the potency of cyberattacks. In warfare, the risk of nuclear miscalculation increases as AI speeds up the decision cycle in conflict.

Despite these similarities, we must be clear: nuclear weapons are unique. No other weapon system carries the same destructive power. Compelling arguments have been made for why nuclear governance models won’t work for AI: AI development is not controlled by states, lacks reliable verification tools, and is inherently harder to contain. But the uniqueness of the challenge is not a reason for disengagement. It’s a reason to lead.


What is currently being done and how nuclear policy experts can engage:

The good news is that action is beginning to take place. National governments, international organisations, frontier AI companies, and civil society organisations are all putting forward proposals. However, these proposals remain siloed, lack implementation mechanisms, and are largely voluntary. AI experts reportedly trust non-governmental and international organisations more than private companies and are sceptical of militaries that might misuse AI. This trust opens the door for nuclear policy experts to help build good governance.

Despite this relative lack of trust in private companies, frontier AI companies are taking promising steps. Google, OpenAI, Anthropic, and Meta have each developed Responsible Scaling Policies (RSPs): voluntary frameworks that set out technical and organisational protocols intended to mitigate catastrophic chemical, biological, radiological and nuclear (CBRN) risks. Those working in nuclear policy are positioned to educate technology experts on the intricacies of catastrophic nuclear risks, help refine these RSPs, and develop a set of agreed standards. Inviting those working for AI companies to policy workshops — directly related to AI or not — and to Track 1.5/2 diplomatic efforts provides a platform to share knowledge and build consensus.

Significant developments are also taking place within the nations leading AI development. AI experts’ higher level of trust in the EU may reflect the EU AI Act — the first comprehensive legal framework for AI. The Act sorts risks into unacceptable, high, limited, and minimal categories, prioritising regulation of risks that infringe the rights of EU citizens. The UK has launched the AI Security Institute to conduct research and build the infrastructure needed to understand advanced AI’s capabilities, impacts, and risk mitigations. China has introduced algorithm registries and mandated the labelling of deepfakes, though its proposals fail to address CBRN hazards. Under the Biden Administration, the US set out a series of Executive Orders (EOs) promoting the safe, secure, and trustworthy development and use of AI. The Trump Administration has since revoked many of Biden’s EOs and committed to producing an Artificial Intelligence Action Plan within 180 days of taking office.

Whilst it remains unclear what the action plan will include, export controls are likely to feature. OpenAI has put forward a proposal to maintain the Diffusion Rule — a framework for managing export controls — with adaptations that would expand unrestricted exports to democratic countries and restrict China’s access to American AI technologies. There are parallels to be drawn with how exports of nuclear technologies are governed: the Nuclear Suppliers Group, for example, is an informal group of countries that promotes best practice on the export of nuclear dual-use items and technologies in line with legally binding international treaties such as the Non-Proliferation Treaty.


Despite the lack of enforceable international coordination, multilateral efforts are emerging: the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM), the UN’s Global Digital Compact, and the creation of the UN High-Level Advisory Body on Artificial Intelligence all present opportunities for international dialogue. Building consensus between geopolitical rivals remains a key challenge for developing good AI governance. Those engaged in nuclear disarmament have been here before. They should continue to engage with these multilateral efforts and share their knowledge and experience of how international norms can be promoted.

Global efforts to govern AI — from national regulation to voluntary safety standards by AI companies — remain uncoordinated and lack accountability and verification. Geopolitical competition inflames these challenges. This is where nuclear experts can lead. Whilst they may be imperfect, global efforts to govern nuclear weapons have successfully prevented their further use.

Pathways forward:

Nuclear weapons and AI may be fundamentally different, but they share the capacity to destabilise global security. As AI technologies increasingly intersect with nuclear weapons and introduce new levels of uncertainty and competition in geopolitics, the expertise of the nuclear policy community is urgently needed.

Nuclear experts understand how to manage escalation and uncertainty and how to build norms between competitors. Their experience can help build consensus, shape policies, and advise on the design of institutions capable of managing transformative technologies.

Action towards nuclear disarmament was not inevitable — it was built through diplomacy, institutions, norm-setting and, regrettably, catastrophic events. Those who survived the attacks on Hiroshima and Nagasaki have spent decades campaigning to ensure that we never repeat this mistake. The question is not whether AI will reshape geopolitics but whether those with the deepest experience of managing existential risk will help shape its trajectory.

The European Leadership Network itself as an institution holds no formal policy positions. The opinions articulated above represent the views of the authors rather than the European Leadership Network or its members. The ELN aims to encourage debates that will help develop Europe’s capacity to address the pressing foreign, defence, and security policy challenges of our time, to further its charitable purposes.

Image credit: Composite image of RawPixel, and Wikimedia Commons / 長岡外史