
Firebreaks: Mitigating the Risks of AI Integration into Nuclear Operations

This project responds to Carnegie Corporation of New York’s call to examine how and where technological developments affect nuclear risks and to identify solutions for reducing these dangers. Through this project, the Arms Control Association (ACA), Berkeley Risk and Security Lab (BRSL), and the European Leadership Network (ELN) will deliver a comprehensive menu of specific options to further advance the conversation on artificial intelligence (AI) safety and security in nuclear operations.

Why?

Over the last decade, analyses of the integration of AI into nuclear weapons operations have extensively explored the ways AI could contribute to crisis instability through accidental or inadvertent escalation. There is broad consensus that risks arise not only from the technical characteristics of AI but also from how AI is used in operational settings by military personnel and decision-makers.

So far, analyses offering specific recommendations for addressing these risks have been tentative and scattershot. In part, this is because of the novelty of AI applications in military settings and the opacity of nuclear operations. But with military leaders now calling for accelerated adoption of AI into nuclear or nuclear-adjacent military computer systems, the time is ripe for a study of possible solutions, or Firebreaks, for preventing the worst harms of AI integration.

The nuclear policy community has focused its efforts to date on maintaining the “human in the loop” principle in nuclear weapons operations. In 2024, the U.S. Congress emphasised that it is U.S. policy to ensure that AI integration does not compromise “the principle of requiring positive human actions in execution of decisions by the President.” While this high-level policy is a necessary step toward preventing nuclear catastrophe, it is insufficient to prevent all conceivable categories of accidents that could arise from AI integration.

This project will build on and expand existing ELN materials, including a report on emerging and disruptive technologies, which noted the need for nuclear powers to conduct “Fail-Safe review[s] of the safety, security, and reliability of nuclear weapons…in the context of potential malfunctions” and “ensure [the] relevant level of human control,” among other recommendations.

How?

The Arms Control Association, Berkeley Risk and Security Lab, and the European Leadership Network will bring together leading scholars of AI policy and nuclear weapons operations to develop a menu of specific, targeted, and actionable policies for mitigating AI integration risks.

Then, through feedback mechanisms including tabletop exercises, the Firebreaks project will assess the adequacy and appropriateness of these policies in the U.S. context.

Finally, the project will test the generalisability of the proposals with experts on the nuclear operations of non-U.S. nuclear-armed states.

The ELN’s role will focus on supporting the project’s expansion in its second phase, drawing on our network within Europe to engage experts on non-U.S. nuclear powers.
