Alice Saltini

Former ELN Policy Fellow and Non-Resident Expert on AI at the James Martin Center for Nonproliferation Studies (CNS)

YGLN Membership: Italy

Alice Saltini is a Non-Resident Expert on AI at the James Martin Center for Nonproliferation Studies (CNS), specialising in the impact of AI on nuclear decision-making. She advises governments and international organisations on managing AI-related nuclear risks, translating complex technical concepts into actionable policy insights to help mitigate the challenges of integrating AI into military and nuclear weapons systems. She has published extensively on military applications of AI and has developed a general-purpose risk assessment framework for analysing AI and nuclear risks.

Previously, she worked with the European Leadership Network, the Comprehensive Nuclear-Test-Ban Treaty Organization, and CNS. She holds a Master’s degree in Russian Studies and a Postgraduate Certificate (PgCert) in Nonproliferation Studies from the Middlebury Institute of International Studies.

As a former policy fellow at the European Leadership Network (ELN), Saltini contributed to a range of projects within the Global Security Program, including an examination of the interplay between AI and nuclear risks.

Content by Alice Saltini

Policy brief

Assessing the implications of integrating AI in nuclear decision-making systems

This policy brief analyses the integration of AI into nuclear command, control and communications (NC3) systems, exploring potential benefits and significant risks. Former ELN policy fellow and CNS Non-Resident Expert on AI Alice Saltini highlights the need for better risk assessment and for thresholds on integration to prevent miscalculation and nuclear escalation. The brief proposes that the EU lead international dialogue on AI risks in the nuclear domain.

11 February 2025 | Alice Saltini
Commentary

Ok, Doomer! The NEVER podcast – Fake brains and killer robots

Listen to the fifth episode of the NEVER podcast – Ok, Doomer! In this episode, we explore artificial intelligence and its relationship with existential risk. The episode features an introduction to the topic; why we should be especially wary of integrating AI into nuclear weapons systems; the role of AI in arms control; how best to regulate AI at the global level and which international institutions are best placed to do so; and historical perspectives on technological change and its impact on our cultural understandings of existential risk.

Report

Asia-Pacific flashpoints: Comparing Australian, Japanese, South Korean & UK perceptions

This ELN and APLN report compares Australian, Japanese, South Korean, and UK risk perceptions regarding Taiwan and North Korea. It finds that diverging risk perceptions in the Asia-Pacific are a potential obstacle to policy coordination and offers recommendations for addressing them.