Commentary

From nuclear stability to AI safety: Why nuclear policy experts must help shape AI’s future

Artificial intelligence, much like nuclear technology, has the capacity to transform our world for the better, offering breakthroughs across many fields whilst posing catastrophic risks. Nuclear policy experts, skilled in managing existential threats, are well-suited to guide AI governance. ELN Network and Communications Manager Andrew Jones argues that urgent, coordinated international action, and closer collaboration between experts in the nuclear and AI fields, is needed before AI outpaces our ability to control it.

25 April 2025 | Andrew Jones
Policy brief

Technological complexity and risk reduction: Using digital twins to navigate uncertainty in nuclear weapons decision-making and EDT landscapes

This policy brief explores the integration of digital twin technologies into nuclear decision-making processes, assessing their potential to reduce risks stemming from emerging disruptive technologies (EDTs). It argues for international dialogue, transparency, and responsible innovation to prevent misuse, enhance NC3 resilience, and strengthen strategic stability through informed, scenario-based crisis simulations.

Report

How to save the world: Influencing policy on the biggest risks to humanity

A new report published by the European Leadership Network’s New European Voices on Existential Risk (NEVER) project calls for a systemic international approach to addressing man-made existential risk. The risks from nuclear weapons, climate change, biological threats, and AI are interconnected, and cross-cutting lessons should be drawn.

Policy brief

Assessing the implications of integrating AI in nuclear decision-making systems

This policy brief analyses the integration of AI into nuclear command, control and communications (NC3) systems, exploring potential benefits and significant risks. Former ELN policy fellow and Non-Resident Expert on AI at CNS Alice Saltini highlights the need for better assessment of risks and the establishment of thresholds for integration to prevent miscalculations and nuclear escalation. It proposes that the EU lead international dialogue on AI risks in the nuclear domain.

11 February 2025 | Alice Saltini
Commentary

Deterrence without destruction: Rethinking responses to biological threats

Scientific advances have renewed discussion of the possibility of devastating biological attacks. Eva Siegmann writes that nuclear deterrence is inadequate for deterring biological threats. Instead, the threat of biological weapons should be addressed through international efforts rooted in transparency and cooperation. Leveraging the mechanisms of the Biological Weapons Convention and implementing deterrence-by-denial strategies can effectively mitigate these risks.

28 November 2024 | Eva Siegmann