Commentary

Ok, Doomer! The NEVER podcast – Fake brains and killer robots

Listen to the fifth episode of the NEVER podcast, Ok, Doomer! In this episode, we explore artificial intelligence and its relationship with existential risk, featuring an introduction to the topic; why we should be especially wary of integrating AI into nuclear weapons systems; the role of AI in arms control; how best to regulate AI at the global level and which international institutions are best placed to do so; and historical perspectives on technological change and its impact on our cultural understandings of existential risk.

Report

Asia-Pacific flashpoints: Comparing Australian, Japanese, South Korean & UK perceptions

This ELN and APLN report compares Australian, Japanese, South Korean, and UK risk perceptions towards Taiwan and North Korea. It finds that diverging perceptions of risk in the Asia-Pacific are potential obstacles to policy coordination and offers recommendations for how to address this.

Commentary

Navigating cyber vulnerabilities in AI-enabled military systems

As countries continue incorporating AI into conventional military systems, they should prepare for the risk that adversaries are already working to exploit weaknesses in AI models by compromising the datasets at their core. To address this, Alice Saltini writes that states should develop metrics to assess how cyber vulnerabilities could affect AI integration.

19 March 2024 | Alice Saltini
Report

UK thinking on AI integration and interaction with nuclear command and control, force structure, and decision-making

Alice Saltini analyses the British literature on the UK’s perception of military and nuclear applications of AI and their impact on strategic stability and NC3. The paper offers recommendations for unilateral measures that the UK can take, as well as multilateral initiatives within the P5 framework, to address the risks associated with AI in nuclear decision-making.

13 November 2023 | Alice Saltini
Report

AI and nuclear command, control and communications: P5 perspectives

The five nuclear-weapon states (China, France, Russia, the United Kingdom, and the United States) are increasingly recognising the implications of integrating AI into nuclear weapons command, control, and communication systems. Exploring the risks inherent in today’s advanced AI systems, this report sheds light on the characteristics and risks of different branches of this technology and establishes the basis for a general-purpose risk assessment framework.

13 November 2023 | Alice Saltini
Commentary

To avoid nuclear instability, a moratorium on integrating AI into nuclear decision-making is urgently needed: The NPT PrepCom can serve as a springboard

The integration of neural networks into NC3 poses a multitude of risks to global security. Alice Saltini writes that, to pave the way for a moratorium, NPT States Parties should use the PrepCom to focus discussions on understanding the risks associated with integrating deep learning models into nuclear decision-making.

28 July 2023 | Alice Saltini