Commentary

The fast and the deadly: When Artificial Intelligence meets Weapons of Mass Destruction

Ahead of the German Federal Foreign Office’s Artificial Intelligence and Weapons of Mass Destruction Conference 2024, the ELN’s Policy and Research Director, Oliver Meier, argues that governments should build guardrails around the integration of AI in the WMD sphere, and slow down the incorporation of AI into research, development, production, and planning for nuclear, biological, and chemical weapons.

27 June 2024 | Oliver Meier
Commentary

Ok, Doomer! The NEVER podcast – Biological threats: Going viral

Listen to the fourth episode of the NEVER podcast – Ok, Doomer! In this episode, we explore biological existential and global catastrophic risks. It features an introduction to the topic, an exploration of some of the technological mitigation techniques available for biological risks, and a panel discussion on the current state of the international governance of biological risks. We also delve into whether humanity’s recent brush with COVID-19 has left us better or worse prepared for future pandemics, be they man-made or natural.

Event

‘Technological Complexity and Risk Reduction: A guardrails and checklist framework for EDTs in nuclear weapons decision-making’

On 10th-11th April 2024, the ELN convened a diverse group of experts for a workshop at the German Federal Foreign Office to consider the core ingredients of a guardrails and checklist framework that will help policymakers anticipate and address challenges arising from Emerging and Disruptive Technologies (EDTs) and their aggregate effects on nuclear command, control, and communications (NC3) and nuclear weapons decision-making.

10 April 2024
Commentary

Navigating cyber vulnerabilities in AI-enabled military systems

As countries continue incorporating AI into conventional military systems, they should prepare for the likelihood that adversaries are already working to exploit weaknesses in AI models by targeting the datasets at their core. To address this, Alice Saltini writes that states should develop metrics to assess how cyber vulnerabilities could affect AI integration.

19 March 2024 | Alice Saltini
Commentary

Sounding the alarm on AI-enhanced bioweapons

In our latest commentary from our New European Voices on Existential Risk (NEVER) network, Rebecca Donaldson explores how to harness the security potential of new technologies in AI and the life sciences while minimising their potential for harm. She proposes increased funding for the Biological Weapons Convention, the creation of an Emerging Technology Utilisation and Response Unit (ETURU), and the fostering of a culture of AI assurance and responsible democratisation of biotechnologies.

26 February 2024 | Rebecca Donaldson