Prof Ganna Pogrebna is an internationally recognised expert in behavioural data science, AI governance, and emerging technology risk assessment. She has previously worked with rapporteurs on AI risks at the Council of Europe, contributed to The Oxford Handbook of Ethics of AI, and published over 114 scientific contributions. Ganna has led research projects with total funding exceeding AUD 30 million, including projects for the World Bank, the UK Ministry of Defence (MoD), GCHQ, and the Australian Office of National Intelligence (ONI). She is a Methods Editor for Leadership Quarterly and serves on the editorial boards of Scientific Reports and Judgment and Decision Making. Her research focuses on decision-making under uncertainty, particularly in high-stakes environments such as defence, cybersecurity, and nuclear policy. She has advised policymakers on the impact of AI and emerging technologies on national security and crisis management, shaping strategies for risk mitigation and resilience.
Ganna Pogrebna
Professor, Executive Director, AI and Cyber Futures Institute
Content by Ganna Pogrebna

Technological complexity and risk reduction: Using digital twins to navigate uncertainty in nuclear weapons decision-making and EDT landscapes
This policy brief explores the integration of digital twin technologies into nuclear decision-making processes, assessing their potential to reduce risks stemming from emerging disruptive technologies (EDTs). It argues for international dialogue, transparency, and responsible innovation to prevent misuse, enhance the resilience of nuclear command, control, and communications (NC3), and strengthen strategic stability through informed, scenario-based crisis simulations.

Ok, Doomer! The NEVER podcast – Fake brains and killer robots
Listen to the fifth episode of the NEVER podcast – Ok, Doomer! In this episode, we explore artificial intelligence and its relationship with existential risk. The episode features an introduction to the topic; why we should be especially wary of integrating AI with nuclear weapons systems; the role of AI in arms control; how best to regulate AI at the global level and which international institutions are best placed to do so; and historical perspectives on technological change and its impact on our cultural understanding of existential risk.