Ganna Pogrebna

Behavioural Data Scientist

Professor Ganna Pogrebna is an internationally recognised expert in behavioural data science, AI governance, and emerging technology risk assessment. Her research focuses on decision-making under uncertainty and the behavioural, ethical, and organisational dimensions of AI, with particular emphasis on high-stakes domains such as defence, cybersecurity, and national security policy. She has published over 150 peer-reviewed research articles and three books, is Editor of the forthcoming Cambridge Handbook of Behavioural Data Science, and is a contributor to The Oxford Handbook of the Ethics of AI. She has led large-scale interdisciplinary research programmes with total funding exceeding AUD 30 million, including projects for the World Bank, the UK Ministry of Defence, GCHQ, and the Australian Office of National Intelligence.

Her work has been recognised through multiple honours, including the TechWomen100 Award, the Women in AI APAC Award (Risk Modelling and Cybersecurity), and the Australian Women in Security Award for AI in Cybersecurity. She serves as Methods Editor for The Leadership Quarterly and sits on the editorial boards of Scientific Reports and Judgment and Decision Making. She regularly advises international organisations, governments, and industry on behavioural risk, AI governance, and technology-enabled decision systems.

Content by Ganna Pogrebna

Report

Towards a better understanding of human bias in nuclear decision-making and its interaction with emerging and disruptive technologies

This report by Ganna Pogrebna and ELN Senior Policy Fellow Rishi Paul presents findings from an ELN workshop that examined the ‘human’ and ‘machine’ components of bias and their points of interaction. The report highlights how human judgment and AI systems can interact in ways that reinforce, rather than reduce, risk.

27 February 2026 | Ganna Pogrebna and Rishi Paul
Policy brief

Technological complexity and risk reduction: Using digital twins to navigate uncertainty in nuclear weapons decision-making and EDT landscapes

This policy brief explores the integration of digital twin technologies into nuclear decision-making processes, assessing their potential to reduce risks stemming from emerging disruptive technologies (EDTs). It argues for international dialogue, transparency, and responsible innovation to prevent misuse, enhance NC3 resilience, and strengthen strategic stability through informed, scenario-based crisis simulations.

Commentary

Ok, Doomer! The NEVER podcast – Fake brains and killer robots

Listen to the fifth episode of the NEVER podcast – Ok, Doomer! In this episode, we explore artificial intelligence and its relationship with existential risk. The episode features an introduction to the topic; why we should be especially wary when integrating AI with nuclear weapons systems; the role of AI in arms control; how best to regulate AI at the global level and which international institutions are best placed to do so; and historical perspectives on technological change and its impact on our cultural understandings of existential risk.