About this Event
Cyber security is often framed as a technical challenge - defined by firewalls, encryption protocols, and intrusion detection systems. Yet beneath this infrastructure lies a human layer riddled with cognitive biases, institutional assumptions, and algorithmic blind spots that shape how threats are perceived, prioritised, and addressed.

This talk explores the often-overlooked role of bias in cyber security, drawing on insights from behavioural science, decision theory, and human–machine interaction. From confirmation bias in threat intelligence analysis to automation bias in security operations centres, and from the racial and gendered assumptions baked into facial recognition systems to geopolitical framing in cyber risk assessments, bias manifests at every layer of the cyber security ecosystem.

Through case studies from the public and private sectors, the session will examine how these biases distort incident response, amplify inequality, and expose organisations to preventable risks. Special attention will be given to bias in AI-driven security tools, highlighting how efforts to automate defence may entrench flawed reasoning at scale. The talk will also introduce practical interventions for building bias-aware cyber cultures, including adversarial thinking training, red teaming with behavioural diversity, and the integration of fairness-aware machine learning.

As the cyber threat landscape grows more complex, addressing bias is not just a matter of equity - it is essential for strategic resilience.
Event Venue & Nearby Stays
University of Warwick, IMC004, Coventry, United Kingdom
GBP 0.00