Cognitive Biases and User Vulnerability to AI-Driven Social Engineering Attacks in Corporate IT Systems
Abstract
Social engineering remains one of the most persistent cybersecurity threats to corporate information systems, primarily because it exploits human cognitive processes rather than technical vulnerabilities. Despite substantial investments in advanced security infrastructure, employee behaviour continues to represent a critical point of failure, a challenge that has intensified with the emergence of artificial intelligence (AI)–driven social engineering attacks. AI-generated phishing, voice cloning, and deepfake impersonation enable highly personalised, context-aware, and scalable attacks that are increasingly difficult for users to detect. Central to the success of these attacks is the exploitation of cognitive biases such as authority, urgency, familiarity, confirmation, and optimism, which systematically influence human judgment and decision-making in high-pressure corporate environments. This study examines how specific cognitive biases shape employee susceptibility to both traditional and AI-powered social engineering attacks within corporate IT environments. Drawing on cognitive psychology, human–computer interaction, and cybersecurity engineering frameworks, the research analyses the mechanisms through which these biases affect user responses to deceptive digital interactions. It further investigates how AI-enhanced attack techniques amplify these vulnerabilities by mimicking legitimate communication patterns and dynamically adapting to user behaviour. The study evaluates the effectiveness and limitations of existing technical and human-centred security controls and highlights gaps in current organisational defence strategies that fail to account adequately for cognitive vulnerabilities. Based on these insights, the research proposes engineering-oriented mitigation strategies aimed at reducing user susceptibility and strengthening organisational resilience, including adaptive security training, cognitive-aware authentication mechanisms, behavioural analytics, and AI-driven detection systems. By integrating human cognitive factors into security system design, this research contributes to the development of next-generation socio-technical cybersecurity models capable of countering both human error and AI-enhanced adversarial tactics.
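To make the behavioural-analytics and AI-driven detection ideas above more concrete, the sketch below illustrates one possible approach: heuristically scoring an inbound message for language that targets the authority, urgency, and familiarity biases named in the abstract. The cue phrases, weights, and the score_message function are illustrative assumptions for exposition, not the controls developed in this study.

# Illustrative sketch only (not the system proposed in the study): a minimal
# heuristic that scores an inbound message for phrases commonly used to trigger
# the authority, urgency, and familiarity biases. The cue lexicons, weights,
# and sample text are hypothetical placeholders; a real behavioural-analytics
# pipeline would learn such cues from labelled corporate email data.

from dataclasses import dataclass, field

# Hypothetical cue lexicon: one phrase list per targeted cognitive bias.
BIAS_CUES: dict[str, list[str]] = {
    "authority": ["ceo", "compliance team", "legal department", "it administrator"],
    "urgency": ["immediately", "within 24 hours", "account will be suspended", "act now"],
    "familiarity": ["as we discussed", "per our last call", "your colleague"],
}

# Assumed weights reflecting how strongly each cue class is treated as a risk signal.
CUE_WEIGHTS: dict[str, float] = {"authority": 0.4, "urgency": 0.4, "familiarity": 0.2}


@dataclass
class BiasCueReport:
    score: float                                   # 0.0 (no cues) to 1.0 (all cue classes hit)
    matched: dict[str, list[str]] = field(default_factory=dict)  # matched phrases per bias


def score_message(text: str) -> BiasCueReport:
    """Return a weighted score for bias-exploiting phrases found in one message."""
    lowered = text.lower()
    report = BiasCueReport(score=0.0)
    for bias, phrases in BIAS_CUES.items():
        hits = [phrase for phrase in phrases if phrase in lowered]
        if hits:
            report.matched[bias] = hits
            report.score = round(report.score + CUE_WEIGHTS[bias], 2)
    return report


if __name__ == "__main__":
    sample = ("This is the CEO. Approve the wire transfer immediately or your "
              "account will be suspended within 24 hours.")
    print(score_message(sample))
    # Prints a report with score=0.8 and matched authority and urgency cues.

In practice, such a scorer would more plausibly feed a cognitive-aware intervention, for example an extra confirmation prompt before acting on the message, rather than blocking mail outright, keeping the human decision-maker in the loop.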