The rapid proliferation of artificial intelligence (AI) and deepfake technologies has introduced new and complex risks to individuals, companies, financial systems, and digital trust. While existing research has primarily examined deepfakes in sexual or political contexts, systematic analyses of AI-enabled fraud remain limited. This study addresses that gap through an incident-based analysis of 167 documented cases of AI- and deepfake-enabled fraud worldwide between 2019 and 2025. Drawing on Cyber-Routine Activities Theory (C-RAT), the study examines temporal trends, victim targeting patterns, financial losses, and cross-national variations to assess how emerging AI technologies reshape opportunity structures for cybercrime. The findings reveal a sharp increase in AI-assisted fraud after 2022, coinciding with the public availability of generative AI tools. Victimization patterns shifted from companies toward individuals, and financial losses, initially concentrated among companies, increasingly affected individual victims. Country-level analysis highlights substantial variation, including evidence that targeted regulatory interventions can reduce exposure to AI-enabled fraud, as demonstrated in the People’s Republic of China. Overall, the results support C-RAT’s core assumptions regarding motivated offenders, suitable targets, and capable guardianship, while extending the theory to account for AI-driven cyber threats and systemic forms of guardianship. The study emphasizes that AI-enabled fraud represents a structural social risk inherent in modern digital infrastructures. Effective mitigation requires multi-layered strategies that integrate technical controls, organizational investment in cybersecurity, and adaptive regulatory governance.
LINK: https://www.tandfonline.com/doi/full/10.1080/07366981.2026.2631066