ABSTRACT
The rapid advancement of artificial intelligence (AI) has facilitated the rise of deepfakes—synthetic audio, image, text, and video content generated by deep learning algorithms that can convincingly fabricate events, statements, or identities. While deepfake technology has positive applications in fields such as education and entertainment, it has predominantly been exploited for malicious purposes, including sexual exploitation, political manipulation, fraud, and disinformation campaigns. In response, web-based deepfake detection tools have emerged as accessible alternatives to algorithmic models, enabling non-experts such as researchers, journalists, and policymakers to verify suspicious content. However, systematic evaluations of these tools remain scarce in the academic literature. To address this gap, the present study examines 37 web-based deepfake detection tools. Employing descriptive and statistical analyses—including accuracy analysis, weighted score, and margin of error calculations—the study evaluates tool performance and categorizes them into high, moderate, and low accuracy groups. Results indicate that photo detection tools are the most reliable, with several achieving weighted scores above 80 percent, while video detection tools demonstrate only moderate effectiveness. Audio detection remains underdeveloped, showing significant weaknesses in both accuracy and tool availability. Hive Moderation stands out as the most versatile platform, though none of the tools offer comprehensive, multi-modal, or real-time detection. The findings highlight critical gaps in current detection capabilities, particularly in audio and in large, long-form media. This underscores the importance of scalable, multi-modal, real-time solutions to safeguard democratic institutions, organizations, companies, individual security, and public trust.
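The abstract does not spell out the exact formulas behind its "weighted score" and "margin of error" calculations, but the standard versions of both are straightforward. The sketch below is a generic illustration, not the paper's method: `weighted_score` is an assumed weighted average of per-test accuracies, and `margin_of_error` is the usual normal-approximation margin for an observed proportion.

```python
import math

def weighted_score(accuracies, weights):
    """Hypothetical weighted average of per-test accuracies (0..1 each)."""
    return sum(a * w for a, w in zip(accuracies, weights)) / sum(weights)

def margin_of_error(p, n, z=1.96):
    """Normal-approximation margin of error for a proportion p
    observed over n samples; z=1.96 gives a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: a tool that classifies 45 of 50 test samples correctly
p = 45 / 50
print(round(p, 2))                       # observed accuracy: 0.9
print(round(margin_of_error(p, 50), 3))  # ± 0.083 at 95% confidence
```

With only 50 samples, a tool's reported 90 percent accuracy carries a margin of roughly ±8 points, which is why studies like this one group tools into broad high/moderate/low accuracy bands rather than ranking them by raw score.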
LINK: https://www.tandfonline.com/doi/full/10.1080/07366981.2025.2564783