DOI: 10.37199/c41000316

SAFEGUARDING DIGITAL AUTHENTICITY AND WOMEN'S IDENTITY THROUGH DEEPFAKE DETECTION

Author
Livia IBRANJ, POLIS University (Tirana, Albania)

Abstract
Deepfake technology, the algorithmic manipulation of images and videos, is renowned for its ability to create highly realistic and convincing content. These systems leverage deep learning, training generative neural architectures to map one person's voice and face onto another person's body. Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) enable accurate manipulation of voices, facial features, and expressions, producing images and videos that closely resemble real people. While this may appear to be a technological achievement, deepfakes have enabled profound harms, including identity theft, harassment, and non-consensual explicit imagery, with women comprising 96% of victims. This paper explores multimodal detection approaches that combine deep learning features with forensic analysis to differentiate AI-generated images from authentic photographs. Our methodology integrates complementary detection strategies: deep semantic features via EfficientNet-B3 and CLIP models (2,304 dimensions), frequency-domain analysis detecting spectral anomalies, noise residual statistics, Local Binary Pattern texture descriptors, and facial forensics, totalling 2,339 features per image. An ensemble classifier combining Gradient Boosting and Logistic Regression was trained on 200 images (100 authentic photographs and 100 AI-generated images from Midjourney, Stable Diffusion, and DALL-E), achieving 85% accuracy with a 98.25% ROC-AUC. Performance analysis reveals asymmetric characteristics: 95% recall for authentic images versus 75% recall for AI-generated content, while maintaining 93.75% precision on synthetic detection. The 25% false negative rate underscores that technical detection alone cannot solve deepfake abuse; comprehensive protection requires platform accountability, legislative frameworks, and victim support systems. This study contextualises the technical findings within the social crisis of digital sexual violence, examines documented psychological impacts on victims, identifies critical legal gaps, and outlines future research directions, including larger datasets, temporal analysis, and hybrid human-AI detection systems.
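To make the described pipeline concrete, below is a minimal sketch in Python, assuming scikit-learn and scikit-image, of two of the hand-crafted feature families named in the abstract (frequency-domain spectral statistics and Local Binary Pattern histograms) feeding a soft-voting ensemble of Gradient Boosting and Logistic Regression. This is an illustration under stated assumptions, not the paper's implementation: the deep EfficientNet-B3/CLIP features, noise residuals, and facial forensics are omitted, and function names and parameters (frequency_features, lbp_features, n_bands) are hypothetical.

# Illustrative sketch, not the paper's exact pipeline: hand-crafted
# frequency-domain and LBP texture features feeding a soft-voting
# ensemble of Gradient Boosting and Logistic Regression.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def frequency_features(gray, n_bands=16):
    """Radially averaged log-power spectrum: generative models often
    leave anomalies in the high-frequency bands of the Fourier spectrum."""
    power = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2)
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, r.max() + 1e-6, n_bands + 1)
    return np.array([power[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def lbp_features(gray, p=8, radius=1):
    """Histogram of uniform Local Binary Patterns as a texture descriptor."""
    lbp = local_binary_pattern(gray, p, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def extract(gray):
    """Concatenate the two feature families into one vector per image."""
    return np.concatenate([frequency_features(gray), lbp_features(gray)])

# Soft voting averages the two models' predicted probabilities, mirroring
# the Gradient Boosting + Logistic Regression ensemble in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier()),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
    ],
    voting="soft",
)

# Interface demo on random stand-in "images" (the study itself used 100
# authentic and 100 AI-generated photographs):
rng = np.random.default_rng(0)
imgs = (rng.random((20, 128, 128)) * 255).astype(np.uint8)
X = np.stack([extract(im) for im in imgs])
y = np.array([0, 1] * 10)  # 0 = authentic, 1 = AI-generated
ensemble.fit(X, y)
print(ensemble.predict_proba(X[:3]))

In a full version of this approach, the 2,304-dimensional deep semantic features would be concatenated with the forensic descriptors before training, giving the 2,339-feature vectors the abstract reports.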

Keywords: Deepfake Detection, Generative Adversarial Networks, Artificial Intelligence (AI), Multimodal Feature Extraction

Publisher: Polis_press