The proliferation of fake profiles on social media presents critical cybersecurity and misinformation challenges, necessitating robust and scalable detection mechanisms. Such profiles weaken consumer trust, reduce user engagement, and ultimately harm brand reputation and platform credibility. As adversarial tactics and synthetic identity generation evolve, traditional rule-based and machine learning approaches struggle to detect the evolving, deceptive behavioral patterns embedded in dynamic user-generated content. This study develops an AI-driven, multi-modal deep learning detection system that fuses textual, visual, and social network features to identify fake profiles with greater accuracy, while ensuring the scalability, adversarial robustness, and real-time threat detection required for practical deployment in industrial cybersecurity environments. To achieve these objectives, the study proposes an integrated AI system that combines the Robustly Optimized BERT Pretraining Approach (RoBERTa) for deep semantic textual analysis, ConvNeXt for high-resolution profile image verification, and Heterogeneous Graph Attention Networks (Hetero-GAT) for modeling complex social interactions. The features extracted from the three modalities are fused through an attention-based late fusion strategy, enhancing interpretability, robustness, and cross-modal learning. Experimental evaluations on large-scale social media datasets demonstrate that the proposed RoBERTa-ConvNeXt-HeteroGAT model significantly outperforms baseline models, including Support Vector Machine (SVM), Random Forest, and Long Short-Term Memory (LSTM), achieving 98.9% accuracy, 98.4% precision, and a 98.6% F1-score with a per-profile inference time of 15.7 milliseconds, enabling real-time applicability. Moreover, the model remains resilient against adversarial attacks targeting text, images, and network activity. This study advances the application of AI in cybersecurity by introducing a highly interpretable, multi-modal detection system that strengthens digital trust, supports identity verification, and enhances the security of social media platforms. By aligning technical robustness with brand trust, the system offers value not only for cybersecurity but also for sustaining platform credibility and consumer confidence. It provides practical benefits to a wide range of stakeholders, including platform providers, AI researchers, cybersecurity professionals, and public sector regulators, by enabling real-time detection, improving operational efficiency, and safeguarding online ecosystems.
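To illustrate the attention-based late fusion mentioned above, the following PyTorch sketch shows one plausible way to combine per-modality embeddings from the text, image, and graph encoders. The embedding dimensions, the module name AttentionLateFusion, and the learned softmax weighting over modalities are illustrative assumptions; the abstract does not specify the authors' actual layer sizes or fusion hyperparameters.

```python
# Minimal sketch of attention-based late fusion over three modality embeddings
# (text, image, graph). All dimensions and layer sizes are illustrative
# assumptions, not the paper's reported configuration.
import torch
import torch.nn as nn


class AttentionLateFusion(nn.Module):
    def __init__(self, text_dim=768, image_dim=1024, graph_dim=256, fused_dim=256):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj_text = nn.Linear(text_dim, fused_dim)    # e.g., RoBERTa [CLS] output
        self.proj_image = nn.Linear(image_dim, fused_dim)  # e.g., ConvNeXt pooled features
        self.proj_graph = nn.Linear(graph_dim, fused_dim)  # e.g., Hetero-GAT node embedding
        # Score each modality; a softmax over the three scores gives attention weights.
        self.attn = nn.Sequential(nn.Linear(fused_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.classifier = nn.Linear(fused_dim, 2)          # fake vs. genuine profile

    def forward(self, text_emb, image_emb, graph_emb):
        # Stack projected modalities: shape (batch, 3, fused_dim).
        modalities = torch.stack(
            [self.proj_text(text_emb), self.proj_image(image_emb), self.proj_graph(graph_emb)],
            dim=1,
        )
        weights = torch.softmax(self.attn(modalities), dim=1)  # (batch, 3, 1)
        fused = (weights * modalities).sum(dim=1)              # attention-weighted sum
        return self.classifier(fused), weights.squeeze(-1)     # logits + per-modality weights


# Usage with random placeholder tensors standing in for encoder outputs.
if __name__ == "__main__":
    model = AttentionLateFusion()
    text = torch.randn(4, 768)    # placeholder RoBERTa sentence embeddings
    image = torch.randn(4, 1024)  # placeholder ConvNeXt image features
    graph = torch.randn(4, 256)   # placeholder Hetero-GAT profile-node embeddings
    logits, attn_weights = model(text, image, graph)
    print(logits.shape, attn_weights.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```

Returning the per-modality attention weights alongside the logits is one simple way such a design can support the interpretability claim, since the weights indicate which modality drove each decision.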