Article

Safeguarding Brand and Platform Credibility Through AI-Based Multi-Model Fake Profile Detection

by Vishwas Chakranarayan 1, Fadheela Hussain 2, Fayzeh Abdulkareem Jaber 2, Redha J. Shaker 2 and Ali Rizwan 3,*
1 College of Administrative and Financial Sciences, University of Technology Bahrain, Salmabad P.O. Box 18041, Bahrain
2 Department of Computer Science, University of Technology Bahrain, Salmabad P.O. Box 18041, Bahrain
3 Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Future Internet 2025, 17(9), 391; https://doi.org/10.3390/fi17090391
Submission received: 23 July 2025 / Revised: 27 August 2025 / Accepted: 27 August 2025 / Published: 29 August 2025

Abstract

The proliferation of fake profiles on social media presents critical cybersecurity and misinformation challenges, necessitating robust and scalable detection mechanisms. Such profiles weaken consumer trust, reduce user engagement, and ultimately harm brand reputation and platform credibility. As adversarial tactics and synthetic identity generation evolve, traditional rule-based and machine learning approaches struggle to detect the shifting, deceptive behavioral patterns embedded in dynamic user-generated content. This study aims to develop an AI-driven, multi-modal deep learning detection system for identifying fake profiles that fuses textual, visual, and social network features to enhance detection accuracy. It also seeks to ensure scalability, adversarial robustness, and real-time threat detection capabilities suitable for practical deployment in industrial cybersecurity environments. To achieve these objectives, the current study proposes an integrated AI system that combines the Robustly Optimized BERT Pretraining Approach (RoBERTa) for deep semantic textual analysis, ConvNeXt for high-resolution profile image verification, and Heterogeneous Graph Attention Networks (Hetero-GAT) for modeling complex social interactions. The extracted features from all three modalities are fused through an attention-based late fusion strategy, enhancing interpretability, robustness, and cross-modal learning. Experimental evaluations on large-scale social media datasets demonstrate that the proposed RoBERTa-ConvNeXt-HeteroGAT model significantly outperforms baseline models, including Support Vector Machine (SVM), Random Forest, and Long Short-Term Memory (LSTM). The model achieves 98.9% accuracy, 98.4% precision, and a 98.6% F1-score, with a per-profile inference time of 15.7 milliseconds, enabling real-time applicability. Moreover, the model proves resilient against various adversarial attacks on text, images, and network activity. This study advances the application of AI in cybersecurity by introducing a highly interpretable, multi-modal detection system that strengthens digital trust, supports identity verification, and enhances the security of social media platforms. This alignment of technical robustness with brand trust highlights the system's value not only in cybersecurity but also in sustaining platform credibility and consumer confidence. The system provides practical value to a wide range of stakeholders, including platform providers, AI researchers, cybersecurity professionals, and public sector regulators, by enabling real-time detection, improving operational efficiency, and safeguarding online ecosystems.

1. Introduction

Fake profiles on social media demand an urgent response, as fraudulent methods evolve alongside rapid technological development [1]. The abuse of artificial intelligence (AI) to create fake accounts has become a major concern, as such accounts now appear strikingly realistic. These AI-generated profiles use realistic-looking images and personas to masquerade as genuine users, and they are created mainly to spread false information and run scams through networks operating predominantly on platforms such as Twitter [2]. Such sophisticated fake-account operations pose serious risks to online communities and to the integrity of shared information. Machine learning algorithms detect fake profiles by examining numerous account attributes, including inter-account interactions, behaviors, and network properties. Promising new methods are emerging, including keystroke dynamics analysis [3], which applies behavioral biometrics to typing patterns to detect the anomalies that fake accounts display. Other recent technologies have likewise yielded better detection tools capable of countering advanced threats. Significant challenges remain, however: the rapid progress of AI enables scammers to create increasingly sophisticated fake accounts that evade existing protection and detection mechanisms [4]. Deepfake technology poses serious problems for information authentication, as fabricated content appears highly realistic to both humans and automated systems. Deactivation of fraudulent accounts on platforms such as Meta is slow, and these delays produce two kinds of harm: monetary losses for users and reputational damage to legitimate businesses and individuals [5].
In recent years, the industrial relevance of AI-powered social media monitoring systems has become critical, especially as organizations and platforms struggle to ensure security, transparency, and trust. The threat posed by fake profiles not only affects individual users but also damages public institutions, influences democratic processes, and undermines the integrity of large-scale digital ecosystems. Therefore, robust AI-driven frameworks that are scalable, interpretable, and resistant to adversarial manipulation are now essential to the future of secure social computing.
RoBERTa, built on the BERT architecture, demonstrates a strong capability in capturing deep semantic meaning in textual data, outperforming conventional methods in tasks such as sentiment analysis and fake news detection. It is employed to identify fraudulent content in user-generated text, including bios, posts, and comments. ConvNeXt, a state-of-the-art convolutional model, enhances accuracy in image classification and is effective for verifying profile images and detecting manipulated visuals. Hetero-GAT, designed to model complex relationships within social networks, identifies irregular patterns by analyzing interactions such as likes, comments, and shares. When combined, these models provide a comprehensive solution for detecting fake profiles by integrating textual, visual, and social network data.
Counterfeit profiles have major negative effects on brand value, most notably by eroding consumer trust, which is essential to brand equity. Beyond undermining platform security, issues such as misinformation, scamming, and impersonation corrupt the authenticity of user interactions, subsequently lowering engagement and brand value. The marketing literature shows that digital trust is central to brand credibility, and deceptive accounts erode brand equity by distorting how consumers perceive and engage with the brand. Brand credibility involves more than protecting logos and intellectual property; ideally, it rests on genuine interactions that build consumer confidence and lasting loyalty. These factors are critical to maintaining a brand's reputation.
Fake social media profiles pose a significant problem that compromises the security, integrity, and reliability of all online platforms. Inauthentic online profiles serve criminal purposes, including financial crimes, fraudulent promotion activities, and coordinated fake-engagement schemes [6]. AI technology has become so advanced in creating artificial profiles that detection has become very complex. The advent of deepfake technology has raised three key issues: identity theft, privacy breaches, and the defamation of individuals through highly realistic fake images and videos. Addressing these issues requires improving safety standards and user trust on social media platforms [7]. Reliable identification and removal of fake profiles in digital interactions preserves online integrity and effectively prevents the spread of misinformation and the activities of online scammers. Resolving this problem yields several benefits for user data security and democratic practices by minimizing the influence of fake accounts on societal discussions [8]. This study undertakes the effort to build safer digital communities through sophisticated detection techniques based on machine learning, behavioral analytics, and biometric verification technology.
Fake profiles have significant economic implications, contributing to ad fraud, scam-induced losses, and influencer marketing fraud, costing businesses billions annually. Consumer purchase decisions are heavily influenced by trust signals on social media, such as reviews, follower authenticity, and endorsements. Studies have shown that consumers are more likely to convert, remain loyal, and retain long-term relationships with brands they trust. Trust in digital platforms directly impacts conversion rates, with fraudulent accounts undermining these trust signals. As such, combating fake profiles is crucial not only for security but also for maintaining financial integrity and customer behavior stability in digital ecosystems [9].
Platform credibility serves as a valuable marketing asset, as platforms with high trust ratings attract advertisers, partners, and influencers seeking to reach authentic audiences. By reducing fake profiles, platforms can differentiate themselves in a competitive market, marketing features like “99% verified accounts” to highlight their commitment to authenticity and security. Implementing an AI-driven system to detect and mitigate fake profiles not only protects the platform but also directly ties to customer acquisition. Users are more likely to engage with platforms they perceive as safe and authentic, enhancing user retention and fostering a trustworthy digital environment for all stakeholders.
The AI model should be viewed not just as a technical security layer but as an integral part of brand risk management. By detecting fake profiles and impersonation accounts, this AI-driven system can be incorporated into brand monitoring tools to identify and mitigate threats targeting corporate brands or public figures. For instance, companies like Twitter and Facebook have faced significant brand damage due to the proliferation of fake accounts and impersonation, which has led to consumer mistrust and diminished brand value. By proactively addressing these risks, brands can better safeguard their reputation, preserve customer trust, and prevent financial losses.
Moreover, as the use of AI technologies grows in industrial sectors, including healthcare, finance, the Internet of Things, and autonomous systems, ensuring the security of social media platforms becomes central to cybersecurity measures in all organizations. This study also aligns with new trends in AI applications, focusing on the ethical use of AI, leveraging unstructured information across various fields of application, and achieving operational speed in responding to online hostilities in real time.
RoBERTa, ConvNeXt, and Hetero-GAT each demonstrate strong performance individually, but their integration into a unified model remains limited. Existing approaches often focus solely on either textual or visual features, which restricts their ability to detect advanced fake profiles. A more robust solution emerges by integrating textual, visual, and social interaction information. Furthermore, many existing models lack scalability across platforms and are vulnerable to adversarial attacks. The proposed framework combines RoBERTa, ConvNeXt, Hetero-GAT, and attention mechanisms, providing adaptability to evolving deception strategies and enabling real-time detection across diverse platforms with large-scale data volumes.
The increasing prevalence of fake profiles on social media poses significant adverse effects, making online authenticity checks a critical necessity, as online platforms now encompass all facets of human communication, including economic operations and learning [10]. Operators of fake profiles cause serious harm by distributing inaccurate information and engaging in financial manipulation, compromising individual privacy and anonymity while misleading other users. These threats erode trust in the digital ecosystem, as they compromise individual safety as well as societal institutions and democratic structures [11]. This study responds to the need to combat new deceptions generated by AI-created artificial profiles and fake content. State-of-the-art threats require innovative solution strategies, as existing protection mechanisms are not effective enough. This study employs artificial intelligence techniques and incorporates behavioral information analysis and session authentication, delivering improved potential for detecting fake accounts [12]. The investigation aims to create social media spaces that offer stronger security guarantees and credibility. To address this issue, the proposed research develops a reliable detection technique that assists social media networks, cybersecurity experts, and government officials confronting online deception [13]. The project aims to establish two key points: the adoption of new technologies, and the operational advantages that protect users against threats while simultaneously maintaining data credibility and fostering better online communities [14].
The proposed framework combines RoBERTa to analyze written text, ConvNeXt to verify the authenticity of images, and Hetero-GAT to investigate complex social networks. An attention-based late fusion approach enables the integration of information across modalities and the precise, interpretable detection of fake profiles within the scheme. The ultimate goal is to address the significant issues of cybersecurity and misinformation by implementing a comprehensive plan that enhances the authenticity of online communication.
Despite advancements in fake profile detection, existing methods often focus on isolated modalities (text, image, or network interactions) and struggle to capture the full complexity of deceptive profiles. Many approaches fail to scale effectively across different platforms, and few are designed to withstand adversarial manipulation. Furthermore, many models lack interpretability, which limits their application in real-world scenarios.
The key contributions of this study are as follows:
  • An innovative multi-modal detection system is proposed that integrates RoBERTa for textual analysis, ConvNeXt for image verification, and Hetero-GAT for modeling complex social interactions.
  • An attention-based late fusion strategy is introduced for cross-modal learning, enhancing detection accuracy and robustness.
  • The model demonstrates real-time applicability, achieving 98.9% accuracy and a 98.6% F1-score.
  • The system is shown to be resilient against diverse adversarial attacks, ensuring suitability for deployment in dynamic cybersecurity environments.
This study is organized as follows. Section 1 presents the background, motivation, and critical research gaps in fake profile detection on social media. It describes the necessity of AI-based, multi-modal approaches and sketches the proposed way forward. Section 2 provides a detailed survey of current detection methods, including rule-based approaches, machine learning classifiers, and novel deep learning-based models, as well as their limitations in detecting modern fake profile generation techniques. Section 3 details the proposed multi-modal framework, demonstrating how RoBERTa is used to analyze text, ConvNeXt to verify images, and Hetero-GAT to model social interactions. This section also explains the attention-based late fusion mechanism for assembling the outputs of all modalities. Section 4 describes the data collection process, data preprocessing, and implementation. It establishes the evaluation parameters and the experimental conditions in which the model is trained and tested. Section 5 presents the experimental outcomes, including performance metrics, ablation studies, inference time comparisons, and assessments of adversarial robustness in various attack settings. Section 6 concludes the study with key findings and recommendations for real-world application, including suggestions for future research on cross-platform generalization and privacy-preserving AI mechanisms.

2. Literature Review

To detect social media imposters, a machine learning detector was created to evaluate user-generated texts and behavioral characteristics. The experimental findings validated the system's superiority over rule-based models in deceptive account detection and demonstrated that deceptive accounts could be identified at very high success rates. The article contributed to building safer web environments by discussing the most important digital security risks, such as misinformation, deception, and manipulation [15].
Another study developed a monitoring system to identify distributors of fake news on social media platforms by studying profile-page metadata and post content. The findings revealed that combining text analysis with profile characteristics yields more accurate misinformation detection and tracking. This combination of semantic traits and behavioral markers enabled better grouping of profiles and sources of information propagation, which in turn increased the efficiency of digital forensics across social media conglomerations [16].
To detect malicious user behavior, an experiment was carried out using a deep learning approach to enhance cybersecurity on social media. Through different deep learning models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), the system was able to identify behavioral anomalies, thus improving threat detection and security enforcement. This multi-architecture design positioned the system to respond to a wide variety of deceptive behaviors, yielding high detection rates while enhancing portability and suitability across platforms [17].
A group of researchers proposed a novel blockchain-based framework that combines the benefits of decentralized transparency and machine learning to detect and map fraudulent profiles. Blockchain integration secured the storage of suspicious-activity records, improving user accountability and trust. This combined solution provided not only immutable records but also tamper-evident storage, paving the way for more reliable identity validation in digital environments [18].
Fake profile detection was performed using a hybrid of a deep convolutional neural network and Random Forest (RF), and stalking behavior was forecast on the X (formerly Twitter) network using the same hybrid method. This hybridization united statistical feature engineering with elaborate visual feature extraction to identify recurring patterns of deception and irregular engagement. The technique achieved higher detection accuracy than classic models and improved user safety by significant margins through the prediction of active threats [19].
Multi-modal approaches employing spatiotemporal and contextual cues were used to examine the factors behind the diffusion of misinformation on digital platforms. Adding attention mechanisms improved model performance significantly, enabling it to identify fake content by learning which aspects of the data are meaningful. The strategy also fostered platform transparency and digital security by increasing detection granularity at both the platform and user levels [20].
To identify fraudulent profiles, a Multinomial Naive Bayes classifier was used to examine user-generated text and metadata. The model was highly accurate, indicating that lightweight statistical methods can deliver real-time solutions in low-resource environments. Nevertheless, the study underlined the need for continuous feature engineering because of the evolving behavioral patterns adopted by fraudsters [21].
Machine learning and deep learning meta-models were systematically reviewed to determine their applicability in detecting deception in social networks. The review considered evidence from 36 studies and pointed to several problems, among them biased datasets, missing cross-platform model validation, and the absence of standard evaluation criteria. This careful scrutiny made it evident that detection reliability in real environments would improve with consistent analysis methods and robust preprocessing pipelines [22].
A machine learning-based model was proposed to detect fake Instagram accounts based on a set of behavioral and profile features. The study demonstrated that high performance can be achieved with reduced feature sets, particularly by prioritizing engagement rates and content frequency. Nevertheless, issues such as generalization to unseen datasets and limited coverage of user behavior patterns remain important areas for improvement [23].
A multi-modal deep learning model was proposed to track fake news by leveraging textual, visual, and contextual information on social networks. Based on CLIP models and LSTM networks, the framework achieves high detection rates and finds that integrating various modalities improves performance. It is instrumental in combating synthetic media and fake news and would therefore be a solid foundation for future content verification systems in platform moderation pipelines [24]. Table 1 presents an analytical comparison of previously conducted studies that address fake profile detection and the handling of misinformation within the context of social media. The techniques range from traditional machine learning to deep learning, blockchain integration, and multi-modal approaches, in which text, image, and behavioral data are combined. All the studies offer particular contributions, with accuracy, interpretability, and real-time detection as their key strengths, and generalizability, computational cost, and adversarial robustness as commonly shared limitations. These insights form the basis of stronger, more adaptive, and scalable AI-based frameworks for social network security and user authentication.

Research Gap

Despite the advances in deep learning algorithms like RoBERTa, ConvNeXt, and Hetero-GAT, which demonstrate a significant trend in detecting fake profiles, some key research gaps remain to be filled. Among the most acute issues is the lack of flexibility in existing detection frameworks in response to the changing patterns of deceptive practices implemented by malicious users. These actors continually adapt their methods in response to new security measures, which means that the efficacy of static detection models in the long run is eroded. Therefore, the creation of adaptive systems to dynamically learn and react to new fraudulent patterns is an extremely urgent need.
In addition, although numerous current research efforts, along with the proposed one, show a high level of accuracy on benchmark data, they are frequently not validated across platforms. The structure of data, the type of data, and the patterns of user behavior vary significantly across different social media platforms, and detection models must adapt effectively to these diverse environments. Without extensive testing on varied real-world data, their feasibility in practical applications remains limited.
The other main drawback is the performance trade-off associated with attention-based late fusion strategies. Despite their strong multi-modal integration ability and interpretability (i.e., reliability in classifications), these mechanisms typically introduce latency and increased computational load across the overall framework. This undermines the scalability of such models in the large-scale, real-time operation of social networks.
Additionally, although ConvNeXt performs competitively on visual data analysis, its performance degrades on realistic generative-AI-based synthetic media and deepfake profile content. Given the growing sophistication of artificial media, existing visual modules may be insufficient to maintain detection accuracy.
Another research gap concerns mistrust of explainability among end-users and platform moderators. Most detection models do not provide intuitive, visual explanations of the reasoning underlying their decisions, which limits their utility for content moderation teams and stakeholders who bear ultimate responsibility for policy and user-trust decisions.
Lastly, adversarial robustness in multi-modal systems is insufficiently explored. The current frameworks are seldom tested in the context of coordinated attacks on textual, visual, and graph-based features in unison. These weaknesses have the potential to further impact precision under adversarial circumstances, particularly in high-risk information ecosystems.
To overcome these vulnerabilities, future work should pursue an adaptive, interpretable, and adversarially robust AI architecture that generalizes across platforms and resists adversaries that evolve over time. Upcoming studies should prioritize scalable deployment, cross-domain testing, real-time interpretability, and the practical, secure, and trusted integration of fake profile detection systems into real-world social media settings.

3. Methodology

Fake accounts on social networks have become a significant cybersecurity issue, endangering user privacy and facilitating mass misinformation campaigns; their sheer numbers overwhelm immediate security measures and necessitate the development of AI-based defenses. Current rule-based and traditional machine learning systems lack the flexibility to identify evolving, deceitful profiles, highlighting the potential of novel AI methods to enhance system resilience in emerging cybersecurity technologies. In this study, a novel three-component deep learning model, integrating RoBERTa, ConvNeXt, and Hetero-GAT, is proposed to detect fake profiles on industrial-scale social networks. The framework combines all three modalities, namely textual, visual, and social-graph data, thereby increasing detection fidelity along with the interpretability and deployment readiness of AI-based cybersecurity infrastructure. The model employs an attention-based late fusion mechanism, which enables better performance compared with standard classification systems.

3.1. Dataset Description

The evaluation uses a benchmark multi-modal dataset consisting of 150,000 social media profiles (75,000 genuine and 75,000 fake), obtained from publicly available, anonymized repositories widely used in academic research, such as FakeNewsNet and Twitter Bot datasets. The dataset includes text, profile images, and graph-structured interactions, ensuring robustness in real-world operational environments. Textual information from user bios, posts, and comments is analyzed to detect patterns indicative of fake profiles. Profile picture data is examined to identify both artificially generated and computer-manipulated content. In this study, a profile is defined as a complete set of user-related information, covering textual data (name, bio, posts, and comments), visual data (profile and posted images), and social interaction data (connections, followers, likes, and comments). These varied data classes serve as multi-modal inputs to the detection model and collectively contribute to identifying fake profiles. The detection of anomalies leverages social network connections, along with behavioral interactions, which are represented as graph data. Supervised learning models can be trained through the profile authenticity labels detected in the dataset. Figure 1 shows the proposed fake profile detection framework.
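To make the notion of a profile concrete, the sketch below shows one way the multi-modal sample described above could be structured in Python; all field names are illustrative assumptions, not identifiers from the underlying datasets.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ProfileRecord:
    """One multi-modal profile sample: textual, visual, and social interaction data."""
    user_id: str
    bio: str                                      # textual modality: name/bio text
    posts: List[str] = field(default_factory=list)  # posts and comments
    image_path: Optional[str] = None              # visual modality; may be absent
    interactions: List[Tuple[str, str]] = field(default_factory=list)
                                                  # (neighbor_id, relation), e.g. "likes"
    label: int = 0                                # 1 = fake, 0 = genuine
```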

3.2. Dataset Preprocessing

Improving model robustness requires several preprocessing methods tailored to each data type. For text, tokenization and stop-word removal are performed before the RoBERTa model generates embeddings that capture deep contextual semantics. Images undergo size normalization and enhancement through various augmentation methods, which optimize the feature extraction capabilities of ConvNeXt.
In this study, a Gated Recurrent Unit (GRU) is integrated into the RoBERTa model to capture sequential dependencies within textual data, such as user posts and bios. The GRU's role is to preserve contextual relationships between words across long sequences, enabling the model to better detect patterns of deception in user-generated content. In doing so, the GRU enhances RoBERTa's ability to recognize the linguistic anomalies typical of fake profiles that exhibit unusual or inconsistent narrative patterns.
The preprocessing pipeline constructs heterogeneous social graphs, where users and posts are represented as nodes with interaction edges, enabling the Hetero-GAT module to capture complex network patterns for enhanced security monitoring. Feature standardization maintains numerical stability across modalities, which improves the generalization capabilities and effectiveness of the model.
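As a concrete illustration, the following minimal sketch shows how the text and image preprocessing steps described above might be implemented; the Hugging Face `roberta-base` tokenizer, the 224 × 224 image size, and the specific augmentations are illustrative assumptions rather than settings reported in this study.

```python
from torchvision import transforms
from transformers import RobertaTokenizer

# Text: RoBERTa tokenization producing input IDs and attention masks (see Section 3.3).
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def preprocess_text(text: str, max_len: int = 256):
    enc = tokenizer(text, truncation=True, padding="max_length",
                    max_length=max_len, return_tensors="pt")
    return enc["input_ids"], enc["attention_mask"]

# Images: size normalization plus light augmentation ahead of ConvNeXt.
image_transform = transforms.Compose([
    transforms.Resize((224, 224)),               # fixed-size normalization
    transforms.RandomHorizontalFlip(),           # augmentation (training only)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
```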

3.3. RoBERTa

The proposed model incorporates RoBERTa as a core textual encoder, selected for its proven effectiveness in enterprise-level NLP tasks, enhancing its suitability for industrial AI applications in social media analytics. RoBERTa is an advanced version of the Bidirectional Encoder Representations from Transformers (BERT) model and shares the same Transformer architecture base. The Transformer is the model of choice for sequence-to-sequence tasks with long-range dependencies. It applies self-attention, rather than recurrence or convolution, to identify relevant input–output connections: the self-attention mechanism assigns greater weight to essential inputs, shortening the effective distance between related elements of the sequence. The Transformer comprises two fundamental components, an encoder and a decoder, each built from multiple layers: self-attention followed by a feed-forward network in the encoder, and self-attention layers backed by encoder–decoder attention and a feed-forward network in the decoder. The encoder is responsible for reading the input text, while the decoder handles prediction. The proposed model utilizes only the encoder element of RoBERTa as its text encoding layer. BERT was designed to overcome the context limitations of unidirectional approaches and was originally trained on two objectives, Masked Language Modeling and Next Sentence Prediction, using masked token prediction and semantic understanding of text pairs to predict the following sentence. RoBERTa outperforms BERT in several operational aspects. RoBERTa's byte-level Byte Pair Encoding tokenization can encode any input text without out-of-vocabulary tokens, offering more resource-efficient performance than BERT's character-level Byte Pair Encoding. BERT's data preprocessing involves static masking, where masking occurs only once; in contrast, RoBERTa performs dynamic masking over multiple duplicated sequences with adjusted attention masks, enabling it to process a wider range of input sequences. RoBERTa also undergoes more extensive training, with larger data volumes, increased batch sizes, and longer input sequences over a prolonged training duration. A total of four datasets were used for training RoBERTa: BookCorpus + English Wikipedia (16 GB), CC-News (76 GB), OpenWebText (38 GB), and Stories (31 GB).
In this study, the input to the RoBERTa model is prepared with the pretrained RoBERTa tokenizer. Given raw text, the tokenizer builds subword token sequences using vocabulary knowledge from RoBERTa's large training corpus. This processing preserves textual semantics while limiting the semantic loss from out-of-vocabulary items. Tokenization yields token sequences, and each token is assigned an input ID according to its position in the RoBERTa vocabulary. Every token also receives an attention mask, from which the model learns its significance relative to the other elements of the input sequence. The attention mechanism allows the model to concentrate on tokens carrying important words and to downweight less crucial ones, ensuring good downstream performance. The model passes the input IDs and attention masks through a 12-layer stacked encoder with 768-dimensional hidden states, and its self-attentive processing extracts information at different levels of abstraction. A lightweight GRU is attached to RoBERTa to model sequential dependencies without incurring complexity that would impede real-time inference, allowing the model to scale within the cybersecurity pipeline. Adding the GRU provides RoBERTa with additional contextual information and multi-token dependencies, resulting in improved prediction outcomes.
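A minimal PyTorch sketch of this text encoder is given below, assuming the Hugging Face `transformers` implementation of `roberta-base`; the GRU hidden size is an illustrative placeholder, since it is not reported above.

```python
import torch.nn as nn
from transformers import RobertaModel

class TextEncoder(nn.Module):
    """RoBERTa encoder followed by a lightweight GRU, as described above."""
    def __init__(self, gru_hidden: int = 256):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")  # 12 layers, 768-dim
        self.gru = nn.GRU(input_size=768, hidden_size=gru_hidden, batch_first=True)

    def forward(self, input_ids, attention_mask):
        out = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state      # (batch, seq_len, 768)
        _, h_n = self.gru(token_states)           # final GRU state summarizes the sequence
        return h_n.squeeze(0)                     # F_text: (batch, gru_hidden)
```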

3.4. ConvNeXt

ConvNeXt is employed to process and verify profile images as part of the multi-modal detection framework. Profile images, as well as user-uploaded content (posts, photos), are selected based on the presence of an image file linked to a user’s profile. These images are resized and normalized to a fixed size to maintain consistency across the dataset. The images are then passed through ConvNeXt’s feature extraction pipeline, where they undergo depth-wise separable convolutions to extract high-resolution visual features. For profiles without images, the system defaults to relying on textual and social interaction data from the profile to identify possible indicators of fakeness. This ensures that profiles with missing images are still analyzed effectively using other available modalities.
The ConvNeXt framework uses a fully convolutional structure that modernizes standard CNN architectures by adding depth-wise convolutions alongside Layer Normalization (LN) and inverted bottlenecks, creating highly effective high-resolution feature extraction capabilities. Traditional CNNs suffer from scalability issues due to parameter growth, particularly as depth expands, making training and inference operations increasingly expensive. Their predetermined convolutional structure also restricts them from effectively exploring complex data patterns, preventing full use of the available information. ConvNeXt utilizes depth-wise separable convolution blocks and inverted bottlenecks to deliver high-resolution image feature extraction with reduced latency, making it well suited to real-time detection in AI-powered security systems. The depth-wise separable convolution consists of a depth-wise convolution followed by a point-wise convolution, as shown in Figure 2.
Using separate convolutions on each input channel enables the collection of spatial information without allowing channel interactions to occur. The 1 × 1 convolution performs non-linear operations on channel-based information integration. The architectural design of ConvNeXt features multiple ConvNeXt blocks, as shown in Figure 3.
The ConvNeXt block incorporates drop path integration, a method that serves as an efficient means of preventing overfitting. The model is trained with a drop path, which removes random network path segments to learn multiple diverse data representations that enhance generalization. ConvNeXt develops a new bottleneck structure that operates in the opposite direction compared to ResNet.
Figure 4 illustrates the structure, comprising three components: a 1 × 1 convolution kernel, a 3 × 3 convolution kernel, and another 1 × 1 convolution kernel. The output channels of these three convolutional layers are modified according to the specifications of individual models. The inverted bottleneck design first increases the channel count and later reduces it. This structure enhances the network's ability to perform non-linear transformations while maintaining feature expression capabilities, resulting in improved performance outcomes.

3.4.1. Feature Extraction Pipeline

Let an input profile image be represented as

$$I \in \mathbb{R}^{H \times W \times C}$$

where H, W, and C denote height, width, and the number of channels, respectively. The feature extraction process in ConvNeXt follows a hierarchical structure:
  • Patch Embedding Layer: The image is divided into patches of size $P \times P$, where each patch is transformed into a feature vector $x_0$:

$$x_0 = f_{\mathrm{embed}}(I)$$

where $f_{\mathrm{embed}}(I)$ is a convolutional layer with a stride of P.
  • Depth-Wise Convolutional Blocks: The main body of ConvNeXt consists of depth-wise convolutional layers, followed by Layer Normalization (LN) and GELU activation. Each convolutional block is formulated as

$$x_{l+1} = x_l + f_{\mathrm{conv}}\big(\mathrm{LN}\big(\mathrm{GELU}(W_l * x_l)\big)\big)$$

where $W_l$ is the convolution kernel, $*$ represents the convolution operation, and $f_{\mathrm{conv}}$ denotes the depth-wise convolution function.
  • Feature Aggregation: The extracted hierarchical features are aggregated through global average pooling (GAP), defined as

$$F_{\mathrm{gap}} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} x_{i,j}$$

where $x_{i,j}$ is the feature map activation at spatial location $(i, j)$.
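The block equation above can be realized in a few lines of PyTorch; the sketch below follows the standard ConvNeXt block layout (7 × 7 depth-wise convolution, LN, inverted-bottleneck point-wise layers with GELU, and a residual connection), with the kernel size and expansion factor as illustrative defaults.

```python
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Minimal ConvNeXt-style block: depth-wise conv, LayerNorm, 1x1 (point-wise)
    expansion with GELU, 1x1 projection, and a residual connection (Figures 2-4)."""
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)                   # applied channel-last
        self.pwconv1 = nn.Linear(dim, expansion * dim)  # inverted bottleneck: expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)  # then reduce back

    def forward(self, x):                               # x: (batch, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                       # channel-last for LN/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        return residual + x                             # x_{l+1} = x_l + f_conv(...)
```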

3.4.2. Fake Profile Image Classification

To classify images as authentic or fake, the current study introduces a fully connected classification head on top of ConvNeXt's feature representations. Given the extracted feature vector $F_{\mathrm{gap}}$, the final classification output $y$ is computed as

$$y = \sigma(W_{fc} F_{\mathrm{gap}} + b_{fc})$$

where $W_{fc}$ and $b_{fc}$ are learnable parameters, and $\sigma$ is the softmax activation function:

$$\sigma(z_i) = \frac{e^{z_i}}{\sum_j e^{z_j}}$$
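A minimal sketch of this head is shown below, pooling ConvNeXt feature maps and applying the softmax classifier from the two equations above; the feature dimension is whatever the chosen ConvNeXt variant produces.

```python
import torch
import torch.nn as nn

class ImageHead(nn.Module):
    """Global average pooling followed by a fully connected softmax head:
    F_gap = mean over spatial positions, y = softmax(W_fc F_gap + b_fc)."""
    def __init__(self, feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feature_maps):                   # (batch, C, H, W) from ConvNeXt
        f_gap = feature_maps.mean(dim=(2, 3))          # global average pooling
        return torch.softmax(self.fc(f_gap), dim=-1)   # authentic/fake probabilities
```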

3.5. Heterogeneous Graph Attention Network (Hetero-GAT)

Hetero-GAT is employed to model complex relationships within social network data, which includes multiple types of nodes such as users, posts, images, and interactions like follows, likes, and comments. Each node is embedded with relevant features (e.g., user activity, post content, image metadata), and the edges represent the relationships or interactions between these nodes. The attention mechanism in Hetero-GAT allows the model to dynamically adjust the importance of each relationship based on the context of the task. By analyzing these multi-modal connections, Hetero-GAT is able to capture the distinctive characteristics of fake accounts—such as abnormal engagement patterns, unusual user connections, or irregular content sharing behaviors—providing valuable insights into profile authenticity. This enables the detection of suspicious behavior, such as coordinated manipulation or bot-like activity, that might indicate the presence of fake profiles.
The Hetero-GAT model facilitates the detection of fake profiles by analyzing user connections, content engagement, and posting behavior, thereby capturing complex social interactions. The model represents a heterogeneous social network, where nodes correspond to users and posts, and edges encode interactions such as likes and reposts. Each node v (user or post) is initialized with an embedding vector:

$$h_v^{(0)} = x_v$$

where $x_v$ represents the initial feature vector extracted from user metadata (e.g., activity patterns) and post embeddings.
To model relationships in the network, Hetero-GAT applies an attention-based message-passing mechanism, computing attention coefficients for each edge type (A, r, B), where nodes of type A interact with nodes of type B via relation r:

$$\alpha_{uv}^{(r)} = \frac{\exp\big(\mathrm{LeakyReLU}\big(a_r^{T}[W_r h_u \,\|\, W_r h_v]\big)\big)}{\sum_{k \in \mathcal{N}_u^{(r)}} \exp\big(\mathrm{LeakyReLU}\big(a_r^{T}[W_r h_u \,\|\, W_r h_k]\big)\big)}$$

where $W_r$ is a learnable weight matrix, $a_r$ is an attention vector, $\|$ denotes concatenation, and $\mathcal{N}_u^{(r)}$ is the set of neighbors under relation r. This mechanism enables the model to distinguish between different types of social interactions while learning the importance of each connection.
To update node representations, the model aggregates attention-weighted messages across different relation types:

$$h_u^{(l+1)} = \sigma\left(\sum_{r \in R}\;\sum_{v \in \mathcal{N}_u^{(r)}} \alpha_{uv}^{(r,l)}\, W_r^{(l)} h_v^{(l)}\right)$$

where l denotes the layer index. The final user embeddings are passed through a fully connected layer to classify profiles as real or fake:

$$\hat{y}_u = \sigma\big(W_{\mathrm{out}} h_u^{(L)} + b_{\mathrm{out}}\big)$$

where $W_{\mathrm{out}}$ and $b_{\mathrm{out}}$ are learnable parameters, and $\sigma$ is a sigmoid activation function for binary classification. The model is optimized using BCE loss:

$$\mathcal{L} = -\frac{1}{N}\sum_{u=1}^{N}\big(y_u \log \hat{y}_u + (1 - y_u)\log(1 - \hat{y}_u)\big)$$

where $y_u$ is the actual label (1 for fake, 0 for real) and $\hat{y}_u$ is the predicted probability.
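As an illustration of this message-passing scheme, the sketch below uses PyTorch Geometric's `HeteroConv` and `GATConv`; the node and relation names (`user`, `post`, `likes`, `follows`) mirror the description above but are otherwise illustrative, as is the hidden size.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import HeteroConv, GATConv

class HeteroGATEncoder(nn.Module):
    """One Hetero-GAT layer: relation-specific GAT attention, summed over relations,
    followed by a sigmoid head that scores each user node as real or fake."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.conv = HeteroConv({
            ('user', 'likes', 'post'):    GATConv((-1, -1), hidden, add_self_loops=False),
            ('post', 'liked_by', 'user'): GATConv((-1, -1), hidden, add_self_loops=False),
            ('user', 'follows', 'user'):  GATConv((-1, -1), hidden, add_self_loops=False),
        }, aggr='sum')                          # sum over relation types, as in the update rule
        self.out = nn.Linear(hidden, 1)

    def forward(self, x_dict, edge_index_dict):
        h_dict = self.conv(x_dict, edge_index_dict)     # attention-weighted messages
        return torch.sigmoid(self.out(h_dict['user']))  # y_hat_u per user node
```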
This Hetero-GAT model is integrated into the RoBERTa-ConvNeXt-HeteroGAT framework, complementing RoBERTa for textual analysis and ConvNeXt for image-based analysis. The framework employs an attention-based late fusion mechanism to adaptively weight modality contributions, thereby improving detection accuracy, interpretability, and resilience against multi-modal adversarial attacks, which are key for operational security systems. By effectively modelling anomalous patterns in social networks, Hetero-GAT significantly improves fake profile identification, outperforming conventional classifiers. The modular architecture and efficient preprocessing make this framework well suited for real-time deployment in industrial social platforms, offering robust, explainable, and secure identity validation at scale.

4. Experimental Setup

4.1. Feature Extraction and Model Components

The proposed AI-driven framework extracts features from three complementary modalities to enable resilient cybersecurity applications across social platforms:
  • RoBERTa extracts deep semantic representations from user-generated textual data.
  • A GRU captures sequential dependencies within the text stream to preserve contextual continuity critical for profile behavior modeling.
The output textual feature vector is denoted as

$$F_{\mathrm{text}} = \mathrm{GRU}\big(\mathrm{RoBERTa}(X)\big)$$
Visual Feature Extraction (ConvNeXt) for Secure Identity Validation
  • Profile images are divided into patches and passed through hierarchical convolutional blocks.
  • Depth-wise convolution extracts spatial features.
  • Global average pooling (GAP) aggregates feature maps into a compact feature vector.
The final image feature vector is

$$F_{\mathrm{image}} = \mathrm{GAP}\big(\mathrm{ConvNeXt}(I)\big)$$
Social Network Analysis (Hetero-GAT)
  • The heterogeneous graph module models user–user and user–content interactions, emphasizing behavioral irregularities that often indicate deceptive accounts.
  • Hetero-GAT employs multi-relation attention mechanisms to learn context-aware node representations adaptively.
The extracted graph-based feature vector is

$$F_{\mathrm{graph}} = \mathrm{HeteroGAT}(G)$$

4.2. Feature Fusion and Classification

An attention-guided late fusion mechanism combines the three modality-specific feature vectors $(F_{\mathrm{text}}, F_{\mathrm{image}}, F_{\mathrm{graph}})$, optimizing cross-modal synergy for industrial-strength profile detection:

$$F_{\mathrm{fused}} = \sum_{m=1}^{3} \alpha_m F_m$$

where $\alpha_m$ represents the importance weight of each modality.
The fused features are passed through a fully connected classification layer with softmax activation:

$$y = \sigma\big(W_{fc} F_{\mathrm{fused}} + b_{fc}\big)$$
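The fusion step can be sketched as follows, with learned scalar attention weights per modality; the input dimensions and common projection size are illustrative placeholders.

```python
import torch
import torch.nn as nn

class AttentionLateFusion(nn.Module):
    """Attention-guided late fusion: each modality vector is projected to a common
    dimension, scored, and combined as F_fused = sum_m alpha_m F_m."""
    def __init__(self, dims=(256, 768, 64), common: int = 256, num_classes: int = 2):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, common) for d in dims])
        self.score = nn.Linear(common, 1)              # one scalar score per modality
        self.fc = nn.Linear(common, num_classes)

    def forward(self, f_text, f_image, f_graph):
        feats = [p(f) for p, f in zip(self.proj, (f_text, f_image, f_graph))]
        stacked = torch.stack(feats, dim=1)            # (batch, 3, common)
        alpha = torch.softmax(self.score(stacked), dim=1)  # modality weights alpha_m
        fused = (alpha * stacked).sum(dim=1)           # F_fused
        return torch.softmax(self.fc(fused), dim=-1)   # classification output y
```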

4.3. Training and Evaluation

  • The framework is trained using Binary Cross Entropy (BCE) loss, suitable for secure binary classification in adversarial contexts.
  • Optimizer: Adam optimizer with learning rate $\eta = 10^{-4}$.
  • Batch Size: 64
  • Evaluation Metrics: Accuracy, precision, recall, F1-score, AUC-ROC, and Matthews Correlation Coefficient (MCC) were selected to reflect robust detection in AI-based security systems.
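A minimal training-loop sketch under the configuration listed above follows; `model` (the fused network) and `train_loader` (yielding batched multi-modal inputs and labels) are assumed to be defined elsewhere.

```python
import torch
import torch.nn as nn

# BCE loss, Adam with learning rate 1e-4; batch size 64 is set in the DataLoader.
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for input_ids, masks, images, graphs, labels in train_loader:
    optimizer.zero_grad()
    probs = model(input_ids, masks, images, graphs)  # fused fake-profile probability
    loss = criterion(probs.squeeze(-1), labels.float())
    loss.backward()
    optimizer.step()
```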
To ensure fair benchmarking, the performance of the proposed framework was compared against traditional and deep learning baselines, including SVM, RF, and Long Short-Term Memory (LSTM). As shown in Table 2, the proposed RoBERTa-ConvNeXt-HeteroGAT model outperformed all baseline methods, achieving the highest accuracy (98.9%), precision (98.4%), and F1-score (98.6%), and demonstrating its superior capability for secure adversarial classification.

4.4. Evaluation Metrics

To assess the performance of the RoBERTa-ConvNeXt-HeteroGAT model for fake profile detection, the current study employs the following standard evaluation metrics:
  • Accuracy
Accuracy measures the proportion of correctly classified profiles (both real and fake) out of the total sample. It is defined as
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
  • Precision
Precision measures the proportion of correctly identified fake profiles out of all profiles predicted as counterfeit:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
A high precision score indicates that the model is reliable in identifying fake profiles, with fewer false positives.
  • Recall
Recall (or sensitivity) measures the proportion of actual fake profiles that are correctly identified:
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
A high recall value ensures that the model detects most fake profiles, with a minimal number of false negatives.
  • F1-Score
The F1-score is the harmonic mean of precision and recall, providing a balanced measure of performance.
$$\mathrm{F1\text{-}Score} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
It ensures a trade-off between precision and recall, which is beneficial when class distributions are imbalanced.
  • Area Under the Receiver Operating Characteristic Curve (AUC-ROC)
The AUC-ROC score assesses the model’s ability to distinguish between genuine and fake profiles across various classification thresholds. It is computed as
$$\mathrm{AUC\text{-}ROC} = \int_{0}^{1} \mathrm{TPR}\; d(\mathrm{FPR})$$
  • Matthews Correlation Coefficient (MCC)
MCC provides a more balanced assessment of classification performance, especially in imbalanced datasets:
$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
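All six metrics can be computed directly with scikit-learn, as in the sketch below; `y_true` are ground-truth labels, `y_pred` hard predictions, and `y_prob` the predicted fake-profile probabilities used for AUC-ROC.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, matthews_corrcoef)

def evaluate(y_true, y_pred, y_prob):
    """Compute the evaluation metrics defined above for binary fake-profile labels."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "auc_roc":   roc_auc_score(y_true, y_prob),
        "mcc":       matthews_corrcoef(y_true, y_pred),
    }
```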
In addition to the standard evaluation metrics, the Matthews Correlation Coefficient (MCC) was incorporated to provide a balanced assessment of classification performance on imbalanced datasets. As presented in Table 3, the proposed RoBERTa-ConvNeXt-HeteroGAT model achieved an MCC of 0.97, alongside an accuracy of 98.9%, precision of 98.4%, recall of 98.7%, F1-score of 98.6%, and AUC-ROC of 0.99. These results confirm the robustness of the proposed framework, validate its superiority over traditional classifiers, and demonstrate its suitability for real-time cybersecurity operations in AI-driven social platforms.

5. Results

The proposed RoBERTa-ConvNeXt-HeteroGAT model shows a substantial improvement in fake profile detection compared to traditional machine learning and deep learning classification models. The multi-modal fusion strategy proved highly effective in jointly capturing text semantics, image authenticity, and complex social interactions, leading to superior results across all evaluation metrics. To evaluate the performance of the proposed model, a comparison test was carried out against classic machine learning and deep learning baselines, including SVM, RF, LSTM, RoBERTa, ConvNeXt, and Hetero-GAT. The results show that the proposed architecture achieves state-of-the-art performance, surpassing all baselines on essential metrics. The model achieves high performance, with an accuracy of 98.9%, precision of 98.4%, recall of 98.8%, and an F1-score of 98.6%. Its discriminative capability is further validated by an AUC-ROC of 99.2%. These findings affirm that multi-modal feature fusion enhances the robustness of detection. A more detailed comparison is presented in Table 4.
Figure 5 shows the performance of the proposed RoBERTa-ConvNeXt-HeteroGAT model against the baseline classifiers, i.e., SVM, RF, LSTM, RoBERTa, ConvNeXt, and Hetero-GAT, in terms of accuracy, precision, recall, F1-score, and AUC-ROC. The proposed model achieves the best results across the board, with 98.9% accuracy, 98.4% precision, 98.8% recall, a 98.6% F1-score, and a 99.2% AUC-ROC, significantly outperforming every other model. The findings validate the framework as effective, robust, and superior in accurately identifying fake accounts in social networks, making it a strong candidate for AI-driven cybersecurity solutions in online identity verification.
The individual contributions of the text- (RoBERTa), image- (ConvNeXt), and graph-based (Hetero-GAT) components were evaluated in an ablation study. As Table 5 shows, while individual modalities can be quite competitive, with Hetero-GAT performing best as a single modality, the multi-modal fusion approach outperforms all of them, achieving 98.9% accuracy, 98.4% precision, 98.8% recall, and a 98.6% F1-score. These results confirm the effectiveness of heterogeneous integration in detecting sophisticated fake profiles.
Figure 6 shows the relative results of the individual modality-based models—RoBERTa (text), ConvNeXt (image), and Hetero-GAT (graph)—and the multi-modal fusion framework. Evaluated separately, each model performs well, but its predictive ability is weaker than that of the integrated method. Multi-modal fusion is the best-performing model on all measures, achieving 98.9% accuracy, 98.4% precision, 98.8% recall, and a 98.6% F1-score. This demonstrates the effectiveness of a holistic approach to fake profile detection that incorporates textual, visual, and social graph data. The results validate that the use of multiple modalities is a significant source of strength and reliability in detecting cybersecurity threats in real-life scenarios.
The RoBERTa-ConvNeXt-HeteroGAT model has outstanding real-time efficiency, processing each profile in 15.7 milliseconds. This inference speed outperforms all baseline models, including SVM (29.4 ms), RF (24.8 ms), LSTM (42.3 ms), RoBERTa (38.5 ms), ConvNeXt (33.1 ms), and Hetero-GAT (27.9 ms). Table 6 presents this comparison, highlighting the model's applicability to large-scale social platforms. Its low latency ensures timely identification of threats, making it a suitable tool for high-throughput, AI-based cyber environments that demand speed and precision in identifying fake profiles.
A comparison of the inference times of SVM, RF, LSTM, RoBERTa, ConvNeXt, Hetero-GAT, and the proposed RoBERTa-ConvNeXt-HeteroGAT framework is shown in Figure 7. The proposed model has the shortest processing time (15.7 milliseconds per profile), substantially faster than all alternatives. This performance indicates that it is well suited for real-time applications in which quick decision-making is essential. The proposed architecture offers lower-latency operation than standard machine learning and deep learning models and can therefore be highly effective in large-scale, AI-powered cybersecurity systems that require fast and reliable detection of fake accounts.
The proposed RoBERTa-ConvNeXt-HeteroGAT model was evaluated against various adversarial attacks, including textual perturbations, image manipulations, and artificially designed engagement patterns. As Table 7 demonstrates, the model maintains strong performance in all adversarial situations. No more than a 4.7% accuracy drop is observed for combined attacks, a sign of high resilience. In particular, accuracy declines by 1.3% under textual perturbations, 2.1% under image manipulations, and 3.4% under synthetic engagement. These findings confirm the model's potential for secure deployment in real-world systems, demonstrating high detection accuracy even under adversarial conditions.
The scalability of the proposed framework was tested by training and evaluating it on dataset subsets ranging from 10,000 to 150,000 social media profiles. As shown in Table 8, performance improves as the dataset size increases. The model performs well at 10,000 profiles, achieving an accuracy of 96.1%, and when the entire dataset of 150,000 profiles is used, accuracy rises to 98.9%. The same tendency is observed for precision, recall, and F1-score. These findings demonstrate that the model generalizes well to large-scale social media settings, indicating its suitability for industrial use cases that require high-throughput, scalable AI-based detection of fake profiles.
Figure 8 presents a comprehensive assessment of the model’s robustness and scalability. The left panel illustrates the degradation in accuracy under various adversarial attack scenarios. The proposed RoBERTa-ConvNeXt-HeteroGAT model demonstrates strong resilience with minimal performance loss: 1.3% for textual perturbations, 2.1% for image manipulations, 3.4% for synthetic engagement, and 4.7% under combined attacks. The right panel illustrates the scalability of the model, where accuracy continues to rise to 98.9% as the training set size increases from 10,000 to 150,000 profiles. These findings confirm the model’s ability to manage adversarial attacks and scale to realistic and high-traffic-production social media settings.
The approach suggested in this study, based on RoBERTa-ConvNeXt-HeteroGAT, is a highly precise, computationally efficient, and adversarially robust mechanism for identifying fake profiles in industrial social networks. The model outperforms other methods by combining multi-modal data, including textual content, profile images, and graph-structured behaviors, as input. It achieves state-of-the-art performance, with substantial gains over single-modality deep learning and conventional machine learning methods. The architecture not only improves detection accuracy but also makes the system robust to adversarial manipulation. This demonstrates the effectiveness of multi-modal AI measures in combating misinformation and online security attacks, while providing a secure and scalable framework for identity authentication across online platforms.
Beyond the model’s accuracy, it is essential to track metrics that reflect its business and marketing relevance. Key Performance Indicators (KPIs) should include the percentage reduction in fake follower counts for brands, increases in user trust scores (measured through surveys), and a decrease in misinformation spread across the platform. These outcomes are valuable in demonstrating the model’s impact on brand credibility and consumer trust. Such KPIs can be reported to stakeholders to highlight the platform’s commitment to security and authenticity. They can also be leveraged in brand communication strategies, emphasizing the platform’s role in fostering a trustworthy digital ecosystem.

6. Discussion

6.1. Key Findings and Comparative Analysis

The current study proposes a cybersecurity-focused multi-modal deep learning architecture that combines textual (RoBERTa), visual (ConvNeXt), and graph-based social interaction (Hetero-GAT) modalities to identify fake profiles on social media platforms. The proposed RoBERTa–ConvNeXt–Hetero-GAT model achieves 98.9% accuracy, 98.4% precision, and a 98.6% F1-score, demonstrating its applicability in AI-driven intrusion detection and threat analysis systems and clearly surpassing conventional baselines such as SVM (85.3%), RF (89.2%), and LSTM (92.4%). Compared with the earlier literature, such as [1], which achieved 91.3% with a BiLSTM, and [2], which reported 90.2% for image-based detection using CNNs, the current model surpasses these results by a large margin. The attention-based late fusion approach enhances contextual threat modeling, improving classification performance by weighting the semantic importance of each modality. Notably, the model maintains robustness under adversarial attacks, losing only 4.7% accuracy, whereas comparable approaches such as [3] reported declines of over 8% under similar attacks.
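The reported metrics can be reproduced with standard tooling; the sketch below computes the same quantities as Tables 2–4 with scikit-learn on illustrative predictions, not the paper's actual model outputs.

```python
# Computing the evaluation metrics used throughout the comparison tables.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, matthews_corrcoef)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                   # 1 = fake profile
y_pred  = [1, 0, 1, 0, 0, 0, 1, 0]                   # hard predictions
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.2]   # fake-class probabilities

print(f"Accuracy : {accuracy_score(y_true, y_pred):.3f}")
print(f"Precision: {precision_score(y_true, y_pred):.3f}")
print(f"Recall   : {recall_score(y_true, y_pred):.3f}")
print(f"F1-score : {f1_score(y_true, y_pred):.3f}")
print(f"AUC-ROC  : {roc_auc_score(y_true, y_score):.3f}")
print(f"MCC      : {matthews_corrcoef(y_true, y_pred):.3f}")
```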

6.2. Theoretical Implications

These findings support the evolving hypothesis that multi-modal architectures are more effective at capturing user behavior within digital ecosystems, producing more precise detection results [4]. The introduction of Hetero-GAT as part of the framework contributes to the existing literature on the relevance of graph attention mechanisms over heterogeneous structures for modeling relational and behavioral anomalies. Furthermore, the high performance of attention-based late fusion underscores its theoretical utility in harmonizing feature importance across different modalities, which was not fully addressed in previous unimodal or bimodal experiments [5]. These results contribute to the field of intelligent defense by demonstrating the advantages of multi-view learning for identifying deceptive or adversarial entities in real-world networks.

6.3. Strategic and Policy Implications

The current framework presents an efficient and scalable detection mechanism that can be deployed on large social platforms in real time. The model achieves fast per-profile inference (15.7 ms) and has demonstrated performance at scale with up to 150,000 accounts. It offers an AI-based solution for real-time threat detection and security automation that policymakers and platform administrators can use to implement more robust identity verification practices, reduce users' exposure to misinformation, and combat coordinated disinformation. These capabilities directly align with the objectives of intelligent cyber defense systems, supporting digital trust, platform integrity, and public information security at a strategic scale.
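A per-profile latency figure of this kind is typically obtained with a simple timing harness such as the sketch below, where `model` and `profiles` are assumed to be supplied by the deployment environment.

```python
# Sketch of a mean per-profile inference latency benchmark (cf. Table 6).
import time

def mean_latency_ms(model, profiles, warmup: int = 10) -> float:
    for p in profiles[:warmup]:            # warm up caches / accelerator kernels
        model.predict(p)
    start = time.perf_counter()
    for p in profiles:
        model.predict(p)
    return 1000.0 * (time.perf_counter() - start) / len(profiles)
```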

6.4. Critical Reflections and Limitations

Despite promising results, certain limitations must be acknowledged. First, while the dataset is large, it may not fully represent the diverse behavioral patterns across emerging and niche social platforms. Second, the adversarial evaluation covers common perturbations but does not address more sophisticated or adaptive evasion strategies. Third, although attention mechanisms enhance interpretability, the current system does not yet offer fully transparent explanations to non-technical users or moderators. Future studies should explore the integration of explainable AI frameworks and adversarial training pipelines to improve transparency and resilience. Furthermore, cross-platform evaluations and domain adaptation strategies will help validate generalizability.

7. Ethical Transparency and Consumer Confidence

Explainable AI detection plays a crucial role in building transparency, which is a key marketing message for ethical brands. By providing clear, understandable reasons for identifying fake profiles and misinformation, the system enhances consumer trust and confidence. Rather than merely “defending” against security threats, this AI system actively contributes to enhancing a brand’s image by visibly demonstrating a commitment to authenticity and integrity. Brands that prioritize transparency and ethical practices in their digital environments foster stronger relationships with consumers, making trust a core differentiator in today’s competitive market.

8. Conclusions and Future Directions

This study presents a robust and scalable multi-modal deep learning framework for detecting fake profiles in social media ecosystems by integrating RoBERTa for textual representation, ConvNeXt for visual authentication, and Hetero-GAT for modeling social behavior. The proposed RoBERTa–ConvNeXt–Hetero-GAT architecture achieves state-of-the-art performance, with 98.9% accuracy and a 98.6% F1-score, surpassing conventional classifiers in both predictive accuracy and resilience to adversarial manipulations. The attention-based late fusion strategy enhances adaptability and interpretability, rendering the model well suited for AI-driven cybersecurity applications in large-scale digital platforms. The results validate the effectiveness of multi-modal data fusion and graph-based learning in identifying misinformation, coordinated disinformation, and automated bot activities in real time. With an inference speed of 15.7 milliseconds per profile, the model demonstrates its applicability in time-critical, high-volume scenarios, such as social networking platforms and e-governance systems.

To enhance generalization across platforms, future studies should incorporate cross-platform datasets and domain adaptation techniques. The adoption of self-supervised and semi-supervised learning methods could reduce dependence on labeled data and improve adaptability to evolving adversarial behaviors. The integration of explainable AI components is essential to ensure transparency, support user trust, and comply with regulatory standards. Federated learning strategies may also be applied to maintain data privacy in fake profile classification without compromising model performance.

Despite its promising results, the model has several limitations. It exhibits decreased accuracy when handling sequential text–image manipulations, and the computational complexity of processing heterogeneous graphs introduces additional overhead. Moreover, the framework's reliance on labeled data restricts its capacity to detect novel threats without frequent retraining. Addressing these limitations is necessary to ensure the practical deployment of this framework in real-world cybersecurity environments.

Author Contributions

Conceptualization, V.C.; methodology, F.H.; software, F.A.J.; validation, R.J.S.; formal analysis, V.C.; investigation, F.H.; resources, F.A.J.; data curation, R.J.S.; writing—original draft preparation, A.R. and V.C.; writing—review and editing, F.H. and A.R.; visualization, F.A.J.; supervision, R.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank the University of Technology Bahrain for supporting this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Abualigah, L.; Al-Ajlouni, Y.Y.; Daoud, M.S.; Altalhi, M.; Migdady, H. Fake news detection using a recurrent neural network based on bidirectional LSTM and GloVe. Soc. Netw. Anal. Min. 2024, 14, 40.
2. Thakar, H.; Bhatt, B. Fake news detection: Recent trends and challenges. Soc. Netw. Anal. Min. 2024, 14, 176.
3. Papageorgiou, E.; Chronis, C.; Varlamis, I.; Himeur, Y. A survey on the use of large language models (LLMs) in fake news. Future Internet 2024, 16, 298.
4. Lv, S.; Dong, J.; Wang, C.; Wang, X.; Bao, Z. RB-GAT: A text classification model based on RoBERTa-BiGRU with Graph Attention Network. Sensors 2024, 24, 3365.
5. Alford, P.; Jones, R. Digital entrepreneurial marketing bricolage: Shaping technology-in-practice. Int. J. Entrep. Behav. Res. 2025, 31, 1038–1061.
6. Xia, W.; Neware, R.; Kumar, S.D.; Karras, D.A.; Rizwan, A. An optimization technique for intrusion detection of industrial control network vulnerabilities based on BP neural network. Int. J. Syst. Assur. Eng. Manag. 2022, 13 (Suppl. 1), 576–582.
7. Yan, Y.; Fu, H.; Wu, F. Multimodal Social Media Fake News Detection Based on 1D-CCNet Attention Mechanism. Electronics 2024, 13, 3700.
8. Li, J.; Jiang, W.; Zhang, J.; Shao, Y.; Zhu, W. Fake User Detection Based on Multi-Model Joint Representation. Information 2024, 15, 266.
9. Al-alshaqi, M.; Rawat, D.B.; Liu, C. A BERT-Based Multimodal Framework for Enhanced Fake News Detection Using Text and Image Data Fusion. Computers 2025, 14, 237.
10. Kuntur, S.; Wróblewska, A.; Paprzycki, M.; Ganzha, M. Under the influence: A survey of large language models in fake news detection. IEEE Trans. Artif. Intell. 2024, 6, 458–476.
11. Fatoni, F.; Kurniawan, T.B.; Dewi, D.A.; Zakaria, M.Z.; Muhayeddin, A.M.M. Fake vs. Real Image Detection Using Deep Learning Algorithm. J. Appl. Data Sci. 2025, 6, 366–376.
12. Singhal, M.; Pacheco, J.; Khorzooghi, S.M.S.M.; Debi, T.; Asudeh, A.; Das, G.; Nilizadeh, S. Auditing Yelp's Business Ranking and Review Recommendation Through the Lens of Fairness. In Proceedings of the International AAAI Conference on Web and Social Media, Copenhagen, Denmark, 23–26 June 2025; Volume 19, pp. 1798–1816.
13. Manasa, P.; Malik, A.; Alqahtani, K.N.; Alomar, M.A.; Basingab, M.S.; Soni, M.; Rizwan, A.; Batra, I. Tweet spam detection using machine learning and swarm optimization techniques. IEEE Trans. Comput. Soc. Syst. 2022, 11, 4870–4877.
14. Obadă, D.R.; Dabija, D.C. "In flow"! Why do users share fake news about environmentally friendly brands on social media? Int. J. Environ. Res. Public Health 2022, 19, 4861.
15. Alharbi, N.; Alkalifah, B.; Alqarawi, G.; Rassam, M.A. Countering Social Media Cybercrime Using Deep Learning: Instagram Fake Accounts Detection. Future Internet 2024, 16, 367.
16. Al Shahrani, A.M.; Rizwan, A.; Sánchez-Chero, M.; Rosas-Prado, C.E.; Salazar, E.B.; Awad, N.A. An internet of things (IoT)-based optimization to enhance security in healthcare applications. Math. Probl. Eng. 2022, 2022, 6802967.
17. Kukkar, A.; Gupta, D.; Beram, S.M.; Soni, M.; Singh, N.K.; Sharma, A.; Neware, R.; Shabaz, M.; Rizwan, A. Optimizing deep learning model parameters using socially implemented IoMT systems for diabetic retinopathy classification problem. IEEE Trans. Comput. Soc. Syst. 2022, 10, 1654–1665.
18. Al-Alshaqi, M.; Rawat, D.B.; Liu, C. Ensemble Techniques for Robust Fake News Detection: Integrating Transformers, Natural Language Processing, and Machine Learning. Sensors 2024, 24, 6062.
19. Brummernhenrich, B.; Paulus, C.L.; Jucks, R. Applying social cognition to feedback chatbots: Enhancing trustworthiness through politeness. Br. J. Educ. Technol. 2025, 1–20.
20. Pendyala, V.S.; Chintalapati, A. Using Multimodal Foundation Models for Detecting Fake Images on the Internet with Explanations. Future Internet 2024, 16, 432.
21. Zhang, L.; Zhang, C.; Zhang, Z.; Huang, Y. SAFE-GTA: Semantic Augmentation-Based Multimodal Fake News Detection via Global-Token Attention. Symmetry 2025, 17, 961.
22. Liu, Y.; Liu, Y.; Li, Z.; Yao, R.; Zhang, Y.; Wang, D. Modality interactive mixture-of-experts for fake news detection. In Proceedings of the ACM on Web Conference 2025, Sydney, Australia, 28 April–2 May 2025; pp. 5139–5150.
23. Guo, Z.; Li, Y.; Yang, Z.; Li, X.; Lee, L.K.; Li, Q.; Liu, W. Cross-Modal Attention Network for Detecting Multimodal Misinformation Across Multiple Platforms. IEEE Trans. Comput. Soc. Syst. 2024, 11, 4920–4933.
24. Baribi-Bartov, S.; Swire-Thompson, B.; Grinberg, N. Supersharers of fake news on Twitter. Science 2024, 384, 979–982.
Figure 1. Proposed fake profile detection framework.
Figure 2. Depth-wise and point-wise separable convolution.
Figure 3. ConvNeXt block structure.
Figure 4. Inverse bottleneck structure.
Figure 5. Performance comparison of different models.
Figure 6. Ablation study (impact of modalities).
Figure 7. Inference time comparison.
Figure 8. Robustness and scalability of proposed model.
Table 1. Summary of literature review.

Reference | Method Used | Main Findings | Limitations
[15] | ML algorithms + NLP + behavioral and network-based methods | A machine learning-based system effectively identifies fake profiles | Limited generalizability and potential vulnerability to adversarial attacks
[16] | BiLSTM with L2 regularization and early stopping | Text alone achieves 97.78% detection accuracy; profile features are less valuable | Profile data has a limited impact; feature fusion does not yield significant improvements
[17] | CNN, RNN, LSTM, Autoencoders, GANs | Deep learning improves the detection of anomalous behavior | Traditional models are ineffective for complex user behavior
[18] | ML + Blockchain integration | High detection accuracy and user trust via decentralized records | Blockchain is computationally expensive; scalability issues
[19] | RF + Deep CNN | 93.89% fake profile accuracy; 86.57% stalking prediction | Needs better prevention models; non-numeric attributes limit feature utility
[20] | FakeNewsNet + attention mechanisms | Multi-modal detection with spatiotemporal features improves accuracy | Manual features are weak; unsupervised methods lack stability
[21] | Multinomial Naïve Bayes | 94.67% accuracy with simple feature sets | Needs more training data; vulnerable to evolving patterns
[22] | Systematic review + PROBAST bias assessment | Identified bias and inconsistency in ML detection pipelines | Reviewed studies have limited scope and variable metrics
[23] | ML + Instagram behavior profiling | Detected fake accounts with fewer input features | Fewer features hurt model robustness and generalization
[24] | NLP (LSTM) + image analysis (CLIP) | 99.22% for text, 93.12% for combined detection | Lacks ability to distinguish complex fake content types
Table 2. Model evaluation.

Model | Accuracy (%) | Precision (%) | F1-Score (%)
SVM | 85.3 | 84.7 | 84.9
RF | 89.1 | 88.7 | 88.7
Long Short-Term Memory (LSTM) | 91.5 | 90.9 | 91.2
Proposed RoBERTa-ConvNeXt-HeteroGAT | 98.9 | 98.4 | 98.6
Table 3. Summary of model performance.

Metric | Proposed Model (RoBERTa-ConvNeXt-HeteroGAT)
Accuracy (%) | 98.9
Precision (%) | 98.4
Recall (%) | 98.7
F1-Score (%) | 98.6
AUC-ROC | 0.99
MCC | 0.97
Table 4. Performance comparison of different models.

Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | AUC-ROC (%)
SVM | 85.3 | 84.7 | 86.1 | 85.4 | 86.2
RF | 89.2 | 88.5 | 89.8 | 89.1 | 89.5
LSTM | 92.4 | 91.9 | 92.8 | 92.3 | 93.1
RoBERTa | 92.1 | 91.5 | 92.6 | 92.0 | 92.8
ConvNeXt | 91.4 | 90.8 | 91.9 | 91.3 | 92.0
Hetero-GAT | 94.2 | 93.6 | 94.8 | 94.2 | 95.1
Proposed Model | 98.9 | 98.4 | 98.8 | 98.6 | 99.2
Table 5. Performance of individual components vs. multi-modal fusion.

Model | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
RoBERTa (Text Only) | 92.1 | 91.5 | 92.6 | 92.0
ConvNeXt (Image Only) | 91.4 | 90.8 | 91.9 | 91.3
Hetero-GAT (Graph Only) | 94.2 | 93.6 | 94.8 | 94.2
Multi-Modal Fusion | 98.9 | 98.4 | 98.8 | 98.6
Table 6. Inference time comparison.

Model | Inference Time (ms)
SVM | 29.4
RF | 24.8
LSTM | 42.3
RoBERTa | 38.5
ConvNeXt | 33.1
Hetero-GAT | 27.9
Proposed Model | 15.7
Table 7. Model robustness against adversarial attacks.

Attack Type | Accuracy Drop (%)
Textual Perturbations | 1.3
Image Manipulations | 2.1
Synthetic Engagement | 3.4
Combined Attacks | 4.7
Table 8. Model performance across different dataset sizes.

Dataset Size | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%)
10,000 | 96.1 | 95.4 | 96.8 | 96.1
50,000 | 97.3 | 96.9 | 97.5 | 97.2
100,000 | 98.1 | 97.8 | 98.3 | 98.0
150,000 | 98.9 | 98.4 | 98.8 | 98.6