Sustainable Development of Information Dissemination: A Review of Current Fake News Detection Research and Practice

Abstract: With the popularization of digital technology, information pollution caused by fake news has become increasingly common. The malicious dissemination of harmful, offensive or illegal content can mislead the public, breed misunderstanding and trigger social unrest, affecting social stability and sustainable economic development. With the continuous iteration of artificial intelligence technology, researchers have carried out automatic, intelligent mining and analysis of news data based on various information characteristics and have realized the effective identification of fake news. However, current research lacks both the application of multidisciplinary knowledge and investigation into the interpretability of related methods. This paper surveys existing fake news detection technology, covering fake news datasets, research methods for fake news detection, general technical models and multimodal methods. Our contribution is to discuss the research progress of fake news detection in communication, linguistics, psychology and other disciplines. We also classify and summarize explainable fake news detection methods and propose an explainable human-machine-theory triangular communication system, aiming to establish a people-centered, sustainable human–machine interaction information dissemination system. Finally, we discuss promising future research topics in fake news detection technology.


Introduction
With the rapid development of big data and information dissemination technology, fake news spreads through social media, posing a destructive threat to the sustainable development of society. False information not only undermines public trust and disrupts social order but also leads to misguided decisions, social division and opposition, which hinder the normal operation and progress of society. Fake news specifically refers to news reports that are untrue or exaggerated, as illustrated in Figure 1. These reports may be deliberately created to mislead the public or promote a specific agenda. The influence of fake political news on the Internet is more pronounced than that of fake news about terrorism, natural disasters, science, urban legends or financial information [1]. Compared with the truth, rumors tend to spread farther, faster and wider, which suggests that fake news is perceived as more novel than real news. Fake news often brings anxiety to people, affects the normal operation of society and threatens its sustainable development.
Specifically, the dissemination of information exhibits several characteristics, including rapidity of dissemination, information overload, universality of content, indistinguishability of authenticity, harmfulness of impact, trans-regional reach, discrimination of stigma, sociability of media and so on [2–5]. A vast amount of false information floods social media and major mainstream media platforms, giving rise to the phenomenon known as the information epidemic. The essence of the information epidemic is the complex pattern formed by the integration of various communication mechanisms, such as mass communication, network communication and intelligent communication, against the background of new technology. Consequently, fake news detection technology has become an urgent necessity in contemporary society for the identification of false information.
Figure 1. Examples of fake news claims ("The 5G network will lead to bird deaths", "The COVID-19 outbreak is man-made", "Cancer can already be treated") and real news claims ("Water is necessary for human survival", "COVID-19 will spread through droplets").
In recent years, methods of fake news detection have been summarized in several studies. Zhou et al. [6] analyzed four aspects: knowledge, writing style, communication mode and source credibility. Zhang et al. [7] described the negative impact of online fake news and summarized the detection techniques available at that time. Hu et al. [8] summarized fake news detection technology from three perspectives: supervised, weakly supervised and unsupervised. Athira et al. [9] conducted a systematic investigation into explainable artificial intelligence for fake news detection. However, these reviews lack multidisciplinary considerations, and there are still many deficiencies in how explainable fake news detection methods are summarized. Based on the above analysis, in order to develop a people-centered, explainable fake news detection system combined with multidisciplinary theoretical knowledge, we give a comprehensive overview of the state of research in fake news detection. The specific contributions are as follows:

•
We investigate the current research status of fake news detection technology, including datasets, research methods and technical models. On this basis, we discuss the use of multimodal technology and innovatively summarize and analyze the research progress of communication, linguistics, psychology and other disciplines in fake news detection.

•
We summarize general fake news detection methods, dividing them into three aspects according to their stages of development. We also analyze explainable fake news detection and review research related to explainable model structure and explainable model behavior.


•
Based on our summary of the research progress on fake news detection, we propose an explainable triangular communication system consisting of humans, machines and theory, aiming to establish a people-centered, sustainable human–machine interaction information dissemination system. On this basis, we discuss promising future research topics in fake news detection technology.
The structure of this paper is as follows: Section 2 gives an overview of the field. Section 3 describes the general models utilized for detecting fake news. Section 4 summarizes fake news detection datasets. Section 5 evaluates explainable fake news detection techniques. Finally, Section 6 concludes the work and suggests directions for future research.

Literature Search
This paper uses the Google Scholar database as a reliable source to assess recent trends in fake news detection research over the past five years. The database was queried using "fake news", "fake news detection", "multimodal fake news detection", "multidisciplinary + fake news" and "explainable fake news detection" as keywords. The search yielded a total of 18,100 references published in the fake news detection field between 2018 and 2023. The results were carefully analyzed to identify the prominent research directions in the field. This review provides a comprehensive overview of the current state of research in fake news detection and highlights the most promising avenues for future investigation. These insights into the concerns of fake news detection research help in developing more efficient and credible detection techniques.
The specific research on fake news detection includes fact verification, stance detection, topic detection and other tasks involving text classification, text clustering, image understanding, speech recognition and related research directions. Research on fake news detection uses technologies such as text mining [10], machine learning [11], deep learning [12], natural language processing [13] and machine vision [14] to extract and identify key information from text or news pages. Based on our summary and classification of a large number of references, fake news detection methods can be divided into three stages: (1) The machine learning stage, in which classifiers are trained on manually extracted features. (2) The deep learning stage. Compared with machine learning algorithms, deep learning is not limited by manual feature extraction; it can extract text features from language texts through the self-learning ability of the network layers, which greatly improves system performance on natural language processing tasks [12]. Deep learning networks, including convolutional neural networks [15] and recurrent neural networks [16], have been applied to fake news detection. They can effectively learn complex semantic features and high-level semantic representations from text and have been shown to improve the performance of fake news detection. (3) The pre-trained model stage. The manual annotation of large text corpora is very laborious, and the labeled data available for natural language processing tasks is limited, so applying data-hungry deep learning models in natural language processing is challenging. To avoid the overfitting and insufficient generalization caused by insufficient data, researchers began to explore pre-trained models for semantic representation. Since then, pre-trained models based on the transformer architecture [17] have developed vigorously. The representative BERT pre-trained model [18], the GPT models [19–21] and others have driven rapid progress in natural language processing, and related studies on fake news detection have developed further as well.
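To make the first stage concrete, the sketch below trains a from-scratch naive Bayes classifier on bag-of-words features, in the spirit of the machine-learning stage described above; the tiny corpus and tokens are entirely hypothetical stand-ins for real labeled news data.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    """Lowercase whitespace tokenization -- a stand-in for real feature extraction."""
    return text.lower().split()

def train_naive_bayes(docs):
    """docs: list of (text, label). Returns log-priors and per-class token log-likelihoods."""
    class_counts = Counter(label for _, label in docs)
    token_counts = defaultdict(Counter)
    for text, label in docs:
        token_counts[label].update(tokenize(text))
    vocab = {t for counts in token_counts.values() for t in counts}
    log_prior = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    log_like = {}
    for c, counts in token_counts.items():
        total = sum(counts.values()) + len(vocab)  # Laplace smoothing
        log_like[c] = {t: math.log((counts[t] + 1) / total) for t in vocab}
        log_like[c]["<unk>"] = math.log(1 / total)  # score for unseen tokens
    return log_prior, log_like

def predict(text, log_prior, log_like):
    """Pick the class maximizing log P(class) + sum of token log-likelihoods."""
    scores = {}
    for c in log_prior:
        scores[c] = log_prior[c] + sum(
            log_like[c].get(t, log_like[c]["<unk>"]) for t in tokenize(text))
    return max(scores, key=scores.get)

# Hypothetical mini-corpus of labeled headlines.
docs = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you will not believe this shocking secret", "fake"),
    ("government report details budget figures", "real"),
    ("study published in journal reports findings", "real"),
]
log_prior, log_like = train_naive_bayes(docs)
print(predict("shocking secret cure", log_prior, log_like))
```

The later stages replace the hand-built tokenizer and classifier with learned representations, but the classification framing stays the same.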

Fake News Classification
The categorization of fake news is multifaceted and diverse, encompassing everything from unverified hearsay circulating on social media to deceitful propaganda deliberately spread by its creators. According to references [22–25], fake news can be classified into five categories, as illustrated in Table 1: (1) deceptive fake news; (2) false information of a rumor nature; (3) false comment information; (4) headline party-type fake news; (5) fact-based recombination of false information.

Table 1. Fake news classification and definitions.

Deceptive fake news: False information intended to mislead and deceive the reader; it is deliberately crafted to mislead readers or cause adverse effects.

False information of a rumor nature: Unconfirmed rumors or anonymous messages.

False comment information: Untrue or misleading comments posted on online platforms, social media or other interactive platforms.

Headline party-type fake news: News with eye-catching but false headlines whose actual content has no reference value.

Fact-based recombination of false information: Misleading or false impressions created by reorganizing true facts.

Research Methods of Fake News Detection
Most existing research regards fake news detection as a classification task. According to the main features used by the classification model, fake news detection methods can be divided into three categories: content-based detection methods, social network-based detection methods and knowledge-based detection methods.

Content-Based Detection Method
The content-based fake news detection method aims to extract various semantic features from the news content and judge the authenticity of the news through these features. There are linguistic differences between fake news and true news, so fake news can be detected by distinguishing the language styles of true and fake news texts. Fake news is more subjective than real news. Studies have found that the first person and second person are used more in fake news, and that fake news contains more words that serve exaggeration (such as subjective words, superlatives and modal adverbs), while real news more often uses specific (such as numbers), objective (such as third person) and positive words. The authors of fake news also tend to be more extreme: analyses of the writing styles of left-wing and right-wing news [26,27] find extremist tendencies, political tendencies and hatred in both. However, not every linguistic feature carries the same weight; different lexical features have different importance. Song et al. [28] extracted a complete set of content features from real and fake news, including the total number of words, the length of the content, the number of capitalized words, special symbols, sentences beginning with numbers, offensive words, etc. Through experiments, they ranked the importance of these features and found that the total number of words, the length of the content and the number of capitalized words have a greater impact on identifying real news, while abbreviations and the total number of words have a greater impact on identifying fake news.
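As a minimal illustration of the surface cues discussed above (word counts, capitalized words, first/second-person pronouns, exclamation marks), the sketch below extracts a few such stylistic features from a headline; the feature set is a toy subset chosen for illustration, not the feature set of [28].

```python
import re

# Illustrative pronoun list; real stylistic feature sets are far richer.
FIRST_SECOND_PERSON = {"i", "we", "you", "me", "us", "your", "our", "my"}

def style_features(text):
    """Extract a few surface cues: total words, fully capitalized words,
    first/second-person pronouns and exclamation marks."""
    words = text.split()
    return {
        "total_words": len(words),
        "capital_words": sum(1 for w in words if w.isupper() and len(w) > 1),
        "first_second_person": sum(
            1 for w in words if re.sub(r"\W", "", w).lower() in FIRST_SECOND_PERSON),
        "exclamations": text.count("!"),
    }

feats = style_features("You will NOT believe this! SHARE before they delete it!")
print(feats)
```

Vectors of such features would then be fed to a classifier, with feature-importance analysis revealing which cues matter most.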

Detection Method Based on Social Network
The content-based approach can discover the linguistic features of true and fake news. However, fake news sometimes misleads readers by deliberately imitating the writing techniques of real news, and the content-based approach cannot distinguish the feature differences between such fake news and true news. To solve this problem, hidden information can be used as auxiliary data, such as social background information and propagation paths in social networks. Social background information is one research direction. Shu et al. [29] explored the relationship between user data and fake news on social media and used users' social participation as auxiliary information for detection. Furthermore, Shu et al. [30] proposed a framework to model the triadic relationship between news publishers, news articles and users, extracting effective features from the participation behavior of news publishers and readers and then capturing the interactions between them. Studies have shown that social background information can not only improve the effect of fake news detection but also enable early prediction. Another research direction detects fake news by modeling its propagation path in the network. Through experiments, Monti et al. [31] found that the mode of transmission is a more important feature of fake news than other aspects such as news content, user data and social behavior. Raza et al. [32] proposed a fake news detection framework based on the Transformer architecture, which includes an encoder and a decoder. The encoder learns the representation of fake news data, and the decoder predicts future behavior based on past observations. The model uses the characteristics of news content and social background to improve classification accuracy.
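A minimal sketch of the propagation-based signal: given hypothetical (parent, child) repost edges, compute simple structural features (cascade depth, maximum breadth, size) of the kind that propagation-based detectors feed to a classifier.

```python
from collections import defaultdict, deque

def cascade_features(edges, root):
    """Breadth-first traversal of a repost cascade given as (parent, child)
    edges; returns its depth, maximum per-level breadth and total size."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
    level_counts = defaultdict(int)
    level_counts[0] = 1  # the source post itself
    depth, size = 0, 1
    frontier = deque([(root, 0)])
    while frontier:
        node, d = frontier.popleft()
        for c in children[node]:
            size += 1
            level_counts[d + 1] += 1
            depth = max(depth, d + 1)
            frontier.append((c, d + 1))
    return {"depth": depth, "max_breadth": max(level_counts.values()), "size": size}

# Hypothetical cascade: source post "p0" reposted down three levels.
edges = [("p0", "u1"), ("p0", "u2"), ("u1", "u3"), ("u1", "u4"), ("u4", "u5")]
print(cascade_features(edges, "p0"))
```

Findings such as [31] suggest that precisely these structural properties of the cascade can outrank content features in discriminative power.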

Knowledge-Based Detection Method
Knowledge-based (KB) fake news detection judges the authenticity of news by verifying it against facts, which is why it is also called fact checking. Fact checking can be divided into two categories: manual verification and automatic verification [6]. Manual methods use domain expert knowledge or crowdsourcing; they have high accuracy but low efficiency and cannot meet the needs of the big data era. Automatic verification methods using natural language processing and machine learning have therefore become a hot research field. Fact checking first constructs a knowledge base or knowledge graph from the Web through knowledge extraction, then compares and verifies news against the knowledge base or knowledge graph to judge its authenticity. Pan et al. [33] used knowledge graphs to detect fake news based on news content, addressing the problem that computational fact checking is not comprehensive enough. By extracting triples from news articles, their method achieves an F1 score exceeding 80.1%. Hu et al. [34] developed a heterogeneous graph attention network to learn the context of news representations and encode the semantics of news content. Using an entity comparison network, they compare the contextual entity representations with the representations derived from the knowledge base, aiming to capture the consistency between the news content and the KB.
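A toy sketch of the comparison step in fact checking, assuming (subject, relation, object) triples have already been extracted from the news text: the claim triple is looked up in a miniature hand-written knowledge base (real systems extract triples automatically and query large knowledge graphs).

```python
# Miniature hand-written knowledge base of (subject, relation, object) triples.
KB = {
    ("water", "necessary_for", "human survival"),
    ("covid-19", "spreads_via", "droplets"),
}

def check_claim(triple, kb):
    """Return 'supported' if the triple is in the KB, 'contradicted' if the KB
    holds a different object for the same subject and relation, else 'unknown'."""
    if triple in kb:
        return "supported"
    subj, rel, _ = triple
    if any(s == subj and r == rel for s, r, _ in kb):
        return "contradicted"
    return "unknown"

print(check_claim(("covid-19", "spreads_via", "5g networks"), KB))
```

The hard parts in practice are extracting clean triples from free text and handling claims the KB simply does not cover, which is why "unknown" is an essential third outcome.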
Based on the above analysis, content-based fake news detection methods try to extract effective features from text information and locate the key signals of fake news. Social network-based and knowledge-based detection require not only the news information itself but also vast external resources, such as stance information, knowledge information and multimodal feature information. In text-based fake news detection, researchers analyze the style and content characteristics of the news, capture specific features and judge the authenticity of the news. There are also studies [35] that combine content features and environmental features as the input of the classifier, and others integrate user data, social behavior, propagation paths and other features to optimize detection [31]. The existing methods cover different aspects of fake news detection, but each has limitations. How to combine and improve the existing methods to effectively boost the performance of fake news detection remains an urgent problem.

Multimodal Fake News Detection
In addition to detection methods based on a single feature source, multiple features can be combined for fake news detection. In recent years, the data used in fake news detection is no longer limited to text, and there has been an increasing focus on visual features. Multimodal fake news detection refers to the use of multiple types of data (such as text and images) to determine whether a news report contains misleading or inaccurate content [36–38]. Cao et al. [39] found that visual content has become an important part of fake news: fake news often uses unverified visual content (videos, images, etc.) to mislead readers and deepen their trust in false information. Pictures, videos and other media information can also be applied to fake news detection. Figure 2 shows some examples of fake news that we collected on the Web, with both textual and visual features. Fortunately, many multimodal datasets have been made available. For example, Shu et al. [23] proposed FakeNewsNet, a fake news resource library that covers news content, social context and spatio-temporal information, greatly enhancing multi-feature fusion for detecting fake news.
The main idea of the multimodal method is to train features from different modalities and then fuse them. Some fake news detection methods also integrate cross-modal contrastive learning. For example, Qi et al. [40] mapped the pictures in fake news to the frequency and pixel domains and then fused the visual information from the two domains through a multi-domain visual neural network. Singhal et al. [41] obtained text features and visual features through pre-trained models and fused them into new feature representations. Their simple, unified framework is shown in Figure 3 (the fake news image in Figure 3 comes from the network).
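A minimal sketch of late fusion in the spirit of [41]: per-modality feature vectors are concatenated and scored by a linear head. The tiny vectors and weights below are hypothetical stand-ins for BERT/ResNet embeddings and a trained classifier.

```python
def fuse(text_vec, image_vec):
    """Late fusion by concatenating per-modality feature vectors."""
    return text_vec + image_vec

def score(fused, weights, bias=0.0):
    """Linear 'fake-ness' score over the fused vector -- a stand-in for a
    trained classification head."""
    return sum(f * w for f, w in zip(fused, weights)) + bias

text_vec = [0.9, -0.2]   # pretend text-encoder summary of the caption
image_vec = [0.4, 0.7]   # pretend image-encoder summary of the picture
fused = fuse(text_vec, image_vec)
weights = [1.0, 0.5, -0.3, 0.8]
print(len(fused), round(score(fused, weights), 2))
```

The appeal of late fusion is its simplicity: each modality keeps its own encoder, and only the small joint head needs to learn cross-modal interactions, which the more elaborate attention-based models below address directly.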
In addition, research on multimodal fake news detection has gradually increased in recent years. Qian et al. [42] proposed a hierarchical multi-modal context attention network for fake news detection, which includes two modules: a multi-modal context attention module and a hierarchical coding module. To model the multi-modal context of news posts, the multi-modal context attention module uses pre-trained BERT [18] for text representation and pre-trained ResNet [43] for image representation, ensuring a seamless integration of textual and visual information. It combines inter-modal and intra-modal relationships to enhance fake news detection. The hierarchical coding module captures the rich hierarchical semantics of the text to improve the representation of multimodal news.
The MCAN model proposed by Wu et al. [36] aims to learn multi-modal fusion representations by considering the dependencies between different modalities. The model includes three main steps: feature extraction, feature fusion and fake news detection. In the feature extraction step, three sub-models extract features from the spatial domain, the frequency domain and the text. The VGG-19 [44] network extracts visual features from the spatial domain, a CNN-based sub-network is designed to extract features from the frequency domain (especially for re-compressed or tampered images), and the BERT model obtains features from the text content. In the feature fusion step, a deep co-attention model fuses the multimodal features; the fusion process simulates the way humans first view the image and then read the text. The co-attention model is composed of multiple co-attention layers, which capture the interdependence between different features. Finally, the fused features are used to detect fake news, and the output of the co-attention model is used to judge the authenticity of the input news.
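The core operation inside such co-attention layers is scaled dot-product attention. The pure-Python sketch below shows one text query attending over hypothetical image-region features; the two-dimensional vectors are toy values, not real model outputs.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention: weight each value by the similarity of
    its key to the query -- the basic operation inside co-attention layers."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Hypothetical: one text query attending over two image-region features.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attend(query, keys, values)
print([round(x, 3) for x in out])
```

Because the query matches the first key more closely, the output leans toward the first region's value, which is exactly how a textual cue can pull in the most relevant visual evidence.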
Wang et al. [37] proposed COOLANT, a cross-modal contrastive learning framework for multimodal fake news detection. The framework consists of three main components: a cross-modal contrastive learning module for alignment, a cross-modal fusion module for learning cross-modal correction and a cross-modal aggregation module with an attention mechanism and guidance to improve detection performance. The cross-modal contrastive learning module aligns features by mapping single-modal embeddings into a shared space. It uses an auxiliary cross-modal consistency learning task to measure the semantic similarity between images and texts and provides soft targets for the contrastive learning module. The contrastive learning module uses a contrastive loss to predict the actual image-text pairings within a batch.
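The pairing objective can be sketched as an InfoNCE-style contrastive loss: within a batch, each text embedding should score its own image higher than the other images. The embeddings and temperature below are hypothetical, and this is the generic loss, not COOLANT's exact formulation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(text_embs, image_embs, temperature=0.5):
    """InfoNCE-style loss over a batch: for each text, the matching image
    (same index) is the positive and the rest of the batch are negatives."""
    loss = 0.0
    for i, t in enumerate(text_embs):
        logits = [cosine(t, img) / temperature for img in image_embs]
        m = max(logits)
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the true pair
    return loss / len(text_embs)

# Hypothetical batch of two aligned text-image pairs.
texts = [[1.0, 0.0], [0.0, 1.0]]
images = [[0.9, 0.1], [0.1, 0.9]]
print(round(contrastive_loss(texts, images), 4))
```

Minimizing this loss pulls matching text-image embeddings together in the shared space while pushing mismatched pairs apart, which is what makes downstream consistency checks between a caption and its image possible.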
In Table 2, we summarize several multimodal fake news detection methods, including the main techniques they use, the datasets and extracted features involved, and the corresponding accuracy.

Multidisciplinary Research Progress
In the past few decades, fake news and its detection have been studied in many disciplines, including computer science, sociology, psychology, linguistics, communication and neurocognitive science [30,45–47]. Each field has its own research content and methods for fake news. Some studies [46] combine the knowledge of these fields and use interdisciplinary methods to detect fake news. Multidisciplinary fake news theory can help the natural sciences achieve fake news detection. Figure 4 summarizes the basic problems of fake news detection in various disciplines, and we hope that future research will combine even more interdisciplinary knowledge.

Psychology
Psychological researchers mainly explore the cognitive behavior around fake news and study the psychological mechanisms of its dissemination [48,49]. After analyzing the content of fake news on the Internet, Bordia et al. [50] found that interactive behavior around fake news is largely based on people's psychological need for the truth of the facts. The psychological factors that promote the dissemination of fake news include uncertainty, anxiety, etc. Pennycook et al. [51] investigated why people believe and share fake or highly misleading news online; they argue that a strong causal effect of political motivation on beliefs makes people believe fake news.

Neuro-Cognitive Science
Sensitivity to fake news attacks depends on whether Internet users think fake news articles or clips are real after reading them. Arisoy et al. [52] tried to understand users' sensitivity to text-centric fake news attacks through neurocognitive methods. They studied the neural basis of responses to fake and real news through electroencephalography (EEG), designed and ran EEG experiments on human users and analyzed the neural activity associated with fake and real news detection tasks across different types of news articles. They found that human detection of fake news may be ineffective and have potentially adverse effects.

Linguistics
Linguistic researchers [53] use computational methods, combined with relevant theory, to analyze the language of fake news and to summarize its pragmatic features and the linguistic structures that trigger its spread. Choudhary et al. [54] argued that linguistic indicators are a promising starting point: using qualitative and quantitative data analysis, they detected and compared 16 attributes, manually assigned to news texts, under three main linguistic feature categories (lexical, grammatical and syntactic features) in order to identify the systematic nuances between fake and factual news.
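As a toy illustration of this feature-based linguistic analysis, the sketch below computes a few surface cues often discussed in this literature. The feature set is hypothetical and illustrative; it is not the 16 attributes used in the cited study.

```python
import re

def linguistic_features(text: str) -> dict:
    """Extract a handful of illustrative lexical cues from a news text.
    (Hypothetical feature set for demonstration only.)"""
    tokens = re.findall(r"[A-Za-z']+", text)
    n = max(len(tokens), 1)  # avoid division by zero on empty text
    first_person = {"i", "we", "me", "us", "my", "our"}
    return {
        "num_tokens": len(tokens),
        "exclamations": text.count("!"),  # sensationalism cue
        "all_caps_ratio": sum(t.isupper() and len(t) > 1 for t in tokens) / n,
        "first_person_ratio": sum(t.lower() in first_person for t in tokens) / n,
        "avg_word_len": sum(len(t) for t in tokens) / n,
    }

feats = linguistic_features("SHOCKING!! We can't believe what they found!")
```

Feature vectors like this one can then be fed to any standard classifier, which is exactly the pipeline the machine learning methods of Section 4 describe.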

Communication Science
Communication researchers mainly analyze the concept of fake news [55] and try to identify its dissemination mechanisms and the prevention and governance models that apply in a context of continuous social media use [56]. Jana et al. [57] argued that the current concept of fake news has broadened; their study holds that fake news is in essence a two-dimensional phenomenon of public communication and proposes a theoretical framework for fake news research. Di et al. [58] revealed the sharing motivations related to fake news: benign online users may share fake news not in pursuit of financial or political/ideological goals but to seek social recognition from a desired group by informing other members about specific topics, which also strengthens group cohesion.

Mitigation of the Spread of Malicious Content
Halting the distribution of malicious content online demands a blend of diverse approaches and strategies. Present studies employ deep learning architectures that integrate social networks, propagation trees and other techniques to build systems that automatically classify and screen malicious content, preventing its entry into online forums or mitigating its distribution. Studying the problem of detecting geolocated content communities on Twitter, Apostol et al. [59] proposed a new distributed system that offers near-real-time information on hazard-related events and their development; they further introduced a novel deep learning model to identify fake news, so that misguided tweets are removed from display. To mitigate the spread of real-time fake news on social media, Truică et al. [60] proposed a network-aware strategy that constructs a minimum-cost weighted directed spanning tree over the detected nodes and immunizes the nodes in the tree using a novel ranking function that scores their harmfulness. In addition, Coban et al. [61] proposed a novel COmmuNiTy-based Algorithm for network ImmuNization that uses network information to detect harmful content distributors, generates partitions and immunizes them using the subgraphs induced by each distributor.
Diffusion-based methods [62-64] can also mitigate the spread of malicious content. By exploiting the propagation mechanisms of social networks, they steer the propagation path of information in a targeted manner and thereby reduce the impact of malicious content. This line of work emphasizes active intervention and the influence of the network's communication structure in order to curb the spread of malicious content.
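The intuition behind diffusion-based mitigation can be shown with a minimal independent-cascade simulation: immunized nodes never adopt the content, so removing a well-chosen hub truncates the propagation tree. The network, seed and parameters below are invented for illustration.

```python
import random

def cascade_size(adj, seed_node, immunized, p=0.5, rng=None):
    """Simulate one independent-cascade diffusion from seed_node.
    Each active node tries once to activate each neighbor with
    probability p; immunized nodes never activate."""
    rng = rng or random.Random(0)
    active, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and v not in immunized and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

# Toy network: node 1 is a hub relaying the seed's content onward.
adj = {0: [1], 1: [2, 3, 4], 2: [5], 3: [6], 4: [7]}
baseline  = cascade_size(adj, 0, immunized=set(), p=1.0)   # reaches all 8 nodes
mitigated = cascade_size(adj, 0, immunized={1}, p=1.0)     # hub blocked: only the seed
```

Real strategies such as the spanning-tree or community-based immunization methods cited above differ in *which* nodes they choose to immunize, but they all aim to shrink exactly this cascade size.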
In short, research on how to stop the spread of malicious content, combined with cooperation among governments, civil society organizations and technology companies to develop relevant regulations and guidelines, can effectively combat the spread of malicious content.

General Technical Model of Fake News Detection
From the perspective of technical methods, the artificial intelligence techniques used in fake news detection [65] span research fields such as natural language processing, computer vision and data mining [66]. Fake news can be divided into three categories: fake text news, fake image news and fake video news. For fake text news detection, natural language processing has gradually become an important technical tool in social science and information dissemination research [6]. Its primary applications encompass sentiment analysis, which centers on text classification techniques; news summarization, which relies on text summarization techniques; and opinion mining, which builds on topic modeling techniques [67]. Accordingly, research applying natural language processing to the detection of false information in text data [68] continues to develop. For image and video news, researchers use computer vision techniques [69-71] to detect synthesized fake images; in addition, as the continuing development of deep synthesis technology has led to a proliferation of fake videos, researchers use deep learning to detect face tampering in videos [72].
It is generally accepted that fake text detection has gone through three stages. The first, the hand-crafted feature stage, began around 2011 and relied mainly on manual feature engineering based on expert knowledge. The second, the data-driven stage, began around 2016 and centered on deep learning methods. The third stage, continuing to the present, explores the integration of knowledge and data and is based on pre-trained models. Based on the above analysis, this paper groups the specific technical methods of fake news detection research as follows:

Fake News Detection Based on Machine Learning
Commonly used classification models for machine learning-based fake news detection include support vector machines [73] and naive Bayes [74]. Logistic regression [75] and decision trees [76], including random forest classifiers, can also be applied to fake news detection tasks [77]. These models detect text using features hand-crafted from expert knowledge, including linguistic features, topic features, user features and propagation features. Eldesoky et al. [78] presented a classification model that detects fake news using Doc2vec and Word2vec embeddings as feature extraction techniques; the combination of the Doc2vec model and support vector machines achieved 95.5% accuracy on a real-world dataset.
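For concreteness, the sketch below implements a multinomial naive Bayes classifier over bag-of-words counts, the simplest of the classical models listed above. The tiny training set and its labels are invented for illustration; a real system would use the richer linguistic, user and propagation features described in the text.

```python
from collections import Counter
import math

def train_nb(docs):
    """Fit multinomial naive Bayes: per-class word counts + class priors."""
    word_counts, class_counts, vocab = {}, Counter(), set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokens)
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict_nb(model, tokens):
    """Pick the class maximizing log P(class) + sum log P(token|class),
    with Laplace (add-one) smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, c in class_counts.items():
        lp = math.log(c / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [(["shocking", "miracle", "cure"], "fake"),
         (["official", "report", "released"], "real"),
         (["miracle", "weight", "loss"], "fake"),
         (["government", "report", "budget"], "real")]
model = train_nb(train)
pred = predict_nb(model, ["miracle", "cure", "shocking"])
```

The same feature vectors could be fed to an SVM or random forest; naive Bayes is shown only because it is compact enough to write out in full.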

Fake News Detection Based on Deep Learning
Because machine learning relies on manual feature extraction, it is prone to bias and slow feature extraction. In addition, machine learning produces high-dimensional representations of linguistic information, leading to the curse of dimensionality. Deep learning, in contrast, offers advantages over classical machine learning, showing higher accuracy and precision in fake news detection. Lai et al. [79] compared several machine learning and deep learning models using pure content features and found that neural network models outperform traditional ML models, with roughly 6% higher accuracy. The essence of these neural models is to use word embeddings [80] to combine language modeling and feature learning for fake news detection, and embedding-based neural models have produced results for fake news detection in many languages. A word embedding is a numerical representation of a word that captures its semantics from its contexts in a given corpus; embeddings help capture semantics and contextual clues, which benefits fake news detection. Ilie et al. [81] used three word embeddings, Word2Vec, FastText and GloVe, to preserve word context, trained multiple deep learning architectures for classification and compared their performance in detecting the authenticity of news articles, ultimately obtaining the best results with an architecture based on a recurrent convolutional neural network.
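A minimal sketch of the embedding idea: represent a text as the average of its word vectors and compare representations by cosine similarity. The 3-dimensional vectors below are made-up toy values; real systems load pretrained Word2Vec, FastText or GloVe vectors with hundreds of dimensions.

```python
import math

# Toy 3-d embeddings (hypothetical values for illustration only).
EMB = {
    "hoax":     [0.9, 0.1, 0.0],
    "rumor":    [0.8, 0.2, 0.1],
    "verified": [0.1, 0.9, 0.2],
    "official": [0.0, 0.8, 0.3],
}

def doc_vector(tokens):
    """Mean of word embeddings: the simplest embedding-based
    document representation."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

claim = doc_vector(["hoax", "rumor"])
sim_fake = cosine(claim, EMB["hoax"])      # close in embedding space
sim_real = cosine(claim, EMB["verified"])  # far in embedding space
```

Neural detectors go further by learning task-specific combinations of such vectors rather than a plain average, but the geometric intuition is the same.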
The most typical deep learning architectures, recurrent neural networks and convolutional neural networks, can both be applied to fake news detection. Ma et al. [82] were the first to use the hidden layers of a recurrent neural network to represent fake news information and showed that the model outperforms approaches based on hand-crafted features. Subsequently, a model called FNDNet (a deep CNN) [83] was proposed to learn discriminative features for detecting fake news using multiple hidden layers. In addition, Huang et al. [84] used a graph convolutional network (GCN) [85] to learn user representations from graphs built from user behavior information.

Fake News Detection Based on Pre-Training Model
Traditional word embeddings may struggle to capture complex contextual relationships because they treat words as independent entities without considering the entire sentence structure. Vaswani et al. [17] introduced the Transformer, a deep learning model architecture, in 2017, which yielded exceptional results in natural language processing tasks. The Transformer enables a model to better capture contextual and semantic information in text and thus to identify malicious content more accurately. It introduces position encoding to represent the position of words in the input sequence, allowing the model to distinguish words in different locations and avoiding the loss of positional information. Its encoder-decoder structure can understand text at different levels and provides effective detection methods for multiple types of malicious behavior. Several studies [32,86-88] have achieved good performance in detecting fake news using the Transformer architecture. In practice, the choice of embedding depends on factors such as dataset size, computing resources and the complexity of the fake news detection task. Combining word embeddings and Transformer embeddings may produce better results, because word embeddings capture the meaning of individual words while Transformer embeddings capture complex sentence-level semantics. Truică et al. [89] proposed a new document embedding (DocEmb) constructed from word embeddings and transformers that achieves better results than more complex deep neural network models. Truică et al. [90] also proposed two bidirectional long short-term memory (BiLSTM) architectures incorporating sentence transformers to address two tasks: (1) multi-class monolingual fake news detection and (2) multi-class cross-lingual fake news detection. Using multiple transformer models can also achieve good performance. Truică et al.
[91] proposed MisRoBAERTa, a new transformer-based deep neural ensemble architecture for misinformation detection, which combines RoBERTa and BART sentence embeddings of the misinformation and outperforms other transformer models on misinformation detection tasks.
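The position encoding mentioned above can be written out explicitly. The original Transformer uses fixed sinusoids, PE[pos, 2i] = sin(pos / 10000^(2i/d_model)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model)), which give every position a distinct pattern the model can attend over:

```python
import math

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encoding from the original Transformer paper.
    Returns a seq_len x d_model table of position signals."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)       # even dimensions: sine
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)  # odd dimensions: cosine
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
```

These vectors are simply added to the token embeddings before the first attention layer, which is how the otherwise order-blind attention mechanism learns to distinguish word order.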
In 2018, with the emergence of pre-trained models, natural language processing tasks entered the pre-training era. Fine-tuning based on the BERT model [18] has significantly improved performance on many natural language tasks. For fake news detection on text data, the BERT pre-trained model and its improved variants have gradually replaced earlier language models and become the basis of current research. Jwa et al. [87] incorporated news data in the pre-training phase to improve fake news recognition; Kaliyar et al. [92] proposed a BERT-based deep convolutional method (FakeBERT) to detect fake news. The advantage of the pre-trained model is that BERT is distinctive in identifying and capturing contextual meaning in sentences or texts; during learning, it can achieve good detection performance without delving into the details of the fake news itself.
After several years of development, the BERT pre-trained model based on the Transformer structure has spawned many related models [93-95] through structural adjustment, performance optimization and retraining. These models, collectively referred to as the BERTology family, have achieved good performance on a variety of tasks. In summary, fake news detection based on pre-trained models is already a research trend in this field. However, given the complex characteristics of fake news, detection based on pre-trained models still falls short of good performance in practical applications. How to extract features from the more complex semantic information of fake news, and how to build a more effective detection model within the 'pre-training + fine-tuning' paradigm [96], remain urgent open problems.


Dataset
In fake news detection, the datasets used can be divided into single-modal and multi-modal data, as shown in Figure 5. We gathered prevalent fake news datasets from the past five years based on citations. Multimodal data take a more diverse form, typically combining images or video with text, as demonstrated in Table 3. Unimodal datasets, in contrast, consist exclusively of text and offer a more extensive characterization. According to construction method, data characteristics and target task, datasets can be divided into three categories:

(1) Claims/Statements
A statement is one or more sentences containing information whose authenticity needs to be verified. As shown in Table 4, this type of data includes claims and statements collected from debates, campaigns, Facebook, Twitter, interviews, advertisements, etc., as well as Wikipedia entries. Such datasets are often related to fact checking, and sometimes explicit evidence is provided to determine whether a particular claim is correct.

(2) Posts
Social media posts likewise consist of one or more sentences with a focused theme, as in Table 5. More importantly, they introduce user information, network information and other social media signals, which help build high-quality fake news detection models.

(3) Articles
An article is a complete text composed of many interrelated sentences. As shown in Table 6, the salient feature of this type is that its structure is usually title + body, with contextual relationships between sentences. Such datasets often provide no explicit evidence, so evidence must be inferred from the writing style of the text itself and related cues.

Explainable Fake News Detection
With the rapid development and application of machine learning and artificial intelligence across fields, it is very important to explain an algorithm's outputs to users. The interpretability of artificial intelligence means that people can understand the choices an AI model makes in its decision-making process, including the reasons, methods and content of those decisions [138]. Simply put, interpretability is the ability to turn artificial intelligence from a black box into a white box. Explainable AI methods are now applied across industries, including biomedicine, financial applications, video payment and media. The core goal of explainable artificial intelligence is to earn human trust, from which two key concepts emerge: trust and interpretation. For explainable AI, interpretation implies that agents must repeatedly communicate and interact with different people before they can gain human trust. Therefore, when explaining, an agent must consider the audience's varying educational backgrounds, knowledge levels and other factors and then design the content and form of the explanation accordingly.
Figure 6 shows an interactive, explainable AI framework for human-machine communication. The main participants in the system are interpreters and interpretive audiences. Interpreters are artificial intelligence agents equipped with explainable AI methods that can make decisions on specified tasks; the audience listens to the explanations given by the interpreters and generally comprises the people affected by a task as well as decision makers and developers. The interpreter provides different forms of interpretation to the audience according to the task scenario; the audience, in turn, asks the interpreter questions so that the interpreter can adjust and optimize. In this way, the interpreter becomes more intelligent and produces more convincing explanations.


To reduce the risk of fake news dissemination, compared with purely data-driven methods, it is necessary not only to analyze the interpretability of the model structure through machine learning but also to interpret the model behavior of fake news detection through multidisciplinary, comprehensive research within a framework of human-machine communication and interaction, thereby continuously optimizing the fake news detection system. Facing the problem of explainable fake news detection, how to explain detection results and develop an intelligent detection system that enables effective human-machine collaboration, comprehension, interpretability and sustainability has become a crucial research topic. Explainability research in artificial intelligence falls into two kinds: one derives interpretability from the model structure so that humans understand how the model works; the other focuses on the behavioral explanation of the model, that is, letting the model give a score or reason for its prediction rather than only a bare label. Accordingly, we first review these two strands in Sections 5.1 and 5.2, respectively. In Section 5.3, based on the interactive explainable AI framework for human-machine communication, we propose a human-machine-theory triangle communication system for fake news detection, which may help better realize explainable fake news detection.

Explainable Model Structure
The explainable model structure approach analyzes and understands the internal structure of the model through explainable techniques in order to understand its working principles and mechanisms. Structural analysis involves comprehending the operating mechanism and fundamental principles of the model structure. Only by fully understanding these can researchers and developers determine what problems exist in a model whose performance is difficult to improve further and, on that basis, identify the next optimization direction, improving the model's performance better and faster. Most of these explainable models use deep learning methods such as knowledge graphs and attention mechanisms.
Chien et al. [139] proposed the explainable AI (XAI) framework XFlag, which builds a fake news detection model on LSTM [140] and explains it with the Layer-wise Relevance Propagation (LRP) [141] algorithm. Wu et al. [142] used knowledge graphs to enhance an embedded representation learning framework that detects fake news while providing interpretations of relationships: an external dataset was used to extract a knowledge graph, a graph neural network pre-trained structured features for entities and relations, and these pre-trained features were combined with semantic features to integrate explainable structured knowledge for recognizing fake news. Chen et al. [143] designed an explainable modular structure for automatically detecting rumors on social media; they used a two-level attention mechanism to capture the relative importance both between features and between feature classes, and highlighted the most significant features in the news to explain the algorithm's results. In addition, for multidisciplinary explainable fake news detection, Qiao et al. [144] used multidisciplinary linguistic synthesis methods to train human-understandable features and then used these features to train a deep learning classifier with a bidirectional recurrent neural network (BRNN) structure [145], so that the classifier obtains more explainable detection results on news data.
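The attention-based explanations used by several of these systems reduce, at their core, to a softmax over per-token relevance scores: the tokens that receive the highest weight are surfaced to the user as the model's evidence. The sketch below shows only that final step; the tokens and raw scores are hypothetical stand-ins for what a trained model would produce.

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over raw relevance scores.
    High-weight tokens are the ones an attention-based explainer
    would highlight for the user."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["miracle", "cure", "announced", "today"]
scores = [2.1, 1.8, 0.3, 0.1]          # hypothetical model relevance scores
weights = attention_weights(scores)
top = max(zip(weights, tokens))[1]     # most-attended token as the explanation
```

In a full system such as the two-level attention of Chen et al., the same softmax is applied hierarchically, first over features and then over feature classes, but the highlighted output is computed exactly this way.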
Silva et al. [146] proposed a novel fake news early detection technique called Propagation2Vec, as shown in Figure 7. The technique assigns different levels of importance to nodes and cascades in the propagation network and, during the early detection phase, reconstructs knowledge of the complete propagation network from its partial propagation network. The study further presents a comprehensive explanation of the underlying logic of Propagation2Vec according to the attention weights assigned to different nodes and cascades, which enhances the applicability of the method and stimulates future research on fake news detection using propagation networks. We summarize the explainable model structure methods in Table 7, including the main techniques they use, their datasets and their accuracy.

Explainable Model Behavior
Explainable model behavior means explainable analysis of the model's prediction behavior, providing a basis for its prediction results. Behavioral analysis typically involves comprehending the foundation of a model's predictions. Since deep learning algorithms consist of nonlinear structures, these successful models are commonly opaque and have difficulty revealing the rationale for their forecasts in a form humans can grasp, and this lack of transparency and intelligibility can lead to grave consequences. Shu et al. [55] used a sentence-comment joint attention sub-network to improve fake news detection performance, aiming to capture the inherent interpretability of news phrases and user comments; their dEFEND module supports searching news dissemination networks, trending news, top statements and related news, and presents test results together with explanations. In a similar vein, Lu et al. [151] utilized a graph-aware co-attention network (GCAN) to assess the authenticity of source tweets on social media while providing explanations for the results: GCAN uses attention to capture three aspects of the algorithm's output, highlighting key words in source tweets, identifying characteristics of retweet propagation paths and understanding retweeter behavior. Chi et al. [152] proposed an automated explainable decision-making system (QA-AXDS) based on quantitative argumentation that can detect fake news and explain the results to users; it automatically captures human-level knowledge, constructs an interpretation model based on a dialogue tree and employs natural language to help users understand the reasoning process within the system. Notably, QA-AXDS is fully automated and requires no expert experience as pre-input, which enhances the system's robustness. Ni et al. [153] studied the use of a multi-view attention network (MVAN) [154] to detect fake news in social networks and provide explanations for the results: MVAN incorporates a dual attention mechanism, encompassing text semantic attention and propagation structure attention, to capture clues in source tweets and propagation structures, identifying crucial keywords and generating explainable detection results. Raha et al. [155] proposed a neural model for factual inconsistency classification with explanations; by training four neural models, they can predict the inconsistency type and provide explanations for a given sentence. Bhattarai et al. [156] introduced an explainable fake news detection framework based on the Tsetlin Machine (TM) [157]; by capturing lexical and semantic features of true and fake news texts, the framework achieves accurate detection of fake news, and a credibility score provides interpretability.
Fu et al. [158] introduced a comprehensive and explainable misinformation detection framework called DISCO, as depicted in Figure 8. The framework addresses the challenge of detecting misinformation by leveraging its heterogeneity and offering explanations for the detection results, and it demonstrates commendable accuracy and interpretability on a real-world fake news detection task.
We summarize the explainable model behavior analysis methods in Table 8, including the main techniques they use, their datasets and their accuracy.

Human-Machine-Theory Triangle Communication System
Based on the interactive, explainable AI framework for human-machine communication presented in Figure 6, we augment the system with multidisciplinary theoretical knowledge and introduce a triangular communication system formed by humans, machines and theory. As depicted in Figure 9, this represents a potentially more comprehensive route to explainable fake news detection. In this system, the machine leverages a model trained with multidisciplinary theory, which can be a general technical model or an explainable AI model, to provide prediction results to humans. The human, equipped with theoretical knowledge, assesses the prediction results and provides feedback to the machine. Through this iterative process, the explainable fake news detection system can be continuously adjusted and improved, thereby enhancing human trust in the system.

Human-Machine-Theory Triangle Communication System
Based on the interactive, explainable AI framework for human-machine communication presented in Figure 6, we augment the system by incorporating multidisciplinary theoretical knowledge. We introduce a triangular communication system formed by humans, machines and theory. As depicted in Figure 9, this represents a potentially more comprehensive solution for achieving explainable fake news detection. In this system, the machine leverages a detection model trained with multidisciplinary theory, which can be a general technical model or an explainable AI model, to provide prediction results to humans. The human, equipped with theoretical knowledge, assesses the prediction results and provides feedback to the machine. Through this iterative process, the explainable fake news detection system can be continuously adjusted and improved, thereby enhancing human trust in the system. Unlike the interactive, explainable AI framework shown in Figure 6, this framework emphasizes the crucial role of all three components. The machine encompasses various fake news detection methods, including machine learning, deep learning, pre-training and multimodal approaches. The human is responsible for reviewing and evaluating the prediction results generated by the machine and providing feedback. The theory refers to the multidisciplinary knowledge involved in fake news detection; the relevant multidisciplinary research progress was introduced in Section 2.5. In operation, the machine combines multidisciplinary theory to improve the detection of fake news, and the prediction results can take the form of labels or explanatory content. Once the human receives these prediction results, they may question or doubt them, prompting verification against the theory. The verification results are then fed back to the machine, forming a cycle of communication. With each iteration, the performance of the machine gradually improves, leading to an increase in human trust in the machine. Based on these principles, the triangular communication system involves humans, machines and theory, integrates multidisciplinary knowledge and employs artificial intelligence algorithms for information detection. In combination with human feedback, the accuracy of fake news detection can be continuously enhanced, and the detection system can be promoted to an agent that humans can trust.
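The predict-verify-feedback cycle described above can be sketched as follows. This is a deliberately simplified toy: the "machine" is a keyword matcher, the "theory" is a set of linguistic deception cues, and the "human" verifies against that theory; in practice each component would be far richer (a neural detector, multidisciplinary knowledge, a real annotator).

```python
class Machine:
    """Toy detector: flags items containing known suspicious keywords."""
    def __init__(self):
        self.keywords = {"shocking"}

    def predict(self, item):
        hits = [w for w in self.keywords if w in item]
        return ("fake" if hits else "real", hits)  # label + explanation

    def update(self, feedback):
        # Learn from human verdicts: absorb vocabulary of confirmed fakes.
        for item, verdict in feedback:
            if verdict == "fake":
                self.keywords.update(item.split())

def human_verify(item, label, explanation, theory):
    """The human checks the prediction against theoretical knowledge
    (here reduced to a set of deception-cue words)."""
    return "fake" if any(w in item for w in theory) else "real"

def run_triangle_loop(machine, news_items, theory, rounds=2):
    """One cycle per round: machine predicts, human verifies via theory,
    verdicts flow back to the machine, which updates itself."""
    for _ in range(rounds):
        feedback = []
        for item in news_items:
            label, explanation = machine.predict(item)
            verdict = human_verify(item, label, explanation, theory)
            feedback.append((item, verdict))
        machine.update(feedback)
    return machine

theory = {"miracle"}  # e.g., a linguistic cue of deceptive news
m = run_triangle_loop(Machine(),
                      ["miracle cure found", "council meets today"],
                      theory)
```

After the loop, the machine classifies "miracle cure found" as fake even though its initial keyword set missed it, illustrating how human feedback grounded in theory improves the detector over iterations.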

Conclusions and Future Work
The accuracy and reliability of information dissemination greatly support the sustainable development of society and the economy. By embracing digital transformation, green information technology and responsible information production and consumption, we can reduce resource consumption and improve efficiency, achieving the long-term benefits of information dissemination. This paper focuses on investigating existing fake news detection technology and provides an overview of the research status of fake news detection methods. We collected almost all commonly used datasets, classified them from the perspectives of single-mode and multi-mode and summarized the research methods for fake news detection, including content-based, social network-based and knowledge-based detection methods. Considering the popularity of multimodal technology, we also reviewed multimodal fake news detection methods. Additionally, this paper discusses the research progress on fake news in multidisciplinary fields. Furthermore, we discuss general fake news detection technology along with explainable fake news detection methods. Specifically, we propose a human-machine-theory explainable triangular communication framework. It is characterized by being people-centered, incorporating multidisciplinary knowledge and aiming to establish the sustainable development of a human-machine interaction information dissemination system.
Finally, based on the review of fake news detection presented above, several topics deserve further investigation in the future: (1) The emergence of large models such as ChatGPT has set off a wave of large language models, which far surpass previous models in language ability. There is potential for utilizing the knowledge and language abilities of these models to achieve improved performance in false information detection; however, research on fake news detection using large models is currently lacking. (2) In the era of AI-generated content, deep forgery (deepfake) technology is becoming more prevalent in fields such as film and television, games and privacy protection. However, the malicious use of deep forgery technology poses threats to personal reputation, social stability and political security. Therefore, future research should focus on developing deep forgery generation and defense methods to address these challenges.
(3) From the perspective of explainable fake news detection, there is a need to develop comprehensive and explainable solutions. Currently, in terms of both model structure analysis and model behavior analysis, explainable artificial intelligence methods for fake news detection are not yet fully established. Designing an explainable fake news detection system has become crucial in the current complex information dissemination environment. (4) Brain science, neuropsychology, psychology and other multidisciplinary fields represent relatively cutting-edge areas of knowledge. At present, research on the neural mechanisms of fake news is very limited. We believe that specialized methods from these fields can contribute to the identification of, defense against and deeper understanding of fake news.
We hope that more efficient fake news detection methods can be developed in the future, thus promoting the sustainable development of information dissemination.

Figure 1. News disseminated on social media.

The authors of [41] obtained text features and visual features through pre-training models and fused them into new feature representations. Their simple, unified framework is shown in Figure 3 (the fake news image in Figure 3 comes from the Internet).

Figure 4. Research on fake news in multidisciplinary fields.

Figure 5. Dataset division of fake news detection.

Figure 6. Explainable AI framework diagram for human-machine communication interaction.

Figure 9. Human-machine-theory explainable triangular communication framework diagram for fake news detection.

Table 1. Classification and explanation of fake news.

Table 2. Summary of multimodal fake news detection methods.

Table 3. The multimodal datasets.

Table 4. The single-mode claim datasets.

Table 5. The single-mode post datasets.

Table 6. The single-mode article datasets.

Table 7. Structural analysis of explainable fake news detection models.

Table 8. Behavior analysis of explainable fake news detection models.