
Search Results (2)

Search Parameters:
Keywords = telecom fraud recognition

17 pages, 7087 KB  
Article
Telecom Fraud Recognition Based on Large Language Model Neuron Selection
by Lanlan Jiang, Cheng Zhang, Xingguo Qin, Ya Zhou, Guanglun Huang, Hui Li and Jun Li
Mathematics 2025, 13(11), 1784; https://doi.org/10.3390/math13111784 - 27 May 2025
Viewed by 1625
Abstract
In natural language processing (NLP), text classification is a core task for large language models (LLMs). Existing methods, however, rely chiefly on the output of the final LLM layer, overlooking the information carried by neurons in intermediate layers. To address this shortcoming, we introduce LENS (Linear Exploration and Neuron Selection), a technique that identifies salient intermediate-layer neurons through linear exploration and sparsely integrates them before passing them to downstream text-classification modules. This strategy suppresses noise from irrelevant neurons, improving both the accuracy and the computational efficiency of the model. Detecting telecommunication fraud text is a particularly difficult NLP problem, owing to its increasingly covert nature and the limitations of current detection algorithms. To tackle data scarcity and low classification accuracy, we extend the LENS framework into the LENS-RMHR (Linear Exploration and Neuron Selection with RoBERTa, Multi-head Mechanism, and Residual Connections) model. By incorporating RoBERTa, a multi-head attention mechanism, and residual connections, LENS-RMHR strengthens feature representation and improves training efficiency. Building on the CCL2023 telecommunications fraud dataset, we construct an expanded dataset of eight categories covering a diverse range of fraud types, and we employ a dual-loss function to improve performance in multi-class classification. Experimental results show that LENS-RMHR performs strongly across multiple benchmark datasets, underscoring its potential for text classification and telecommunications fraud detection.
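The abstract describes LENS as scoring intermediate-layer neurons by "linear exploration" and keeping a sparse salient subset, but does not give the exact criterion. A minimal NumPy sketch of the general idea, using a simple correlation-based linear probe as the (assumed, illustrative) relevance score:

```python
import numpy as np

def select_salient_neurons(hidden, labels, k):
    """Score each neuron by the absolute correlation of its activation
    with the label (a crude linear probe) and keep the top-k indices.
    hidden: (n_samples, n_neurons) activations from one intermediate layer.
    """
    h = (hidden - hidden.mean(0)) / (hidden.std(0) + 1e-8)
    y = labels - labels.mean()
    scores = np.abs(h.T @ y) / len(y)        # per-neuron relevance score
    top = np.argsort(scores)[::-1][:k]       # indices of the k best neurons
    return np.sort(top)

# toy demo: neurons 0 and 1 carry the label signal, the rest are noise
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
hidden = rng.normal(size=(200, 16))
hidden[:, 0] += 3 * y
hidden[:, 1] -= 3 * y
idx = select_salient_neurons(hidden, y, k=2)
print(idx)  # → [0 1]
```

The selected activations `hidden[:, idx]` would then feed a downstream classifier; the paper's actual exploration procedure and sparsity mechanism may differ from this sketch.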
(This article belongs to the Section E1: Mathematics and Computer Science)

16 pages, 5991 KB  
Article
Innovative Telecom Fraud Detection: A New Dataset and an Advanced Model with RoBERTa and Dual Loss Functions
by Jun Li, Cheng Zhang and Lanlan Jiang
Appl. Sci. 2024, 14(24), 11628; https://doi.org/10.3390/app142411628 - 12 Dec 2024
Cited by 5 | Viewed by 5252
Abstract
Telecom fraud has emerged as one of the most pressing challenges in the criminal field. With advances in artificial intelligence, telecom fraud texts have become increasingly covert and deceptive. Existing prevention methods, such as mobile number tracking and detection and traditional machine-learning-based text recognition, struggle to identify telecom fraud in real time. In addition, the scarcity of Chinese telecom fraud text data has limited research in this area. In this paper, we propose a telecom fraud text detection model, RoBERTa-MHARC, which combines RoBERTa with a multi-head attention mechanism and residual connections. First, the model takes data categories from the CCL2023 telecom fraud dataset as basic samples and merges them with collected telecom fraud text data, creating a five-category dataset covering impersonation of customer service, impersonation of leadership acquaintances, loans, public security fraud, and normal text. During training, the model integrates a multi-head attention mechanism and improves training efficiency through residual connections. Finally, the model improves multi-class classification accuracy by incorporating an inconsistency loss function alongside the cross-entropy loss. Experimental results demonstrate that our model performs well on multiple benchmark datasets, achieving an F1 score of 97.65 on the FBS dataset, 98.10 on our own dataset, and 93.69 on the news dataset.
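The building block this abstract names, multi-head attention followed by a residual connection on top of the encoder output, can be sketched in NumPy. This is standard multi-head self-attention, not the authors' exact implementation; the weight matrices here are random stand-ins for learned parameters:

```python
import numpy as np

def multi_head_self_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Multi-head self-attention with a residual connection.
    x: (seq_len, d_model) encoder output; W*: (d_model, d_model) projections.
    """
    seq, d = x.shape
    dh = d // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    def split(t):                                    # (n_heads, seq, dh)
        return t.reshape(seq, n_heads, dh).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)
    att = q @ k.transpose(0, 2, 1) / np.sqrt(dh)     # scaled dot-product
    att = np.exp(att - att.max(-1, keepdims=True))
    att = att / att.sum(-1, keepdims=True)           # softmax over keys
    out = (att @ v).transpose(1, 0, 2).reshape(seq, d) @ Wo
    return x + out                                   # residual connection

# demo: with the output projection zeroed, only the residual path remains
rng = np.random.default_rng(0)
d_model, heads, seq_len = 8, 2, 5
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = [0.1 * rng.normal(size=(d_model, d_model)) for _ in range(3)]
y = multi_head_self_attention(x, Wq, Wk, Wv, np.zeros((d_model, d_model)), heads)
print(np.allclose(y, x))  # → True
```

The residual path is what the abstract credits with improved training efficiency: gradients can bypass the attention sublayer entirely, which is visible above in the degenerate zero-projection case.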
(This article belongs to the Section Computing and Artificial Intelligence)
