Search Results (269)

Search Parameters:
Keywords = malicious user detection

34 pages, 2208 KB  
Article
Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs
by Georg Goldenits, Philip König, Sebastian Raubitzek and Andreas Ekelhart
J. Cybersecur. Priv. 2026, 6(2), 48; https://doi.org/10.3390/jcp6020048 - 5 Mar 2026
Viewed by 515
Abstract
Phishing websites pose a major cybersecurity threat, exploiting unsuspecting users and causing significant financial and organisational harm. Traditional machine learning approaches for phishing detection often require extensive feature engineering, continuous retraining, and costly infrastructure maintenance. At the same time, proprietary large language models (LLMs) have demonstrated strong performance in phishing-related classification tasks, but their operational costs and reliance on external providers limit their practical adoption in many business environments. This paper presents a detection pipeline for malicious websites and investigates the feasibility of Small Language Models (SLMs) using raw HTML code and URLs. A key advantage of these models is that they can be deployed on local infrastructure, providing organisations with greater control over data and operations. We systematically evaluate 15 commonly used SLMs, ranging from 1 billion to 70 billion parameters, benchmarking their classification accuracy, computational requirements, and cost-efficiency. Our results highlight the trade-offs between detection performance and resource consumption. While SLMs underperform compared to state-of-the-art proprietary LLMs, the gap is moderate: the best SLM achieves an F1-score of 0.893 (Llama3.3:70B), compared to 0.929 for GPT-5.2, indicating that open-source models can provide a viable and scalable alternative to external LLM services. Full article
(This article belongs to the Section Privacy)
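The F1-scores compared above (0.893 for the best SLM vs. 0.929 for the proprietary model) are the harmonic mean of precision and recall. A minimal sketch of the metric; the confusion-matrix counts below are purely illustrative, not taken from the paper:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 90 true positives, 10 false positives,
# 12 false negatives (missed phishing sites).
score = f1_score(tp=90, fp=10, fn=12)
print(round(score, 3))  # 0.891
```

Because it is a harmonic mean, F1 punishes an imbalance between the two components, which is why it is the headline metric for imbalanced detection tasks like this one.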

27 pages, 2619 KB  
Article
Defamiliarization Attack: Literary Theory Enabled Discussion of LLM Safety
by Bibin Babu, Iana Agafonova, Sebastian Biedermann and Ivan Yamshchikov
Electronics 2026, 15(5), 1047; https://doi.org/10.3390/electronics15051047 - 2 Mar 2026
Viewed by 462
Abstract
This paper introduces a multi-turn large language model (LLM) jailbreaking attack called Defamiliarization, in which malicious queries are embedded within ostensibly harmless narratives. By reframing requests in “unmarked” contexts, LLMs can be coerced into producing undesirable outputs. A range of scenarios is documented, from planning ethically dubious actions to selectively overlooking critical events in literary texts, thereby exposing the limitations of alignment strategies predicated on detecting trigger words or semantic cues. Rather than substituting vocabulary, defamiliarization manipulates context and presentation, highlighting vulnerabilities that cannot be addressed by token-level fixes alone. Beyond demonstrating the effectiveness of defamiliarization as an attack strategy, evidence is presented of a systematic relationship between model scale and susceptibility. Experiments reveal that smaller-parameter models are significantly easier to manipulate using defamiliarized prompts. This finding raises important concerns regarding the growing popularity of lightweight, locally hosted LLMs, which are favored for their lower computational requirements but may lack alignment safeguards. A more holistic approach to LLM safety is advocated—one that incorporates insights from literary theory, ethics, and user experience—treating these models as interpretive agents. By doing so, defenses against covert manipulations can be strengthened and AI systems can remain aligned with human values. Full article

24 pages, 5580 KB  
Article
DF-TransVAE: A Deep Fusion Network for Binary Classification-Based Anomaly Detection in Internet User Behavior
by Huihui Fan, Yuan Jia, Wu Le, Zhenhong Jia, Hui Zhao, Congbing He, Hedong Jiang, Zeyu Hu, Xiaoyi Lv, Jianting Yuan and Xiaohui Huang
Appl. Sci. 2026, 16(5), 2243; https://doi.org/10.3390/app16052243 - 26 Feb 2026
Viewed by 239
Abstract
User behavior anomaly detection plays a vital role in network security for identifying malicious access and abnormal activities in high-dimensional internet user behavior data. Although Transformer architectures have been widely adopted in anomaly detection tasks, and their integration with Variational Autoencoders (VAEs) has often been used to further improve detection accuracy, existing integration methods have failed to effectively balance global feature dependency modeling and generative data distribution learning. This results in limited capability in identifying complex anomalous patterns. To address this issue, this paper proposes DF-TransVAE, a novel deeply integrated framework that advances the integration of a Transformer and a VAE for supervised anomaly detection. The framework first fuses global contextual representations from the Transformer encoder with original input features, then maps the fused representation into the latent space via the VAE encoder. A cross-attention mechanism is introduced as the core of deep integration, enabling dynamic, bidirectional interaction between the fused features and latent variables to enhance information fusion. Lastly, a fully connected classifier equipped with residual connections outputs anomaly probabilities for supervised binary classification. Experimental results on two public datasets demonstrate that the proposed framework achieves better performance than existing deep learning methods in terms of accuracy, precision, recall, and F1-score, particularly in detecting complex anomalous patterns. Our results indicate that the deep integration mechanism we propose effectively addresses the limitations of conventional Transformer–VAE combinations. Full article

13 pages, 1009 KB  
Article
Phishing Email Detection Using BERT and RoBERTa
by Mariam Ibrahim and Ruba Elhafiz
Computation 2026, 14(2), 46; https://doi.org/10.3390/computation14020046 - 7 Feb 2026
Viewed by 1176
Abstract
One of the most harmful and deceptive forms of cybercrime is phishing, which targets users with malicious emails and websites. In this paper, we focus on the use of natural language processing (NLP) techniques and transformer models for phishing email detection. The Nazario Phishing Corpus is preprocessed and blended with real emails from the Enron dataset to create a robustly balanced dataset. The text underwent tokenization, lemmatization, and noise filtration, while often-overlooked sociolinguistic traits such as urgency, deceptive phrasing, and structural anomalies were preserved as features. We fine-tuned two transformer models, Bidirectional Encoder Representations from Transformers (BERT) and the Robustly Optimized BERT Pretraining Approach (RoBERTa), for binary classification. The models were evaluated on the standard metrics of accuracy, precision, recall, and F1-score. Given the context of phishing, emphasis was placed on recall to reduce the number of phishing attacks that went unnoticed. The results show that RoBERTa generalizes better and produces fewer false negatives than BERT, making it the stronger candidate for deployment in security-critical tasks. Full article
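The recall emphasis described above can be made concrete: a false negative (a phishing email classified as legitimate) lowers recall even while precision stays high. A toy sketch with made-up labels:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = phishing)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# One phishing email slips through (third position): precision is
# perfect, but recall drops, which is the failure mode the paper
# optimises against.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
print(round(p, 3), round(r, 3))  # 1.0 0.667
```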

25 pages, 1516 KB  
Article
Comparative Benchmarking of Deep Learning Architectures for Detecting Adversarial Attacks on Large Language Models
by Oleksandr Kushnerov, Ruslan Shevchuk, Serhii Yevseiev and Mikołaj Karpiński
Information 2026, 17(2), 155; https://doi.org/10.3390/info17020155 - 4 Feb 2026
Viewed by 560
Abstract
The rapid adoption of large language models (LLMs) in corporate and governmental systems has raised critical security concerns, particularly prompt injection attacks exploiting LLMs’ inability to differentiate control instructions from untrusted user inputs. This study systematically benchmarks neural network architectures for malicious prompt detection, emphasizing robustness against character-level adversarial perturbations—an aspect that remains comparatively underemphasized in the specific context of prompt-injection detection despite its established significance in general adversarial NLP. Using the Malicious Prompt Detection Dataset (MPDD) containing 39,234 labeled instances, eight architectures—Dense DNN, CNN, BiLSTM, BiGRU, Transformer, ResNet, and character-level variants of CNN and BiLSTM—were evaluated based on standard performance metrics (accuracy, F1-score, and AUC-ROC), adversarial robustness coefficients against spacing and homoglyph perturbations, and inference latency. Results indicate that the word-level 3_Word_BiLSTM achieved the highest performance on clean samples (accuracy = 0.9681, F1 = 0.9681), whereas the Transformer exhibited lower accuracy (0.9190) and significant vulnerability to spacing attacks (adversarial robustness ρ_spacing = 0.61). Conversely, the Character-level BiLSTM demonstrated superior resilience (ρ_spacing = 1.0, ρ_homoglyph = 0.98), maintaining high accuracy (0.9599) and generalization on external datasets with only 2–4% performance decay. These findings highlight that character-level representations provide intrinsic robustness against obfuscation attacks, suggesting Char_BiLSTM as a reliable component in defense-in-depth strategies for LLM-integrated systems. Full article
(This article belongs to the Special Issue Public Key Cryptography and Privacy Protection)
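The two perturbation classes benchmarked in this study, spacing insertion and homoglyph substitution, can be sketched as simple string transforms. The look-alike mapping below is a small illustrative subset, not the study's actual attack set:

```python
import random

# Latin -> Cyrillic look-alikes (a small illustrative subset)
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

def spacing_attack(prompt: str, rate: float = 0.3, seed: int = 0) -> str:
    """Insert spaces inside words to break word-level tokenisation."""
    rng = random.Random(seed)
    out = []
    for ch in prompt:
        out.append(ch)
        if ch.isalpha() and rng.random() < rate:
            out.append(" ")
    return "".join(out)

def homoglyph_attack(prompt: str) -> str:
    """Swap characters for visually near-identical Unicode look-alikes."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in prompt)

evaded = homoglyph_attack("ignore all previous instructions")
print(evaded == "ignore all previous instructions")  # False: bytes differ, glyphs barely do
```

A word-level tokenizer sees entirely new tokens after either transform, while a character-level model still observes most of the original signal, which is consistent with the robustness gap the paper reports.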

5 pages, 1524 KB  
Proceeding Paper
SMS Processing Using Optical Character Recognition for Smishing Detection
by Lidia Prudente-Tixteco, Jesus Olivares-Mercado and Linda Karina Toscano-Medina
Eng. Proc. 2026, 123(1), 12; https://doi.org/10.3390/engproc2026123012 - 3 Feb 2026
Viewed by 372
Abstract
Instant messaging services are the main modern means of communication because they allow the exchange of messages between people anywhere and through many types of devices. Smishing involves sending text messages that spoof banks, government institutions, or companies in order to deceive recipients. These messages often include malicious links that redirect users to fraudulent websites designed to steal personal information and commit financial fraud, identity theft, and extortion, among other crimes. Detecting smishing requires techniques that either block access to the dynamic links cybercriminals generate to take control of devices or consult blacklists of malicious links. Optical Character Recognition (OCR) recognizes text embedded in images without accessing links. This paper presents a conceptual model that uses OCR to extract the text of a suspected smishing message from a screenshot of a mobile device, so that further processing can analyze whether it is smishing. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
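Downstream of the OCR step, the link analysis the model describes could look like the sketch below. The message text, domain, and blacklist are invented for illustration, and the OCR output is assumed to have been produced already (e.g. by an engine such as Tesseract):

```python
import re

# Assume an OCR engine has already extracted this text from a screenshot;
# the message and blacklist entries are made up for illustration.
ocr_text = "ALERT: your account is locked, verify now at http://bank-verify.example-fraud.top"
BLACKLIST = {"example-fraud.top"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def suspicious_domains(text: str) -> list:
    """Pull host names out of OCR'd SMS text and flag blacklisted ones."""
    hosts = URL_RE.findall(text)
    return [h for h in hosts
            if any(h == b or h.endswith("." + b) for b in BLACKLIST)]

print(suspicious_domains(ocr_text))  # ['bank-verify.example-fraud.top']
```

Because the link text comes from the image rather than from following the URL, the dynamic redirect chain is never triggered, which is the point of the OCR-based design.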

22 pages, 444 KB  
Article
Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
by Ali Şenol, Garima Agrawal and Huan Liu
Electronics 2026, 15(3), 534; https://doi.org/10.3390/electronics15030534 - 26 Jan 2026
Cited by 2 | Viewed by 599
Abstract
Deceptive and evolving conversations on online platforms threaten trust, security, and user safety, particularly when concept drift obscures malicious intent. Large Language Models (LLMs) offer strong natural language reasoning but remain unreliable in risk-sensitive scenarios due to contextual ambiguity and hallucinations. This article introduces a domain knowledge-enhanced Dual-LLM framework that integrates structured cues with pretrained models to improve fraud detection and drift classification. The proposed approach achieves 98% accuracy on benchmark datasets, significantly outperforming zero-shot LLMs and traditional classifiers. The results highlight how domain-grounded prompts enhance both accuracy and interpretability, offering a trustworthy path for applying LLMs in safety-critical applications. Beyond advancing the state of the art in fraud detection, this work has the potential to benefit domains such as cybersecurity, e-commerce, financial fraud prevention, and online content moderation. Full article
(This article belongs to the Special Issue New Trends in Representation Learning)

21 pages, 1401 KB  
Article
Embedding-Based Detection of Indirect Prompt Injection Attacks in Large Language Models Using Semantic Context Analysis
by Mohammed Alamsabi, Michael Tchuindjang and Sarfraz Brohi
Algorithms 2026, 19(1), 92; https://doi.org/10.3390/a19010092 - 22 Jan 2026
Cited by 1 | Viewed by 1189
Abstract
Large Language Models (LLMs) are vulnerable to Indirect Prompt Injection Attacks (IPIAs), where malicious instructions are embedded within external content rather than direct user input. This study presents an embedding-based detection approach that analyses the semantic relationship between user intent and external content, enabling the early identification of IPIAs that conventional defences overlook. We also provide a dataset of 70,000 samples, constructed using 35,000 malicious instances from the Benchmark for Indirect Prompt Injection Attacks (BIPIA) and 35,000 benign instances generated using ChatGPT-4o-mini. Furthermore, we performed a comparative analysis of three embedding models, namely OpenAI text-embedding-3-small, GTE-large, and MiniLM-L6-v2, evaluated in combination with XGBoost, LightGBM, and Random Forest classifiers. The best-performing configuration using OpenAI embeddings with XGBoost achieved an accuracy of 97.7% and an F1-score of 0.977, matching or exceeding the performance of existing IPIA detection methods while offering practical deployment advantages. Unlike prevention-focused approaches that require modifications to the underlying LLM architecture, the proposed method operates as a model-agnostic external detection layer with an average inference time of 0.001 ms per sample. This detection-based approach complements existing prevention mechanisms by providing a lightweight, scalable solution that can be integrated into LLM pipelines without requiring architectural changes. Full article
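The semantic-relationship signal at the heart of this approach can be illustrated with cosine similarity between an intent embedding and candidate content embeddings; injected instructions tend to drift semantically away from the user's stated intent. The 3-dimensional vectors below are toy stand-ins for real embedding-model outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings": the user's intent vs. two retrieved passages.
intent = [0.9, 0.1, 0.0]
on_topic = [0.8, 0.2, 0.1]   # semantically close to the intent
injected = [0.0, 0.2, 0.9]   # drifts toward an embedded instruction

print(cosine(intent, on_topic) > cosine(intent, injected))  # True
```

In the paper's pipeline such similarity features are fed to a trained classifier (e.g. XGBoost) rather than thresholded directly; the sketch shows only the underlying geometric signal.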

25 pages, 1862 KB  
Article
A Novel Architecture for Mitigating Botnet Threats in AI-Powered IoT Environments
by Vasileios A. Memos, Christos L. Stergiou, Alexandros I. Bermperis, Andreas P. Plageras and Konstantinos E. Psannis
Sensors 2026, 26(2), 572; https://doi.org/10.3390/s26020572 - 14 Jan 2026
Viewed by 787
Abstract
The rapid growth of Artificial Intelligence of Things (AIoT) environments in various sectors has introduced major security challenges, as these smart devices can be exploited by malicious users to form Botnets of Things (BoT). Limited computational resources and weak encryption mechanisms in such devices make them attractive targets for attacks like Distributed Denial of Service (DDoS), Man-in-the-Middle (MitM), and malware distribution. In this paper, we propose a novel multi-layered architecture to mitigate BoT threats in AIoT environments. The system leverages edge traffic inspection, sandboxing, and machine learning techniques to analyze, detect, and prevent suspicious behavior, while using centralized monitoring and response automation to ensure rapid mitigation. Experimental results demonstrate the architecture's advantages over existing models, providing early detection of botnet activity, reduced false positives, improved forensic capabilities, and scalable protection for large-scale AIoT deployments. Overall, this solution delivers a comprehensive, resilient, and proactive framework to protect AIoT assets from evolving cyber threats. Full article
(This article belongs to the Special Issue Internet of Things Cybersecurity)

20 pages, 5241 KB  
Article
Phishing Website Impersonation: Comparative Analysis of Detection and Target Recognition Methods
by Marcin Jarczewski, Piotr Białczak and Wojciech Mazurczyk
Appl. Sci. 2026, 16(2), 640; https://doi.org/10.3390/app16020640 - 7 Jan 2026
Viewed by 922
Abstract
With the rapid advancements in technology, there has been a noticeable increase in phishing attacks that exploit users by impersonating trusted entities. The primary attack vectors include fraudulent websites and carefully crafted emails. Early detection of such threats enables the more effective blocking of malicious sites and timely user warnings. One of the key elements in phishing detection is identifying the entity being impersonated. In this article, we conduct a comparative analysis of methods for detecting phishing websites that rely on website screenshots and recognizing their impersonation targets. The two main research objectives include binary phishing detection to identify malicious intent and multiclass classification of impersonated targets to enable specific incident response and brand protection. Three approaches are compared: two state-of-the-art methods, Phishpedia and VisualPhishNet, and a third, proposed in this work, which uses perceptual hash similarity as a baseline. To ensure consistent evaluation conditions, a dedicated framework was developed for the study and shared with the community via GitHub. The obtained results indicate that Phishpedia and the Baseline method were the most effective in terms of detection performance, outperforming VisualPhishNet. Specifically, the proposed Baseline method achieved an F1 score of 0.95 on the Phishpedia dataset for binary classification, while Phishpedia maintained a high Identification Rate (>0.9) across all tested datasets. In contrast, VisualPhishNet struggled with dataset variability, achieving an F1 score of only 0.17 on the same benchmark. Moreover, as our proposed Baseline method demonstrated superior stability and binary classification performance, it should be considered as a robust candidate for preliminary filtering in hybrid systems. Full article
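The perceptual-hash baseline can be illustrated with a difference hash (dHash): each bit records whether a pixel is brighter than its right-hand neighbour, so uniform brightness changes leave the hash intact. Real implementations first resize the screenshot to a small grayscale grid (commonly 9x8); the 4x5 grids below are toy data:

```python
def dhash(pixels):
    """Difference hash: one bit per horizontal neighbour comparison."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

page = [[10, 40, 35, 200, 90],
        [5, 8, 120, 118, 30],
        [60, 80, 77, 10, 14],
        [0, 255, 250, 128, 130]]
# A cloned phishing page rendered slightly brighter: each neighbour
# comparison is unchanged, so the hash distance is zero.
clone = [[p + 2 for p in row] for row in page]

print(hamming(dhash(page), dhash(clone)))  # 0
```

A small Hamming distance between a candidate screenshot's hash and a known brand page's hash then serves as the similarity score for both detection and target identification.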

38 pages, 1444 KB  
Review
A Comprehensive Review: The Evolving Cat-and-Mouse Game in Network Intrusion Detection Systems Leveraging Machine Learning
by Qutaiba Alasad, Meaad Ahmed, Shahad Alahmed, Omer T. Khattab, Saba Alaa Abdulwahhab and Jiann-Shuin Yuan
J. Cybersecur. Priv. 2026, 6(1), 13; https://doi.org/10.3390/jcp6010013 - 4 Jan 2026
Viewed by 1246
Abstract
Machine learning (ML) techniques have significantly enhanced decision support systems, making them more accurate, efficient, and faster. Compared to other areas such as spam filtering and virus detection, however, ML classifiers used to secure networks face a disproportionate risk from sophisticated adversarial attacks, creating a continuous contest between malicious users and defenders. Attackers probe ML models with inputs specifically crafted to evade them and induce incorrect predictions. This paper presents a comprehensive review of attack and defensive techniques in ML-based NIDSs, highlighting the serious challenges these systems face in preserving robustness against adversarial attacks. Our analysis indicates that, despite their strong baseline performance and robustness, ML-based NIDSs urgently require more robust techniques to withstand such attacks. Finally, we survey existing approaches for generating adversarial attacks, reveal the limitations of current defenses, and examine recent advances such as hybrid defensive techniques that integrate multiple strategies, which underscore the ongoing challenges. Full article

13 pages, 2083 KB  
Article
Adaptive Privacy-Preserving Insider Threat Detection Using Generative Sequence Models
by Fatmah Bamashmoos
Future Internet 2026, 18(1), 11; https://doi.org/10.3390/fi18010011 - 26 Dec 2025
Viewed by 589
Abstract
Insider threats remain one of the most challenging security risks in modern enterprises due to their subtle behavioral patterns and the difficulty of distinguishing malicious intent from legitimate activity. This paper presents a unified and adaptive generative framework for insider threat detection that integrates Variational Autoencoders (VAEs) and Transformer Autoencoder architectures to learn personalized behavioral baselines from sequential user event logs. Anomalies are identified as significant deviations from these learned baseline distributions. The proposed framework incorporates an adaptive learning mechanism to address both cold-start scenarios and concept drift, enabling continuous model refinement as user behavior evolves. In addition, we introduce a privacy-preserving latent-space design and evaluate the framework under formal privacy attacks, including membership inference and reconstruction attacks, demonstrating strong resilience against data leakage. Experiments performed on the CERT Insider Threat Dataset (v5.2) show that our approach outperforms traditional and deep learning baselines, with the Transformer Autoencoder achieving an F1-score of 0.66 and an AUPRC of 0.59. The results highlight the effectiveness of generative sequence models for privacy-conscious and adaptive insider threat detection in enterprise environments, providing a comparative analysis of two powerful architectures for practical implementation. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
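The core scoring idea above, flagging event sequences whose reconstruction error deviates sharply from the user's learned baseline, can be sketched independently of the generative model itself. All numbers below are invented for illustration:

```python
def reconstruction_error(event_seq, reconstructed):
    """Mean squared error between a sequence and the model's reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(event_seq, reconstructed)) / len(event_seq)

def is_anomalous(error, baseline_errors, k=3.0):
    """Flag if error exceeds mean + k*std of this user's baseline errors."""
    mean = sum(baseline_errors) / len(baseline_errors)
    var = sum((e - mean) ** 2 for e in baseline_errors) / len(baseline_errors)
    return error > mean + k * var ** 0.5

baseline = [0.10, 0.12, 0.11, 0.09, 0.13]  # errors on the user's normal days
print(is_anomalous(0.45, baseline))  # True
print(is_anomalous(0.12, baseline))  # False
```

The per-user baseline is what makes the scheme personalized: the same absolute error can be normal for one employee and anomalous for another.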

25 pages, 697 KB  
Article
A Hybrid Perplexity-MAS Framework for Proactive Jailbreak Attack Detection in Large Language Models
by Ping Wang, Hao-Cyuan Li, Hsiao-Chung Lin, Wen-Hui Lin, Fang-Ci Wu, Nian-Zu Xie and Zhon-Ghan Yang
Appl. Sci. 2025, 15(24), 13190; https://doi.org/10.3390/app152413190 - 16 Dec 2025
Viewed by 1387
Abstract
Jailbreak attacks (JAs) represent a sophisticated subclass of adversarial threats wherein malicious actors craft strategically engineered prompts that subvert the intended operational boundaries of large language models (LLMs). These attacks exploit latent vulnerabilities in generative AI architectures, allowing adversaries to circumvent established safety protocols and illicitly induce the model to output prohibited, unethical, or harmful content. The emergence of such exploits underscores critical gaps in the security and controllability of modern AI systems, raising profound concerns about their societal impact and deployment in sensitive environments. In response, this study introduces an innovative defense framework that synergistically integrates language model perplexity analysis with a Multi-Agent System (MAS)-oriented detection architecture. This hybrid design aims to fortify the resilience of LLMs by proactively identifying and neutralizing jailbreak attempts, thereby ensuring the protection of user privacy and ethical integrity. The experimental setup adopts a query-driven adversarial probing strategy, in which jailbreak prompts are dynamically generated and injected into the open-source LLaMA-2 model to systematically explore potential vulnerabilities. To ensure rigorous validation, the proposed framework will be evaluated using a custom jailbreak detection benchmark encompassing metrics such as Attack Success Rate (ASR), Defense Success Rate (DSR), Defense Pass Rate (DPR), False Positive Rate, Benign Pass Rate (BPR), and End-to-End Latency. Through iterative experimentation and continuous refinement, this work endeavors to advance the defensive capabilities of LLM-based systems, enabling more trustworthy, secure, and ethically aligned deployment of generative AI in real-world environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
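The perplexity side of the hybrid detector rests on a simple statistic: the exponential of the average negative token log-probability under a language model. A sketch with hypothetical per-token log-probs (adversarially obfuscated jailbreak payloads tend to be poorly predicted, hence high perplexity):

```python
import math

def perplexity(token_logprobs):
    """exp of the average negative log-probability per token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs from a language model: natural text
# is well predicted; an obfuscated payload is not.
natural = [-1.2, -0.8, -1.5, -0.9, -1.1]
garbled = [-6.3, -7.1, -5.8, -6.9, -7.4]

print(perplexity(natural) < perplexity(garbled))  # True
```

A perplexity threshold alone misses fluent multi-turn attacks, which is why the framework pairs it with multi-agent detection rather than using it in isolation.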

29 pages, 9256 KB  
Article
MaSS-Droid: Android Malware Detection Framework Using Multi-Layer Feature Screening and Stacking Integration
by Zihao Zhang, Qiang Han and Zhichao Shi
Entropy 2025, 27(12), 1252; https://doi.org/10.3390/e27121252 - 11 Dec 2025
Viewed by 528
Abstract
In recent years, the frequent emergence of Android malware has posed a significant threat to user security. The redundancy of features in malicious software samples and the instability of individual model performance have also introduced numerous challenges to malware detection. To address these issues, this paper proposes a malware detection framework named MaSS-Droid, based on Multi-feature and Multi-layer Screening for adaptive Stacking integration. First, three types of features are extracted from APK files: permission features, API call features, and opcode sequences. Then, a three-layer feature screening mechanism is designed to effectively eliminate feature redundancy, improve detection accuracy, and reduce the computational complexity of the model. To tackle the problem of large performance fluctuations and limited generalization ability in single models, this paper proposes an adaptive Stacking integration method (Adaptive-Stacking). By dynamically adjusting the weights of base classifiers, this method significantly enhances the stability and generalization performance of the ensemble model when dealing with complex and diverse malware samples. The experimental results demonstrate that the MaSS-Droid framework can effectively mitigate overfitting, improve the model’s generalization capability, reduce feature redundancy, and significantly enhance the overall stability and accuracy of malware detection. Full article
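The adaptive weighting idea can be sketched as a weighted combination of base-classifier outputs. A real Stacking ensemble trains a meta-learner on these outputs, and the paper's weight-adjustment scheme is not reproduced here; the probabilities and weights below are made up for illustration:

```python
def adaptive_stack(base_probs, weights):
    """Weighted average of base-classifier malware probabilities."""
    total = sum(weights)
    return sum(p * w for p, w in zip(base_probs, weights)) / total

# Three hypothetical base classifiers score one APK, e.g. permission-,
# API-, and opcode-based models; weights would be tuned on validation
# performance, here they are invented.
probs = [0.92, 0.40, 0.75]
weights = [0.5, 0.2, 0.3]
score = adaptive_stack(probs, weights)
print(score > 0.5)  # True -> classified as malware at a 0.5 threshold
```

Down-weighting a classifier that is unstable on recent samples is what lets the ensemble stay steady when any single model's performance fluctuates.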

22 pages, 1143 KB  
Article
Comparative Analysis of SQL Injection Defense Mechanisms Based on Three Approaches: PDO, PVT, and ART
by Jiho Choi, Young-Ae Jung and Hoon Ko
Appl. Sci. 2025, 15(23), 12351; https://doi.org/10.3390/app152312351 - 21 Nov 2025
Viewed by 1903
Abstract
This study presents a comprehensive examination of the risks associated with SQL Injection attacks, with a particular focus on the Union Select technique. This method is frequently exploited by attackers to retrieve unauthorized data by appending malicious queries to legitimate database calls. We analyzed multiple real-world cases where personal information was leaked through such attacks, underscoring the urgent need for robust countermeasures in modern web applications. To address these threats, we developed and implemented a multi-layered defense strategy. This strategy includes using PHP Data Objects (PDO) with Prepared Statements to safely handle user inputs, rigorous input pattern validation to detect and reject suspicious payloads, and a redirection-based filtering mechanism to disrupt abnormal access attempts. Through controlled experiments, we validated the effectiveness of these techniques in mitigating SQL Injection attacks. The results demonstrate that our approach successfully blocked malicious queries and prevented unauthorized data access or manipulation. These findings represent a significant contribution to enhancing the security, stability, and trustworthiness of web-based systems, especially those handling sensitive user information. Finally, this work is presented as an educational comparative study, not as a proposal of new defense mechanisms, aiming to provide a clear and reproducible evaluation of standard SQL injection countermeasures. The contributions of this work are threefold: (i) it provides a unified comparative evaluation of three representative SQL injection defense methods—PDO, pattern validation, and attacker redirection—under identical experimental conditions; (ii) it analyzes their strengths, weaknesses, and practical applicability in PHP–MySQL environments; and (iii) it serves as an educational reference that bridges theoretical understanding and practical implementation. The study also suggests directions for extending this work through machine-learning-based anomaly detection and runtime application self-protection (RASP) frameworks. Full article
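The Prepared Statements defence works by binding user input as data rather than concatenating it into the SQL string. The paper uses PHP PDO; the same mechanism is shown below with Python's stdlib sqlite3, since the placeholder-binding principle is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# A classic Union Select payload supplied as "user input":
payload = "nobody' UNION SELECT name, secret FROM users --"

# Parameterised query: the payload is bound as a single string value,
# never parsed as SQL, so the UNION clause cannot execute.
rows = conn.execute("SELECT name FROM users WHERE name = ?",
                    (payload,)).fetchall()
print(rows)  # [] -- no match, and no data exfiltrated
```

Had the payload been concatenated into the query text instead, the UNION SELECT would have appended the secret column to the result set, which is exactly the leak scenario the study examines.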
