Emerging Applications of Artificial Intelligence Algorithms in Computer and Network Security

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 4501

Special Issue Editors


Guest Editor
Department of Electrical and Electronic Engineering, Chung Cheng Institute of Technology, National Defense University, Taoyuan City 335, Taiwan
Interests: cryptography; multimedia security; distributed computing; quantum computing

Guest Editor
Department of Computer Science, National Taichung University of Education, Taichung 40306, Taiwan
Interests: AI; machine learning; IoT; wireless; SDN; 5G networks; network slicing; dynamic distributed networks

Guest Editor
Department of Computer Science and Information Engineering, National Cheng Kung University, No.1, University Road, Tainan City 701, Taiwan
Interests: router and switch design; scalable web servers; cooperative web proxies; QoS; computer architecture; multiprocessor networks and cache coherence design; fault tolerance; parallel and distributed systems

Guest Editor
Department of Electronic Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
Interests: computer network; optical network; wireless network; queueing theory

Special Issue Information

Dear Colleagues,

The integration of artificial intelligence (AI) into computer and network security has opened new avenues for proactive and intelligent threat detection. AI algorithms such as machine learning are increasingly used for virus detection, malware detection, and intrusion detection, enabling systems to identify and respond to threats in real time. In addition, AI enhances techniques such as steganalysis, helping uncover hidden data and prevent the covert communication used in cyberattacks. These emerging applications not only strengthen computer and network security but also offer scalable and adaptive solutions to combat evolving cyber threats in increasingly complex digital environments. This Special Issue focuses on applications of AI algorithms in computer and network security. Any research that combines AI algorithms with existing security technologies to improve computer and network security is welcome.

Prof. Dr. Chiang-Lung Liu
Prof. Dr. Lin-Huang Chang
Prof. Dr. Yeim-Kuan Chang
Prof. Dr. Yung-Chung Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • computer security
  • network security
  • machine learning
  • malware detection
  • intrusion detection
  • steganalysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

27 pages, 3109 KB  
Article
Early Detection of Virtual Machine Failures in Cloud Computing Using Quantum-Enhanced Support Vector Machine
by Bhargavi Krishnamurthy, Saikat Das and Sajjan G. Shiva
Mathematics 2026, 14(7), 1229; https://doi.org/10.3390/math14071229 - 7 Apr 2026
Viewed by 369
Abstract
Cloud computing is one of the essential computing platforms for modern enterprises: in 2025, 84 percent of large businesses use cloud computing services to enable remote working and greater operational flexibility at reduced cost. Cloud environments are dynamic and multitenant, often demanding high computational resources for real-time processing. However, cloud system behavior is subject to various kinds of anomalies, in which data patterns deviate from normal traffic. These include performance, security, resource, and network anomalies, which disrupt the normal operation of cloud systems by increasing latency, reducing throughput, violating service level agreements (SLAs), and causing virtual machine failures. Virtual machine failures, in which the normal operation of a virtual machine is interrupted, degrade services; they arise from resource exhaustion, malware, packet loss, Distributed Denial of Service attacks, and similar causes. Hence, there is a need to detect impending virtual machine failures and prevent them through proactive measures. Traditional machine learning techniques often struggle with high-dimensional data and nonlinear correlations, leading to poor real-time adaptation, whereas quantum machine learning is a promising approach for combinatorially complex, high-dimensional data. In this paper, a novel quantum-enhanced support vector machine (QSVM) is designed as an optimized binary classifier that combines the principles of quantum computing and the support vector machine. It encodes classical data into quantum states, performs feature mapping to transform the data into a high-dimensional Hilbert space, and evaluates similarities with a quantum kernel. Through effective optimization, optimal hyperplanes are constructed to detect the anomalous behavior of virtual machines; this results in an exponential speed-up of operation and avoids local minima through entanglement and superposition. The performance of the proposed QSVM is analyzed using the QuCloudSim 1.0 simulator and further validated using an expected value analysis methodology.
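The encode-then-kernel pipeline described in the abstract (classical data encoded into quantum states, a feature map into Hilbert space, and a kernel built from state overlaps) can be sketched classically. The snippet below is an illustrative assumption, not the paper's implementation: it simulates a fidelity kernel for a simple product-state angle encoding, where each feature x_i is loaded as an RY(x_i) rotation of |0>, so the per-qubit overlap is cos((x_i - z_i)/2) and the n-qubit fidelity is the squared product of overlaps. An SVM would then consume the resulting Gram matrix as a precomputed kernel.

```python
import math

def angle_encoding_kernel(x, z):
    # Fidelity |<phi(x)|phi(z)>|^2 for a product-state angle encoding:
    # feature x_i is encoded as RY(x_i)|0> on qubit i, so the overlap
    # factorizes as a product of cos((x_i - z_i) / 2) terms.
    overlap = 1.0
    for xi, zi in zip(x, z):
        overlap *= math.cos((xi - zi) / 2.0)
    return overlap ** 2

def kernel_matrix(samples):
    # Gram matrix a kernel SVM would use in place of an RBF/linear kernel.
    return [[angle_encoding_kernel(a, b) for b in samples] for a in samples]
```

By construction the kernel is symmetric, bounded in [0, 1], and equals 1 on identical inputs, which is what makes it usable as a drop-in similarity measure for a support vector machine.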

26 pages, 2804 KB  
Article
An Improved Particle Swarm Optimization for Three-Dimensional Indoor Positioning with Ultra-Wideband Communications for LOS/NLOS Channels
by Yung-Fa Huang, Tung-Jung Chan, Guan-Yi Chen and Hsing-Wen Wang
Mathematics 2026, 14(3), 493; https://doi.org/10.3390/math14030493 - 30 Jan 2026
Viewed by 510
Abstract
In this study, an improved particle swarm optimization (PSO) algorithm is designed to construct a weighting model for line-of-sight (LOS) and non-line-of-sight (NLOS) channels in an ultra-wideband (UWB) indoor positioning system. In the proposed algorithm, the particle position represents candidate weight vectors, and the fitness function is defined by the 3D positioning error over multiple test points. An optimized weight modeling framework is proposed for a multi-anchor, three-dimensional UWB indoor positioning system under LOS and NLOS channels. First, the three-dimensional positioning problem is formulated as a multilateration model, and the tag coordinates are estimated via a linearized matrix equation solved by the least-squares method, which explicitly links anchor geometry and ranging errors to the positioning accuracy. To evaluate the proposed method, extensive ranging and positioning experiments are conducted in a realistic indoor environment using up to eight anchors with different LOS/NLOS configurations, including dynamic scenarios with varying numbers of NLOS anchors. The results show that, compared with the conventional unweighted multi-anchor scheme, the PSO-based weighting model can reduce the average 3D positioning error by more than 30% in typical LOS-dominant settings and significantly suppress error bursts in severe NLOS conditions. These findings demonstrate that the combination of mathematical modeling, least-squares estimation, and swarm intelligence optimization provides an effective tool for designing intelligent engineering positioning systems in complex indoor environments, which aligns with the development of smart factories and industrial Internet-of-Things (IIoT) applications.
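The core loop the abstract describes — particles encoding candidate anchor-weight vectors, scored by a positioning-error fitness — can be sketched with a plain-vanilla PSO. This is a minimal sketch under stated assumptions: the function name, hyperparameters, and the stand-in quadratic fitness in the usage note are illustrative, not the paper's tuned algorithm, whose real fitness is the measured 3D error over test points.

```python
import random

def pso_minimize(fitness, dim, n_particles=24, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=1.0, seed=1):
    # Canonical PSO over the box [lo, hi]^dim. `fitness` maps a candidate
    # weight vector to its cost (in the paper's setting, the average 3D
    # positioning error over the test points).
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # per-particle best position
    pbest_f = [fitness(p) for p in pos]  # per-particle best cost
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

For example, minimizing a quadratic stand-in fitness such as `lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, [0.8, 0.2, 0.5]))` with `dim=3` recovers weights close to the target vector.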

19 pages, 1973 KB  
Article
Continuous Smartphone Authentication via Multimodal Biometrics and Optimized Ensemble Learning
by Chia-Sheng Cheng, Ko-Chien Chang, Hsing-Chung Chen and Chao-Lung Chou
Mathematics 2026, 14(2), 311; https://doi.org/10.3390/math14020311 - 15 Jan 2026
Viewed by 1465
Abstract
The ubiquity of smartphones has transformed them into primary repositories of sensitive data; however, traditional one-time authentication mechanisms create a critical trust gap by failing to verify identity post-unlock. Our aim is to mitigate these vulnerabilities and align with the Zero Trust Architecture (ZTA) framework and philosophy of “never trust, always verify,” as formally defined by the National Institute of Standards and Technology (NIST) in Special Publication 800-207. This study introduces a robust continuous authentication (CA) framework leveraging multimodal behavioral biometrics. A dedicated application was developed to synchronously capture touch, sliding, and inertial sensor telemetry. For feature modeling, a heterogeneous deep learning pipeline was employed to capture modality-specific characteristics, utilizing Convolutional Neural Networks (CNNs) for sensor data, Long Short-Term Memory (LSTM) networks for curvilinear sliding, and Gated Recurrent Units (GRUs) for discrete touch. To resolve performance degradation caused by class imbalance in Zero Trust environments, a Grid Search Optimization (GSO) strategy was applied to optimize a weighted voting ensemble, identifying the global optimum for decision thresholds and modality weights. Empirical validation on a dataset of 35,519 samples from 15 subjects demonstrates that the optimized ensemble achieves a peak accuracy of 99.23%. Sensor kinematics emerged as the primary biometric signature, followed by touch and sliding features. This framework enables high-precision, non-intrusive continuous verification, bridging the critical security gap in contemporary mobile architectures.
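The grid search over modality weights and decision thresholds for a weighted voting ensemble can be sketched as below. This is an illustrative toy, not the paper's GSO pipeline: the function name, the score format (per-modality probabilities in [0, 1]), and accuracy as the selection metric are assumptions made for the sketch.

```python
from itertools import product

def grid_search_ensemble(scores, labels, weight_grid, threshold_grid):
    # Exhaustively try every combination of per-modality weights and
    # decision threshold; keep the combination with the best accuracy.
    # `scores` maps modality name -> per-sample scores in [0, 1].
    modalities = sorted(scores)
    n = len(labels)
    best = (-1.0, None, None)  # (accuracy, weights, threshold)
    for weights in product(weight_grid, repeat=len(modalities)):
        total = sum(weights)
        if total == 0:
            continue  # all-zero weights: no vote to fuse
        # Weighted average of modality scores per sample.
        fused = [sum(wm * scores[m][i] for wm, m in zip(weights, modalities)) / total
                 for i in range(n)]
        for t in threshold_grid:
            acc = sum((f >= t) == bool(y) for f, y in zip(fused, labels)) / n
            if acc > best[0]:
                best = (acc, dict(zip(modalities, weights)), t)
    return best
```

On a toy set where one modality separates genuine from impostor samples cleanly, the search assigns that modality a dominant weight and reaches perfect accuracy, which is the behavior the paper exploits at much larger scale.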

19 pages, 2271 KB  
Article
Improving the Performance of Static Malware Classification Using Deep Learning Models and Feature Reduction Strategies
by Tai-Hung Lai, Yun-Jyun Tsai and Chiang-Lung Liu
Mathematics 2025, 13(23), 3753; https://doi.org/10.3390/math13233753 - 23 Nov 2025
Cited by 1 | Viewed by 1533
Abstract
The rapid evolution of malware continues to pose severe challenges to cybersecurity, highlighting the need for accurate and efficient detection systems. Traditional signature- and heuristic-based methods are increasingly inadequate against sophisticated threats, which has motivated the use of machine learning and deep learning for static malware classification. In this study, we propose three deep neural network (DNN) architectures tailored for the binary classification of Portable Executable (PE) files. The models were trained and validated on the EMBER 2017 dataset and further tested on the independent REWEMA dataset to evaluate their cross-dataset generalization capabilities. To address the computational burden of high-dimensional feature vectors, two feature reduction strategies were examined: the Kumar method, which selected 276 features, and the LightGBM-based intersection method, which identified 206 shared features. Experimental results showed that the proposed Model III consistently achieved the best overall performance, outperforming LightGBM (v3.3.5) and the other DNN models in terms of accuracy, recall, and F1-score. Notably, its recall exceeded that of LightGBM by 0.73%, highlighting its superiority in reducing false negatives. Feature reduction further demonstrated that significant dimensionality reduction could be achieved without compromising classification quality, with the Kumar method achieving the best balance between accuracy and efficiency. Cross-dataset validation revealed performance degradation across all models due to distributional shifts, but the decline was less significant for the DNNs, confirming their greater adaptability compared with LightGBM. These findings demonstrate that architectural optimization and appropriate feature selection can significantly improve static malware classification, and the study provides empirical benchmarks and methodological guidance for developing accurate, efficient malware detection systems that remain resilient to evolving threats.
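The intersection idea behind the feature reduction — keeping only the features that rank highly under more than one importance measure — can be sketched generically. The function names and toy importance dictionaries below are illustrative assumptions; the paper's actual method derives its rankings from LightGBM feature importances and selects 206 shared features from EMBER's full vector.

```python
def top_k_features(importances, k):
    # Feature names ranked by importance score, truncated to the top k.
    ranked = sorted(importances, key=importances.get, reverse=True)
    return set(ranked[:k])

def intersect_feature_sets(importances_a, importances_b, k):
    # Keep only features in the top k of BOTH rankings, mirroring the
    # idea of retaining features that two selectors agree on.
    return top_k_features(importances_a, k) & top_k_features(importances_b, k)
```

Because a feature must rank highly under both criteria to survive, the intersected set is typically much smaller than either top-k list, which is what lets dimensionality drop without sacrificing the signal both rankings agree on.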
