Search Results (500)

Search Parameters:
Keywords = machine learning cybersecurity

18 pages, 4863 KiB  
Article
Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
by José Ramón Trillo, Felipe González-López, Juan Antonio Morente-Molinera, Roberto Magán-Carrión and Pablo García-Sánchez
Electronics 2025, 14(15), 3073; https://doi.org/10.3390/electronics14153073 - 31 Jul 2025
Abstract
As anonymity-enabling technologies such as VPNs and proxies are increasingly exploited for malicious purposes, detecting traffic associated with such services is a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses—not direct attacks—to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes and support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, but explainable models like HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
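
As a rough illustration of the comparison this abstract describes, the sketch below scores explainable, interpretable, and non-interpretable scikit-learn classifiers by macro F1 on a synthetic dataset. The data, models, and settings are placeholders, not the authors' pipeline; the fuzzy HFER model, in particular, is not reproduced.

```python
# Sketch: compare classifier families by macro F1-score on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "kNN (explainable)": KNeighborsClassifier(),
    "Decision tree (explainable)": DecisionTreeClassifier(random_state=0),
    "Random forest (explainable)": RandomForestClassifier(random_state=0),
    "Naive Bayes (interpretable)": GaussianNB(),
    "SVM (interpretable)": SVC(),
    "Neural network (non-interpretable)": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: macro F1 = {f1_score(y_te, y_pred, average='macro'):.4f}")
```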

25 pages, 2349 KiB  
Article
Development of a Method for Determining Password Formation Rules Using Neural Networks
by Leila Rzayeva, Alissa Ryzhova, Merei Zhaparkhanova, Ali Myrzatay, Olzhas Konakbayev, Abilkair Imanberdi, Yussuf Ahmed and Zhaksylyk Kozhakhmet
Information 2025, 16(8), 655; https://doi.org/10.3390/info16080655 - 31 Jul 2025
Abstract
According to the latest Verizon DBIR report, credential abuse, including password reuse and human factors in password creation, remains the leading attack vector. It was revealed that most users change their passwords only when they forget them, and 35% of respondents find mandatory password rotation policies inconvenient. These findings highlight the importance of combining technical solutions with user-focused education to strengthen password security. In this research, the “human factor in the creation of usernames and passwords” is considered a vulnerability, since identifying the patterns or rules users follow in password generation can significantly reduce the number of combinations attackers need to try to gain access to personal data. The proposed method, based on an LSTM model, operates at a character level, detecting recurrent structures and generating generalized masks that reflect the most common components in password creation. Open datasets of 31,000 compromised passwords from real-world leaks were used to train the model, which achieved over 90% test accuracy without signs of overfitting. A new method is also developed for evaluating individual users' password-creation habits and automatically fetching context-rich keywords from a user's public web and social media footprint via a keyword-extraction algorithm. This approach is incorporated into a web application that allows clients to fine-tune an LSTM model locally, run it through ONNX, and carry out all inference on-device, ensuring complete data confidentiality and adherence to privacy regulations.
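
A minimal sketch of the character-level modeling idea, assuming a next-character LSTM in PyTorch; the vocabulary, layer sizes, and the structure_mask helper are illustrative inventions, not the authors' code.

```python
# Sketch: character-level LSTM over passwords plus a structural-mask helper.
import torch
import torch.nn as nn

VOCAB = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%_."
stoi = {c: i for i, c in enumerate(VOCAB)}

def structure_mask(pw: str) -> str:
    """Collapse a password into a generalized mask, e.g. 'Password123' -> 'Ulllllllddd'."""
    return "".join("U" if c.isupper() else "l" if c.islower() else
                   "d" if c.isdigit() else "s" for c in pw)

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):                       # x: [batch, time] of char indices
        out, _ = self.lstm(self.embed(x))
        return self.head(out)                   # next-character logits per step

def encode(pw):
    return torch.tensor([[stoi[c] for c in pw if c in stoi]])

model = CharLSTM(len(VOCAB))
x = encode("Password123")
logits = model(x)
# One illustrative next-character training step (optimizer omitted)
loss = nn.CrossEntropyLoss()(logits[0, :-1], x[0, 1:])
loss.backward()
print(structure_mask("Password123"))            # -> Ulllllllddd
```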

26 pages, 5549 KiB  
Article
Intrusion Detection and Real-Time Adaptive Security in Medical IoT Using a Cyber-Physical System Design
by Faeiz Alserhani
Sensors 2025, 25(15), 4720; https://doi.org/10.3390/s25154720 - 31 Jul 2025
Abstract
The increasing reliance on Medical Internet of Things (MIoT) devices introduces critical cybersecurity vulnerabilities, necessitating advanced, adaptive defense mechanisms. Recent cyber incidents—such as compromised critical care systems, modified therapeutic device outputs, and fraudulent clinical data inputs—demonstrate that these threats now directly impact life-critical aspects of patient safety. In this paper, we introduce a machine learning-enabled Cognitive Cyber-Physical System (ML-CCPS), designed to identify and respond to cyber threats in MIoT environments through a layered cognitive architecture. The system is built on a feedback-looped architecture integrating hybrid feature modeling, physical behavioral analysis, and Extreme Learning Machine (ELM)-based classification to provide adaptive access control, continuous monitoring, and reliable intrusion detection. ML-CCPS outperforms benchmark classifiers at an acceptable computational cost, as evidenced by its macro F1-score of 97.8% and an AUC of 99.1% when evaluated on the ToN-IoT dataset. Alongside classification accuracy, the framework has demonstrated reliable behaviour under noisy telemetry, maintained strong efficiency in resource-constrained settings, and scaled effectively with larger numbers of connected devices. Comparative evaluations, radar-style synthesis, and ablation studies further validate its effectiveness in real-time MIoT environments and its ability to detect novel attack types with high reliability.
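
Since the classification stage is ELM-based, here is a compact sketch of a generic Extreme Learning Machine, assuming the standard formulation (random hidden layer, ridge least-squares output weights); the data and parameters are placeholders, not the paper's setup.

```python
# Sketch: a generic Extreme Learning Machine classifier in numpy.
import numpy as np

class ELMClassifier:
    """Random hidden layer + closed-form least-squares output weights."""
    def __init__(self, n_hidden=200, reg=1e-2, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)              # random nonlinear feature map
        Y = np.eye(int(y.max()) + 1)[y]               # one-hot targets
        # Solve ridge-regularized normal equations; no backpropagation needed
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Toy usage on random data standing in for MIoT telemetry features
X = np.random.default_rng(1).normal(size=(500, 30))
y = np.random.default_rng(2).integers(0, 2, 500)
print((ELMClassifier().fit(X, y).predict(X) == y).mean())
```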

29 pages, 2379 KiB  
Article
FADEL: Ensemble Learning Enhanced by Feature Augmentation and Discretization
by Chuan-Sheng Hung, Chun-Hung Richard Lin, Shi-Huang Chen, You-Cheng Zheng, Cheng-Han Yu, Cheng-Wei Hung, Ting-Hsin Huang and Jui-Hsiu Tsai
Bioengineering 2025, 12(8), 827; https://doi.org/10.3390/bioengineering12080827 - 30 Jul 2025
Abstract
In recent years, data augmentation techniques have become the predominant approach for addressing highly imbalanced classification problems in machine learning. Algorithms such as the Synthetic Minority Over-sampling Technique (SMOTE) and Conditional Tabular Generative Adversarial Network (CTGAN) have proven effective in synthesizing minority class samples. However, these methods often introduce distributional bias and noise, potentially leading to model overfitting, reduced predictive performance, increased computational costs, and elevated cybersecurity risks. To overcome these limitations, we propose a novel architecture, FADEL, which integrates feature-type awareness with a supervised discretization strategy. FADEL introduces a unique feature augmentation ensemble framework that preserves the original data distribution by concurrently processing continuous and discretized features. It dynamically routes these feature sets to their most compatible base models, thereby improving minority class recognition without the need for data-level balancing or augmentation techniques. Experimental results demonstrate that FADEL, solely leveraging feature augmentation without any data augmentation, achieves a recall of 90.8% and a G-mean of 94.5% on the internal test set from Kaohsiung Chang Gung Memorial Hospital in Taiwan. On the external validation set from Kaohsiung Medical University Chung-Ho Memorial Hospital, it maintains a recall of 91.9% and a G-mean of 86.7%. These results outperform conventional ensemble methods trained on CTGAN-balanced datasets, confirming the superior stability, computational efficiency, and cross-institutional generalizability of the FADEL architecture. Altogether, FADEL uses feature augmentation to offer a robust and practical solution to extreme class imbalance, outperforming mainstream data augmentation-based approaches.
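
A loose sketch of the feature-augmentation idea, assuming scikit-learn components: a discretized copy of the features is routed to a categorical model while the continuous view goes to a tree ensemble, and the two are fused without any resampling. The model choices and bin counts are illustrative, not FADEL's actual routing logic.

```python
# Sketch: continuous and discretized feature views routed to separate base models.
from sklearn.datasets import make_classification
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)  # imbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile").fit(X_tr)
cont_model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)         # continuous view
disc_model = CategoricalNB().fit(disc.transform(X_tr), y_tr)                # discretized view

# Fuse the two views by averaging class probabilities (no SMOTE/CTGAN needed)
proba = (cont_model.predict_proba(X_te) + disc_model.predict_proba(disc.transform(X_te))) / 2
y_pred = proba.argmax(axis=1)
print(f"minority recall: {recall_score(y_te, y_pred):.3f}")
```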

17 pages, 3650 KiB  
Article
Towards Intelligent Threat Detection in 6G Networks Using Deep Autoencoder
by Doaa N. Mhawi, Haider W. Oleiwi and Hamed Al-Raweshidy
Electronics 2025, 14(15), 2983; https://doi.org/10.3390/electronics14152983 - 26 Jul 2025
Abstract
The evolution of sixth-generation (6G) wireless networks introduces a complex landscape of cybersecurity challenges due to advanced infrastructure, massive device connectivity, and the integration of emerging technologies. Traditional intrusion detection systems (IDSs) struggle to keep pace with such dynamic environments, often yielding high false alarm rates and poor generalization. This study proposes a novel and adaptive IDS that integrates statistical feature engineering with a deep autoencoder (DAE) to effectively detect a wide range of modern threats in 6G environments. Unlike prior approaches, the proposed system leverages the DAE’s unsupervised capability to extract meaningful latent representations from high-dimensional traffic data, followed by supervised classification for precise threat detection. Evaluated using the CSE-CIC-IDS2018 dataset, the system achieved an accuracy of 86%, surpassing conventional ML and DL baselines. The results demonstrate the model’s potential as a scalable and upgradable solution for securing next-generation wireless networks.
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
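
A minimal sketch of the two-stage design, assuming PyTorch: an autoencoder is first trained unsupervised for reconstruction, and its encoder output then feeds a supervised classifier. The layer sizes and the 78-dimensional input are placeholders, not the paper's architecture.

```python
# Sketch: unsupervised deep autoencoder, then a classifier on latent features.
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, n_in, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_in))

    def forward(self, x):
        return self.dec(self.enc(x))

X = torch.randn(512, 78)                     # stand-in for preprocessed flow features
dae = DAE(78)
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
for _ in range(5):                           # unsupervised reconstruction phase
    opt.zero_grad()
    loss = nn.functional.mse_loss(dae(X), X)
    loss.backward()
    opt.step()

Z = dae.enc(X).detach()                      # latent features for the downstream
clf = nn.Linear(16, 2)                       # supervised threat/benign head (training omitted)
```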

30 pages, 2096 KiB  
Article
A Hybrid Approach Using Graph Neural Networks and LSTM for Attack Vector Reconstruction
by Yelizaveta Vitulyova, Tetiana Babenko, Kateryna Kolesnikova, Nikolay Kiktev and Olga Abramkina
Computers 2025, 14(8), 301; https://doi.org/10.3390/computers14080301 - 24 Jul 2025
Abstract
The escalating complexity of cyberattacks necessitates advanced strategies for their detection and mitigation. This study presents a hybrid model that integrates Graph Neural Networks (GNNs) with Long Short-Term Memory (LSTM) networks to reconstruct and predict attack vectors in cybersecurity. GNNs are employed to analyze the structural relationships within the MITRE ATT&CK framework, while LSTM networks are utilized to model the temporal dynamics of attack sequences, effectively capturing the evolution of cyber threats. The combined approach harnesses the complementary strengths of these methods to deliver precise, interpretable, and adaptable solutions for addressing cybersecurity challenges. Experimental evaluation on the CICIDS2017 dataset reveals the model’s strong performance, achieving an Area Under the Curve (AUC) of 0.99 on both balanced and imbalanced test sets, an F1-score of 0.85 for technique prediction, and a Mean Squared Error (MSE) of 0.05 for risk assessment. These findings underscore the model’s capability to accurately reconstruct attack paths and forecast future techniques, offering a promising avenue for strengthening proactive defense mechanisms against evolving cyber threats.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
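
A toy sketch of the hybrid idea, assuming PyTorch and a hand-rolled one-step graph convolution: node embeddings are propagated over a normalized adjacency matrix standing in for ATT&CK technique relationships, and an LSTM consumes the embeddings of an observed technique sequence to predict the next step. The graph, dimensions, and prediction head are all illustrative.

```python
# Sketch: graph layer for technique structure + LSTM for temporal attack order.
import torch
import torch.nn as nn

class GraphLSTM(nn.Module):
    def __init__(self, n_nodes, dim=32):
        super().__init__()
        self.node_emb = nn.Embedding(n_nodes, dim)
        self.gcn = nn.Linear(dim, dim)           # one simple graph-convolution step
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_nodes)      # logits over the next technique

    def forward(self, adj_norm, seq):
        h = torch.relu(self.gcn(adj_norm @ self.node_emb.weight))  # message passing
        out, _ = self.lstm(h[seq])               # embed the observed sequence
        return self.head(out[:, -1])             # predict the following technique

n = 6                                            # toy technique graph
adj = torch.eye(n) + torch.rand(n, n).round()    # self-loops + random edges
adj_norm = adj / adj.sum(1, keepdim=True)        # row-normalized propagation
model = GraphLSTM(n)
next_logits = model(adj_norm, torch.tensor([[0, 2, 3]]))  # one observed attack path
```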

34 pages, 2669 KiB  
Article
A Novel Quantum Epigenetic Algorithm for Adaptive Cybersecurity Threat Detection
by Salam Al-E’mari, Yousef Sanjalawe and Salam Fraihat
AI 2025, 6(8), 165; https://doi.org/10.3390/ai6080165 - 22 Jul 2025
Abstract
The escalating sophistication of cyber threats underscores the critical need for intelligent and adaptive intrusion detection systems (IDSs) to identify known and novel attack vectors in real time. Feature selection is a key enabler of performance in machine learning-based IDSs, as it reduces the input dimensionality, enhances the detection accuracy, and lowers the computational latency. This paper introduces a novel optimization framework called Quantum Epigenetic Algorithm (QEA), which synergistically combines quantum-inspired probabilistic representation with biologically motivated epigenetic gene regulation to perform efficient and adaptive feature selection. The algorithm balances global exploration and local exploitation by leveraging quantum superposition for diverse candidate generation while dynamically adjusting gene expression through an epigenetic activation mechanism. A multi-objective fitness function guides the search process by optimizing the detection accuracy, false positive rate, inference latency, and model compactness. The QEA was evaluated across four benchmark datasets—UNSW-NB15, CIC-IDS2017, CSE-CIC-IDS2018, and TON_IoT—and consistently outperformed baseline methods, including Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Quantum Genetic Algorithm (QGA). Notably, QEA achieved the highest classification accuracy (up to 97.12%), the lowest false positive rates (as low as 1.68%), and selected significantly fewer features (e.g., 18 on TON_IoT) while maintaining near real-time latency. These results demonstrate the robustness, efficiency, and scalability of QEA for real-time intrusion detection in dynamic and resource-constrained cybersecurity environments.
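
A loose numpy sketch of the quantum-inspired mechanics, not the paper's QEA: per-feature selection probabilities play the role of superposed states, sampled subsets are "measurements", a random mask stands in for epigenetic silencing, and probabilities rotate toward the best candidate under a placeholder fitness (the paper's multi-objective function also scores FPR and latency).

```python
# Sketch: quantum-inspired probabilistic feature selection with epigenetic masking.
import numpy as np

rng = np.random.default_rng(0)
n_features, pop_size, iters = 30, 20, 50
relevance = rng.random(n_features)        # stand-in for per-feature usefulness
p = np.full(n_features, 0.5)              # superposition-like selection probabilities

def fitness(mask):
    # Placeholder trade-off: reward relevant features, penalize subset size
    return relevance[mask].sum() - 0.15 * mask.sum()

for _ in range(iters):
    candidates = rng.random((pop_size, n_features)) < p      # "measure" subsets
    candidates &= rng.random((pop_size, n_features)) > 0.1   # epigenetic silencing
    best = max(candidates, key=fitness)
    p = np.clip(p + 0.05 * np.where(best, 1, -1), 0.05, 0.95)  # rotate toward best

selected = np.flatnonzero(p > 0.5)
print(f"{selected.size} features selected:", selected)
```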

18 pages, 1332 KiB  
Article
SC-LKM: A Semantic Chunking and Large Language Model-Based Cybersecurity Knowledge Graph Construction Method
by Pu Wang, Yangsen Zhang, Zicheng Zhou and Yuqi Wang
Electronics 2025, 14(14), 2878; https://doi.org/10.3390/electronics14142878 - 18 Jul 2025
Abstract
In cybersecurity, constructing an accurate knowledge graph is vital for discovering key entities and relationships in security incidents buried in vast unstructured threat reports. Traditional knowledge-graph construction pipelines based on handcrafted rules or conventional machine learning models falter when the data scale and linguistic variety grow. GraphRAG, a retrieval-augmented generation (RAG) framework that splits documents into fixed-length chunks and then retrieves the most relevant ones for generation, offers a scalable alternative yet still suffers from fragmentation and semantic gaps that erode graph integrity. To resolve these issues, this paper proposes SC-LKM, a cybersecurity knowledge-graph construction method that couples the GraphRAG backbone with hierarchical semantic chunking. SC-LKM applies semantic chunking to build a cybersecurity knowledge graph that avoids the fragmentation and inconsistency seen in prior work. The semantic chunking method first respects the native document hierarchy and then refines boundaries with topic similarity and named-entity continuity, maintaining logical coherence while limiting information loss during the fine-grained processing of unstructured text. SC-LKM further integrates the semantic comprehension capacity of Qwen2.5-14B-Instruct, markedly boosting extraction accuracy and reasoning quality. Experimental results show that SC-LKM surpasses baseline systems in entity-recognition coverage, topology density, and semantic consistency.
(This article belongs to the Section Artificial Intelligence)
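
A minimal sketch of the boundary-refinement step: adjacent text units are merged while their similarity stays above a threshold, so chunk boundaries follow semantic shifts rather than a fixed length. TF-IDF cosine similarity stands in for the paper's topic-similarity and named-entity-continuity checks, and no LLM is in the loop.

```python
# Sketch: merge adjacent units by topic similarity instead of fixed-length chunks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def semantic_chunks(units, threshold=0.25):
    vecs = TfidfVectorizer().fit_transform(units)
    chunks, current = [], [units[0]]
    for i in range(1, len(units)):
        if cosine_similarity(vecs[i - 1], vecs[i])[0, 0] >= threshold:
            current.append(units[i])      # same topic: extend the chunk
        else:
            chunks.append(" ".join(current))
            current = [units[i]]          # topic shift: start a new chunk
    chunks.append(" ".join(current))
    return chunks

paras = ["Initial access was gained through a phishing email with an attachment.",
         "The attachment dropped a loader that escalated privileges on the host.",
         "Separately, the report recommends quarterly patch audits for all servers."]
print(semantic_chunks(paras))
```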

55 pages, 6352 KiB  
Review
A Deep Learning Framework for Enhanced Detection of Polymorphic Ransomware
by Mazen Gazzan, Bader Alobaywi, Mohammed Almutairi and Frederick T. Sheldon
Future Internet 2025, 17(7), 311; https://doi.org/10.3390/fi17070311 - 18 Jul 2025
Abstract
Ransomware, a significant cybersecurity threat, encrypts files and causes substantial damage, making early detection crucial yet challenging. This paper introduces a novel multi-phase framework for early ransomware detection, designed to enhance accuracy and minimize false positives. The framework addresses the limitations of existing methods by integrating operational data with situational and threat intelligence, enabling it to dynamically adapt to the evolving ransomware landscape. Key innovations include (1) data augmentation using a Bi-Gradual Minimax Generative Adversarial Network (BGM-GAN) to generate synthetic ransomware attack patterns, addressing data insufficiency; (2) Incremental Mutual Information Selection (IMIS) for dynamically selecting relevant features, adapting to evolving ransomware behaviors and reducing computational overhead; and (3) a Deep Belief Network (DBN) detection architecture, trained on the augmented data and optimized with Uncertainty-Aware Dynamic Early Stopping (UA-DES) to prevent overfitting. The model demonstrates a 4% improvement in detection accuracy (from 90% to 94%) through synthetic data generation and reduces false positives from 15.4% to 14%. The IMIS technique further increases accuracy to 96% while reducing false positives. The UA-DES optimization boosts accuracy to 98.6% and lowers false positives to 10%. Overall, this framework effectively addresses the challenges posed by evolving ransomware, significantly enhancing detection accuracy and reliability.
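
As a rough illustration of the mutual-information leg, the sketch below ranks features by MI with the label using scikit-learn and keeps the top k; the paper's IMIS is incremental and adaptive, which a static ranking does not capture.

```python
# Sketch: mutual-information-based feature ranking and selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=1000, n_features=40, n_informative=8, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:8]          # retain the most informative features
X_reduced = X[:, top_k]
print("selected feature indices:", sorted(top_k))
```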

24 pages, 2173 KiB  
Article
A Novel Ensemble of Deep Learning Approach for Cybersecurity Intrusion Detection with Explainable Artificial Intelligence
by Abdullah Alabdulatif
Appl. Sci. 2025, 15(14), 7984; https://doi.org/10.3390/app15147984 - 17 Jul 2025
Abstract
In today’s increasingly interconnected digital world, cyber threats have grown in frequency and sophistication, making intrusion detection systems a critical component of modern cybersecurity frameworks. Traditional IDS methods, often based on static signatures and rule-based systems, are no longer sufficient to detect and respond to complex and evolving attacks. To address these challenges, Artificial Intelligence and machine learning have emerged as powerful tools for enhancing the accuracy, adaptability, and automation of IDS solutions. This study presents a novel, hybrid ensemble learning-based intrusion detection framework that integrates deep learning and traditional ML algorithms with explainable artificial intelligence for real-time cybersecurity applications. The proposed model combines an Artificial Neural Network and Support Vector Machine as base classifiers and employs a Random Forest as a meta-classifier to fuse predictions, improving detection performance. Recursive Feature Elimination is utilized for optimal feature selection, while SHapley Additive exPlanations (SHAP) provide both global and local interpretability of the model’s decisions. The framework is deployed using a Flask-based web interface in the Amazon Elastic Compute Cloud environment, capturing live network traffic and offering sub-second inference with visual alerts. Experimental evaluations using the NSL-KDD dataset demonstrate that the ensemble model outperforms individual classifiers, achieving a high accuracy of 99.40%, along with excellent precision, recall, and F1-score metrics. This research not only enhances detection capabilities but also bridges the trust gap in AI-powered security systems through transparency. The solution shows strong potential for application in critical domains such as finance, healthcare, industrial IoT, and government networks, where real-time and interpretable threat detection is vital.
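
A minimal sketch of the stacked ensemble, assuming scikit-learn: ANN and SVM base learners fused by a random-forest meta-classifier, with RFE up front. The synthetic data and settings are placeholders for the NSL-KDD pipeline, and the SHAP and deployment steps are omitted.

```python
# Sketch: RFE feature selection feeding an ANN+SVM stack with an RF meta-classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=41, random_state=0)

stack = StackingClassifier(
    estimators=[("ann", MLPClassifier(max_iter=500, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=RandomForestClassifier(random_state=0),  # fuses base predictions
)
model = make_pipeline(
    RFE(RandomForestClassifier(random_state=0), n_features_to_select=20),
    stack,
)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.4f}")
```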

46 pages, 8887 KiB  
Article
One-Class Anomaly Detection for Industrial Applications: A Comparative Survey and Experimental Study
by Davide Paolini, Pierpaolo Dini, Ettore Soldaini and Sergio Saponara
Computers 2025, 14(7), 281; https://doi.org/10.3390/computers14070281 - 16 Jul 2025
Abstract
This article aims to evaluate the runtime effectiveness of various one-class classification (OCC) techniques for anomaly detection in an industrial scenario reproduced in a laboratory setting. To address the limitations posed by restricted access to proprietary data, the study explores OCC methods that learn solely from legitimate network traffic, without requiring labeled malicious samples. After analyzing major publicly available datasets, such as KDD Cup 1999 and TON-IoT, as well as the most widely used OCC techniques, a lightweight and modular intrusion detection system (IDS) was developed in Python. The system was tested in real time on an experimental platform based on Raspberry Pi, within a simulated client–server environment using the NFSv4 protocol over TCP/UDP. Several OCC models were compared, including One-Class SVM, Autoencoder, VAE, and Isolation Forest. The results showed strong performance in terms of detection accuracy and low latency, with the best outcomes achieved using the UNSW-NB15 dataset. The article concludes with a discussion of additional strategies to enhance the runtime analysis of these algorithms, offering insights into potential future applications and improvement directions.
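
A minimal sketch of the one-class setup, assuming scikit-learn: each model fits legitimate samples only and flags deviations at test time. The synthetic data stands in for benign network traffic, and the autoencoder and VAE variants are omitted.

```python
# Sketch: one-class models trained on benign traffic only, flagging anomalies.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(1000, 10))        # stand-in for normal traffic
test = np.vstack([rng.normal(0, 1, (50, 10)),     # unseen benign samples
                  rng.normal(4, 1, (50, 10))])    # anomalous traffic

for model in (OneClassSVM(nu=0.05), IsolationForest(random_state=0)):
    labels = model.fit(benign).predict(test)      # +1 = normal, -1 = anomaly
    print(type(model).__name__, "flagged", (labels == -1).sum(), "anomalies")
```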

27 pages, 960 KiB  
Article
Quantum-Inspired Algorithms and Perspectives for Optimization
by Gerardo Iovane
Electronics 2025, 14(14), 2839; https://doi.org/10.3390/electronics14142839 - 15 Jul 2025
Abstract
This paper starts with an updated review and analyzes recent developments in quantum-inspired algorithms for cybersecurity, with specific attention to possible perspectives of optimization. The enhancement of classical computing capabilities with quantum principles is transforming fields such as machine learning, optimization, and cybersecurity. Evolutionary algorithms are one example where progress has already been made by applying quantum principles, yielding gains in efficiency, generalization, and problem-solving capability. Quantum-inspired evolutionary algorithms (QIEAs) and quantum kernel methods are prime examples of such approaches. Quantum techniques are also used in the field of cybersecurity: QML-based identification systems for intrusion detection strengthen threat detection and encoding, with advanced cryptographic security, while quantum-secure hashing (QSHA) offers sophisticated means of protecting sensitive information. More specifically, QGANs are known for their integration into adversarial generative networks that increase efficiency by replacing classical models in adversarial defense through the generation of synthetic attack models. In this work, a set of benchmarks is provided for comparison with classical and other quantum-inspired technologies. The results demonstrate that these methods outperform classical alternatives in computational efficiency while scaling satisfactorily. Although fully functional models are still awaited, quantum computing benefits greatly from quantum-inspired technologies, as the latter enable the development of frameworks that bring us closer to the quantum era. Consequently, the work takes the form of an updated systematic review enriched with optimized perspectives.

20 pages, 1851 KiB  
Article
ISO-Based Framework Optimizing Industrial Internet of Things for Sustainable Supply Chain Management
by Emad Hashiem Abualsauod
Sustainability 2025, 17(14), 6421; https://doi.org/10.3390/su17146421 - 14 Jul 2025
Abstract
The Industrial Internet of Things (IIoT) offers transformative potential for supply chain management by enabling automation, real-time monitoring, and predictive analytics. However, fragmented standardization, interoperability challenges, and cybersecurity risks hinder its sustainable adoption. This study aims to develop and validate an ISO-based framework to optimize IIoT networks for sustainable supply chain operations. A quantitative time-series research design was employed, analyzing 150 observations from 10–15 industrial firms over five years. Analytical methods included ARIMA, structural equation modeling (SEM), and XGBoost for predictive evaluation. The findings indicate a 6.2% increase in system uptime, a 4.7% reduction in operational costs, a 2.8% decrease in lead times, and a 55–60% decline in security incidents following ISO standard implementation. Interoperability improved by 40–50%, and integration cost savings ranged from 35% to 40%, contributing to a 25% boost in overall operational efficiency. These results underscore the critical role of ISO frameworks such as ISO/IEC 30141 and ISO 50001 in enhancing connectivity, energy efficiency, and network security across IIoT-enabled supply chains. While standardization significantly improves key performance indicators, the persistence of lead time variability suggests the need for additional optimization strategies. This study offers a structured and scalable methodology for ISO-based IIoT integration, delivering both theoretical advancement and practical relevance. By aligning with internationally recognized sustainability standards, it provides policymakers, practitioners, and industry leaders with an evidence-based framework to accelerate digital transformation, enhance operational efficiency, and support resilient, sustainable supply chain development in the context of Industry 4.0.
(This article belongs to the Special Issue Network Operations and Supply Chain Management)
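
As a small illustration of the time-series leg of the analysis, here is a statsmodels ARIMA fit on a synthetic uptime series; the series and the (1, 1, 1) order are placeholders, and the SEM and XGBoost stages are omitted.

```python
# Sketch: ARIMA forecast on a monthly system-uptime KPI series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
uptime = 95 + np.cumsum(rng.normal(0.05, 0.2, size=60))   # stand-in KPI series (%)
fit = ARIMA(uptime, order=(1, 1, 1)).fit()
print(fit.forecast(steps=6))                              # six-period outlook
```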

15 pages, 632 KiB  
Article
Architecture of an Efficient Environment Management Platform for Experiential Cybersecurity Education
by David Arnold, John Ford and Jafar Saniie
Information 2025, 16(7), 604; https://doi.org/10.3390/info16070604 - 14 Jul 2025
Abstract
Testbeds are widely used in experiential learning, providing practical assessments and bridging classroom material with real-world applications. However, manually managing and provisioning student lab environments consumes significant preparation time for instructors. The growing demand for advanced technical skills, such as network administration and cybersecurity, is leading to larger class sizes. This stresses testbed resources and necessitates continuous design updates. To address these challenges, we designed an efficient Environment Management Platform (EMP). The EMP is composed of a set of four Command Line Interface scripts and a Web Interface for secure administration and bulk user operations. Based on our testing, the EMP significantly reduces setup time for student virtualized lab environments. Through a cybersecurity learning environment case study, we found that setup is completed in 15 s for each student, a 12.8-fold reduction compared to manual provisioning. When considering a class of 20 students, the EMP realizes a substantial saving of 62 min in system configuration time. Additionally, the software-based management and provisioning process ensures the accurate realization of lab environments, eliminating the errors commonly associated with manual configuration. This platform is applicable to many educational domains that rely on virtual machines for experiential learning.
(This article belongs to the Special Issue Digital Systems in Higher Education)
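
The reported saving can be sanity-checked from the stated figures; the short sketch below assumes the 12.8-fold reduction applies uniformly per student.

```python
# Arithmetic sketch: 15 s automated setup and a 12.8-fold reduction imply
# roughly 192 s of manual work per student.
per_student_auto = 15            # seconds, from the case study
per_student_manual = 15 * 12.8   # ~192 seconds per student
students = 20
saving_s = students * (per_student_manual - per_student_auto)
print(saving_s / 60)             # ~59 min, close to the ~62 min reported
```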

32 pages, 3793 KiB  
Systematic Review
Systematic Review: Malware Detection and Classification in Cybersecurity
by Sebastian Berrios, Dante Leiva, Bastian Olivares, Héctor Allende-Cid and Pamela Hermosilla
Appl. Sci. 2025, 15(14), 7747; https://doi.org/10.3390/app15147747 - 10 Jul 2025
Abstract
Malicious Software, commonly known as Malware, represents a persistent threat to cybersecurity, targeting the confidentiality, integrity, and availability of information systems. The digital era, marked by the proliferation of connected devices, cloud services, and the advancement of machine learning, has brought numerous benefits; however, it has also exacerbated exposure to cyber threats, affecting both individuals and corporations. This systematic review, which follows the PRISMA 2020 framework, aims to analyze current trends and new methods for malware detection and classification. The review was conducted using data from Web of Science and Scopus, covering publications from 2020 to 2024, with 47 key studies selected for in-depth analysis based on relevance, empirical results, and citation metrics. These studies cover a variety of detection techniques, including machine learning, deep learning, and hybrid models, with a focus on feature extraction, malware behavior analysis, and the application of advanced algorithms to improve detection accuracy. The results highlight important advances, such as the improved performance of ensemble learning and deep learning models in detecting sophisticated threats. Finally, this study identifies the main challenges and outlines opportunities for future research to improve malware detection and classification frameworks.
