Search Results (1,141)

Search Parameters:
Keywords = network traffic distribution

36 pages, 25831 KiB  
Article
Identification of Cultural Landscapes and Spatial Distribution Characteristics in Traditional Villages of Three Gorges Reservoir Area
by Jia Jiang, Zhiliang Yu and Ende Yang
Buildings 2025, 15(15), 2663; https://doi.org/10.3390/buildings15152663 - 28 Jul 2025
Abstract
The Three Gorges Reservoir Area (TGRA) is an important ecological barrier and cultural intermingling zone in the upper reaches of the Yangtze River, and its traditional villages carry unique information about natural changes and civilisational development, but face the challenges of conservation and development under the impact of modernisation and ecological pressure. This study takes 112 traditional villages in the TGRA that have been included in the protection list as the research objects, aiming to construct a cultural landscape identification framework for the traditional villages in the TGRA. Through field surveys, landscape feature assessments, GIS spatial analysis, and multi-source data analysis, we systematically analyse their cultural landscape type systems and spatial differentiation characteristics, and then reveal their cultural landscape types and spatial differentiation patterns. (1) The results of the study show that the spatial distribution of traditional villages exhibits significant altitude gradient differentiation—the low-altitude area is dominated by traffic and trade villages, the middle-altitude area is dominated by patriarchal manor villages and mountain farming villages, and the high-altitude area is dominated by ethno-cultural and ecologically dependent villages. (2) Slope and direction analyses further reveal that the gently sloping areas are conducive to the development of commercial and agricultural settlements, while the steeply sloping areas strengthen the function of ethnic and cultural defence. The results indicate that topographic conditions drive the synergistic evolution of the human–land system in traditional villages through the mechanisms of agricultural optimisation, trade networks, cultural defence, and ecological adaptation. 
The study provides a paradigm of “nature–humanities” interaction analysis for the conservation and development of traditional villages in mountainous areas, which is of practical value in coordinating the construction of ecological barriers and the revitalisation of villages in the reservoir area. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

23 pages, 15846 KiB  
Article
Habitats, Plant Diversity, Morphology, Anatomy, and Molecular Phylogeny of Xylosalsola chiwensis (Popov) Akhani & Roalson
by Anastassiya Islamgulova, Bektemir Osmonali, Mikhail Skaptsov, Anastassiya Koltunova, Valeriya Permitina and Azhar Imanalinova
Plants 2025, 14(15), 2279; https://doi.org/10.3390/plants14152279 - 24 Jul 2025
Abstract
Xylosalsola chiwensis (Popov) Akhani & Roalson is listed in the Red Data Book of Kazakhstan as a rare species with a limited distribution, occurring in small populations in Kazakhstan, Uzbekistan, and Turkmenistan. The aim of this study is to deepen the understanding of the ecological conditions of its habitats, the floristic composition of its associated plant communities, the species’ morphological and anatomical characteristics, and its molecular phylogeny, as well as to identify the main threats to its survival. The ecological conditions of the X. chiwensis habitats include coastal sandy plains and the slopes of chinks and denudation plains with gray–brown desert soils and bozyngens on the Mangyshlak Peninsula and the Ustyurt Plateau at altitudes ranging from −3 to 270 m above sea level. The species is capable of surviving in arid conditions (less than 100 mm of annual precipitation) and under extreme temperatures (air temperatures exceeding 45 °C and soil surface temperatures above 65 °C). In X. chiwensis communities, we recorded 53 species of vascular plants. Anthropogenic factors associated with livestock grazing, industrial disturbances, and off-road vehicle traffic along an unregulated network of dirt roads have been identified as contributing to population decline and the potential extinction of the species under conditions of unsustainable land use. The morphometric traits of X. chiwensis could be used for taxonomic analysis and for identifying diagnostic morphological characteristics to distinguish between species of Xylosalsola. The most taxonomically valuable characteristics include the fruit diameter (with wings) and the cone-shaped structure length, as they differ consistently between species and exhibit relatively low variability. Anatomical adaptations to arid conditions were observed, including a well-developed hypodermis, which is indicative of a water-conserving strategy. 
The moderate photosynthetic activity, reflected by a thinner palisade mesophyll layer, may be associated with reduced photosynthetic intensity, which is compensated for through structural mechanisms for water conservation. The flow cytometry analysis revealed a genome size of 2.483 ± 0.191 pg (2n/4x = 18), and the phylogenetic analysis confirmed the placement of X. chiwensis within the tribe Salsoleae of the subfamily Salsoloideae, supporting its taxonomic distinctness. To support the conservation of this rare species, measures are proposed to expand the area of the Ustyurt Nature Reserve through the establishment of cluster sites. Full article
(This article belongs to the Section Plant Ecology)

21 pages, 1672 KiB  
Article
TSE-APT: An APT Attack-Detection Method Based on Time-Series and Ensemble-Learning Models
by Mingyue Cheng, Ga Xiang, Qunsheng Yang, Zhixing Ma and Haoyang Zhang
Electronics 2025, 14(15), 2924; https://doi.org/10.3390/electronics14152924 - 22 Jul 2025
Abstract
Advanced Persistent Threat (APT) attacks pose a serious challenge to traditional detection methods. These methods often suffer from high false-alarm rates and limited accuracy due to the multi-stage and covert nature of APT attacks. In this paper, we propose TSE-APT, a time-series ensemble model that addresses these two limitations. It combines multiple machine-learning models, such as Random Forest (RF), Multi-Layer Perceptron (MLP), and Bidirectional Long Short-Term Memory Network (BiLSTM) models, to dynamically capture correlations between multiple stages of the attack process based on time-series features. It discovers hidden features through the integration of multiple machine-learning models to significantly improve the accuracy and robustness of APT detection. First, we extract a collection of dynamic time-series features such as traffic mean, flow duration, and flag frequency. We fuse them with static contextual features, including the port service matrix and protocol type distribution, to effectively capture the multi-stage behaviors of APT attacks. Then, we utilize an ensemble-learning model with a dynamic weight-allocation mechanism using a self-attention network to adaptively adjust the sub-model contribution. The experiments showed that time-series feature fusion significantly enhanced detection performance: the ensemble of RF, MLP, and BiLSTM models achieved 96.7% accuracy, considerably improving recall and reducing the false-positive rate. The adaptive mechanism optimizes the model’s performance and reduces false-alarm rates. This study provides an analytical method for APT attack detection that considers both temporal dynamics and static context characteristics, and offers new ideas for security protection in complex networks. Full article
(This article belongs to the Special Issue AI in Cybersecurity, 2nd Edition)
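The dynamic weight-allocation idea described in the TSE-APT abstract can be sketched as a softmax-weighted ensemble over sub-model outputs. This is a minimal illustration, not the paper's implementation: the relevance logits below are fixed stand-ins for what the study's self-attention network would produce per input window, and the scores are invented.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_score(sub_scores, relevance_logits):
    """Weighted average of per-model attack probabilities.

    sub_scores: probabilities from the sub-models (e.g. RF, MLP, BiLSTM).
    relevance_logits: stand-ins for the learned self-attention weights.
    """
    weights = softmax(relevance_logits)
    return sum(w * s for w, s in zip(weights, sub_scores))

# One hypothetical flow window scored by three sub-models:
combined = ensemble_score([0.90, 0.70, 0.95], [1.2, 0.3, 2.0])
```

In the paper's setting the logits would be recomputed per window by the attention network, so sub-model influence shifts with the traffic context rather than staying fixed.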

36 pages, 8047 KiB  
Article
Fed-DTB: A Dynamic Trust-Based Framework for Secure and Efficient Federated Learning in IoV Networks: Securing V2V/V2I Communication
by Ahmed Alruwaili, Sardar Islam and Iqbal Gondal
J. Cybersecur. Priv. 2025, 5(3), 48; https://doi.org/10.3390/jcp5030048 - 19 Jul 2025
Abstract
The Internet of Vehicles (IoV) presents a vast opportunity for optimised traffic flow, road safety, and enhanced usage experience with the influence of Federated Learning (FL). However, the distributed nature of IoV networks creates certain inherent problems regarding data privacy, security from adversarial attacks, and the handling of available resources. This paper introduces Fed-DTB, a new dynamic trust-based framework for FL that aims to overcome these challenges in the context of IoV. Fed-DTB integrates the adaptive trust evaluation that is capable of quickly identifying and excluding malicious clients to maintain the authenticity of the learning process. A performance comparison with previous approaches is shown, where the Fed-DTB method improves accuracy in the first two training rounds and decreases the per-round training time. The Fed-DTB is robust to non-IID data distributions and outperforms all other state-of-the-art approaches regarding the final accuracy (87–88%), convergence rate, and adversary detection (99.86% accuracy). The key contributions include (1) a multi-factor trust evaluation mechanism with seven contextual factors, (2) correlation-based adaptive weighting that dynamically prioritises trust factors based on vehicular conditions, and (3) an optimisation-based client selection strategy that maximises collaborative reliability. This work opens up opportunities for more accurate, secure, and private collaborative learning in future intelligent transportation systems with the help of federated learning while overcoming the conventional trade-off of security vs. efficiency. Full article
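The multi-factor trust evaluation and client exclusion that Fed-DTB describes can be sketched as a weighted trust score with a threshold. The seven factor names and the fixed weights below are illustrative assumptions; the paper derives its weights dynamically via correlation-based adaptive weighting.

```python
def trust_score(factors, weights):
    """Aggregate per-factor trust values (each in [0, 1]) into one score."""
    total_weight = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total_weight

def select_clients(clients, weights, threshold=0.6):
    """Keep only vehicles whose aggregated trust clears the threshold."""
    return [cid for cid, factors in clients.items()
            if trust_score(factors, weights) >= threshold]

# Hypothetical factor names and weights (stand-ins, not the paper's exact set):
weights = {"model_quality": 3, "data_freshness": 2, "history": 2,
           "latency": 1, "mobility": 1, "signal": 1, "identity": 2}
clients = {
    "veh-1": dict.fromkeys(weights, 0.9),  # consistently trustworthy
    "veh-2": dict.fromkeys(weights, 0.2),  # behaves like an adversary
}
kept = select_clients(clients, weights)
```

Excluding low-trust clients before aggregation is what keeps a poisoned update from ever entering the federated round.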

17 pages, 1184 KiB  
Article
A Biologically Inspired Cost-Efficient Zero-Trust Security Approach for Attacker Detection and Classification in Inter-Satellite Communication Networks
by Sridhar Varadala and Hao Xu
Future Internet 2025, 17(7), 304; https://doi.org/10.3390/fi17070304 - 13 Jul 2025
Abstract
In next-generation Low-Earth-Orbit (LEO) satellite networks, securing inter-satellite communication links (ISLs) through strong authentication is essential due to the network’s dynamic and distributed structure. Traditional authentication systems often struggle in these environments, leading to the adoption of Zero-Trust Security (ZTS) models. However, current ZTS protocols typically introduce high computational overhead, especially as the number of satellite nodes grows, which can impact both security and network performance. To overcome these challenges, a new bio-inspired ZTS framework called Manta Ray Foraging Cost-Optimized Zero-Trust Security (MRFCO-ZTS) has been introduced. This approach uses data-driven learning methods to enhance security across satellite communications. It continuously evaluates access requests by applying a cost function that accounts for risk level, likelihood of attack, and computational delay. The Manta Ray Foraging Optimization (MRFO) algorithm is used to minimize this cost, enabling effective classification of nodes as either trusted or malicious based on historical authentication records and real-time behavior. MRFCO-ZTS improves the accuracy of attacker detection while maintaining secure data exchange between authenticated satellites. Its effectiveness has been tested through numerical simulations under different satellite traffic conditions, with performance measured in terms of security accuracy, latency, and operational efficiency. Full article
(This article belongs to the Special Issue Joint Design and Integration in Smart IoT Systems, 2nd Edition)
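The cost function MRFCO-ZTS minimizes over risk level, attack likelihood, and computational delay can be sketched as a simple weighted sum with a classification threshold. The weights, normalization cap, and threshold below are invented for illustration; in the paper these trade-offs are tuned by the Manta Ray Foraging Optimization algorithm rather than fixed by hand.

```python
def access_cost(risk, attack_likelihood, delay_ms,
                w=(0.4, 0.4, 0.2), max_delay_ms=50.0):
    """Scalar cost of granting one access request: weighted sum of
    normalized risk, attack likelihood, and computational delay."""
    delay_norm = min(delay_ms, max_delay_ms) / max_delay_ms
    return w[0] * risk + w[1] * attack_likelihood + w[2] * delay_norm

def classify(node, threshold=0.5):
    """Label a satellite node from its request profile (illustrative rule)."""
    return "malicious" if access_cost(**node) > threshold else "trusted"

benign = {"risk": 0.1, "attack_likelihood": 0.05, "delay_ms": 5.0}
hostile = {"risk": 0.8, "attack_likelihood": 0.9, "delay_ms": 40.0}
```

The optimizer's job in the paper is effectively to find the weight/threshold combination that separates profiles like these with minimal cost across historical authentication records.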

31 pages, 2736 KiB  
Article
Unseen Attack Detection in Software-Defined Networking Using a BERT-Based Large Language Model
by Mohammed N. Swileh and Shengli Zhang
AI 2025, 6(7), 154; https://doi.org/10.3390/ai6070154 - 11 Jul 2025
Abstract
Software-defined networking (SDN) represents a transformative shift in network architecture by decoupling the control plane from the data plane, enabling centralized and flexible management of network resources. However, this architectural shift introduces significant security challenges, as SDN’s centralized control becomes an attractive target for various types of attacks. While the body of current research on attack detection in SDN has yielded important results, several critical gaps remain that require further exploration. Addressing challenges in feature selection, broadening the scope beyond Distributed Denial of Service (DDoS) attacks, strengthening attack decisions based on multi-flow analysis, and building models capable of detecting unseen attacks that they have not been explicitly trained on are essential steps toward advancing security measures in SDN environments. In this paper, we introduce a novel approach that leverages Natural Language Processing (NLP) and the pre-trained Bidirectional Encoder Representations from Transformers (BERT)-base-uncased model to enhance the detection of attacks in SDN environments. Our approach transforms network flow data into a format interpretable by language models, allowing BERT-base-uncased to capture intricate patterns and relationships within network traffic. By utilizing Random Forest for feature selection, we optimize model performance and reduce computational overhead, ensuring efficient and accurate detection. Attack decisions are made based on several flows, providing stronger and more reliable detection of malicious traffic. Furthermore, our proposed method is specifically designed to detect previously unseen attacks, offering a solution for identifying threats that the model was not explicitly trained on. 
To rigorously evaluate our approach, we conducted experiments in two scenarios: one focused on detecting known attacks, achieving an accuracy, precision, recall, and F1-score of 99.96%, and another on detecting previously unseen attacks, where our model achieved 99.96% in all metrics, demonstrating the robustness and precision of our framework in detecting evolving threats, and reinforcing its potential to improve the security and resilience of SDN networks. Full article
(This article belongs to the Special Issue Artificial Intelligence for Network Management)
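The step of "transforming network flow data into a format interpretable by language models" can be sketched as serializing flow records into token strings, with several flows joined per decision. The field names and the use of BERT's `[SEP]` separator are assumptions for illustration; the paper's exact feature set (chosen via Random Forest) is not reproduced here.

```python
def flow_to_text(flow):
    """Serialize one flow record into a whitespace-delimited key=value string
    that a text tokenizer can consume. Field names are illustrative."""
    fields = ["proto", "src_port", "dst_port", "pkts", "bytes", "duration_ms"]
    return " ".join(f"{name}={flow[name]}" for name in fields)

def window_to_text(flows):
    """Join several flows into one input, so the attack decision is made
    over multiple flows rather than a single one."""
    return " [SEP] ".join(flow_to_text(f) for f in flows)

flows = [
    {"proto": "tcp", "src_port": 443, "dst_port": 51532,
     "pkts": 12, "bytes": 8040, "duration_ms": 35},
    {"proto": "udp", "src_port": 53, "dst_port": 40211,
     "pkts": 2, "bytes": 180, "duration_ms": 1},
]
text = window_to_text(flows)
```

The resulting string would then be passed to a `bert-base-uncased` tokenizer and classifier head; that fine-tuning step is outside this sketch.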

18 pages, 736 KiB  
Article
Collaborative Split Learning-Based Dynamic Bandwidth Allocation for 6G-Grade TDM-PON Systems
by Alaelddin F. Y. Mohammed, Yazan M. Allawi, Eman M. Moneer and Lamia O. Widaa
Sensors 2025, 25(14), 4300; https://doi.org/10.3390/s25144300 - 10 Jul 2025
Abstract
Dynamic Bandwidth Allocation (DBA) techniques enable Time Division Multiplexing Passive Optical Network (TDM-PON) systems to efficiently manage upstream bandwidth by allowing the centralized Optical Line Terminal (OLT) to coordinate resource allocation among distributed Optical Network Units (ONUs). Conventional DBA techniques struggle to adapt to dynamic traffic conditions, resulting in suboptimal performance under varying load scenarios. This work suggests a Collaborative Split Learning-Based DBA (CSL-DBA) framework that utilizes the recently emerging Split Learning (SL) technique between the OLT and ONUs for the objective of optimizing predictive traffic adaptation and reducing communication overhead. Instead of requiring centralized learning at the OLT, the proposed approach decentralizes the process by enabling ONUs to perform local traffic analysis and transmit only model updates to the OLT. This cooperative strategy guarantees rapid responsiveness to fluctuating traffic conditions. We show by extensive simulations spanning several traffic scenarios, including low, fluctuating, and high traffic load conditions, that our proposed CSL-DBA achieves at least 99% traffic prediction accuracy, with minimal inference latency and scalable learning performance, and it reduces communication overhead by approximately 60% compared to traditional federated learning approaches, making it a strong candidate for next-generation 6G-grade TDM-PON systems. Full article
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)
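The split-learning arrangement in CSL-DBA, where the ONU runs local analysis and sends only a compact update to the OLT, can be sketched as a model cut into a client half and a server half. The pure-Python linear layers and toy weights below are illustrative stand-ins; the point is only that the ONU transmits a small activation vector instead of raw traffic samples.

```python
def onu_front(window, cut_weights):
    """Client-side (ONU) half: compress a raw traffic window into a small
    activation vector at the cut layer."""
    return [sum(x * w for x, w in zip(window, row)) for row in cut_weights]

def olt_head(activation, head_weights):
    """Server-side (OLT) half: map received activations to a bandwidth
    demand estimate used for upstream grant sizing."""
    return sum(a * w for a, w in zip(activation, head_weights))

window = [0.2, 0.4, 0.1, 0.9, 0.7, 0.3, 0.5, 0.6]  # per-interval load samples
cut_weights = [[0.1] * 8, [0.05] * 8]               # 8 inputs -> 2 activations
activation = onu_front(window, cut_weights)
demand = olt_head(activation, [1.0, 2.0])
```

The roughly 60% communication-overhead reduction the abstract reports comes from exactly this asymmetry: the cut-layer activation (and gradients) are far smaller than the raw per-interval measurements.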

21 pages, 5291 KiB  
Article
Sensitivity Analysis and Optimization of Urban Roundabout Road Design Parameters Based on CFD
by Hangyu Zhang, Sihui Dong, Shiqun Li and Shuai Zheng
Eng 2025, 6(7), 156; https://doi.org/10.3390/eng6070156 - 9 Jul 2025
Abstract
With the rapid advancement of urbanization, urban transportation systems are facing increasingly severe congestion challenges, especially at traditional roundabouts. The rapid increase in vehicles has led to a sharp increase in pressure at roundabouts. In order to alleviate the traffic pressure in the roundabout, this paper changes the road design parameters of the roundabout, uses a CFD method combined with sensitivity analysis to study the influence of different inlet angles, lane numbers, and the outer radius on the pressure, and seeks the road design parameter scheme with the optimal mitigation effect. Firstly, the full factorial experimental design method is used to select the sample points in the design sample space, and the response values of each sample matrix are obtained by CFD. Secondly, the response surface model between the road design parameters of the roundabout and the pressure in the ring is constructed. The single-factor analysis method and the multi-factor analysis method are used to analyze the influence of the road parameters on the pressure of each feature point, and then the moment-independent sensitivity analysis method based on the response surface model is used to solve the sensitivity distribution characteristics of the road design parameters of the roundabout. Finally, the Kriging surrogate model is constructed, and the NSGA-II is used to solve the multi-objective optimization problem to obtain the optimal solution set of road parameters. The results show that there are significant differences in the mechanism of action of different road geometric parameters on the pressure of each feature point of the roundabout, and it shows obvious spatial heterogeneity of parameter sensitivity. The pressure changes in the two feature points at the entrance conflict area and the inner ring weaving area are significantly correlated with the lane number parameters. 
There is a strong coupling relationship between the pressure of the maximum pressure extreme point in the ring and the radius parameters of the outer ring. According to the optimal scheme of road parameters, that is, when the parameter set (inlet angle/°, number of lanes, outer radius/m) meets (35.4, 5, 65), the pressures of the feature points decrease by 34.1%, 38.3%, and 20.7%, respectively, which has a significant effect on alleviating the pressure in the intersection. This study optimizes the geometric parameters of roundabouts through multidisciplinary methods, provides a data-driven congestion reduction strategy for the urban sustainable development framework, and significantly improves road traffic efficiency, which is crucial for building an efficient traffic network and promoting urban sustainable development. Full article

10 pages, 1971 KiB  
Proceeding Paper
An Experimental Evaluation of Latency-Aware Scheduling for Distributed Kubernetes Clusters
by Radoslav Furnadzhiev
Eng. Proc. 2025, 100(1), 25; https://doi.org/10.3390/engproc2025100025 - 9 Jul 2025
Abstract
Kubernetes clusters are deployed across data centers for geo-redundancy and low-latency access, resulting in new challenges in scheduling workloads optimally. This paper presents a practical evaluation of network-aware scheduling in a distributed Kubernetes cluster that spans multiple network zones. A custom scheduling plugin is implemented within the scheduling framework to incorporate real-time network telemetry (inter-node ping latency) into pod placement decisions. The assessment methodology combines a custom scheduler plugin, realistic network latency measurements, and representative distributed benchmarks to assess the impact of scheduling on traffic patterns. The results provide strong empirical confirmation of the findings previously established through simulation, offering a validated path forward to integrate not only network metrics, but also other performance-critical metrics such as energy efficiency, hardware utilization, and fault tolerance. Full article
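The core of a latency-aware Score plugin, as the abstract describes, is mapping measured inter-node ping latency to a node ranking. The sketch below mirrors the scheduler framework's 0-100 scoring convention (higher is better); the node names, latency cap, and linear mapping are assumptions, not the paper's plugin.

```python
def score_nodes(ping_ms, max_latency_ms=100.0):
    """Convert measured ping latency (ms) to a 0-100 placement score,
    clipping anything above the cap to the worst score."""
    scores = {}
    for node, latency in ping_ms.items():
        clipped = min(latency, max_latency_ms)
        scores[node] = round(100 * (1 - clipped / max_latency_ms))
    return scores

# Hypothetical telemetry: same-zone, near-zone, and far-zone nodes.
telemetry = {"zone-a-node1": 0.4, "zone-b-node1": 18.0, "zone-c-node1": 85.0}
ranking = score_nodes(telemetry)
best = max(ranking, key=ranking.get)
```

A real plugin would run this at the Score extension point per pending pod, refreshing the telemetry map from the cluster's latency probes.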

22 pages, 1350 KiB  
Article
From Patterns to Predictions: Spatiotemporal Mobile Traffic Forecasting Using AutoML, TimeGPT and Traditional Models
by Hassan Ayaz, Kashif Sultan, Muhammad Sheraz and Teong Chee Chuah
Computers 2025, 14(7), 268; https://doi.org/10.3390/computers14070268 - 8 Jul 2025
Abstract
Call Detail Records (CDRs) from mobile networks offer valuable insights into both network performance and user behavior. With the growing importance of data analytics, analyzing CDRs has become critical for optimizing network resources by forecasting demand across spatial and temporal dimensions. In this study, we examine publicly available CDR data from Telecom Italia to explore the spatiotemporal dynamics of mobile network activity in Milan. This analysis reveals key patterns in traffic distribution, highlighting both high- and low-demand regions as well as notable variations in usage over time. To anticipate future network usage, we employ both Automated Machine Learning (AutoML) and the transformer-based TimeGPT model, comparing their performance against traditional forecasting methods such as Long Short-Term Memory (LSTM), ARIMA and SARIMA. Model accuracy is assessed using standard evaluation metrics, including root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R2). Results show that AutoML delivers the most accurate forecasts, with significantly lower RMSE (2.4990 vs. 14.8226) and MAE (1.0284 vs. 7.7789) compared to TimeGPT and a higher R2 score (99.96% vs. 98.62%). Our findings underscore the strengths of modern predictive models in capturing complex traffic behaviors and demonstrate their value in resource planning, congestion management and overall network optimization. Importantly, this study is one of the first to comprehensively assess AutoML and TimeGPT on a high-resolution, real-world CDR dataset from a major European city. By merging machine learning techniques with advanced temporal modeling, this study provides a strong framework for scalable and intelligent mobile traffic prediction. It thus highlights the functionality of AutoML in simplifying model development and the potential of TimeGPT to extend transformer-based prediction to the telecommunications domain. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
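The three evaluation metrics this abstract relies on (RMSE, MAE, R2) are standard and easy to state exactly. A minimal self-contained sketch, with invented toy traffic values rather than the study's CDR data:

```python
import math

def rmse(y, y_hat):
    """Root mean square error: penalizes large forecast misses quadratically."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def mae(y, y_hat):
    """Mean absolute error: average miss in the traffic's own units."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

def r2(y, y_hat):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

# Toy hourly traffic volumes and two hypothetical forecasts:
actual = [10.0, 12.0, 9.0, 14.0, 11.0]
good = [10.5, 11.5, 9.5, 13.5, 11.0]
bad = [15.0, 5.0, 14.0, 8.0, 17.0]
```

Comparing models on all three together, as the study does, guards against a forecaster that looks good on MAE while hiding occasional large errors that RMSE would expose.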

20 pages, 1179 KiB  
Article
Conv1D-GRU-Self Attention: An Efficient Deep Learning Framework for Detecting Intrusions in Wireless Sensor Networks
by Kenan Honore Robacky Mbongo, Kanwal Ahmed, Orken Mamyrbayev, Guanghui Wang, Fang Zuo, Ainur Akhmediyarova, Nurzhan Mukazhanov and Assem Ayapbergenova
Future Internet 2025, 17(7), 301; https://doi.org/10.3390/fi17070301 - 4 Jul 2025
Abstract
Wireless Sensor Networks (WSNs) consist of distributed sensor nodes that collect and transmit environmental data, often in resource-constrained and unsecured environments. These characteristics make WSNs highly vulnerable to various security threats. To address this, the objective of this research is to design and evaluate a deep learning-based Intrusion Detection System (IDS) that is both accurate and efficient for real-time threat detection in WSNs. This study proposes a hybrid IDS model combining one-dimensional Convolutional Neural Networks (Conv1Ds), Gated Recurrent Units (GRUs), and Self-Attention mechanisms. A Conv1D extracts spatial features from network traffic, GRU captures temporal dependencies, and Self-Attention emphasizes critical sequence components, collectively enhancing detection of subtle and complex intrusion patterns. The model was evaluated using the WSN-DS dataset and demonstrated superior performance compared to traditional machine learning and simpler deep learning models. It achieved an accuracy of 98.6%, precision of 98.63%, recall of 98.6%, F1-score of 98.6%, and an ROC-AUC of 0.9994, indicating strong predictive capability even with imbalanced data. In addition to centralized training, the model was tested under cooperative, node-based learning conditions, where each node independently detects anomalies and contributes to a collective decision-making framework. This distributed approach improves detection efficiency and robustness. The proposed IDS offers a scalable and resilient solution tailored to the unique challenges of WSN security. Full article
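The Self-Attention stage of the Conv1D-GRU-Self Attention model, which "emphasizes critical sequence components," can be sketched as scaled dot-product attention over a short sequence of traffic feature vectors. This sketch uses identity Q/K/V projections for brevity and omits the Conv1D and GRU stages entirely; the input values are illustrative.

```python
import math

def self_attention(seq):
    """Scaled dot-product self-attention: each timestep's output is a
    softmax-weighted mix of all timesteps, weighted by dot-product
    similarity (identity projections, single head)."""
    dim = len(seq[0])
    out = []
    for query in seq:
        logits = [sum(q * k for q, k in zip(query, key)) / math.sqrt(dim)
                  for key in seq]
        peak = max(logits)
        weights = [math.exp(l - peak) for l in logits]
        norm = sum(weights)
        weights = [w / norm for w in weights]
        out.append([sum(w * value[j] for w, value in zip(weights, seq))
                    for j in range(dim)])
    return out

# Three timesteps of 4 per-flow features each (illustrative values):
attended = self_attention([[0.1, 0.2, 0.0, 0.5],
                           [0.9, 0.1, 0.3, 0.2],
                           [0.4, 0.4, 0.1, 0.6]])
```

Because each output row is a convex combination of the inputs, anomalous timesteps with distinctive feature vectors pull weight toward themselves, which is the mechanism the IDS uses to surface subtle intrusion patterns.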

17 pages, 2101 KiB  
Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments. Full article
(This article belongs to the Section Computational Engineering)

37 pages, 18679 KiB  
Article
Real-Time DDoS Detection in High-Speed Networks: A Deep Learning Approach with Multivariate Time Series
by Drixter V. Hernandez, Yu-Kuen Lai and Hargyo T. N. Ignatius
Electronics 2025, 14(13), 2673; https://doi.org/10.3390/electronics14132673 - 1 Jul 2025
Abstract
The exponential growth of Distributed Denial-of-Service (DDoS) attacks in high-speed networks presents significant real-time detection and mitigation challenges. Existing detection frameworks fall into flow-based and packet-based approaches. Flow-based approaches usually suffer from high latency and controller overhead under high-volume traffic. In contrast, packet-based approaches are prone to high false-positive rates and limited attack classification, resulting in delayed mitigation responses. To address these limitations, we propose a real-time DDoS detection architecture that combines hardware-accelerated statistical preprocessing with GPU-accelerated deep learning models. Raw packet-header information is transformed into multivariate time-series data to enable classification of complex traffic patterns using Temporal Convolutional Networks (TCN), Long Short-Term Memory (LSTM) networks, and Transformer architectures. We evaluated the proposed system in experiments conducted under low- to high-volume background traffic to validate each model’s robustness and adaptability in a real-time network environment. The experiments were conducted across different time-window lengths to determine the trade-offs between detection accuracy and latency. The results show that larger observation windows improve detection accuracy for the TCN and LSTM models, which consistently outperform the Transformer in high-volume scenarios. Regarding model latency, TCN and Transformer exhibit constant latency across all window sizes. We also used SHAP (Shapley Additive exPlanations) analysis to identify the most discriminative traffic features, enhancing model interpretability and supporting feature selection for computational efficiency. Among the evaluated models, TCN achieves the best balance between detection performance and latency, making it a suitable model for the proposed architecture. 
These findings validate the feasibility of the proposed architecture and support its potential as a real-time DDoS detection application in a realistic high-speed network. Full article
(This article belongs to the Special Issue Emerging Technologies for Network Security and Anomaly Detection)
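The abstract's transformation of raw packet-header statistics into multivariate time-series samples can be illustrated with a simple sliding-window routine. This is a sketch under stated assumptions: the window length, stride, and example features (per-interval packet count, SYN ratio, etc.) are placeholders, not the paper's calibrated settings.

```python
# Hedged sketch: turn (T, F) per-interval header statistics into
# overlapping (N, window, F) samples for a TCN/LSTM/Transformer.
# Feature names in the docstring are illustrative assumptions.
import numpy as np

def make_windows(series: np.ndarray, window: int, stride: int = 1) -> np.ndarray:
    """series: (T, F) array of per-interval features, e.g.
    [pkt_count, syn_ratio, unique_src_ips, mean_pkt_len] per time slot.
    Returns (N, window, F) overlapping windows."""
    T, F = series.shape
    idx = np.arange(0, T - window + 1, stride)
    return np.stack([series[i:i + window] for i in idx])

T, F, W = 100, 4, 20
stats = np.random.default_rng(1).random((T, F))
windows = make_windows(stats, window=W, stride=5)
print(windows.shape)  # (17, 20, 4)
```

The window length is the knob the paper's experiments vary: larger windows give the sequence models more temporal context (better accuracy) at the cost of a longer observation delay before a detection can be emitted.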

37 pages, 4400 KiB  
Article
Optimizing Weighted Fair Queuing with Deep Reinforcement Learning for Dynamic Bandwidth Allocation
by Mays A. Mawlood and Dhari Ali Mahmood
Telecom 2025, 6(3), 46; https://doi.org/10.3390/telecom6030046 - 1 Jul 2025
Viewed by 353
Abstract
The rapid growth of high-quality telecommunications demands enhanced queueing-system performance. Traditional bandwidth distribution often struggles to adapt to dynamic changes, network conditions, and erratic traffic patterns, and Internet traffic fluctuates over time, causing resource underutilization. To address these challenges, this paper proposes a new adaptive algorithm called Weighted Fair Queues continual Deep Reinforcement Learning (WFQ continual-DRL), which integrates the Soft Actor-Critic (SAC) deep reinforcement learning algorithm with the Elastic Weight Consolidation (EWC) approach. This technique is designed to overcome catastrophic forgetting in neural networks, thereby improving dynamic bandwidth allocation in network routers. The agent is trained to allocate bandwidth weights for multiple queues dynamically by interacting with the environment and observing queue lengths. The performance of the proposed adaptive algorithm was evaluated first for eight queues and then extended to twelve-queue systems. The model achieved higher cumulative rewards than previous studies, indicating improved overall performance. The Mean Squared Error (MSE) and Mean Absolute Error (MAE) decreased, suggesting effectively optimized bandwidth allocation, and a reduced Root Mean Square Error (RMSE) indicated improved prediction accuracy, while fairness, computed by Jain’s index, was enhanced. The proposed algorithm was validated on real-world network traffic data, ensuring a robust model under dynamic queuing requirements. Full article
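The bookkeeping around the agent's output can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: it supposes the agent emits one raw score per queue, normalizes those scores into WFQ weights via a softmax, and evaluates fairness with Jain's index, the metric the abstract cites.

```python
# Hedged sketch: map raw per-queue agent scores to WFQ weights and
# score the allocation with Jain's fairness index. The softmax
# normalization and the 8-queue action are illustrative assumptions.
import numpy as np

def to_wfq_weights(action: np.ndarray) -> np.ndarray:
    """Softmax-normalize raw agent outputs into positive weights summing to 1."""
    z = np.exp(action - action.max())
    return z / z.sum()

def jains_index(alloc: np.ndarray) -> float:
    """Jain's fairness index: (sum x)^2 / (n * sum x^2); 1.0 means all equal."""
    return float(alloc.sum() ** 2 / (len(alloc) * (alloc ** 2).sum()))

action = np.array([0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2])  # 8 queues
w = to_wfq_weights(action)
print(w.round(3), jains_index(w))
```

Equal scores yield equal weights and a Jain's index of exactly 1.0; any skew toward one queue drives the index below 1, which is why a rising index accompanies the fairer allocations the paper reports.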

22 pages, 7580 KiB  
Article
Fuzzy-Based Multi-Modal Query-Forwarding in Mini-Datacenters
by Sami J. Habib and Paulvanna Nayaki Marimuthu
Computers 2025, 14(7), 261; https://doi.org/10.3390/computers14070261 - 1 Jul 2025
Viewed by 301
Abstract
The rapid growth of Internet of Things (IoT) enabled devices in industrial environments and the associated increase in data generation are paving the way for the development of localized, distributed datacenters. In this paper, we have proposed a novel mini-datacenter in the form of wireless sensor networks to efficiently handle query-based data collection from Industrial IoT (IIoT) devices. The mini-datacenter comprises a command center, gateways, and IoT sensors, designed to manage stochastic query-response traffic flow. We have developed a duplication/aggregation query flow model, tailored to emphasize reliable transmission. We have developed a dataflow management framework that employs a multi-modal query forwarding approach to forward queries from the command center to gateways under varying environments. The query forwarding includes coarse-grain and fine-grain strategies, where the coarse-grain strategy uses a direct data flow using a single gateway at the expense of reliability, while the fine-grain approach uses redundant gateways to enhance reliability. A fuzzy-logic-based intelligence system is integrated into the framework to dynamically select the appropriate granularity of the forwarding strategy based on the resource availability and network conditions, aided by a buffer watching algorithm that tracks real-time buffer status. We carried out several experiments with gateway nodes varying from 10 to 100 to evaluate the framework’s scalability and robustness in handling the query flow under complex environments. The experimental results demonstrate that the framework provides a flexible and adaptive solution that balances buffer usage while maintaining over 95% reliability in most queries. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
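The fuzzy mode-selection idea in the abstract can be sketched with two membership functions voting between the coarse-grain (single gateway) and fine-grain (redundant gateways) forwarding strategies. The triangular memberships, thresholds, and min-rule below are generic fuzzy-logic conventions chosen for illustration, not the paper's calibrated rule base.

```python
# Hedged sketch of fuzzy mode selection: memberships over buffer
# occupancy and link reliability (both in [0, 1]) decide between
# coarse- and fine-grain query forwarding. All breakpoints are
# illustrative assumptions, not the paper's tuned values.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_mode(buffer_occupancy: float, link_reliability: float) -> str:
    buffer_high = tri(buffer_occupancy, 0.5, 1.0, 1.5)
    link_poor = tri(link_reliability, -0.5, 0.0, 0.6)
    # Rule: use fine-grain (redundant gateways) when links are poor
    # and gateway buffers still have headroom.
    fine_score = min(link_poor, 1.0 - buffer_high)
    return "fine" if fine_score > 1.0 - fine_score else "coarse"

print(select_mode(0.2, 0.1))   # poor link, light buffers -> redundancy pays off
print(select_mode(0.9, 0.95))  # good link, loaded buffers -> single gateway
```

The design choice mirrors the framework's trade-off: fine-grain forwarding buys reliability at the cost of buffer usage, so the fuzzy system should only select it when the buffer-watching signal shows capacity to spare.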
