Search Results (309)

Search Parameters:
Keywords = high-performance computing service network

28 pages, 1722 KB  
Article
A Lightweight Learning-Based Approach for Online Edge-to-Cloud Service Placement
by Mohammadsadeq Garshasbi Herabad, Javid Taheri, Bestoun S. Ahmed and Calin Curescu
Electronics 2026, 15(1), 65; https://doi.org/10.3390/electronics15010065 (registering DOI) - 23 Dec 2025
Abstract
The integration of edge and cloud computing is critical for resource-intensive applications which require low-latency communication, high reliability, and efficient resource utilisation. The service placement problem in these environments poses significant challenges owing to dynamic network conditions, heterogeneous resource availability, and the necessity for real-time decision-making. Because determining an optimal service placement in such networks is an NP-complete problem, the existing solutions rely on fast but suboptimal heuristics or computationally intensive metaheuristics. Neither approach meets the real-time demands of online scenarios, owing to its inefficiency or high computational overhead. In this study, we propose a lightweight learning-based approach for the online placement of services with multi-version components in edge-to-cloud computing. The proposed approach utilises a Shallow Neural Network (SNN) with both weight and power coefficients optimised using a Genetic Algorithm (GA). The use of an SNN ensures low computational overhead during the training phase and almost instant inference when deployed, making it well suited for real-time and online service placement in edge-to-cloud environments where rapid decision-making is crucial. The proposed method (SNN-GA) is specifically evaluated in AR/VR-based remote repair and maintenance scenarios, developed in collaboration with our industrial partner, and demonstrated robust performance and scalability across a wide range of problem sizes. The experimental results show that SNN-GA reduces the service response time by up to 27% compared to metaheuristics and 55% compared to heuristics at larger scales. It also achieves over 95% platform reliability, outperforming heuristics (which remain below 85%) and metaheuristics (which decrease to 90% at larger scales). Full article
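
As a rough illustration of the idea in this abstract (a shallow network whose parameters are tuned by a genetic algorithm for fast placement scoring), the sketch below evolves the weights of a one-hidden-layer network with a minimal GA. It is not the authors' SNN-GA implementation; the feature vector, fitness target, and all constants are hypothetical.

```python
# Illustrative sketch only: a shallow NN scoring candidate placements, with its
# weights evolved by a simple genetic algorithm. All sizes/targets are toy values.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN = 6, 8          # e.g., latency, load, bandwidth, ... (assumed)

def snn_forward(weights, x):
    """Shallow NN: one hidden layer, scalar placement score."""
    w1 = weights[:N_FEATURES * N_HIDDEN].reshape(N_FEATURES, N_HIDDEN)
    w2 = weights[N_FEATURES * N_HIDDEN:]
    return np.tanh(x @ w1) @ w2

def fitness(weights, candidates, target):
    """Negative error between predicted scores and a known toy cost."""
    preds = np.array([snn_forward(weights, c) for c in candidates])
    return -np.mean((preds - target) ** 2)

# Toy data: candidate placements and an arbitrary "true" cost to learn.
candidates = rng.normal(size=(32, N_FEATURES))
target = candidates @ rng.normal(size=N_FEATURES)

# Minimal GA: truncation selection, uniform crossover, Gaussian mutation, elitism.
dim = N_FEATURES * N_HIDDEN + N_HIDDEN
population = rng.normal(size=(40, dim))
for gen in range(50):
    scores = np.array([fitness(ind, candidates, target) for ind in population])
    elite = population[np.argsort(scores)[-10:]]            # keep the best 10
    parents = elite[rng.integers(0, 10, size=(30, 2))]      # random parent pairs
    mask = rng.random((30, dim)) < 0.5                       # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(scale=0.05, size=children.shape)  # mutation
    population = np.vstack([elite, children])

best = max(population, key=lambda ind: fitness(ind, candidates, target))
print("best fitness:", fitness(best, candidates, target))
```
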
21 pages, 793 KB  
Article
Beyond the Norm: Unsupervised Anomaly Detection in Telecommunications with Mahalanobis Distance
by Aline Mefleh, Michal Patryk Debicki, Ali Mubarak, Maroun Saade and Nathanael Weill
Computers 2025, 14(12), 561; https://doi.org/10.3390/computers14120561 - 17 Dec 2025
Viewed by 182
Abstract
Anomaly Detection (AD) in telecommunication networks is critical for maintaining service reliability and performance. However, operational networks present significant challenges: high-dimensional Key Performance Indicator (KPI) data collected from thousands of network elements must be processed in near real time to enable timely responses. This paper presents an unsupervised approach leveraging Mahalanobis Distance (MD) to identify network anomalies. The MD model offers a scalable solution that capitalizes on multivariate relationships among KPIs without requiring labeled data. Our methodology incorporates preprocessing steps to adjust KPI ratios, normalize feature distributions, and account for contextual factors like sample size. Aggregated anomaly scores are calculated across hierarchical network levels—cells, sectors, and sites—to localize issues effectively. Through experimental evaluations, the MD approach demonstrates consistent performance across datasets of varying sizes, achieving competitive Area Under the Receiver Operating Characteristic Curve (AUC) values while significantly reducing computational overhead compared to baseline AD methods: Isolation Forest (IF), Local Outlier Factor (LOF) and One-Class Support Vector Machines (SVM). Case studies illustrate the model’s practical application, pinpointing the Random Access Channel (RACH) success rate as a key anomaly contributor. The analysis highlights the importance of dimensionality reduction and tailored KPI adjustments in enhancing detection accuracy. This unsupervised framework empowers telecom operators to proactively identify and address network issues, optimizing their troubleshooting workflows. By focusing on interpretable metrics and efficient computation, the proposed approach bridges the gap between AD and actionable insights, offering a practical tool for improving network reliability and user experience. Full article
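
For readers unfamiliar with Mahalanobis-distance scoring, the minimal sketch below shows the core computation on synthetic KPI vectors. It is not the paper's pipeline; the number of KPIs and the 99th-percentile cutoff are illustrative assumptions.

```python
# Minimal sketch: score KPI vectors by Mahalanobis Distance from a "normal"
# training window; KPI count and the percentile cutoff are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
normal_kpis = rng.normal(size=(5000, 8))            # e.g., 8 KPIs per cell (toy)
mu = normal_kpis.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_kpis, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

threshold = np.percentile(mahalanobis(normal_kpis), 99)      # unsupervised cutoff

new_samples = np.vstack([rng.normal(size=(10, 8)),
                         rng.normal(loc=4.0, size=(2, 8))])  # 2 injected anomalies
scores = mahalanobis(new_samples)
print("anomalous indices:", np.where(scores > threshold)[0])
```
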

21 pages, 1301 KB  
Article
Attention-Guided Multi-Task Learning for Fault Detection, Classification, and Localization in Power Transmission Systems
by Md Samsul Alam, Md Raisul Islam, Rui Fan, Md Shafayat Alam Shazid and Abu Shouaib Hasan
Energies 2025, 18(24), 6547; https://doi.org/10.3390/en18246547 - 15 Dec 2025
Viewed by 259
Abstract
Timely and accurate fault diagnosis in power transmission systems is critical to ensuring grid stability, operational safety, and minimal service disruption. This study presents a unified deep learning framework that simultaneously performs fault identification, fault type classification, and fault location estimation using a multi-task learning (MTL) approach. Using the IEEE 39–Bus network, a comprehensive data set was generated under various load conditions, fault types, resistances, and location scenarios to reflect real-world variability. The proposed model integrates a shared representation layer and task-specific output heads, enhanced with an attention mechanism to dynamically prioritize salient input features. To further optimize the model architecture, Optuna was employed for hyperparameter tuning, enabling systematic exploration of design parameters such as neuron counts, dropout rates, activation functions, and learning rates. Experimental results demonstrate that the proposed Optimized Multi-Task Learning Attention Network (MTL-AttentionNet) achieves high accuracy across all three tasks, outperforming traditional models such as Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP), which require separate training for each task. The attention mechanism contributes to both interpretability and robustness, while the MTL design reduces computational redundancy. Overall, the proposed framework provides a unified and efficient solution for real-time fault diagnosis on the IEEE 39–bus transmission system, with promising implications for intelligent substation automation and smart grid resilience. Full article
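
The structural idea of a shared encoder with an attention gate and task-specific heads can be sketched as below. This is a hedged illustration, not the paper's MTL-AttentionNet; the layer sizes, the attention form, and the loss weighting are assumptions.

```python
# Sketch of a multi-task network: shared encoder, feature-attention gate, and
# three heads (fault detection, fault type, fault location). Sizes are toy values.
import torch
import torch.nn as nn

class MultiTaskFaultNet(nn.Module):
    def __init__(self, n_features=24, n_fault_types=10):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(n_features, n_features), nn.Sigmoid())
        self.shared = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                    nn.Linear(128, 64), nn.ReLU())
        self.detect_head = nn.Linear(64, 1)              # fault / no fault
        self.type_head = nn.Linear(64, n_fault_types)    # fault class
        self.loc_head = nn.Linear(64, 1)                 # location (regression)

    def forward(self, x):
        h = self.shared(x * self.attn(x))                # attention-weighted input
        return self.detect_head(h), self.type_head(h), self.loc_head(h)

model = MultiTaskFaultNet()
x = torch.randn(32, 24)                                  # toy measurement batch
det, cls, loc = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(det, torch.ones_like(det))
        + nn.functional.cross_entropy(cls, torch.randint(0, 10, (32,)))
        + nn.functional.mse_loss(loc, torch.zeros_like(loc)))
loss.backward()                                          # joint multi-task update
```
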

23 pages, 2226 KB  
Article
Dynamic Predictive Feedback Mechanism for Intelligent Bandwidth Control in Future SDN Networks
by Kritsanapong Somsuk, Suchart Khummanee and Panida Songram
Network 2025, 5(4), 54; https://doi.org/10.3390/network5040054 - 12 Dec 2025
Viewed by 213
Abstract
Future programmable networks such as 5G/6G and large-scale IoT deployments demand dynamic and intelligent bandwidth control mechanisms to ensure stable Quality of Service (QoS) under highly variable traffic conditions. Conventional queue-based schedulers and emerging machine learning techniques still struggle with slow reaction to congestion, unstable fairness, and high computational costs. To address these challenges, this paper proposes a Dynamic Predictive Feedback (DPF) mechanism that integrates clustered-LSTM based short-term traffic prediction with meta-control driven adaptive bandwidth adjustment in a Software-Defined Networking (SDN) architecture. The prediction module proactively estimates future queue depth and arrival rates using in-band network telemetry (INT), while the feedback controller continuously adjusts scheduling weights based on congestion risk and fairness metrics. Extensive emulation experiments conducted under Static, Bursty IoT, Mixed, and Stress workloads show that DPF consistently outperforms state-of-the-art solutions, including A-WFQ and DRL-based schedulers, achieving up to 32% higher throughput, up to 40% lower latency, and 10–12% lower CPU and memory usage. Moreover, DPF demonstrates strong fairness (Jain’s Index ≥ 0.96), high adaptability, and minimal performance variance across scenarios. These results confirm that DPF is a scalable and resource-efficient solution capable of supporting the demands of future programmable, 5G/6G-ready network infrastructures. Full article
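
The feedback loop described above (predict queue state, then rebalance scheduling weights) can be illustrated with a toy controller. The predictor here is exponential smoothing standing in for the paper's clustered LSTM; the telemetry values, gains, and weight-update rule are all assumptions.

```python
# Illustrative controller loop only: a stand-in predictor estimates each queue's
# next depth and scheduling weights are nudged toward higher-risk queues.
import numpy as np

ALPHA, GAIN = 0.6, 0.2
weights = np.array([0.34, 0.33, 0.33])          # per-queue scheduling weights
smoothed = np.zeros(3)                          # predicted queue depths

def step(observed_depths):
    """One feedback cycle: predict, compute congestion risk, rebalance weights."""
    global weights, smoothed
    smoothed = ALPHA * observed_depths + (1 - ALPHA) * smoothed   # predictor
    risk = smoothed / (smoothed.sum() + 1e-9)                     # congestion share
    weights = weights + GAIN * (risk - weights)                   # move toward risk
    weights = np.clip(weights, 0.05, None)
    weights /= weights.sum()                                      # keep a valid split
    return weights

rng = np.random.default_rng(2)
for t in range(20):
    depths = rng.poisson(lam=[5, 20, 10])        # toy per-queue telemetry
    step(depths.astype(float))
print("final weights:", np.round(weights, 3))
```
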

37 pages, 112317 KB  
Article
Neural Network–Based Adaptive Resource Allocation for 5G Heterogeneous Ultra-Dense Networks
by Alanoud Salah Alhazmi and Mohammed Amer Arafah
Sensors 2025, 25(24), 7521; https://doi.org/10.3390/s25247521 - 11 Dec 2025
Viewed by 258
Abstract
Increasing spectral bandwidth in 5G networks improves capacity but cannot fully address the heterogeneous and rapidly growing traffic demands. Heterogeneous ultra-dense networks (HUDNs) play a key role in offloading traffic across multi-tier deployments; however, their diverse base-station characteristics and diverse quality-of-service (QoS) requirements make resource allocation highly challenging. Traditional static resource-allocation approaches lack flexibility and often lead to inefficient spectrum utilization in such complex environments. This study aims to develop a joint user association–resource allocation (UA–RA) framework for 5G HUDNs that dynamically adapts to real-time network conditions to improve spectral efficiency and service ratio under high traffic loads. A software-defined networking controller centrally manages the UA–RA process by coordinating inter-cell resource redistribution through the lending of underutilized resource blocks between macro and small cells, mitigating repeated congestion. To further enhance adaptability, a neural network–adaptive resource allocation (NN–ARA) model is trained on UA–RA-driven simulation data to approximate efficient allocation decisions with low computational cost. A real-world evaluation is conducted using the downtown Los Angeles deployment. For performance validation, the proposed NN–ARA approach is compared with two representative baselines from the literature (Bouras et al. and Al-Ali et al.). Results show that NN–ARA achieves up to 20.8% and 11% higher downlink data rates in the macro and small tiers, respectively, and improves spectral efficiency by approximately 20.7% and 11.1%. It additionally reduces the average blocking ratio by up to 55%. These findings demonstrate that NN–ARA provides an adaptive, scalable, and SDN-coordinated solution for efficient spectrum utilization and service continuity in 5G and future 6G HUDNs. Full article

20 pages, 2676 KB  
Article
Memory-Efficient Iterative Signal Detection for 6G Massive MIMO via Hybrid Quasi-Newton and Deep Q-Networks
by Adeb Salh, Mohammed A. Alhartomi, Ghasan Ali Hussain, Fares S. Almehmadi, Saeed Alzahrani, Ruwaybih Alsulami and Abdulrahman Amer
Electronics 2025, 14(24), 4832; https://doi.org/10.3390/electronics14244832 - 8 Dec 2025
Viewed by 230
Abstract
The advent of Sixth Generation (6G) wireless communication systems demands unprecedented data rates, ultra-low latency, and massive connectivity to support emerging applications such as extended reality, digital twins, and ubiquitous intelligent services. These stringent requirements call for the use of massive Multiple-Input Multiple-Output (m-MIMO) systems with hundreds or even thousands of antennas, which introduce substantial challenges for signal detection algorithms. Conventional linear detectors, especially linear Minimum Mean Square Error (MMSE) detectors, face prohibitive computational complexity due to high-dimensional matrix inversions, and their performance remains inherently restricted by the limitations of linear processing. This study proposes an Iterative Signal Detection (ISD) algorithm that addresses these limitations by combining a Deep Q-Network (DQN) with Quasi-Newton methods. The method incorporates a Broyden-type Quasi-Newton update (Broyden-Net) alongside the DQN to improve m-MIMO detection, training faster and with a smaller memory footprint than comparable models on spatially correlated channels. The proposed techniques support the computational efficiency required by realistic 6G systems and outperform linear detectors. Simulation results show that the DQN-enhanced Quasi-Newton algorithm outperforms traditional algorithms, as it combines reward design, limited-memory updates, and adaptive interference mitigation to shorten convergence time by 60% and improve robustness to correlated fading. Full article
(This article belongs to the Special Issue Advances in MIMO Communication)
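
To show why iterative solvers avoid the matrix inversion the abstract mentions, the sketch below solves a real-valued MMSE detection objective with a standard limited-memory quasi-Newton solver (SciPy's L-BFGS-B). It is a generic stand-in, not the paper's DQN-assisted Broyden scheme; dimensions, modulation, and noise level are arbitrary.

```python
# Sketch: regularized least-squares (MMSE-form) detection via L-BFGS-B instead of
# an explicit Gram-matrix inversion. Real-valued toy model, BPSK symbols.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_tx, n_rx, sigma2 = 32, 64, 0.1
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_rx)    # toy real-valued channel
x_true = rng.choice([-1.0, 1.0], size=n_tx)          # BPSK symbols
y = H @ x_true + np.sqrt(sigma2) * rng.normal(size=n_rx)

def objective(x):
    r = y - H @ x
    return r @ r + sigma2 * (x @ x)                   # ||y - Hx||^2 + sigma^2 ||x||^2

def gradient(x):
    return -2 * H.T @ (y - H @ x) + 2 * sigma2 * x

res = minimize(objective, np.zeros(n_tx), jac=gradient, method="L-BFGS-B")
x_hat = np.sign(res.x)                                # hard BPSK decision
print("symbol errors:", int(np.sum(x_hat != x_true)))
```
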

26 pages, 4592 KB  
Article
Joint Optimization of Serial Task Offloading and UAV Position for Mobile Edge Computing Based on Multi-Agent Deep Reinforcement Learning
by Mengyuan Tao and Qi Zhu
Appl. Sci. 2025, 15(23), 12419; https://doi.org/10.3390/app152312419 - 23 Nov 2025
Viewed by 368
Abstract
Driven by the proliferation of the Internet of Things (IoT), Mobile Edge Computing (MEC) is a key technology for meeting the low-latency and high-computational demands of future wireless networks. However, ground-based MEC servers suffer from limited coverage and inflexible deployment. Unmanned Aerial Vehicles (UAVs), with their high mobility, can serve as aerial edge servers to extend this coverage. This paper addresses the multi-user serial task offloading problem in cache-assisted UAV-MEC systems by proposing a joint optimization algorithm for service caching, UAV positioning, task offloading, and serial processing order. Under the constraints of physical resources such as UAV cache capacity, heterogeneous computing capabilities, and wireless channel bandwidth, an optimization problem is formulated to minimize the weighted sum of task completion time and user cost. The method first performs service caching based on task popularity and then utilizes the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm to optimize the UAV’s position, task offloading decisions, and serial processing order. The MADDPG algorithm consists of two collaborative agents: a UAV position agent responsible for selecting the optimal UAV position, and a task scheduling agent that determines the serial processing order and offloading decisions for all tasks. Simulation results demonstrate that the proposed algorithm can converge quickly to a stable solution, significantly reducing both task completion time and user cost. Full article

22 pages, 3577 KB  
Article
Pervasive Auto-Scaling Method for Improving the Quality of Resource Allocation in Cloud Platforms
by Vimal Raja Rajasekar and G. Santhi
Big Data Cogn. Comput. 2025, 9(11), 294; https://doi.org/10.3390/bdcc9110294 - 18 Nov 2025
Viewed by 507
Abstract
Cloud resource provider deployment at random locations increases operational costs regardless of the application demand intervals. To provide adaptable load balancing under varying application traffic intervals, the auto-scaling concept has been introduced. This article introduces a Pervasive Auto-Scaling Method (PASM) for Computing Resource Allocation (CRA) to improve the application quality of service. In this auto-scaling method, deep reinforcement learning is employed to verify shared instances of up-scaling and down-scaling pervasively. The overflowing application demands are computed for their service failures and are used to train the learning network. In this process, the scaling is decided based on the maximum computing resource allocation to the demand ratio. Therefore, the learning network is also trained using scaling rates from the previous (completed) allocation intervals. This process is thus recurrent until maximum resource allocation with high sharing is achieved. The resource provider migrates to reduce the wait time based on the high-to-low demand ratio between successive computing intervals. This enhances the resource allocation rate without high wait times. The proposed method’s performance is validated using the metrics resource allocation rate, service delay, allocated wait time, allocation failures, and resource utilization. Full article
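
The core decision the abstract describes — scaling based on the ratio of allocated resources to demand — can be reduced to a toy rule with hysteresis, sketched below. It is not the paper's PASM/deep-reinforcement-learning policy; the thresholds and unit capacity are illustrative assumptions.

```python
# Toy auto-scaling decision rule: compare allocated capacity to current demand
# and scale out/in within a hysteresis band. Thresholds are arbitrary choices.
def scaling_decision(allocated_units, demand_units,
                     upper=0.8, lower=0.3, unit_capacity=100.0):
    """Return +1 (scale out), -1 (scale in), or 0 (hold)."""
    capacity = allocated_units * unit_capacity
    utilization = demand_units / capacity if capacity > 0 else float("inf")
    if utilization > upper:
        return +1          # demand near or above capacity: add an instance
    if utilization < lower and allocated_units > 1:
        return -1          # sustained under-utilization: release an instance
    return 0

# Usage over a few demand intervals
allocated = 2
for demand in [120.0, 190.0, 260.0, 90.0, 40.0]:
    allocated += scaling_decision(allocated, demand)
    print(f"demand={demand:6.1f}  instances={allocated}")
```
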

25 pages, 2799 KB  
Article
Blockchain-Enabled Identity Based Authentication Scheme for Cellular Connected Drones
by Yu Su, Zeyuan Li, Yufei Zhang, Xun Gui, Xue Deng and Jun Fu
Sensors 2025, 25(22), 6935; https://doi.org/10.3390/s25226935 - 13 Nov 2025
Viewed by 478
Abstract
The proliferation of drones across precision agriculture, disaster response operations, and delivery services has accentuated the critical need for secure communication frameworks. Due to the limited computational capabilities of drones and the fragility of real-time wireless communication networks, cellular-connected drones confront mounting cybersecurity threats. Traditional authentication mechanisms, such as public-key-infrastructure-based and identity-based authentication, are centralized and have high computational costs, which may result in a single point of failure. To address these issues, this paper proposes a blockchain-enabled authentication and key agreement scheme for cellular-connected drones. Leveraging identity-based cryptography (IBC) and the Message Queuing Telemetry Transport (MQTT) protocol, the scheme's flow is optimized to reduce the number of communication rounds during authentication. By integrating MQTT brokers with the blockchain, it enables drones to authenticate through any network node, thereby enhancing system scalability and availability. Additionally, cryptographic performance is optimized via precompiled smart contracts, enabling efficient execution of complex operations. Comprehensive experimental evaluations validate the performance, scalability, robustness, and resource efficiency of the proposed scheme, and show that the system delivers near-linear scalability and accelerated on-chain verification. Full article
(This article belongs to the Special Issue Blockchain-Based Solutions to Secure IoT)

21 pages, 598 KB  
Article
Mask Inflation Encoder and Quasi-Dynamic Thresholding Outlier Detection in Cellular Networks
by Roland N. Mfondoum, Nikol Gotseva, Atanas Vlahov, Antoni Ivanov, Pavlina Koleva, Vladimir Poulkov and Agata Manolova
Telecom 2025, 6(4), 84; https://doi.org/10.3390/telecom6040084 - 4 Nov 2025
Viewed by 476
Abstract
Mobile networks have advanced significantly, providing high-throughput voice, video, and integrated data access to support connectivity through various services to facilitate high user density. This traffic growth has also increased the complexity of outlier detection (OD) for fraudster identification, fault detection, and protecting network infrastructure and its users against cybersecurity threats. Autoencoder (AE) models are widely used for outlier detection (OD) on unlabeled and temporal data; however, they rely on fixed anomaly thresholds and anomaly-free training data, which are both difficult to obtain in practice. This paper introduces statistical masking in the encoder to enhance learning from nearly normal data by flagging potential outliers. It also proposes a quasidynamic threshold mechanism that adapts to reconstruction errors, improving detection by up to 3% median area under the receiver operating characteristic (AUROC) compared to the standard 95% threshold used in base AE models. Extensive experiments on the Milan Human Telecommunications Interaction (HTA) dataset validate the performance of the proposed methods. Combined, these two techniques yield a 31% improvement in AUROC and a 34% lower computational complexity when compared to baseline AE, long short-term memory AE (LSTM-AE), and seasonal auto-regressive integrated moving average (SARIMA), enabling efficient OD in modern cellular networks. Full article
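
A fixed 95% threshold versus an adaptive one is the key contrast in this abstract; the sketch below adapts an autoencoder-style anomaly threshold from a rolling quantile of recent reconstruction errors. It is an illustration only — the paper's masking and quasi-dynamic threshold rules differ in detail — and the window size, quantile, and error distributions are assumptions.

```python
# Illustration: adapt the anomaly threshold from recent reconstruction errors
# (rolling quantile) instead of a fixed percentile set once on training data.
from collections import deque
import numpy as np

class QuasiDynamicThreshold:
    def __init__(self, window=500, quantile=0.99):
        self.errors = deque(maxlen=window)
        self.quantile = quantile

    def update(self, recon_error):
        """Record an error, then flag it against the current adaptive threshold."""
        self.errors.append(recon_error)
        threshold = np.quantile(self.errors, self.quantile)
        return recon_error > threshold, threshold

rng = np.random.default_rng(4)
detector = QuasiDynamicThreshold()
stream = np.concatenate([rng.gamma(2.0, 1.0, 1000),      # "normal" traffic errors
                         rng.gamma(2.0, 1.0, 50) + 15])  # injected anomalous burst
flags = [detector.update(e)[0] for e in stream]
print("flagged in burst:", sum(flags[1000:]), "of 50")
```
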

28 pages, 5513 KB  
Article
An Agent-Based System for Location Privacy Protection in Location-Based Services
by Omar F. Aloufi, Ahmed S. Alfakeeh and Fahad M. Alotaibi
ISPRS Int. J. Geo-Inf. 2025, 14(11), 433; https://doi.org/10.3390/ijgi14110433 - 3 Nov 2025
Viewed by 521
Abstract
Location-based services (LBSs) are a crucial element of the Internet of Things (IoT) and have garnered significant attention from both researchers and users, driven by the rise of wireless devices and a growing user base. However, the use of LBS-enabled applications carries several risks, as users must provide their real locations with each query. This can expose them to potential attacks from the LBS server, leading to serious issues such as the theft of personal information. Consequently, protecting location privacy is a vital concern. To address this, dummy-location-based methods are employed to safeguard the location privacy of LBS users. However, these approaches suffer from low resistance against inference attacks, and generating strong dummy locations is still considered an open problem. Moreover, generating many dummy locations to achieve a high level of privacy protection leads to high network overhead and requires considerable computational capability on LBS users' mobile devices, whose resources are limited. In this paper, we introduce the Caching-Aware Double-Dummy Selection (CaDDSL) algorithm to protect the location privacy of LBS users against homogeneity and semantic location inference attacks, which may be mounted by the LBS server acting as a malicious party. We then enhance the CaDDSL algorithm by encapsulating it with agents, proposing the Cache-Aware Overhead-Aware Dummy Selection (CaOaDSL) algorithm to resolve the tradeoff between generating many dummies and incurring large network overhead. Compared to three well-known approaches, namely GridDummy, CirDummy, and Dest-Ex, our approach showed better performance in terms of communication cost, cache hit ratio, resistance against inference attacks, and network overhead. Full article
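
As a toy illustration of the dummy-location idea (not the CaDDSL/CaOaDSL algorithms), the sketch below hides the real position among k ring-placed dummies before a query is submitted. The radius, k, and coordinates are arbitrary.

```python
# Toy dummy-location generation: real position shuffled among k ring dummies.
import math
import random

def make_query_locations(real_lat, real_lon, k=4, radius_deg=0.01, seed=None):
    """Return the real location shuffled among k ring-placed dummies."""
    rng = random.Random(seed)
    points = [(real_lat, real_lon)]
    for i in range(k):
        angle = 2 * math.pi * i / k + rng.uniform(0, math.pi / k)  # jittered angle
        points.append((real_lat + radius_deg * math.sin(angle),
                       real_lon + radius_deg * math.cos(angle)))
    rng.shuffle(points)                    # hide which entry is real
    return points

for lat, lon in make_query_locations(40.4168, -3.7038, seed=7):
    print(f"{lat:.5f}, {lon:.5f}")
```
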

21 pages, 2864 KB  
Article
Design and Performance Analysis of Sub-THz/THz Mini-Cluster Architectures for Dense Urban 5G/6G Networks
by Valdemar Farré, José Vega-Sánchez, Victor Garzón, Nathaly Orozco Garzón, Henry Carvajal Mora and Edgar Eduardo Benitez Olivo
Sensors 2025, 25(21), 6717; https://doi.org/10.3390/s25216717 - 3 Nov 2025
Viewed by 739
Abstract
The transition from Fifth Generation (5G) New Radio (NR) systems to Beyond 5G (B5G) and Sixth Generation (6G) networks requires innovative architectures capable of supporting ultra-high data rates, sub-millisecond latency, and massive connection densities in dense urban environments. This paper proposes a comprehensive design methodology for a mini-cluster architecture operating in sub-THz (0.1–0.3 THz) and THz (0.3–3 THz) frequency bands. The proposed framework aims to enhance existing 5G infrastructure while enabling B5G/6G capabilities, with a particular focus on hotspot coverage and mission-critical applications in dense urban environments. The architecture integrates mini Base Stations (mBS), Distributed Edge Computing Units (DECUs), and Intelligent Reflecting Surfaces (IRS) for coverage enhancement and blockage mitigation. Detailed link budget analysis, coverage and capacity planning, and propagation modeling tailored to complex urban morphologies are performed for representative case study cities, Quito and Guayaquil (Ecuador). Simulation results demonstrate up to 100 Gbps peak data rates, sub 100 μs latency, and tenfold energy efficiency gains over conventional 5G deployments. Additionally, the proposed framework highlights the growing importance of THz communications in the 5G evolution towards B5G and 6G systems, where ultra-dense, low-latency, and energy-efficient mini-cluster deployments play a key role in enabling next-generation connectivity for critical and immersive services. Beyond the studied cities, the proposed framework can be generalized to other metropolitan areas facing similar propagation and capacity challenges, providing a scalable pathway for early-stage sub-THz/THz deployments in B5G/6G networks. Full article
(This article belongs to the Section Communications)
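
A back-of-the-envelope link budget conveys the scale of the losses involved at sub-THz frequencies. The sketch below uses the standard Friis free-space path loss, not the paper's urban propagation model; the antenna gains, absorption allowance, bandwidth, and noise figure are placeholder assumptions.

```python
# Rough sub-THz link budget using free-space path loss; all parameters are toy values.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c)."""
    c = 3e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f = 140e9                  # 140 GHz carrier (sub-THz)
d = 100.0                  # 100 m mini-cluster link
ptx_dbm, g_tx, g_rx = 20.0, 30.0, 30.0          # high-gain beamforming assumed
absorption_db = 1.0        # rough molecular-absorption allowance at this range
prx_dbm = ptx_dbm + g_tx + g_rx - fspl_db(d, f) - absorption_db

bandwidth_hz, noise_figure_db = 2e9, 8.0
noise_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db
print(f"FSPL = {fspl_db(d, f):.1f} dB, Prx = {prx_dbm:.1f} dBm, "
      f"SNR = {prx_dbm - noise_dbm:.1f} dB")
```
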

33 pages, 5642 KB  
Article
Feature-Optimized Machine Learning Approaches for Enhanced DDoS Attack Detection and Mitigation
by Ahmed Jamal Ibrahim, Sándor R. Répás and Nurullah Bektaş
Computers 2025, 14(11), 472; https://doi.org/10.3390/computers14110472 - 1 Nov 2025
Viewed by 1064
Abstract
Distributed denial of service (DDoS) attacks pose a serious risk to the operational stability of corporate networks, often leading to service disruptions, financial damage, and a loss of trust and credibility. The increasing sophistication and scale of these threats highlight the pressing need for advanced mitigation strategies. Despite the numerous existing studies on DDoS detection, many rely on large, redundant feature sets and lack validation for real-time applicability, leading to high computational complexity and limited generalization across diverse network conditions. This study addresses this gap by proposing a feature-optimized and computationally efficient ML framework for DDoS detection and mitigation using a benchmark dataset. The proposed approach serves as a foundational step toward developing a low-complexity model suitable for future real-time and hardware-based implementation. The dataset was systematically preprocessed to identify critical parameters, such as Packet Length Min, Total Backward Packets, and Avg Fwd Segment Size. Several ML algorithms, including Logistic Regression, Decision Tree, Random Forest, Gradient Boosting, and CatBoost, are applied to develop models for detecting and mitigating abnormal network traffic. The developed ML model demonstrates high performance, achieving 99.78% accuracy with Decision Tree and 99.85% with Random Forest, representing improvements of 1.53% and 0.74% compared to previous work, respectively. In addition, the Decision Tree algorithm achieved 99.85% accuracy for mitigation, with an inference time as low as 0.004 s, proving its suitability for identifying DDoS attacks in real time. Overall, this research presents an effective approach for DDoS detection, emphasizing the integration of ML models into existing security systems to enhance real-time threat mitigation. Full article
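
The general workflow described (flow features into tree-based classifiers) is easy to sketch with scikit-learn. The data here is synthetic, and the split and hyperparameters are placeholders, not the study's benchmark setup.

```python
# Hedged sketch of the flow-feature -> tree-classifier workflow on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for flow features such as Packet Length Min,
# Total Backward Packets, Avg Fwd Segment Size, ...
X, y = make_classification(n_samples=20000, n_features=12, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("DecisionTree", DecisionTreeClassifier(random_state=0)),
                  ("RandomForest", RandomForestClassifier(n_estimators=100,
                                                          random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 4))
```
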

26 pages, 4327 KB  
Article
DDoS Detection Using a Hybrid CNN–RNN Model Enhanced with Multi-Head Attention for Cloud Infrastructure
by Posathip Sathaporn, Woranidtha Krungseanmuang, Vasutorn Chaowalittawin, Chawalit Benjangkaprasert and Boonchana Purahong
Appl. Sci. 2025, 15(21), 11567; https://doi.org/10.3390/app152111567 - 29 Oct 2025
Viewed by 1214
Abstract
Cloud infrastructure supports modern services across sectors such as business, education, lifestyle, and government. With the high demand for cloud computing, the security of network communication is also an important consideration. Distributed denial-of-service (DDoS) attacks pose a significant threat. Therefore, detection and mitigation are critically important for reliable operation of cloud-based systems. Intrusion detection systems (IDS) play a vital role in detecting and preventing attacks to avoid damage to reliability. This article presents DDoS detection for cloud infrastructure protection using a hybrid convolutional neural network (CNN) and recurrent neural network (RNN) model enhanced with a multi-head attention mechanism, which improves the contextual relevance and accuracy of the detection. Preprocessing techniques were applied to optimize model performance, such as information gain to identify important features, normalization, and the synthetic minority oversampling technique (SMOTE) to address class imbalance. The results were evaluated using confusion-matrix-based metrics. Based on the performance indicators, our proposed method achieves an accuracy of 97.78%, precision of 98.66%, recall of 94.53%, and F1-score of 96.49%. The hybrid model with multi-head attention achieved the best results among the evaluated deep learning models. The model is moderately lightweight at 413,057 parameters, with an inference time of less than 6 milliseconds in a cloud environment, making it suitable for deployment on cloud infrastructure. Full article
(This article belongs to the Special Issue AI Technology and Security in Cloud/Big Data)
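
The hybrid architecture described above can be sketched structurally as follows. Layer sizes, ordering, and pooling are assumptions; this mirrors the CNN-RNN-with-attention idea rather than reproducing the paper's model.

```python
# Structural sketch: 1D CNN over flow-feature sequences, a GRU, and multi-head
# self-attention before a binary (benign vs. DDoS) classifier head.
import torch
import torch.nn as nn

class CnnRnnAttention(nn.Module):
    def __init__(self, n_features=32, seq_len=10, hidden=64, n_heads=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_features, hidden, 3, padding=1),
                                  nn.ReLU())
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # benign vs. DDoS

    def forward(self, x):                           # x: (batch, seq_len, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # conv over the time axis
        h, _ = self.rnn(h)
        h, _ = self.attn(h, h, h)                   # self-attention over time steps
        return self.head(h.mean(dim=1))             # pooled logits

model = CnnRnnAttention()
logits = model(torch.randn(16, 10, 32))
print(logits.shape)                                 # torch.Size([16, 2])
```
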

19 pages, 20616 KB  
Article
Toward Trustworthy On-Device AI: A Quantization-Robust Parameterized Hybrid Neural Filtering Framework
by Sangwoo Hong, Seung-Wook Kim, Seunghyun Moon and Seowon Ji
Mathematics 2025, 13(21), 3447; https://doi.org/10.3390/math13213447 - 29 Oct 2025
Viewed by 616
Abstract
Recent advances in deep learning have led to a proliferation of AI services for the general public. Consequently, constructing trustworthy AI systems that operate on personal devices has become a crucial challenge. While on-device processing is critical for privacy-preserving and latency-sensitive applications, conventional deep learning approaches often suffer from instability under quantization and high computational costs. Toward a trustworthy and efficient on-device solution for image processing, we present a hybrid neural filtering framework that combines the representational power of lightweight neural networks with the stability of classical filters. In our framework, the neural network predicts a low-dimensional parameter map that guides the filter’s behavior, effectively decoupling parameter estimation from the final image synthesis. This design enables a truly trustworthy AI system by operating entirely on-device, which eliminates the reliance on servers and significantly reduces computational cost. To ensure quantization robustness, we introduce a basis-decomposed parameterization, a design mathematically proven to bound reconstruction errors. Our network predicts a set of basis maps that are combined via fixed coefficients to form the final guidance. This architecture is intrinsically robust to quantization and supports runtime-adaptive precision without retraining. Experiments on depth map super-resolution validate our approach. Our framework demonstrates exceptional quantization robustness, exhibiting no performance degradation under 8-bit quantization, whereas a baseline suffers a significant 1.56 dB drop. Furthermore, our model’s significantly lower Mean Squared Error highlights its superior stability, providing a practical and mathematically grounded pathway toward trustworthy on-device AI. Full article
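
The basis-decomposition idea — a guidance map formed as a fixed linear combination of predicted basis maps, with quantization error bounded by the weighted per-map quantization steps — can be checked numerically with a small sketch. The shapes, coefficients, and value ranges are assumptions, and the basis maps here are random stand-ins for network outputs.

```python
# Conceptual sketch of basis-decomposed parameterization under 8-bit quantization.
import numpy as np

rng = np.random.default_rng(5)
H, W, K = 64, 64, 4
basis_maps = rng.uniform(0.0, 1.0, size=(K, H, W))   # stand-in network outputs
coeffs = np.array([0.4, 0.3, 0.2, 0.1])              # fixed combination weights

def quantize_8bit(m, lo=0.0, hi=1.0):
    step = (hi - lo) / 255.0
    return np.round((m - lo) / step) * step + lo, step

guidance = np.tensordot(coeffs, basis_maps, axes=1)          # full-precision map
q_maps, steps = zip(*(quantize_8bit(m) for m in basis_maps))
guidance_q = np.tensordot(coeffs, np.stack(q_maps), axes=1)  # quantized path

max_err = np.abs(guidance - guidance_q).max()
bound = 0.5 * sum(c * s for c, s in zip(coeffs, steps))      # weighted half-steps
print(f"max error {max_err:.6f} <= bound {bound:.6f}")
```
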