Search Results (872)

Search Parameters:
Keywords = Internet space

18 pages, 651 KiB  
Article
Enhancing IoT Connectivity in Suburban and Rural Terrains Through Optimized Propagation Models Using Convolutional Neural Networks
by George Papastergiou, Apostolos Xenakis, Costas Chaikalis, Dimitrios Kosmanos and Menelaos Panagiotis Papastergiou
IoT 2025, 6(3), 41; https://doi.org/10.3390/iot6030041 - 31 Jul 2025
Abstract
The widespread adoption of the Internet of Things (IoT) has driven major advancements in wireless communication, especially in rural and suburban areas where low population density and limited infrastructure pose significant challenges. Accurate Path Loss (PL) prediction is critical for the effective deployment and operation of Wireless Sensor Networks (WSNs) in such environments. This study explores the use of Convolutional Neural Networks (CNNs) for PL modeling, utilizing a comprehensive dataset collected in a smart campus setting that captures the influence of terrain and environmental variations. Several CNN architectures were evaluated based on different combinations of input features—such as distance, elevation, clutter height, and altitude—to assess their predictive accuracy. The findings reveal that CNN-based models outperform traditional propagation models (Free Space Path Loss (FSPL), Okumura–Hata, COST 231, Log-Distance), achieving lower error rates and more precise PL estimations. The best-performing CNN configuration, using only distance and elevation, highlights the value of terrain-aware modeling. These results underscore the potential of deep learning techniques to enhance IoT connectivity in sparsely connected regions and support the development of more resilient communication infrastructures.
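One of the baselines the CNN models are compared against, Free Space Path Loss (FSPL), has a simple closed form. A minimal sketch (the frequency and distances are illustrative, not the paper's):

```python
import numpy as np

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * np.log10(distance_km) + 20 * np.log10(freq_mhz) + 32.44

# Illustrative: path loss over 1-5 km for an 868 MHz IoT link
d = np.linspace(1.0, 5.0, 5)
print(fspl_db(d, 868.0))
```

Terrain-aware models such as the paper's CNNs exist precisely because this formula ignores elevation and clutter.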

20 pages, 1449 KiB  
Article
Deep Reinforcement Learning-Based Resource Allocation for UAV-GAP Downlink Cooperative NOMA in IIoT Systems
by Yuanyan Huang, Jingjing Su, Xuan Lu, Shoulin Huang, Hongyan Zhu and Haiyong Zeng
Entropy 2025, 27(8), 811; https://doi.org/10.3390/e27080811 - 29 Jul 2025
Viewed by 158
Abstract
This paper studies deep reinforcement learning (DRL)-based joint resource allocation and three-dimensional (3D) trajectory optimization for unmanned aerial vehicle (UAV)–ground access point (GAP) cooperative non-orthogonal multiple access (NOMA) communication in Industrial Internet of Things (IIoT) systems. Cooperative and non-cooperative users adopt different signal transmission strategies to meet diverse task-oriented quality-of-service requirements. Specifically, a DRL framework based on the Soft Actor–Critic algorithm is proposed to jointly optimize user scheduling, power allocation, and UAV trajectory in continuous action spaces. Closed-form power allocation and maximum-weight bipartite matching are integrated to enable efficient user pairing and resource management. Simulation results show that the proposed scheme significantly enhances system performance in terms of throughput, spectral efficiency, and interference management, while enabling robustness against channel uncertainties in dynamic IIoT environments. The findings indicate that combining model-free reinforcement learning with conventional optimization provides a viable solution for adaptive resource management in dynamic UAV-GAP cooperative communication scenarios.
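The maximum-weight bipartite matching step for user pairing can be realized with the Hungarian algorithm; a sketch with a hypothetical pairing-gain matrix (the paper's actual weights come from its closed-form power allocation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
gain = rng.uniform(0.1, 1.0, size=(4, 4))  # hypothetical gain of pairing user i with user j

# Maximum-weight bipartite matching between cooperative and non-cooperative users
rows, cols = linear_sum_assignment(gain, maximize=True)
for r, c in zip(rows, cols):
    print(f"pair cooperative user {r} with non-cooperative user {c}: gain {gain[r, c]:.2f}")
```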

21 pages, 4738 KiB  
Article
Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space–Air–Marine Network
by Haixiang Gao
Entropy 2025, 27(8), 803; https://doi.org/10.3390/e27080803 - 28 Jul 2025
Viewed by 193
Abstract
This paper investigates the problem of computation offloading and resource allocation in an integrated space–air–sea network based on unmanned aerial vehicles (UAVs) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs, and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system cost, balancing energy consumption and latency through partial task offloading within a cloud–edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning optimal offloading strategies through the centralized training and decentralized execution framework inherent in MADDPG. Numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence, significantly outperforms baseline methods, and reduces the total system cost by 15–60%.
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
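The energy-latency cost that the agents minimize through partial offloading can be illustrated with a standard MEC task model; all constants below are textbook assumptions, not the paper's values:

```python
def task_cost(bits, ratio, f_local, f_edge, rate, p_tx, w=0.5):
    """Hypothetical partial-offloading cost: `ratio` of the task is offloaded,
    the rest runs locally; cost = w * energy + (1 - w) * latency."""
    cycles_per_bit = 1000                            # assumed compute intensity
    t_local = (1 - ratio) * bits * cycles_per_bit / f_local
    t_tx = ratio * bits / rate                       # uplink transmission time
    t_edge = ratio * bits * cycles_per_bit / f_edge
    latency = max(t_local, t_tx + t_edge)            # local and edge parts overlap
    # Dynamic CPU energy (kappa * f^2 per cycle) plus transmit energy
    energy = 1e-27 * f_local**2 * (1 - ratio) * bits * cycles_per_bit + p_tx * t_tx
    return w * energy + (1 - w) * latency

print(task_cost(bits=1e6, ratio=0.6, f_local=1e9, f_edge=5e9, rate=2e6, p_tx=0.1))
```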

21 pages, 1936 KiB  
Article
FFT-RDNet: A Time–Frequency-Domain-Based Intrusion Detection Model for IoT Security
by Bingjie Xiang, Renguang Zheng, Kunsan Zhang, Chaopeng Li and Jiachun Zheng
Sensors 2025, 25(15), 4584; https://doi.org/10.3390/s25154584 - 24 Jul 2025
Viewed by 258
Abstract
Resource-constrained Internet of Things (IoT) devices demand efficient and robust intrusion detection systems (IDSs) to counter evolving cyber threats. Traditional IDS models, however, struggle with high computational complexity and inadequate feature extraction, limiting their accuracy and generalizability in IoT environments. To address this, we propose FFT-RDNet, a lightweight IDS framework leveraging depthwise separable convolution and frequency-domain feature fusion. An ADASYN-Tomek Links hybrid strategy first addresses class imbalance. The core innovation of FFT-RDNet lies in its novel two-dimensional spatial feature modeling approach, realized through a dedicated dual-path feature embedding module. One branch extracts discriminative statistical features in the time domain, while the other transforms the data into the frequency domain via the Fast Fourier Transform (FFT) to capture the essential energy distribution characteristics. These time–frequency domain features are fused to construct a two-dimensional feature space, which is then processed by a streamlined residual network using depthwise separable convolution. This network effectively captures complex periodic attack patterns with minimal computational overhead. Comprehensive evaluation on the NSL-KDD and CIC-IDS2018 datasets shows that FFT-RDNet outperforms state-of-the-art neural network IDSs across accuracy, precision, recall, and F1 score (improvements of 0.22–1%). Crucially, it achieves superior accuracy with significantly reduced computational complexity, demonstrating high efficiency for resource-constrained IoT security deployments.
(This article belongs to the Section Internet of Things)
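The dual-path embedding described above, time-domain statistics in one branch and FFT energy features in the other, can be sketched in a few lines; the particular statistics and dimensions here are assumptions:

```python
import numpy as np

def dual_path_features(x):
    """Fuse time-domain statistics with FFT-magnitude (energy-distribution)
    features into one vector, per the paper's two-branch idea."""
    time_feats = np.array([x.mean(), x.std(), x.min(), x.max()])
    spectrum = np.abs(np.fft.rfft(x))                # frequency-domain energy
    freq_feats = spectrum[:4] / (spectrum.sum() + 1e-9)
    return np.concatenate([time_feats, freq_feats])

flow = np.random.default_rng(1).normal(size=64)      # stand-in for one traffic record
print(dual_path_features(flow).shape)                # -> (8,)
```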

40 pages, 16352 KiB  
Review
Surface Protection Technologies for Earthen Sites in the 21st Century: Hotspots, Evolution, and Future Trends in Digitalization, Intelligence, and Sustainability
by Yingzhi Xiao, Yi Chen, Yuhao Huang and Yu Yan
Coatings 2025, 15(7), 855; https://doi.org/10.3390/coatings15070855 - 20 Jul 2025
Viewed by 637
Abstract
As vital material carriers of human civilization, earthen sites are experiencing continuous surface deterioration under the combined effects of weathering and anthropogenic damage. Traditional surface conservation techniques, due to their poor compatibility and limited reversibility, struggle to address the compound challenges of micro-scale degradation and macro-scale deformation. With the deep integration of digital twin technology, spatial information technologies, intelligent systems, and sustainable concepts, earthen site surface conservation technologies are transitioning from single-point applications to multidimensional integration. However, challenges remain in terms of the insufficient systematization of technology integration and the absence of a comprehensive interdisciplinary theoretical framework. Based on the dual-core databases of Web of Science and Scopus, this study systematically reviews the technological evolution of surface conservation for earthen sites between 2000 and 2025. CiteSpace 6.2 R4 and VOSviewer 1.6 were used for bibliometric visualization analysis, which was innovatively combined with manual close reading of the key literature and GPT-assisted semantic mining (error rate < 5%) to efficiently identify core research themes and infer deeper trends. The results reveal the following: (1) technological evolution follows a three-stage trajectory—from early point-based monitoring technologies, such as remote sensing (RS) and the Global Positioning System (GPS), to spatial modeling technologies, such as light detection and ranging (LiDAR) and geographic information systems (GIS), and, finally, to today's integrated intelligent monitoring systems based on multi-source fusion; (2) the key surface technology system comprises GIS-based spatial data management, high-precision modeling via LiDAR, 3D reconstruction using oblique photogrammetry, and building information modeling (BIM) for structural protection, while cutting-edge areas focus on digital twin (DT) and the Internet of Things (IoT) for intelligent monitoring, augmented reality (AR) for immersive visualization, and blockchain technologies for digital authentication; (3) future research is expected to integrate big data and cloud computing to enable multidimensional prediction of surface deterioration, while virtual reality (VR) will overcome spatial–temporal limitations and push conservation paradigms toward automation, intelligence, and sustainability. This study, grounded in the technological evolution of surface protection for earthen sites, constructs a triadic framework of "intelligent monitoring–technological integration–collaborative application," revealing the integration needs between DT and VR for surface technologies. It provides methodological support for addressing current technical bottlenecks and lays the foundation for dynamic surface protection, solution optimization, and interdisciplinary collaboration.

35 pages, 2073 KiB  
Review
Using the Zero Trust Five-Step Implementation Process with Smart Environments: State-of-the-Art Review and Future Directions
by Shruti Kulkarni, Alexios Mylonas and Stilianos Vidalis
Future Internet 2025, 17(7), 313; https://doi.org/10.3390/fi17070313 - 18 Jul 2025
Viewed by 323
Abstract
There is growing pressure on industry to secure environments and demonstrate commitment to taking the right steps to secure their products. This is because of the growing number of security compromises in the IT industry, Operational Technology environments, Internet of Things environments, and smart home devices. These compromises are not just about data breaches or data exfiltration, but also about unauthorised access to devices that are not configured correctly and vulnerabilities in software components, which usually lead to insecure authentication and authorisation. Incorrect configurations usually take the form of devices being made available on the Internet (public domain), reusable credentials, access granted without verifying the requestor, and easily available credentials such as default credentials. Organisations seeking to address the dual pressure of demonstrating steps in the right direction and addressing unauthorised access to resources can find a viable approach in the zero trust concept. Zero trust principles are about moving security controls closer to the data, applications, assets and services, and are based on the principle of "never trust, always verify". As it stands today, zero trust research has advanced far beyond that founding principle. This paper presents a literature review of research on smart home devices and the IoT and the applicability of the zero trust five-step implementation process to securing them. We discuss the history of zero trust, the tenets of zero trust, the five-step implementation process, and its adoption for smart home devices and the Internet of Things, and we provide suggestions for future research.

23 pages, 5644 KiB  
Article
Exploring the Performance of Transparent 5G NTN Architectures Based on Operational Mega-Constellations
by Oscar Baselga, Anna Calveras and Joan Adrià Ruiz-de-Azua
Network 2025, 5(3), 25; https://doi.org/10.3390/network5030025 - 18 Jul 2025
Viewed by 261
Abstract
The evolution of 3GPP non-terrestrial networks (NTNs) is enabling new avenues for broadband connectivity via satellite, especially within the scope of 5G. The parallel rise in satellite mega-constellations has further fueled efforts toward ubiquitous global Internet access. This convergence has fostered collaboration between mobile network operators and satellite providers, allowing the former to leverage mature space infrastructure and the latter to integrate with terrestrial mobile standards. However, integrating these technologies presents significant architectural challenges. This study investigates 5G NTN architectures using satellite mega-constellations, focusing on transparent architectures where Starlink is employed to relay the backhaul, midhaul, and new radio (NR) links. The performance of these architectures is assessed through a testbed utilizing OpenAirInterface (OAI) and Open5GS, which collects key user-experience metrics such as round-trip time (RTT) and jitter when pinging the User Plane Function (UPF) in the 5G core (5GC). Results show that backhaul and midhaul relays maintain delays of 50–60 ms, while NR relays incur delays exceeding one second due to traffic overload introduced by the RFSimulator tool, which is indispensable for transmitting the NR signal over Starlink. These findings suggest that while transparent architectures provide valuable insights and utility, regenerative architectures are essential for addressing the observed latency issues and fully realizing the capabilities of space-based broadband services.
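The RTT and jitter metrics collected from the testbed can be reproduced from raw ping samples; the estimator below is the RFC 3550 smoothed form, an assumption since the paper does not state which jitter definition it uses:

```python
import statistics

def jitter_ms(rtts_ms):
    """RFC 3550-style smoothed jitter over successive RTT samples."""
    j = 0.0
    for prev, cur in zip(rtts_ms, rtts_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

samples = [52.1, 58.7, 55.3, 61.0, 54.2]  # hypothetical pings to the UPF over Starlink
print(f"mean RTT {statistics.mean(samples):.1f} ms, jitter {jitter_ms(samples):.2f} ms")
```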

29 pages, 3338 KiB  
Article
AprilTags in Unity: A Local Alternative to Shared Spatial Anchors for Synergistic Shared Space Applications Involving Extended Reality and the Internet of Things
by Amitabh Mishra and Kevin Foster Carff
Sensors 2025, 25(14), 4408; https://doi.org/10.3390/s25144408 - 15 Jul 2025
Viewed by 304
Abstract
Creating shared spaces is a key part of making extended reality (XR) and Internet of Things (IoT) technology more interactive and collaborative. Currently, the most prominent commercial approach relies on spatial anchors. Due to the cloud-based nature of these anchors, they can introduce connectivity and privacy issues for projects that need to be isolated from the internet. This research explores an alternative approach that does not require internet connectivity. This work involves the creation of an AprilTags-based calibration system as a local solution for creating shared XR spaces and investigates its performance. AprilTags are simple, scannable markers that, through computer vision algorithms, can help XR devices determine position and orientation in three-dimensional space. This means multiple users can easily share the same virtual space and the same real-world space at the same time. Our tests in XR showed that this method is accurate and works well for synchronizing multiple users. This approach could make shared XR experiences faster, more private, and easier to use without depending on cloud-based calibration systems.
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)
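Recovering a device pose from a detected tag reduces to a perspective-n-point problem over the tag's four known corners; a minimal OpenCV sketch with hypothetical corner pixels and camera intrinsics (the paper's Unity pipeline is not shown):

```python
import cv2
import numpy as np

TAG_SIZE = 0.16  # meters; edge length of the printed tag (assumed)
object_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                      dtype=np.float32) * TAG_SIZE / 2
image_pts = np.array([[310, 210], [410, 215], [405, 315], [305, 310]],
                     dtype=np.float32)               # corners from a tag detector
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, tvec.ravel())  # tag position in the camera frame -> shared-space anchor
```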

24 pages, 1608 KiB  
Article
Efficient Keyset Design for Neural Networks Using Homomorphic Encryption
by Youyeon Joo, Seungjin Ha, Hyunyoung Oh and Yunheung Paek
Sensors 2025, 25(14), 4320; https://doi.org/10.3390/s25144320 - 10 Jul 2025
Viewed by 367
Abstract
With the advent of the Internet of Things (IoT), large volumes of sensitive data are produced from IoT devices, driving the adoption of Machine Learning as a Service (MLaaS) to overcome their limited computational resources. However, as privacy concerns in MLaaS grow, the demand for Privacy-Preserving Machine Learning (PPML) has increased. Fully Homomorphic Encryption (FHE) offers a promising solution by enabling computations on encrypted data without exposing the raw data. However, FHE-based neural network inference suffers from substantial overhead due to expensive primitive operations, such as ciphertext rotation and bootstrapping. While previous research has primarily focused on optimizing the efficiency of these computations, our work takes a different approach by concentrating on the rotation keyset design, a pre-generated data structure prepared before execution. We systematically explore three key design spaces (KDS) that influence rotation keyset design and propose an optimized keyset that reduces both computational overhead and memory consumption. To demonstrate the effectiveness of our new KDS design, we present two case studies that achieve up to 11.29× memory reduction and 1.67–2.55× speedup, highlighting the benefits of our optimized keyset.
(This article belongs to the Special Issue Advances in Security of Mobile and Wireless Communications)
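A rotation keyset trades memory for composed operations: a small set of pre-generated keys (for example, powers of two) can realize any slot rotation at the cost of extra rotations per inference. A toy sketch of that trade-off (the paper's actual KDS analysis is far more involved):

```python
def rotations_needed(target, keyset, slots):
    """Greedily compose an arbitrary rotation from a small pre-generated keyset."""
    steps, remaining = [], target % slots
    for k in sorted(keyset, reverse=True):
        while remaining >= k:
            steps.append(k)
            remaining -= k
    assert remaining == 0
    return steps

keyset = [1, 2, 4, 8, 16, 32]            # 6 keys cover any rotation of 64 slots
print(rotations_needed(45, keyset, 64))  # -> [32, 8, 4, 1]
```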

19 pages, 1103 KiB  
Article
Early-Stage Sensor Data Fusion Pipeline Exploration Framework for Agriculture and Animal Welfare
by Devon Martin, David L. Roberts and Alper Bozkurt
AgriEngineering 2025, 7(7), 215; https://doi.org/10.3390/agriengineering7070215 - 3 Jul 2025
Viewed by 400
Abstract
Internet-of-Things (IoT) approaches are continually introducing new sensors into the fields of agriculture and animal welfare. The application of multi-sensor data fusion to these domains remains a complex and open-ended challenge that defies straightforward optimization, often requiring iterative testing and refinement. To respond to this need, we have created a new open-source framework, as well as a corresponding Python tool, which we call the "Data Fusion Explorer (DFE)". We demonstrated and evaluated the effectiveness of our proposed framework using four early-stage datasets from diverse disciplines, including animal/environmental tracking, agrarian monitoring, and food quality assessment. This included data across multiple common formats, including single, array, and image data, as well as classification or regression and temporal or spatial distributions. We compared various pipeline schemes, such as low-level against mid-level fusion, or the placement of dimensional reduction. Based on their space and time complexities, we then highlighted how these pipelines may be used for different purposes depending on the given problem. As an example, we observed that early feature extraction reduced time and space complexity in agrarian data. Additionally, independent component analysis slightly outperformed principal component analysis on a sweet potato imaging dataset. Lastly, we benchmarked the DFE tool against vanilla Python 3 implementations of our four datasets' pipelines and observed a significant reduction, usually more than 50%, in the code users must write for almost every dataset, suggesting the usefulness of this package for interdisciplinary researchers in the field.
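The PCA-versus-ICA comparison within a fusion pipeline is easy to prototype with scikit-learn; the data below is a random stand-in, not the sweet potato imagery:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

X = np.random.default_rng(2).normal(size=(200, 32))  # stand-in for flattened image features

for reducer in (PCA(n_components=8), FastICA(n_components=8, max_iter=500)):
    Z = reducer.fit_transform(X)                     # dimensional-reduction stage
    print(type(reducer).__name__, Z.shape)
```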

21 pages, 6801 KiB  
Article
Performance Evaluation of a High-Gain Axisymmetric Minkowski Fractal Reflectarray for Ku-Band Satellite Internet Communication
by Prabhat Kumar Patnaik, Harish Chandra Mohanta, Dhruba Charan Panda, Ribhu Abhusan Panda, Malijeddi Murali and Heba G. Mohamed
Fractal Fract. 2025, 9(7), 421; https://doi.org/10.3390/fractalfract9070421 - 27 Jun 2025
Viewed by 484
Abstract
In this article, a high-gain axisymmetric Minkowski fractal reflectarray is designed and fabricated for Ku-band satellite internet communications. High gain is achieved by carefully optimising the number of unit cells, their shape modifier, focal length, feed position, and scan angle. The space-filling properties of Minkowski fractals help in miniaturising the fractal elements. The scan angle of the reflectarray is varied by adjusting the fractal scaling factor for each unit cell in the array. The reflectarray is symmetric along the X-axis in its design and configuration. Initially, a Minkowski fractal unit cell is designed using iteration-1 in the simulation software. Then, its design parameters are optimised to achieve high gain, a narrow beam, and beam-scan capabilities. The sensitivity of each design parameter is examined individually using the array synthesis method, which helps establish the maximum range of design and performance parameters for this design. The proposed reflectarray resonates at 12 GHz, achieving a gain of over 20 dB and a narrow beamwidth of less than 15 degrees. Finally, the designed fractal reflectarray is tested in real-time simulation environments using MATLAB R2023b, and its performance is evaluated in an interference scenario involving LEO and MEO satellites, as well as a ground station, under various time conditions. For real-world applicability, it is necessary to identify, analyse, and mitigate the unwanted interference signals that degrade the desired satellite signal. The proposed reflectarray, with its performance characteristics and beam-scanning capabilities, is found to be an excellent choice for Ku-band satellite internet communications.
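The element-phase condition that a reflectarray of this kind implements is standard; a sketch with hypothetical geometry (the paper optimizes the fractal unit cells that realize these phases, which is not shown here):

```python
import numpy as np

def element_phase(x, y, focal, theta0, phi0, freq_hz):
    """phi_i = k0 * (d_i - sin(theta0) * (x*cos(phi0) + y*sin(phi0))), mod 2*pi,
    where d_i is the feed-to-element path length."""
    k0 = 2 * np.pi * freq_hz / 3e8
    d_i = np.sqrt(x**2 + y**2 + focal**2)  # axisymmetric feed at height `focal`
    phase = k0 * (d_i - np.sin(theta0) * (x * np.cos(phi0) + y * np.sin(phi0)))
    return np.mod(phase, 2 * np.pi)

print(element_phase(0.05, 0.0, focal=0.3, theta0=0.0, phi0=0.0, freq_hz=12e9))
```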

23 pages, 2431 KiB  
Article
SatScope: A Data-Driven Simulator for Low-Earth-Orbit Satellite Internet
by Qichen Wang, Guozheng Yang, Yongyu Liang, Chiyu Chen, Qingsong Zhao and Sugai Chen
Future Internet 2025, 17(7), 278; https://doi.org/10.3390/fi17070278 - 24 Jun 2025
Viewed by 377
Abstract
The rapid development of low-Earth-orbit (LEO) satellite constellations has not only provided global users with low-latency and unrestricted high-speed data services but also presented researchers with the challenge of understanding dynamic changes in global network behavior. Unlike geostationary satellites and terrestrial internet infrastructure, LEO satellites move at a relative velocity of 7.6 km/s, leading to frequent alterations in their connectivity status with ground stations. Given the complexity of the space environment, current research on LEO satellite internet primarily focuses on modeling and simulation. However, existing LEO satellite network simulators often overlook the global network characteristics of these systems. We present SatScope, a data-driven simulator for LEO satellite internet. SatScope consists of three main components: space segment modeling, ground segment modeling, and network simulation configuration. It provides researchers with an interface to interact with these models. Utilizing both space and ground segment models, SatScope can configure various network topology models, routing algorithms, and load-balancing schemes, thereby enabling the evaluation of optimization algorithms for LEO satellite communication systems. We also compare SatScope's fidelity, lightweight design, scalability, and openness against other simulators. Based on our simulation results using SatScope, we propose two metrics—ground node IP coverage rate and the number of satellite service IPs—to assess the service performance of single-layer satellite networks. Our findings reveal that during each network handover, on average, 38.94% of nodes and 83.66% of links change.
(This article belongs to the Section Internet of Things)
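The per-handover churn statistic reported above can be computed from successive topology snapshots; the set representation is an assumption about SatScope's internals:

```python
def churn(prev_links, cur_links):
    """Fraction of links that changed between two topology snapshots."""
    changed = prev_links ^ cur_links  # symmetric difference
    return len(changed) / max(len(prev_links | cur_links), 1)

t0 = {("sat1", "gs_a"), ("sat2", "gs_b"), ("sat3", "gs_c")}
t1 = {("sat1", "gs_a"), ("sat4", "gs_b"), ("sat3", "gs_d")}
print(f"{churn(t0, t1):.2%} of links changed")
```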

30 pages, 4009 KiB  
Article
Secure Data Transmission Using GS3 in an Armed Surveillance System
by Francisco Alcaraz-Velasco, José M. Palomares, Fernando León-García and Joaquín Olivares
Information 2025, 16(7), 527; https://doi.org/10.3390/info16070527 - 23 Jun 2025
Viewed by 273
Abstract
Nowadays, the evolution and growth of machine learning (ML) algorithms and the Internet of Things (IoT) are enabling new applications; smart weapon and people detection systems are examples. Firstly, this work takes advantage of an efficient, scalable, and distributed system, named SmartFog, which identifies people with weapons by leveraging edge, fog, and cloud computing paradigms. Nevertheless, security vulnerabilities during data transmission are not addressed there. Thus, this work bridges the gap by proposing a secure data transmission system integrating a lightweight security scheme named GS3; the main novelty is the evaluation of the GS3 proposal in a real environment. In the first fog sublayer, GS3 leads to a 14% increase in execution time with respect to no secure data transmission, whereas AES results in a 34.5% longer execution time. GS3 achieves a 70% reduction in decipher time and a 55% reduction in cipher time compared to the AES algorithm. Furthermore, an energy consumption analysis shows that GS3 consumes 31% less power than AES. The security analysis confirms that GS3 detects tampering, replaying, forwarding, and forgery attacks. Moreover, GS3 has a key space of 2^544 permutations, slightly larger than those of ChaCha20 and Salsa20, while running faster than these methods. In addition, GS3 exhibits strength against differential cryptanalysis. This mechanism is a compelling choice for energy-constrained environments and for securing event data transmissions with a short validity period. Moreover, GS3 maintains full architectural transparency with the underlying armed detection system.
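GS3 itself is not publicly available, but the AES baseline it is measured against is easy to time; a minimal harness sketch using pycryptodome, with a hypothetical 4 KiB event frame:

```python
import os, time
from Crypto.Cipher import AES  # pycryptodome

key, payload = os.urandom(16), os.urandom(4096)  # hypothetical event frame
t0 = time.perf_counter()
for _ in range(1000):
    cipher = AES.new(key, AES.MODE_CTR, nonce=os.urandom(8))
    cipher.encrypt(payload)
elapsed = time.perf_counter() - t0
print(f"AES-CTR: {elapsed / 1000 * 1e6:.1f} us per 4 KiB frame")
```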

22 pages, 6648 KiB  
Article
A Malicious URL Detection Framework Based on Custom Hybrid Spatial Sequence Attention and Logic Constraint Neural Network
by Jinyang Zhou, Kun Zhang, Bing Zheng, Yu Zhou, Xin Xie, Ming Jin and Xiling Liu
Symmetry 2025, 17(7), 987; https://doi.org/10.3390/sym17070987 - 23 Jun 2025
Viewed by 338
Abstract
With the rapid development of the Internet, malicious URL detection has emerged as a critical challenge in the field of cyberspace security. Traditional machine-learning techniques and subsequent deep-learning frameworks have shown limitations in handling the complex malicious URL data generated by contemporary phishing attacks. This paper proposes a novel detection framework, HSSLC-CharGRU (Hybrid Spatial–Sequential Attention Logically Constrained neural network CharGRU), which balances high efficiency and accuracy while enhancing the generalization capability of detection frameworks. The core of HSSLC-CharGRU is a Gated Recurrent Unit (GRU) integrated with a Hybrid Spatial–Sequential Attention (HSSA) module, and the framework integrates symmetry concepts into its design. The HSSA module extracts URL sequence features across scales, reflecting multi-scale invariance, and the interaction between the GRU and HSSA modules provides functional complementarity and symmetry, enhancing model robustness. In addition, the LCNN module incorporates logical rules and prior constraints to regulate the pattern-learning process during feature extraction, reducing the model's sensitivity to noise and anomalous patterns and enhancing the structural symmetry of the feature space. Together, these symmetries improve the model's generalization across diverse data distributions and strengthen its stability in handling complex URL patterns. In experiments, HSSLC-CharGRU achieves excellent detection accuracy compared with current character-level malicious URL detection models.
(This article belongs to the Section Computer)
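The character-level GRU backbone is straightforward to sketch in PyTorch; layer sizes are assumptions, and the HSSA attention and logic-constraint (LCNN) modules are omitted:

```python
import torch
import torch.nn as nn

class CharGRUClassifier(nn.Module):
    def __init__(self, vocab=128, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # benign vs. malicious

    def forward(self, char_ids):              # (batch, url_len)
        _, h = self.gru(self.embed(char_ids))
        return self.head(h[-1])               # logits from the last hidden state

url = "http://example.com/login"
ids = torch.tensor([[min(ord(c), 127) for c in url]])
print(CharGRUClassifier()(ids).shape)         # -> torch.Size([1, 2])
```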

23 pages, 6982 KiB  
Article
An Efficient and Low-Delay SFC Recovery Method in the Space–Air–Ground Integrated Aviation Information Network with Integrated UAVs
by Yong Yang, Buhong Wang, Jiwei Tian, Xiaofan Lyu and Siqi Li
Drones 2025, 9(6), 440; https://doi.org/10.3390/drones9060440 - 16 Jun 2025
Viewed by 403
Abstract
Unmanned aerial vehicles (UAVs), owing to their flexible coverage expansion and dynamic adjustment capabilities, hold significant application potential across various fields. With the emergence of urban low-altitude air traffic dominated by UAVs, the integrated aviation information network combining UAVs and manned aircraft has evolved into a complex space–air–ground integrated Internet of Things (IoT) system. The application of 5G/6G network technologies, such as cloud computing, network function virtualization (NFV), and edge computing, has enhanced the flexibility of air traffic services based on service function chains (SFCs), while simultaneously expanding the network attack surface. Compared to traditional networks, the aviation information network integrating UAVs exhibits greater heterogeneity and demands higher service reliability. To address the failure of SFCs under attack, this study proposes an efficient SFC recovery method for recovery rate optimization (ERRRO) based on virtual network function (VNF) migration technology. The method first determines the recovery order of failed SFCs according to their recovery costs, prioritizing the restoration of SFCs with the lowest costs. Next, the migration priorities of the failed VNFs are ranked based on their neighborhood certainty, with the VNFs exhibiting the highest neighborhood certainty being migrated first. Finally, the destination nodes for migrating the failed VNFs are determined by comprehensively considering attributes such as the instantiated SFC paths, the delay of physical platforms, and residual resources. Experiments demonstrate that ERRRO performs well in networks with varying resource redundancy and under different types of attacks. Compared to methods reported in the literature, ERRRO achieves superior performance in terms of SFC recovery rate and delay.
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
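The two ordering rules described above, cheapest SFC first and highest neighborhood certainty first, reduce to two sorts; field names and scores below are hypothetical:

```python
failed_sfcs = [
    {"id": "sfc1", "recovery_cost": 3.2, "vnfs": [("fw", 0.9), ("nat", 0.4)]},
    {"id": "sfc2", "recovery_cost": 1.1, "vnfs": [("dpi", 0.7)]},
]

# Restore cheapest SFCs first; within each, migrate high-certainty VNFs first
for sfc in sorted(failed_sfcs, key=lambda s: s["recovery_cost"]):
    order = sorted(sfc["vnfs"], key=lambda v: v[1], reverse=True)
    print(sfc["id"], "->", [name for name, _ in order])
```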
