Search Results (285)

Search Parameters:
Keywords = cloud-edge collaboration

24 pages, 6873 KB  
Article
Towards Effective Forest Fire Response: A Cloud–Edge Collaborative UAV Deployment Strategy for Rapid Situational Awareness
by Yumin Dong, Peifeng Li, Xiqing Guo and Ziyang Li
Fire 2026, 9(4), 160; https://doi.org/10.3390/fire9040160 - 10 Apr 2026
Abstract
Rapid and balanced situational awareness of fire fronts is critical for effective initial response to forest fires, yet suboptimal task planning for Unmanned Aerial Vehicle (UAV) swarms can delay intelligence delivery. This paper presents a cloud–edge collaborative approach that integrates edge-driven rapid task partitioning with cloud-based global workload balancing, explicitly addressing the NP-hard multiple traveling salesman problem underlying multi-UAV reconnaissance. At the edge, a fire-spread-informed line clustering algorithm quickly assigns monitoring points to UAVs, exploiting low-latency processing for initial sectorization. The cloud then refines this allocation through a novel cooperative–competitive task transfer mechanism that minimizes the makespan. Extensive simulations and a real-world case study based on the 2020 Liangshan wildfire show that the proposed method reduces makespan by up to 24.5% compared to conventional centralized and distributed baselines, while remaining robust under severe communication constraints.
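The cloud-side refinement the abstract describes is, at heart, a makespan-reduction loop over an existing edge assignment. Below is a minimal sketch of one greedy task-transfer pass under a deliberately simplified cost model (a single flight-time value per monitoring point, no tour reordering); the function and data layout are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: greedily hand monitoring points from the busiest UAV
# to the least busy one whenever the move strictly lowers the makespan.
# One cost value per point stands in for the paper's full mTSP tour costs.

def rebalance(assignments: dict[int, list[float]]) -> dict[int, list[float]]:
    """assignments: UAV id -> flight-time costs of its assigned points."""
    improved = True
    while improved:
        improved = False
        loads = {u: sum(c) for u, c in assignments.items()}
        src = max(loads, key=loads.get)      # busiest UAV sets the makespan
        dst = min(loads, key=loads.get)      # least busy UAV receives work
        if src == dst or not assignments[src]:
            break
        task = min(assignments[src])         # cheapest point to hand over
        if max(loads[src] - task, loads[dst] + task) < loads[src]:
            assignments[src].remove(task)
            assignments[dst].append(task)
            improved = True
    return assignments

print(rebalance({0: [5.0, 4.0, 3.0], 1: [2.0], 2: [1.0]}))
# makespan drops from 12.0 to 6.0 across the three UAVs
```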
25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem characterized by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods struggle to adapt to the dynamic, high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem; it decouples the original coupled optimization problem into three interrelated subproblems and solves them collaboratively and efficiently. Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios.
(This article belongs to the Section Drone Communications)
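Of the three subproblems, the edge-server resource allocation is the one with a closed form. The paper's exact objective is not reproduced here, but a standard convex instance shows the shape of such a solution: minimizing total latency Σ_k c_k/f_k subject to a frequency budget Σ_k f_k = F gives f_k ∝ √c_k by the KKT conditions.

```python
import math

# Illustrative closed-form split of an edge server's CPU budget F across
# tasks with cycle demands c_k: KKT conditions for min sum(c_k / f_k)
# s.t. sum(f_k) = F yield f_k = F * sqrt(c_k) / sum_j sqrt(c_j).
# A generic convex model, not necessarily the paper's formulation.

def allocate(cycles: list[float], F: float) -> list[float]:
    denom = sum(math.sqrt(c) for c in cycles)
    return [F * math.sqrt(c) / denom for c in cycles]

freqs = allocate([4e9, 1e9, 9e9], F=6e9)  # CPU cycles per task, budget in Hz
print([f"{f:.2e}" for f in freqs])        # heavier tasks get larger shares
```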
41 pages, 7381 KB  
Review
A Review of Construction and Demolition Waste Management: Resource Coordination and Multidimensional Interaction
by Yi-Hsin Lin, Weidong Yuan and Ting Wang
Buildings 2026, 16(7), 1437; https://doi.org/10.3390/buildings16071437 - 5 Apr 2026
Abstract
Accelerated urbanization and continuous infrastructure renewal have led to a rapid increase in construction and demolition waste (CDW), which accounts for approximately 20–50% of municipal solid waste in many developed countries. Consequently, effective management and resource utilization of CDW have become critical challenges for sustainable urban development. To address these challenges, this study develops an integrated analytical framework for CDW recycling systems. Specifically, it constructs a “cloud-edge-terminal” collaborative recycling system and clarifies the interactions among material, information, and value flows. A three-dimensional coupling framework is further established to reconceptualize CDW management as a multivariate decision-making problem, alongside a multidimensional evaluation structure to support practical implementation and system optimization. Methodologically, the study adopts an integrative review approach supported by knowledge mapping analysis. A structured literature search and screening process was conducted using the Web of Science Core Collection (2015–2026) to ensure transparency and reproducibility in the literature identification and sample construction. The resulting multidimensional coupling framework integrates resource coordination, information communication, and market trading into a unified decision system. The framework contributes an engineering-oriented analytical paradigm that promotes hierarchical decision coordination, dynamic multi-objective regulation, and integrated management of CDW recycling systems.
25 pages, 11223 KB  
Article
Outlook for the Development of the Chip and Artificial Intelligence Industries—Application Perspective
by Bao Rong Chang and Hsiu-Fen Tsai
Algorithms 2026, 19(4), 255; https://doi.org/10.3390/a19040255 - 26 Mar 2026
Abstract
This review examines the transformative interplay between computing chips and Artificial Intelligence (AI), driving a revolution across various industries. First, the broader artificial intelligence and semiconductor ecosystem is analyzed, including hardware manufacturers, software frameworks, and system integration. Next, the development prospects are examined, revealing current challenges such as power consumption, manufacturing complexity, supply chain constraints, and ethical considerations. Further discussion focuses on cloud-edge collaboration in relation to system architecture and workload allocation strategies. Then, cutting-edge AI technologies are analyzed, and key insights are summarized. Finally, the overall trends in artificial intelligence and the chip industry are summarized, clearly presenting findings on future directions.
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science: 2nd Edition)
26 pages, 6706 KB  
Article
Efficient Emergency Load Shedding to Mitigate Fault-Induced Delayed Voltage Recovery Using Cloud–Edge Collaborative Learning and Guided Evolutionary Strategy
by Dongyang Yang, Bing Cheng, Jisi Wu, Yunan Zhao, Xingao Tang and Renke Huang
Electronics 2026, 15(7), 1377; https://doi.org/10.3390/electronics15071377 - 26 Mar 2026
Abstract
Fault-induced delayed voltage recovery (FIDVR) poses a serious threat to modern power grid operation, where stalled induction motors following a fault can sustain dangerously low bus voltages and potentially trigger cascading failures. While deep reinforcement learning (DRL) has shown promise for emergency load shedding control, existing centralized DRL approaches require extensive communication infrastructure and large neural network models that are computationally prohibitive to train at scale. Fully decentralized approaches, on the other hand, lack inter-agent information sharing and coordination, often resulting in inadequate voltage recovery across area boundaries. To address these limitations, we propose a Cloud–Edge Collaborative DRL framework that combines lightweight, area-specific edge agents for local load shedding control with a supervisory cloud agent that coordinates their actions globally, achieving scalable training and system-wide voltage recovery simultaneously. Training is further accelerated through a modified Guided Surrogate-gradient-based Evolutionary Random Search (GSERS) algorithm. Validation on the IEEE 300-bus system demonstrates that the proposed framework reduces training time by approximately 90% compared to the fully centralized approach, while achieving comparable voltage recovery performance to the centralized method and approximately 80% better reward performance than the fully decentralized approach, confirming the critical benefit of the cloud-level coordination mechanism.
(This article belongs to the Section Power Electronics)
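The GSERS accelerator named in the abstract builds on evolutionary random search. As rough orientation, the antithetic score-function estimator at the core of that family looks like the sketch below; the paper's surrogate-gradient guidance is not reproduced, and reward_fn plus all hyperparameters are placeholders.

```python
import numpy as np

# Minimal antithetic evolutionary-strategy step of the family GSERS extends:
# perturb policy parameters, score episode rewards, ascend the estimated
# gradient. The paper's surrogate-gradient guidance is omitted here.

def es_step(theta, reward_fn, sigma=0.1, lr=0.01, pop=16,
            rng=np.random.default_rng(0)):
    eps = rng.standard_normal((pop, theta.size))
    r_plus = np.array([reward_fn(theta + sigma * e) for e in eps])
    r_minus = np.array([reward_fn(theta - sigma * e) for e in eps])
    grad = (r_plus - r_minus) @ eps / (2 * sigma * pop)  # antithetic estimate
    return theta + lr * grad

theta = np.zeros(3)
for _ in range(200):                      # toy stand-in for the episode reward
    theta = es_step(theta, lambda w: -np.sum((w - 1.0) ** 2))
print(theta)                              # approaches the optimum [1, 1, 1]
```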
24 pages, 3524 KB  
Article
An Intelligent Micromachine Perception System for Elevator Fault Diagnosis
by Li Lai, Shixuan Ding, Zewen Li, Zimin Luo and Hao Wang
Micromachines 2026, 17(4), 401; https://doi.org/10.3390/mi17040401 - 25 Mar 2026
Abstract
Elevator fault diagnosis heavily relies on high-precision sensing of microscopic physical states. Although Micro-Electro-Mechanical System (MEMS) sensors can capture such subtle features, they are constrained by high-frequency data streams, environmental noise, and the semantic gap between raw sensor data and actionable maintenance decisions. This study proposes a collaborative edge–cloud intelligent diagnosis framework specifically designed for elevator systems. On the edge side, a lightweight temporal Transformer model, ELiTe-Transformer, was designed and deployed on the Jetson platform. This model enhances sensitivity to event-driven MEMS signals through an industrial positional encoding mechanism and by integrating linear attention and INT8 quantization techniques, achieving a real-time inference latency of 21.4 ms. On the cloud side, retrieval-augmented generation (RAG) technology was adopted to integrate physical features extracted at the edge with domain knowledge, generating interpretable diagnostic reports. The experimental results show that the overall accuracy of the system reaches 96.0%. The edge–cloud collaborative framework improves the accuracy of complex fault diagnosis to 92.5%, and the adoption of RAG reduces the report hallucination rate by 71.4%. This work effectively addresses the bottlenecks of MEMS perception in elevator fault diagnosis, forming a closed loop from micro-signal acquisition to high-level decision support.
(This article belongs to the Special Issue Human-Centred Intelligent Wearable Devices)
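For orientation, linear attention (which the abstract credits, together with INT8 quantization, for the 21.4 ms latency) replaces the O(L²) softmax interaction with a positive feature map so keys and values can be aggregated once. A generic NumPy formulation, not the ELiTe-Transformer source:

```python
import numpy as np

# Generic linear attention: with a positive feature map phi, attention costs
# O(L * d^2) instead of O(L^2 * d), which is what makes millisecond-scale
# inference plausible on embedded hardware. Shapes only; not the paper's model.

def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))     # elu(x) + 1, stays positive

def linear_attention(Q, K, V):
    Qf, Kf = phi(Q), phi(K)                        # (L, d) queries and keys
    kv = Kf.T @ V                                  # (d, d_v), aggregated once
    z = Qf @ Kf.sum(axis=0)                        # (L,) per-query normalizer
    return (Qf @ kv) / z[:, None]

rng = np.random.default_rng(0)
L, d = 512, 64
out = linear_attention(*(rng.normal(size=(L, d)) for _ in range(3)))
print(out.shape)                                   # (512, 64)
```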
18 pages, 1843 KB  
Article
Heterogeneous Computing Resources Scheduling Based on Time-Varying Graphs and Multi-Agent Reinforcement Learning
by Jinshan Yuan, Xuncai Zhang and Kexin Gong
Future Internet 2026, 18(3), 168; https://doi.org/10.3390/fi18030168 - 20 Mar 2026
Abstract
The evolution toward 6G Computing Power Networks (CPN) aims to deeply integrate multi-tier computing resources across Cloud, Edge, and end devices. However, the significant heterogeneity of computing resources, characterized by varying hardware architectures such as CPUs, GPUs, and NPUs, coupled with the time-varying network topology caused by terminal mobility, poses severe challenges to realizing efficient integrated scheduling that satisfies Quality of Service (QoS). To address spatiotemporal mismatches between task requirements and hardware architectures, this paper proposes an integrated scheduling method combining Discrete Time-Varying Graph (DTVG) construction with Multi-Agent Reinforcement Learning (MARL). Specifically, we model the dynamic interaction between mobile tasks and heterogeneous nodes as a DTVG to capture spatiotemporal evolution and employ a QMIX-based algorithm to enable collaborative decision-making among distributed agents. Simulation results demonstrate that the proposed approach effectively solves the joint optimization problem of heterogeneous resource matching and dynamic path planning, significantly outperforming traditional baselines in terms of resource utilization and average latency. This study confirms that incorporating graph-theoretic modeling with reinforcement learning offers a robust solution for the complex coupling of communication and computation in dynamic 6G networks.
(This article belongs to the Special Issue Collaborative Intelligence for Connected Agents)
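The QMIX step rests on one structural trick worth seeing concretely: per-agent Q-values are mixed through state-conditioned weights forced non-negative, so the joint value is monotone in each agent's value and decentralized greedy actions remain jointly greedy. A toy rendering with random placeholder hypernetwork weights:

```python
import numpy as np

# Toy QMIX mixing: weights generated from the global state pass through
# abs(), guaranteeing dQ_total/dQ_i >= 0 (the monotonicity constraint).
# Random matrices stand in for trained hypernetworks.

rng = np.random.default_rng(0)
n_agents, state_dim, hidden = 4, 8, 16
H1 = rng.normal(size=(state_dim, n_agents * hidden))   # layer-1 hypernetwork
H2 = rng.normal(size=(state_dim, hidden))              # layer-2 hypernetwork

def q_total(agent_qs: np.ndarray, state: np.ndarray) -> float:
    w1 = np.abs(state @ H1).reshape(n_agents, hidden)  # non-negative weights
    w2 = np.abs(state @ H2)
    h = np.maximum(agent_qs @ w1, 0.0)                 # ReLU hidden layer
    return float(h @ w2)

qs = np.array([1.0, 0.5, -0.2, 0.8])    # each agent's Q for its chosen action
print(q_total(qs, rng.normal(size=state_dim)))
```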
19 pages, 1537 KB  
Article
Data-Driven Cognitive Early Warning for Goaf Spontaneous Combustion: An Edge-Deployed RBF Network with Real-Time Multisensor Analytics
by Gang Cheng, Hailin Pei, Xiaokang Chen, Xiaorong Pang and Renzheng Sun
Big Data Cogn. Comput. 2026, 10(3), 91; https://doi.org/10.3390/bdcc10030091 - 19 Mar 2026
Abstract
Spontaneous combustion in goaf areas poses a significant threat to coal mine safety. Traditional safety management systems, reliant on passive response and single-indicator thresholds, often suffer from delayed warnings and lack cognitive decision support. To address this challenge, this study proposes a big-data-driven cognitive computing framework for dynamic risk prediction of goaf spontaneous combustion, based on a “Cloud-Edge-End” collaborative architecture. The method leverages multi-sensor big data streams (CO, C2H4, O2, etc.) and deploys a lightweight Radial Basis Function (RBF) neural network on underground edge computing nodes (STM32) for real-time analytics. The model demonstrates excellent predictive performance on imbalanced datasets, with a PR-AUC of 0.910 and a recall of 99.7%. The edge-deployed RBF model achieves a single-pass inference time of only 0.62 ms, enabling real-time cognitive risk mapping. Field application at Z Coal Mine validated the system’s effectiveness, providing an average pre-warning time of 48.5 h, achieving zero spontaneous combustion accidents, and reducing the Total Recordable Injury Rate (TRIR) by 15.2%. This work illustrates how edge-based cognitive computing can transform safety management from passive response to proactive prevention, offering a scalable and interpretable framework for intelligent mine safety.
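A 0.62 ms single-pass inference is consistent with how small an RBF network's forward pass is: a handful of Gaussian distances and one weighted sum. The sketch below shows that forward pass; centers, widths, and weights are illustrative stand-ins, not the trained model.

```python
import numpy as np

# Forward pass of a small Gaussian RBF network of the kind an STM32-class
# edge node can run in well under a millisecond. Parameters are hypothetical.

def rbf_predict(x, centers, widths, weights, bias=0.0):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * s_j^2)); risk = w . phi + b
    d2 = ((x - centers) ** 2).sum(axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return float(weights @ phi + bias)

centers = np.array([[0.002, 0.0000, 20.9],   # normal air: CO, C2H4, O2 (vol %)
                    [0.010, 0.0010, 18.0]])  # heating signature (hypothetical)
risk = rbf_predict(np.array([0.008, 0.0008, 18.5]), centers,
                   widths=np.array([0.5, 0.5]),
                   weights=np.array([-1.0, 2.0]))
print(f"risk score: {risk:.3f}")             # thresholded into warning levels
```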
29 pages, 11319 KB  
Article
Confidence-Aware Topology Identification in Low-Voltage Distribution Networks: A Multi-Source Fusion Method Based on Weakly Supervised Learning
by Siliang Liu, Can Deng, Zenan Zheng, Ying Zhu, Hongxin Lu and Wenze Liu
Energies 2026, 19(6), 1503; https://doi.org/10.3390/en19061503 - 18 Mar 2026
Abstract
The topology identification (TI) of low-voltage distribution networks (LVDNs) is the foundation for their intelligent operation and lean management. However, the existing identification methods may produce inconsistent results under measurement noise, missing data, and heterogeneous load behaviors. Without principled fusion of multiple methods and meter-level confidence quantification, the reliability of the identification results is questionable in the absence of ground-truth topology. To address these challenges, a confidence-aware TI (Ca-TI) method for the LVDN based on weakly supervised learning (WSL) and Dempster–Shafer (D-S) evidence theory is proposed, aiming to infer each meter’s latent topology connectivity label and to quantify meter-level confidence without ground truth by fusing different identification methods. Specifically, within the framework of data programming (DP) in WSL, different TI methods were modeled as labeling functions (LFs), and a weakly supervised label model (WSLM) was adopted to learn each method’s error pattern and each meter’s posterior responsibility; within the framework of D-S evidence theory, an uncertainty-aware basic probability assignment (BPA) was constructed from each meter’s posterior responsibility, with posterior uncertainty allocated to ignorance, and further discounted according to the missing data rate; subsequently, a consensus-calibrated conflict-gated (CCCG)-enhanced D-S fusion rule was proposed to aggregate the TI results of multiple methods, producing the final TI decisions with meter-level confidence. Finally, tests were carried out in both simulated and actual low-voltage distribution transformer areas (LVDTAs), evaluating the robustness of the proposed method under varying measurement noise and missing-data conditions. The results indicate that the proposed method effectively integrates the strengths of the individual TI methods, is not adversely affected by extreme bias from any single method, and provides meter-level confidence for targeted on-site verification. An engineering deployment scheme with cloud–edge collaboration is also discussed to support scalable implementation in utility environments.
(This article belongs to the Special Issue Application of Artificial Intelligence in Electrical Power Systems)
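As background for the fusion stage, Dempster's rule combines two basic probability assignments by multiplying masses on intersecting focal elements and renormalizing away conflict; the full frame carries the "ignorance" mass to which the abstract allocates posterior uncertainty. A minimal sketch (the CCCG calibration and conflict gating are not reproduced, and the feeder labels are hypothetical):

```python
# Minimal Dempster's rule over focal elements encoded as frozensets.
# Feeder labels and masses are hypothetical; the paper's CCCG-enhanced
# fusion adds consensus calibration and conflict gating on top of this.

def combine(m1: dict, m2: dict) -> dict:
    out, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + p * q
            else:
                conflict += p * q            # mass on incompatible hypotheses
    return {k: v / (1.0 - conflict) for k, v in out.items()}

theta = frozenset({"feederA", "feederB"})    # frame = ignorance mass holder
m_volt = {frozenset({"feederA"}): 0.6, theta: 0.4}                 # method 1
m_topo = {frozenset({"feederA"}): 0.5, frozenset({"feederB"}): 0.2,
          theta: 0.3}                                              # method 2
print(combine(m_volt, m_topo))   # agreement on feederA strengthens its mass
```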
25 pages, 1580 KB  
Article
A Study on the Cloud-Edge-Terminal Framework for Large Computing Models in New Power Systems
by Hualiang Fang, Ziyi Feng and Weibo Li
Energies 2026, 19(6), 1501; https://doi.org/10.3390/en19061501 - 18 Mar 2026
Abstract
With the rapid evolution of a new power system characterized by a high proportion of renewable energy, system operations have become increasingly random, variable, and uncertain. The system model exhibits features such as high dimensionality, multiple time scales, stochastic behavior, and nonlinearity. This paper proposes a large-scale power system computing model architecture based on cloud-edge-terminal collaboration. By defining functional roles within the cloud-edge-terminal structure and implementing a global model coordination mechanism, the approach enables an organic integration of global awareness, local adaptation, dynamic training, and online optimization for power system problem models. At the cloud level, various object models and the power grid topology are constructed. The edge generates typical problem models for the power system, while the terminal devices produce lightweight models adapted to local grids. This architecture supports collaborative modeling for key business scenarios such as power flow analysis, stability assessment, and reactive power optimization. The study focuses on training methods for the distilled parameters of the terminal models, enhancing their adaptability for real-world deployment in power systems. Simulation results demonstrate that the cloud-edge-terminal model offers excellent scalability, adaptability, and real-time performance for computations in new power systems, effectively supporting localized, intelligent operations and decision-making within the system.
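The abstract leaves the distillation objective unspecified; as one conventional reading of "distilled parameters," a terminal model can be trained against a cloud model's softened outputs. The temperature and loss mix below are textbook defaults, purely an assumption:

```python
import numpy as np

# Conventional knowledge-distillation loss (an assumption; the paper's exact
# training objective for terminal-side distilled parameters is not given):
# KL against the teacher's softened outputs plus cross-entropy on hard labels.

def softmax(z, T=1.0):
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, y_true, T=4.0, alpha=0.7):
    p_t, p_s = softmax(teacher_logits, T), softmax(student_logits, T)
    kl = float(np.sum(p_t * (np.log(p_t) - np.log(p_s)))) * T * T  # soft part
    ce = -float(np.log(softmax(student_logits)[y_true]))           # hard part
    return alpha * kl + (1.0 - alpha) * ce

print(distill_loss(np.array([2.0, 0.5, -1.0]),   # terminal (student) logits
                   np.array([3.0, 0.2, -2.0]),   # cloud (teacher) logits
                   y_true=0))
```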
14 pages, 4757 KB  
Article
Design and Implementation of an IoT-Based Low-Power Wearable EEG Sensing System for Home-Based Sleep Monitoring
by Ya Wang, Jun-Bo Chen and Yu-Ting Chen
Sensors 2026, 26(6), 1803; https://doi.org/10.3390/s26061803 - 12 Mar 2026
Abstract
Long-term home-based sleep monitoring requires wearable sensing devices that strictly balance signal precision with power constraints. This study presents the design and implementation of a low-noise, low-power wearable single-channel electroencephalography (EEG) system for automatic sleep staging. The hardware architecture integrates a TI ADS1298 analog front-end with an STM32F4 microcontroller, utilizing differential sampling and hardware-based filtering to effectively suppress power-line interference and baseline drift. System-level testing demonstrates an average power consumption of approximately 150.85 mW, enabling over 24.6 h of continuous operation on a 1000 mAh battery, which meets the requirements for overnight monitoring. To achieve accurate staging without draining the wearable’s battery, we adopted a lightweight deep learning model, SleePyCo, which combines contrastive representation learning with temporal dependency modeling, and deployed it on the cloud backend, optimizing the architecture for edge–cloud collaborative execution. Validation on the ISRUC dataset yielded an overall accuracy of 79.3% ± 3.0%, with a notable F1-score of 88.3% for Deep Sleep (N3). Furthermore, practical field trials involving 10 healthy subjects verified the system’s engineering stability, achieving a valid data rate exceeding 97% and a Bluetooth packet loss rate of only 0.8%. These results confirm that the proposed hardware–software co-designed system provides a robust, energy-efficient IoMT sensing solution for daily sleep health management.
(This article belongs to the Section Wearables)
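The reported runtime is easy to sanity-check: assuming a 3.7 V nominal Li-Po cell (the battery voltage is not stated in the abstract), 1000 mAh stores about 3.7 Wh, and 3.7 Wh at an average draw of 150.85 mW lasts roughly 24.5 h, consistent with the reported 24.6 h.

```python
# Sanity check of the reported runtime; the 3.7 V nominal cell voltage is an
# assumption, since the abstract gives only capacity (mAh) and power (mW).
capacity_wh = 1.000 * 3.7             # 1000 mAh * 3.7 V = 3.7 Wh
runtime_h = capacity_wh / 0.15085     # average draw of 150.85 mW
print(f"{runtime_h:.1f} h")           # ~24.5 h vs. the reported 24.6 h
```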
27 pages, 2344 KB  
Article
Cloud-Edge Resource Scheduling and Offloading Optimization Based on Deep Reinforcement Learning
by Lili Yin, Yunze Xie, Ze Zhao and Jie Gao
Sensors 2026, 26(5), 1704; https://doi.org/10.3390/s26051704 - 8 Mar 2026
Abstract
In the context of smart manufacturing, with the widespread deployment of Industrial Internet of Things (IoT) devices, a large number of computation tasks that are highly sensitive to latency and have strict deadlines have emerged, requiring real-time processing. Effectively offloading tasks to address the issues of increased latency and task dropouts caused by dynamic changes in edge node load has become a key challenge in the cloud–edge–end collaborative environment of smart manufacturing. To tackle the complex issues of unknown edge node loads and dynamic system state changes, this paper proposes a distributed algorithm based on deep reinforcement learning, utilizing convolutional neural networks (CNN) and the Informer architecture. The proposed algorithm leverages CNN to extract local features of edge node loads while utilizing Informer’s self-attention mechanism to capture long-term load variation trends, thereby effectively handling the uncertainty and dynamics inherent in node loads. Furthermore, by integrating the Dueling Deep Q-Network (DQN) and Double DQN techniques, the algorithm achieves a precise approximation of the state–action value function, further enhancing its capability to perceive system temporal characteristics and adapt to heterogeneous tasks. Each mobile device can independently make task offloading decisions and scheduling strategies based on its observations, enabling dynamic task allocation and optimization of execution order. Simulation results show that, compared to various existing algorithms, the proposed method reduces task dropout rates by 82.3–94% and average latency by 28–39.2%. Experimental results validate the significant advantages of this method in intelligent manufacturing scenarios with high load and latency-sensitive tasks.
(This article belongs to the Section Internet of Things)
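Two of the named ingredients are standard and compact enough to show: a dueling head that recombines a state value V and advantages A into Q, and a Double-DQN target that selects the next action with the online network but evaluates it with the target network. Stubbed arrays replace the paper's CNN–Informer backbone:

```python
import numpy as np

# Standard dueling aggregation and Double-DQN target; arrays stand in for
# the paper's CNN + Informer networks, and values are illustrative only.

def dueling_q(V: float, A: np.ndarray) -> np.ndarray:
    return V + A - A.mean()                    # identifiable Q(s, a)

def double_dqn_target(r, q_online_next, q_target_next, gamma=0.99):
    a_star = int(np.argmax(q_online_next))     # select with the online net
    return r + gamma * q_target_next[a_star]   # evaluate with the target net

q = dueling_q(V=1.2, A=np.array([0.3, -0.1, 0.0]))  # e.g. {local, edge, cloud}
print(q, double_dqn_target(0.5, q, 0.9 * q))
```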
27 pages, 2849 KB  
Systematic Review
Intrusion Detection in Fog Computing: A Systematic Review of Security Advances and Challenges
by Nyashadzashe Tamuka, Topside Ehleketani Mathonsi, Thomas Otieno Olwal, Solly Maswikaneng, Tonderai Muchenje and Tshimangadzo Mavin Tshilongamulenzhe
Computers 2026, 15(3), 169; https://doi.org/10.3390/computers15030169 - 5 Mar 2026
Abstract
Fog computing extends cloud services to the network edge to support low-latency IoT applications. However, since fog environments are distributed and resource-constrained, intrusion detection systems must be adapted to defend against cyberattacks while keeping computation and communication overhead minimal. This systematic review presents research on intrusion detection systems (IDSs) for fog computing and synthesizes advances and research gaps. The study was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. Scopus and Web of Science were searched in the title field using TITLE/TI = (“intrusion detection” AND “fog computing”) for 2021–2025. The inclusion criteria were (i) 2021–2025 publications, (ii) journal or conference papers, (iii) English language, and (iv) open access availability; duplicates were removed programmatically using a DOI-first key, with a title, year, and author key as the fallback. The search identified 8560 records, of which 4905 were unique and included for qualitative grouping and bibliometric synthesis. Metadata (year, venue, authors, affiliations, keywords, and citations) were extracted and analyzed in Python to compute trends and collaboration patterns. Intrusion detection systems in fog networks were categorized into traditional/signature-based, machine learning, deep learning, and hybrid/ensemble approaches. Hybrid and DL approaches reported accuracy ranging from 95 to 99% on benchmark datasets (such as NSL-KDD, UNSW-NB15, CIC-IDS2017, KDD99, and BoT-IoT). Notable bottlenecks included computational load relative to real-time latency on resource-constrained nodes, elevated false-positive rates for anomaly detection under concept drift, limited generalization to unseen attacks, privacy risks from centralizing data, and limited real-world validation. Bibliometric analyses highlighted the field’s concentration in fast-turnaround, open-access journals such as IEEE Access and Sensors, as well as a small number of highly collaborative author clusters, alongside dominant terms such as “learning,” “federated,” “ensemble,” “lightweight,” and “explainability.” Emerging directions include federated and distributed training to preserve privacy, as well as online/continual learning adaptation. Future work should include real-world evaluation on fog networks, ultra-lightweight yet adaptive hybrid IDSs, self-learning, and secure cooperative frameworks. These insights help researchers select appropriate IDS models for fog networks.
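The deduplication rule is stated precisely enough to sketch: prefer a normalized DOI as the record key and fall back to a (title, year, first author) tuple when the DOI is missing. Field names below assume a generic record dict, not the authors' script:

```python
import re

# Sketch of the DOI-first dedup key described in the review; record fields
# ("doi", "title", "year", "authors") are assumed, not the authors' schema.

def dedup_key(rec: dict) -> tuple:
    doi = (rec.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"\W+", " ", rec.get("title", "")).strip().lower()
    return ("meta", title, rec.get("year"), (rec.get("authors") or [""])[0].lower())

records = [
    {"doi": "10.1000/x1", "title": "Fog IDS", "year": 2023, "authors": ["Lee"]},
    {"doi": "10.1000/X1", "title": "Fog IDS.", "year": 2023, "authors": ["Lee"]},
]
print(len({dedup_key(r): r for r in records}))  # 1: the DOI match wins
                                                # despite title punctuation drift
```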
15 pages, 1486 KB  
Review
Challenges of Space Debris Detection, Tracking, and Monitoring in Near-Earth Orbit: Overview of Current Status and Mitigation Strategies
by Motti Haridim, Assaf Shaked, Niv Cohen and Jacob Gavan
Information 2026, 17(3), 253; https://doi.org/10.3390/info17030253 - 3 Mar 2026
Abstract
The accumulation of space debris in near-Earth orbit, particularly in Low Earth Orbit (LEO), poses an increasing threat to satellite operations, communication infrastructures, and long-term space sustainability. As modern constellations expand and incorporate advanced satellite technologies, including sensing and wireless communications, artificial intelligence of things (AIoT)-enabled payloads, and edge computing for on-orbit data processing, the risk profile grows. This paper reviews the current debris environment and existing sensing and monitoring techniques, highlights major collision events and deliberate debris-generating activities, and analyzes the role of both governmental and commercial satellite constellations in exacerbating and mitigating the challenges. Emerging space surveillance and tracking (SST) techniques, leveraging radar, optical sensors, and interferometric SAR for enhanced intelligence, surveillance, and reconnaissance (ISR), are highlighted alongside software-defined networking (SDN) approaches and cloud communication technology that enable coordinated debris-avoidance maneuvers. Key international regulatory frameworks, tracking architectures, and mitigation measures, including alignment with ISO 24113 standards, advanced TT&C capabilities, and evolving active debris removal technologies, are examined. The study emphasizes the necessity of a global, interoperable ecosystem that integrates AI/ML (artificial intelligence and machine learning)-driven situational awareness, secure SATCOM links with AJ/LPI/LPD (anti-jamming/low probability of interception/low probability of detection) characteristics, and collaborative protocols among space agencies, commercial operators, and regulatory bodies to ensure the sustainable use of orbital space for future generations.
(This article belongs to the Special Issue Sensing and Wireless Communications)
45 pages, 2170 KB  
Systematic Review
From Precision Agriculture to Intelligent Agricultural Ecosystems: A Systematic Review of Machine Learning and Big Data Applications
by Ania Cravero, Samuel Sepúlveda, Fernanda Gutiérrez and Lilia Muñoz
Agronomy 2026, 16(5), 516; https://doi.org/10.3390/agronomy16050516 - 27 Feb 2026
Abstract
This systematic review analyzes the evolution of Machine Learning and Big Data applications in agriculture from 2021 to 2025, with particular emphasis on how recent technological advances facilitate the transition from precision agriculture to Intelligent Agricultural Ecosystems (IAEs). A comprehensive literature search was conducted across Scopus, Web of Science, IEEE Xplore, the ACM Digital Library, SpringerLink, and MDPI, following the PRISMA 2020 guidelines. After duplicate removal and a two-stage screening process (title/abstract screening followed by full-text assessment), eligible peer-reviewed studies were systematically extracted using a structured coding matrix encompassing six analytical domains: crops, soil, weather and water, land use, animal systems, and farmer decision-making. The findings reveal a substantial increase in ML-driven agricultural analytics. Although Random Forest and Convolutional Neural Networks remain widely adopted, recent studies demonstrate a marked shift toward advanced Deep Learning architectures, integrated cloud–edge–device infrastructures, Federated Learning frameworks for privacy-preserving collaboration, Explainable AI techniques to enhance transparency, and governance-oriented mechanisms to ensure interoperability. Notwithstanding these advances, several persistent challenges remain, including limited generalizability across diverse agroclimatic contexts, the high costs associated with high-quality data annotation, the integration of heterogeneous and multimodal datasets, and infrastructural constraints related to connectivity. These developments are synthesized within the IAE conceptual framework, underscoring governance- and lifecycle-aware MLOps orchestration as a critical differentiator that transcends purely technology-centric approaches.