Search Results (661)

Search Parameters:
Keywords = edge-cloud architecture

52 pages, 3234 KB  
Perspective
Edge-Intelligent and Cyber-Resilient Coordination of Electric Vehicles and Distributed Energy Resources in Modern Distribution Grids
by Mahmoud Ghofrani
Energies 2026, 19(8), 1867; https://doi.org/10.3390/en19081867 - 10 Apr 2026
Abstract
The rapid electrification of transportation and proliferation of distributed energy resources (DERs) are transforming distribution grids into highly dynamic, data-intensive, and cyber-physical systems. While reinforcement learning (RL), multi-agent coordination, and edge computing offer powerful tools for adaptive control, their deployment in safety-critical utility environments raises concerns regarding stability, certification compatibility, cyber-resilience, and regulatory acceptance. This paper presents an architecture-centric framework for edge-intelligent and cyber-resilient coordination of electric vehicles (EVs) and DERs that reconciles adaptive learning with deterministic safety guarantees. The proposed hierarchical edge–cloud architecture integrates multi-agent system (MAS) coordination, constraint-invariant reinforcement learning, and embedded cybersecurity mechanisms within a structured control hierarchy. Learning-enabled edge agents operate exclusively within standards-compliant safety envelopes enforced through supervisory constraint projection, control barrier functions, and Lyapunov-consistent stability safeguards. Protection-critical functions remain deterministic and isolated from adaptive layers, preserving compatibility with IEEE 1547 and existing utility protection schemes. The framework further incorporates anomaly triggered policy freezing, fail-safe fallback modes, and communication-aware resilience mechanisms to prevent unsafe transient behavior in non-stationary, distributed environments. Unlike simulation-only learning approaches, the architecture embeds progressive validation through software-in-the-loop (SIL), hardware-in-the-loop (HIL), and power hardware-in-the-loop (PHIL) testing to empirically verify transient stability, constraint compliance, and cyber-resilience under realistic timing and disturbance conditions. 
Beyond technical performance, the paper situates edge intelligence within standards evolution, governance structures, workforce transformation, techno-economic assessment, and equitable deployment pathways. By framing adaptive control as a bounded, auditable augmentation layer rather than a disruptive replacement for certified infrastructure, the proposed architecture provides a pragmatic roadmap for evolutionary modernization of distribution systems. Full article
(This article belongs to the Section E: Electric Vehicles)
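The "supervisory constraint projection" this abstract describes can be illustrated with a toy sketch: the learned agent's raw action is deterministically clipped into a certified safety envelope before it ever reaches the actuator, so the RL layer can only act inside standards-derived bounds. The box constraint on EV charger power below is a hypothetical stand-in for the paper's IEEE 1547-derived constraints and control barrier functions, not its implementation.

```python
# Toy supervisory constraint projection (illustrative; bounds are invented).
def project_to_envelope(raw_action_kw, p_min_kw, p_max_kw, feeder_headroom_kw):
    # Tighten the upper bound by the feeder's remaining thermal headroom,
    # then clip. The projection is deterministic regardless of the policy,
    # which is what keeps the adaptive layer auditable.
    upper = min(p_max_kw, feeder_headroom_kw)
    return max(p_min_kw, min(raw_action_kw, upper))
```

Because the projection is a fixed, certified function, protection-critical behavior is unchanged no matter what the learning layer proposes.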

22 pages, 1888 KB  
Article
Predictive Fuzzy Proportional–Integral–Derivative Control for Edge-Based Greenhouse Environmental Regulation
by Wenfeng Li, Jianghua Zhao, Yang Liu, Xi Liu, Shu Lou, Hongyao Xu, Chaoyang Wang, Xuankai Zhang and Zhaobo Huang
Agriculture 2026, 16(8), 829; https://doi.org/10.3390/agriculture16080829 - 8 Apr 2026
Abstract
To address the strong nonlinearity, coupling, and time-delay characteristics in greenhouse environmental regulation, as well as the large overshoot and limited robustness of conventional proportional–integral–derivative (PID) control, while considering the practical constraint that complex intelligent control methods are difficult to deploy directly on low-cost industrial controllers, this study proposes a predictive fuzzy PID control method for greenhouse environments under programmable logic controller (PLC)-based edge deployment. An integrated remote monitoring and control system with a “PLC–human–machine interface (HMI)–cloud–mobile” architecture was also developed. Based on the intelligent greenhouse experimental platform of Yunnan Agricultural University, the proposed method was validated for greenhouse temperature and air humidity regulation through MATLAB simulations, PLC deployment, and on-site operation tests. The results showed that all four control strategies were able to effectively track the setpoints of greenhouse temperature and humidity, while predictive PID and predictive fuzzy PID achieved better overall performance than conventional PID and fuzzy PID. Predictive fuzzy PID performed best in the humidity channel, whereas its performance in the temperature channel was close to that of predictive PID but with more stable disturbance recovery and better overall balance. On-site operation results further showed that, under typical operating conditions, the tracking error of the actual greenhouse temperature relative to the target temperature could be maintained within approximately ±1 °C, while the error of the actual air humidity relative to the target humidity remained within approximately −2% to 3% RH. These results verify the engineering feasibility of the proposed method on resource-constrained industrial PLC platforms. 
The proposed method can provide a useful reference for the lightweight and intelligent upgrading of small- and medium-sized greenhouse environmental control systems. Full article
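The control idea above combines two ingredients: acting on a short-horizon prediction of the process variable (to compensate time delay) and fuzzy scaling of the PID gains with error magnitude (to cut overshoot near the setpoint). A minimal sketch follows; the gains, the triangular membership, and the one-step extrapolation are invented for illustration and are not the paper's PLC rule base.

```python
# Minimal predictive fuzzy PID sketch (all tuning values are hypothetical).
def fuzzy_gain_scale(error, span=5.0):
    """Scale gains from 0.5x near the setpoint up to 1.5x at |error| >= span."""
    return 0.5 + min(abs(error) / span, 1.0)

class PredictiveFuzzyPID:
    def __init__(self, kp, ki, kd, dt, horizon=1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.horizon = dt, horizon
        self.integral = 0.0
        self.prev = None

    def update(self, setpoint, measured):
        # One-step linear extrapolation: act on the *predicted* error,
        # which compensates for the process time delay.
        if self.prev is None:
            predicted = measured
        else:
            predicted = measured + self.horizon * (measured - self.prev)
        error = setpoint - predicted
        scale = fuzzy_gain_scale(error)
        self.integral += error * self.dt
        # Derivative on measurement avoids setpoint-change kicks.
        deriv = 0.0 if self.prev is None else -(measured - self.prev) / self.dt
        self.prev = measured
        return scale * (self.kp * error + self.ki * self.integral + self.kd * deriv)
```

Note how a fast rise toward the setpoint makes the predicted error go negative, so the controller brakes before the overshoot actually occurs.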
33 pages, 875 KB  
Review
Artificial Intelligence for High-Availability Systems: A Comprehensive Review
by Lidia Fotia, Rosario Gaeta, Fabrizio Messina, Domenico Rosaci and Giuseppe M. L. Sarné
Computers 2026, 15(4), 231; https://doi.org/10.3390/computers15040231 - 8 Apr 2026
Abstract
High-availability (HA) systems—essential in many contemporary contexts—are designed to guarantee the availability of processes and data for more than 99% of their operational time. These systems are typically implemented as Cloud/Edge infrastructures that are properly maintained by human operators and intelligent agents in order to guarantee the required level of availability. Moreover, we are witnessing the widespread adoption of AI-based automation across many industries. AI-based software agents are increasingly being adopted to introduce more automation in highly available systems, particularly for monitoring and fault detection, fault prediction, recovery, and optimization processes. In this review paper, we discuss the state of the art of AI-based solutions for HA systems. In particular, we focus on the use of AI for the core operational mechanisms of monitoring, failure detection, and recovery. Our discussion begins by reviewing a few key background concepts of HA architectures, then we review recent work on AI-based solutions for monitoring, fault detection and recovery in HA systems. Full article
(This article belongs to the Special Issue Recent Trends in Dependable and High Availability Systems)
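The ">99%" availability targets the review discusses compose in standard ways: a chain of components multiplies availabilities, while redundant replicas fail only if every replica fails. A quick sketch of both rules (textbook reliability arithmetic, not taken from the review itself):

```python
# Series: all components must be up; Parallel: at least one replica up.
def series(*avail):
    out = 1.0
    for a in avail:
        out *= a
    return out

def parallel(*avail):
    fail = 1.0
    for a in avail:
        fail *= 1.0 - a
    return 1.0 - fail
```

Two 99%-available components in series drop to 98.01%, while the same pair deployed redundantly reaches 99.99%, which is why HA Cloud/Edge architectures lean on replication.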

31 pages, 2475 KB  
Article
Fuzzy-Logic Workload Orchestration Framework for Smart Campuses in Edge-Cloud System Architecture
by Abdullah Fawaz Aljulayfi
Electronics 2026, 15(8), 1556; https://doi.org/10.3390/electronics15081556 - 8 Apr 2026
Abstract
Transforming a conventional university campus into a smart campus by leveraging modern technologies aims to deliver university services efficiently, effectively, and at low cost. Modern technologies enhance campus life by providing services, such as smart classrooms and campus security, on demand. Seamless service delivery requires reliable and efficient access to the services that take into consideration the dynamic contextual attributes related to, e.g., end-device mobility, latency sensitivity, and resource constraints. University staff, students, and visitors frequently submit different types of service requests on the move, which requires a robust orchestration framework capable of managing these requests across edge-cloud environments. The orchestration framework needs to intelligently distribute the workload, taking into consideration the latency sensitivity requirements and contextual conditions, including resource constraints. Therefore, a fuzzy-logic orchestration framework for smart-campus environments in edge-cloud architecture is proposed. The framework incorporates key factors, including user speed, resource utilization, and request delay sensitivity, in the decision-making process to satisfy both service consumers and service providers. It prioritizes latency-sensitive requests while simultaneously enhancing resource utilization efficiency. Simulation-based experimental results demonstrate the effectiveness of the proposed framework compared with benchmark approaches in orchestrating incoming workloads under several user and contextual conditions. Additionally, the results show that the proposed framework improves the execution rate by 30% compared to benchmark models and achieves more than double resource utilization efficiency. Full article
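A fuzzy-logic placement decision of the kind this abstract describes can be sketched with triangular memberships over the three named inputs (latency sensitivity, edge utilization, user speed). The membership shapes, rule weights, and the edge/cloud tie-break below are invented for illustration; the paper's actual rule base is not reproduced here.

```python
# Illustrative fuzzy edge/cloud placement (all rules are hypothetical).
def tri(x, lo, mid, hi):
    """Triangular membership on [lo, hi] peaking at mid."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

def place_request(latency_sensitivity, edge_utilization, user_speed):
    """Return 'edge' or 'cloud' from three inputs normalized to [0, 1]."""
    # Rule 1: latency-sensitive requests pull toward the edge.
    edge_score = tri(latency_sensitivity, 0.3, 1.0, 1.7)
    # Rule 2: a saturated edge node pushes work to the cloud.
    cloud_score = tri(edge_utilization, 0.5, 1.0, 1.5)
    # Rule 3: fast-moving users risk edge handover churn, so prefer cloud.
    cloud_score += 0.5 * tri(user_speed, 0.4, 1.0, 1.6)
    return "edge" if edge_score >= cloud_score else "cloud"
```

The point of the fuzzy formulation is that these soft memberships blend the contextual attributes instead of applying brittle hard thresholds.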

25 pages, 1501 KB  
Article
MA-JTATO: Multi-Agent Joint Task Association and Trajectory Optimization in UAV-Assisted Edge Computing System
by Yunxi Zhang and Zhigang Wen
Drones 2026, 10(4), 267; https://doi.org/10.3390/drones10040267 - 7 Apr 2026
Abstract
With the rapid development of applications such as smart cities and the industrial internet, the computation-intensive tasks generated by massive sensing devices pose significant challenges to traditional cloud computing paradigms. Unmanned aerial vehicle (UAV)-assisted edge computing systems, leveraging their high mobility and wide-area coverage capabilities, offer an innovative architecture for low-latency and highly reliable edge services. However, the practical deployment of such systems faces a highly complex multi-objective optimization problem featured by the tight coupling of task offloading decisions, UAV trajectory planning, and edge server resource allocation. Conventional optimization methods are difficult to adapt to the dynamic and high-dimensional characteristics of this problem, leading to suboptimal system performance. To address this critical challenge, this paper constructs an intelligent collaborative optimization framework for UAV-assisted edge computing systems and formulates the system quality of service (QoS) optimization problem as a mixed-integer non-convex programming problem with the dual objectives of minimizing task processing latency and reducing overall system energy consumption. A multi-agent joint task association and trajectory optimization (MA-JTATO) algorithm based on hybrid reinforcement learning is proposed to solve this intractable problem, which innovatively decouples the original coupled optimization problem into three interrelated subproblems and realizes their collaborative and efficient solution. 
Specifically, the Advantage Actor-Critic (A2C) algorithm is adopted to realize dynamic and optimal task association between UAVs and edge servers for discrete decision-making requirements; the multi-agent deep deterministic policy gradient (MADDPG) method is employed to achieve cooperative and energy-efficient trajectory planning for multiple UAVs to meet the needs of continuous control in dynamic environments; and convex optimization theory is applied to obtain a closed-form optimal solution for the efficient allocation of computational resources on edge servers. Simulation results demonstrate that the proposed MA-JTATO algorithm significantly outperforms traditional baseline algorithms in enhancing overall QoS, effectively validating the framework’s superior performance and robustness in dynamic and complex scenarios. Full article
(This article belongs to the Section Drone Communications)
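The third MA-JTATO subproblem, closed-form allocation of edge CPU via convex optimization, has a standard textbook instance: minimize total latency Σᵢ cᵢ/fᵢ subject to Σᵢ fᵢ = F, whose Lagrangian optimum is fᵢ ∝ √cᵢ. The sketch below shows that generic closed form only; the paper's exact objective and constraints are not reproduced.

```python
import math

# Square-root rule: minimize sum(c_i / f_i) s.t. sum(f_i) = capacity.
# The KKT conditions give f_i = capacity * sqrt(c_i) / sum_j sqrt(c_j).
def allocate_cpu(cycle_demands, capacity):
    roots = [math.sqrt(c) for c in cycle_demands]
    total = sum(roots)
    return [capacity * r / total for r in roots]
```

For demands [1, 4] and capacity 3 this yields [1, 2] and a total latency of 3, beating the equal split's 3.33, which is why such subproblems are worth solving in closed form rather than by learning.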

16 pages, 1436 KB  
Article
Mutual Cloud: Decentralized Task Orchestration in Loosely Coupled Distributed Environments
by Chaewon Keum, Yelin Song, Seoyoung Lee, Kyungwoon Cho and Hyokyung Bahn
Appl. Sci. 2026, 16(7), 3484; https://doi.org/10.3390/app16073484 - 2 Apr 2026
Abstract
Today, many computing workloads are executed in loosely coupled, geographically distributed environments where resources are owned by different organizations. Examples include inter-institutional research infrastructures, community-operated clusters, and edge deployments. As disconnections are frequent in such environments, ensuring reliable task execution remains a fundamental challenge. Kubernetes, the de facto standard for cluster orchestration, provides centralized control and strong consistency, but suffers from slow recovery when node failures occur frequently. At the opposite extreme, blockchain-based orchestration removes centralized control but incurs substantial latency due to global consensus, making it unsuitable for time-sensitive task scheduling. This paper presents Mutual Cloud, a decentralized orchestration framework that operates between these two extremes. Mutual Cloud adopts a hybrid architecture where task admission and queue management are handled in a centralized manner similar to conventional public clouds, whereas most scheduling functions, including execution-node selection and failure handling, are performed in a decentralized manner by autonomous agents using a distributed hash table. We implement a prototype of Mutual Cloud and evaluate its performance through large-scale simulation studies. The results show that Mutual Cloud maintains stable performance comparable to centralized baselines under normal conditions while achieving approximately five-second-level recovery latency under substantial node failures. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
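The decentralized node selection Mutual Cloud performs over a distributed hash table can be illustrated with rendezvous (highest-random-weight) hashing: every agent, given the same task id and live-node list, independently picks the same execution node, and a node failure only remaps the tasks that were on the failed node. This is a generic illustration of the coordination style, not the project's actual DHT protocol.

```python
import hashlib

# Rendezvous (HRW) hashing: deterministic, coordinator-free node selection.
def pick_node(task_id, nodes):
    def score(node):
        # Hash task and node together; the max score wins.
        return hashlib.sha256(f"{task_id}:{node}".encode()).hexdigest()
    return max(nodes, key=score)
```

The key property: removing a failed node from the candidate list leaves every other task's assignment unchanged, so failure handling stays local and fast.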

25 pages, 2491 KB  
Article
Accelerating the Uptake of 5G for Automotive: Real-World Trials from the TARGET-X Project
by Jad Nasreddine, Paul Salvati and Miguel Fuentes
Future Internet 2026, 18(4), 189; https://doi.org/10.3390/fi18040189 - 1 Apr 2026
Abstract
As the automotive industry transitions toward high-level autonomy, the demand for connectivity offering deterministic low latency and high reliability becomes paramount. This paper presents the end-to-end design, implementation, and experimental validation of three advanced Vehicle-to-Everything (V2X) use cases within a real-world 5G network environment: Cooperative Perception, Automotive Digital Twin, and Predictive Quality of Service (pQoS) for Tele-operated Driving (ToD). Trials at the 370-hectare IDIADA proving ground in Spain benchmarked 5G and Multi-access Edge Computing (MEC) against 4G and cloud alternatives. Experimental results demonstrate that 5G provides substantial performance gains, achieving average latency reductions up to 90% for V2X messages compared to 4G. The integration of MEC halved the average service latency, consistently maintaining it within the 40–50 ms range required for safety-critical services, whereas cloud-based hosting exhibited uncontrollable fluctuations. In the pQoS for ToD, the implementation of a Network Digital Twin (NDT) and exposure APIs reduced the average video jitter by up to 63%, preventing service collapse in degraded coverage zones. Finally, the automotive Digital Twin (DT) achieved high-fidelity synchronization with temporal deviations consistently below 10%. These findings underscore the necessity of edge-centric architectures and proactive network telemetry for the resilient deployment of future safety-critical mobility services, effectively charting the course for the design of 6G. Full article
(This article belongs to the Special Issue Moving Towards 6G Wireless Technologies—2nd Edition)

45 pages, 3695 KB  
Article
Towards a Reference Architecture for Machine Learning Operations
by Miguel Ángel Mateo-Casalí, Andrés Boza and Francisco Fraile
Computers 2026, 15(4), 218; https://doi.org/10.3390/computers15040218 - 1 Apr 2026
Abstract
Industrial organisations increasingly rely on machine learning (ML) to improve quality, maintenance, and planning in Industry 4.0/5.0 ecosystems. However, turning experimental models into reliable services on the production floor remains complex due to the heterogeneity of operational technologies (OTs) and information technologies (ITs), including implementation constraints, latency in edge-fog-cloud scenarios, governance requirements, and continuous performance degradation caused by data drift. Although Machine Learning Operations (MLOps) provides lifecycle practices for deployment, monitoring, and retraining, the evidence is fragmented across tool-centric descriptions, case-specific pipelines, and conceptual architectures, offering limited guidance on which industrial constraints should inform architectural decisions and how to evaluate solutions. This work addresses that gap through a PRISMA-guided systematic review of 49 studies on industrial MLOps (with the search and screening primarily targeting Industry 4.0/IIoT operationalisation contexts, as reflected in the search strategy and corpus) and an evidence-based synthesis of principles, challenges, lifecycle practices, and enabling technologies. From this synthesis, industrial requirements are derived that encompass OT/IT integration, edge-fog-cloud orchestration, security and traceability, and observability-based lifecycle control. On this basis, a reference architecture is proposed that maps these requirements to functional layers, data and control flows, and verifiable responsibilities. To support reproducibility and practical inspectability, the article also presents an open-source architectural instantiation aligned with the proposed decomposition. 
Finally, the evaluation is illustrated through a predictive maintenance use case (tool breakage) in a single CNC machining cell, where the objective is to demonstrate end-to-end feasibility under realistic operational constraints rather than cross-scenario superiority or broad industrial generalisability. Full article
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)
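The "observability-based lifecycle control" motivated above typically means monitoring a feature's production distribution and gating retraining on a drift score. A toy sketch using a population-stability-index (PSI) style score follows; the 10-bin histogram and the 0.2 threshold are conventional rules of thumb, assumed here, not a recommendation from the paper.

```python
import math

# PSI-style drift score between a training baseline and live data.
def psi(expected, actual, bins=10):
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        h = [0] * bins
        for x in xs:
            i = max(0, min(int((x - lo) / width), bins - 1))
            h[i] += 1
        # Smooth so empty bins do not blow up the log term.
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in h]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def should_retrain(baseline, live, threshold=0.2):
    """Gate the retraining pipeline on the drift score."""
    return psi(baseline, live) > threshold
```

In an MLOps loop this check would run on a schedule at the serving layer, with a positive result triggering the retraining stage of the pipeline.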

23 pages, 2459 KB  
Article
Optimizing Renewable Energy Distribution Networks with AI Techniques: The A-IsolE Project
by Gian Giuseppe Soma, Maria Giulia Pasquarelli, Massimo Pentolini, Cristina Dore, Francesco Martini, Andrea Bagnasco, Andrea Vinci, Giulio Valfrè, Enrico Bessone, Gabriele Mosaico and Matteo Saviozzi
Energies 2026, 19(7), 1718; https://doi.org/10.3390/en19071718 - 31 Mar 2026
Abstract
The large-scale penetration of Distributed Energy Resources (DERs), the proliferation of Energy Communities, and the increasing provision of flexibility services are fundamentally transforming distribution network operation, rendering traditional Distribution Management Systems (DMSs) structurally inadequate. This paper addresses this structural gap by proposing and experimentally validating A-ISolE, a novel hybrid Artificial Intelligence (AI) architecture that natively integrates centralized and distributed intelligence within a unified DMS framework. The core scientific contribution of this work lies in the formulation and deployment of a coordinated, hierarchical AI paradigm in which cloud-level predictive and optimization modules dynamically interact with edge-level autonomous control agents. Specifically, the paper introduces: (1) an integrated forecasting state estimation pipeline with AI-enhanced grid observability; (2) intelligent fault location and optimal feeder reconfiguration algorithms embedded into operational control loops; and (3) distributed edge control strategies enabling autonomous yet coordinated microgrid stabilization. The architecture is validated on a real pilot microgrid in Sanremo (Italy). Experimental results demonstrate quantifiable gains in many parameters, substantiating the feasibility of hybrid centralized/distributed AI as a foundational paradigm for future resilient and decarbonized distribution networks. Full article

14 pages, 1839 KB  
Proceeding Paper
Digital Twin and IoT Integration for Predictive Maintenance in Civil and Structural Engineering
by Wai Yie Leong
Eng. Proc. 2026, 134(1), 19; https://doi.org/10.3390/engproc2026134019 - 31 Mar 2026
Abstract
The growing complexity, age, and environmental exposure of civil infrastructure assets—bridges, tunnels, buildings, highways, and dams—have necessitated a transition from reactive or preventive maintenance strategies toward predictive, data-driven systems. The integration of IoT and Digital Twin (DT) technologies provides a transformative paradigm for intelligent monitoring, early fault detection, and real-time lifecycle management. This paper explores the technological convergence of IoT sensor networks, edge-cloud analytics, and digital twin platforms for predictive maintenance in civil and structural engineering. The study presents a multi-layered DT–IoT integration framework designed for infrastructure assets, emphasizing interoperability, cybersecurity, and semantic data synchronization. Key research outcomes include enhanced asset availability, reduced maintenance costs, and improved safety margins. The proposed architecture incorporates sensor-level digital shadows, edge inference modules, and cloud-based analytical twins powered by hybrid machine learning and finite element models. Real-world applications and case studies from smart bridges and intelligent building systems demonstrate prediction accuracies exceeding 90% in identifying early structural fatigue indicators. Ultimately, the results underscore the strategic role of DT–IoT convergence in realizing sustainable, resilient, and self-aware civil infrastructure aligned with Industry 5.0 principles. This study provides a roadmap for digital transformation in asset management, integrating standards such as International Organization for Standardization (ISO) 23247 and ISO 19650 to ensure interoperability and lifecycle traceability. The results reinforce that predictive maintenance through DT and IoT integration is not only technically viable but essential for extending infrastructure lifespan, minimizing unplanned downtime, and achieving carbon-efficient asset operation. Full article

28 pages, 4715 KB  
Article
Techno-Economic and SLA-Aware Control of 5G Cloud-RAN via Multi-Objective and Penalty-Constrained Reinforcement Learning
by Sherif M. Aboul, Hala M. Abd El Kader, Esraa M. Eid and Shimaa S. Ali
Network 2026, 6(2), 20; https://doi.org/10.3390/network6020020 - 31 Mar 2026
Abstract
Fifth-generation (5G) mobile networks must simultaneously satisfy stringent latency targets, high user density, and energy-aware operation across heterogeneous services. Cloud Radio Access Networks (C-RAN) provide architectural flexibility through centralized baseband processing, but they also introduce new control challenges related to fronthaul constraints, dynamic traffic variations, and joint radio–compute coordination with Mobile Edge Computing (MEC). This paper proposes a unified AI-driven optimization framework for adaptive 5G C-RAN management, where the controller dynamically tunes key system decisions—including functional split selection, TDD downlink ratio, user–RU association, fronthaul load management, and MEC offloading proportion. To enable fair benchmarking under identical simulation settings, a static baseline policy is compared against five adaptive control strategies: Deep Q-Network (DQN), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Multi-Objective Reinforcement Learning (MORL), and a deterministic Service-Level Agreement (SLA)-aware controller, the Penalty-Constrained Hierarchical Action Controller (PCHAC). Performance evaluation across techno-economic and service KPIs shows that intelligent control significantly improves operational profit, tail-latency behavior, and energy efficiency while enhancing SLA compliance compared with non-adaptive operation. The results highlight the practicality of multi-objective and constraint-aware learning for next-generation C-RAN orchestration under scaling traffic demand. Full article
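A penalty-constrained, SLA-aware objective of the kind this abstract describes can be written as techno-economic profit minus a hard penalty on any KPI that violates its SLA target. The sketch below shows only that reward shape; the coefficients and the specific KPIs are illustrative assumptions, not the paper's PCHAC formulation.

```python
# Illustrative SLA-penalized reward (coefficients are hypothetical).
def sla_reward(profit, latency_ms, sla_ms, energy_kwh,
               lat_penalty=10.0, energy_price=0.2):
    # Penalty activates only beyond the SLA latency target.
    violation = max(0.0, latency_ms - sla_ms)
    return profit - lat_penalty * violation - energy_price * energy_kwh
```

With a large enough penalty coefficient, a controller maximizing this reward is steered toward SLA compliance first and profit second.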

23 pages, 7893 KB  
Article
Long-Tail Learning for Three-Dimensional Pavement Distress Segmentation Using Point Clouds Reconstructed from a Consumer Camera
by Pengjian Cheng, Junyan Yi, Zhongshi Pei, Zengxin Liu, Dayong Jiang and Abduhaibir Abdukadir
Remote Sens. 2026, 18(7), 1008; https://doi.org/10.3390/rs18071008 - 27 Mar 2026
Abstract
The application of 3D data in pavement inspection represents an emerging trend. Acquiring and measuring the 3D information of pavement distress enables a more comprehensive assessment of severity, thereby allowing for accurate monitoring and evaluation of the pavement’s technical condition. Existing methods face challenges in high-cost pavement scanning and insufficient research on automated 3D distress segmentation. This study employed a consumer-grade action camera for data acquisition and constructed an engineering-aligned 3D point cloud dataset of pavements. Then a long-tail class imbalance mitigation strategy was introduced, integrating adaptive re-sampling with a weighted fusion loss function, effectively balancing minority class representation. The proposed network, named PointPaveSeg, was a dedicated point cloud processing architecture. A dual-stream feature fusion module was designed for the encoder layer, which decoupled geometric and semantic features to improve distress extraction capability. The network incorporated a hierarchical feature propagation structure enhanced by edge reinforcement, global interaction, and residual connections. Experimental results demonstrated that PointPaveSeg achieved an mIoU of 78.45% and an accuracy of 95.43%. In the field evaluation, post-processing and geometric information extraction were performed on the segmented point clouds. The results showed high consistency with manual measurements. Testing confirmed the method’s practical applicability in real-world projects, offering a new lightweight alternative for intelligent pavement monitoring and maintenance systems. Full article
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)
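The long-tail mitigation described above pairs re-sampling with a weighted loss so minority distress classes are neither under-drawn nor under-penalized. A minimal sketch of both ingredients follows; the square-root smoothing exponent is an assumption for illustration, and the paper's adaptive re-sampling schedule and fusion loss are more elaborate.

```python
# Inverse-frequency loss weights and flattened sampling probabilities
# (the power=0.5 smoothing is a hypothetical choice, not the authors').
def class_weights(counts, power=0.5):
    """Per-class loss weights ~ counts^-power, normalized to mean 1."""
    raw = [c ** -power for c in counts]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]

def sampling_probs(counts, power=0.5):
    """Per-class draw probabilities flattened toward uniform."""
    raw = [c ** power for c in counts]
    total = sum(raw)
    return [r / total for r in raw]
```

With counts [100, 1], the minority class gets roughly ten times the loss weight of the majority and is sampled far above its natural 1/101 frequency.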

25 pages, 11223 KB  
Article
Outlook for the Development of the Chip and Artificial Intelligence Industries—Application Perspective
by Bao Rong Chang and Hsiu-Fen Tsai
Algorithms 2026, 19(4), 255; https://doi.org/10.3390/a19040255 - 26 Mar 2026
Abstract
This review examines the transformative interplay between computing chips and Artificial Intelligence (AI), driving a revolution across various industries. First, the broader artificial intelligence and semiconductor ecosystem is analyzed, including hardware manufacturers, software frameworks, and system integration. Next, the development prospects are examined, revealing current challenges such as power consumption, manufacturing complexity, supply chain constraints, and ethical considerations. Further discussion focuses on cloud-edge collaboration in relation to system architecture and workload allocation strategies. Then, cutting-edge AI technologies are analyzed, and key insights are summarized. Finally, the overall trends in artificial intelligence and the chip industry are summarized, clearly presenting the findings for the future and making a unique contribution to this review. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science: 2nd Edition)
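The review discusses workload allocation between cloud and edge but the abstract gives no concrete strategy. A minimal sketch of one textbook rule, latency-only placement, where the function name and the device/network figures are illustrative assumptions rather than anything from the review:

```python
def place_workload(flops, edge_flops_per_s, cloud_flops_per_s,
                   payload_bytes, uplink_bytes_per_s, rtt_s):
    """Run on the edge device unless offloading to the cloud
    (network round trip + upload + remote compute) is faster."""
    edge_latency = flops / edge_flops_per_s
    cloud_latency = (rtt_s
                     + payload_bytes / uplink_bytes_per_s
                     + flops / cloud_flops_per_s)
    return "edge" if edge_latency <= cloud_latency else "cloud"

# A small model stays on the edge; a large one is worth the transfer cost.
print(place_workload(1e9, 1e10, 1e13, 1e6, 1e7, 0.05))   # edge
print(place_workload(1e12, 1e10, 1e13, 1e6, 1e7, 0.05))  # cloud
```

Real allocators also weigh energy, privacy, and cloud cost, but the same break-even comparison sits at the core of most cloud-edge collaboration schemes.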

24 pages, 518 KB  
Article
A Secure Authentication Scheme for Hierarchical Federated Learning with Anomaly Detection in IoT-Based Smart Agriculture
by Jihye Choi and Youngho Park
Appl. Sci. 2026, 16(7), 3211; https://doi.org/10.3390/app16073211 - 26 Mar 2026
Abstract
Unmanned Aerial Vehicle (UAV)-assisted hierarchical federated learning (HFL) has emerged as a promising architecture for Internet of Things (IoT)-based smart agriculture, which enables scalable model training over large and sparse farmlands. In this setting, UAVs act as mobile edge servers, aggregating local updates from distributed agricultural IoT devices and relaying them to the cloud server. While HFL improves scalability and reduces communication overhead, it still faces critical security threats due to its reliance on public wireless channels and the vulnerability of model aggregation to malicious updates. In this paper, we propose a secure authentication scheme that integrates anomaly detection with elliptic curve cryptography (ECC)-based mutual authentication to protect both the communication and training phases. In the proposed scheme, UAVs authenticate participating clients before receiving their local models, then perform anomaly detection to identify and exclude malicious participants. If a client is found to be malicious, its identity credentials are revoked and broadcast by the cloud server to prevent future participation. The security of the proposed scheme is formally verified using Burrows–Abadi–Needham (BAN) logic, the Real-or-Random (RoR) model, and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool, along with informal security analysis. The performance evaluation includes comparisons of security features, computation cost, and communication cost with other related schemes, and an experimental assessment of anomaly detection performance. The results demonstrate that our scheme provides strong security guarantees, low overhead, and effective malicious client detection, making it well suited for UAV-assisted HFL in smart agriculture. Full article
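The abstract states that UAVs run anomaly detection to exclude malicious local models before aggregation, without specifying the detector. One generic robust-statistics sketch, flagging clients whose update norm deviates from the cohort median by several MAD-scaled deviations; the function names, threshold, and toy updates are illustrative assumptions, not the paper's method:

```python
import math
import statistics

def update_norm(update):
    """Euclidean norm of a flattened model update."""
    return math.sqrt(sum(x * x for x in update))

def filter_malicious(updates, threshold=3.0):
    """Split clients into benign/malicious by how far each update's
    norm sits from the median, in MAD-derived robust deviations."""
    norms = {cid: update_norm(u) for cid, u in updates.items()}
    med = statistics.median(norms.values())
    mad = statistics.median(abs(n - med) for n in norms.values())
    scale = 1.4826 * mad or 1e-12  # MAD -> sigma; guard against zero MAD
    benign = {cid for cid, n in norms.items()
              if abs(n - med) / scale <= threshold}
    return benign, set(norms) - benign

updates = {
    "c1": [1.0, 0.0], "c2": [0.0, 1.1], "c3": [0.9, 0.3],
    "c4": [0.0, -1.0], "c5": [50.0, 50.0],  # c5 is the poisoned update
}
benign, malicious = filter_malicious(updates)
print(sorted(malicious))  # ['c5']
```

In the scheme described above, clients in the `malicious` set would then have their credentials revoked and broadcast by the cloud server, so detection only needs to run at the UAV aggregation tier.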

15 pages, 1364 KB  
Article
CAMS F Edge DTN: Context-Aware Offline-First Synchronization and Local Reasoning Using CRDTs and MQTT-SN
by Nelson Iván Herrera, Estevan Ricardo Gómez-Torres, Edgar E. González, Renato M. Toasa and Paúl Baldeón
Future Internet 2026, 18(4), 180; https://doi.org/10.3390/fi18040180 - 26 Mar 2026
Abstract
Context-aware mobile applications operating in environments with intermittent or unreliable connectivity must support offline-first behavior while preserving consistent decision-making and timely synchronization. Traditional cloud-centric architectures often fail to provide adequate availability, responsiveness, and reliable context reasoning under such conditions. This paper presents CAMS-F Edge DTN, an edge-centric runtime designed to support offline-first context-aware applications operating under intermittent connectivity. The proposed approach extends the CAMS domain-specific language (DSL) with declarative policies for semantic reconciliation, opportunistic synchronization, and context-aware conflict resolution. The runtime integrates Conflict-Free Replicated Data Types (CRDTs), opportunistic communication channels such as Bluetooth and Wi-Fi Direct, and MQTT-SN messaging to enable robust data exchange across mobile, vehicular, and edge nodes. CAMS-F Edge DTN supports offline-first execution by allowing applications to evaluate contextual rules locally and reconcile distributed state asynchronously when connectivity becomes available. The approach is evaluated through controlled experiments and case studies in rural logistics and healthcare distribution scenarios. The experimental results show that the proposed architecture maintains 96–99% operational availability under intermittent connectivity and up to 100% availability during fully offline operation, while achieving low-latency local reasoning (<10 ms median latency) and deterministic state convergence through CRDT-based synchronization mechanisms. Full article
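The deterministic convergence claimed here comes from the CRDT merge being commutative, associative, and idempotent. The abstract does not name which CRDT types CAMS-F Edge DTN uses; a minimal sketch of the classic grow-only counter illustrates the property (class and node names are illustrative):

```python
class GCounter:
    """Grow-only counter CRDT: each node increments its own slot;
    merge takes the element-wise max, so replicas converge to the
    same state regardless of message order or duplication."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        for node, c in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), c)

# Two replicas diverge while offline, then reconcile in either order.
a, b = GCounter("edge-a"), GCounter("edge-b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5
```

Because merge is a join on a lattice, re-delivering the same state over an opportunistic channel (Bluetooth, Wi-Fi Direct, MQTT-SN) is harmless, which is exactly what makes CRDTs suited to delay-tolerant synchronization.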
