Search Results (258)

Search Parameters:
Keywords = End-Edge-Cloud

44 pages, 1049 KB  
Review
Toward Intelligent AIoT: A Comprehensive Survey on Digital Twin and Multimodal Generative AI Integration
by Xiaoyi Luo, Aiwen Wang, Xinling Zhang, Kunda Huang, Songyu Wang, Lixin Chen and Yejia Cui
Mathematics 2025, 13(21), 3382; https://doi.org/10.3390/math13213382 - 23 Oct 2025
Viewed by 402
Abstract
The Artificial Intelligence of Things (AIoT) is rapidly evolving from basic connectivity to intelligent perception, reasoning, and decision making across domains such as healthcare, manufacturing, transportation, and smart cities. Multimodal generative AI (GAI) and digital twins (DTs) provide complementary solutions. DTs deliver high-fidelity virtual replicas for real-time monitoring, simulation, and optimization with GAI enhancing cognition, cross-modal understanding, and the generation of synthetic data. This survey presents a comprehensive overview of DT–GAI integration in the AIoT. We review the foundations of DTs and multimodal GAI and highlight their complementary roles. We further introduce the Sense–Map–Generate–Act (SMGA) framework, illustrating their interaction through the SMGA loop. We discuss key enabling technologies, including multimodal data fusion, dynamic DT evolution, and cloud–edge–end collaboration. Representative application scenarios, including smart manufacturing, smart cities, autonomous driving, and healthcare, are examined to demonstrate their practical impact. Finally, we outline open challenges, including efficiency, reliability, privacy, and standardization, and we provide directions for future research toward sustainable, trustworthy, and intelligent AIoT systems. Full article
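
As a rough illustration of the Sense–Map–Generate–Act (SMGA) loop the survey describes, the sketch below wires hypothetical sensing, digital-twin mapping, generative reasoning, and actuation stubs into one cycle; every class, field, and threshold here is an illustrative placeholder, not an interface from the paper.

```python
# Minimal sketch of a Sense-Map-Generate-Act (SMGA) cycle. All components
# are hypothetical placeholders, not an API defined in the survey.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    state: dict = field(default_factory=dict)

    def map(self, observation: dict) -> dict:
        """Map raw sensor observations onto the twin's virtual state."""
        self.state.update(observation)
        return self.state

def sense() -> dict:
    # Stand-in for multimodal AIoT sensing (camera, vibration, telemetry, ...).
    return {"temperature_c": 41.5, "vibration_rms": 0.12}

def generate(twin_state: dict) -> str:
    # Stand-in for a multimodal generative model proposing an action.
    return "reduce_load" if twin_state["temperature_c"] > 40 else "no_op"

def act(action: str) -> None:
    print(f"actuating: {action}")

twin = DigitalTwin()
for _ in range(3):                      # one SMGA iteration per control cycle
    act(generate(twin.map(sense())))
```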

28 pages, 1459 KB  
Article
Research on Computing Power Resources-Based Clustering Methods for Edge Computing Terminals
by Jian Wang, Jiali Li, Xianzhi Cao, Chang Lv and Liusong Yang
Appl. Sci. 2025, 15(20), 11285; https://doi.org/10.3390/app152011285 - 21 Oct 2025
Viewed by 247
Abstract
In the “cloud–edge–end” three-tier architecture of edge computing, the cloud, edge layer, and end-device layer collaborate to enable efficient data processing and task allocation. Certain computation-intensive tasks are decomposed into subtasks at the edge layer and assigned to terminal devices for execution. However, existing research has primarily focused on resource scheduling, paying insufficient attention to the specific requirements of tasks for computing and storage resources, as well as to constructing terminal clusters tailored to the needs of different subtasks. This study proposes a multi-objective optimization-based cluster construction method to address this gap, aiming to form matched clusters for each subtask. First, this study integrates the computing and storage resources of nodes into a unified concept termed the computing power resources of terminal nodes. A computing power metric model is then designed to quantitatively evaluate the heterogeneous resources of terminals, deriving a comprehensive computing power value for each node to assess its capability. Building upon this model, this study introduces an improved NSGA-III (Non-dominated Sorting Genetic Algorithm III) clustering algorithm. This algorithm incorporates simulated annealing and adaptive genetic operations to generate the initial population and employs a differential mutation strategy in place of traditional methods, thereby enhancing optimization efficiency and solution diversity. The experimental results demonstrate that the proposed algorithm consistently outperformed the optimal baseline algorithm across most scenarios, achieving average improvements of 18.07%, 7.82%, 15.25%, and 10% across the four optimization objectives, respectively. A comprehensive comparative analysis against multiple benchmark algorithms further confirms the marked competitiveness of the method in multi-objective optimization. This approach enables more efficient construction of terminal clusters adapted to subtask requirements, thereby validating its efficacy and superior performance. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
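
The abstract's idea of folding CPU and storage into a single computing-power score can be pictured with a simple normalize-and-weight scheme; the sketch below assumes min-max normalization and fixed weights, which are illustrative choices rather than the paper's exact metric model.

```python
# Illustrative computing-power score for terminal nodes: min-max normalize
# heterogeneous CPU and storage figures, then combine with weights. The
# weights and node fields are assumptions for illustration.
def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

def computing_power(nodes, w_cpu=0.6, w_storage=0.4):
    cpu = normalize([n["cpu_gflops"] for n in nodes])
    sto = normalize([n["storage_gb"] for n in nodes])
    return [w_cpu * c + w_storage * s for c, s in zip(cpu, sto)]

nodes = [{"cpu_gflops": 8.0, "storage_gb": 32},
         {"cpu_gflops": 2.5, "storage_gb": 128},
         {"cpu_gflops": 5.0, "storage_gb": 64}]
print(computing_power(nodes))   # one comprehensive score per terminal node
```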

23 pages, 1262 KB  
Article
A Symmetry-Enhanced Secure and Traceable Data Sharing Model Based on Decentralized Information Flow Control for the End–Edge–Cloud Paradigm
by Jintian Lu, Chengzhi Yu, Menglong Qi, Han Luo, Jie Tian and Jianfeng Li
Symmetry 2025, 17(10), 1771; https://doi.org/10.3390/sym17101771 - 21 Oct 2025
Viewed by 254
Abstract
The End–Edge–Cloud (EEC) paradigm hierarchically orchestrates Internet of Things (IoT) devices, edge nodes, and cloud, optimizing system performance for both delay-sensitive data and compute-intensive processing tasks. Securing IoT data sharing in the EEC-driven paradigm while maintaining data traceability poses critical challenges. In this paper we propose STDSM, a symmetry-enhanced secure and traceable data sharing model for the EEC-driven data sharing paradigm. STDSM enables IoT data owners to share data securely by attaching symmetric security labels (for secrecy and integrity) to their data. This mechanism symmetrically controls both data outflow and inflow. Furthermore, STDSM can also track data user identity. Subsequently, the security properties of STDSM, including data confidentiality, integrity, and identity traceability, are formally verified; the verification takes 280 ms, using a novel approach that combines High-Level Petri Net modeling with the satisfiability modulo theories library and the Z3 solver. In addition, our experimental results show that STDSM reduces time overhead by up to 15% while providing enhanced traceability. Full article
(This article belongs to the Section Computer)
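
The label mechanism can be illustrated with a small decentralized-information-flow-control style check: data may flow only to parties that honor its secrecy tags and do not inflate its integrity tags. Representing labels as tag sets is an assumption made here for illustration; STDSM's actual label structure and traceability machinery are richer.

```python
# Minimal DIFC-style check in the spirit of STDSM's symmetric secrecy and
# integrity labels. Tag-set labels are an illustrative assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class Label:
    secrecy: frozenset = frozenset()    # tags restricting where data may flow
    integrity: frozenset = frozenset()  # tags vouching for the data's quality

def can_flow(src: Label, dst: Label) -> bool:
    # Outflow: the receiver must honor every secrecy tag on the data.
    # Inflow: the receiver must not gain integrity tags the data lacks.
    return src.secrecy <= dst.secrecy and dst.integrity <= src.integrity

owner_data = Label(secrecy=frozenset({"owner:alice"}),
                   integrity=frozenset({"sensor:verified"}))
edge_node = Label(secrecy=frozenset({"owner:alice", "edge:site1"}),
                  integrity=frozenset())
print(can_flow(owner_data, edge_node))   # True: secrecy preserved, integrity not inflated
```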

16 pages, 6847 KB  
Article
Edge-Based Autonomous Fire and Smoke Detection Using MobileNetV2
by Dilshod Sharobiddinov, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Gerardo Mendez Mezquita, Debora Libertad Ramírez Vargas and Isabel de la Torre Díez
Sensors 2025, 25(20), 6419; https://doi.org/10.3390/s25206419 - 17 Oct 2025
Viewed by 364
Abstract
Forest fires pose significant threats to ecosystems, human life, and the global climate, necessitating rapid and reliable detection systems. Traditional fire detection approaches, including sensor networks, satellite monitoring, and centralized image analysis, often suffer from delayed response, high false positives, and limited deployment in remote areas. Recent deep learning-based methods offer high classification accuracy but are typically computationally intensive and unsuitable for low-power, real-time edge devices. This study presents an autonomous, edge-based forest fire and smoke detection system using a lightweight MobileNetV2 convolutional neural network. The model is trained on a balanced dataset of fire, smoke, and non-fire images and optimized for deployment on resource-constrained edge devices. The system performs near real-time inference, achieving a test accuracy of 97.98% with an average end-to-end prediction latency of 0.77 s per frame (approximately 1.3 FPS) on the Raspberry Pi 5 edge device. Predictions include the class label, confidence score, and timestamp, all generated locally without reliance on cloud connectivity, thereby enhancing security and robustness against potential cyber threats. Experimental results demonstrate that the proposed solution maintains high predictive performance comparable to state-of-the-art methods while providing efficient, offline operation suitable for real-world environmental monitoring and early wildfire mitigation. This approach enables cost-effective, scalable deployment in remote forest regions, combining accuracy, speed, and autonomous edge processing for timely fire and smoke detection. Full article
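
A hedged sketch of the on-device inference step follows: a three-class MobileNetV2 that returns the class label, confidence, and timestamp locally. The class order, input size, and untrained weights are assumptions; the paper's trained model and Raspberry Pi deployment details are not reproduced here.

```python
# Illustrative local inference: 3-class MobileNetV2 (fire / smoke / non-fire)
# producing label, confidence, and timestamp without any cloud dependency.
import time
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2
from PIL import Image

CLASSES = ["fire", "smoke", "non_fire"]          # assumed label order
model = mobilenet_v2(num_classes=len(CLASSES))   # load the trained weights in practice
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict(frame: Image.Image) -> dict:
    x = preprocess(frame).unsqueeze(0)           # (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return {"label": CLASSES[int(idx)], "confidence": float(conf),
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S")}

print(predict(Image.new("RGB", (640, 480))))     # dummy frame for illustration
```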

69 pages, 7515 KB  
Review
Towards an End-to-End Digital Framework for Precision Crop Disease Diagnosis and Management Based on Emerging Sensing and Computing Technologies: State over Past Decade and Prospects
by Chijioke Leonard Nkwocha and Abhilash Kumar Chandel
Computers 2025, 14(10), 443; https://doi.org/10.3390/computers14100443 - 16 Oct 2025
Viewed by 700
Abstract
Early detection and diagnosis of plant diseases is critical for ensuring global food security and sustainable agricultural practices. This review comprehensively examines latest advancements in crop disease risk prediction, onset detection through imaging techniques, machine learning (ML), deep learning (DL), and edge computing technologies. Traditional disease detection methods, which rely on visual inspections, are time-consuming, and often inaccurate. While chemical analyses are accurate, they can be time consuming and leave less flexibility to promptly implement remedial actions. In contrast, modern techniques such as hyperspectral and multispectral imaging, thermal imaging, and fluorescence imaging, among others can provide non-invasive and highly accurate solutions for identifying plant diseases at early stages. The integration of ML and DL models, including convolutional neural networks (CNNs) and transfer learning, has significantly improved disease classification and severity assessment. Furthermore, edge computing and the Internet of Things (IoT) facilitate real-time disease monitoring by processing and communicating data directly in/from the field, reducing latency and reliance on in-house as well as centralized cloud computing. Despite these advancements, challenges remain in terms of multimodal dataset standardization, integration of individual technologies of sensing, data processing, communication, and decision-making to provide a complete end-to-end solution for practical implementations. In addition, robustness of such technologies in varying field conditions, and affordability has also not been reviewed. To this end, this review paper focuses on broad areas of sensing, computing, and communication systems to outline the transformative potential of end-to-end solutions for effective implementations towards crop disease management in modern agricultural systems. Foundation of this review also highlights critical potential for integrating AI-driven disease detection and predictive models capable of analyzing multimodal data of environmental factors such as temperature and humidity, as well as visible-range and thermal imagery information for early disease diagnosis and timely management. Future research should focus on developing autonomous end-to-end disease monitoring systems that incorporate these technologies, fostering comprehensive precision agriculture and sustainable crop production. Full article

29 pages, 9032 KB  
Article
Multi-Agent Deep Reinforcement Learning for Joint Task Offloading and Resource Allocation in IIoT with Dynamic Priorities
by Yongze Ma, Yanqing Zhao, Yi Hu, Xingyu He and Sifang Feng
Sensors 2025, 25(19), 6160; https://doi.org/10.3390/s25196160 - 4 Oct 2025
Viewed by 788
Abstract
The rapid growth of Industrial Internet of Things (IIoT) terminals has resulted in tasks exhibiting increased concurrency, heterogeneous resource demands, and dynamic priorities, significantly increasing the complexity of task scheduling in edge computing. Cloud–edge–end collaborative computing leverages cross-layer task offloading to alleviate edge node resource contention and improve task scheduling efficiency. However, existing methods generally neglect the joint optimization of task offloading, resource allocation, and priority adaptation, making it difficult to balance task execution and resource utilization under resource-constrained and competitive conditions. To address this, this paper proposes a two-stage dynamic-priority-aware joint task offloading and resource allocation method (DPTORA). In the first stage, an improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm integrated with a Priority-Gated Attention Module (PGAM) enhances the robustness and accuracy of offloading strategies under dynamic priorities; in the second stage, the resource allocation problem is formulated as a single-objective convex optimization task and solved globally using the Lagrangian dual method. Simulation results show that DPTORA significantly outperforms existing multi-agent reinforcement learning baselines in terms of task latency, energy consumption, and the task completion rate. Full article
(This article belongs to the Section Internet of Things)
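
The second stage's convex resource allocation can be illustrated with a textbook Lagrangian-dual solve: split an edge server's CPU budget across tasks to minimize total latency, bisecting on the dual variable until the budget binds. The objective and constants below are assumed for illustration and are not DPTORA's exact model.

```python
# Illustrative Lagrangian-dual allocation: distribute a CPU frequency budget F
# across tasks to minimize sum_i(cycles_i / f_i), subject to sum_i f_i <= F.
import math

def allocate(cycles, F, tol=1e-9):
    # Stationarity of the Lagrangian gives f_i = sqrt(cycles_i / lam);
    # bisect on the dual variable lam until the budget constraint is tight.
    lo, hi = 1e-12, 1e12
    while hi - lo > tol * hi:
        lam = math.sqrt(lo * hi)
        used = sum(math.sqrt(c / lam) for c in cycles)
        lo, hi = (lam, hi) if used > F else (lo, lam)
    lam = math.sqrt(lo * hi)
    return [math.sqrt(c / lam) for c in cycles]

f = allocate(cycles=[2e9, 8e9, 5e9], F=10e9)     # CPU cycles/s granted per task
print([round(x / 1e9, 3) for x in f], round(sum(f) / 1e9, 3))
```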

22 pages, 1572 KB  
Article
Collaborative Optimization of Cloud–Edge–Terminal Distribution Networks Combined with Intelligent Integration Under the New Energy Situation
by Fei Zhou, Chunpeng Wu, Yue Wang, Qinghe Ye, Zhenying Tai, Haoyi Zhou and Qingyun Sun
Mathematics 2025, 13(18), 2924; https://doi.org/10.3390/math13182924 - 10 Sep 2025
Viewed by 576
Abstract
The complex electricity consumption situation on the customer side and large-scale wind and solar power generation have gradually shifted the traditional “source-follow-load” model in the power system towards the “source-load interaction” model. At present, the voltage regulation methods require excessive computing resources to accurately predict the fluctuating load under the new energy structure. However, with the development of artificial intelligence and cloud computing, more methods for processing big data have emerged. This paper proposes a new method for electricity consumption analysis that combines traditional mathematical statistics with machine learning to overcome the limitations of non-intrusive load detection methods and develop a distributed optimization of cloud–edge–device distribution networks based on electricity consumption. Aiming at problems such as overfitting and the demand for accurate short-term renewable power generation prediction, it is proposed to use the long short-term memory method to process time series data, and an improved algorithm is developed in combination with error feedback correction. The R2 value of the coupling algorithm reaches 0.991, while the values of RMSE, MAPE and MAE are 1347.2, 5.36 and 199.4, respectively. Power prediction cannot completely eliminate errors. It is necessary to combine the consistency algorithm to construct the regulation strategy. Under the regulation strategy, stability can be achieved after 25 iterations, and the optimal regulation is obtained. Finally, the cloud–edge–device distributed coevolution model of the power grid is obtained to achieve the economy of power grid voltage control. Full article
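
A minimal sketch of the forecasting-plus-correction idea follows: a small LSTM produces a one-step-ahead prediction, which is then nudged by a fraction of the most recent residual (error feedback). The window length, hidden size, and feedback gain are illustrative assumptions, not the paper's tuned configuration.

```python
# Hedged sketch of LSTM short-term generation forecasting with a simple
# error-feedback correction applied to the raw prediction.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # one-step-ahead prediction

model = Forecaster()
window = torch.randn(1, 24, 1)            # last 24 load/generation samples (toy data)
raw_pred = model(window).item()

last_residual = 0.8                       # observed minus predicted at the previous step
alpha = 0.3                               # feedback gain (assumed)
corrected = raw_pred + alpha * last_residual
print(raw_pred, corrected)
```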

28 pages, 7302 KB  
Article
A Prototype of a Lightweight Structural Health Monitoring System Based on Edge Computing
by Yinhao Wang, Zhiyi Tang, Guangcai Qian, Wei Xu, Xiaomin Huang and Hao Fang
Sensors 2025, 25(18), 5612; https://doi.org/10.3390/s25185612 - 9 Sep 2025
Viewed by 1026
Abstract
Bridge Structural Health Monitoring (BSHM) is vital for assessing structural integrity and operational safety. Traditional wired systems are limited by high installation costs and complexity, while existing wireless systems still face issues with cost, synchronization, and reliability. Moreover, cloud-based methods for extreme event detection struggle to meet real-time and bandwidth constraints in edge environments. To address these challenges, this study proposes a lightweight wireless BSHM system based on edge computing, enabling local data acquisition and real-time intelligent detection of extreme events. The system consists of wireless sensor nodes for front-end acceleration data collection and an intelligent hub for data storage, visualization, and earthquake recognition. Acceleration data are converted into time–frequency images to train a MobileNetV2-based model. With model quantization and Neural Processing Unit (NPU) acceleration, efficient on-device inference is achieved. Experiments on a laboratory steel bridge verify the system’s high acquisition accuracy, precise clock synchronization, and strong anti-interference performance. Compared with inference on a general-purpose ARM CPU running the unquantized model, the quantized model deployed on the NPU achieves a 26× speedup in inference, a 35% reduction in power consumption, and less than 1% accuracy loss. This solution provides a cost-effective, reliable BSHM framework for small-to-medium-sized bridges, offering local intelligence and rapid response with strong potential for real-world applications. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
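
The time–frequency preprocessing step can be sketched as follows: a raw acceleration record is converted into a spectrogram image suitable for a MobileNetV2-style classifier. The sampling rate and STFT parameters are assumed for illustration.

```python
# Illustrative conversion of an acceleration record into a normalized
# time-frequency image for a lightweight CNN classifier.
import numpy as np
from scipy.signal import spectrogram

fs = 200.0                                   # assumed accelerometer rate (Hz)
t = np.arange(0, 10, 1 / fs)
accel = 0.05 * np.sin(2 * np.pi * 2.5 * t) + 0.01 * np.random.randn(t.size)

f, tt, Sxx = spectrogram(accel, fs=fs, nperseg=128, noverlap=96)
image = 10 * np.log10(Sxx + 1e-12)           # dB scale, shape (freq bins, time frames)
image = (image - image.min()) / (image.max() - image.min())  # normalize to [0, 1]
print(image.shape)                           # resized to the CNN input in practice
```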

31 pages, 2138 KB  
Article
A Sustainability Assessment of a Blockchain-Secured Solar Energy Logger for Edge IoT Environments
by Javad Vasheghani Farahani and Horst Treiblmaier
Sustainability 2025, 17(17), 8063; https://doi.org/10.3390/su17178063 - 7 Sep 2025
Viewed by 1402
Abstract
In this paper, we design, implement, and empirically evaluate a tamper-evident, blockchain-secured solar energy logging system for resource-constrained edge Internet of Things (IoT) devices. Using a Merkle tree batching approach in conjunction with threshold-triggered blockchain anchoring, the system combines high-frequency local logging with energy-efficient, cryptographically verifiable submissions to the Ethereum Sepolia testnet, a public Proof-of-Stake (PoS) blockchain. The logger captured and hashed cryptographic chains on a minute-by-minute basis during a continuous 135 h deployment on a Raspberry Pi equipped with an INA219 sensor. Thanks to effective retrial and daily rollover mechanisms, it committed 130 verified Merkle batches to the blockchain without any data loss or unverifiable records, even during internet outages. The system offers robust end-to-end auditability and tamper resistance with low operational and carbon overhead, which was tested with comparative benchmarking against other blockchain logging models and conventional local and cloud-based loggers. The findings illustrate the technical and sustainability feasibility of digital audit trails based on blockchain technology for distributed solar energy systems. These audit trails facilitate scalable environmental, social, and governance (ESG) reporting, automated renewable energy certification, and transparent carbon accounting. Full article
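
The Merkle-batching idea is straightforward to sketch: hash each minute-level record, fold the hashes into a Merkle root, and anchor only that root on-chain once a threshold is reached. The record format and threshold below are assumptions, and the actual Ethereum Sepolia submission is out of scope.

```python
# Minimal Merkle batching sketch: one root summarizes a batch of log records,
# so only the root needs to be anchored on-chain.
import hashlib, json

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                   # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

BATCH_THRESHOLD = 60                         # e.g. one batch per hour of minute logs (assumed)
batch = [json.dumps({"t": i, "power_w": 3.2 + 0.01 * i}).encode() for i in range(60)]
if len(batch) >= BATCH_THRESHOLD:
    root = merkle_root(batch)
    print("anchor this root on-chain:", root.hex())
```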

22 pages, 3203 KB  
Article
Task Offloading Strategy of Multi-Objective Optimization Algorithm Based on Particle Swarm Optimization in Edge Computing
by Liping Yang, Shengyu Wang, Wei Zhang, Bin Jing, Xiaoru Yu, Ziqi Tang and Wei Wang
Appl. Sci. 2025, 15(17), 9784; https://doi.org/10.3390/app15179784 - 5 Sep 2025
Cited by 1 | Viewed by 2043
Abstract
With the rapid development of edge computing and deep learning, the efficient deployment of deep neural networks (DNNs) on resource-constrained terminal devices faces multiple challenges, such as execution delay, high energy consumption, and resource allocation costs. This study proposes an improved Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. Unlike the conventional PSO, our approach integrates a historical optimal solution detection mechanism and a dynamic temperature regulation strategy to overcome its limitations in this application scenario. First, an end–edge–cloud collaborative computing framework is constructed. Within this framework, a multi-objective optimization model is established, aiming to minimize time delay, energy consumption, and cloud configuration cost. To solve this model, an optimization method is designed that integrates a historical optimal solution detection mechanism and a dynamic temperature regulation strategy into the MOPSO algorithm. Experiments on six types of DNNs, including the Visual Geometry Group (VGG) series, have shown that this algorithm reduces execution time by an average of 58.6% and average energy consumption by 61.8%, and optimizes cloud configuration costs by 36.1% compared to traditional offloading strategies. Its Global Search Capability Index (GSCI) reaches 92.3%, which is 42.6% higher than the standard PSO algorithm. This method provides an efficient, secure, and stable cooperative computing solution for multi-constraint task offloading in an edge computing environment. Full article
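
To make the three objectives concrete, the sketch below scores a candidate DNN layer placement (end, edge, or cloud per layer) on delay, energy, and cloud cost, which is what each MOPSO particle would be evaluated on. The cost model and constants are illustrative assumptions, not the paper's formulation.

```python
# Illustrative three-objective evaluation of a DNN layer-offloading decision.
FLOPS = {"end": 5e9, "edge": 5e10, "cloud": 5e11}     # device compute (FLOP/s), assumed
POWER = {"end": 2.0, "edge": 10.0, "cloud": 0.0}      # watts attributed to the task, assumed
UPLINK = 20e6                                          # bits/s from the end device, assumed
CLOUD_PRICE = 1e-12                                    # $ per FLOP in the cloud, assumed

def evaluate(layers, placement):
    """layers: list of (flops, output_bits); placement: same-length list of tiers."""
    delay = energy = cost = 0.0
    for (flops, out_bits), tier in zip(layers, placement):
        t = flops / FLOPS[tier]
        delay += t
        energy += POWER[tier] * t
        if tier != "end":                              # offloaded data crosses the uplink
            delay += out_bits / UPLINK
        if tier == "cloud":
            cost += CLOUD_PRICE * flops
    return delay, energy, cost                         # objectives a particle minimizes

layers = [(2e8, 8e5), (5e8, 4e5), (1e9, 1e5)]
print(evaluate(layers, ["end", "edge", "cloud"]))
```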

23 pages, 4093 KB  
Article
Multi-Objective Optimization with Server Load Sensing in Smart Transportation
by Youjian Yu, Zhaowei Song and Qinghua Zhang
Appl. Sci. 2025, 15(17), 9717; https://doi.org/10.3390/app15179717 - 4 Sep 2025
Viewed by 526
Abstract
The rapid development of telematics technology has greatly supported high-computing applications like autonomous driving and real-time road condition prediction. However, the limited computational resources and dynamic topology of in-vehicle terminals pose challenges such as delay, load imbalance, and bandwidth consumption. To address these, a three-layer vehicular network architecture based on cloud–edge–end collaboration was proposed, with V2X technology used for multi-hop transmission. Models for delay, energy consumption, and edge caching were designed to meet the requirements for low delay, energy efficiency, and effective caching. Additionally, a dynamic pricing model for edge resources, based on load-awareness, was proposed to balance service quality and cost-effectiveness. The enhanced NSGA-III algorithm (ADP-NSGA-III) was applied to optimize system delay, energy consumption, and system resource pricing. The experimental results (mean of 30 independent runs) indicate that, compared with the NSGA-II, NSGA-III, MOEA-D, and SPEA2 optimization schemes, the proposed scheme reduced system delay by 21.63%, 5.96%, 17.84%, and 8.30%, respectively, in a system with 55 tasks. The energy consumption was reduced by 11.87%, 7.58%, 15.59%, and 9.94%, respectively. Full article
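
The load-aware pricing idea can be sketched as a unit price that grows convexly with server utilization, steering tasks toward lightly loaded edge servers. The base price and steepness below are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal load-aware dynamic pricing sketch for edge resources.
def edge_price(utilization: float, base: float = 0.02, steepness: float = 3.0) -> float:
    """Price per CPU-second as a convex function of current load in [0, 1]."""
    u = min(max(utilization, 0.0), 1.0)
    return base * (1.0 + steepness * u ** 2)

for u in (0.1, 0.5, 0.9):
    print(f"load {u:.0%} -> {edge_price(u):.4f} $/CPU-s")
```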

29 pages, 5213 KB  
Article
Design and Implementation of a Novel Intelligent Remote Calibration System Based on Edge Intelligence
by Quan Wang, Jiliang Fu, Xia Han, Xiaodong Yin, Jun Zhang, Xin Qi and Xuerui Zhang
Symmetry 2025, 17(9), 1434; https://doi.org/10.3390/sym17091434 - 3 Sep 2025
Viewed by 715
Abstract
Calibration of power equipment has become an essential task in modern power systems. This paper proposes a distributed remote calibration prototype based on a cloud–edge–end architecture by integrating intelligent sensing, Internet of Things (IoT) communication, and edge computing technologies. The prototype employs a high-precision frequency-to-voltage conversion module leveraging satellite signals to address traceability and value transmission challenges in remote calibration, thereby ensuring reliability and stability throughout the process. Additionally, an environmental monitoring module tracks parameters such as temperature, humidity, and electromagnetic interference. Combined with video surveillance and optical character recognition (OCR), this enables intelligent, end-to-end recording and automated data extraction during calibration. Furthermore, a cloud-edge task scheduling algorithm is implemented to offload computational tasks to edge nodes, maximizing resource utilization within the cloud–edge collaborative system and enhancing service quality. The proposed prototype extends existing cloud–edge collaboration frameworks by incorporating calibration instruments and sensing devices into the network, thereby improving the intelligence and accuracy of remote calibration across multiple layers. Furthermore, this approach facilitates synchronized communication and calibration operations across symmetrically deployed remote facilities and reference devices, providing solid technical support to ensure that measurement equipment meets the required precision and performance criteria. Full article
(This article belongs to the Section Computer)

20 pages, 3112 KB  
Article
A Cloud-Edge-End Collaborative Framework for Adaptive Process Planning by Welding Robots
by Kangjie Shi and Weidong Shen
Machines 2025, 13(9), 798; https://doi.org/10.3390/machines13090798 - 2 Sep 2025
Viewed by 624
Abstract
The emergence of mass personalized production has increased the adaptability and intelligence requirements of welding robots. To address the challenges associated with mass personalized production, this paper proposes a novel knowledge-driven framework for intelligent welding process planning in cloud robotics systems. This framework integrates cloud-edge-end collaborative computing with ontology-based knowledge representation to enable efficient welding process optimization. A hierarchical knowledge-based architecture was developed using the SQLite 3.38.0, Redis 5.0.4, and HBase 2.1.0 tools. The ontology models formally define the welding tasks, resources, processes, and results, thereby enabling semantic interoperability across heterogeneous systems. A hybrid knowledge evolution method that combines cloud-based welding simulation and transfer learning is presented as a means of achieving inexpensive, efficient, and intelligent evolution of welding process knowledge. Experiments demonstrated that, with respect to pure cloud-based solutions, edge-based knowledge bases can reduce the average response time by 86%. The WeldNet-152 model achieved a welding parameter prediction accuracy of 95.1%, while the knowledge evolution method exhibited a simulation-to-reality transfer accuracy of 78%. The proposed method serves as a foundation for significant enhancements in the adaptability of welding robots to Industry 5.0 manufacturing environments. Full article
(This article belongs to the Section Advanced Manufacturing)

22 pages, 1672 KB  
Article
Optimizing Robotic Disassembly-Assembly Line Balancing with Directional Switching Time via an Improved Q(λ) Algorithm in IoT-Enabled Smart Manufacturing
by Qi Zhang, Yang Xing, Man Yao, Xiwang Guo, Shujin Qin, Haibin Zhu, Liang Qi and Bin Hu
Electronics 2025, 14(17), 3499; https://doi.org/10.3390/electronics14173499 - 1 Sep 2025
Cited by 1 | Viewed by 816
Abstract
With the growing adoption of circular economy principles in manufacturing, efficient disassembly and reassembly of end-of-life (EOL) products has become a key challenge in smart factories. This paper addresses the Disassembly and Assembly Line Balancing Problem (DALBP), which involves scheduling robotic tasks across workstations while minimizing total operation time and accounting for directional switching time between disassembly and assembly phases. To solve this problem, we propose an improved reinforcement learning algorithm, IQ(λ), which extends the classical Q(λ) method by incorporating eligibility trace decay, a dynamic Action Table mechanism to handle non-conflicting parallel tasks, and switching-aware reward shaping to penalize inefficient task transitions. Compared with standard Q(λ), these modifications enhance the algorithm’s global search capability, accelerate convergence, and improve solution quality in complex DALBP scenarios. While the current implementation does not deploy live IoT infrastructure, the architecture is modular and designed to support future extensions involving edge-cloud coordination, trust-aware optimization, and privacy-preserving learning in Industrial Internet of Things (IIoT) environments. Four real-world disassembly-assembly cases (flashlight, copier, battery, and hammer drill) are used to evaluate the algorithm’s effectiveness. Experimental results show that IQ(λ) consistently outperforms traditional Q-learning, Q(λ), and Sarsa in terms of solution quality, convergence speed, and robustness. Furthermore, ablation studies and sensitivity analysis confirm the importance of the algorithm’s core design components. This work provides a scalable and extensible framework for intelligent scheduling in cyber-physical manufacturing systems and lays a foundation for future integration with secure, IoT-connected environments. Full article
(This article belongs to the Section Networks)
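
The baseline that IQ(λ) extends can be sketched as a tabular Q(λ) update with eligibility traces; the Action Table and switching-aware reward shaping would sit on top of this. States, actions, and hyperparameters below are placeholders, and the trace handling is simplified (traces are decayed but not cut after exploratory actions, unlike strict Watkins Q(λ)).

```python
# Simplified tabular Q(lambda) with eligibility traces and epsilon-greedy choice.
import random
from collections import defaultdict

ALPHA, GAMMA, LAMBDA, EPS = 0.1, 0.95, 0.8, 0.1
Q = defaultdict(float)          # Q[(state, action)]
E = defaultdict(float)          # eligibility trace per (state, action)

def choose(state, actions):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def step_update(s, a, r, s_next, actions):
    """One Q(lambda) backup after observing transition (s, a, r, s_next)."""
    a_star = max(actions, key=lambda x: Q[(s_next, x)])
    delta = r + GAMMA * Q[(s_next, a_star)] - Q[(s, a)]
    E[(s, a)] += 1.0                               # accumulate trace for the visited pair
    for key in list(E):
        Q[key] += ALPHA * delta * E[key]
        E[key] *= GAMMA * LAMBDA                   # decay all traces
```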

23 pages, 16525 KB  
Article
Real-Time Vision–Language Analysis for Autonomous Underwater Drones: A Cloud–Edge Framework Using Qwen2.5-VL
by Wannian Li and Fan Zhang
Drones 2025, 9(9), 605; https://doi.org/10.3390/drones9090605 - 27 Aug 2025
Viewed by 1513
Abstract
Autonomous Underwater Vehicles (AUVs) equipped with vision systems face unique challenges in real-time environmental perception due to harsh underwater conditions and computational constraints. This paper presents a novel cloud–edge framework for real-time vision–language analysis in underwater drones using the Qwen2.5-VL model. Our system employs a uniform frame sampling mechanism that balances temporal resolution with processing capabilities, achieving near real-time analysis at 1 fps from 23 fps input streams. We construct a comprehensive data flow model encompassing image enhancement, communication latency, cloud-side inference, and semantic result return, which is supported by a theoretical latency framework and sustainable processing rate analysis. Simulation-based experimental results across three challenging underwater scenarios—pipeline inspection, coral reef monitoring, and wreck investigation—demonstrate consistent scene comprehension with end-to-end latencies near 1 s. The Qwen2.5-VL model successfully generates natural language summaries capturing spatial structure, biological content, and habitat conditions, even under turbidity and occlusion. Our results show that vision–language models (VLMs) can provide rich semantic understanding of underwater scenes despite challenging conditions, enabling AUVs to perform complex monitoring tasks with natural language scene descriptions. This work contributes to advancing AI-powered perception systems for the growing autonomous underwater drone market, supporting applications in environmental monitoring, offshore infrastructure inspection, and marine ecosystem assessment. Full article
(This article belongs to the Special Issue Advances in Autonomous Underwater Drones: 2nd Edition)
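
The frame-sampling and latency reasoning can be sketched in a few lines: choose a stride so the analyzed rate is roughly 1 fps from a 23 fps stream, and budget the per-frame pipeline stages. The stage latencies below are illustrative numbers, not the paper's measurements.

```python
# Illustrative uniform frame sampling and end-to-end latency budget.
CAMERA_FPS = 23.0
TARGET_FPS = 1.0
stride = round(CAMERA_FPS / TARGET_FPS)            # analyze every 23rd frame

stage_latency_s = {                                # assumed per-frame costs
    "enhancement": 0.10,
    "uplink": 0.15,
    "cloud_vlm_inference": 0.60,
    "result_return": 0.05,
}
end_to_end = sum(stage_latency_s.values())
sustainable_fps = 1.0 / stage_latency_s["cloud_vlm_inference"]  # bottleneck stage

print(f"sample every {stride} frames, end-to-end latency ~{end_to_end:.2f} s")
print(f"sustainable rate ~{sustainable_fps:.2f} fps vs target {TARGET_FPS} fps")
```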
