Search Results (200)

Search Parameters:
Keywords = industrial and robotic vision systems

32 pages, 5560 KiB  
Article
Design of Reconfigurable Handling Systems for Visual Inspection
by Alessio Pacini, Francesco Lupi and Michele Lanzetta
J. Manuf. Mater. Process. 2025, 9(8), 257; https://doi.org/10.3390/jmmp9080257 - 31 Jul 2025
Viewed by 130
Abstract
Industrial Vision Inspection Systems (VISs) often struggle to adapt to the increasing variability of modern manufacturing due to the inherent rigidity of their hardware architectures. Although the Reconfigurable Manufacturing System (RMS) paradigm was introduced in the early 2000s to overcome these limitations, designing such reconfigurable machines remains a complex, expert-dependent, and time-consuming task, primarily because of the lack of structured methodologies and the reliance on trial-and-error processes. In this context, this study proposes a novel theoretical framework to facilitate the design of fully reconfigurable handling systems for VISs, with a particular focus on fixture design. The framework is grounded in Model-Based Definition (MBD), embedding semantic information directly into the 3D CAD models of the inspected product. As an additional contribution, a general hardware architecture for the inspection of axisymmetric components is presented. This architecture integrates an anthropomorphic robotic arm, Numerically Controlled (NC) modules, and adaptable software and hardware components to enable automated, software-driven reconfiguration. The proposed framework and architecture were applied in an industrial case study conducted in collaboration with a leading automotive half-shaft manufacturer. The resulting system, implemented across seven automated cells, successfully inspected over 200 part types from 12 part families and detected more than 60 defect types, with a cycle time below 30 s per part.
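
A minimal illustration of the Model-Based Definition idea described in the abstract above: inspection-relevant semantics attached directly to features of the product model so that fixture and handling configurations can be derived automatically. The class names, fields, and example values are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class InspectionFeature:
    """A hypothetical semantic annotation embedded in the product's 3D model."""
    name: str                 # e.g. "spline", "seal seat"
    defect_types: list[str]   # defects to check on this feature
    required_views: int       # camera viewpoints needed for inspection
    grip_allowed: bool        # whether a reconfigurable fixture may clamp here

@dataclass
class ProductModel:
    part_family: str
    cad_file: str
    features: list[InspectionFeature] = field(default_factory=list)

    def fixture_candidates(self) -> list[InspectionFeature]:
        """Features where a reconfigurable fixture is allowed to clamp."""
        return [f for f in self.features if f.grip_allowed]

half_shaft = ProductModel(
    part_family="half-shaft",
    cad_file="shaft_A12.step",
    features=[
        InspectionFeature("spline", ["burr", "missing tooth"], required_views=4, grip_allowed=False),
        InspectionFeature("shaft body", ["scratch", "dent"], required_views=8, grip_allowed=True),
    ],
)
print([f.name for f in half_shaft.fixture_candidates()])
```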

20 pages, 3729 KiB  
Article
Can AIGC Aid Intelligent Robot Design? A Tentative Research of Apple-Harvesting Robot
by Qichun Jin, Jiayu Zhao, Wei Bao, Ji Zhao, Yujuan Zhang and Fuwen Hu
Processes 2025, 13(8), 2422; https://doi.org/10.3390/pr13082422 - 30 Jul 2025
Viewed by 326
Abstract
Artificial intelligence (AI)-generated content (AIGC) is fundamentally transforming multiple sectors, including materials discovery, healthcare, education, scientific research, and industrial manufacturing. Given the complexities and challenges of intelligent robot design, AIGC has the potential to offer a new paradigm, assisting in conceptual and technical design, functional module design, and the training of perception abilities to accelerate prototyping. Taking the design of an apple-harvesting robot as an example, we first demonstrate a basic framework for an AIGC-assisted robot design methodology, leveraging the generation capabilities of available multimodal large language models together with human intervention to alleviate AI hallucination and hidden risks. Second, we study the enhancement of the robot's perception system when apple images generated by large vision-language models are used to expand the dataset of real apple images. Further, an apple-harvesting robot prototype based on the AIGC-aided design is demonstrated, and a pick-up experiment in a simulated scene indicates that it achieves a harvesting success rate of 92.2% and good terrain traversability, with a maximum climbing angle of 32°. Although not an autonomous design agent, the AIGC-driven design workflow can alleviate the significant complexities and challenges of intelligent robot design, especially for beginners and young engineers.
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)
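
A minimal sketch of the dataset-expansion step described above: mixing real apple images with synthetically generated ones before training a detector. The directory paths, ratio, and training hand-off are illustrative assumptions, not details from the paper.

```python
import random
from pathlib import Path

def build_training_list(real_dir: str, synthetic_dir: str,
                        synthetic_ratio: float = 0.5, seed: int = 0) -> list[Path]:
    """Combine real and AI-generated images; cap the synthetic share at the given ratio."""
    real = sorted(Path(real_dir).glob("*.jpg"))
    synthetic = sorted(Path(synthetic_dir).glob("*.jpg"))
    rng = random.Random(seed)

    # Keep at most `synthetic_ratio` of the final set synthetic to limit domain shift.
    max_synth = int(len(real) * synthetic_ratio / (1.0 - synthetic_ratio))
    synthetic = rng.sample(synthetic, min(max_synth, len(synthetic)))

    combined = real + synthetic
    rng.shuffle(combined)
    return combined

# Hypothetical directories; the resulting list would feed a YOLO-style training loop.
train_images = build_training_list("data/apples_real", "data/apples_aigc", synthetic_ratio=0.4)
print(f"{len(train_images)} training images")
```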

22 pages, 6487 KiB  
Article
An RGB-D Vision-Guided Robotic Depalletizing System for Irregular Camshafts with Transformer-Based Instance Segmentation and Flexible Magnetic Gripper
by Runxi Wu and Ping Yang
Actuators 2025, 14(8), 370; https://doi.org/10.3390/act14080370 - 24 Jul 2025
Viewed by 273
Abstract
Accurate segmentation of densely stacked and weakly textured objects remains a core challenge in robotic depalletizing for industrial applications. To address this, we propose MaskNet, an instance segmentation network tailored for RGB-D input and designed to enhance recognition performance under occlusion and low-texture conditions. Built upon a Vision Transformer backbone, MaskNet adopts a dual-branch architecture for the RGB and depth modalities and integrates multi-modal features using an attention-based fusion module. Spatial and channel attention mechanisms are further employed to refine feature representation and improve instance-level discrimination. The segmentation outputs are used in conjunction with regional depth to optimize the grasping sequence. Experimental evaluations on camshaft depalletizing tasks demonstrate that MaskNet achieves a precision of 0.980, a recall of 0.971, and an F1-score of 0.975, outperforming a YOLO11-based baseline. In a real-world scenario with a self-designed flexible magnetic gripper, the system maintains a maximum grasping error of 9.85 mm and a 98% task success rate across multiple camshaft types. These results validate the effectiveness of MaskNet in enabling fine-grained perception for robotic manipulation in cluttered, real-world scenarios.
(This article belongs to the Section Actuators for Robotics)
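
A toy sketch of the dual-branch RGB-D fusion idea described in the abstract above, written in PyTorch. The layer sizes, convolutional encoders, and channel-attention rule are illustrative stand-ins; MaskNet's actual ViT backbone and attention modules are defined in the paper.

```python
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Two-branch encoder with attention-based fusion of RGB and depth features (illustrative)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.rgb_branch = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.ReLU())
        self.depth_branch = nn.Sequential(nn.Conv2d(1, dim, 3, padding=1), nn.ReLU())
        # Per-channel attention weights predicted from the concatenated features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * dim, 2 * dim, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        f_rgb = self.rgb_branch(rgb)        # (B, dim, H, W)
        f_depth = self.depth_branch(depth)  # (B, dim, H, W)
        fused = torch.cat([f_rgb, f_depth], dim=1)
        weights = self.attn(fused)          # (B, 2*dim, 1, 1) channel attention
        return self.project(fused * weights)

# Example: one 480x640 RGB-D frame.
rgb = torch.randn(1, 3, 480, 640)
depth = torch.randn(1, 1, 480, 640)
features = DualBranchFusion()(rgb, depth)
print(features.shape)  # torch.Size([1, 64, 480, 640])
```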

22 pages, 11043 KiB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 - 17 Jul 2025
Viewed by 403
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation.
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
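
A minimal sketch of the language-interface step described above: turning the locally hosted LLM's reply to a voice command into a validated, structured robot command. The JSON schema, prompt, action set, and the `query_llm` stub are illustrative assumptions, not the paper's implementation or the Isaac Sim API.

```python
import json
from dataclasses import dataclass

SYSTEM_PROMPT = (
    "You control a slide-sorting robot. Reply ONLY with JSON of the form "
    '{"action": "pick" | "place" | "stop", "target": "<slide id or tray>"}'
)

@dataclass
class RobotCommand:
    action: str
    target: str

def query_llm(user_text: str) -> str:
    """Stub for the locally hosted LLM (e.g. Mistral 7B behind a local endpoint)."""
    # In the real system this would send SYSTEM_PROMPT + user_text to the model.
    return '{"action": "pick", "target": "slide_03"}'

def interpret(user_text: str) -> RobotCommand | None:
    """Parse the model's reply and refuse anything outside the allowed action set."""
    try:
        data = json.loads(query_llm(user_text))
        if data.get("action") not in {"pick", "place", "stop"}:
            return None  # unknown action: ask the user to rephrase
        return RobotCommand(action=data["action"], target=str(data.get("target", "")))
    except (json.JSONDecodeError, TypeError):
        return None  # malformed model output: fall back to a clarification prompt

print(interpret("Please pick up slide three"))
```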

22 pages, 3768 KiB  
Article
A Collaborative Navigation Model Based on Multi-Sensor Fusion of Beidou and Binocular Vision for Complex Environments
by Yongxiang Yang and Zhilong Yu
Appl. Sci. 2025, 15(14), 7912; https://doi.org/10.3390/app15147912 - 16 Jul 2025
Viewed by 344
Abstract
This paper addresses the issues of Beidou navigation signal interference and blockage in complex substation environments by proposing an intelligent collaborative navigation model based on Beidou high-precision navigation and binocular vision recognition. In the model, Beidou navigation provides global positioning references while binocular vision provides local environmental perception, combined through a collaborative fusion strategy. An Unscented Kalman Filter (UKF) integrates data from the multiple sensors to ensure high-precision positioning and dynamic obstacle avoidance for robots in complex environments. Simulation results show that the Beidou–Binocular Cooperative Navigation (BBCN) model achieves a global positioning error of less than 5 cm in non-interference scenarios and an error of only 6.2 cm under high-intensity electromagnetic interference, significantly outperforming the single-Beidou model's error of 40.2 cm. Path planning efficiency is close to optimal (with an efficiency factor within 1.05), the obstacle avoidance success rate reaches 95%, and system delay remains within 80 ms, meeting the real-time requirements of industrial scenarios. The fusion approach enables reliable autonomous robot inspection in high-voltage environments, offering significant practical value in reducing human risk exposure, lowering maintenance costs, and improving inspection efficiency in power industry applications. It also enables continuous monitoring of critical power infrastructure that was previously difficult to automate due to navigation challenges in electromagnetically complex environments.
(This article belongs to the Special Issue Advanced Robotics, Mechatronics, and Automation)
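
For intuition, a heavily simplified sketch of the kind of covariance-weighted fusion a Kalman-type filter performs at its measurement update, reduced here to combining a global Beidou position fix with a local vision-derived estimate. The paper's full UKF also propagates a motion model and sigma points; the numbers below are illustrative only.

```python
import numpy as np

def fuse(pos_beidou, cov_beidou, pos_vision, cov_vision):
    """Covariance-weighted fusion of two 2D position estimates (information-filter form)."""
    info_b = np.linalg.inv(cov_beidou)
    info_v = np.linalg.inv(cov_vision)
    cov_fused = np.linalg.inv(info_b + info_v)
    pos_fused = cov_fused @ (info_b @ pos_beidou + info_v @ pos_vision)
    return pos_fused, cov_fused

# Illustrative numbers: Beidou degraded by interference (large covariance),
# binocular vision locally accurate (small covariance).
pos_b = np.array([12.40, 3.10]); cov_b = np.diag([0.40**2, 0.40**2])   # ~40 cm sigma
pos_v = np.array([12.05, 3.02]); cov_v = np.diag([0.05**2, 0.05**2])   # ~5 cm sigma
pos, cov = fuse(pos_b, cov_b, pos_v, cov_v)
print(pos, np.sqrt(np.diag(cov)))  # fused estimate sits close to the vision fix
```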

30 pages, 2023 KiB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Viewed by 594
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI are jointly enabling context-aware, adaptive cobot capabilities across perception, planning, and decision-making remains lacking (especially in recent years). Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill the best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions—from scalable training methods to interoperability standards—to foster safe, robust, and proactive human–robot partnerships in the years ahead.

18 pages, 3556 KiB  
Article
Multi-Sensor Fusion for Autonomous Mobile Robot Docking: Integrating LiDAR, YOLO-Based AprilTag Detection, and Depth-Aided Localization
by Yanyan Dai and Kidong Lee
Electronics 2025, 14(14), 2769; https://doi.org/10.3390/electronics14142769 - 10 Jul 2025
Viewed by 531
Abstract
Reliable and accurate docking remains a fundamental challenge for autonomous mobile robots (AMRs) operating in complex industrial environments with dynamic lighting, motion blur, and occlusion. This study proposes a novel multi-sensor fusion-based docking framework that significantly enhances robustness and precision by integrating YOLOv8-based AprilTag detection, depth-aided 3D localization, and LiDAR-based orientation correction. A key contribution of this work is the construction of a custom AprilTag dataset featuring real-world visual disturbances, enabling the YOLOv8 model to achieve high-accuracy detection and ID classification under challenging conditions. To ensure precise spatial localization, 2D visual tag coordinates are fused with depth data to compute 3D positions in the robot’s frame. A LiDAR group-symmetry mechanism estimates heading deviation, which is combined with visual feedback in a hybrid PID controller to correct angular errors. A finite-state machine governs the docking sequence, including detection, approach, yaw alignment, and final engagement. Simulation and experimental results demonstrate that the proposed system achieves higher docking success rates and improved pose accuracy under various challenging conditions compared to traditional vision- or LiDAR-only approaches.
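
A minimal sketch of the depth-aided localization step described above: back-projecting a detected tag's pixel coordinates into a 3D point using the pinhole camera model and the depth value at that pixel. The intrinsic values and detection coordinates are placeholders, not calibration data from the paper.

```python
import numpy as np

def pixel_to_3d(u: float, v: float, depth_m: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project pixel (u, v) with metric depth into the camera frame (pinhole model)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Illustrative intrinsics for a 640x480 depth camera and a tag centre found by the detector.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
tag_center_px = (402.5, 251.0)   # (u, v) from the detected bounding box
depth_at_tag = 1.35              # metres, read from the aligned depth image

p_cam = pixel_to_3d(*tag_center_px, depth_at_tag, fx, fy, cx, cy)
print(p_cam)  # tag position in the camera frame; a fixed extrinsic maps it into the robot frame
```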

40 pages, 2250 KiB  
Review
Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application
by Sk Hasan and Nafizul Alam
Actuators 2025, 14(7), 342; https://doi.org/10.3390/act14070342 - 9 Jul 2025
Viewed by 606
Abstract
This review provides a comprehensive analysis of recent advancements in lower limb exoskeleton systems, focusing on applications, control strategies, hardware architecture, sensing modalities, human–robot interaction, evaluation methods, and technical innovations. The study spans systems developed for gait rehabilitation, mobility assistance, terrain adaptation, pediatric use, and industrial support. Applications range from sit-to-stand transitions and post-stroke therapy to balance support and real-world navigation. Control approaches vary from traditional impedance and fuzzy logic models to advanced data-driven frameworks, including reinforcement learning, recurrent neural networks, and digital twin-based optimization. These controllers support personalized and adaptive interaction, enabling real-time intent recognition, torque modulation, and gait phase synchronization across different users and tasks. Hardware platforms include powered multi-degree-of-freedom exoskeletons, passive assistive devices, compliant joint systems, and pediatric-specific configurations. Innovations in actuator design, modular architecture, and lightweight materials support increased usability and energy efficiency. Sensor systems integrate EMG, EEG, IMU, vision, and force feedback, supporting multimodal perception for motion prediction, terrain classification, and user monitoring. Human–robot interaction strategies emphasize safe, intuitive, and cooperative engagement. Controllers are increasingly user-specific, leveraging biosignals and gait metrics to tailor assistance. Evaluation methodologies include simulation, phantom testing, and human–subject trials across clinical and real-world environments, with performance measured through joint tracking accuracy, stability indices, and functional mobility scores. Overall, the review highlights the field’s evolution toward intelligent, adaptable, and user-centered systems, offering promising solutions for rehabilitation, mobility enhancement, and assistive autonomy in diverse populations. Following a detailed review of current developments, strategic recommendations are made to enhance and evolve existing exoskeleton technologies.
(This article belongs to the Section Actuators for Robotics)

32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Viewed by 1292
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting their complementary operating envelopes and the rise of learning-based depth inference. Advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

24 pages, 13673 KiB  
Article
Autonomous Textile Sorting Facility and Digital Twin Utilizing an AI-Reinforced Collaborative Robot
by Torbjørn Seim Halvorsen, Ilya Tyapin and Ajit Jha
Electronics 2025, 14(13), 2706; https://doi.org/10.3390/electronics14132706 - 4 Jul 2025
Viewed by 446
Abstract
This paper presents the design and implementation of an autonomous robotic facility for textile sorting and recycling, leveraging advanced computer vision and machine learning technologies. The system enables real-time textile classification, localization, and sorting on a dynamically moving conveyor belt. A custom-designed pneumatic gripper is developed for versatile textile handling, optimizing autonomous picking and placing operations. Additionally, digital simulation techniques are utilized to refine robotic motion and enhance overall system reliability before real-world deployment. The multi-threaded architecture facilitates the concurrent and efficient execution of textile classification, robotic manipulation, and conveyor belt operations. Key contributions include (a) dynamic and real-time textile detection and localization, (b) the development and integration of a specialized robotic gripper, (c) real-time autonomous robotic picking from a moving conveyor, and (d) scalability in sorting operations for recycling automation across various industry scales. The system progressively incorporates enhancements, such as queuing management for continuous operation and multi-thread optimization. Advanced material detection techniques are also integrated to ensure compliance with the stringent performance requirements of industrial recycling applications.
(This article belongs to the Special Issue New Insights Into Smart and Intelligent Sensors)
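
A minimal sketch of the multi-threaded, queue-based pattern the abstract describes: one thread classifies textiles from the camera stream and another consumes detections to drive picking. Function names, payload fields, and timings are illustrative, not taken from the paper.

```python
import queue
import threading
import time

detections: "queue.Queue[dict]" = queue.Queue(maxsize=16)

def vision_worker(stop: threading.Event) -> None:
    """Producer: simulate classifying textiles on the moving belt and queue pick targets."""
    item_id = 0
    while not stop.is_set():
        time.sleep(0.2)  # stand-in for camera capture + classification latency
        detections.put({"id": item_id, "material": "cotton", "x_mm": 150 + 5 * item_id})
        item_id += 1

def robot_worker(stop: threading.Event) -> None:
    """Consumer: pop the next detection and command a (simulated) pick-and-place."""
    while not stop.is_set():
        try:
            target = detections.get(timeout=0.5)
        except queue.Empty:
            continue
        print(f"picking item {target['id']} ({target['material']}) at x={target['x_mm']} mm")
        detections.task_done()

stop = threading.Event()
threads = [threading.Thread(target=w, args=(stop,), daemon=True) for w in (vision_worker, robot_worker)]
for t in threads:
    t.start()
time.sleep(1.0)   # let the pipeline run briefly
stop.set()
```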

59 pages, 3738 KiB  
Article
A Survey of Visual SLAM Based on RGB-D Images Using Deep Learning and Comparative Study for VOE
by Van-Hung Le and Thi-Ha-Phuong Nguyen
Algorithms 2025, 18(7), 394; https://doi.org/10.3390/a18070394 - 27 Jun 2025
Viewed by 624
Abstract
Visual simultaneous localization and mapping (Visual SLAM) based on RGB-D image data involves two main tasks: building an environment map and simultaneously tracking the camera's position and movement through visual odometry estimation (VOE). Visual SLAM and VOE are used in many applications, such as robot systems, autonomous mobile robots, assistance systems for the blind, human–machine interaction, and industry. Deep learning (DL) is an approach that gives very convincing results for the computer vision problems underlying Visual SLAM and VOE from RGB-D images. This manuscript examines the results, advantages, difficulties, and challenges of Visual SLAM and VOE based on DL. A taxonomy is proposed to conduct a complete survey based on three ways of constructing Visual SLAM and VOE from RGB-D images: (1) using DL for the modules of Visual SLAM and VOE systems; (2) using DL to supplement the modules of Visual SLAM and VOE systems; and (3) using end-to-end DL to build Visual SLAM and VOE systems. A total of 220 scientific publications on Visual SLAM, VOE, and related issues were surveyed and organized by method, dataset, evaluation measure, and detailed results. In particular, the challenges, advantages, and disadvantages of studies that use DL to build Visual SLAM and VOE systems are analyzed. We also propose and publish the TQU-SLAM benchmark dataset and perform a comparative study on fine-tuning VOE models using a Multi-Layer Fusion network (MLF-VO) framework. VOE errors on the TQU-SLAM benchmark dataset range from 16.97 m to 57.61 m, which is very large compared with VOE results on the KITTI, TUM RGB-D SLAM, and ICL-NUIM datasets. The dataset we publish is therefore very challenging, especially in the opposite direction (OP-D) of data collection and annotation. The results of the comparative study are presented in detail and made available.
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)
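
The trajectory errors quoted above are typically reported as absolute trajectory error (ATE) between estimated and ground-truth camera positions. A minimal sketch of the RMSE form of that metric, under the simplifying assumption that the two trajectories are already aligned and time-synchronized; the toy data below is illustrative.

```python
import numpy as np

def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error over N aligned 3D positions, shape (N, 3)."""
    diffs = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(diffs**2, axis=1))))

# Toy trajectories: the estimate drifts away from the ground truth over 100 frames.
t = np.linspace(0, 10, 100)
ground_truth = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
estimated = ground_truth + np.stack([0.02 * t, 0.05 * t, np.zeros_like(t)], axis=1)
print(f"ATE RMSE: {ate_rmse(estimated, ground_truth):.3f} m")
```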

31 pages, 7285 KiB  
Article
Development, Design, and Improvement of an Intelligent Harvesting System for Aquatic Vegetable Brasenia schreberi
by Xianping Guan, Longyuan Shi, Hongrui Ge, Yuhan Ding and Shicheng Nie
Agronomy 2025, 15(6), 1451; https://doi.org/10.3390/agronomy15061451 - 14 Jun 2025
Viewed by 385
Abstract
At present, there is a lack of effective and usable machinery for harvesting aquatic vegetables. The harvesting of most aquatic vegetables, such as Brasenia schreberi, relies entirely on manual labor, resulting in high labor demand and labor shortages, which restricts the industrial development of aquatic vegetables. To address this problem, an intelligent harvesting system for the aquatic vegetable Brasenia schreberi was developed in response to the challenging working conditions associated with harvesting it. The system is composed of a catamaran mobile platform, a picking device, and a harvesting manipulator control system. The mobile platform, driven by two paddle wheels, is equipped with a protective device to prevent vegetable stem entanglement, making it suitable for shallow pond environments. The self-designed picking device rapidly harvests vegetables through lateral clamping and cutting. The harvesting manipulator control system incorporates harvesting posture perception based on the YOLO-GS recognition algorithm and combines it with an improved RRT algorithm for robotic arm path planning. The experimental results indicate that the intelligent harvesting system is suitable for aquatic vegetable harvesting and that the improved RRT algorithm surpasses the traditional one in terms of planning time and path length. The vision-based positioning error was 4.80 mm, meeting harvesting accuracy requirements. In actual harvest experiments, the system showed an average success rate of 90.0%, with an average picking time of 5.229 s per leaf, proving its feasibility and effectiveness.
(This article belongs to the Special Issue Application of Machine Learning and Modelling in Food Crops)
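
For context, a bare-bones 2D RRT planner of the kind the improved algorithm builds on. Obstacle checking, goal biasing, and the paper's specific improvements are omitted, and all parameters are illustrative.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, x_range=(0, 10), y_range=(0, 10),
        max_iters=5000, seed=1):
    """Bare-bones RRT in free 2D space: grow a tree toward random samples until the goal is reached."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(*x_range), rng.uniform(*y_range))
        # Nearest existing node to the random sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Step a fixed distance from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

print(rrt(start=(0.5, 0.5), goal=(9.0, 9.0)))
```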

13 pages, 1400 KiB  
Communication
Human and Humanoid-in-the-Loop (HHitL) Ecosystem: An Industry 5.0 Perspective
by Mahdi Sadeqi Bajestani, Mohammad Mahruf Mahdi, Duhwan Mun and Duck Bong Kim
Machines 2025, 13(6), 510; https://doi.org/10.3390/machines13060510 - 12 Jun 2025
Viewed by 715
Abstract
As manufacturing transitions into the era of Industry 5.0, the demand for systems that are not only intelligent but also human-centric, resilient, and sustainable is becoming increasingly critical. This paper introduces the Human and Humanoid-in-the-Loop (HHitL) ecosystem, a novel framework that integrates both humans and humanoid robots as collaborative agents within cyber–physical manufacturing environments. Building on the foundational principles of Industry 5.0, the paper presents a 6P architecture that includes participation, purpose, preservation, physical assets, persistence, and projection. The core features of this ecosystem, including anthropomorphism, perceptual intelligence, cognitive adaptability, and dexterity/locomotion, are identified, and their enablers are also introduced. This work presents a forward-looking vision for next-generation manufacturing ecosystems where human values and robotic capabilities converge to form adaptive, ethical, and high-performance systems.
(This article belongs to the Section Advanced Manufacturing)

26 pages, 19159 KiB  
Article
Development of a Pipeline-Cleaning Robot for Heat-Exchanger Tubes
by Qianwen Liu, Canlin Li, Guangfei Wang, Lijuan Li, Jinrong Wang, Jianping Tan and Yuxiang Wu
Electronics 2025, 14(12), 2321; https://doi.org/10.3390/electronics14122321 - 6 Jun 2025
Viewed by 599
Abstract
Cleaning operations in narrow pipelines are often hindered by limited maneuverability and low efficiency, necessitating the development of a high-performance and highly adaptable robotic solution. To address this challenge, this study proposes a pipeline-cleaning robot specifically designed for the heat-exchange tubes of industrial heat exchangers. The robot features a dual-wheel cross-drive configuration to enhance motion stability and integrates a gear–rack-based alignment mechanism with a cam-based propulsion system to enable autonomous deployment and cleaning via a flexible arm. The robot adopts a modular architecture with a separated body and cleaning arm, allowing for rapid assembly and maintenance through bolted connections. A vision-guided control system is implemented to support accurate positioning and task scheduling within the primary pipeline. Experimental results demonstrate that the robot can stably execute automatic navigation and sub-pipe cleaning, achieving pipe-switching times of less than 30 s. The system operates reliably and significantly improves cleaning efficiency. The proposed robotic system exhibits strong adaptability and generalizability, offering an effective solution for automated cleaning in confined pipeline environments.
(This article belongs to the Special Issue Intelligent Mobile Robotic Systems: Decision, Planning and Control)

30 pages, 10731 KiB  
Article
Real-Time 3D Vision-Based Robotic Path Planning for Automated Adhesive Spraying on Lasted Uppers in Footwear Manufacturing
by Ya-Yung Huang, Jun-Ting Lai and Hsien-Huang Wu
Appl. Sci. 2025, 15(11), 6365; https://doi.org/10.3390/app15116365 - 5 Jun 2025
Viewed by 515
Abstract
The automation of adhesive application in footwear manufacturing is challenging due to complex surface geometries and model variability. This study presents an integrated 3D vision-based robotic system for adhesive spraying on lasted uppers. A triangulation-based scanning setup reconstructs each upper into a high-resolution point cloud, enabling customized spraying path planning. A six-axis robotic arm executes the path using an adaptive transformation matrix that aligns with surface normals. UV fluorescent dye and inspection are used to verify adhesive coverage. Experimental results confirm high repeatability and precision, with most deviations within the industry-accepted ±1 mm range. While localized glue-deficient areas were observed around high-curvature regions such as the toe cap, these remain limited and serve as a basis for further system enhancement. The system significantly reduces labor dependency and material waste, as observed through the replacement of four manual operators and the elimination of adhesive over-application in the tested production line. It has been successfully installed and validated on a production line in Hanoi, Vietnam, meeting real-world industrial requirements. This research contributes to advancing intelligent footwear manufacturing by integrating 3D vision, robotic motion control, and automation technologies.
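
A minimal sketch of the normal-alignment idea described above: building a 4x4 tool pose at a point of the scanned point cloud whose z-axis points along the inward surface normal, so the spray direction stays perpendicular to the upper. The reference-vector choice, standoff distance, and sample values are illustrative assumptions, not the paper's transformation.

```python
import numpy as np

def pose_from_point_and_normal(point: np.ndarray, normal: np.ndarray,
                               standoff_m: float = 0.05) -> np.ndarray:
    """4x4 tool pose whose z-axis sprays along -normal, offset `standoff_m` above the surface."""
    z = -normal / np.linalg.norm(normal)          # spray direction: into the surface
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, z)) > 0.95:                # avoid a degenerate cross product
        ref = np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x, y, z
    pose[:3, 3] = point - standoff_m * z          # hover above the surface along the outward normal
    return pose

# One point/normal pair from the reconstructed point cloud (illustrative values).
p = np.array([0.120, 0.034, 0.210])
n = np.array([0.10, 0.05, 0.99]); n /= np.linalg.norm(n)
print(pose_from_point_and_normal(p, n).round(3))
```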
