Search Results (73)

Search Parameters:
Keywords = robot-aided training

22 pages, 687 KB  
Review
Hybrid Reconstruction in Head and Neck Surgery: Integration of Virtual Planning, Navigation, and Robotic Microsurgery
by Thomas J. Sorenson, Rebecca Lisk, Alexis B. Jacobson, Adam Jacobson and Jamie P. Levine
J. Clin. Med. 2026, 15(8), 2963; https://doi.org/10.3390/jcm15082963 - 14 Apr 2026
Viewed by 290
Abstract
Reconstruction in head and neck surgery requires restoration of complex functions, including speech, swallowing, and breathing, while preserving as much facial form and patient identity as possible. Over the past decade, advances in preoperative digital planning, intraoperative technologies, and robotic platforms have reshaped reconstructive strategies, giving rise to the concept of hybrid reconstruction. Hybrid approaches integrate free tissue transfer with computer-aided design and manufacturing (CAD/CAM), virtual surgical planning, intraoperative navigation, and robot-assisted microsurgery to enhance precision, reproducibility, and functional outcomes. This narrative review examines the principles and applications of hybrid reconstruction in head and neck surgery with particular emphasis on osseous reconstruction of the mandible, maxilla, and midface. The roles of intraoperative navigation and robotic assistance as enabling tools are discussed, along with their potential benefits and current limitations. Functional and morphologic outcomes, patient-reported quality of life, and challenges related to cost, access, training, and evidence heterogeneity are critically reviewed. Hybrid reconstruction represents an advancement toward outcomes-driven, patient-centered care; however, thoughtful integration of emerging technologies and continued emphasis on rigorous outcome assessment are essential to guide responsible adoption in contemporary head and neck reconstructive surgery. Full article
(This article belongs to the Special Issue Advances and Challenges in Head and Neck Reconstructive Surgery)

20 pages, 1284 KB  
Review
Navigating Aging with Technology: A Scoping Review of Digital Interventions Addressing Intrinsic Capacity Decline in Older Adults
by Ping Lu, Chengji Yu, Dayu Tang, Xiaodie Yang, Ying Zhou, Juan Zhao and Liying Ying
Healthcare 2026, 14(5), 557; https://doi.org/10.3390/healthcare14050557 - 24 Feb 2026
Viewed by 769
Abstract
Background: Intrinsic capacity (IC) is key to promoting healthy aging, and managing declines in IC is crucial for delaying functional deterioration in older adults. Digital health interventions (DHIs) hold promising potential for addressing IC decline. This scoping review aims to synthesize existing evidence by mapping the types of DHIs employed and examining their effects across the five domains of IC in older adults. Methods: The review was conducted following the five-stage framework of Arksey and O’Malley and the PRISMA-ScR guideline. The search was performed across PubMed, Embase, CINAHL, Cochrane Library, PsycINFO, SinoMed, and CNKI databases for studies published between 1 January 2015 and 31 July 2025. Relevant studies were identified using MeSH terms and free-text terms related to “older adults”, “digital health”, and “intrinsic capacity”. Results: Based on the eligibility criteria, 81 studies were included. The DHIs identified encompassed virtual reality, exergames, computerized cognitive training, mHealth, internet-based interventions, telehealth, digital hearing aids, assistive robotics, and visual biofeedback. Most studies focused on single-domain interventions (74%), with cognition being the most targeted (40.7%), while sensory (4.9%) and vitality (2.5%) domains received the least attention. No digital interventions targeted all five IC domains. Regarding efficacy, many DHIs reported statistically significant improvements in one or more IC domains; however, the magnitude and consistency of these effects varied considerably across studies. Conclusions: Preliminary evidence suggests that DHIs show potential in managing declines in IC among older adults. However, evidence quality varies significantly, often derived from small-scale studies. Future research should focus on establishing clinical effectiveness through adequately powered trials and on integrating DHIs into comprehensive intervention strategies that target all domains of IC, with robust evaluation of their outcomes. Full article
(This article belongs to the Section Digital Health Technologies)

21 pages, 5567 KB  
Article
Classification of Double-Bottom U-Shaped Weld Joints Using Synthetic Images and Image Splitting
by Gyeonghoon Kang and Namkug Ku
J. Mar. Sci. Eng. 2026, 14(2), 224; https://doi.org/10.3390/jmse14020224 - 21 Jan 2026
Viewed by 378
Abstract
The shipbuilding industry relies heavily on welding, which accounts for approximately 70% of the overall production process. However, the recent decline in skilled workers, together with rising labor costs, has accelerated the automation of shipbuilding operations. In particular, welding activities are concentrated in the double-bottom region of ships, where collaborative robots are increasingly introduced to alleviate workforce shortages. Because these robots must directly recognize U-shaped weld joints, this study proposes an image-based classification system capable of automatically identifying and classifying such joints. In double-bottom structures, U-shaped weld joints can be categorized into 176 types according to combinations of collar plate type, slot, watertight feature, and girder. To distinguish these types, deep learning-based image recognition is employed. To construct a large-scale training dataset, 3D Computer-Aided Design (CAD) models were automatically generated using Open Cascade and subsequently rendered to produce synthetic images. Furthermore, to improve classification performance, the input images were split into left, right, upper, and lower regions for both training and inference. The class definitions for each region were simplified based on the presence or absence of key features. Consequently, the classification accuracy was significantly improved compared with an approach using non-split images. Full article
(This article belongs to the Section Ocean Engineering)
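The region-wise preprocessing this abstract describes can be illustrated with a minimal sketch; `split_regions` is a hypothetical helper, and the paper's actual preprocessing and classifier design may differ:

```python
import numpy as np

def split_regions(image: np.ndarray) -> dict:
    """Split an image into left/right/upper/lower regions, in the spirit of
    the abstract's image-splitting step (hypothetical helper, not the
    authors' code)."""
    h, w = image.shape[:2]
    return {
        "left":  image[:, : w // 2],
        "right": image[:, w // 2 :],
        "upper": image[: h // 2, :],
        "lower": image[h // 2 :, :],
    }

# Each region would then go to a per-region classifier whose simplified
# classes encode only the presence or absence of key features
# (collar plate, slot, watertight feature, girder).
```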

21 pages, 4823 KB  
Article
QL-HIT2F: A Q-Learning-Aided Adaptive Fuzzy Path Planning Algorithm with Enhanced Obstacle Avoidance
by Nana Zhou, Fengjun Zhou, Changming Li and Jianning Zhong
Sensors 2026, 26(1), 200; https://doi.org/10.3390/s26010200 - 27 Dec 2025
Viewed by 598
Abstract
There has been significant interest in solving robot path planning problems using fuzzy logic-based methods. Recently, the Genetic Algorithm-based Hierarchical Interval Type-2 Fuzzy (GA-HIT2F) system has been introduced as a novel planner in this domain. However, this method lacks adaptability to changes in target points, and insufficient flexibility can lead to planning failures in local minimum traps, making it difficult to apply to complex scenarios. In this paper, we identify the limitations of the original GA-HIT2F approach and propose an enhanced Q-Learning-aided Adaptive Hierarchical Interval Type-2 Fuzzy (QL-HIT2F) algorithm for path planning. The proposed planner incorporates reinforcement learning to improve a robot’s capability to avoid collisions with special obstacles. Additionally, the average obstacle orientation (AOO) is introduced to optimize the robot’s angular adjustments. Two supplementary robot parameters are integrated into the reinforcement learning action space, along with fuzzy membership parameters. The training process also introduces the concepts of meta-map and sub-training. Simulation results from a series of path planning experiments validate the feasibility and effectiveness of the proposed QL-HIT2F approach. Full article
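As a rough illustration of the reinforcement-learning component mentioned above, the sketch below shows a single generic tabular Q-learning update. In QL-HIT2F the action space would cover fuzzy membership parameters and the two supplementary robot parameters, but those specifics are assumptions and are not reproduced here:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

# Q-values default to 0.0 for unseen (state, action) pairs.
Q = defaultdict(float)
```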

19 pages, 7032 KB  
Article
Prediction Model for the Oscillation Trajectory of Trellised Tomatoes Based on ARIMA-EEMD-LSTM
by Yun Wu, Yongnian Zhang, Peilong Zhao, Xiaolei Zhang, Xiaochan Wang, Maohua Xiao and Yinlong Zhu
Agriculture 2025, 15(23), 2418; https://doi.org/10.3390/agriculture15232418 - 24 Nov 2025
Viewed by 462
Abstract
Second-order damping oscillation models are incapable of precisely predicting superimposed and multi-fruit collision-induced oscillations. In view of this problem, an ARIMA-EEMD-LSTM hybrid model for predicting the oscillation trajectories of trellised tomatoes was proposed in this study. First, the oscillation trajectories of trellised tomatoes under different picking forces were captured with the aid of the Nokov motion capture system, and the collected oscillation trajectory datasets were then divided into training and test subsets. Afterwards, the ensemble empirical mode decomposition (EEMD) method was employed to decompose oscillation signals into multiple intrinsic mode function (IMF) components, with different components predicted by different models. Specifically, high-frequency components were predicted by the long short-term memory (LSTM) model, while low-frequency components were predicted by the autoregressive integrated moving average (ARIMA) model. The final oscillation trajectory prediction model for trellised tomatoes was constructed by integrating these components. Finally, the constructed model was experimentally validated and applied to an analysis of single-fruit oscillations and multi-fruit oscillations (including collision oscillations and superposition oscillations). The experiments yielded the following results: Under single-fruit oscillation conditions, the prediction accuracy reached an RMSE of 0.1008–0.2429 mm, an MAE of 0.0751–0.1840 mm, and an MAPE of 0.01–0.06%. Under multi-fruit oscillation conditions, the prediction accuracy yielded an RMSE of 0.1521–0.6740 mm, an MAE of 0.1084–0.5323 mm, and an MAPE of 0.01–0.27%. The research results serve as a reference for the dynamic harvesting prediction of tomato-picking robots and contribute to improving harvesting efficiency and success rates. Full article
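The decompose-predict-recombine structure of such a hybrid model can be sketched minimally. This sketch substitutes a toy moving-average split for EEMD and last-value persistence for the ARIMA and LSTM models, so it illustrates only the recombination logic, not the paper's actual pipeline:

```python
import numpy as np

def decompose(signal, window=5):
    # Toy stand-in for EEMD: split the trajectory into a smooth
    # low-frequency part (moving average) and a high-frequency residual.
    # The paper decomposes into multiple IMFs; two components suffice
    # to show how forecasts are recombined.
    kernel = np.ones(window) / window
    low = np.convolve(signal, kernel, mode="same")
    return low, signal - low

def hybrid_forecast(signal):
    low, high = decompose(signal)
    # Stand-ins for the paper's models (ARIMA on low-frequency components,
    # LSTM on high-frequency ones): naive last-value persistence.
    return low[-1] + high[-1]  # recombined component forecasts
```

Because the residual is defined as `signal - low`, the components recombine exactly to the original trajectory, which is the property the hybrid model relies on.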

19 pages, 4883 KB  
Review
Latest Advancements and Future Directions in Prostate Cancer Surgery: Reducing Invasiveness and Expanding Indications
by Valerio Santarelli, Roberta Corvino, Giulio Bevilacqua, Stefano Salciccia, Giovanni Di Lascio, Francesco Del Giudice, Giovanni Battista Di Pierro, Giorgio Franco, Simone Crivellaro and Alessandro Sciarra
Cancers 2025, 17(18), 3053; https://doi.org/10.3390/cancers17183053 - 18 Sep 2025
Cited by 2 | Viewed by 2619
Abstract
For more than 20 years, after the introduction of the first robotic system, research on prostate cancer (PCa) surgery has mainly focused on evaluating outcomes of Robotic-Assisted Radical Prostatectomy (RARP). In the last few years, however, a new generation of innovative techniques, surgical approaches, and expanded indications have emerged. The Single Port (SP) robotic system was the first real hardware innovation in robotic surgery, and has already demonstrated advantages in terms of shorter length of stay, better cosmetic results and reduced postoperative pain. Artificial Intelligence (AI)-powered algorithms are being proposed as reliable tools for surgical assistance, aiding in standardization and mass implementation of robotic training. New surgical indications are emerging on the basis of patient and tumor characteristics. The extensive adoption of PCa screening and the precision of diagnostic tools have increased the rate of PCa diagnoses in a localized stage. Partial prostatectomy, despite needing further validation, has emerged as a safe and minimally invasive treatment option for confined tumors, able to minimize the side effects of prostate surgery. For locally advanced PCa, radioguided surgery has not only enhanced the oncological effectiveness of lymphadenectomy by enabling the precise identification and extraction of pathological lymph nodes, but has also contributed to minimizing the side effects associated with unnecessarily extensive dissections. Finally, in light of the increased efficacy of modern systemic therapies and the longer life expectancy, RP is currently being evaluated for primary tumor management in the metastatic phase. Despite the novelty of the aforementioned treatment options, they are already set to shape the future evolution of PCa management and international guidelines. Full article
(This article belongs to the Section Cancer Therapy)

20 pages, 3729 KB  
Article
Can AIGC Aid Intelligent Robot Design? A Tentative Research of Apple-Harvesting Robot
by Qichun Jin, Jiayu Zhao, Wei Bao, Ji Zhao, Yujuan Zhang and Fuwen Hu
Processes 2025, 13(8), 2422; https://doi.org/10.3390/pr13082422 - 30 Jul 2025
Cited by 1 | Viewed by 1484
Abstract
Artificial intelligence (AI)-generated content (AIGC) is fundamentally transforming multiple sectors, including materials discovery, healthcare, education, scientific research, and industrial manufacturing. Given the complexities and challenges of intelligent robot design, AIGC has the potential to offer a new paradigm, assisting in conceptual and technical design, functional module design, and the training of perception abilities to accelerate prototyping. Taking the design of an apple-harvesting robot as an example, we demonstrate a basic framework for the AIGC-assisted robot design methodology, leveraging the generation capabilities of available multimodal large language models, as well as human intervention to alleviate AI hallucination and hidden risks. Next, we study the enhancement effect on the robot perception system using generated apple images based on large vision-language models to expand the actual apple image dataset. Further, an apple-harvesting robot prototype based on an AIGC-aided design is demonstrated, and a pick-up experiment in a simulated scene indicates that it achieves a harvesting success rate of 92.2% and good terrain traversability with a maximum climbing angle of 32°. According to this tentative research, although not an autonomous design agent, the AIGC-driven design workflow can alleviate the significant complexities and challenges of intelligent robot design, especially for beginners or young engineers. Full article
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)

28 pages, 1791 KB  
Article
Speech Recognition-Based Wireless Control System for Mobile Robotics: Design, Implementation, and Analysis
by Sandeep Gupta, Udit Mamodiya and Ahmed J. A. Al-Gburi
Automation 2025, 6(3), 25; https://doi.org/10.3390/automation6030025 - 24 Jun 2025
Cited by 9 | Viewed by 6092
Abstract
This paper describes an innovative wireless mobile robotics control system based on speech recognition, in which an ESP32 microcontroller controls the motors and handles Bluetooth communication, while an Android application performs the real-time speech recognition. With speech processed on the Android device and motor commands handled on the ESP32, the system achieves significant performance gains through its distributed architecture while maintaining low latency for feedback control. In experimental tests over a range of 1–10 m, stable command latencies of 110–140 ms with low variation (±15 ms) were observed. The system’s voice and manual button modes both yield over 92% accuracy with the aid of natural language processing, keeping training requirements low and maintaining strong performance in high-noise environments. The novelty of this work lies in an adaptive keyword spotting algorithm that improves recognition in high-noise environments and a gradual latency management system that optimizes processing parameters in the presence of noise. By providing a user-friendly, real-time speech interface, this work enhances human–robot interaction for future assistive devices, educational platforms, and advanced automated navigation research. Full article
(This article belongs to the Section Robotics and Autonomous Systems)

19 pages, 1563 KB  
Article
Small Object Tracking in LiDAR Point Clouds: Learning the Target-Awareness Prototype and Fine-Grained Search Region
by Shengjing Tian, Yinan Han, Xiantong Zhao and Xiuping Liu
Sensors 2025, 25(12), 3633; https://doi.org/10.3390/s25123633 - 10 Jun 2025
Cited by 1 | Viewed by 2223
Abstract
Light Detection and Ranging (LiDAR) point clouds are an essential perception modality for artificial intelligence systems like autonomous driving and robotics, where the ubiquity of small objects in real-world scenarios substantially challenges the visual tracking of small targets amidst the vastness of point cloud data. Current methods predominantly focus on developing universal frameworks for general object categories, often sidelining the persistent difficulties associated with small objects. These challenges stem from a scarcity of foreground points and a low tolerance for disturbances. To this end, we propose a deep neural network framework that trains a Siamese network for feature extraction and innovatively incorporates two pivotal modules: the target-awareness prototype mining (TAPM) module and the regional grid subdivision (RGS) module. The TAPM module utilizes the reconstruction mechanism of the masked auto-encoder to distill prototypes within the feature space, thereby enhancing the salience of foreground points and aiding in the precise localization of small objects. To heighten the tolerance of disturbances in feature maps, the RGS module is devised to retrieve detailed features of the search area, capitalizing on Vision Transformer and pixel shuffle technologies. Furthermore, beyond standard experimental configurations, we have meticulously crafted scaling experiments to assess the robustness of various trackers when dealing with small objects. Comprehensive evaluations show our method achieves a mean Success of 64.9% and 60.4% under original and scaled settings, outperforming benchmarks by +3.6% and +5.4%, respectively. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)

16 pages, 2700 KB  
Article
Robot-Assisted Microsurgery Has a Steeper Learning Curve in Microsurgical Novices
by Felix Struebing, Jonathan Weigel, Emre Gazyakan, Laura Cosima Siegwart, Charlotte Holup, Ulrich Kneser and Arne Hendrik Boecker
Life 2025, 15(5), 763; https://doi.org/10.3390/life15050763 - 9 May 2025
Cited by 5 | Viewed by 1885
Abstract
Introduction: Mastering microsurgery requires advanced fine motor skills, hand–eye coordination, and precision, making it challenging for novices. Robot-assisted microsurgery offers benefits such as eliminating physiological tremor and enhancing precision through motion scaling, which may make learning microsurgical skills easier. Materials and Methods: Sixteen medical students without prior microsurgical experience performed 160 anastomoses in a synthetic model. The students were randomly assigned to two cohorts, one starting with the conventional technique (HR group) and one with robotic assistance (RH group) using the Symani surgical system. Results: Both cohorts showed a reduction in procedural time and improvement in SAMS scores over successive attempts, with robotic anastomoses demonstrating a 48.2% decrease in time and a 54.6% increase in SAMS scores; these improvements were significantly larger than those in the RH group (p < 0.05). The quality of the final anastomoses was comparable in both groups (p > 0.05). Discussion: This study demonstrated a steep preclinical learning curve for robot-assisted microsurgery (RAMS) among novices in a synthetic, preclinical model. No significant differences in SAMS scores were observed between robotic and manual techniques after ten anastomoses. Robot-assisted microsurgery required more time per anastomosis, but the results suggest that experience with RAMS may aid manual skill acquisition. The study indicates that further exploration of the sequencing of robotic and manual training could be valuable, especially in designing structured microsurgical curricula. Full article

19 pages, 388 KB  
Review
Investigation into the Applications of Artificial Intelligence (AI) in Special Education: A Literature Review
by Esraa Hussein, Menatalla Hussein and Maha Al-Hendawi
Soc. Sci. 2025, 14(5), 288; https://doi.org/10.3390/socsci14050288 - 8 May 2025
Cited by 13 | Viewed by 10769
Abstract
The integration of artificial intelligence (AI) in special education has the potential to transform learning experiences and improve outcomes for students with disabilities. This systematic literature review examines the application of AI technologies in special education, focusing on personalized learning, cognitive and behavioral interventions, communication, emotional support, and physical independence. Through an analysis of 15 studies conducted between 2019 and 2024, the review synthesizes evidence on the effectiveness of AI tools, including intelligent tutoring systems, adaptive learning platforms, assistive communication devices, and robotic aids. The findings suggest that AI-driven technologies significantly enhance students’ academic performance, communication skills, emotional regulation, and physical mobility by providing tailored interventions that address individual needs. This review also highlights several challenges, including limited access to AI technologies in low-resource settings, the need for more comprehensive teacher training, and ethical concerns related to data privacy and algorithmic bias. Additionally, the geographic focus of the current research is primarily on developed countries, overlooking the specific challenges of implementing AI in resource-constrained environments. This review emphasizes the need for more diverse and ethical research to fully realize the potential of AI in supporting students with disabilities and promoting inclusive education. Full article

14 pages, 655 KB  
Perspective
AI-Driven Telerehabilitation: Benefits and Challenges of a Transformative Healthcare Approach
by Rocco Salvatore Calabrò and Sepehr Mojdehdehbaher
AI 2025, 6(3), 62; https://doi.org/10.3390/ai6030062 - 17 Mar 2025
Cited by 20 | Viewed by 10081
Abstract
Artificial intelligence (AI) has revolutionized telerehabilitation by integrating machine learning (ML), big data analytics, and real-time feedback to create adaptive, patient-centered care. AI-driven systems enhance telerehabilitation by analyzing patient data to personalize therapy, monitor progress, and suggest adjustments, eliminating the need for constant clinician oversight. The benefits of AI-powered telerehabilitation include increased accessibility, especially for remote or mobility-limited patients, and greater convenience, allowing patients to perform therapies at home. However, challenges persist, such as data privacy risks, the digital divide, and algorithmic bias. Robust encryption protocols, equitable access to technology, and diverse training datasets are critical to addressing these issues. Ethical considerations also arise, emphasizing the need for human oversight and maintaining the therapeutic relationship. AI also aids clinicians by automating administrative tasks and facilitating interdisciplinary collaboration. Innovations like 5G networks, the Internet of Medical Things (IoMT), and robotics further enhance telerehabilitation’s potential. By transforming rehabilitation into a dynamic, engaging, and personalized process, AI and telerehabilitation together represent a paradigm shift in healthcare, promising improved outcomes and broader access for patients worldwide. Full article

32 pages, 13506 KB  
Article
VR Co-Lab: A Virtual Reality Platform for Human–Robot Disassembly Training and Synthetic Data Generation
by Yashwanth Maddipatla, Sibo Tian, Xiao Liang, Minghui Zheng and Beiwen Li
Machines 2025, 13(3), 239; https://doi.org/10.3390/machines13030239 - 17 Mar 2025
Cited by 5 | Viewed by 4726
Abstract
This research introduces a virtual reality (VR) training system for improving human–robot collaboration (HRC) in industrial disassembly tasks, particularly for e-waste recycling. Conventional training approaches frequently fail to provide sufficient adaptability, immediate feedback, or scalable solutions for complex industrial workflows. The implementation leverages Quest Pro’s body-tracking capabilities to enable ergonomic, immersive interactions with planned eye-tracking integration for improved interactivity and accuracy. The Niryo One robot aids users in hands-on disassembly while generating synthetic data to refine robot motion planning models. A Robot Operating System (ROS) bridge enables the seamless simulation and control of various robotic platforms using Unified Robotics Description Format (URDF) files, bridging virtual and physical training environments. A Long Short-Term Memory (LSTM) model predicts user interactions and robotic motions, optimizing trajectory planning and minimizing errors. Monte Carlo dropout-based uncertainty estimation enhances prediction reliability, ensuring adaptability to dynamic user behavior. Initial technical validation demonstrates the platform’s potential, with preliminary testing showing promising results in task execution efficiency and human–robot motion alignment, though comprehensive user studies remain for future work. Limitations include the lack of multi-user scenarios, potential tracking inaccuracies, and the need for further real-world validation. This system establishes a sandbox training framework for HRC in disassembly, leveraging VR and AI-driven feedback to improve skill acquisition, task efficiency, and training scalability across industrial applications. Full article
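The Monte Carlo dropout-based uncertainty estimation mentioned in this abstract can be sketched as follows; `toy_forward` is a hypothetical stand-in for the LSTM's forward pass, not the system's actual model:

```python
import numpy as np

def toy_forward(x, rng, p_drop):
    # Stand-in for a network forward pass, with inverted dropout applied
    # to the inputs so the expected output is unchanged.
    mask = rng.random(x.shape) > p_drop
    return float((x * mask / (1.0 - p_drop)).sum())

def mc_dropout_predict(forward, x, rng, n_samples=200, p_drop=0.2):
    # Monte Carlo dropout: keep dropout active at inference, run many
    # stochastic passes, and read the spread of the outputs as an
    # uncertainty estimate alongside the mean prediction.
    preds = np.array([forward(x, rng, p_drop) for _ in range(n_samples)])
    return preds.mean(), preds.std()
```

A prediction with high standard deviation would then be treated as less reliable, which is how such estimates can inform adaptation to dynamic user behavior.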

20 pages, 4396 KB  
Article
Squeeze-EnGAN: Memory Efficient and Unsupervised Low-Light Image Enhancement for Intelligent Vehicles
by Haegyo In, Juhum Kweon and Changjoo Moon
Sensors 2025, 25(6), 1825; https://doi.org/10.3390/s25061825 - 14 Mar 2025
Cited by 5 | Viewed by 1425
Abstract
Intelligent vehicles, such as autonomous cars, drones, and robots, rely on sensors to gather environmental information and respond accordingly. RGB cameras are commonly used due to their low cost and high resolution but are limited in low-light conditions. While employing LiDAR or specialized cameras can address this issue, these solutions often incur high costs. Deep learning-based low-light image enhancement (LLIE) methods offer an alternative, but existing models struggle to adapt to road scenes. Furthermore, most LLIE models rely on supervised training but are heavily constrained by the lack of low-light and normal-light paired datasets. In particular, obtaining paired datasets for driving scenes is extremely challenging. To address these issues, this paper proposes Squeeze-EnGAN, a memory-efficient, GAN-based LLIE method capable of unsupervised learning without paired image datasets. Squeeze-EnGAN incorporates a fire module into a U-net architecture, substantially reducing the number of parameters and Multiply-Accumulate Operations (MACs) compared to its base model, EnlightenGAN. Additionally, Squeeze-EnGAN achieves real-time performance on devices like Jetson Xavier (0.061 s). Significantly, enhanced images improve object detection performance over original images, demonstrating the model’s potential to aid high-level vision tasks in intelligent vehicles. Full article
(This article belongs to the Section Vehicular Sensing)

18 pages, 35240 KB  
Article
Selection of Trajectories to Improve Thermal Fields During the Electric Arc Welding Process Using Hybrid Model CFD-FNN
by Sixtos A. Arreola-Villa, Alma Rosa Méndez-Gordillo, Alejandro Pérez-Alvarado, Rumualdo Servín-Castañeda, Ismael Calderón-Ramos and Héctor Javier Vergara-Hernández
Metals 2025, 15(2), 154; https://doi.org/10.3390/met15020154 - 3 Feb 2025
Cited by 1 | Viewed by 1820
Abstract
Effective thermal management is essential in welding processes to maintain structural integrity and material quality, especially in high-precision industrial applications. This study examines the thermal behavior of an AISI 1080 steel plate containing 100 blind holes filled using robotic electric arc welding. Temperature measurements, recorded with eight strategically positioned thermocouples, monitored the thermal evolution throughout the robotic welding process. The experimental results validated a computational heat transfer model developed with ANSYS Fluent to simulate and predict the temperature distribution, achieving a mean absolute percentage error (MAPE) below 4.53%. A feedforward neural network was trained with simulation-generated data to optimize welding sequences. The optimization focuses on minimizing the area under the thermal history curves, reducing temperature gradients, and mitigating overheating risks. Integrating CFD simulations and neural networks introduces a hybrid methodology combining precise numerical modeling with advanced predictive capabilities. The hybrid CFD-FNN results reached a determination coefficient (R²) of 0.93 and an MAPE of 3.5%, highlighting the potential of this approach to predict thermal behavior in multipoint welding processes. The model generated optimized welding trajectories that improve the uniformity of the temperature field, reduce thermal gradients, and minimize temperature peaks, thereby helping to prevent overheating. This framework represents a significant advancement in welding technologies, demonstrating the effective application of deep learning techniques to optimizing complex industrial processes. Full article
(This article belongs to the Special Issue Fusion Welding, 2nd Edition)
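The sequence-ranking objective described in this abstract (minimizing the area under the thermal history curves) can be sketched with the trapezoidal rule; the temperature data below is purely illustrative:

```python
import numpy as np

def thermal_cost(times, temps):
    # Area under a thermocouple's temperature-time history (trapezoidal
    # rule); smaller area means less accumulated heat, so candidate
    # welding sequences can be ranked by this cost.
    return float(((temps[1:] + temps[:-1]) / 2.0 * np.diff(times)).sum())

# Hypothetical thermal histories for two candidate welding sequences,
# sampled at the same instants (values are illustrative only):
t = np.array([0.0, 1.0, 2.0, 3.0])
seq_a = np.array([20.0, 300.0, 280.0, 60.0])
seq_b = np.array([20.0, 450.0, 400.0, 90.0])
best = min(("A", seq_a), ("B", seq_b), key=lambda s: thermal_cost(t, s[1]))[0]
```

In the paper this ranking is produced by the trained FNN over simulated histories rather than by direct measurement of every candidate sequence.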
