Search Results (2,637)

Search Parameters:
Keywords = industrial robots

22 pages, 11034 KiB  
Article
Digital Twin-Enabled Adaptive Robotics: Leveraging Large Language Models in Isaac Sim for Unstructured Environments
by Sanjay Nambiar, Rahul Chiramel Paul, Oscar Chigozie Ikechukwu, Marie Jonsson and Mehdi Tarkian
Machines 2025, 13(7), 620; https://doi.org/10.3390/machines13070620 (registering DOI) - 17 Jul 2025
Abstract
As industrial automation evolves towards human-centric, adaptable solutions, collaborative robots must overcome challenges in unstructured, dynamic environments. This paper extends our previous work on developing a digital shadow for industrial robots by introducing a comprehensive framework that bridges the gap between physical systems and their virtual counterparts. The proposed framework advances toward a fully functional digital twin by integrating real-time perception and intuitive human–robot interaction capabilities. The framework is applied to a hospital test lab scenario, where a YuMi robot automates the sorting of microscope slides. The system incorporates a RealSense D435i depth camera for environment perception, Isaac Sim for virtual environment synchronization, and a locally hosted large language model (Mistral 7B) for interpreting user voice commands. These components work together to achieve bi-directional synchronization between the physical and digital environments. The framework was evaluated through 20 test runs under varying conditions. A validation study measured the performance of the perception module, simulation, and language interface, with a 60% overall success rate. Additionally, synchronization accuracy between the simulated and physical robot joint movements reached 98.11%, demonstrating strong alignment between the digital and physical systems. By combining local LLM processing, real-time vision, and robot simulation, the approach enables untrained users to interact with collaborative robots in dynamic settings. The results highlight its potential for improving flexibility and usability in industrial automation. Full article
(This article belongs to the Topic Smart Production in Terms of Industry 4.0 and 5.0)
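
The abstract above reports a 98.11% synchronization accuracy between simulated and physical joint movements but does not state how that figure is computed. The snippet below is a minimal, hypothetical sketch of one plausible definition, accuracy as one minus the mean absolute joint error normalized by each joint's range of motion; the metric actually used by the authors may differ.

```python
import numpy as np

def joint_sync_accuracy(physical, simulated, joint_ranges):
    """Hypothetical synchronization metric: 1 - normalized mean absolute error.

    physical, simulated: arrays of shape (timesteps, n_joints), joint angles in rad.
    joint_ranges: array of shape (n_joints,), usable range of motion per joint in rad.
    """
    physical = np.asarray(physical, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    # Mean absolute tracking error per joint over the whole trajectory.
    mae = np.abs(physical - simulated).mean(axis=0)
    # Normalize by each joint's range so all joints contribute comparably.
    normalized_error = mae / np.asarray(joint_ranges, dtype=float)
    return float(1.0 - normalized_error.mean())

# Toy example: a simulated trajectory lagging slightly behind the physical one.
t = np.linspace(0, 2 * np.pi, 200)
physical = np.stack([np.sin(t), 0.5 * np.cos(t)], axis=1)
simulated = np.stack([np.sin(t - 0.02), 0.5 * np.cos(t - 0.02)], axis=1)
acc = joint_sync_accuracy(physical, simulated, joint_ranges=[np.pi, np.pi])
print(f"synchronization accuracy: {acc:.2%}")
```
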
23 pages, 1383 KiB  
Article
Fuzzy Adaptive Control for a 4-DOF Hand Rehabilitation Robot
by Paul Tucan, Oana-Maria Vanta, Calin Vaida, Mihai Ciupe, Dragos Sebeni, Adrian Pisla, Simona Stiole, David Lupu, Zoltan Major, Bogdan Gherman, Vasile Bulbucan, Ionut Zima, Jose Machado and Doina Pisla
Actuators 2025, 14(7), 351; https://doi.org/10.3390/act14070351 (registering DOI) - 17 Jul 2025
Abstract
This paper presents the development of a fuzzy-PID (FPID) controller able to adapt to several robot–patient interaction modes by monitoring patient evolution during the rehabilitation procedure. The control system is designed to provide targeted rehabilitation therapy through three interaction modes: passive, active–assistive, and resistive. By integrating a fuzzy inference system into the classical PID architecture, the FPID controller dynamically adjusts control gains in response to tracking error and patient effort. The simulation results indicate that, in passive mode, the FPID controller achieves a 32% lower RMSE, reduced overshoot, and a faster settling time compared to the conventional PID. In the active–assistive mode, the FPID demonstrates enhanced responsiveness and reduced error lag when tracking a sinusoidal reference, while in resistive mode, it more effectively compensates for imposed load disturbances. A rehabilitation scenario simulating repeated motion cycles on a healthy subject further confirms that the FPID controller consistently produces lower overall RMSE and variability. Full article
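
As a rough illustration of the gain-scheduling idea in the abstract above (not the authors' controller), the sketch below adjusts a PID proportional gain from tracking error and patient effort with a tiny fuzzy rule base in plain NumPy; the membership functions, rules, and gain bounds are invented for the example.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_kp(error, effort, kp_min=1.0, kp_max=5.0):
    """Schedule Kp from tracking error (rad) and patient effort (normalized 0..1)."""
    e = abs(error)
    # Fuzzify: small/large error, low/high effort (illustrative membership shapes).
    e_small, e_large = tri(e, -0.1, 0.0, 0.2), tri(e, 0.1, 0.4, 1.0)
    f_low, f_high = tri(effort, -0.2, 0.0, 0.5), tri(effort, 0.3, 1.0, 1.5)
    # Rules: large error or low effort -> high Kp; small error and high effort -> low Kp.
    w_high = max(e_large, f_low)
    w_low = min(e_small, f_high)
    if w_high + w_low == 0.0:
        return kp_min
    # Weighted-average defuzzification between the two gain levels.
    return (w_high * kp_max + w_low * kp_min) / (w_high + w_low)

# One step of a PI(D) update with the scheduled gain.
error, effort, integral, dt = 0.25, 0.2, 0.0, 0.01
kp = fuzzy_kp(error, effort)
ki = 0.1 * kp                      # tie Ki to Kp for the sketch
integral += error * dt
u = kp * error + ki * integral     # control command
print(f"Kp={kp:.2f}, u={u:.3f}")
```
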
19 pages, 3284 KiB  
Article
A Novel Parametrical Approach to the Ribbed Element Slicing Process in Robotic Additive Manufacturing
by Ivan Gajdoš, Łukasz Sobaszek, Pavol Štefčák, Jozef Varga and Ján Slota
Polymers 2025, 17(14), 1965; https://doi.org/10.3390/polym17141965 (registering DOI) - 17 Jul 2025
Abstract
Additive manufacturing is one of the most common technologies used in prototyping and manufacturing usable parts. Currently, industrial robots are also increasingly being used to carry out this process. This is due to a robot’s capability to fabricate components with structural configurations that are unattainable using conventional 3D printers. The number of degrees of freedom of the robot, combined with its working range and precision, allows the construction of parts with greater dimensions and better strength in comparison to conventional 3D printing. However, the implementation of a robot into the 3D printing process requires the development of novel solutions to streamline and facilitate the prototyping and manufacturing processes. This work focuses on the need to develop new slicing methods for robotic additive manufacturing. A solution for alternative control code generation without external slicer utilization is presented. The implementation of the proposed method enables a reduction of over 80% in the time required to generate new G-code, significantly outperforming traditional approaches. The paper presents a novel approach to the slicing process in robotic additive manufacturing that is adopted for the fused granular fabrication process using thermoplastic polymers. Full article
(This article belongs to the Special Issue Additive Manufacturing Based on Polymer Materials)
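
The paper generates robot control code directly, without an external slicer. The fragment below is only a toy illustration of that idea: it slices a straight ribbed wall into layers and emits G1 extrusion moves; the geometry, extrusion model, and G-code dialect are assumptions, and a real robotic fused-granular-fabrication toolpath generator is considerably more involved.

```python
def slice_ribbed_wall(length=200.0, rib_spacing=50.0, rib_depth=15.0,
                      height=60.0, layer_height=1.5, feed=1800, e_per_mm=0.05):
    """Emit simple G-code for a straight wall with perpendicular ribs (toy example)."""
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    z, e = 0.0, 0.0
    while z + layer_height <= height:
        z += layer_height
        lines.append(f"G1 Z{z:.2f} F{feed}")
        # Main wall pass along X.
        e += length * e_per_mm
        lines.append(f"G1 X{length:.2f} Y0.00 E{e:.3f} F{feed}")
        # Rib passes: short out-and-back moves in Y at regular X intervals.
        x = rib_spacing
        while x < length:
            lines.append(f"G0 X{x:.2f} Y0.00")
            e += rib_depth * e_per_mm
            lines.append(f"G1 X{x:.2f} Y{rib_depth:.2f} E{e:.3f}")
            e += rib_depth * e_per_mm
            lines.append(f"G1 X{x:.2f} Y0.00 E{e:.3f}")
            x += rib_spacing
        lines.append("G0 X0.00 Y0.00")  # return to layer start
    return "\n".join(lines)

print(slice_ribbed_wall(height=4.5))  # print a three-layer preview
```
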
27 pages, 49290 KiB  
Review
AI-Driven Robotics: Innovations in Design, Perception, and Decision-Making
by Lei Li, Li Li, Mantian Li and Ke Liang
Machines 2025, 13(7), 615; https://doi.org/10.3390/machines13070615 (registering DOI) - 17 Jul 2025
Abstract
Robots are increasingly being used across industries, healthcare, and service sectors to perform a wide range of tasks. However, as these tasks become more complex and environments more unpredictable, the need for adaptable robots continues to grow—bringing with it greater technological challenges. Artificial intelligence (AI), driven by large datasets and advanced algorithms, plays a pivotal role in addressing these challenges and advancing robotics. AI enhances robot design by making it more intelligent and flexible, significantly improving robot perception to better understand and respond to surrounding environments and empowering more intelligent control and decision-making. In summary, AI contributes to robotics through design optimization, environmental perception, and intelligent decision-making. This article explores the driving role of AI in robotics and presents detailed examples of its integration with fields such as embodied intelligence, humanoid robots, big data, and large AI models, while also discussing future prospects and challenges in this rapidly evolving field. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
19 pages, 2785 KiB  
Article
Implementing an AI-Based Digital Twin Analysis System for Real-Time Decision Support in a Custom-Made Sportswear SME
by Tõnis Raamets, Kristo Karjust, Jüri Majak and Aigar Hermaste
Appl. Sci. 2025, 15(14), 7952; https://doi.org/10.3390/app15147952 (registering DOI) - 17 Jul 2025
Abstract
Small and medium-sized enterprises (SMEs) in the manufacturing sector often struggle to make effective use of production data due to fragmented systems and limited digital infrastructure. This paper presents a case study of implementing an AI-enhanced digital twin in a custom sportswear manufacturing SME developed under the AI and Robotics Estonia (AIRE) initiative. The solution integrates real-time production data collection using the Digital Manufacturing Support Application (DIMUSA); data processing and control; clustering-based data analysis; and virtual simulation for evaluating improvement scenarios. The framework was applied in a live production environment to analyze workstation-level performance, identify recurring bottlenecks, and provide interpretable visual insights for decision-makers. K-means clustering and DBSCAN were used to group operational states and detect process anomalies, while simulation was employed to model production flow and assess potential interventions. The results demonstrate how even a lightweight AI-driven system can support human-centered decision-making, improve process transparency, and serve as a scalable foundation for Industry 5.0-aligned digital transformation in SMEs. Full article
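
To make the clustering step concrete, here is a minimal sketch, assuming invented per-cycle workstation features, of how K-means can group operational states while DBSCAN flags anomalous cycles; it is not the DIMUSA pipeline itself.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, DBSCAN

# Hypothetical per-cycle features logged at one workstation:
# [cycle time (s), idle time (s), operator interventions per cycle]
rng = np.random.default_rng(0)
normal = rng.normal([120, 10, 0.2], [8, 3, 0.1], size=(200, 3))
bottleneck = rng.normal([180, 45, 1.5], [12, 8, 0.4], size=(30, 3))
X = StandardScaler().fit_transform(np.vstack([normal, bottleneck]))

# K-means groups cycles into a small number of operational states.
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# DBSCAN flags cycles that fit no dense state as anomalies (label -1).
anomalies = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
print("state counts:", np.bincount(states))
print("anomalous cycles:", int(np.sum(anomalies == -1)))
```
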
35 pages, 1464 KiB  
Systematic Review
Assessing Transparency of Robots, Exoskeletons, and Assistive Devices: A Systematic Review
by Nicol Moscatelli, Cristina Brambilla, Valentina Lanzani, Lorenzo Molinari Tosatti and Alessandro Scano
Sensors 2025, 25(14), 4444; https://doi.org/10.3390/s25144444 (registering DOI) - 17 Jul 2025
Abstract
Transparency is a key requirement for some classes of robots, exoskeletons, and assistive devices (READs), where safe and efficient human–robot interaction is crucial. Typical fields that require transparency are rehabilitation and industrial contexts. However, the definitions of transparency adopted in the literature are heterogeneous. It follows that there is a need to clarify, summarize, and assess how transparency is commonly defined and measured. Thus, the goal of this review is to systematically examine how transparency is conceptualized and evaluated across studies. To this end, we performed a structured search across three major scientific databases. After a thorough screening process, 20 out of 400 identified articles were further examined and included in this review. Although transparency is recognized as a desirable and essential characteristic of READs in many domains of application, our findings reveal that it is still inconsistently defined and evaluated, which limits comparability across studies and hinders the development of standardized evaluation frameworks. Indeed, our screening found significant heterogeneity in both terminology and evaluation methods. The majority of the studies used either a mechanical or a kinematic definition, mostly focusing on the intrinsic behavior of the device and frequently giving little attention to the device's impact on the user and on the user's perception. Furthermore, user-centered and physiological assessments deserve further attention, since current evaluation metrics are usually based on kinematic and mechanical robot measures. Only a few studies have examined the underlying motor control strategies, using more in-depth methods such as muscle synergy analysis. These findings highlight the need for a shared taxonomy and a standardized framework for transparency evaluation. Such efforts would enable more reliable comparisons between studies and support the development of more effective and user-centered READs. Full article
(This article belongs to the Special Issue Wearable Sensors, Robotic Systems and Assistive Devices)
40 pages, 17591 KiB  
Article
Research and Education in Robotics: A Comprehensive Review, Trends, Challenges, and Future Directions
by Mutaz Ryalat, Natheer Almtireen, Ghaith Al-refai, Hisham Elmoaqet and Nathir Rawashdeh
J. Sens. Actuator Netw. 2025, 14(4), 76; https://doi.org/10.3390/jsan14040076 - 16 Jul 2025
Abstract
Robotics has emerged as a transformative discipline at the intersection of engineering, computer science, and the cognitive sciences. This state-of-the-art review explores the current trends, methodologies, and challenges in both robotics research and education. This paper presents a comprehensive review of the evolution of robotics, tracing its development from early automation to intelligent, autonomous systems. Key enabling technologies, such as Artificial Intelligence (AI), soft robotics, the Internet of Things (IoT), and swarm intelligence, are examined along with real-world applications in healthcare, manufacturing, agriculture, and sustainable smart cities. A central focus is placed on robotics education, where hands-on, interdisciplinary learning is reshaping curricula from K–12 to postgraduate levels. This paper analyzes instructional models including project-based learning, laboratory work, capstone design courses, and robotics competitions, highlighting their effectiveness in developing both technical and creative competencies. Widely adopted platforms such as the Robot Operating System (ROS) are briefly discussed in the context of their educational value and real-world alignment. Through case studies, institutional insights, and synthesis of academic and industry practices, this review underscores the vital role of robotics education in fostering innovation, systems thinking, and workforce readiness. The paper concludes by identifying the key challenges and future directions to guide researchers, educators, industry stakeholders, and policymakers in advancing robotics as both a technological and an educational frontier. Full article
22 pages, 3768 KiB  
Article
A Collaborative Navigation Model Based on Multi-Sensor Fusion of Beidou and Binocular Vision for Complex Environments
by Yongxiang Yang and Zhilong Yu
Appl. Sci. 2025, 15(14), 7912; https://doi.org/10.3390/app15147912 - 16 Jul 2025
Abstract
This paper addresses the issues of Beidou navigation signal interference and blockage in complex substation environments by proposing an intelligent collaborative navigation model based on Beidou high-precision navigation and binocular vision recognition. The model is designed with Beidou navigation providing global positioning references and binocular vision enabling local environmental perception through a collaborative fusion strategy. The Unscented Kalman Filter (UKF) is used to integrate data from multiple sensors to ensure high-precision positioning and dynamic obstacle avoidance capabilities for robots in complex environments. Simulation results show that the Beidou–Binocular Cooperative Navigation (BBCN) model achieves a global positioning error of less than 5 cm in non-interference scenarios, and an error of only 6.2 cm under high-intensity electromagnetic interference, significantly outperforming the single Beidou model’s error of 40.2 cm. The path planning efficiency is close to optimal (with an efficiency factor within 1.05), and the obstacle avoidance success rate reaches 95%, while the system delay remains within 80 ms, meeting the real-time requirements of industrial scenarios. The innovative fusion approach enables unprecedented reliability for autonomous robot inspection in high-voltage environments, offering significant practical value in reducing human risk exposure, lowering maintenance costs, and improving inspection efficiency in power industry applications. This technology enables continuous monitoring of critical power infrastructure that was previously difficult to automate due to navigation challenges in electromagnetically complex environments. Full article
(This article belongs to the Special Issue Advanced Robotics, Mechatronics, and Automation)
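
A stripped-down sketch of UKF-based position fusion is shown below, assuming the filterpy library, a constant-velocity state model, and both Beidou and the binocular vision system reduced to noisy 2D position observers; the paper's actual state, measurement, and noise models are not detailed in the abstract.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1  # s

def fx(x, dt):
    """Constant-velocity motion model: state = [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Both sensors are treated here as noisy observers of the 2D position."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0, 0.5, 0.0])
ukf.P *= 0.5
ukf.Q = np.eye(4) * 1e-3

R_beidou = np.diag([0.05**2, 0.05**2])   # ~5 cm GNSS noise (may grow under interference)
R_vision = np.diag([0.02**2, 0.02**2])   # tighter local visual fix

for k in range(50):
    ukf.predict()
    truth = np.array([0.5 * dt * (k + 1), 0.0])
    # Sequential updates: Beidou position fix, then the binocular vision fix.
    ukf.update(truth + np.random.normal(0, 0.05, 2), R=R_beidou)
    ukf.update(truth + np.random.normal(0, 0.02, 2), R=R_vision)

print("fused position estimate:", np.round(ukf.x[:2], 3))
```
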
30 pages, 2023 KiB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Viewed by 56
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI are jointly enabling context-aware, adaptive cobot capabilities across perception, planning, and decision-making remains lacking (especially in recent years). Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill the best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions—from scalable training methods to interoperability standards—to foster safe, robust, and proactive human–robot partnerships in the years ahead. Full article
21 pages, 6802 KiB  
Article
Digital Twin Driven Four-Dimensional Path Planning of Collaborative Robots for Assembly Tasks in Industry 5.0
by Ilias Chouridis, Gabriel Mansour, Asterios Chouridis, Vasileios Papageorgiou, Michel Theodor Mansour and Apostolos Tsagaris
Robotics 2025, 14(7), 97; https://doi.org/10.3390/robotics14070097 (registering DOI) - 15 Jul 2025
Viewed by 60
Abstract
Collaborative robots are vital in Industry 5.0 operations. They are utilized to perform tasks in collaboration with humans or other robots to increase overall production efficiency and execute complex tasks. Aiming at a comprehensive approach to assembly processes and highlighting new applications of collaborative robots, this paper presents the development of a digital twin (DT) for the design, monitoring, optimization, and simulation of robots’ deployment in assembly cells. The DT integrates information from both the physical and virtual worlds to design the trajectory of collaborative robots. The physical information about the industrial environment is replicated within the DT in a computationally efficient way that aligns with the requirements of the path planning algorithm and the DT’s objectives. An enhanced artificial fish swarm algorithm (AFSA) is utilized for the 4D path planning optimization, taking into account dynamic and static obstacles. Finally, the proposed framework is applied to a case in which four industrial robotic arms collaborate on the assembly of an industrial component. Full article
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)
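
The enhanced 4D AFSA used in the paper is not reproduced here; the following is a heavily simplified, two-dimensional artificial fish swarm that optimizes a single intermediate waypoint around a circular obstacle, just to show the prey/swarm/follow behaviors this algorithm family is built on. All parameters are illustrative.

```python
import numpy as np

start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacle, radius = np.array([5.0, 0.0]), 1.5
rng = np.random.default_rng(1)

def cost(wp):
    """Path length through the waypoint plus a penalty for entering the obstacle."""
    length = np.linalg.norm(wp - start) + np.linalg.norm(goal - wp)
    clearance = np.linalg.norm(wp - obstacle) - radius
    return length + (100.0 * -clearance if clearance < 0 else 0.0)

n_fish, visual, step, trials, iterations = 30, 3.0, 0.6, 5, 80
fish = rng.uniform([0, -5], [10, 5], size=(n_fish, 2))

for _ in range(iterations):
    costs = np.array([cost(f) for f in fish])
    for i in range(n_fish):
        neighbors = [j for j in range(n_fish)
                     if j != i and np.linalg.norm(fish[j] - fish[i]) < visual]
        target = None
        if neighbors:
            # Follow: move toward the best neighbor if it is better than us.
            best = min(neighbors, key=lambda j: costs[j])
            if costs[best] < costs[i]:
                target = fish[best]
            else:
                # Swarm: move toward the neighbor centre if it is better.
                centre = fish[neighbors].mean(axis=0)
                if cost(centre) < costs[i]:
                    target = centre
        if target is None:
            # Prey: random local search for a better position.
            for _ in range(trials):
                cand = fish[i] + rng.uniform(-1, 1, 2) * visual
                if cost(cand) < costs[i]:
                    target = cand
                    break
        if target is not None:
            direction = target - fish[i]
            norm = np.linalg.norm(direction)
            if norm > 1e-9:
                fish[i] = fish[i] + step * direction / norm

best_wp = min(fish, key=cost)
print("best waypoint:", np.round(best_wp, 2), "cost:", round(cost(best_wp), 3))
```
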
24 pages, 1605 KiB  
Article
Quantum-Secure Coherent Optical Networking for Advanced Infrastructures in Industry 4.0
by Ofir Joseph and Itzhak Aviv
Information 2025, 16(7), 609; https://doi.org/10.3390/info16070609 - 15 Jul 2025
Viewed by 129
Abstract
Modern industrial ecosystems, particularly those embracing Industry 4.0, increasingly depend on coherent optical networks operating at 400 Gbps and beyond. These high-capacity infrastructures, coupled with advanced digital signal processing and phase-sensitive detection, enable real-time data exchange for automated manufacturing, robotics, and interconnected factory systems. However, they introduce multilayer security challenges—ranging from hardware synchronization gaps to protocol overhead manipulation. Moreover, the rise of large-scale quantum computing intensifies these threats by potentially breaking classical key exchange protocols and enabling the future decryption of stored ciphertext. In this paper, we present a systematic vulnerability analysis of coherent optical networks that use OTU4 framing, Media Access Control Security (MACsec), and 400G ZR+ transceivers. Guided by established risk assessment methodologies, we uncover critical weaknesses affecting management plane interfaces (e.g., MDIO and I2C) and overhead fields (e.g., Trail Trace Identifier, Bit Interleaved Parity). To mitigate these risks while preserving the robust data throughput and low-latency demands of industrial automation, we propose a post-quantum security framework that merges spectral phase masking with multi-homodyne coherent detection, strengthened by quantum key distribution for key management. This layered approach maintains backward compatibility with existing infrastructure and ensures forward secrecy against quantum-enabled adversaries. The evaluation results show a substantial reduction in exposure to timing-based exploits, overhead field abuses, and cryptographic compromise. By integrating quantum-safe measures at the optical layer, our solution provides a future-proof roadmap for network operators, hardware vendors, and Industry 4.0 stakeholders tasked with safeguarding next-generation manufacturing and engineering processes. Full article
24 pages, 1076 KiB  
Article
Visual–Tactile Fusion and SAC-Based Learning for Robot Peg-in-Hole Assembly in Uncertain Environments
by Jiaxian Tang, Xiaogang Yuan and Shaodong Li
Machines 2025, 13(7), 605; https://doi.org/10.3390/machines13070605 - 14 Jul 2025
Viewed by 140
Abstract
Robotic assembly, particularly peg-in-hole tasks, presents significant challenges in uncertain environments where pose deviations, varying peg shapes, and environmental noise can undermine performance. To address these issues, this paper proposes a novel approach combining visual–tactile fusion with reinforcement learning. By integrating multimodal data (RGB image, depth map, tactile force information, and robot body pose data) via an autoencoder-based fusion network, we provide the robot with a more comprehensive perception of its environment. Furthermore, we enhance the robot's assembly skills by using the Soft Actor–Critic (SAC) reinforcement learning algorithm, which allows the robot to adapt its actions to dynamic environments. We evaluated our method through experiments, which showed clear improvements in three key aspects: higher assembly success rates, reduced task completion times, and better generalization across diverse peg shapes and environmental conditions. The results suggest that the combination of visual and tactile feedback with SAC-based learning provides a viable and robust solution for robotic assembly in uncertain environments, paving the way for scalable and adaptable industrial robots. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
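
As a sketch of the multimodal fusion idea, the PyTorch module below encodes RGB, depth, tactile force, and robot pose into one latent vector that an SAC agent could consume as its observation; the branch sizes, input shapes, and the omission of the decoder are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Fuse RGB, depth, tactile force, and robot pose into one latent state."""
    def __init__(self, latent_dim=64):
        super().__init__()
        def conv_branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 4 * 4, 64), nn.ReLU())
        self.rgb = conv_branch(3)       # RGB image branch
        self.depth = conv_branch(1)     # depth map branch
        self.force = nn.Sequential(nn.Linear(6, 32), nn.ReLU())   # force/torque wrench
        self.pose = nn.Sequential(nn.Linear(7, 32), nn.ReLU())    # position + quaternion
        self.fuse = nn.Linear(64 + 64 + 32 + 32, latent_dim)      # joint latent vector

    def forward(self, rgb, depth, force, pose):
        feats = torch.cat([self.rgb(rgb), self.depth(depth),
                           self.force(force), self.pose(pose)], dim=-1)
        return self.fuse(feats)

# The latent vector would serve as the observation for an SAC policy.
enc = MultimodalEncoder()
z = enc(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64),
        torch.randn(1, 6), torch.randn(1, 7))
print(z.shape)  # torch.Size([1, 64])
```
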
21 pages, 5069 KiB  
Article
A Patent-Based Technology Roadmap for AI-Powered Manipulators: An Evolutionary Analysis of the B25J Classification
by Yujia Zhai, Zehao Liu, Rui Zhao, Xin Zhang and Gengfeng Zheng
Informatics 2025, 12(3), 69; https://doi.org/10.3390/informatics12030069 - 11 Jul 2025
Viewed by 256
Abstract
Technology roadmapping is conducted by systematic mapping of technological evolution through patent analytics to inform innovation strategies. This study proposes an integrated framework combining hierarchical Latent Dirichlet Allocation (LDA) modeling with multiphase technology lifecycle theory, analyzing 113,449 Derwent patent abstracts (2008–2022) across three dimensions: technological novelty, functional applications, and competitive advantages. By segmenting innovation stages via logistic growth curve modeling and optimizing topic extraction through perplexity validation, we constructed dynamic technology roadmaps to decode latent evolutionary patterns in AI-powered programmable manipulators (B25J classification) within an innovation trajectory. Key findings revealed: (1) a progressive transition from electromechanical actuation to sensor-integrated architectures, evidenced by 58% compound annual growth in embedded sensing patents; (2) application expansion from industrial automation (72% early stage patents) to precision medical operations, with surgical robotics growing 34% annually since 2018; and (3) continuous advancements in adaptive control algorithms, showing 2.7× growth in reinforcement learning implementations. The methodology integrates quantitative topic modeling (via pyLDAvis visualization and cosine similarity analysis) with qualitative lifecycle theory, addressing the limitations of conventional technology analysis methods by reconciling semantic granularity with temporal dynamics. The results identify core innovation trajectories—precision control, intelligent detection, and medical robotics—while highlighting emerging opportunities in autonomous navigation and human–robot collaboration. This framework provides empirically grounded strategic intelligence for R&D prioritization, cross-industry investment, and policy formulation in Industry 4.0. Full article
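
A minimal sketch of the topic-modeling step, using scikit-learn on a toy stand-in corpus, is shown below; the real study works on 113,449 Derwent abstracts with a hierarchical LDA setup, so this only illustrates how perplexity can guide the choice of topic count.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for Derwent patent abstracts (the real corpus has 113,449 records).
docs = [
    "servo actuator joint torque control for programmable manipulator",
    "force sensor integrated gripper with embedded sensing feedback",
    "surgical robot arm for minimally invasive medical operation",
    "reinforcement learning based adaptive control of robot arm",
    "vision sensor fusion for industrial automation manipulator",
    "autonomous navigation and human robot collaboration cell",
] * 20

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Compare candidate topic counts by perplexity (lower is better).
for k in (2, 3, 4, 5):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(f"k={k}: perplexity={lda.perplexity(X):.1f}")
```
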
18 pages, 3325 KiB  
Article
AI-Driven Arm Movement Estimation for Sustainable Wearable Systems in Industry 4.0
by Emanuel Muntean, Monica Leba and Andreea Cristina Ionica
Sustainability 2025, 17(14), 6372; https://doi.org/10.3390/su17146372 - 11 Jul 2025
Viewed by 155
Abstract
In an era defined by rapid technological advancements, the intersection of artificial intelligence and industrial innovation has garnered significant attention from both academic and industry stakeholders. The emergence of Industry 4.0, characterized by the integration of cyber–physical systems, the Internet of Things, and smart manufacturing, demands the evolution of operational methodologies to ensure processes’ sustainability. One area of focus is the development of wearable systems that utilize artificial intelligence for the estimation of arm movements, which can enhance the ergonomics and efficiency of labor-intensive tasks. This study proposes a Random Forest-based regression model to estimate upper arm kinematics using only shoulder orientation data, reducing the need for multiple sensors and thereby lowering hardware complexity and energy demands. The model was trained on biomechanical data collected via a minimal three-IMU wearable configuration and demonstrated high predictive performance across all motion axes, achieving R2 > 0.99 and low RMSE scores on training (1.14, 0.71, and 0.73), test (3.37, 1.97, and 2.04), and unseen datasets (2.77, 0.78, and 0.63). Statistical analysis confirmed strong biomechanical coupling between shoulder and upper arm motion, justifying the feasibility of a simplified sensor approach. The findings highlight the relevance of our method for sustainable wearable technology design and its potential applications in rehabilitation robotics, industrial exoskeletons, and human–robot collaboration systems. Full article
(This article belongs to the Special Issue Sustainable Engineering Trends and Challenges Toward Industry 4.0)
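
The regression setup can be sketched as follows with synthetic stand-in data: a Random Forest maps three shoulder-orientation angles to three upper-arm angles and is scored with per-axis R² and RMSE, mirroring the metrics reported above; the coupling function and noise levels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in data: shoulder orientation (roll, pitch, yaw in degrees)
# mapped to three upper-arm angles through a smooth coupling plus noise.
rng = np.random.default_rng(0)
shoulder = rng.uniform(-90, 90, size=(2000, 3))
upper_arm = np.column_stack([
    0.8 * shoulder[:, 0] + 5 * np.sin(np.radians(shoulder[:, 1])),
    0.6 * shoulder[:, 1] - 3 * np.cos(np.radians(shoulder[:, 2])),
    0.9 * shoulder[:, 2] + 0.1 * shoulder[:, 0],
]) + rng.normal(0, 1.0, size=(2000, 3))

X_train, X_test, y_train, y_test = train_test_split(
    shoulder, upper_arm, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
print("R2 per axis:", np.round(r2_score(y_test, pred, multioutput="raw_values"), 3))
rmse = np.sqrt(mean_squared_error(y_test, pred, multioutput="raw_values"))
print("RMSE per axis:", np.round(rmse, 2))
```
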
19 pages, 3641 KiB  
Article
Data-Driven Selection of Decontamination Robot Locomotion Based on Terrain Compatibility Scoring Models
by Prithvi Krishna Chittoor, A. Jayasurya, Sriniketh Konduri, Eduardo Sanchez Cruz, S. M. Bhagya P. Samarakoon, M. A. Viraj J. Muthugala and Mohan Rajesh Elara
Appl. Sci. 2025, 15(14), 7781; https://doi.org/10.3390/app15147781 - 11 Jul 2025
Viewed by 196
Abstract
Decontamination robots are becoming more common in environments where reducing human exposure to hazardous substances is essential, including healthcare settings, laboratories, and industrial cleanrooms. Designing terrain-capable decontamination robots quickly is challenging due to varying operational surfaces and mobility limitations. To tackle this issue, a structured recommendation framework is proposed to automate selecting optimal locomotion types and track configurations, significantly cutting down design time. The proposed system features a two-stage evaluation process: first, it creates an annotated compatibility score matrix by validating locomotion types against a robust dataset based on factors like friction coefficient, roughness, payload capacity, and slope gradient; second, it employs a weighted scoring model to rank wheel/track types based on their appropriateness for the identified environmental conditions. User needs are processed dynamically using a large language model, enabling flexible and scalable management of various deployment scenarios. A prototype decontamination robot was developed following the proposed algorithm’s guidance. This framework speeds up the configuration process and establishes a foundation for more intelligent, terrain-aware robot design workflows that can be applied to industrial, healthcare, and service robotics sectors. Full article
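
The second-stage ranking can be illustrated with a small weighted scoring model, shown below with invented compatibility scores and weights; in the proposed framework the weights would come from the user requirements parsed by a large language model.

```python
import numpy as np

# Candidate locomotion/track types (rows) scored against terrain factors (columns):
# [friction coefficient fit, surface roughness fit, payload capacity, slope capability]
# Scores are illustrative 0-10 compatibility ratings, not values from the paper.
candidates = ["standard wheels", "omni wheels", "rubber tracks", "mecanum wheels"]
scores = np.array([
    [7, 5, 8, 5],
    [6, 3, 6, 3],
    [9, 9, 9, 8],
    [6, 2, 5, 2],
], dtype=float)

# Weights derived from the deployment environment (e.g. a sloped industrial floor).
weights = np.array([0.2, 0.3, 0.2, 0.3])

ranking = scores @ weights
for name, total in sorted(zip(candidates, ranking), key=lambda p: -p[1]):
    print(f"{name:16s} {total:.2f}")
```
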