Search Results (87)

Search Parameters:
Keywords = safe human–robot collaboration

30 pages, 2023 KiB  
Review
Fusion of Computer Vision and AI in Collaborative Robotics: A Review and Future Prospects
by Yuval Cohen, Amir Biton and Shraga Shoval
Appl. Sci. 2025, 15(14), 7905; https://doi.org/10.3390/app15147905 - 15 Jul 2025
Viewed by 271
Abstract
The integration of advanced computer vision and artificial intelligence (AI) techniques into collaborative robotic systems holds the potential to revolutionize human–robot interaction, productivity, and safety. Despite substantial research activity, a systematic synthesis of how vision and AI are jointly enabling context-aware, adaptive cobot capabilities across perception, planning, and decision-making remains lacking (especially in recent years). Addressing this gap, our review unifies the latest advances in visual recognition, deep learning, and semantic mapping within a structured taxonomy tailored to collaborative robotics. We examine foundational technologies such as object detection, human pose estimation, and environmental modeling, as well as emerging trends including multimodal sensor fusion, explainable AI, and ethically guided autonomy. Unlike prior surveys that focus narrowly on either vision or AI, this review uniquely analyzes their integrated use for real-world human–robot collaboration. Highlighting industrial and service applications, we distill the best practices, identify critical challenges, and present key performance metrics to guide future research. We conclude by proposing strategic directions—from scalable training methods to interoperability standards—to foster safe, robust, and proactive human–robot partnerships in the years ahead.

22 pages, 397 KiB  
Review
Compliant Force Control for Robots: A Survey
by Minglei Zhu, Dawei Gong, Yuyang Zhao, Jiaoyuan Chen, Jun Qi and Shijie Song
Mathematics 2025, 13(13), 2204; https://doi.org/10.3390/math13132204 - 6 Jul 2025
Viewed by 528
Abstract
Compliant force control is a fundamental capability for enabling robots to interact safely and effectively with dynamic and uncertain environments. This paper presents a comprehensive survey of compliant force control strategies, intended to enhance safety, adaptability, and precision in applications such as physical human–robot interaction, robotic manipulation, and collaborative tasks. The review begins with a classification of compliant control methods into passive and active approaches, followed by a detailed examination of direct force control techniques—including hybrid and parallel force/position control—and indirect methods such as impedance and admittance control. Special emphasis is placed on advanced compliant control strategies applied to structurally complex robotic systems, including aerial, mobile, cable-driven, and bionic robots. In addition, intelligent compliant control approaches are systematically analyzed, encompassing neural networks, fuzzy logic, sliding mode control, and reinforcement learning. Sensorless compliance techniques are also discussed, along with emerging trends in hardware design and intelligent control methodologies. This survey provides a holistic view of the current landscape, identifies key technical challenges, and outlines future research directions for achieving more robust, intelligent, and adaptive compliant force control in robotic systems.
(This article belongs to the Special Issue Intelligent Control and Applications of Nonlinear Dynamic System)
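The impedance/admittance distinction surveyed above can be made concrete with a minimal sketch: an admittance controller maps a measured external force to a compliant motion offset through a virtual mass-damper-spring model. The parameter values and step size below are illustrative assumptions, not taken from the paper.

```python
# Minimal discrete-time admittance control sketch:
# m*x'' + d*x' + k*x = f_ext, integrated by explicit Euler;
# x is the compliant position offset commanded to the robot.

def admittance_step(x, v, f_ext, dt, m=1.0, d=20.0, k=100.0):
    """One Euler step of a 1-DoF virtual mass-damper-spring model."""
    a = (f_ext - d * v - k * x) / m   # acceleration implied by the admittance law
    v = v + a * dt
    x = x + v * dt
    return x, v

# Example: a constant 5 N contact force; the offset settles toward f_ext/k = 0.05 m.
x, v = 0.0, 0.0
for _ in range(5000):                 # 5 s of simulated contact at dt = 1 ms
    x, v = admittance_step(x, v, f_ext=5.0, dt=0.001)
```

At steady state the offset converges to f_ext/k, so a stiffer virtual spring yields less compliance; the chosen gains give a critically damped response.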

27 pages, 402 KiB  
Article
Transforming Robots into Cobots: A Sustainable Approach to Industrial Automation
by Michael Fernandez-Vega, David Alfaro-Viquez, Mauricio Zamora-Hernandez, Jose Garcia-Rodriguez and Jorge Azorin-Lopez
Electronics 2025, 14(11), 2275; https://doi.org/10.3390/electronics14112275 - 3 Jun 2025
Cited by 1 | Viewed by 1011
Abstract
The growing need for sustainable and flexible automation solutions has led to the exploration of transforming traditional industrial robots into collaborative robots (cobots). This paper presents a framework for the conversion of conventional industrial robots into safe, intelligent, and sustainable cobots, leveraging advancements in artificial intelligence and computer vision and the principles of the circular economy. The proposed modular framework contains key components such as visual perception, cognitive adaptability, safe human–robot interactions, and reinforcement learning-based decision-making. Our methodology includes a comprehensive analysis of safety standards (e.g., ISO/TS 15066), robot typologies suitable for retrofitting, and sustainability strategies, including remanufacturing and lifecycle extension. A multi-phase implementation approach is laid out for a theoretical design to contribute to the development of cost-effective and environmentally responsible robotic systems, offering a scalable solution for extending the usability and social acceptance of legacy robotic platforms in collaborative settings.
(This article belongs to the Special Issue Intelligent Perception and Control for Robotics)

24 pages, 1798 KiB  
Article
HEalthcare Robotics’ ONtology (HERON): An Upper Ontology for Communication, Collaboration and Safety in Healthcare Robotics
by Penelope Ioannidou, Ioannis Vezakis, Maria Haritou, Rania Petropoulou, Stavros T. Miloulis, Ioannis Kouris, Konstantinos Bromis, George K. Matsopoulos and Dimitrios D. Koutsouris
Healthcare 2025, 13(9), 1031; https://doi.org/10.3390/healthcare13091031 - 30 Apr 2025
Viewed by 657
Abstract
Background: Healthcare robotics needs context-aware, policy-compliant reasoning to achieve safe human–agent collaboration. Current ontologies fail to provide healthcare-relevant information and flexible semantic enforcement systems. Methods: HERON is a modular upper ontology that enables healthcare robotic systems to communicate and collaborate while ensuring safety during operations. It supports domain-specific instantiations through SPARQL queries and SHACL-based constraint validation to perform context-driven logic, and it models robotic task interactions through simulated eldercare, diagnostic, and surgical support scenarios that follow ethical and regulatory standards. Results: Validation tests demonstrated HERON’s capacity to enable safe and explainable autonomous operations in changing environments. The semantic constraints enforced role eligibility, privacy conditions, and policy override functionality during agent task execution. HERON demonstrated compatibility with healthcare IT systems and adaptability to the GDPR and other policy frameworks. Conclusions: The semantically rich framework of HERON establishes an interoperable foundation for healthcare robotics. Its open architecture enables HL7/FHIR standard integration and robotic middleware compatibility, and its evaluation against SUMO, HL7, and MIMO demonstrates superior healthcare-specific capabilities. Future research will focus on optimizing HERON for low-resource clinical environments while extending its applications to remote care, emergency triage, and adaptive human–robot collaboration.
(This article belongs to the Section TeleHealth and Digital Healthcare)

22 pages, 1742 KiB  
Systematic Review
Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review
by Debora Firmino de Souza, Sonia Sousa, Kadri Kristjuhan-Ling, Olga Dunajeva, Mare Roosileht, Avar Pentel, Mati Mõttus, Mustafa Can Özdemir and Žanna Gratšjova
Electronics 2025, 14(8), 1557; https://doi.org/10.3390/electronics14081557 - 11 Apr 2025
Cited by 1 | Viewed by 1650
Abstract
The transition from Industry 4.0 to Industry 5.0 highlights recent European efforts to design intelligent devices, systems, and automation that can work alongside human intelligence and enhance human capabilities. In this vision, human–machine interaction (HMI) goes beyond simply deploying machines, such as autonomous robots, for economic advantage. It requires societal and educational shifts toward a human-centric research vision, revising how we perceive technological advancements to improve the benefits and convenience for individuals. It also requires determining what priority is given to users’ preferences and their need to feel safe while collaborating with autonomous intelligent systems. This proposed human-centric vision aims to enhance human creativity and problem-solving abilities by leveraging machine precision and data processing, all while protecting human agency. Aligned with this perspective, we conducted a systematic literature review focusing on trust and trustworthiness in relation to characteristics of humans and systems in human–robot interaction (HRI). Our research explores the aspects that impact the potential for designing and fostering machine trustworthiness from a human-centered standpoint. A systematic analysis was conducted to review 34 articles from recent HRI-related studies. Then, through a standardized screening, we identified and categorized factors influencing trust in automation that can act as trust barriers and facilitators when implementing autonomous intelligent systems. Our study comments on the application areas in which trust is considered, how it is conceptualized, and how it is evaluated within the field. Our analysis underscores the significance of examining users’ trust and the related factors impacting it as foundational elements for promoting secure and trustworthy HRI.
(This article belongs to the Special Issue Emerging Trends in Multimodal Human-Computer Interaction)

19 pages, 7148 KiB  
Article
A Human–Robot Team Knowledge-Enhanced Large Language Model for Fault Analysis in Lunar Surface Exploration
by Hao Wang, Shuqi Xue, Hongbo Zhang, Chunhui Wang and Yan Fu
Aerospace 2025, 12(4), 325; https://doi.org/10.3390/aerospace12040325 - 10 Apr 2025
Viewed by 685
Abstract
Human–robot collaboration for lunar surface exploration requires high safety standards and involves tedious operational procedures. This process generates extensive task-related data, including various types of faults and influencing factors. However, these data are multi-dimensional, time-dependent, and intertwined, and prolonged tasks and multi-factor data coupling pose significant challenges for astronauts in achieving safe and efficient fault localization and resolution. In this paper, we propose a method to enhance base large language models (LLMs) by embedding knowledge graphs (KGs) of lunar surface exploration, thereby assisting astronauts in reasoning about faults during the exploration process. A multi-round dialog dataset is constructed through the knowledge subgraph embedded in the request analysis process. The LLM is fine-tuned using the p-tuning method to develop a specialized LLM suitable for lunar surface exploration. With reference to situational awareness (SA) theory, multi-level prompts are designed to facilitate multi-round dialogues and aid decision-making. A case study shows that our proposed model exhibits greater expertise and reliability in responding to lunar surface exploration tasks than classical commercial models, such as ChatGPT and GPT-4. The results indicate that our method provides a reliable and efficient aid for astronauts in fault analysis during lunar surface exploration.
(This article belongs to the Special Issue Aerospace Human–Machine and Environmental Control Engineering)

28 pages, 6530 KiB  
Article
Obstacle Avoidance Technique for Mobile Robots at Autonomous Human-Robot Collaborative Warehouse Environments
by Lucas C. Sousa, Yago M. R. Silva, Vinícius B. Schettino, Tatiana M. B. Santos, Alessandro R. L. Zachi, Josiel A. Gouvêa and Milena F. Pinto
Sensors 2025, 25(8), 2387; https://doi.org/10.3390/s25082387 - 9 Apr 2025
Viewed by 2017
Abstract
This paper presents an obstacle avoidance technique for a mobile robot in human-robot collaborative (HRC) tasks. The proposed solution uses fuzzy logic rules and a convolutional neural network (CNN) in an integrated approach to detect objects during vehicle movement. The goal is to improve the robot’s autonomous navigation and ensure the safety of people and equipment in dynamic environments. Using this technique, it is possible to provide important references to the robot’s internal control system, guiding it to continuously adjust its velocity and yaw in order to avoid obstacles (humans and moving objects) while following the path planned for its task. The approach aims to improve operational safety without compromising productivity, addressing critical challenges in collaborative robotics. The system was tested in a simulated environment using the Robot Operating System (ROS) and Gazebo to demonstrate the effectiveness of navigation and obstacle avoidance. The results obtained with the application of the proposed technique indicate that the framework allows real-time adaptation and safe interaction between robot and obstacles in complex and changing industrial workspaces.
(This article belongs to the Section Sensors and Robotics)
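As a toy illustration of how fuzzy rules can modulate robot speed from an obstacle distance: the sketch below defuzzifies three hand-invented triangular memberships (NEAR/MEDIUM/FAR) into a speed scaling factor. The paper's actual rule base, CNN detector, and yaw control are not reproduced; all breakpoints are assumptions.

```python
def fuzzy_speed_scale(distance):
    """Toy Mamdani-style rule base mapping obstacle distance [m]
    to a speed scaling factor in [0, 1]."""
    # Triangular membership degrees for NEAR / MEDIUM / FAR (hand-tuned)
    near = max(0.0, min(1.0, (1.0 - distance) / 1.0))
    far = max(0.0, min(1.0, (distance - 1.0) / 1.0))
    medium = max(0.0, 1.0 - near - far)
    # Rules: NEAR -> stop (0.0), MEDIUM -> slow (0.5), FAR -> full speed (1.0),
    # combined by a weighted average (centroid-style defuzzification).
    total = near + medium + far
    return (near * 0.0 + medium * 0.5 + far * 1.0) / total

stop = fuzzy_speed_scale(0.0)   # obstacle at the robot: full stop
slow = fuzzy_speed_scale(1.0)   # mid-range obstacle: half speed
full = fuzzy_speed_scale(2.0)   # distant obstacle: full speed
```

The output varies smoothly between the rule consequents, which is the practical appeal of fuzzy logic over hard distance thresholds.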

43 pages, 3617 KiB  
Review
AI and Interventional Radiology: A Narrative Review of Reviews on Opportunities, Challenges, and Future Directions
by Andrea Lastrucci, Nicola Iosca, Yannick Wandael, Angelo Barra, Graziano Lepri, Nevio Forini, Renzo Ricci, Vittorio Miele and Daniele Giansanti
Diagnostics 2025, 15(7), 893; https://doi.org/10.3390/diagnostics15070893 - 1 Apr 2025
Cited by 2 | Viewed by 1569
Abstract
The integration of artificial intelligence in interventional radiology is an emerging field with transformative potential, aiming to make a great contribution to the health domain. This overview of reviews seeks to identify prevailing themes, opportunities, challenges, and recommendations related to the process of integration. Utilizing a standardized checklist and quality control procedures, this review examines recent advancements in, and future implications of, this domain. In total, 27 review studies were selected through the systematic process. Based on the overview, the integration of artificial intelligence (AI) in interventional radiology (IR) presents significant opportunities to enhance precision, efficiency, and personalization of procedures. AI automates tasks like catheter manipulation and needle placement, improving accuracy and reducing variability. It also integrates multiple imaging modalities, optimizing treatment planning and outcomes. AI aids intra-procedural guidance with advanced needle tracking and real-time image fusion. Robotics and automation in IR are advancing, though full autonomy in AI-guided systems has not been achieved. Despite these advancements, the integration of AI in IR is complex, involving imaging systems, robotics, and other technologies. This complexity requires a comprehensive certification and integration process. The role of regulatory bodies, scientific societies, and clinicians is essential to address these challenges. Standardized guidelines, clinician education, and careful AI assessment are necessary for safe integration. The future of AI in IR depends on developing standardized guidelines for medical devices and AI applications. Collaboration between certifying bodies, scientific societies, and legislative entities, as seen in the EU AI Act, will be crucial to tackling AI-specific challenges. Focusing on transparency, data governance, human oversight, and post-market monitoring will ensure AI integration in IR proceeds with safeguards, benefiting patient outcomes and advancing the field.
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging: 2nd Edition)

26 pages, 1410 KiB  
Systematic Review
Capturing Mental Workload Through Physiological Sensors in Human–Robot Collaboration: A Systematic Literature Review
by Eduarda Pereira, Luis Sigcha, Emanuel Silva, Adriana Sampaio, Nuno Costa and Nélson Costa
Appl. Sci. 2025, 15(6), 3317; https://doi.org/10.3390/app15063317 - 18 Mar 2025
Viewed by 1436
Abstract
Human–robot collaboration (HRC) is increasingly prevalent across various industries, promising to boost productivity, efficiency, and safety. As robotics technology advances and takes on more complex tasks traditionally performed by humans, the nature of work and the demands on workers are evolving. This shift emphasizes the need to critically integrate human factors into these interactions, as the effectiveness and safety of these systems are highly dependent on how workers cooperate with and understand robots. A significant challenge in this domain is the lack of a consensus on the most efficient way to operationalize and assess mental workload, which is crucial for optimizing HRC. In this systematic literature review, we analyze the different psychophysiological measures that can reliably capture and differentiate varying degrees of mental workload in different HRC settings. The findings highlight the crucial need for standardized methodologies in workload assessment to enhance HRC models. Ultimately, this work aims to guide both theorists and practitioners in creating more sophisticated, safe, and efficient HRC frameworks by providing a comprehensive overview of the existing literature and pointing out areas for further study.

18 pages, 23703 KiB  
Article
Asymmetry Elliptical Likelihood Potential Field for Real-Time Three-Dimensional Collision Avoidance in Industrial Robots
by Ean-Gyu Han, Dong-Min Seo, Jun-Seo Lee, Ho-Young Kim, Shin-Yeob Kang, Ho-Joon Yang and Tae-Koo Kang
Electronics 2025, 14(6), 1102; https://doi.org/10.3390/electronics14061102 - 11 Mar 2025
Viewed by 604
Abstract
Industrial robots play a crucial role in modern manufacturing, but ensuring safe human–robot collaboration remains a challenge. Traditional collision avoidance methods, such as physical barriers and emergency stops, are limited in efficiency and flexibility. This study proposes the Asymmetry Elliptical Likelihood Potential Field (AELPF) algorithm, a novel real-time collision avoidance system inspired by autonomous driving technologies. The AELPF method leverages LiDAR sensors to dynamically compute an asymmetric elliptical repulsive field, enabling precise obstacle detection and avoidance in 3D environments. Unlike conventional potential field approaches, the AELPF accounts for both vertical and horizontal deviations, allowing it to adapt to complex industrial settings. To quantify the performance of the AELPF, we compare it to two commonly used algorithms: the Vector Field Histogram (VFH) and the Follow the Gap Method (FGM). In terms of processing time, the VFH algorithm requires 50 ms per cycle, while the FGM algorithm operates at 22 ms. In contrast, the AELPF, when using only a single channel, processes at 12 ms, which is significantly faster than both the VFH and FGM. These results indicate that the AELPF not only provides faster decision-making but also ensures smoother, more responsive navigation in dynamic environments. Both simulation and physical experiments confirm that the AELPF significantly improves obstacle avoidance, particularly in the z-axis direction, reducing the risk of collisions while maintaining operational efficiency.
(This article belongs to the Special Issue Machine/Deep Learning Applications and Intelligent Systems)
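The repulsive-field idea behind the AELPF can be sketched with a plain symmetric elliptical potential: repulsion is zero outside an elliptical influence region and grows as the robot nears the obstacle. The paper's asymmetric formulation, LiDAR input, and 3D extension are not reproduced here, and the semi-axis values are assumptions.

```python
import math

def elliptical_repulsion(dx, dy, a=2.0, b=1.0, gain=1.0):
    """Repulsive magnitude from an elliptical influence region.

    (dx, dy): vector from obstacle to robot; a, b: ellipse semi-axes.
    """
    r = math.hypot(dx / a, dy / b)   # normalized elliptical distance
    if r >= 1.0:
        return 0.0                   # outside the influence region: no repulsion
    if r == 0.0:
        return float("inf")          # on top of the obstacle
    return gain * (1.0 / r - 1.0)    # classic potential-field falloff

near = elliptical_repulsion(0.2, 0.0)  # inside the ellipse: strong repulsion
far = elliptical_repulsion(3.0, 0.0)   # outside the ellipse: none
```

Because the normalization divides by different semi-axes per axis, the same physical distance produces a stronger push along the short axis of the ellipse, which is how an elliptical field encodes direction-dependent caution.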

41 pages, 2797 KiB  
Systematic Review
Assessing Safety in Physical Human–Robot Interaction in Industrial Settings: A Systematic Review of Contact Modelling and Impact Measuring Methods
by Samarathunga S. M. B. P. B., Marcello Valori, Giovanni Legnani and Irene Fassi
Robotics 2025, 14(3), 27; https://doi.org/10.3390/robotics14030027 - 28 Feb 2025
Viewed by 2961
Abstract
As collaborative robots (cobots) increasingly share workspaces with humans, ensuring safe physical human–robot interaction (pHRI) has become paramount. This systematic review addresses safety assessment in pHRI, focussing on the industrial field, with the objective of collecting approaches and practices developed so far for modelling, simulating, and verifying possible collisions in human–robot collaboration (HRC). To this aim, advances in human–robot collision modelling and test-based safety evaluation over the last fifteen years were examined, identifying six main categories: human body modelling, robot modelling, collision modelling, determining safe limits, approaches for evaluating human–robot contact, and biofidelic sensor development. Despite the reported advancements, several persistent challenges were identified, including the over-reliance on simplified quasi-static models, insufficient exploration of transient contact dynamics, and a lack of inclusivity in demographic data for establishing safety thresholds. This analysis also underscores the limitations of the biofidelic sensors currently used and the need for standardised validation protocols for the impact scenarios identified through risk assessment. By providing a comprehensive overview of the topic, this review aims to inspire researchers to address underexplored areas and foster innovation in developing advanced, but suitable, models to simulate human–robot contact and technologies and methodologies for reliable and user-friendly safety validation approaches. Further deepening those topics, even combined with each other, will bring about the twofold effect of easing the implementation while increasing the safety of robotic applications characterised by pHRI.
(This article belongs to the Section Industrial Robots and Automation)

27 pages, 12074 KiB  
Article
Near Time-Optimal Trajectories with ISO Standard Constraints for Human–Robot Collaboration in Fabric Co-Transportation
by Renat Kermenov, Alessandro Di Biase, Ilaria Pellicani, Sauro Longhi and Andrea Bonci
Robotics 2025, 14(2), 10; https://doi.org/10.3390/robotics14020010 - 27 Jan 2025
Cited by 1 | Viewed by 1699
Abstract
Enabling robots to work safely close to humans requires both adherence to safety standards and the development of appropriate strategies to plan and control robot movements in accordance with human movements. Collaboration between humans and robots in a shared environment is a joint activity aimed at completing specific tasks, requiring coordination, synchronisation, and sometimes physical contact, in which each party contributes its own skills and resources. Among the most challenging tasks of human–robot cooperation is the co-transport of deformable materials such as fabrics. This paper proposes a method for generating the trajectory of a collaborative manipulator, designed for the co-transport of materials such as fabrics. It combines a near time-optimal control strategy that ensures responsiveness in following human actions while simultaneously guaranteeing compliance with the safety limits imposed by current regulations. The combination of these two elements results in a viable co-transport solution which preserves the safety of human operators. This is achieved by constraining the path of the robot trajectory with prescribed velocities and accelerations while simultaneously ensuring a near time-optimal control strategy. In short, the robot movement is generated in such a way as to ensure both the tracking of humans in the co-transportation task and compliance with safety limits. As a first attempt to adopt the proposed approach to integrate time-optimal strategies into human–robot interaction, the simulations and preliminary experimental results obtained are promising.
(This article belongs to the Section Industrial Robots and Automation)
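In the simplest 1-DoF case, a near time-optimal rest-to-rest move under prescribed velocity and acceleration limits reduces to a trapezoidal (or, for short moves, triangular) velocity profile. The sketch below is that generic textbook construction, not the paper's trajectory generator; the limit values are illustrative.

```python
def trapezoidal_time(distance, v_max, a_max):
    """Minimum traversal time for a rest-to-rest move under velocity
    and acceleration limits (trapezoidal/triangular velocity profile)."""
    d_accel = v_max**2 / a_max           # distance spent accelerating plus braking
    if distance >= d_accel:              # trapezoid: a cruise phase exists
        return distance / v_max + v_max / a_max
    return 2.0 * (distance / a_max) ** 0.5  # triangle: v_max is never reached

# Safety-constrained limits would be substituted here (values illustrative).
t_long = trapezoidal_time(2.0, v_max=0.5, a_max=1.0)   # long move: trapezoid
t_short = trapezoidal_time(0.1, v_max=0.5, a_max=1.0)  # short move: triangle
```

Tightening v_max or a_max (e.g., to honour regulatory limits) lengthens the minimum time, which is exactly the safety/responsiveness trade-off the abstract describes.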

17 pages, 5755 KiB  
Article
A Hybrid Architecture for Safe Human–Robot Industrial Tasks
by Gaetano Lettera, Daniele Costa and Massimo Callegari
Appl. Sci. 2025, 15(3), 1158; https://doi.org/10.3390/app15031158 - 24 Jan 2025
Cited by 1 | Viewed by 1346
Abstract
In the context of Industry 5.0, human–robot collaboration (HRC) is increasingly crucial for enabling safe and efficient operations in shared industrial workspaces. This study aims to implement a hybrid robotic architecture based on the Speed and Separation Monitoring (SSM) collaborative scenario defined in ISO/TS 15066. The system calculates the minimum protective separation distance between the robot and the operators and slows down or stops the robot according to the risk assessment computed in real time. Compared to existing solutions, the approach prevents collisions and maximizes workcell production by reducing the robot speed only when the calculated safety index indicates an imminent risk of collision. The proposed distributed software architecture utilizes the ROS2 framework, integrating three modules: (1) a fast and reliable human tracking module based on the OptiTrack system that considerably reduces latency times or false positives, (2) an intention estimation (IE) module, employing a linear Kalman filter (LKF) to predict the operator’s next position and velocity, thus considering the current scenario and not the worst case, and (3) a robot control module that computes the protective separation distance and assesses the safety index by measuring the Euclidean distance between operators and the robot. This module dynamically adjusts robot speed to maintain safety while minimizing unnecessary slowdowns, ensuring the efficiency of collaborative tasks. Experimental results demonstrate that the proposed system effectively balances safety and speed, optimizing overall performance in human–robot collaborative industrial environments, with significant improvements in productivity and reduced risk of accidents.
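The Speed and Separation Monitoring scenario referenced here is commonly summarized by the protective separation distance relation of ISO/TS 15066: the human's travel during the reaction and stopping time, plus the robot's travel before and during braking, plus clearance and uncertainty margins. The following is a simplified sketch of that relation; all numeric constants are illustrative placeholders, and the paper's real-time safety index and intention estimation are not reproduced.

```python
def protective_distance(v_h, v_r, t_reaction, t_stop, a_stop, c=0.2, z=0.1):
    """Simplified SSM protective separation distance in the spirit of
    ISO/TS 15066; c (intrusion clearance) and z (position uncertainty)
    are illustrative margins.

    v_h: human approach speed [m/s], v_r: robot speed [m/s],
    t_reaction: system reaction time [s], t_stop: robot stopping time [s],
    a_stop: robot braking deceleration [m/s^2].
    """
    s_h = v_h * (t_reaction + t_stop)   # human travel while the system reacts and stops
    s_r = v_r * t_reaction              # robot travel before braking starts
    s_s = v_r**2 / (2.0 * a_stop)       # robot braking distance
    return s_h + s_r + s_s + c + z

# A faster robot demands more separation, all else being equal.
d_slow = protective_distance(v_h=1.6, v_r=0.25, t_reaction=0.1, t_stop=0.3, a_stop=2.0)
d_fast = protective_distance(v_h=1.6, v_r=1.0, t_reaction=0.1, t_stop=0.3, a_stop=2.0)
```

Because the distance shrinks with the robot's current speed, a controller can either slow the robot or require more separation, which is the trade-off the abstract's safety index manages.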

16 pages, 6180 KiB  
Article
Textile Fabric Defect Detection Using Enhanced Deep Convolutional Neural Network with Safe Human–Robot Collaborative Interaction
by Syed Ali Hassan, Michail J. Beliatis, Agnieszka Radziwon, Arianna Menciassi and Calogero Maria Oddo
Electronics 2024, 13(21), 4314; https://doi.org/10.3390/electronics13214314 - 2 Nov 2024
Cited by 5 | Viewed by 3089
Abstract
The emergence of modern robotic technology and artificial intelligence (AI) enables a transformation in the textile sector. Manual fabric defect inspection is time-consuming, error-prone, and labor-intensive. This offers a great opportunity for applying AI-trained automated processes with safe human–robot interaction (HRI) to reduce risks of work accidents and occupational illnesses and enhance the environmental sustainability of the processes. In this experimental study, we developed, implemented, and tested a novel algorithm that detects fabric defects by utilizing enhanced deep convolutional neural networks (DCNNs). The proposed method integrates advanced DCNN architectures to automatically classify and detect 13 different types of fabric defects, such as double-ends, holes, broken ends, etc., ensuring high accuracy and efficiency in the inspection process. The dataset is created through augmentation techniques and the model is fine-tuned on a large dataset of annotated images using transfer learning approaches. The experiment was performed using an anthropomorphic robot that was programmed to move above the fabric. The camera attached to the robot detected defects in the fabric and triggered an alarm. A photoelectric sensor was installed on the conveyor belt and linked to the robot to notify it of incoming fabric. The CNN model architecture was enhanced to increase performance. Experimental findings show that the presented system can detect fabric defects with a 97.49% mean Average Precision (mAP).
(This article belongs to the Special Issue Applications of Computer Vision, 3rd Edition)

22 pages, 125192 KiB  
Article
Under-Canopy Drone 3D Surveys for Wild Fruit Hotspot Mapping
by Paweł Trybała, Luca Morelli, Fabio Remondino, Levi Farrand and Micael S. Couceiro
Drones 2024, 8(10), 577; https://doi.org/10.3390/drones8100577 - 12 Oct 2024
Cited by 5 | Viewed by 2453
Abstract
Advances in mobile robotics and AI have significantly expanded their application across various domains and challenging conditions. In the past, this has been limited to safe, controlled, and highly structured settings, where simplifying assumptions and conditions allowed for the effective resolution of perception-based tasks. Today, however, robotics and AI are moving into the wild, where human–robot collaboration and robust operation are essential. One of the most demanding scenarios involves deploying autonomous drones in GNSS-denied environments, such as dense forests. Despite the challenges, the potential to exploit natural resources in these settings underscores the importance of developing technologies that can operate in such conditions. In this study, we present a methodology that addresses the unique challenges of natural forest environments by integrating positioning methods, leveraging cameras, LiDARs, GNSS, and vision AI with drone technology for under-canopy wild berry mapping. To ensure practical utility for fruit harvesters, we generate intuitive heat maps of berry locations and provide users with a mobile app that supports interactive map visualization, real-time positioning, and path planning assistance. Our approach, tested in a Scandinavian forest, refines the identification of high-yield wild fruit locations using V-SLAM, demonstrating the feasibility and effectiveness of autonomous drones in these demanding applications.
(This article belongs to the Section Drones in Agriculture and Forestry)
