Human-Robot Collaborations in Industrial Automation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 43010

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor


Dr. Anne Schmitz
Guest Editor
Department of Engineering and Technology, College of Science, Technology, Engineering, Mathematics and Management, University of Wisconsin-Stout, Menomonie, WI 54751, USA
Interests: computational biomechanics; additive manufacturing; engineering education

Special Issue Information

Dear Colleagues,

Technology is changing the manufacturing world. For example, sensors are being used to track inventory from the manufacturing floor all the way to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations. We invite authors to submit original research, new developments, experimental works, and surveys concerning human–robot interactions. Topics of interest include, but are not limited to:

  • Haptic feedback;
  • Controller design;
  • Physical devices using human–robot interactions;
  • Algorithm development;
  • Artificial intelligence;
  • Machine learning;
  • Interface design.

Dr. Anne Schmitz
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–robot collaboration
  • controller design
  • artificial intelligence
  • haptic feedback

Published Papers (11 papers)


Editorial


3 pages, 181 KiB  
Editorial
Human–Robot Collaboration in Industrial Automation: Sensors and Algorithms
by Anne Schmitz
Sensors 2022, 22(15), 5848; https://doi.org/10.3390/s22155848 - 05 Aug 2022
Cited by 3 | Viewed by 1666
Abstract
Technology is changing the manufacturing world [...] Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)

Research


15 pages, 7938 KiB  
Article
A Resilient and Effective Task Scheduling Approach for Industrial Human-Robot Collaboration
by Andrea Pupa, Wietse Van Dijk, Christiaan Brekelmans and Cristian Secchi
Sensors 2022, 22(13), 4901; https://doi.org/10.3390/s22134901 - 29 Jun 2022
Cited by 11 | Viewed by 2059
Abstract
Effective task scheduling in human-robot collaboration (HRC) scenarios is one of the great challenges of collaborative robotics. The shared workspace inside an industrial setting brings many uncertainties that cannot be foreseen. An offline task scheduling strategy fixed in advance is ineffective in dealing with these uncertainties. In this paper, a novel online framework to achieve a resilient and reliable task schedule is presented. The framework can deal with deviations that occur during operation, different operator skills, errors by the human or robot, and substitution of actors, while maintaining an efficient schedule by promoting parallel human-robot work. First, the collaborative job and the possible deviations are represented by AND/OR graphs. Subsequently, the proposed architecture chooses the most suitable path to improve the collaboration. If some failures occur, the AND/OR graph is adapted locally, allowing the collaboration to be completed. The framework is validated in an industrial assembly scenario with a Franka Emika Panda collaborative robot. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
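
The AND/OR-graph scheduling idea summarized in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation; the node structure, cost values, and the local replanning rule below are assumptions made purely for illustration.

    # Minimal AND/OR-graph scheduling sketch (illustrative only; node names,
    # costs, and the replanning rule are assumptions, not the paper's code).

    class Node:
        def __init__(self, name, kind="LEAF", children=None, cost=0.0):
            self.name = name          # task or subgoal name
            self.kind = kind          # "AND", "OR", or "LEAF" (primitive action)
            self.children = children or []
            self.cost = cost          # execution cost of a primitive action

    def best_cost(node):
        """Return (cost, chosen_leaves) for the cheapest way to achieve `node`."""
        if node.kind == "LEAF":
            return node.cost, [node.name]
        if node.kind == "AND":                      # all children are required
            total, leaves = 0.0, []
            for c in node.children:
                cost, sub = best_cost(c)
                total += cost
                leaves += sub
            return total, leaves
        # OR node: pick the cheapest alternative (e.g., human vs. robot execution)
        options = [best_cost(c) for c in node.children]
        return min(options, key=lambda t: t[0])

    # Toy assembly job: the base can be placed either by the robot or by the human.
    place_by_robot = Node("place_base_robot", cost=2.0)
    place_by_human = Node("place_base_human", cost=3.0)
    place_base = Node("place_base", "OR", [place_by_robot, place_by_human])
    insert_screws = Node("insert_screws", cost=4.0)
    job = Node("assemble", "AND", [place_base, insert_screws])

    cost, plan = best_cost(job)
    print("initial plan:", plan, "cost:", cost)

    # Failure handling: if the robot placement fails, inflate its cost so the
    # local OR decision switches to the human alternative and the job continues.
    place_by_robot.cost = float("inf")
    cost, plan = best_cost(job)
    print("after failure:", plan, "cost:", cost)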

20 pages, 2261 KiB  
Article
Teleoperation of High-Speed Robot Hand with High-Speed Finger Position Recognition and High-Accuracy Grasp Type Estimation
by Yuji Yamakawa and Koki Yoshida
Sensors 2022, 22(10), 3777; https://doi.org/10.3390/s22103777 - 16 May 2022
Cited by 1 | Viewed by 2089
Abstract
This paper focuses on the teleoperation of a robot hand on the basis of finger position recognition and grasp type estimation. For the finger position recognition, we propose a new method that fuses machine learning and high-speed image-processing techniques. Furthermore, we propose a grasp type estimation method according to the results of the finger position recognition by using decision tree. We developed a teleoperation system with high speed and high responsiveness according to the results of the finger position recognition and grasp type estimation. By using the proposed method and system, we achieved teleoperation of a high-speed robot hand. In particular, we achieved teleoperated robot hand control beyond the speed of human hand motion. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
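
A grasp-type classifier driven by fingertip positions, as described in the abstract above, can be sketched as follows. The feature layout, class labels, synthetic data, and the use of scikit-learn are assumptions for illustration, not the authors' pipeline.

    # Illustrative grasp-type estimation from finger positions with a decision
    # tree (feature layout and labels are assumed; not the paper's implementation).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Features: (x, y) image coordinates of 5 fingertips -> 10 values per sample.
    # Labels: 0 = power grasp, 1 = precision (pinch) grasp.
    n = 200
    spread = rng.normal(0.0, 1.0, size=(n, 10))          # widely spread fingers
    pinch  = rng.normal(0.0, 0.2, size=(n, 10))          # fingertips close together
    X = np.vstack([spread, pinch])
    y = np.hstack([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

    clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

    # At run time, each high-speed vision frame yields fingertip positions that
    # are classified immediately and forwarded to the robot hand controller.
    frame_features = rng.normal(0.0, 0.2, size=(1, 10))
    print("estimated grasp type:", "precision" if clf.predict(frame_features)[0] else "power")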

17 pages, 11692 KiB  
Article
A Human-Following Motion Planning and Control Scheme for Collaborative Robots Based on Human Motion Prediction
by Fahad Iqbal Khawaja, Akira Kanazawa, Jun Kinugawa and Kazuhiro Kosuge
Sensors 2021, 21(24), 8229; https://doi.org/10.3390/s21248229 - 09 Dec 2021
Cited by 6 | Viewed by 4023
Abstract
Human–Robot Interaction (HRI) for collaborative robots has become an active research topic recently. Collaborative robots assist human workers in their tasks and improve their efficiency. However, the worker should also feel safe and comfortable while interacting with the robot. In this paper, we propose a human-following motion planning and control scheme for a collaborative robot which supplies the necessary parts and tools to a worker in an assembly process in a factory. In our proposed scheme, a 3-D sensing system is employed to measure the skeletal data of the worker. At each sampling time of the sensing system, an optimal delivery position is estimated using the real-time worker data. At the same time, the future positions of the worker are predicted as probabilistic distributions. A Model Predictive Control (MPC)-based trajectory planner is used to calculate a robot trajectory that supplies the required parts and tools to the worker and follows the predicted future positions of the worker. We have installed our proposed scheme in a collaborative robot system with a 2-DOF planar manipulator. Experimental results show that the proposed scheme enables the robot to provide anytime assistance to a worker who is moving around in the workspace while ensuring the safety and comfort of the worker. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
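
A generic finite-horizon MPC tracking formulation of the kind described in the abstract above could be written as follows; the symbols, weights, and constraints are illustrative and are not the exact cost used in the paper.

    \min_{u_0,\dots,u_{N-1}} \;
        \sum_{k=0}^{N-1}\Big( \| p^{r}_{k} - \hat{p}^{h}_{k} \|_{Q}^{2}
                             + \| u_k \|_{R}^{2} \Big)
        + \| p^{r}_{N} - \hat{p}^{h}_{N} \|_{Q_f}^{2}
    \quad \text{s.t.} \quad
        x_{k+1} = A x_k + B u_k, \qquad
        p^{r}_{k} = C x_k, \qquad
        u_{\min} \le u_k \le u_{\max}

Here \hat{p}^{h}_{k} is the predicted (mean) worker position at step k obtained from the 3-D sensing system, p^{r}_{k} is the planned delivery position of the manipulator, and Q, R, Q_f are tracking and effort weights; the optimization is re-solved at every sampling instant with the latest prediction.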

18 pages, 11530 KiB  
Article
Human–Machine Differentiation in Speed and Separation Monitoring for Improved Efficiency in Human–Robot Collaboration
by Urban B. Himmelsbach, Thomas M. Wendt, Nikolai Hangst, Philipp Gawron and Lukas Stiglmeier
Sensors 2021, 21(21), 7144; https://doi.org/10.3390/s21217144 - 28 Oct 2021
Cited by 13 | Viewed by 2369
Abstract
Human–robot collaborative applications have been receiving increasing attention in industrial settings. The efficiency of these applications is often quite low compared to traditional robotic applications without human interaction. Especially for applications that use speed and separation monitoring, there is potential to increase efficiency with a cost-effective and easy-to-implement method. In this paper, we propose adding human–machine differentiation to the speed and separation monitoring in human–robot collaborative applications. The formula for the protective separation distance is extended with a variable for the kind of object that approaches the robot. Different sensors for differentiating human and non-human objects are presented. Thermal cameras are used to take measurements in a proof of concept. Through differentiation of human and non-human objects, it is possible to decrease the protective separation distance between the robot and the object and therefore increase the overall efficiency of the collaborative application. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
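
The extension described in the abstract above can be sketched as a small function. The terms follow the general structure of the ISO/TS 15066 protective separation distance, and the object-dependent adjustment is a simplified assumption, not the exact formula derived in the paper.

    # Simplified speed-and-separation-monitoring sketch (illustrative only).

    def protective_distance(v_h, v_r, t_reaction, t_stop, d_stop,
                            intrusion, unc_h, unc_r, is_human=True):
        """Protective separation distance S_p.

        v_h        : speed of the approaching object toward the robot [m/s]
        v_r        : robot speed toward the object [m/s]
        t_reaction : robot system reaction time [s]
        t_stop     : robot stopping time [s]
        d_stop     : robot stopping distance [m]
        intrusion  : intrusion distance (body-part reach) [m]
        unc_h/unc_r: position uncertainties of object and robot [m]
        is_human   : True if the differentiation sensor reports a human
        """
        s_h = v_h * (t_reaction + t_stop)       # object motion while the robot stops
        s_r = v_r * t_reaction                  # robot motion during the reaction time
        s_p = s_h + s_r + d_stop + intrusion + unc_h + unc_r
        # If the object is classified as non-human (e.g., an AGV), the human-reach
        # intrusion term can be dropped, shrinking the distance (assumed rule).
        if not is_human:
            s_p -= intrusion
        return s_p

    print("human  :", protective_distance(1.6, 0.5, 0.1, 0.3, 0.15, 0.85, 0.1, 0.05, True))
    print("object :", protective_distance(1.6, 0.5, 0.1, 0.3, 0.15, 0.85, 0.1, 0.05, False))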

15 pages, 1894 KiB  
Article
Uncertainty-Aware Knowledge Distillation for Collision Identification of Collaborative Robots
by Wookyong Kwon, Yongsik Jin and Sang Jun Lee
Sensors 2021, 21(19), 6674; https://doi.org/10.3390/s21196674 - 08 Oct 2021
Cited by 7 | Viewed by 2569
Abstract
Human-robot interaction has received a lot of attention as collaborative robots have become widely utilized in many industrial fields. Among techniques for human-robot interaction, collision identification is an indispensable element in collaborative robots to prevent fatal accidents. This paper proposes a deep learning method for identifying external collisions in 6-DoF articulated robots. The proposed method expands the idea of CollisionNet, which was previously proposed for collision detection, to identify the locations of external forces. The key contribution of this paper is uncertainty-aware knowledge distillation for improving the accuracy of a deep neural network. Sample-level uncertainties are estimated from a teacher network, and larger penalties are imposed on uncertain samples during the training of a student network. Experiments demonstrate that the proposed method is effective for improving the performance of collision identification. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
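
The uncertainty-aware distillation idea in the abstract above can be sketched as a per-sample weighted loss. The weighting rule, temperature, and tensor shapes below are assumptions for illustration, not the paper's exact loss.

    # Sketch of an uncertainty-weighted knowledge-distillation loss (illustrative).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, sample_uncertainty,
                          temperature=2.0, alpha=1.0):
        """Per-sample KL distillation loss, weighted by teacher uncertainty.

        sample_uncertainty: shape (batch,), larger = less confident teacher sample.
        Uncertain samples receive a larger penalty, as described in the abstract.
        """
        t = temperature
        p_teacher = F.softmax(teacher_logits / t, dim=1)
        log_p_student = F.log_softmax(student_logits / t, dim=1)
        # KL divergence per sample (sum over classes), scaled by t^2 as usual.
        kl = (p_teacher * (p_teacher.clamp_min(1e-12).log() - log_p_student)).sum(dim=1) * t * t
        weights = 1.0 + alpha * sample_uncertainty      # assumed weighting rule
        return (weights * kl).mean()

    # Toy usage: batch of 4 samples, 3 collision-location classes.
    student = torch.randn(4, 3, requires_grad=True)
    teacher = torch.randn(4, 3)
    uncertainty = torch.tensor([0.1, 0.9, 0.3, 0.5])    # e.g., from MC-dropout variance
    loss = distillation_loss(student, teacher, uncertainty)
    loss.backward()
    print(float(loss))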

23 pages, 6226 KiB  
Article
Improved Mutual Understanding for Human-Robot Collaboration: Combining Human-Aware Motion Planning with Haptic Feedback Devices for Communicating Planned Trajectory
by Stefan Grushko, Aleš Vysocký, Petr Oščádal, Michal Vocetka, Petr Novák and Zdenko Bobovský
Sensors 2021, 21(11), 3673; https://doi.org/10.3390/s21113673 - 25 May 2021
Cited by 26 | Viewed by 4364
Abstract
In a collaborative scenario, the communication between humans and robots is a fundamental aspect of achieving good efficiency and ergonomics in the task execution. A lot of research has been done on enabling a robot system to understand and predict human behaviour, allowing the robot to adapt its motion to avoid collisions with human workers. Assuming the production task has a high degree of variability, the robot's movements can be difficult to predict, leading to a feeling of anxiety in the worker when the robot changes its trajectory and approaches, since the worker has no information about the planned movement of the robot. Additionally, without information about the robot's movement, the human worker cannot effectively plan their own activity without forcing the robot to constantly replan its movement. We propose a novel approach to communicating the robot's intentions to a human worker. The improvement to the collaboration is presented by introducing haptic feedback devices, whose task is to notify the human worker about the currently planned robot trajectory and changes in its status. In order to verify the effectiveness of the developed human-machine interface in the conditions of a shared collaborative workspace, a user study was designed and conducted among 16 participants, whose objective was to accurately recognise the goal position of the robot during its movement. Data collected during the experiment included both objective and subjective parameters. Statistically significant results of the experiment indicated that all the participants improved their task completion time by over 45% and generally were more subjectively satisfied when completing the task with the haptic feedback devices equipped. The results also suggest the usefulness of the developed notification system, since it improved users' awareness of the motion plan of the robot. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
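
A minimal sketch of mapping a planned robot trajectory to a wearable haptic cue might look like the following. The device interface, vibration scale, and distance threshold are assumptions for illustration, not the hardware or logic developed in the paper.

    # Illustrative mapping from a planned trajectory to a vibration cue
    # (thresholds and scaling are assumptions for illustration).
    import math

    def vibration_intensity(planned_waypoints, hand_position, warn_radius=0.6):
        """Return a 0..1 vibration level: stronger when the planned robot path
        passes closer to the worker's hand, zero beyond warn_radius [m]."""
        d_min = min(math.dist(wp, hand_position) for wp in planned_waypoints)
        return max(0.0, 1.0 - d_min / warn_radius)

    path = [(0.2, 0.0, 0.3), (0.4, 0.1, 0.3), (0.6, 0.2, 0.3)]   # toy waypoints [m]
    hand = (0.5, 0.25, 0.3)
    print("vibration level:", round(vibration_intensity(path, hand), 2))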

20 pages, 2864 KiB  
Article
A Mixed-Perception Approach for Safe Human–Robot Collaboration in Industrial Automation
by Fatemeh Mohammadi Amin, Maryam Rezayati, Hans Wernher van de Venn and Hossein Karimpour
Sensors 2020, 20(21), 6347; https://doi.org/10.3390/s20216347 - 07 Nov 2020
Cited by 42 | Viewed by 6597
Abstract
Digital-enabled manufacturing systems require a high level of automation for fast and low-cost production but should also be flexible and adaptive to varying and dynamic conditions in their environment, including the presence of human beings. However, the presence of workers in a workspace shared with robots decreases productivity, as the robot is not aware of the human position and intention, which leads to concerns about human safety. This issue is addressed in this work by designing a reliable safety monitoring system for collaborative robots (cobots). The main idea is to significantly enhance safety by combining recognition of human actions using visual perception with interpretation of physical human–robot contact using tactile perception. Two datasets containing contact and vision data were collected with different volunteers. The action recognition system classifies human actions from their skeleton representation when they enter the shared workspace, and the contact detection system distinguishes between intentional and incidental interactions if physical contact between human and cobot takes place. Two different deep learning networks are used for human action recognition and contact detection, which, in combination, are expected to enhance human safety and increase the level of cobot perception about human intentions. The results show a promising path for future AI-driven solutions for safe and productive human–robot collaboration (HRC) in industrial automation. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
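
A simple rule for fusing the two perception channels described above into a cobot safety command could look like this; the labels and the decision table are assumptions, not the trained networks or logic from the paper.

    # Illustrative fusion of visual action recognition and tactile contact
    # classification into a cobot safety command (labels and rules are assumed).

    def safety_command(action_label, contact_label):
        """action_label : output of the vision network, e.g. 'approaching',
                          'working_nearby', 'away'
           contact_label: output of the tactile network, e.g. 'none',
                          'intentional', 'incidental'"""
        if contact_label == "incidental":
            return "protective_stop"          # unexpected touch: stop immediately
        if contact_label == "intentional":
            return "hand_guiding_mode"        # deliberate touch: switch to guiding
        if action_label == "approaching":
            return "reduced_speed"            # human entering the shared workspace
        return "normal_operation"

    for action, contact in [("away", "none"), ("approaching", "none"),
                            ("working_nearby", "intentional"),
                            ("working_nearby", "incidental")]:
        print(action, "+", contact, "->", safety_command(action, contact))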

25 pages, 10939 KiB  
Article
Development and Application of a Tandem Force Sensor
by Zhijian Zhang, Youping Chen and Dailin Zhang
Sensors 2020, 20(21), 6042; https://doi.org/10.3390/s20216042 - 23 Oct 2020
Cited by 3 | Viewed by 2330
Abstract
In robot teaching for contact tasks, it is necessary not only to accurately perceive the traction force exerted by the hands, but also to perceive the contact force at the robot end. This paper develops a tandem force sensor to detect traction and contact forces. As a component of the tandem force sensor, a cylindrical traction force sensor is developed to detect the traction force applied by the hands. Its structure is designed to be convenient for human operation, and the mechanical model of its cylinder-shaped elastic structural body has been analyzed. After calibration, the cylindrical traction force sensor is shown to detect forces/moments with small errors. Then, a tandem force sensor is developed based on the cylindrical traction force sensor and a wrist force sensor. A robot teaching experiment on drawer switches was conducted, and the results confirm that the developed traction force sensor is simple to operate and that the tandem force sensor can perceive both the traction and contact forces. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
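
The force/moment recovery step common to strain-gauge force sensors of this kind can be sketched as below. The calibration matrix values and the subtraction used to separate the contact wrench from the traction wrench are illustrative assumptions, not the paper's calibration or model.

    # Illustrative force/moment recovery for a strain-gauge force sensor and a
    # tandem (traction + wrist) arrangement; numbers are made up for the sketch.
    import numpy as np

    # A calibrated sensor maps gauge readings s (e.g., 6 bridge voltages) to a
    # wrench w = [Fx, Fy, Fz, Mx, My, Mz] through w = C @ s.
    C_traction = np.eye(6) * 2.5          # assumed calibration matrix [N/V or Nm/V]
    C_wrist    = np.eye(6) * 3.0

    s_traction = np.array([0.4, 0.0, -0.2, 0.01, 0.00, 0.02])   # gauge readings [V]
    s_wrist    = np.array([0.9, 0.1, -0.3, 0.02, 0.01, 0.03])

    w_traction = C_traction @ s_traction   # force applied by the teacher's hand
    w_wrist    = C_wrist @ s_wrist         # total wrench seen at the robot wrist

    # One plausible way to isolate the environment contact wrench during teaching
    # is to subtract the traction wrench from the wrist wrench (an assumption here).
    w_contact = w_wrist - w_traction
    print("traction:", np.round(w_traction, 2))
    print("contact :", np.round(w_contact, 2))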

Review


37 pages, 2247 KiB  
Review
Reinforcement Learning Approaches in Social Robotics
by Neziha Akalin and Amy Loutfi
Sensors 2021, 21(4), 1292; https://doi.org/10.3390/s21041292 - 11 Feb 2021
Cited by 51 | Viewed by 10044
Abstract
This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent interacts through trial and error with its environment to discover an optimal behavior. Since interaction is a key component in both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that include physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also covers the benefits and challenges of reinforcement learning in social robotics, the evaluation methods of the surveyed papers (whether they use subjective or algorithmic measures), a discussion of real-world reinforcement learning challenges and proposed solutions, and the points that remain to be explored, including approaches that have so far received less attention. Thus, this paper aims to serve as a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
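
As a reminder of the basic trial-and-error framework the survey covers, a minimal tabular Q-learning update is sketched below. The toy "interaction" environment and reward are assumptions, unrelated to any specific system reviewed in the paper.

    # Minimal tabular Q-learning sketch: an agent learns by trial and error which
    # of two "interaction" actions earns reward (toy environment, for illustration).
    import random

    random.seed(0)
    n_states, n_actions = 3, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.1, 0.9, 0.2

    def step(state, action):
        """Toy dynamics: action 1 is 'good' in every state and yields reward 1."""
        reward = 1.0 if action == 1 else 0.0
        next_state = (state + 1) % n_states
        return next_state, reward

    state = 0
    for _ in range(2000):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning temporal-difference update
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

    print([[round(q, 2) for q in row] for row in Q])   # action 1 dominates in every state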

Other

17 pages, 2934 KiB  
Letter
A Framework for Human-Robot-Human Physical Interaction Based on N-Player Game Theory
by Rui Zou, Yubin Liu, Jie Zhao and Hegao Cai
Sensors 2020, 20(17), 5005; https://doi.org/10.3390/s20175005 - 03 Sep 2020
Cited by 7 | Viewed by 3110
Abstract
In order to analyze the complex interactive behaviors between a robot and two humans, this paper presents an adaptive optimal control framework for human-robot-human physical interaction. N-player linear quadratic differential game theory is used to describe the system under study. This theory cannot be applied directly in real scenarios, since the robot cannot know the humans' control objectives in advance. To let the robot learn these objectives, the paper presents an online estimation method that identifies the unknown control objectives based on the recursive least squares algorithm. The Nash equilibrium solution of the human-robot-human interaction is obtained by solving the coupled Riccati equations. Adaptive optimal control can thus be achieved during the human-robot-human physical interaction. The effectiveness of the proposed method is demonstrated by rigorous theoretical analysis and simulations. The simulation results show that the proposed controller achieves adaptive optimal control during the interaction between the robot and two humans and outperforms an LQR controller. Full article
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)
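
The online estimation step mentioned above, identifying a partner's unknown objective parameters with recursive least squares, can be sketched generically. The regression model, signals, and "human objective" interpretation below are assumptions for illustration, not the paper's derivation.

    # Generic recursive least-squares (RLS) sketch for estimating an unknown
    # parameter vector theta from streaming data y_k = phi_k^T theta + noise.
    import numpy as np

    rng = np.random.default_rng(1)
    theta_true = np.array([2.0, -1.0])        # unknown parameters to identify

    n = len(theta_true)
    theta_hat = np.zeros(n)                   # current estimate
    P = np.eye(n) * 1000.0                    # covariance, large initial uncertainty
    lam = 0.99                                # forgetting factor

    for k in range(500):
        phi = rng.normal(size=n)              # regressor (e.g., measured state signals)
        y = phi @ theta_true + 0.01 * rng.normal()   # noisy measurement
        # RLS gain, estimate update, covariance update
        K = P @ phi / (lam + phi @ P @ phi)
        theta_hat = theta_hat + K * (y - phi @ theta_hat)
        P = (P - np.outer(K, phi) @ P) / lam

    print("estimated:", np.round(theta_hat, 3), "true:", theta_true)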
