A Novel Human Intention Prediction Approach Based on Fuzzy Rules through Wearable Sensing in Human–Robot Handover
Abstract
1. Introduction
- We developed a novel solution that uses a wearable data glove to predict human handover intentions in human–robot handover.
- We used a data glove instead of EMG sensors to obtain human handover intention information, which avoids the susceptibility of EMG signals to interference from other physiological electrical signals and thereby improves the prediction accuracy of human handover intentions.
- We collected human gesture information to predict handover intention, which enabled the robot to distinguish among different human handover intentions.
- We proposed a fast human intention prediction (HIP) method based on fuzzy rules. The rules are built on thresholds of the bending angles of the five fingers rather than on the raw quaternions of the finger-mounted IMUs, which makes the prediction of human handover intentions faster (see the sketch after this list).
- We conducted experiments to verify the method; the results showed that it is both effective and accurate for human HIP.
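The bending angles that drive these fuzzy rules can be recovered from the glove's IMU quaternions. Below is a minimal illustrative sketch (not the authors' published code), assuming Hamilton-convention unit quaternions stored as [w, x, y, z] and taking a finger's bending angle as the rotation angle of the relative quaternion between a palm-mounted and a finger-mounted IMU; the paper's exact kinematic mapping may differ.

```python
import numpy as np

def quat_conjugate(q):
    """Conjugate of a unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(q1, q2):
    """Hamilton product q1 * q2 (both stored as [w, x, y, z])."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def bending_angle_deg(q_palm, q_finger):
    """Bending angle taken as the rotation angle of the relative
    quaternion between the palm IMU and a finger IMU (assumed mapping)."""
    q_rel = quat_multiply(quat_conjugate(q_palm), q_finger)
    w = min(abs(q_rel[0]), 1.0)  # |w| folds q and -q together; clamp for acos
    return np.degrees(2.0 * np.arccos(w))

# Identical orientations give a zero-degree bend.
q = np.array([1.0, 0.0, 0.0, 0.0])
print(bending_angle_deg(q, q))  # 0.0
```

Working in angle space rather than quaternion space reduces each finger's state to a single scalar, which is what makes simple threshold-style fuzzy rules practical.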
2. Materials and Methods
2.1. Overall Framework of Human–Robot Handover
2.2. Human Handover Intention Sensing
2.3. Human Handover Intention Prediction
2.4. Experimental Setup
3. Results and Discussion
3.1. Sensing of Human Handover Intentions
3.2. Prediction Model Construction
3.3. Performance Evaluations of the Prediction Model
3.4. Online Human–Robot Handover
3.5. Robot Motion Mode Adjustment
3.6. Online Robot–Human Handover
4. Conclusions
- For human handover intention sensing (HIS), a wearable data glove is used to sense human handover intention information. Unlike vision-based sensing, data glove-based sensing is unaffected by visual occlusion; unlike physical contact-based sensing, it poses no threat to human safety.
- For human HIP, a fast prediction method based on fuzzy rules has been proposed. With this method, the robot can efficiently predict human handover intentions from the sensing data acquired by the data glove.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
Table: Bending-angle ranges (in degrees) of the five fingers for each handover intention (column order assumed to run from thumb to little finger).

Intention | Thumb | Index | Middle | Ring | Little
---|---|---|---|---|---
Give the big red cylinder | {−29, −21} | {87, 94} | {98, 106} | {115, 120} | {107, 115}
Give the middle black cylinder | {−23, −19} | {46, 55} | {53, 62} | {39, 48} | {16, 42}
Give the small blue cylinder | {−14, −6} | {62, 66} | {103, 111} | {129, 134} | {146, 153}
Need the big red cylinder | {26, 37} | {−19, −5} | {104, 115} | {125, 132} | {139, 149}
Need the middle black cylinder | {49, 73} | {−25, −15} | {−20, −5} | {126, 133} | {122, 151}
Need the small blue cylinder | {−11, 8} | {117, 130} | {8, 22} | {−9, −1} | {−12, −5}
Rotate arm | {−62, −44} | {−36, −25} | {−37, −27} | {−39, −29} | {−40, −31}
Rotate hand | {4, 28} | {88, 93} | {85, 93} | {80, 100} | {72, 83}
Move up | {33, 58} | {−32, −9} | {−31, −21} | {−31, −20} | {−29, −16}
Move down | {−38, −27} | {125, 144} | {138, 155} | {135, 147} | {13, 23}
Move close | {−64, −53} | {2, 10} | {158, 165} | {166, 170} | {167, 170}
Move far | {−2, 6} | {62, 71} | {133, 144} | {145, 163} | {161, 170}
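Each row of the table above can be read as one fuzzy rule: an intention fires when all five bending angles fall inside (or near) their respective ranges. The sketch below shows one plausible realization with trapezoidal memberships and min (AND) aggregation; the two encoded rows are taken from the table, but the 10° transition band and the membership shape are assumptions, not the authors' tuning.

```python
# Two table rows encoded as fuzzy rules over the five finger bending angles.
RULES = {
    "Give the big red cylinder": [(-29, -21), (87, 94), (98, 106), (115, 120), (107, 115)],
    "Rotate arm":                [(-62, -44), (-36, -25), (-37, -27), (-39, -29), (-40, -31)],
}
BAND = 10.0  # degrees of linear falloff outside each interval (assumed)

def membership(angle, lo, hi, band=BAND):
    """Trapezoidal membership: 1 inside [lo, hi], linear decay over `band`."""
    if lo <= angle <= hi:
        return 1.0
    d = (lo - angle) if angle < lo else (angle - hi)
    return max(0.0, 1.0 - d / band)

def predict(angles):
    """Min-aggregate the five finger memberships per rule (fuzzy AND),
    then return the intention whose rule fires most strongly."""
    scores = {
        name: min(membership(a, lo, hi) for a, (lo, hi) in zip(angles, intervals))
        for name, intervals in RULES.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

# Angles near the "Rotate arm" row fire that rule with full strength.
print(predict([-50.0, -30.0, -32.0, -34.0, -35.0]))  # ('Rotate arm', 1.0)
```

Because each rule reduces to five interval checks, evaluation cost is trivially small, which is consistent with the paper's claim that angle-threshold rules are faster than operating on raw quaternions.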
Table: Per-intention prediction accuracy of our method compared with SVM, LDA, and KNN.

Intention | Our Method | SVM | LDA | KNN
---|---|---|---|---
Give the big red cylinder | 99% | 92% | 38% | 92% |
Give the middle black cylinder | 100% | 100% | 52% | 96% |
Give the small blue cylinder | 98% | 81% | 84% | 93% |
Need the big red cylinder | 100% | 100% | 76% | 100% |
Need the middle black cylinder | 100% | 100% | 66% | 100% |
Need the small blue cylinder | 100% | 99% | 82% | 100% |
Rotate arm | 100% | 100% | 100% | 100% |
Rotate hand | 100% | 100% | 99% | 100% |
Move up | 98% | 100% | 100% | 100% |
Move down | 100% | 99% | 100% | 100% |
Move close | 100% | 100% | 100% | 100% |
Move far | 100% | 100% | 98% | 100% |
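For context, per-intention accuracies like those in the table above can be tallied straightforwardly from labeled test trials. The helper below is a generic sketch; the trial labels shown are made-up placeholders, not the paper's data.

```python
from collections import Counter

def per_class_accuracy(y_true, y_pred):
    """Fraction of correctly predicted trials for each intention label."""
    correct, total = Counter(), Counter()
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {label: correct[label] / total[label] for label in total}

y_true = ["Rotate arm", "Rotate arm", "Move up", "Move up"]
y_pred = ["Rotate arm", "Rotate arm", "Move up", "Rotate arm"]
print(per_class_accuracy(y_true, y_pred))  # {'Rotate arm': 1.0, 'Move up': 0.5}
```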