Collision-Free Path Planning in Dynamic Environment Using High-Speed Skeleton Tracking and Geometry-Informed Potential Field Method
Abstract
1. Introduction
- Regional division of the potential field into free, precautionary, and repulsive zones to suppress oscillations (P2).
- Tangential motion to suppress oscillations and enhance adaptability to dynamic environments (P2 and P3).
- Spherical virtual obstacles to avoid local minima and handle arbitrary 3D obstacles (P1 and P4).
- Fundamental and comprehensive system design from tracking to robot control, with versatility in terms of working environments and high responsiveness.
- High-speed skeleton tracking combining deep learning-based detection and optical flow-based motion extraction with dynamic search area adjustment (a minimal sketch of this hybrid loop follows this list).
- High-speed geometry-informed potential field path planning method applicable to dynamic environments, complex obstacles, and human skeletons.
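As a preview of the hybrid tracking idea named above, the following is a minimal, hypothetical sketch: a low-rate deep pose detector re-anchors the keypoints, while high-rate Farnebäck optical flow [45], restricted to a local search window around each joint, propagates them between detections. The Ultralytics pose API [49] is used for illustration; the function names, window half-width, and weights file are assumptions, not the authors' implementation.

```python
# Illustrative hybrid tracking sketch (not the authors' code): a deep detector
# re-anchors keypoints at a low rate; windowed dense optical flow updates them
# at a high rate in between.
import cv2
import numpy as np
from ultralytics import YOLO  # YOLOv8 pose models, as in [49]

model = YOLO("yolov8m-pose.pt")

def detect_keypoints(frame):
    """Low-rate re-anchoring: (K, 2) pixel keypoints of the first person."""
    result = model(frame, verbose=False)[0]
    return result.keypoints.xy[0].cpu().numpy()

def propagate_keypoint(prev_gray, gray, kp, half=60):
    """High-rate update: shift one keypoint by the mean Farneback flow inside
    a local search window; restricting the flow computation to the window is
    what keeps the per-frame cost low."""
    h, w = gray.shape
    x0, x1 = int(max(kp[0] - half, 0)), int(min(kp[0] + half, w))
    y0, y1 = int(max(kp[1] - half, 0)), int(min(kp[1] + half, h))
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y0:y1, x0:x1],
                                        gray[y0:y1, x0:x1], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return kp + flow.reshape(-1, 2).mean(axis=0)
```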
2. Related Work
2.1. Human–Robot Collaboration (HRC)
2.2. Human Pose Estimation
2.3. Path Planning
3. High-Speed Skeleton Tracking
3.1. Hybrid Tracking
3.2. Dynamic Search Area Adjustment
3.3. Handling Short-Term Occlusion
4. Collision-Free Path Planning
- Free zone: the minimum human–robot distance exceeds the precautionary radius, and only attractive motion toward the target acts;
- Precautionary zone: the distance lies between the repulsive radius and the precautionary radius, and tangential motion is blended in to suppress oscillation;
- Repulsive zone: the distance falls below the repulsive radius, and repulsive motion dominates (a classification sketch follows this list).
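A minimal sketch of this regional division, assuming two illustrative radii `r_pre > r_rep` for the precautionary and repulsive boundaries (the names are placeholders, not the paper's symbols):

```python
# Hypothetical zone classification by minimum human-robot distance d.
def classify_zone(d: float, r_pre: float, r_rep: float) -> str:
    if d > r_pre:
        return "free"           # attractive motion only
    if d > r_rep:
        return "precautionary"  # blend in tangential motion
    return "repulsive"          # repulsive motion dominates
```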
4.1. Minimum Distance Between Human and Robot
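Section 4.1 reduces collision checking to the minimum distance between the robot and the human skeleton. A simplified sketch, under the assumption that the robot is represented by its end-effector point and each skeleton bone by a line segment (the helper names are hypothetical):

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment ab (all 3-vectors)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def min_human_robot_distance(p_ee, joints, bones):
    """Minimum distance from end-effector point p_ee to a skeleton given
    joint positions (array of 3-vectors) and bone index pairs."""
    return min(point_to_segment(p_ee, joints[i], joints[j]) for i, j in bones)
```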
4.2. Potential Field
4.2.1. Attractive Motion
- Convert the rotation vector representing the end-effector’s orientation into the corresponding rotation matrix.
- Compute the relative rotation from the rotation matrices of the end-effector and target orientations, respectively.
- Convert the relative rotation back into a rotation vector.
- Compute the control input for orientation using the same method as that for position control (see the sketch after this list).
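The four steps can be sketched with SciPy's rotation utilities; the signature and the gain `k_rot` are assumptions for illustration:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_control(rotvec_ee, R_target, k_rot=1.0):
    """Sketch of the four steps above."""
    R_ee = R.from_rotvec(rotvec_ee).as_matrix()    # 1) rotation vector -> matrix
    R_err = R_target @ R_ee.T                      # 2) relative rotation
    rotvec_err = R.from_matrix(R_err).as_rotvec()  # 3) matrix -> rotation vector
    return k_rot * rotvec_err                      # 4) proportional control input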
4.2.2. Repulsive Motion
4.2.3. Tangent Motion
- Condition (i): the obstacle is moving (dynamic condition);
- Condition (ii): the obstacle crosses the line segment between the robot and the target (the red region shown in Figure 6b).
- Danger: at least one of conditions (i) and (ii) holds;
- Safe: neither condition holds.
- Type I (target-based): points in the direction of the target.
- Type II (velocity-based): points in the direction opposite to the obstacle’s motion.
- Type III (target-directed): points directly toward the target (a geometric sketch follows this list).
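As a geometric illustration of a type-I (target-based) tangential direction, one simple construction projects the robot-to-target direction onto the plane perpendicular to the repulsive (obstacle-to-robot) direction. This is a sketch of the idea, not the paper's exact formula:

```python
import numpy as np

def tangent_vector(p_robot, p_obst, p_target):
    """Unit component of the robot->target direction orthogonal to the
    repulsive direction, so the robot skirts the obstacle while still
    progressing toward the target (illustrative construction)."""
    n = p_robot - p_obst
    n = n / np.linalg.norm(n)          # repulsive (obstacle -> robot) direction
    t = p_target - p_robot
    t_tan = t - np.dot(t, n) * n       # remove the radial component
    norm = np.linalg.norm(t_tan)
    return t_tan / norm if norm > 1e-9 else np.zeros(3)
```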
4.3. Virtual Obstacle Setting
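The spherical virtual obstacle of Section 4.3 wraps a complex or concave obstacle in a sphere so that the repulsive field stays smooth and local minima are avoided (P1, P4). A crude illustrative construction, not necessarily the paper's placement rule (the margin value is an assumption):

```python
import numpy as np

def bounding_sphere(points, margin=0.05):
    """Spherical proxy for a complex obstacle: centre at the centroid,
    radius covering all sample points plus a safety margin."""
    pts = np.asarray(points)
    center = pts.mean(axis=0)
    radius = np.linalg.norm(pts - center, axis=1).max() + margin
    return center, radius
```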
4.4. System Asymptotic Stability and Singularity Avoidance
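Section 4.4 relies on singularity-robust inverse kinematics; the cited damped least-squares approach [47] can be sketched as follows (the damping constant is an assumed value, not the paper's):

```python
import numpy as np

def dls_step(J, dx, damping=0.05):
    """Damped least-squares joint update: dq = J^T (J J^T + lambda^2 I)^-1 dx.
    The damping term keeps the inverse well-conditioned near singularities."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + (damping ** 2) * np.eye(m), dx)
```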
5. Simulation
- CPU: Intel(R) Core(TM) i9-10980XE;
- GPU: NVIDIA RTX A5000/NVIDIA RTX 5000 Ada.
- Why is high-frequency feedback control necessary in HRC scenarios?
- Effect of the precautionary zone and tangent motion.
- Effect of the radius of the precautionary zone.
- Effect of the virtual obstacle.
- Effectiveness under cluttered situations.
5.1. High-Frequency Feedback Control and Human Safety
5.2. Effect of the Precautionary Zone and Tangent Motion
5.3. Effect of the Radius of the Precautionary Zone
- For a slow-moving human, a larger radius of the precautionary zone leads to a greater minimum distance to the human.
- A larger radius of the precautionary zone results in quicker convergence to the target.
5.4. Effect of the Virtual Obstacle
5.5. Effectiveness Under Cluttered Situation
5.6. Summary of the Simulation
- High-frequency feedback control enables the robot to respond with low latency to human motion, maintaining a larger distance from the human even during unpredictable abrupt movements.
- Path planning: Our path planning method runs at over 10,000 Hz, leaving ample headroom for high-frequency feedback control.
- Path planning: The precautionary zone and tangential vector (Section 4.2 and Section 4.2.3) effectively suppress robot oscillations (P2), handle dynamic obstacles (P3), and improve convergence to the target.
- Path planning: A larger precautionary zone radius enables earlier evasive motion and a more efficient approach to the target.
- Path planning: The virtual obstacle setting (Section 4.3) allows the robot to plan paths globally and avoid local minima (P1) when navigating complex obstacles (P4) with minimal processing time.
- Path planning: Our method achieves efficient collision-free motion generation with the dense, full-body human skeletal model, successfully balancing the approach to the dynamic target and the maintenance of a safe distance.
6. Real-World Experiment
6.1. High-Speed Skeleton Tracking
6.1.1. Effect of High-Speed Skeleton Tracking
- Conventional method: YOLOv8m-pose detection [49].
- Control method: a hybrid tracking system with a static search area whose size was tuned to 60 pixels in advance.
- Proposed method.
6.1.2. Effect of Dynamic Search Area Adjustment
- Static search area with a fixed size of 60 pixels (pre-tuned);
- Dynamic size adjustment;
- Dynamic adjustment of both size and shape (an illustrative sizing sketch follows this list).
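A sketch of what dynamic size-and-shape adjustment could look like, assuming the search window grows with the keypoint's recent per-frame motion along each axis (all constants and names are assumptions, not the paper's tuning):

```python
def search_window(velocity, base=30.0, gain=4.0, max_half=120.0):
    """Illustrative dynamic search-area adjustment: half-widths grow with the
    keypoint's per-frame motion, elongating the window along the dominant
    motion axis."""
    half_x = min(base + gain * abs(velocity[0]), max_half)
    half_y = min(base + gain * abs(velocity[1]), max_half)
    return half_x, half_y  # rectangle of +-half_x, +-half_y around the keypoint
```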
6.1.3. Effect of the Solution for Short-Term Occlusion
6.2. Static Target and Static Obstacle
6.3. Static Target and Dynamic Obstacle
6.4. Dynamic Target and Static Obstacle
6.5. Dynamic Target and Dynamic Obstacle
6.6. Summary of Collision-Free Path Planning
- Tracking: Our high-speed skeleton tracking (Section 3) operates at approximately 250 Hz, eight times faster than the conventional method, demonstrating its effectiveness in tracking fast-moving joints in both bright and dark environments.
- Static target and static obstacles: Our system exhibits evasive capabilities and can navigate around complex-shaped obstacles.
- Static target and dynamic obstacles: By utilizing the precautionary zone and the tangential vector, our system prevents oscillatory motion in dynamic environments, effectively extending the robot’s operational space to dynamic scenarios.
- Dynamic target and static obstacles: The robot follows a dynamic target while maintaining a safe distance from obstacles, enabling dynamic tasks in proximity to humans.
- Dynamic target and dynamic obstacles: The robot successfully follows a dynamic target in a dynamic environment, showcasing its capability to perform simultaneous and distinct dynamic tasks alongside humans.
- Tracking: Implementing an evaluation metric to assess how meaningful the motions extracted by high-speed tracking are; in this paper, optical flow is responsible for eliminating inappropriate motions.
- Tracking: Developing a tracking system that is robust to occlusion by adding a camera to the robot’s end-effector or implementing alternative solutions to mitigate occlusion issues.
- Tracking: Revising the triangulation method to use multiple points instead of a single representative point. Block matching [50] within a limited region could be a potential solution if it can achieve sufficient processing speeds.
- Robot control: Extending the path planning method to account for the robot’s kinematics, thereby preventing kinematically impossible configurations.
- Robot control: Implementing a whole-joint control method to enhance the robot’s evasive capabilities.
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and Human-Robot Co-working. Procedia Comput. Sci. 2019, 158, 688–695.
2. Li, H.; Ding, X. Adaptive and intelligent robot task planning for home service: A review. Eng. Appl. Artif. Intell. 2023, 117, 105618.
3. Ajoudani, A.; Zanchettin, A.M.; Ivaldi, S.; Albu-Schäffer, A.; Kosuge, K.; Khatib, O. Progress and prospects of the human–robot collaboration. Auton. Robot. 2018, 42, 957–975.
4. Yin, T.; Zhang, Z.; Liang, W.; Zeng, Y.; Zhang, Y. Multi-man–robot disassembly line balancing optimization by mixed-integer programming and problem-oriented group evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 1363–1375.
5. Hémono, P.; Nait Chabane, A.; Sahnoun, M. Multi-objective optimization of human–robot collaboration: A case study in aerospace assembly line. Comput. Oper. Res. 2025, 174, 106874.
6. Wang, T.; Liu, Z.; Wang, L.; Li, M.; Wang, X.V. Data-efficient multimodal human action recognition for proactive human–robot collaborative assembly: A cross-domain few-shot learning approach. Robot. Comput.-Integr. Manuf. 2024, 89, 102785.
7. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266.
8. Yamakawa, Y.; Matsui, Y.; Ishikawa, M. Development of a Real-Time Human-Robot Collaborative System Based on 1 kHz Visual Feedback Control and Its Application to a Peg-in-Hole Task. Sensors 2021, 21, 663.
9. Zhou, P.; Zheng, P.; Qi, J.; Li, C.; Lee, H.-Y.; Duan, A.; Lu, L.; Li, Z.; Hu, L.; Navarro-Alarcon, D. Reactive human–robot collaborative manipulation of deformable linear objects using a new topological latent control model. Robot. Comput.-Integr. Manuf. 2024, 88, 102727.
10. Wang, Z.; Qureshi, A.H. DeRi-IGP: Learning to manipulate rigid objects using deformable objects via iterative grasp-pull. arXiv 2025, arXiv:2309.04843.
11. Guo, X.; Fan, C.; Zhou, M.; Liu, S.; Wang, J.; Qin, S.; Tang, Y. Human–robot collaborative disassembly line balancing problem with stochastic operation time and a solution via multi-objective shuffled frog leaping algorithm. IEEE Trans. Autom. Sci. Eng. 2024, 21, 4448–4459.
12. Sarker, S.; Green, H.N.; Yasar, M.S.; Iqbal, T. CoHRT: A collaboration system for human-robot teamwork. arXiv 2024, arXiv:2410.08504.
13. Rosenberger, P.; Cosgun, A.; Newbury, R.; Kwan, J.; Ortenzi, V.; Corke, P.; Grafinger, M. Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision. arXiv 2020, arXiv:2006.01797.
14. Ishikawa, M. High-speed vision and its applications toward high-speed intelligent systems. J. Robot. Mechatron. 2022, 34, 912–935.
15. Zhang, D.; Van, M.; Sopasakis, P.; McLoone, S. An NMPC-ECBF Framework for Dynamic Motion Planning and Execution in Vision-Based Human-Robot Collaboration. arXiv 2024, arXiv:2304.06923.
16. Xia, X.; Li, T.; Sang, S.; Cheng, Y.; Ma, H.; Zhang, Q.; Yang, K. Path Planning for Obstacle Avoidance of Robot Arm Based on Improved Potential Field Method. Sensors 2023, 23, 3754.
17. Lasota, P.A.; Fong, T.; Shah, J.A. A survey of methods for safe human–robot interaction. Found. Trends Robot. 2017, 5, 261–349.
18. Wang, Z.; Liu, Z.; Ouporov, N.; Song, S. ContactHandover: Contact-Guided Robot-to-Human Object Handover. arXiv 2024, arXiv:2404.01402.
19. Mascaro, E.V.; Sliwowski, D.; Lee, D. HOI4ABOT: Human-Object Interaction Anticipation for Human Intention Reading Collaborative roBOTs. arXiv 2024, arXiv:2309.16524.
20. Kothari, A.; Tohme, T.; Zhang, X.; Youcef-Toumi, K. Enhanced Human-Robot Collaboration using Constrained Probabilistic Human-Motion Prediction. arXiv 2023, arXiv:2310.03314.
21. Wright, R.; Parekh, S.; White, R.; Losey, D.P. Safely and Autonomously Cutting Meat with a Collaborative Robot Arm. arXiv 2024, arXiv:2401.07875.
22. Zhuang, Z.; Ben-Shabat, Y.; Zhang, J.; Gould, S.; Mahony, R. GoferBot: A Visual Guided Human-Robot Collaborative Assembly System. arXiv 2023, arXiv:2304.08840.
23. Mathur, P. Proactive Human-Robot Interaction using Visuo-Lingual Transformers. arXiv 2023, arXiv:2310.02506.
24. Javdani, S.; Admoni, H.; Pellegrinelli, S.; Srinivasa, S.S.; Bagnell, J.A. Shared autonomy via hindsight optimization for teleoperation and teaming. arXiv 2017, arXiv:1706.00155.
25. Morrison, D.; Corke, P.; Leitner, J. Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach. arXiv 2018, arXiv:1804.05172.
26. Sundermeyer, M.; Mousavian, A.; Triebel, R.; Fox, D. Contact-GraspNet: Efficient 6-DoF grasp generation in cluttered scenes. arXiv 2021, arXiv:2103.14127.
27. Ghadirzadeh, A.; Bütepage, J.; Maki, A.; Kragic, D.; Björkman, M. A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction. arXiv 2016, arXiv:1607.07939.
28. Olivas-Padilla, B.E.; Papanagiotou, D.; Senteri, G.; Manitsaris, S.; Glushkova, A. Computational Ergonomics for Task Delegation in Human-Robot Collaboration: Spatiotemporal Adaptation of the Robot to the Human Through Contactless Gesture Recognition. arXiv 2022, arXiv:2203.11007.
29. Jha, D.K.; Jain, S.; Romeres, D.; Yerazunis, W.; Nikovski, D. Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control. arXiv 2022, arXiv:2212.01434.
30. Zhang, F.; Cully, A.; Demiris, Y. Personalized Robot-Assisted Dressing Using User Modeling in Latent Spaces. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), Vancouver, BC, Canada, 24–28 September 2017.
31. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. arXiv 2018, arXiv:1703.06870.
32. Lin, G.; Milan, A.; Shen, C.; Reid, I. RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation. arXiv 2016, arXiv:1611.06612.
33. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; Torralba, A. Scene Parsing Through ADE20K Dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5122–5130.
34. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.-E.; Sheikh, Y. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. arXiv 2019, arXiv:1812.08008.
35. Maji, D.; Nagori, S.; Mathew, M.; Poddar, D. YOLO-Pose: Enhancing YOLO for Multi-Person Pose Estimation Using Object Keypoint Similarity Loss. arXiv 2022, arXiv:2204.06806.
36. Yamakawa, Y.; Yoshida, K. Teleoperation of High-Speed Robot Hand with High-Speed Finger Position Recognition and High-Accuracy Grasp Type Estimation. Sensors 2022, 22, 3777.
37. Mun, Y.-J.; Huang, Z.; Chen, H.; Niu, Y.; You, H.; McPherson, D.L.; Driggs-Campbell, K. User-Friendly Safety Monitoring System for Manufacturing Cobots. arXiv 2023, arXiv:2307.01886.
38. Ichnowski, J.; Avigal, Y.; Satish, V.; Goldberg, K. Deep Learning Can Accelerate Grasp-Optimized Motion Planning. Sci. Robot. 2020, 5, eabd7710.
39. Karaman, S.; Frazzoli, E. Sampling-based Algorithms for Optimal Motion Planning. arXiv 2011, arXiv:1105.1186.
40. Nguyen, Q.; Sreenath, K. Exponential Control Barrier Functions for Enforcing High Relative-Degree Safety-Critical Constraints. In Proceedings of the 2016 American Control Conference (ACC), Boston, MA, USA, 6–8 July 2016; pp. 322–328.
41. Zhang, H.; Zhu, Y.; Liu, X.; Xu, X. Analysis of Obstacle Avoidance Strategy for Dual-Arm Robot Based on Speed Field with Improved Artificial Potential Field Algorithm. Electronics 2021, 10, 1850.
42. Chen, Y.; Chen, L.; Ding, J.; Liu, Y. Research on Real-Time Obstacle Avoidance Motion Planning of Industrial Robotic Arm Based on Artificial Potential Field Method in Joint Space. Appl. Sci. 2023, 13, 6973.
43. Wang, X.; Wu, Q.; Wang, T.; Cui, Y. A Path-Planning Method to Significantly Reduce Local Oscillation of Manipulators Based on Velocity Potential Field. Sensors 2023, 23, 9617.
44. Jia, Q.; Wang, X. An Improved Potential Field Method for Path Planning. In Proceedings of the 2010 Chinese Control and Decision Conference, Xuzhou, China, 26–28 May 2010; pp. 2265–2270.
45. Farnebäck, G. Two-frame motion estimation based on polynomial expansion. In Image Analysis; Bigun, J., Gustavsson, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2003; pp. 363–370.
46. Kawawaki, Y.; Yamakawa, Y. Real-time Collision Avoidance in Dynamic Environments Using High-speed Skeleton Tracking and the Velocity Potential Method. In Proceedings of the 42nd Annual Conference of the Robotics Society of Japan (RSJ2024), Osaka, Japan, 4–6 September 2024.
47. Deo, A.S.; Walker, I.D. Overview of damped least-squares methods for inverse kinematics of robot manipulators. J. Intell. Robot. Syst. 1995, 14, 43–68.
48. Demonstration Video. Available online: http://www.hfr.iis.u-tokyo.ac.jp/research/CollisionAvoidance/index-e.html (accessed on 2 March 2025).
49. Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO; Version 8.0.0; AGPL-3.0, 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 10 January 2025).
50. Setyawan, R.A.; Sunoko, R.; Choiron, M.A.; Rahardjo, P.M. Implementation of Stereo Vision Semi-Global Block Matching Methods for Distance Measurement. Indones. J. Electr. Eng. Comput. Sci. 2018, 12, 585.
| Method | (P1) Local Minimum | (P2) Oscillation | (P3) Dynamic Situation | (P4) Complex Obstacle | High-Speed |
|---|---|---|---|---|---|
| Minimum distance (Section 4.1) | | | | ✓ | ✓ |
| Region division (Section 4.2) | | ✓ | | | ✓ |
| Tangent motion (Section 4.2.3) | | ✓ | ✓ | | ✓ |
| Virtual obstacle (Section 4.3) | ✓ | | | ✓ | ✓ |
| State | ➀,➃ (i) | ➀,➃ (ii) | ➁,➄ (i) | ➁,➄ (ii) | ➂,➅ (i) | ➂,➅ (ii) |
|---|---|---|---|---|---|---|
| Tangent vector type | II | III | II | I | II | III |
| Target | Obstacle | (P1) | (P2) | (P3) | (P4) | Purpose |
|---|---|---|---|---|---|---|
| Static | Static | ✓ | | | ✓ | Can the robot reach the static target while avoiding a human holding a complex-shaped object? |
| Static | Dynamic | | ✓ | ✓ | | Can the robot reach the static target while avoiding a human moving toward it? |
| Dynamic | Static | | ✓ | ✓ | | Can the robot follow a dynamic task in close proximity to a human? |
| Dynamic | Dynamic | | ✓ | ✓ | | Can the robot and human perform different dynamic tasks simultaneously? |
Joint-wise RMSE in pixels (LS/RS: left/right shoulder; LE/RE: left/right elbow; LW/RW: left/right wrist):

| Situation | Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|---|
| Bright | Conventional method | 30 | 7.92 | 6.9 | 13.56 | 12.21 | 18.7 | 14.04 |
| Bright | Control method | 250 | 6.7 | 6.02 | 9.69 | 8.69 | 14.77 | 10.27 |
| Bright | Proposed method | 250 | 7.22 | 5.98 | 7.97 | 7.02 | 11.82 | 8.23 |
| Dark | Conventional method | 30 | 3.46 | 2.89 | 5.13 | 4.89 | 9.52 | 9.0 |
| Dark | Control method | 250 | 3.44 | 2.82 | 4.57 | 4.0 | 8.01 | 6.96 |
| Dark | Proposed method | 250 | 3.54 | 3.32 | 4.16 | 3.79 | 7.47 | 6.78 |
Joint-wise RMSE in pixels:

| Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|
| Static search area | 250 | 6.7 | 6.02 | 9.69 | 8.69 | 14.77 | 10.27 |
| Dynamic size | 250 | 7.73 | 6.3 | 9.66 | 8.32 | 14.16 | 10.23 |
| Dynamic size and shape | 250 | 7.22 | 5.98 | 7.97 | 7.02 | 11.82 | 8.23 |
Joint-wise RMSE in pixels:

| Camera | Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|---|
| Left | Conventional method | 30 | 4.28 | 5.43 | 5.02 | 3.6 | 4.43 | 3.15 |
| Left | Proposed method | 250 | 2.66 | 4.79 | 4.82 | 4.46 | 5.21 | 3.64 |
| Right | Conventional method | 30 | 6.15 | 11.93 | 4.78 | 6.71 | 3.63 | 5.19 |
| Right | Proposed method | 250 | 4.53 | 9.05 | 4.66 | 5.24 | 3.25 | 4.73 |
Joint-wise RMSE in pixels:

| Camera | Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|---|
| Left | Conventional method | 30 | 3.28 | 6.06 | 4.53 | 3.68 | 3.4 | 2.71 |
| Left | Proposed method | 250 | 4.74 | 5.83 | 3.16 | 3.19 | 3.31 | 3.43 |
| Right | Conventional method | 30 | 3.48 | 5.62 | 4.0 | 7.21 | 4.74 | 4.2 |
| Right | Proposed method | 250 | 4.78 | 5.25 | 3.59 | 7.15 | 4.87 | 3.84 |
Joint-wise RMSE in pixels:

| Camera | Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|---|
| Left | Conventional method | 30 | 4.04 | 3.52 | 2.08 | 3.13 | 2.23 | 3.01 |
| Left | Proposed method | 250 | 4.8 | 4.99 | 4.04 | 3.48 | 2.72 | 4.38 |
| Right | Conventional method | 30 | 2.9 | 5.84 | 2.81 | 3.86 | 2.68 | 4.08 |
| Right | Proposed method | 250 | 3.77 | 4.8 | 4.07 | 4.18 | 3.02 | 4.84 |
Joint-wise RMSE in pixels:

| Camera | Method | Frame Rate (fps) | LS | RS | LE | RE | LW | RW |
|---|---|---|---|---|---|---|---|---|
| Left | Conventional method | 30 | 3.53 | 3.06 | 5.28 | 4.79 | 5.8 | 5.78 |
| Left | Proposed method | 250 | 3.57 | 3.36 | 4.8 | 4.89 | 5.24 | 5.25 |
| Right | Conventional method | 30 | 3.32 | 3.06 | 5.05 | 3.94 | 5.97 | 5.21 |
| Right | Proposed method | 250 | 3.81 | 3.44 | 4.76 | 4.24 | 5.48 | 5.11 |
| Target | Obstacle | (P1) | (P2) | (P3) | (P4) | Result |
|---|---|---|---|---|---|---|
| Static | Static | | | | ✓ | The robot was able to reach the static target while keeping a safe distance from the static and complex-shaped human. |
| Static | Dynamic | | ✓ | ✓ | | The robot was able to reach the static target while keeping a safe distance even in the presence of a dynamic human. |
| Dynamic | Static | | ✓ | ✓ | | The robot was able to follow the dynamic target while maintaining a safe distance from a static human. |
| Dynamic | Dynamic | | ✓ | ✓ | | The robot was able to follow the dynamic target while the human performed another dynamic task in close proximity to the robot. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).