Optimal Viewpoint Assistance for Cooperative Manipulation Using D-Optimality
Abstract
1. Introduction
2. Related Work
2.1. Improving the Accuracy of Object and Environmental Recognition/Reconstruction
2.2. Integrating Viewpoint Optimization with Robot Operations for Task Execution
3. D-Optimality-Based Optimal Viewpoint Selection
3.1. System Architecture for Cooperative Manipulation Task Execution Utilizing Optimal Viewpoints
- (1) To quantitatively evaluate the information value of each viewpoint based on the accuracy of shape estimation.
- (2) To confirm that information obtained from an external viewpoint remains valid even after being shared.
3.2. Object Modeling and Coordinate Transformation
- (1) Transformation from Camera Coordinate System to Monitoring Robot’s Base Coordinate System
- (2) Transformation from Monitoring Robot’s Base Coordinate System to Manipulator’s Base Coordinate System
- (3) Overall Transformation
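The three transformation steps above compose as a chain of homogeneous transforms. The sketch below illustrates the composition with NumPy; the rotation and translation values are hypothetical placeholders, not the paper's calibration, and the frame names are assumptions for illustration.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# (1) Camera frame -> monitoring robot's base frame (illustrative values)
T_base_cam = make_transform(np.eye(3), np.array([0.1, 0.0, 0.5]))
# (2) Monitoring robot's base frame -> manipulator's base frame
T_manip_base = make_transform(np.eye(3), np.array([-0.8, 0.2, 0.0]))
# (3) Overall transformation: matrix product of the two steps
T_manip_cam = T_manip_base @ T_base_cam

# Map a point observed in the camera frame into the manipulator's frame
p_cam = np.array([0.0, 0.0, 1.0, 1.0])  # homogeneous point 1 m in front of the camera
p_manip = T_manip_cam @ p_cam
```

Because each step is a rigid-body transform, the overall mapping is obtained by a single matrix product, so observations from the monitoring robot's camera can be expressed directly in the manipulator's base frame.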
4. Optimization of Viewpoints Using D-Optimality
Derivation of the Multivariate Normal Distribution and D-Optimality
Algorithm 1: Threshold-Based View Angle Selection Using D-Optimality
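The core idea of threshold-based selection can be sketched as follows. This is a minimal illustration, not a reproduction of the paper's Algorithm 1: the determinant-of-the-information-matrix scoring, the function names, and the fallback-to-best behavior are all assumptions made for the example.

```python
import numpy as np

def d_optimality(points):
    """D-optimality score of a candidate viewpoint: determinant of the
    information matrix (inverse covariance) of the observed 3-D points.
    Higher values indicate lower estimation uncertainty."""
    cov = np.cov(points.T)          # 3x3 covariance of the observed points
    info = np.linalg.inv(cov)       # information matrix
    return np.linalg.det(info)

def select_viewpoint(candidates, threshold):
    """Threshold-based selection: accept the first candidate whose
    D-optimality exceeds the threshold; otherwise fall back to the best seen."""
    best_idx, best_score = 0, -np.inf
    for i, pts in enumerate(candidates):
        score = d_optimality(pts)
        if score >= threshold:
            return i, score
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score

# Two hypothetical candidate viewpoints: the "tight" point cloud has lower
# measurement spread, hence a higher D-optimality score.
spread = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                   [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
wide_view = spread            # noisier observations
tight_view = 0.5 * spread     # more concentrated observations
idx, score = select_viewpoint([wide_view, tight_view], threshold=1e6)
```

Thresholding lets the monitoring robot stop searching as soon as a sufficiently informative viewpoint is found, rather than exhaustively scoring every candidate.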
5. Verification of Cooperative Manipulation System Using Viewpoint Optimization Based on D-Optimality
5.1. Verification of Object Estimation Accuracy Using D-Optimality
5.1.1. Experimental Environment
5.1.2. Experimental Results
5.2. Improving Manipulation Accuracy Using D-Optimality
5.2.1. Experimental Environment
- (1) The accuracy of grasp position estimation was evaluated in the perception phase, prior to object grasping. The mean error was calculated as the Euclidean distance between the estimated and true object centers, providing a quantitative indicator of how much the D-optimality algorithm contributes to the accuracy of object position estimation.
- (2) The variance of the estimation results across multiple trials was compared to evaluate the stability and reproducibility of the task. This confirms that the viewpoint selection not only improves performance in a single trial but also yields consistent results across repeated trials.
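The two evaluation metrics above reduce to simple statistics over repeated trials. The sketch below computes them with NumPy; the trial values are invented for illustration and are not the paper's experimental data.

```python
import numpy as np

def grasp_error_stats(estimated_centers, true_center):
    """Metric (1): mean Euclidean error between estimated and true object
    centers. Metric (2): variance of those errors across trials."""
    errors = np.linalg.norm(np.asarray(estimated_centers) - true_center, axis=1)
    return errors.mean(), errors.var()

# Hypothetical per-trial center estimates (metres) around a known true center
true_center = np.array([0.30, 0.00, 0.05])
estimates = np.array([
    [0.31, 0.00, 0.05],
    [0.29, 0.01, 0.05],
    [0.30, -0.01, 0.06],
])
mean_err, var_err = grasp_error_stats(estimates, true_center)
```

A low mean error addresses metric (1), while a low variance addresses metric (2): together they indicate that a high-D-optimality viewpoint yields both accurate and repeatable grasp position estimates.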
5.2.2. Experimental Results
- (1) Improved Shape Estimation Accuracy: A viewpoint with high D-optimality reduces the variance of shape-related data and the estimation error of object dimensions; consequently, the object center coordinates can be determined more accurately.
- (2) Improved Accuracy of Grasping Position: By improving the accuracy of shape and center-coordinate estimation, deviation from the target grip position is suppressed. As a result, variation in grasping positions is minimized, enabling consistent grasps close to the true position.
- (3) Effectiveness of D-Optimality as an Index: The experimental results show a clear correlation between D-optimality and both estimation and grasping accuracy; selecting a viewpoint with high D-optimality measurably improves the success rate and accuracy of task execution.
5.3. Collaborative Manipulation Through Optimal Viewpoint Selection
5.3.1. Experimental Environment
5.3.2. Experimental Results
6. Conclusions and Future Work
- (1) Autonomous decomposition and understanding of tasks so that the robot can set the necessary parameters.
- (2) Algorithms for parameter selection and weighting.
- (3) Action planning to efficiently acquire optimal information.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kameyama, K.; Horie, K.; Sekiyama, K. Optimal Viewpoint Assistance for Cooperative Manipulation Using D-Optimality. Sensors 2025, 25, 3002. https://doi.org/10.3390/s25103002