Deformable and Fragile Object Manipulation: A Review and Prospects
Abstract
1. Introduction
- Systematically review the state of the art in deformable object manipulation (DOM) by analyzing the foundational roles of hardware morphology, sensing, modeling, and control in the context of fragility.
- Structure this analysis using a hierarchical framework that distinguishes between slow, deliberative cognitive control and fast, reflexive safety mechanisms.
- Argue that proprioception is the critical, synergistic bridge between these control layers and propose it as a key direction for creating more robust and adaptive systems.
2. The Role of Hardware in Fragile Object Manipulation: Gripper Morphology
2.1. Soft Grippers
- Fluidic Elastomer Actuators (FEAs): Among the most common soft-gripper technologies, FEAs use chambers within a soft body that are actuated by positive (pneumatic) or negative (vacuum) pressure to create bending or other motions [28].
- Tendon-Driven Passive Structures: These grippers combine compliant, passive fingers with external actuators (like motors) that pull on tendons or cables to close the grip. This approach allows for soft contact while leveraging conventional actuation [29].
- Granular and Particle Jamming: These grippers consist of a membrane filled with granular material. In its loose state, the gripper can conform to an object; when a vacuum is applied, the particles jam together, creating a solid grip that is highly adaptive to irregular shapes (see the control-sequence sketch after this list) [30].
- Controlled Adhesion Mechanisms: Rather than applying force, these grippers use principles like electro-adhesion (electrostatic fields) or gecko-adhesion (van der Waals forces) to lift objects with an extremely light touch, making them ideal for the most delicate items [31].
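The conform-then-jam protocol behind granular jamming can be summarized in a few lines of control logic. The sketch below is purely illustrative: `Pump` and `ForceSensor` are hypothetical interfaces (not tied to any product API from the cited works), and the pressure and force values are placeholder magnitudes.

```python
import time

class JammingGripper:
    """Illustrative conform-then-jam grasp sequence for a granular gripper."""

    def __init__(self, pump, contact_sensor):
        self.pump = pump              # hypothetical vacuum pump (pressure in kPa)
        self.sensor = contact_sensor  # hypothetical normal-force sensor (N)

    def grasp(self, approach_force=0.5, jam_vacuum=-60.0, timeout=5.0):
        self.pump.set_pressure(0.0)          # membrane loose: particles flow, gripper conforms
        t0 = time.time()
        while self.sensor.normal_force() < approach_force:
            if time.time() - t0 > timeout:
                raise TimeoutError("no contact detected")
            time.sleep(0.01)                 # press gently until light contact is sensed
        self.pump.set_pressure(jam_vacuum)   # vacuum jams the grains: grip becomes rigid
        time.sleep(0.3)                      # allow the membrane pressure to settle

    def release(self):
        self.pump.set_pressure(0.0)          # un-jam: gripper is compliant again
```

The key design point is that the gripper is soft exactly when contact uncertainty is highest (during approach) and stiff only after the shape has already been accommodated.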
2.2. Rigid Grippers
- Stress Concentration: Contact is often limited to a few points or lines, creating high local pressures that can bruise, fracture, or otherwise damage delicate structures (the contact-mechanics example after this list makes the scaling concrete).
- Sensitivity to Errors: Small uncertainties in object position, shape, or the control model can lead to the application of excessive force, as the gripper cannot physically yield. The interaction is inherently less robust and safe for the object compared to a soft gripper.
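Classical contact mechanics makes the stress-concentration argument quantitative. As a hedged illustration (standard Hertz theory, not a result from the cited works), a rigid spherical fingertip of radius $R$ pressing on a flat object with normal force $F$ produces a contact radius $a$ and peak pressure $p_{\max}$:

```latex
% Hertzian contact between a spherical fingertip (radius R) and a flat object:
a = \left( \frac{3FR}{4E^{*}} \right)^{1/3}, \qquad
p_{\max} = \frac{3F}{2\pi a^{2}} = \left( \frac{6\,F\,{E^{*}}^{2}}{\pi^{3} R^{2}} \right)^{1/3},
\qquad \frac{1}{E^{*}} = \frac{1-\nu_{1}^{2}}{E_{1}} + \frac{1-\nu_{2}^{2}}{E_{2}}
```

Because $p_{\max}$ grows as ${E^{*}}^{2/3}$, a stiff (rigid-on-rigid) contact concentrates pressure sharply, whereas a compliant pad (small $E^{*}$) spreads the same force over a larger patch at lower peak pressure. This is precisely the passive-safety advantage of soft grippers summarized in Section 2.3.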
2.3. Comparison and Trade-Offs
3. Sensing and Perception
3.1. Vision Sensing and Perception
3.2. Tactile Sensing and Perception
3.3. Force/Torque Sensing and Perception
3.4. Challenges in Fragile Object Sensing and Perception
- Stress and Strain Detection: Vision systems struggle to detect internal stresses, while tactile sensors are limited to surface interactions, leaving blind spots in real-time fragility assessment.
- Occlusion and Transparency: Vision sensors fail in occluded environments or with transparent objects, which undermines safe manipulation.
- Bandwidth Limitations: High-bandwidth tactile feedback required for fragile object handling introduces complexities in both data acquisition and processing speeds.
- Sensor Fusion: Effective integration of multiple sensing modalities (vision, tactile, and force/torque) remains a challenge, particularly in fragility-aware systems requiring fine-grained real-time feedback.
3.5. Opportunities
- Vision-Inferred Tactile Sensing: Beyond dedicated hardware like GelSight (which uses an internal camera), a prominent research direction uses external vision to infer tactile properties. By observing an object’s deformation, these methods can estimate contact forces and pressures without direct contact, offering a powerful solution for environments where physical tactile sensors are impractical or infeasible [52].
- Leverage Proprioceptive Force Estimation: Using the robot’s own dynamic model and motor currents to estimate contact forces offers a low-cost, universally applicable alternative to dedicated sensors [53]. Future work must focus on creating highly accurate models and robust filtering techniques to disentangle delicate contact forces from the robot’s own dynamic noise (a minimal sketch follows this list).
- Advance Holistic Sensor Fusion: The future of perception lies in methodologies that intelligently fuse the global context from vision with high-frequency local data from tactile sensors and the global interaction dynamics captured by force/torque feedback (either measured or estimated).
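One classical way to realize proprioceptive force estimation is the inverse-dynamics residual: subtract the rigid-body model’s predicted torque from the torque inferred from motor currents, then low-pass filter the difference. The sketch below is a minimal illustration, assuming the `pinocchio` dynamics library, a placeholder URDF path, and per-joint current-to-torque constants `kt`:

```python
import numpy as np
import pinocchio as pin  # rigid-body dynamics library (assumed available)

class ContactResidualEstimator:
    """Estimate external joint torques from proprioception alone."""

    def __init__(self, urdf_path, kt, alpha=0.05):
        self.model = pin.buildModelFromUrdf(urdf_path)  # e.g., "robot.urdf" (placeholder)
        self.data = self.model.createData()
        self.kt = np.asarray(kt)   # per-joint current-to-torque constants (N·m/A)
        self.alpha = alpha         # low-pass factor: rejects the robot's own dynamic noise
        self.filtered = None

    def update(self, q, dq, ddq, motor_current):
        tau_measured = self.kt * np.asarray(motor_current)
        # Inverse dynamics: tau_model = M(q)·ddq + C(q, dq)·dq + g(q)
        tau_model = pin.rnea(self.model, self.data, q, dq, ddq)
        residual = tau_measured - tau_model  # torque attributed to external contact
        self.filtered = residual if self.filtered is None else \
            (1 - self.alpha) * self.filtered + self.alpha * residual
        return self.filtered
```

Mapping the filtered residual through the arm Jacobian (pseudo-inverse of its transpose) yields an end-effector force estimate; the filter constant `alpha` trades detection latency against noise rejection, which is exactly the tension Section 6 raises for fragile objects.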
4. Modeling Deformable and Fragile Objects
4.1. Model Representation
4.2. Analytical Models
4.3. Data-Driven Models
4.4. Challenges in Fragile Object Modeling
- Stress Threshold Prediction: Fragile objects require precise stress and deformation predictions to avoid local damage, which remains challenging for both analytical and data-driven models.
- Dynamic Fragility Modeling: Objects often change fragility conditions during manipulation (e.g., brittle transitions in glass or softening in tissues). Neither modeling approach fully accounts for these dynamic states.
- Computational Trade-Offs: Analytical models are computationally expensive for high-resolution fragility simulations, whereas data-driven approaches struggle with real-time safety guarantees.
4.5. Opportunities
- Developing hybrid models that combine analytical accuracy with data-driven flexibility to adapt to unforeseen fragility changes (a minimal sketch follows this list).
- Implementing real-time fragility monitoring through feedback loops, leveraging high-bandwidth proprioceptive sensing.
- Addressing computational challenges by optimizing algorithms for fragile object dynamics simulations without compromising safety.
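As a hedged illustration of the first opportunity, the sketch below pairs a linear-elastic analytical prior with a small learned residual. The stiffness matrix `K`, the force-to-displacement interface, and the choice of ridge regression are all assumptions made for the example, not a method from the reviewed literature:

```python
import numpy as np
from sklearn.linear_model import Ridge

class HybridDeformationModel:
    """Analytical elasticity prior + data-driven residual correction."""

    def __init__(self, K):
        self.K_inv = np.linalg.inv(K)     # analytical compliance: x = K^{-1} f
        self.residual = Ridge(alpha=1.0)  # learns only what the physics misses
        self.fitted = False

    def analytical(self, f):
        return self.K_inv @ f             # small-strain linear elasticity (prior)

    def fit(self, forces, displacements):
        forces = np.asarray(forces)
        # Train the residual on the mismatch between data and the analytical prior.
        errors = np.asarray(displacements) - forces @ self.K_inv.T
        self.residual.fit(forces, errors)
        self.fitted = True

    def predict(self, f):
        x = self.analytical(f)
        if self.fitted:                   # correct the prior once data are available
            x = x + self.residual.predict(np.atleast_2d(f))[0]
        return x
```

Because the learned term models only the mismatch, the hybrid degrades gracefully to the analytical prediction when data are scarce, which matters for real-time safety guarantees.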
5. Motion Planning for Deformable Object Manipulation
5.1. Model-Based Planning
5.2. Learning-Based Planning
5.3. Feedback-Based Control and Visual Servoing
5.4. Challenges in Planning for Fragile Objects
- Integration of Fragility Constraints: Existing planning methods rarely embed fragility-related thresholds, such as limits on stress, strain, or applied force, into the cognitive loop’s trajectory generation.
- Adaptiveness to Uncertainty: Analytical and heuristic methods struggle to adapt when sensory feedback suggests dynamic changes in object fragility during manipulation.
- Real-Time Decision-Making: The computational cost of planning makes it difficult for the cognitive loop to react to fast, unexpected events, placing a heavy burden on lower-level reactive controllers.
- Task-Specific Limitations: Many planning frameworks are designed for specific applications (e.g., garment handling and food preparation) and are not generalizable to objects with diverse fragility profiles.
5.5. Opportunities
- Fragility-Aware Planning Models: Develop planning frameworks for the cognitive loop that incorporate safety constraints directly into trajectory generation, using fragility predictions derived from sensing (a minimal sketch follows this list).
- Hybrid Planning Architectures: Combine the deliberative efficiency of the cognitive loop with the rapid error correction of the reflexive loop, embedding fragility rules to achieve both safety and flexibility.
- Bio-Inspired Predictive Planning: Take inspiration from biological cognitive systems that integrate proprioception, vision, and tactile feedback for predictive adjustments during manipulation.
- Real-Time Planning Optimization: Improve the computational efficiency of learning-based approaches to enable real-time, fragility-aware decision-making.
- Multi-Object Planning Integration: Extend existing frameworks to interactive tasks involving multiple fragile objects, such as simultaneous handling or assembly.
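The first opportunity can be illustrated with a thin wrapper that screens a deliberative planner’s proposals against a damage threshold. Everything here is an assumed interface for illustration: `planner.sample_trajectory`, `planner.penalize`, and `predict_peak_stress` stand in for any sampling-based planner and any stress model from Section 4.

```python
import numpy as np

def fragility_aware_plan(planner, predict_peak_stress, sigma_max, n_retries=20):
    """Return the first proposed trajectory whose predicted peak stress
    stays below the damage threshold sigma_max (Pa) at every waypoint."""
    for _ in range(n_retries):
        traj = planner.sample_trajectory()                # deliberative proposal
        stresses = [predict_peak_stress(wp) for wp in traj]
        if max(stresses) < sigma_max:                     # fragility constraint holds everywhere
            return traj
        worst = int(np.argmax(stresses))                  # most damaging waypoint
        planner.penalize(traj, worst)                     # bias the next proposal away from it
    raise RuntimeError("no fragility-safe trajectory found within retry budget")
```

Rejection-plus-penalization is the simplest way to retrofit a fragility constraint onto an existing planner; embedding the constraint inside the sampler itself would be more efficient but requires the planner modifications discussed above.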
6. Control for Deformable Object Manipulation
6.1. Model-Based Control
6.1.1. Impedance and Admittance Control
6.2. Model-Free Feedback-Based Control
6.2.1. Visual Servoing
6.2.2. Tactile Feedback Control
6.3. Learning-Based Control
6.4. Challenges in Control for Fragile Objects
- Lack of Fragility Constraints: Control strategies, especially in Reinforcement Learning, often optimize for task completion without explicit fragility-aware parameters. Reward functions may not sufficiently penalize actions that cause subtle damage, and policies learned via Imitation Learning can fail when encountering unseen states where the object’s fragility becomes a factor.
- Computational Bottlenecks: Model-based controllers like MPC, while capable of predictive planning, often cannot meet the real-time computational requirements for safety-critical tasks. The delay in optimizing a new plan can be longer than the time it takes to irreversibly damage a fragile object.
- Response Latency and Sensor Limitations: The effectiveness of any feedback-based control is limited by sensor and processing latency. For fragile objects, even a small delay in detecting a force spike or slip from visual or tactile data can be the difference between a successful manipulation and a failed one. Furthermore, the limited spatial coverage of tactile sensors means that the controller is blind to damaging events happening outside the contact patch.
- Generalization Gaps: Learning-based methods frequently fail to generalize from simulations to the real world or from training objects to new ones with different fragility properties. A policy trained to handle a firm object may apply excessive force when confronted with a softer, more delicate variant.
6.5. Future Opportunities in Control
- Fragility-Aware Learning: A significant opportunity lies in incorporating fragility constraints directly into the learning process. This can be achieved through safety-constrained reward functions, intrinsic penalties for high forces or rapid deformations, or by training a dedicated “safety critic” that evaluates the risk of an action in parallel with the main control policy (a reward-shaping sketch follows this list).
- Hybrid Control Systems: Future work should explore hybrid frameworks that combine the predictive, optimal nature of model-based controllers with the rapid response of reactive mechanisms. For example, high-level MPC could plan a safe, long-horizon trajectory, while a low-level impedance controller or a simple reflexive loop provides an instantaneous safety net against unexpected forces.
- Hierarchical and Bio-Inspired Control: There is great potential in exploring hierarchical architectures that mimic biological systems. These would feature a high-level cognitive layer for strategic planning and a low-level reflexive layer that handles immediate safety based on high-frequency feedback from proprioceptive or tactile sensors, creating a system that is both intelligent and robustly safe.
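The sketch below combines two of these ideas under stated assumptions: a reward-shaping term that penalizes forces approaching a fragility limit, and a fast (~1 kHz) reflexive guard wrapped around a slow (~10 Hz) cognitive planner. `robot`, `estimator`, and `plan_trajectory` are hypothetical interfaces, and the thresholds are illustrative values only.

```python
import numpy as np

F_LIMIT = 2.0   # N; illustrative damage threshold for the handled object
LAMBDA = 10.0   # weight of the safety penalty in the shaped reward

def safe_reward(task_reward, f_ext):
    """Fragility-aware reward shaping: penalize forces nearing the limit."""
    overshoot = max(0.0, float(np.max(np.abs(f_ext))) - 0.8 * F_LIMIT)
    return task_reward - LAMBDA * overshoot

def hierarchical_loop(robot, estimator, plan_trajectory, n_ticks=100_000):
    """Slow cognitive planning with a fast proprioceptive reflex underneath."""
    plan = plan_trajectory(robot.state())             # cognitive layer (slow, long horizon)
    for tick in range(n_ticks):                       # reflexive layer (~1 kHz)
        f_ext = estimator.update(*robot.proprioception())
        if np.max(np.abs(f_ext)) > 0.8 * F_LIMIT:     # reflex fires before damage occurs
            robot.hold_impedance(stiffness=0.1)       # soften and stop motion immediately
            plan = plan_trajectory(robot.state())     # then replan deliberatively
            continue
        robot.track(plan.setpoint(tick))              # nominal execution of the plan
```

The reflex acts within one control tick on proprioceptive evidence alone, while the cognitive layer absorbs the latency of replanning, mirroring the reflexive/cognitive split this review uses throughout.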
7. Rethinking Fragile DOM: Key Challenges and a Path Forward
7.1. The Primary Challenge: Defining and Modeling Fragility
- Global Fragility: Some objects, such as glass rods or thin sheets, exhibit fragility thresholds determined by cumulative stresses from all interactions. Existing approaches that estimate global stresses often focus on force/torque balance but rarely incorporate long-term fatigue or stress accumulation during extended manipulation tasks (one classical formalization of cumulative damage is sketched after this list).
- Local Fragility: For objects like soft tissues or brittle composites, damage may result from localized forces concentrated at specific points of contact. Current tactile- and force/torque-sensing systems are limited in detecting and predicting these localized risks, especially without detailed internal stress models.
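As one hedged example of how cumulative, global fragility could enter a planner or monitor (a classical fatigue law, not a model proposed in the works cited above), the Palmgren–Miner rule accumulates damage linearly across stress levels:

```latex
% Palmgren--Miner linear damage accumulation: failure is predicted when
D \;=\; \sum_{i} \frac{n_i}{N_i} \;\geq\; 1
```

where $n_i$ counts the loading cycles experienced at stress level $\sigma_i$ and $N_i$ is the number of cycles to failure at that level. Local fragility, by contrast, reduces to a pointwise check $\max_{\text{contact}} \sigma \le \sigma_{\text{allow}}$, which demands exactly the localized stress estimates that current tactile and force/torque sensing struggle to provide.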
7.2. The Need for Predictive and Adaptive Models
7.3. The Disconnect Between Deliberative Planning and Real-Time Control
7.4. A Path Forward: Close the Perception, Modeling, Planning, and Control Loop via Proprioception
8. Conclusions
- The choice of hardware morphology, from rigid to soft grippers, establishes a baseline of passive safety that fundamentally shapes the subsequent control challenges.
- A primary limitation in current systems is the cognitive–reflexive gap—a disconnect between slow, deliberative planners and fast, reactive controllers that hinders real-time adaptation.
- External sensing modalities like vision and touch, while critical, are often insufficient without being grounded by high-bandwidth proprioception, which provides the direct causal link between action and consequence.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Yin, H.; Varava, A.; Kragic, D. Modeling, learning, perception, and control methods for deformable object manipulation. Sci. Robot. 2021, 6, eabd8803.
2. Kang, H.; Im, S.; Jo, J.; Kang, M.S. Enhancing recycling efficiency: A rapid glass bottle sorting gripper. Robot. Auton. Syst. 2024, 174, 104647.
3. Kragic, D.; Björkman, M.; Christensen, H.I.; Eklundh, J.O. Vision for robotic object manipulation in domestic settings. Robot. Auton. Syst. 2005, 52, 85–100.
4. Lee, Y.; Virgala, I.; Sadati, S.H.; Falotico, E. Design, modeling and control of kinematically redundant robots. Front. Robot. AI 2024, 11, 1399217.
5. Ishikawa, R.; Hamaya, M.; Von Drigalski, F.; Tanaka, K.; Hashimoto, A. Learning by breaking: Food fracture anticipation for robotic food manipulation. IEEE Access 2022, 10, 99321–99329.
6. Li, Y.; Bly, R.; Akkina, S.; Qin, F.; Saxena, R.C.; Humphreys, I.; Whipple, M.; Moe, K.; Hannaford, B. Learning surgical motion pattern from small data in endoscopic sinus and skull base surgeries. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 7751–7757.
7. De, S.; Rosen, J.; Dagan, A.; Hannaford, B.; Swanson, P.; Sinanan, M. Assessment of tissue damage due to mechanical stresses. Int. J. Robot. Res. 2007, 26, 1159–1171.
8. Li, Y.; Konuthula, N.; Humphreys, I.M.; Moe, K.; Hannaford, B.; Bly, R. Real-time virtual intraoperative CT in endoscopic sinus surgery. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 249–260.
9. Gu, F.; Zhou, Y.; Wang, Z.; Jiang, S.; He, B. A Survey on Robotic Manipulation of Deformable Objects: Recent Advances, Open Challenges and New Frontiers. arXiv 2023.
10. Zhu, J.; Cherubini, A.; Dune, C.; Navarro-Alarcon, D.; Alambeigi, F.; Berenson, D.; Ficuciello, F.; Harada, K.; Kober, J.; Li, X.; et al. Challenges and Outlook in Robotic Manipulation of Deformable Objects. arXiv 2021.
11. Jiménez, P. Survey on model-based manipulation planning of deformable objects. Robot. Comput. Integr. Manuf. 2012, 28, 154–163.
12. Herguedas, R.; López-Nicolás, G.; Aragüés, R.; Sagüés, C. Survey on multi-robot manipulation of deformable objects. In Proceedings of the 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Zaragoza, Spain, 10–13 September 2019; pp. 977–984.
13. Arriola-Rios, V.E.; Guler, P.; Ficuciello, F.; Kragic, D.; Siciliano, B.; Wyatt, J.L. Modeling of Deformable Objects for Robotic Manipulation: A Tutorial and Review. Front. Robot. AI 2020, 7, 82.
14. Kadi, H.A.; Terzić, K. Data-Driven Robotic Manipulation of Cloth-like Deformable Objects: The Present, Challenges and Future Prospects. Sensors 2023, 23, 2389.
15. Blanco-Mulero, D.; Dong, Y.; Borras, J.; Pokorny, F.T.; Torras, C. T-DOM: A Taxonomy for Robotic Manipulation of Deformable Objects. arXiv 2024.
16. Sanchez, J.; Mohy El Dine, K.; Corrales, J.A.; Bouzgarrou, B.C.; Mezouar, Y. Blind Manipulation of Deformable Objects Based on Force Sensing and Finite Element Modeling. Front. Robot. AI 2020, 7, 73.
17. Gorniak, S.L.; Zatsiorsky, V.M.; Latash, M.L. Manipulation of a fragile object. Exp. Brain Res. 2010, 202, 413–430.
18. Tuthill, J.C.; Azim, E. Proprioception. Curr. Biol. 2018, 28, R194–R203.
19. Li, Y. Trends in Control and Decision-Making for Human-Robot Collaboration Systems. IEEE Control Syst. Mag. 2019, 39, 101–103.
20. Hogan, F.R.; Ballester, J.; Dong, S.; Rodriguez, A. Tactile dexterity: Manipulation primitives with tactile feedback. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France (virtual), 31 May–31 August 2020; pp. 8863–8869.
21. Qi, Y.; Jin, L.; Li, H.; Li, Y.; Liu, M. Discrete Computational Neural Dynamics Models for Solving Time-Dependent Sylvester Equations with Applications to Robotics and MIMO Systems. IEEE Trans. Ind. Inform. 2020, 16, 6231–6241.
22. Billard, A.; Kragic, D. Trends and challenges in robot manipulation. Science 2019, 364, eaat8414.
23. Wang, Z.; Furuta, H.; Hirai, S.; Kawamura, S. A scooping-binding robotic gripper for handling various food products. Front. Robot. AI 2021, 8, 640805.
24. Zaidi, S.; Maselli, M.; Laschi, C.; Cianchetti, M. Actuation technologies for soft robot grippers and manipulators: A review. Curr. Robot. Rep. 2021, 2, 355–369.
25. Dzedzickis, A.; Petronienė, J.J.; Petkevičius, S.; Bučinskas, V. Soft grippers in robotics: Progress of last 10 years. Machines 2024, 12, 887.
26. Bo, V.; Franco, L.; Turco, E.; Pozzi, M.; Malvezzi, M.; Prattichizzo, D. Design and control of soft-rigid grippers for food handling. In Proceedings of the ICRA 2024 Workshop on Cooking Robotics: Perception and Motion Planning, Yokohama, Japan, 13–17 May 2024.
27. Shintake, J.; Cacucciolo, V.; Floreano, D.; Shea, H. Soft robotic grippers. Adv. Mater. 2018, 30, 1707035.
28. Zhou, Y.; Headings, L.M.; Dapino, M.J. Modeling of fluidic prestressed composite actuators with application to soft robotic grippers. IEEE Trans. Robot. 2022, 38, 2166–2178.
29. Ko, T. A tendon-driven robot gripper with passively switchable underactuated surface and its physics simulation based parameter optimization. IEEE Robot. Autom. Lett. 2020, 5, 5002–5009.
30. Li, H.; Sun, J.; Herrmann, J.M. Beyond jamming grippers: Granular material in robotics. Adv. Robot. 2024, 38, 715–729.
31. Glick, P.; Suresh, S.A.; Ruffatto, D.; Cutkosky, M.; Tolley, M.T.; Parness, A. A soft robotic gripper with gecko-inspired adhesive. IEEE Robot. Autom. Lett. 2018, 3, 903–910.
32. Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles. Sensors 2019, 19, 4357.
33. Ferreira, J.F.; Portugal, D.; Andrada, M.E.; Machado, P.; Rocha, R.P.; Peixoto, P. Sensing and artificial perception for robots in precision forestry: A survey. Robotics 2023, 12, 139.
34. Luo, J.; Zhou, X.; Zeng, C.; Jiang, Y.; Qi, W.; Xiang, K.; Pang, M.; Tang, B. Robotics perception and control: Key technologies and applications. Micromachines 2024, 15, 531.
35. Li, Y.; Zhang, J.; Li, S. STMVO: Biologically inspired monocular visual odometry. Neural Comput. Appl. 2018, 29, 215–225.
36. Elmquist, A.; Negrut, D. Modeling cameras for autonomous vehicle and robot simulation: An overview. IEEE Sens. J. 2021, 21, 25547–25560.
37. Li, J.; Gao, W.; Wu, Y.; Liu, Y.; Shen, Y. High-quality indoor scene 3D reconstruction with RGB-D cameras: A brief review. Comput. Vis. Media 2022, 8, 369–393.
38. Chakravarthi, B.; Verma, A.A.; Daniilidis, K.; Fermuller, C.; Yang, Y. Recent event camera innovations: A survey. In Proceedings of the European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 342–376.
39. Guo, Y.; Jiang, X.; Liu, Y. Deformation control of a deformable object based on visual and tactile feedback. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 675–681.
40. Lee, Y. Three-Dimensional Dense Reconstruction: A Review of Algorithms and Datasets. Sensors 2024, 24, 5861.
41. Miyasaka, M.; Haghighipanah, M.; Li, Y.; Hannaford, B. Hysteresis model of longitudinally loaded cable for cable driven robots and identification of the parameters. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4051–4057.
42. Mandil, W.; Rajendran, V.; Nazari, K.; Ghalamzan-Esfahani, A. Tactile-sensing technologies: Trends, challenges and outlook in agri-food manipulation. Sensors 2023, 23, 7362.
43. Shimonomura, K. Tactile image sensors employing camera: A review. Sensors 2019, 19, 3933.
44. Yousef, H.; Boukallel, M.; Althoefer, K. Tactile sensing for dexterous in-hand manipulation in robotics—A review. Sens. Actuators A Phys. 2011, 167, 171–187.
45. Yuan, W.; Dong, S.; Adelson, E.H. GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors 2017, 17, 2762.
46. Ward-Cherrier, B.; Pestell, N.; Cramphorn, L.; Winstone, B.; Giannaccini, M.E.; Rossiter, J.; Lepora, N.F. The TacTip family: Soft optical tactile sensors with 3D-printed biomimetic morphologies. Soft Robot. 2018, 5, 216–227.
47. Meribout, M.; Takele, N.A.; Derege, O.; Rifiki, N.; El Khalil, M.; Tiwari, V.; Zhong, J. Tactile sensors: A review. Measurement 2024, 238, 115332.
48. Welle, M.C.; Lippi, M.; Lu, H.; Lundell, J.; Gasparri, A.; Kragic, D. Enabling robot manipulation of soft and rigid objects with vision-based tactile sensors. In Proceedings of the 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE), Auckland, New Zealand, 26–29 August 2023; pp. 1–7.
49. Muscolo, G.G.; Fiorini, P. Force–torque sensors for minimally invasive surgery robotic tools: An overview. IEEE Trans. Med. Robot. Bionics 2023, 5, 458–471.
50. Miyasaka, M.; Haghighipanah, M.; Li, Y.; Matheson, J.; Lewis, A.; Hannaford, B. Modeling Cable-Driven Robot With Hysteresis and Cable–Pulley Network Friction. IEEE/ASME Trans. Mechatron. 2020, 25, 1095–1104.
51. Cao, M.Y.; Laws, S.; y Baena, F.R. Six-axis force/torque sensors for robotics applications: A review. IEEE Sens. J. 2021, 21, 27238–27251.
52. Yamaguchi, A.; Atkeson, C.G. Recent progress in tactile sensing and sensors for robotic manipulation: Can we turn tactile sensing into vision? Adv. Robot. 2019, 33, 661–673.
53. Li, Y.; Hannaford, B. Gaussian Process Regression for Sensorless Grip Force Estimation of Cable-Driven Elongated Surgical Instruments. IEEE Robot. Autom. Lett. 2017, 2, 1312–1319.
54. Yi, H.C.; You, Z.H.; Huang, D.S.; Guo, Z.H.; Chan, K.C.; Li, Y. Learning Representations to Predict Intermolecular Interactions on Large-Scale Heterogeneous Molecular Association Network. iScience 2020, 23, 101261.
55. Malassiotis, S.; Strintzis, M. Tracking textured deformable objects using a finite-element mesh. IEEE Trans. Circuits Syst. Video Technol. 1998, 8, 756–774.
56. Schulman, J.; Lee, A.; Ho, J.; Abbeel, P. Tracking deformable objects with point clouds. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1130–1137.
57. Li, H.; Shan, J.; Wang, H. SDFPlane: Explicit Neural Surface Reconstruction of Deformable Tissues. In Proceedings of Medical Image Computing and Computer Assisted Intervention—MICCAI 2024; Linguraru, M.G., Dou, Q., Feragen, A., Giannarou, S., Glocker, B., Lekadir, K., Schnabel, J.A., Eds.; Springer: Cham, Switzerland, 2024; pp. 542–552.
58. Fu, J.; Xiang, C.; Yin, C.; Guo, Y.X.; Yin, Z.Y.; Cheng, H.D.; Sun, X. Basic Principles of Deformed Objects with Methods of Analytical Mechanics. J. Nonlinear Math. Phys. 2024, 31, 57.
59. Gao, K.; Gao, Y.; He, H.; Lu, D.; Xu, L.; Li, J. NeRF: Neural radiance field in 3D vision, a comprehensive review. arXiv 2022, arXiv:2210.00379.
60. Zobeidi, E.; Atanasov, N. A deep signed directional distance function for object shape representation. arXiv 2021, arXiv:2107.11024.
61. Zhou, Y.; Lee, Y. Simultaneous Super-resolution and Depth Estimation for Satellite Images Based on Diffusion Model. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 1–8.
62. Li, Y.; Li, S.; Song, Q.; Liu, H.; Meng, M.Q.H. Fast and robust data association using posterior based approximate joint compatibility test. IEEE Trans. Ind. Inform. 2014, 10, 331–339.
63. Li, Y.; Li, S.; Hannaford, B. A model based recurrent neural network with randomness for efficient control with applications. IEEE Trans. Ind. Inform. 2018, 15, 2054–2063.
64. Herguedas, R.; Sundaram, A.M.; López-Nicolás, G.; Roa, M.A.; Sagüés, C. Adaptive Bayesian optimization for robotic pushing of thin fragile deformable objects. In Proceedings of the Iberian Robotics Conference; Springer: Berlin/Heidelberg, Germany, 2023; pp. 351–362.
65. Ficuciello, F.; Migliozzi, A.; Coevoet, E.; Petit, A.; Duriez, C. FEM-based deformation control for dexterous manipulation of 3D soft objects. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4007–4013.
66. Makiyeh, F. Vision-Based Shape Servoing of Soft Objects Using the Mass-Spring Model. Ph.D. Thesis, Université de Rennes, Rennes, France, 2023.
67. Li, Y.; Li, S.; Hannaford, B. A novel recurrent neural network for improving redundant manipulator motion planning completeness. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 2956–2961.
68. LaValle, S.M.; Kuffner, J.J., Jr. Randomized kinodynamic planning. Int. J. Robot. Res. 2001, 20, 378–400.
69. Li, Y.; Hannaford, B. Soft-obstacle Avoidance for Redundant Manipulators with Recurrent Neural Network. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–6.
70. Salhotra, G.; Liu, I.C.A.; Dominguez-Kuhne, M.; Sukhatme, G.S. Learning deformable object manipulation from expert demonstrations. IEEE Robot. Autom. Lett. 2022, 7, 8775–8782.
71. Matas, J.; James, S.; Davison, A.J. Sim-to-real reinforcement learning for deformable object manipulation. In Proceedings of the Conference on Robot Learning, PMLR, Zürich, Switzerland, 29–31 October 2018; pp. 734–743.
72. Li, Y.; Bly, R.; Whipple, M.; Humphreys, I.; Hannaford, B.; Moe, K. Use endoscope and instrument and pathway relative motion as metric for automated objective surgical skill assessment in skull base and sinus surgery. J. Neurol. Surg. Part B Skull Base 2018, 79, A194.
73. Lee, J.H. Model predictive control: Review of the three decades of development. Int. J. Control. Autom. Syst. 2011, 9, 415–424.
74. Li, Y.; Li, S.; Miyasaka, M.; Lewis, A.; Hannaford, B. Improving Control Precision and Motion Adaptiveness for Surgical Robot with Recurrent Neural Network. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1–6.
75. Mizanoor Rahman, S.; Ikeura, R. Cognition-based variable admittance control for active compliance in flexible manipulation of heavy objects with a power-assist robotic system. Robot. Biomim. 2018, 5, 7.
76. Li, M.; Yin, H.; Tahara, K.; Billard, A. Learning object-level impedance control for robust grasping and dexterous manipulation. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 6784–6791.
77. Lagneau, R.; Krupa, A.; Marchal, M. Automatic shape control of deformable wires based on model-free visual servoing. IEEE Robot. Autom. Lett. 2020, 5, 5252–5259.
78. De Luca, A.; Albu-Schaffer, A.; Haddadin, S.; Hirzinger, G. Collision Detection and Safe Reaction with the DLR-III Lightweight Manipulator Arm. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 1623–1630.
79. Du, Z.; Wang, W.; Yan, Z.; Dong, W.; Wang, W. Variable Admittance Control Based on Fuzzy Reinforcement Learning for Minimally Invasive Surgery Manipulator. Sensors 2017, 17, 844.
80. She, Y.; Wang, S.; Dong, S.; Sunil, N.; Rodriguez, A.; Adelson, E. Cable Manipulation with a Tactile-Reactive Gripper. arXiv 2020.
81. Inceoglu, A.; Ince, G.; Yaslan, Y.; Sariel, S. Failure Detection Using Proprioceptive, Auditory and Visual Modalities. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2491–2496.
82. Zhou, P.; Zheng, P.; Qi, J.; Li, C.; Lee, H.Y.; Duan, A.; Lu, L.; Li, Z.; Hu, L.; Navarro-Alarcon, D. Reactive human–robot collaborative manipulation of deformable linear objects using a new topological latent control model. Robot. Comput. Integr. Manuf. 2024, 88, 102727.
83. Patni, S.P.; Stoudek, P.; Chlup, H.; Hoffmann, M. Online elasticity estimation and material sorting using standard robot grippers. Int. J. Adv. Manuf. Technol. 2024, 132, 6033–6051.
84. Gutierrez-Giles, A.; Padilla-Castañeda, M.A.; Alvarez-Icaza, L.; Gutierrez-Herrera, E. Force-Sensorless Identification and Classification of Tissue Biomechanical Parameters for Robot-Assisted Palpation. Sensors 2022, 22, 8670.
85. Chen, P.Y.; Liu, C.; Ma, P.; Eastman, J.; Rus, D.; Randle, D.; Ivanov, Y.; Matusik, W. Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction. arXiv 2025.
86. Kaboli, M.; Yao, K.; Cheng, G. Tactile-based manipulation of deformable objects with dynamic center of mass. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancún, Mexico, 15–17 November 2016; pp. 752–757.
87. Kwiatkowski, J.; Cockburn, D.; Duchaine, V. Grasp stability assessment through the fusion of proprioception and tactile signals using convolutional neural networks. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 286–292.
88. Blanco-Mulero, D.; Alcan, G.; Abu-Dakka, F.J.; Kyrki, V. QDP: Learning to sequentially optimise quasi-static and dynamic manipulation primitives for robotic cloth manipulation. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 984–991.
89. Hietala, J.; Blanco-Mulero, D.; Alcan, G.; Kyrki, V. Learning Visual Feedback Control for Dynamic Cloth Folding. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 1455–1462.
90. Elbrechter, C.; Haschke, R.; Ritter, H. Folding paper with anthropomorphic robot hands using real-time physics-based modeling. In Proceedings of the 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), Osaka, Japan, 29 November–1 December 2012; pp. 210–215.
91. Kim, S.C.; Ryu, S. Robotic Kinesthesia: Estimating Object Geometry and Material With Robot’s Haptic Senses. IEEE Trans. Haptics 2024, 17, 998–1005.
92. Mitsioni, I.; Karayiannidis, Y.; Stork, J.A.; Kragic, D. Data-Driven Model Predictive Control for the Contact-Rich Task of Food Cutting. In Proceedings of the 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), Toronto, ON, Canada, 15–17 October 2019; pp. 244–250.
93. Gemici, M.C.; Saxena, A. Learning haptic representation for manipulating deformable food objects. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 638–645.
94. Bednarek, M.; Kicki, P.; Bednarek, J.; Walas, K. Gaining a Sense of Touch. Object Stiffness Estimation Using a Soft Gripper and Neural Networks. Electronics 2021, 10, 96.
95. Yao, S.; Hauser, K. Estimating Tactile Models of Heterogeneous Deformable Objects in Real Time. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 12583–12589.
96. Lee, M.A.; Zhu, Y.; Zachares, P.; Tan, M.; Srinivasan, K.; Savarese, S.; Fei-Fei, L.; Garg, A.; Bohg, J. Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks. arXiv 2019.
97. Li, Y.; Miyasaka, M.; Haghighipanah, M.; Cheng, L.; Hannaford, B. Dynamic modeling of cable driven elongated surgical instruments for sensorless grip force estimation. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4128–4134.
98. Khalil, F.; Payeur, P.; Cretu, A.M. Integrated Multisensory Robotic Hand System for Deformable Object Manipulation. In Proceedings of the IASTED Technology Conferences/705: ARP/706: RA/707: NANA/728: CompBIO, Cambridge, MA, USA, 1–3 November 2010.
99. Caldwell, T.M.; Coleman, D.; Correll, N. Optimal parameter identification for discrete mechanical systems with application to flexible object manipulation. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 898–905.
100. Mazhitov, A.; Adilkhanov, A.; Massalim, Y.; Kappassov, Z.; Varol, H.A. Deformable Object Recognition Using Proprioceptive and Exteroceptive Tactile Sensing. In Proceedings of the 2019 IEEE/SICE International Symposium on System Integration (SII), Paris, France, 14–16 January 2019; pp. 734–739.
101. Yong, S.; Chapman, J.; Aw, K. Soft and flexible large-strain piezoresistive sensors: On implementing proprioception, object classification and curvature estimation systems in adaptive, human-like robot hands. Sens. Actuators A Phys. 2022, 341, 113609.
102. Cretu, A.M.; Payeur, P.; Petriu, E.M. Soft Object Deformation Monitoring and Learning for Model-Based Robotic Hand Manipulation. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2012, 42, 740–753.
103. Rostamian, B.; Koolani, M.; Abdollahzade, P.; Lankarany, M.; Falotico, E.; Amiri, M.; Thakor, N.V. Texture recognition based on multi-sensory integration of proprioceptive and tactile signals. Sci. Rep. 2022, 12, 21690.
104. Oller, M.; i Lisbona, M.P.; Berenson, D.; Fazeli, N. Manipulation via membranes: High-resolution and highly deformable tactile sensing and control. In Proceedings of the Conference on Robot Learning (CoRL 2023), PMLR, Atlanta, GA, USA, 6–9 November 2023; pp. 1850–1859.
105. Chen, L.; Lu, W.; Zhang, K.; Zhang, Y.; Zhao, L.; Zheng, Y. TossNet: Learning to Accurately Measure and Predict Robot Throwing of Arbitrary Objects in Real Time with Proprioceptive Sensing. IEEE Trans. Robot. 2024, 40, 3232–3251.
106. Luo, S.; Mou, W.; Althoefer, K.; Liu, H. iCLAP: Shape recognition by combining proprioception and touch sensing. Auton. Robot. 2019, 43, 993–1004.
107. Sipos, A.; Fazeli, N. MultiSCOPE: Disambiguating In-Hand Object Poses with Proprioception and Tactile Feedback. In Proceedings of Robotics: Science and Systems XIX, Daegu, Republic of Korea, 10–14 July 2023.
108. King, D.; Adidharma, L.; Peng, H.; Moe, K.; Li, Y.; Yang, Z.; Young, C.; Ferreria, M.; Humphreys, I.; Abuzeid, W.M.; et al. Automatic summarization of endoscopic skull base surgical videos through object detection and hidden Markov modeling. Comput. Med. Imaging Graph. 2023, 108, 102248.
109. Li, Y. Deep Causal Learning for Robotic Intelligence. Front. Neurorobot. 2023, 17, 1128591.
| Reference | Focus Area | Modalities | Noted Limitations |
|---|---|---|---|
| Gu et al. (2023) [9] | General review of DOM; data-driven and hybrid methods | Vision, tactile, and force | Limited mention of proprioception; minimal focus on fusion |
| Zhu et al. (2021) [10] | Challenges and future directions in DOM | Vision, force, and tactile | Suggests multi-modal fusion but without deep implementation details |
| Jiménez (2012) [11] | Model-based manipulation planning | Mostly modeling | Little discussion of sensing modalities |
| Herguedas et al. (2019) [12] | Multi-robot systems for DOM | Vision and force | Limited on tactile and proprioception; focuses on coordination |
| Arriola-Rios et al. (2020) [13] | Modeling of deformable objects for robotic manipulation | Vision and force | Focuses on object modeling; less discussion of action planning and multi-modal fusion |
| Yin et al. (2021) [1] | Modeling, learning, perception, and control methods | Vision and tactile | Briefly mentions force; lacks multi-modal integration |
| Kadi and Terzić (2023) [14] | Data-driven approaches for cloth-like deformables | Vision and tactile | Discusses challenges but does not cover proprioception deeply |
| Blanco-Mulero et al. (2024) [15] | Proposed taxonomy (T-DOM) for deformable manipulation tasks | Vision, force, and tactile | High-level categorization; not focused on sensing strategies |
| Sanchez et al. (2018) [16] | Robotic manipulation and sensing of deformable objects in domestic and industrial applications | Vision, force, and tactile | Broad classification across object types and tasks; limited depth on sensor-fusion strategies and minimal focus on proprioception |
| Feature | Rigid Grippers | Soft Grippers |
|---|---|---|
| Passive Safety | Low. Lacks compliance, creating a higher risk of damage from unexpected contact or control errors. | High. Inherently compliant materials absorb impacts and prevent force spikes, making interactions safer. |
| Force Distribution | Concentrated at a few points, leading to high stress that can easily damage fragile objects. | Distributed over a larger surface area as the gripper conforms to the object’s shape. |
| Shape Adaptability | Low. Requires a precise model of the object’s geometry for a successful grasp. | High. Can passively conform to a wide variety of regular and irregular shapes. |
| Control Complexity | High. Safety is highly dependent on sophisticated, real-time feedback and precise force control. | Lower. The gripper’s morphology handles much of the complexity, reducing the burden on the controller. |
| Precision and Strength | Typically high. Capable of precise positioning and applying significant force. | Generally lower. Precision can be more difficult to model and control; payload-to-weight ratios vary widely. |
| Durability and Sanitation | High. Often made of robust metals or plastics that are durable and easy to sterilize. | Varies. Soft materials can be susceptible to wear, tear, and contamination, posing challenges for some applications. |
| Modality | Advantages for DOM | Limitations for DOM |
|---|---|---|
| Vision | • Global, non-contact sensing of shape and motion | • Highly prone to occlusions • Cannot measure contact forces or internal stress • Poor with transparent or textureless objects |
| Tactile | • High-resolution local data (force, slip, and texture) • High-frequency feedback for fine control | • Sensing area limited to direct contact • Complex or costly to integrate in large arrays |
| Force/Torque | • Measures net interaction force for global control • Excellent for enforcing overall force limits | • Lacks spatial resolution (cannot localize contact) • Sensitive to noise from the robot’s own dynamics |
| Method | Advantages | Disadvantages |
|---|---|---|
| Mesh-based | Real-time collision checks; straightforward to implement | Limited deformation fidelity; mesh artifacts under large strains |
| SDF | Smooth, continuous geometry; precise deformation recovery | High memory footprint; expensive distance queries |
| Mass–spring | Very fast simulation; intuitive parameter tuning | Oversimplified physics; cannot capture complex material behaviors |
| FEM | High-fidelity modeling; supports nonlinear constitutive laws | Computationally intensive; requires mesh generation and parameter tuning |
| Data-driven | Learns from real examples; often real-time inference | Data-hungry; limited interpretability; risk of overfitting and poor generalization |
| Ref. | Sensing Modalities | Control Method | Assigned Loop | Note |
|---|---|---|---|---|
| [78] | Joint torque | Ultra-fast proprioceptive collision detection within the joint servo driver | Spinal Reflex (<50 ms) | Leverages high-frequency torque error thresholds to instantly halt motor commands at sub-millisecond latencies without higher-level inference. |
| [79] | Joint torque | Hybrid variable admittance via fuzzy Sarsa learning | Long-Latency Reflex (50–100 ms) | Adapts admittance gains online based on torque feedback, providing skill-tuned compliance in tens of milliseconds. |
| [80] | GelSight | Parallel PD grip control and LQR pose control on a learned linear model | Long-Latency Reflex (50–100 ms) | Runs lightweight learned models at ∼60–125 Hz on tactile cues to maintain cable alignment without full planning. |
| [81] | Proprioception, vision, and audio | HMM-based multimodal anomaly detection | Long-Latency Reflex (50–100 ms) | Fuses proprioceptive residuals with audio/vision in an HMM to quickly flag failures without deliberation. |
| [82] | RGB-D vision | Topological autoencoder + fixed-time sliding-mode controller (∼20 Hz) | Long-Latency Reflex (50–100 ms) | Provides reflexive shape corrections using low-dimensional latent models for real-time adaptation. |
| [83] | Wrist force/torque | Real-time elasticity estimation from force–position curves | Long-Latency Reflex (50–100 ms) | Infers material properties on the fly to adjust grasp strategies within tens of milliseconds. |
| [84] | Joint positions | Observer for force/velocity estimation + Bayesian parameter classifier | Long-Latency Reflex (50–100 ms) | Uses a state observer on proprioceptive data to infer forces and classify tissue parameters rapidly. |
| [85] | Joint encoder | Differentiable simulation pipeline for inverse parameter identification | Long-Latency Reflex (50–100 ms) | Inverts a differentiable model on high-rate encoder streams to infer mass and elasticity in real time. |
| [86] | Tactile | Slip detection via tangential-force thresholds + immediate position adjustment | Long-Latency Reflex (50–100 ms) | Detects slip through fast tactile thresholds and issues corrective motions to prevent object loss. |
| [87] | Vision, tactile, and encoder | HMM + kernel logistic regression + Bayesian networks | Cognitive (>100 ms) | Integrates multi-modal cues with probabilistic learning to predict and replan stable grasps. |
| [88] | Vision | Sequential RL for manipulation-primitive parameters | Cognitive (>100 ms) | Learns high-level parameter sequences for long-horizon cloth tasks via deliberative policy optimization. |
| [89] | Vision | RL with dynamic domain randomization (∼25 fps) | Cognitive (>100 ms) | Trains end-to-end visual policies for cloth folding through deliberative RL. |
| [90] | Vision, proprioception, and tactile | Predefined folding trajectories + sensory feedback | Cognitive (>100 ms) | Uses physics-based modeling and sensory fusion to plan multi-step folding sequences. |
| [91] | Joint torque | Supervised learning on haptic time series for classification | Cognitive (>100 ms) | Trains models on torque signatures to classify geometry/materials and inform high-level planning. |
| [92] | Force and proprioception | MPC with RNN/LSTM dynamics (∼10 Hz) | Cognitive (>100 ms) | Embeds learned RNN dynamics into MPC for deliberative adaptation to varied food properties. |
| [93] | Proprioception and dynamics | SVR on haptic histograms + Monte Carlo–greedy planning | Cognitive (>100 ms) | Builds latent haptic belief models to guide long-horizon manipulation planning. |
| [94] | IMUs | ConvBiLSTM regression on squeeze–release inertial data | Cognitive (>100 ms) | Learns inertial patterns to predict stiffness, informing subsequent manipulation trajectories. |
| [95] | Joint angles | Projected diagonal Kalman filters on spring–voxel models (∼23 Hz) | Cognitive (>100 ms) | Recursively updates voxel-wise stiffness estimates to support planning over object compliance. |
| [96] | RGB, F/T, joint encoder | Self-supervised latent fusion + deep RL | Cognitive (>100 ms) | Trains compact embeddings to improve sample-efficient, deliberative control in contact-rich scenarios. |
P, V, T, and I denote proprioceptive channels (position, velocity, torque/force, and inertial/IMU sensing, respectively); Ta = tactile; Vi = vision.

| Ref. | P | V | T | I | Ta | Vi | Design Philosophy |
|---|---|---|---|---|---|---|---|
| [98] | ✓ | ✓ | ✓ | – | ✓ | ✓ | Real-time fusion for deformable-object modeling and control. |
| [99] | ✓ | – | ✓ | – | – | – | Proprioceptive torque/angle-based identification of flexible-loop spring constants via variational integrators. |
| [93] | ✓ | – | ✓ | – | – | – | Haptic (encoder + effort/F/T) fusion for deformable-food property estimation and planning (no velocity/IMU/tactile). |
| [100] | ✓ | – | ✓ | – | ✓ | – | Fusion of joint encoders and torque sensing with a tactile array for rigid vs. deformable classification (97.5% accuracy). |
| [94] | – | – | – | ✓ | – | – | Deep-learning stiffness regression using only IMU-based inertial proprioception (≤8.7% MAPE). |
| [95] | ✓ | – | ✓ | – | – | ✓ | Real-time volumetric stiffness field estimation from joint torque and optional vision for heterogeneous deformables. |
| [85] | ✓ | – | – | – | – | – | Differentiable simulation for mass and elastic-modulus estimation from joint-encoder signals alone. |
| [101] | ✓ | – | – | – | – | – | Large-strain piezoresistive proprioceptive sensing for single-grasp object shape classification and curvature estimation. |
| [102] | ✓ | – | – | – | ✓ | ✓ | Neural-network-based vision–force fusion for predictive deformable-object modeling (no joint-torque/IMU proprioception). |
| [84] | ✓ | ✓ | – | – | – | – | Sensorless force/velocity estimation from joint positions and commanded torques for biomechanical parameter identification and classification in robotic palpation. |
| [83] | ✓ | – | ✓ | – | – | – | Online elasticity/viscoelasticity estimation from gripper position and F/T sensing for real-time material sorting. |
| [103] | – | ✓ | – | – | ✓ | – | Neuromorphic fusion for speed-invariant texture discrimination. |
| [104] | – | – | ✓ | – | ✓ | – | Learning soft-membrane dynamics from high-resolution tactile geometry and reaction wrenches for real-time dexterous control. |
| [105] | ✓ | ✓ | ✓ | – | – | – | TossNet: real-time trajectory prediction from end-effector poses, velocity, and F/T-based proprioception. |
| [106] | ✓ | – | – | – | ✓ | – | Four-dimensional ICP-based fusion of encoder positions and tactile codebook labels for high-accuracy shape recognition. |
| [107] | ✓ | – | ✓ | – | – | – | Bimanual in-hand object-pose disambiguation via iterative contact probing using only joint-encoder and wrist F/T feedback, refined by dual particle-filter estimation. |
| [91] | – | – | ✓ | – | – | – | Joint-torque-driven classification/regression for simultaneous estimation of object geometry and materials using kinesthetic sensing. |
| [96] | ✓ | ✓ | ✓ | – | – | ✓ | Variational self-supervised fusion of RGB-D, EE pose/velocity, and F/T for RL-based peg insertion (no IMU/tactile arrays). |