Robotic Disassembly of Electrical Cable Connectors: A Critical Review
Abstract
1. Introduction
2. Research Background
- Unknown number and type of ECCs. Unless prior knowledge of the product is available, the number and types of connectors cannot be determined in advance. In e-waste disassembly, information such as CAD models or component lists is usually accessible only when working with specific, well-documented products [18].
- High variability in shapes and dimensions. Connector standards such as DIN 41612 for PCB connectors and IEC 61076 for circular connectors provide structured specifications; however, significant variability remains due to diverse application requirements and manufacturer-specific designs. Since it is infeasible to compile exhaustive information on all existing ECCs, autonomous disassembly systems should not rely on precise shape models to detect connectors unless the processed products are known in advance.
- Unpredictable condition of ECCs. Detection is further complicated by the fact that connectors in e-waste often suffer from typical end-of-life issues such as damage, dirt, or occlusion by other components.
- Lack of training datasets. To the best of the authors’ knowledge, there are currently no publicly available ECC training datasets for deep learning-based detectors. As a result, several researchers have developed their own datasets for direct ECC detection [31,32], wire harness detection [33], or ECC orientation estimation [34].
- Variety of locking mechanisms. As shown in Figure 3, several types of locking mechanisms exist, and each requires a different unlocking strategy (see Table 1). Unlocking may involve twisting, pulling, pushing specific features, or a combination of these actions. As demonstrated by Zang et al. [35], it is possible to design suitable grippers for connectors with similar locks; however, a general solution has yet to be found.
- Designing a universal end-effector. While each connector could be disassembled with a specialized tool, the required number of tools would be impractical. A universal end-effector for connector disassembly should rely on articulated grippers with several degrees of freedom (DoF) [36], enabling the system to reach, orient, and manipulate connectors in constrained or awkward positions. Compliance may also be beneficial to handle small misalignments and reduce the risk of damage.
- Spatial constraints. Connectors are often embedded in dense assemblies, leaving little room to manipulate locking mechanisms without damaging nearby components. Thus, grippers must be as compact as possible while still being equipped with active joints. Inspiration for such designs may come from other domains, such as soft robotics and bio-inspired mechanisms [37].
- Unknown mechanical properties. Factors such as extraction force, elasticity, tensile strength, and impact resistance are rarely documented but are crucial to avoid damaging physical interfaces of reusable parts, especially board-mounted connectors.
3. Research Method
4. Results
4.1. Detection
- Two-stage detectors first generate a set of candidate object regions on the processed image and then, in a second stage, classify and refine these proposals. These models typically have a deeper and more complex architecture, often combining a powerful backbone network (e.g., ResNet [44] or VGG [45] for feature extraction) with region proposal and classification heads that include multiple convolutional and fully connected layers. The added architectural depth and modular design generally improve accuracy and robustness, particularly for small or partially occluded objects, but increase computational cost and inference time. For these reasons, two-stage detectors are commonly adopted in inspection and defect-detection tasks where precision is prioritized over speed. For instance, Zhang and Shen [46] employ an improved Faster R-CNN [47] model to locate and classify solder-joint defects on connector pins, while Calabrese et al. [48] use Mask R-CNN [49] for printed circuit board defect detection. The most widely used networks of this class include R-CNN [50], Fast R-CNN [51], Faster R-CNN, and Mask R-CNN.
- One-stage detectors directly predict object bounding boxes and class probabilities in a single forward pass over the image, without a separate region proposal stage. Architecturally, these models are shallower and more streamlined, often relying on lightweight backbones (e.g., Darknet [52], MobileNet [53]) and feature pyramid or multi-scale prediction layers to balance accuracy and speed. Their compact design allows real-time inference and makes them well suited for embedded or resource-constrained systems, albeit sometimes at the expense of detection accuracy. Due to these advantages, one-stage detectors are widely used for connector and component detection tasks. For example, De Gregorio et al. [54] employ YOLO [55] for wire terminal detection, achieving 88% mean average precision (mAP) after fine-tuning a model pre-trained on ImageNet with 5000 workspace images. Other studies adopt YOLO and its variants or MobileNet-SSD architectures for similar applications [33,56,57]. Although most of these works use images with plain backgrounds, the ability of such models to detect objects in complex scenes has been extensively demonstrated in the literature [58].
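Both detector families ultimately score candidate boxes and prune overlapping ones. The minimal sketch below implements the two primitives that recur throughout such pipelines: intersection-over-union (IoU) box overlap and greedy non-maximum suppression (NMS). It is illustrative only; the corner box format `[x1, y1, x2, y2]`, the function names, and the threshold are our assumptions rather than details taken from any cited implementation.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and discard remaining boxes overlapping it above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Three detections of two connectors: the first two boxes overlap heavily,
# so NMS keeps only the higher-scoring one plus the distinct third box.
boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

In a two-stage pipeline, the same IoU primitive also serves to assign class labels to region proposals during training; production detectors use vectorized, class-aware variants of both routines.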
4.2. Pose Estimation
4.3. Accessibility Evaluation and Motion Planning
4.4. Connector Manipulation
4.5. Extraction
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kiddee, P.; Naidu, R.; Wong, M.H. Electronic Waste Management Approaches: An Overview. Waste Manag. 2013, 33, 1237–1250. [Google Scholar] [CrossRef]
- Baldé, C.P.; Kuehr, R.; Yamamoto, T.; McDonald, R.; D’Angelo, E.; Althaf, S.; Bel, G.; Deubzer, O.; Fernandez-Cubillo, E.; Forti, V.; et al. The Global E-Waste Monitor 2024; Technical Report; International Telecommunication Union (ITU) and United Nations Institute for Training and Research (UNITAR): Geneva, Switzerland; Bonn, Germany, 2024. [Google Scholar]
- Vaccari, M.; Vinti, G.; Cesaro, A.; Belgiorno, V.; Salhofer, S.; Dias, M.I.; Jandric, A. WEEE Treatment in Developing Countries: Environmental Pollution and Health Consequences—An Overview. Int. J. Environ. Res. Public Health 2019, 16, 1595. [Google Scholar] [CrossRef]
- International Energy Agency (IEA). Global EV Outlook 2025; Technical Report; International Energy Agency (IEA): Paris, France, 2025. [Google Scholar]
- Xu, C.; Dai, Q.; Gaines, L.; Hu, M.; Tukker, A.; Steubing, B. Future Material Demand for Automotive Lithium-Based Batteries. Commun. Mater. 2020, 1, 99. [Google Scholar] [CrossRef]
- Geissdoerfer, M.; Savaget, P.; Bocken, N.M.P.; Hultink, E.J. The Circular Economy – A New Sustainability Paradigm? J. Clean. Prod. 2017, 143, 757–768. [Google Scholar] [CrossRef]
- Potting, J.; Hekkert, M.; Worrell, E.; Hanemaaijer, A. Circular Economy: Measuring Innovation In The Product Chain; Policy Report; PBL Netherlands Environmental Assessment Agency: The Hague, The Netherlands, 2017. [Google Scholar]
- Kouloumpis, V.; Konstantzos, G.E.; Chroni, C.; Abeliotis, K.; Lasaridi, K. Does the Circularity End Justify the Means? A Life Cycle Assessment of Preparing Waste Electrical and Electronic Equipment for Reuse. Sustain. Prod. Consum. 2023, 41, 291–304. [Google Scholar] [CrossRef]
- Priyono, A.; Ijomah, W.; Bititci, U. Disassembly for Remanufacturing: A Systematic Literature Review, New Model Development and Future Research Needs. J. Ind. Eng. Manag. 2016, 9, 899–932. [Google Scholar] [CrossRef]
- Li, J.; Barwood, M.; Rahimifard, S. Robotic Disassembly for Increased Recovery of Strategically Important Materials from Electrical Vehicles. Robot. Comput.-Integr. Manuf. 2018, 50, 203–212. [Google Scholar] [CrossRef]
- Sterkens, W.; Abdelbaky, M.; Peeters, J. Assessing the Risk and Disassembly Complexity of Battery-Powered WEEE. In Proceedings of the 2024 International Conference on Electronics Goes Green 2024+, EGG 2024, Berlin, Germany, 18–20 June 2024; pp. 1–9. [Google Scholar] [CrossRef]
- Bugryniec, P.J.; Resendiz, E.G.; Nwophoke, S.M.; Khanna, S.; James, C.; Brown, S.F. Review of Gas Emissions from Lithium-Ion Battery Thermal Runaway Failure—Considering Toxic and Flammable Compounds. J. Energy Storage 2024, 87, 111288. [Google Scholar] [CrossRef]
- Grant, K.; Goldizen, F.C.; Sly, P.D.; Brune, M.N.; Neira, M.; van den Berg, M.; Norman, R.E. Health Consequences of Exposure to E-Waste: A Systematic Review. Lancet Glob. Health 2013, 1, e350–e361. [Google Scholar] [CrossRef]
- Adusei, A.; Arko-Mensah, J.; Dzodzomenyo, M.; Stephens, J.; Amoabeng, A.; Waldschmidt, S.; Löhndorf, K.; Agbeko, K.; Takyi, S.; Kwarteng, L.; et al. Spatiality in Health: The Distribution of Health Conditions Associated with Electronic Waste Processing Activities at Agbogbloshie, Accra. Ann. Glob. Health 2020, 86, 31. [Google Scholar] [CrossRef]
- Rastegarpanah, A.; Ahmeid, M.; Marturi, N.; Attidekou, P.S.; Musbahu, M.; Ner, R.; Lambert, S.; Stolkin, R. Towards Robotizing the Processes of Testing Lithium-Ion Batteries. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2021, 235, 1309–1325. [Google Scholar] [CrossRef]
- Wang, K.; Li, X.; Gao, L.; Li, P. Modeling and Balancing for Green Disassembly Line Using Associated Parts Precedence Graph and Multi-objective Genetic Simulated Annealing. Int. J. Precis. Eng. Manuf.—Green Technol. 2021, 8, 1597–1613. [Google Scholar] [CrossRef]
- Li, R.; Pham, D.; Huang, J.; Tan, Y.; Qu, M.; Wang, Y.; Kerin, M.; Jiang, K.; Su, S.; Ji, C.; et al. Unfastening of Hexagonal Headed Screws by a Collaborative Robot. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1455–1468. [Google Scholar] [CrossRef]
- Saenz, J.; Felsch, T.; Walter, C.; König, T.; Poenicke, O.; Bayrhammer, E.; Vorbröcker, M.; Berndt, D.; Elkmann, N.; Arlinghaus, J. Automated Disassembly of E-Waste—Requirements on Modeling of Processes and Product States. Front. Robot. AI 2024, 11, 1303279. [Google Scholar] [CrossRef] [PubMed]
- Guo, X.; Zhou, M.; Liu, S.; Qi, L. Multiresource-Constrained Selective Disassembly With Maximal Profit and Minimal Energy Consumption. IEEE Trans. Autom. Sci. Eng. 2021, 18, 804–816. [Google Scholar] [CrossRef]
- Kaarlela, T.; Villagrossi, E.; Rastegarpanah, A.; San-Miguel-Tello, A.; Pitkäaho, T. Robotised Disassembly of Electric Vehicle Batteries: A Systematic Literature Review. J. Manuf. Syst. 2024, 74, 901–921. [Google Scholar] [CrossRef]
- Tan, W.; Chin, C.; Garg, A.; Gao, L. A Hybrid Disassembly Framework for Disassembly of Electric Vehicle Batteries. Int. J. Energy Res. 2021, 45, 8073–8082. [Google Scholar] [CrossRef]
- Duan, L.; Li, J.; Bao, J.; Lv, J.; Zheng, H. A MR-Assisted and Scene Perception System for Human-Robot Collaborative Disassembly of Power Batteries. In Proceedings of the 19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, 26–30 August 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Apple. Apple Adds Earth Day Donations to Trade-in and Recycling Program, 2018. Press Release: 2018-04-19. Available online: https://www.apple.com/newsroom/2018/04/apple-adds-earth-day-donations-to-trade-in-and-recycling-program/ (accessed on 16 September 2025).
- Wu, S.; Kaden, N.; Dröder, K. A Systematic Review on Lithium-Ion Battery Disassembly Processes for Efficient Recycling. Batteries 2023, 9, 297. [Google Scholar] [CrossRef]
- Qu, M.; Pham, D.; Altumi, F.; Gbadebo, A.; Hartono, N.; Jiang, K.; Kerin, M.; Lan, F.; Micheli, M.; Xu, S.; et al. Robotic Disassembly Platform for Disassembly of a Plug-In Hybrid Electric Vehicle Battery: A Case Study. Automation 2024, 5, 50–67. [Google Scholar] [CrossRef]
- Quattrucci, L.; Ceccarelli, M.; Russo, M. Design and Experiments of a Cutting Robot for Disassembly Tasks. In Proceedings of the 3rd International Workshop IFToMM for Sustainable Development Goals (I4SDG 2025); Carbone, G., Quaglia, G., Eds.; Springer: Cham, Switzerland, 2025; Volume 179, pp. 71–79. [Google Scholar] [CrossRef]
- Assadi, A.; Götz, T.; Gebhardt, A.; Mannuß, O.; Meese, B.; Wanner, J.; Singha, S.; Halt, L.; Birke, P.; Sauer, A. Automated Disassembly of Battery Systems to Battery Modules. Procedia CIRP 2024, 122, 25–30. [Google Scholar] [CrossRef]
- Moher, D.; Shamseer, L.; Clarke, M.; Ghersi, D.; Liberati, A.; Petticrew, M.; Shekelle, P.; Stewart, L.A.; PRISMA-P Group. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 Statement. Syst. Rev. 2015, 4, 1. [Google Scholar] [CrossRef]
- APRE; CDTI. Guiding Notes to Use the TRL Self-Assessment Tool, 2022. European Union, Horizon 2020. Available online: https://horizoneuropencpportal.eu/sites/default/files/2022-12/trl-assessment-tool-guide-final.pdf (accessed on 26 February 2026).
- Pecht, M.G.; Kyeong, S. What Is an Electrical Connector? In Electrical Connectors: Design, Manufacture, Test, and Selection; Kyeong, S., Pecht, M.G., Eds.; Wiley: Hoboken, NJ, USA, 2020; pp. 1–15. [Google Scholar] [CrossRef]
- Wang, H.; Johansson, B. Deep Learning-Based Connector Detection for Robotized Assembly of Automotive Wire Harnesses. In Proceedings of the 19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, 26–30 August 2023; pp. 1–8. [Google Scholar] [CrossRef]
- Sadok, D.; Bezerra, D.; Dantas, M.; Reis, G.; Leuchtenberg, P.; Ledebour, C.; Souza, R.; Lins, S.; Marquezini, M.; Kelner, J. Rbot: Development of a Robot-Driven Radio Base Station Maintenance System. Int. J. Intell. Robot. Appl. 2022, 6, 270–287. [Google Scholar] [CrossRef]
- Nguyen, T.; Kim, D.; Lim, H.K.; Yoon, J. Revolutionizing Robotized Assembly for Wire Harness: A 3D Vision-Based Method for Multiple Wire-Branch Detection. J. Manuf. Syst. 2024, 72, 360–372. [Google Scholar] [CrossRef]
- Caporali, A.; Galassi, K.; Berselli, G.; Palli, G. Monocular Estimation of Connector Orientation: Combining Deformable Linear Object Priors and Smooth Angle Classification. In Proceedings of the 2024 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2024, Boston, MA, USA, 15–19 July 2024; pp. 799–804. [Google Scholar] [CrossRef]
- Zang, Y.; Hou, Z.; Pan, M.; Wang, Z.; Cai, H.; Yu, S.; Ren, Y.; Zhao, M. A Robotic Solution to Peg in/out Hole Tasks with Latching Requirements. IEEE Robot. Autom. Lett. 2024, 9, 1357–1364. [Google Scholar] [CrossRef]
- Borras, J.; Heudorfer, R.; Rader, S.; Kaiser, P.; Asfour, T. The KIT Swiss Knife Gripper for Disassembly Tasks: A Multi-Functional Gripper for Bimanual Manipulation with a Single Arm. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018, Madrid, Spain, 1–5 October 2018; pp. 4590–4597. [Google Scholar] [CrossRef]
- Zhao, M.; Tao, Y.; Guo, W.; Ge, Z.; Hu, H.; Yan, Y.; Zou, C.; Wang, G.; Ren, Y. Multifunctional Flexible Magnetic Drive Gripper for Target Manipulation in Complex Constrained Environments. Lab Chip 2024, 24, 2122–2134. [Google Scholar] [CrossRef]
- Kay, I.; Farhad, S.; Mahajan, A.; Esmaeeli, R.; Hashemi, S. Robotic Disassembly of Electric Vehicles’ Battery Modules for Recycling. Energies 2022, 15, 4856. [Google Scholar] [CrossRef]
- Foo, G.; Kara, S.; Pagnucco, M. An Ontology-Based Method for Semi-Automatic Disassembly of Lcd Monitors and Unexpected Product Types. Int. J. Autom. Technol. 2021, 15, 168–181. [Google Scholar] [CrossRef]
- Ohnemüller, G.; Beller, M.; Rosemann, B.; Döpper, F. Disassembly and Its Obstacles: Challenges Facing Remanufacturers of Lithium-Ion Traction Batteries. Processes 2025, 13, 123. [Google Scholar] [CrossRef]
- Yumbla, F.; Abeyabas, M.; Luong, T.; Yi, J.S.; Moon, H. Preliminary Connector Recognition System Based on Image Processing for Wire Harness Assembly Tasks. In Proceedings of the 20th International Conference on Control, Automation and Systems, ICCAS 2020, Busan, Republic of Korea, 13–16 October 2020; pp. 1146–1150. [Google Scholar] [CrossRef]
- Monguzzi, A.; Cella, C.; Zanchettin, A.; Rocco, P. Vision-Based State and Pose Estimation for Robotic Bin Picking of Cables. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023, Detroit, MI, USA, 1–5 October 2023; pp. 3114–3120. [Google Scholar] [CrossRef]
- Tamada, T.; Yamakawa, Y.; Senoo, T.; Ishikawa, M. High-Speed Manipulation of Cable Connector Using a High-Speed Robot Hand. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013, Shenzhen, China, 12–14 December 2013; pp. 1598–1604. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar] [CrossRef]
- Zhang, K.; Shen, H. Solder Joint Defect Detection in the Connectors Using Improved Faster-Rcnn Algorithm. Appl. Sci. 2021, 11, 576. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Calabrese, M.; Agnusdei, L.; Fontana, G.; Papadia, G.; Del Prete, A. Application of Mask R-CNN and YOLOv8 Algorithms for Defect Detection in Printed Circuit Board Manufacturing. Discov. Appl. Sci. 2025, 7, 257. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. In Proceedings of the 16th IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 15th IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Redmon, J. Darknet: Open Source Neural Networks in C, 2013–2016. Available online: https://pjreddie.com/darknet/ (accessed on 5 January 2026).
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
- De Gregorio, D.; Zanella, R.; Palli, G.; Pirozzi, S.; Melchiorri, C. Integration of Robotic Vision and Tactile Sensing for Wire-Terminal Insertion Tasks. IEEE Trans. Autom. Sci. Eng. 2019, 16, 585–598. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Chang, P.; Padir, T. Model-Based Manipulation of Linear Flexible Objects: Task Automation in Simulation and Real World. Machines 2020, 8, 46. [Google Scholar] [CrossRef]
- Zhou, H.; Li, S.; Lu, Q.; Qian, J. A Practical Solution to Deformable Linear Object Manipulation: A Case Study on Cable Harness Connection. In Proceedings of the 5th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2020, Shenzhen, China, 18–21 December 2020; pp. 329–333. [Google Scholar] [CrossRef]
- Diwan, T.; Anirudh, G.; Tembhurne, J.V. Object Detection Using YOLO: Challenges, Architectural Successors, Datasets and Applications. Multimed. Tools Appl. 2023, 82, 9243–9275. [Google Scholar] [CrossRef] [PubMed]
- Ultralytics. Comprehensive Guide to Ultralytics YOLOv5. Available online: https://docs.ultralytics.com/yolov5/ (accessed on 3 January 2026).
- Li, S.; Zheng, P.; Zheng, L. An AR-Assisted Deep Learning-Based Approach for Automatic Inspection of Aviation Connectors. IEEE Trans. Ind. Inform. 2021, 17, 1721–1731. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot Multibox Detector. In Computer Vision—ECCV 2016 (ECCV 2016); Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 21–37. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. In Proceedings of the 16th IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar] [CrossRef]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
- Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. [Google Scholar] [CrossRef] [PubMed]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, 11–17 October 2021; pp. 9992–10002. [Google Scholar] [CrossRef]
- Li, Y.; Wu, C.Y.; Fan, H.; Mangalam, K.; Xiong, B.; Malik, J.; Feichtenhofer, C. MViTv2: Improved Multiscale Vision Transformers for Classification and Detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, 19–24 June 2022; pp. 4794–4804. [Google Scholar] [CrossRef]
- Zorn, M.; Ionescu, C.; Klohs, D.; Zähl, K.; Kisseler, N.; Daldrup, A.; Hams, S.; Zheng, Y.; Offermanns, C.; Flamme, S.; et al. An Approach for Automated Disassembly of Lithium-Ion Battery Packs and High-Quality Recycling Using Computer Vision, Labeling, and Material Characterization. Recycling 2022, 7, 48. [Google Scholar] [CrossRef]
- Yildiz, E.; Renaudo, E.; Hollenstein, J.; Piater, J.; Wörgötter, F. An Extended Visual Intelligence Scheme for Disassembly in Automated Recycling Routines. In Robotics, Computer Vision and Intelligent Systems (ROBOVIS 2020, ROBOVIS 2021); Galambos, P., Kayacan, E., Madani, K., Eds.; Springer: Cham, Switzerland, 2022; Volume 1667, pp. 25–50. [Google Scholar] [CrossRef]
- Kalitsios, G.; Lazaridis, L.; Psaltis, A.; Axenopoulos, A.; Daras, P. Vision-Enhanced System For Human-Robot Disassembly Factory Cells: Introducing A New Screw Dataset. In Proceedings of the 4th International Conference on Robotics and Computer Vision, ICRCV 2022, Wuhan, China, 25–27 September 2022; pp. 204–208. [Google Scholar] [CrossRef]
- Qiao, L.; Veltrup, M.; Mayer, B. Comparison of Mask R-CNN and YOLOv8-seg for Improved Monitoring of the PCB Surface during Laser Cleaning. Sci. Rep. 2025, 15, 17185. [Google Scholar] [CrossRef]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023, Paris, France, 2–6 October 2023; pp. 3992–4003. [Google Scholar] [CrossRef]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models From Natural Language Supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, Virtual, 18–24 July 2021; ML Research Press: Cambridge, MA, USA, 2021; pp. 8748–8763. [Google Scholar]
- Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Jiang, Q.; Li, C.; Yang, J.; Su, H.; et al. Grounding DINO: Marrying DINO with Grounded Pre-training for Open-Set Object Detection. In Computer Vision—ECCV 2024 (ECCV 2024); Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Springer: Cham, Switzerland, 2025; Volume 15105, pp. 38–55. [Google Scholar] [CrossRef]
- Mazzotti, L.; Angelini, M.; Carricato, M. Solving the Wire Loop Game with a Reinforcement-Learning Controller Based on Haptic Feedback. In Proceedings of the 20th IEEE/ASME International Conference on Mechatronic, Embedded Systems and Applications, MESA 2024, Genova, Italy, 2–4 September 2024; pp. 1–8. [Google Scholar] [CrossRef]
- Monguzzi, A.; Zanchettin, A.M.; Rocco, P. Sensorless Robotized Cable Contour Following and Connector Detection. Mechatronics 2024, 97, 103096. [Google Scholar] [CrossRef]
- Caporali, A.; Galassi, K.; Žagar, B.L.; Zanella, R.; Palli, G.; Knoll, A.C. RT-DLO: Real-Time Deformable Linear Objects Instance Segmentation. IEEE Trans. Ind. Inform. 2023, 19, 11333–11342. [Google Scholar] [CrossRef]
- Choi, A.; Tong, D.; Park, B.; Terzopoulos, D.; Joo, J.; Jawed, M.K. mBEST: Realtime Deformable Linear Object Detection Through Minimal Bending Energy Skeleton Pixel Traversals. IEEE Robot. Autom. Lett. 2023, 8, 4863–4870. [Google Scholar] [CrossRef]
- Yu, C.; Wang, J.; Feng, P.; Yu, D.; Zhang, J. CVF-DLO: Cross-Visual-Field Branched Deformable Linear Objects Route Estimation. IEEE Robot. Autom. Lett. 2025, 10, 8332–8339. [Google Scholar] [CrossRef]
- Zürn, M.; Kienzlen, A.; Klingel, L.; Lechler, A.; Verl, A.; Ren, S.; Xu, W. Deep Learning-Based Instance Segmentation for Feature Extraction of Branched Deformable Linear Objects for Robotic Manipulation. In Proceedings of the 19th IEEE International Conference on Automation Science and Engineering, CASE 2023, Auckland, New Zealand, 26–30 August 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Yang, Y.; Stork, J.; Stoyanov, T. Tracking Branched Deformable Linear Objects Using Particle Filtering on Depth Images. In Proceedings of the 20th IEEE International Conference on Automation Science and Engineering, CASE 2024, Bari, Italy, 28 August–1 September 2024; pp. 912–919. [Google Scholar] [CrossRef]
- Lazaros, N.; Sirakoulis, G.C.; Gasteratos, A. Review of Stereo Vision Algorithms: From Software to Hardware. Int. J. Optomechatronics 2008, 2, 435–462. [Google Scholar] [CrossRef]
- Fuchs, S. Multipath Interference Compensation in Time-of-Flight Camera Images. In Proceedings of the 2010 20th International Conference on Pattern Recognition, ICPR 2010, Istanbul, Turkey, 23–26 August 2010; pp. 3583–3586. [Google Scholar] [CrossRef]
- Kong, D.; Zhang, Y.; Dai, W. Direct Near-Infrared-Depth Visual SLAM with Active Lighting. IEEE Robot. Autom. Lett. 2021, 6, 7057–7064. [Google Scholar] [CrossRef]
- Bonifazi, G.; Fiore, L.; Gasbarrone, R.; Palmieri, R.; Serranti, S. Hyperspectral Imaging Applied to WEEE Plastic Recycling: A Methodological Approach. Sustainability 2024, 15, 11345. [Google Scholar] [CrossRef]
- Cui, H.; Hu, Q.; Mao, Q. Real-Time Geometric Parameter Measurement of High-Speed Railway Fastener Based on Point Cloud from Structured Light Sensors. Sensors 2018, 18, 3675. [Google Scholar] [CrossRef]
- Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
- Zheng, L.; Mai, C.; Liao, W.; Wen, Y.; Liu, G. 3D point cloud registration for apple tree based on Kinect camera. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2016, 47, 9–14. [Google Scholar] [CrossRef]
- Yang, H.; Shi, J.; Carlone, L. Teaser: Fast and Certifiable Point Cloud Registration. IEEE Trans. Robot. 2021, 37, 314–333. [Google Scholar] [CrossRef]
- Ying, C.; Mo, Y.; Matsuura, Y.; Yamazaki, K. Pose Estimation of a Small Connector Attached to the Tip of a Cable Sticking Out of a Circuit Board. Int. J. Autom. Technol. 2022, 16, 208–217. [Google Scholar] [CrossRef]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 5100–5109. [Google Scholar] [CrossRef]
- Caporali, A.; Galassi, K.; Palli, G. Deformable Linear Objects 3D Shape Estimation and Tracking From Multiple 2D Views. IEEE Robot. Autom. Lett. 2023, 8, 3851–3858. [Google Scholar] [CrossRef]
- Mou, F.; Ren, H.; Wang, B.; Wu, D. Pose Estimation and Robotic Insertion Tasks Based on YOLO and Layout Features. Eng. Appl. Artif. Intell. 2022, 114, 105164. [Google Scholar] [CrossRef]
- Duncan, K.; Sarkar, S.; Alqasemi, R.; Dubey, R. Multi-Scale Superquadric Fitting for Efficient Shape and Pose Recovery of Unknown Objects. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, ICRA 2013, Karlsruhe, Germany, 6–10 May 2013; pp. 4238–4243. [Google Scholar] [CrossRef]
- Makhal, A.; Thomas, F.; Alba Perez, G. Grasping Unknown Objects in Clutter by Superquadric Representation. In Proceedings of the 2nd IEEE International Conference on Robotic Computing, IRC 2018, Laguna Hills, CA, USA, 31 January–2 February 2018; pp. 292–299. [Google Scholar] [CrossRef]
- Gebauer, D.; Roith, A.; Dirr, J.; Daub, R. Robot-Based, Sensitive Mating of Electrical Connectors Using Automatically Designed Gripper Jaws. Procedia CIRP 2024, 130, 861–866. [Google Scholar] [CrossRef]
- Wu, K.; Chen, R.; Chen, Q.; Li, W. Robotic Assembly of Deformable Linear Objects via Curriculum Reinforcement Learning. IEEE Robot. Autom. Lett. 2025, 10, 4770–4777. [Google Scholar] [CrossRef]
- Liang, J.; Buzzatto, J.; Busby, B.; Jiang, H.; Matsunaga, S.; Haraguchi, R.; Toshisada, M.; Macdonald, B.; Liarokapis, M. On Robust Assembly of Flexible Flat Cables Combining CAD and Image Based Multiview Pose Estimation and a Multimodal Robotic Gripper. IEEE Open J. Ind. Electron. Soc. 2024, 5, 1104–1114. [Google Scholar] [CrossRef]
- Hartisch, R.; Haninger, K. High-Speed Electrical Connector Assembly by Structured Compliance in a Finray-Effect Gripper. IEEE/ASME Trans. Mechatronics 2024, 29, 810–819. [Google Scholar] [CrossRef]
- Li, N.; Ma, X.; Zhang, C.; Zou, H.; Li, F. Toolkit for Dynamic Control Rapid Prototype Simulation System of Robots Applied in Space Experimental Cabin. In Intelligent Robotics and Applications (ICIRA 2021); Liu, X., Nie, Z., Yu, J., Xie, F., Song, R., Eds.; Springer: Cham, Switzerland, 2021; Volume 13016, pp. 417–427. [Google Scholar] [CrossRef]
- Song, H.C.; Kim, Y.L.; Lee, D.H.; Song, J.B. Electric Connector Assembly Based on Vision and Impedance Control Using Cable Connector-Feeding System. J. Mech. Sci. Technol. 2017, 31, 5997–6003. [Google Scholar] [CrossRef]
- Ortner, M.; Gadringer, S.; Gattringer, H.; Mueller, A.; Naderer, R. Automatized Insertion of Multipolar Electric Plugs by Means of Force Controlled Industrial Robots. In Proceedings of the 25th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2020, Vienna, Austria, 8–11 September 2020; Volume 1, pp. 1465–1472. [Google Scholar] [CrossRef]
- Zacharias, F.; Borst, C.; Hirzinger, G. Capturing Robot Workspace Structure: Representing Robot Capabilities. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2007, San Diego, CA, USA, 29 October–2 November 2007; pp. 3229–3236. [Google Scholar] [CrossRef]
- Vahrenkamp, N.; Asfour, T.; Dillmann, R. Robot Placement Based on Reachability Inversion. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, ICRA 2013, Karlsruhe, Germany, 6–10 May 2013; pp. 1970–1975. [Google Scholar] [CrossRef]
- Akinola, I.; Varley, J.; Chen, B.; Allen, P.K. Workspace Aware Online Grasp Planning. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2018, Madrid, Spain, 1–5 October 2018; pp. 2917–2924. [Google Scholar] [CrossRef]
- Jang, H.Y.; Moradi, H.; Le Minh, P.; Lee, S.; Han, J. Visibility-Based Spatial Reasoning for Object Manipulation in Cluttered Environments. Comput.-Aided Des. 2008, 40, 422–438. [Google Scholar] [CrossRef]
- Azizi, V.; Kimmel, A.; Bekris, K.; Kapadia, M. Geometric Reachability Analysis for Grasp Planning in Cluttered Scenes for Varying End-Effectors. In Proceedings of the 13th IEEE Conference on Automation Science and Engineering, CASE 2017, Xi’an, China, 20–23 August 2017; pp. 764–769. [Google Scholar] [CrossRef]
- Kavraki, L.E.; Švestka, P.; Latombe, J.C.; Overmars, M.H. Probabilistic Roadmaps for Path Planning in High-Dimensional Configuration Spaces. IEEE Trans. Robot. Autom. 1996, 12, 566–580. [Google Scholar] [CrossRef]
- Karaman, S.; Frazzoli, E. Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Robot. Res. 2011, 30, 846–894. [Google Scholar] [CrossRef]
- Ratliff, N.; Zucker, M.; Andrew Bagnell, J.; Srinivasa, S. CHOMP: Gradient Optimization Techniques for Efficient Motion Planning. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, ICRA 2009, Kobe, Japan, 12–17 May 2009; pp. 489–494. [Google Scholar] [CrossRef]
- Schulman, J.; Duan, Y.; Ho, J.; Lee, A.; Awwal, I.; Bradlow, H.; Pan, J.; Patil, S.; Goldberg, K.; Abbeel, P. Motion Planning with Sequential Convex Optimization and Convex Collision Checking. Int. J. Robot. Res. 2014, 33, 1251–1270. [Google Scholar] [CrossRef]
- Ichter, B.; Harrison, J.; Pavone, M. Learning Sampling Distributions for Robot Motion Planning. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, 21–25 May 2018; pp. 7087–7094. [Google Scholar] [CrossRef]
- Qureshi, A.H.; Miao, Y.; Simeonov, A.; Yip, M.C. Motion Planning Networks: Bridging the Gap between Learning-Based and Classical Motion Planners. IEEE Trans. Robot. 2021, 37, 48–66. [Google Scholar] [CrossRef]
- Gebauer, D.; Geng, P.; Hartmann, A.; Dirr, J.; Fuchs, S.; Daub, R. Uncertainty-Integrating, Automated Design of Gripper Jaws for Robust Grasping of Electrical Connectors. Prod. Eng. 2025, 19, 15–27. [Google Scholar] [CrossRef]
- Buzzatto, J.; Chapman, J.; Shahmohammadi, M.; Sanches, F.; Nejati, M.; Matsunaga, S.; Haraguchi, R.; Mariyama, T.; MacDonald, B.; Liarokapis, M. On Robotic Manipulation of Flexible Flat Cables: Employing a Multi-Modal Gripper with Dexterous Tips, Active Nails, and a Reconfigurable Suction Cup Module. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2022, Kyoto, Japan, 23–27 October 2022; pp. 1602–1608. [Google Scholar] [CrossRef]
- Tu, Y.; Jiang, J.; Huang, J.; Sui, J.; Yang, S. A Review of Wrist Mechanism Design and the Application in Gastrointestinal Minimally Invasive Surgery of Multi-Degree-of-Freedom Surgical Laparoscopic Instruments. Surg. Endosc. 2025, 39, 99–121. [Google Scholar] [CrossRef] [PubMed]
- Cepolina, F.; Razzoli, R. Review of Robotic Surgery Platforms and End Effectors. J. Robot. Surg. 2024, 18, 74. [Google Scholar] [CrossRef]
- Zhao, Y.; Tong, D.; Chen, Y.; Chen, Q.; Wu, Z.; Xu, X.; Fan, X.; Xie, H.; Yang, Z. Microgripper Robot with End Electropermanent Magnet Collaborative Actuation. Micromachines 2024, 15, 798. [Google Scholar] [CrossRef] [PubMed]
- Russo, M.; Sadati, S.M.H.; Dong, X.; Mohammad, A.; Walker, I.D.; Bergeles, C.; Xu, K.; Axinte, D.A. Continuum Robots: An Overview. Adv. Intell. Syst. 2023, 5, 2200367. [Google Scholar] [CrossRef]
- Wang, M.; Dong, X.; Ba, W.; Mohammad, A.; Axinte, D.; Norton, A. Design, Modelling and Validation of a Novel Extra Slender Continuum Robot for in-Situ Inspection and Repair in Aeroengine. Robot. Comput.-Integr. Manuf. 2021, 67, 102054. [Google Scholar] [CrossRef]
- Li, G.; Yu, J.; Dong, D.; Pan, J.; Wu, H.; Cao, S.; Pei, X.; Huang, X.; Yi, J. Systematic Design of a 3-DOF Dual-Segment Continuum Robot for In Situ Maintenance in Nuclear Power Plants. Machines 2022, 10, 596. [Google Scholar] [CrossRef]
- Sun, L.; Chen, X. Flexible Continuum Robot System for Minimally Invasive Endoluminal Gastrointestinal Endoscopy. Machines 2024, 12, 370. [Google Scholar] [CrossRef]
- Elguea-Aguinaco, I.; Serrano-Muñoz, A.; Chrysostomou, D.; Inziarte-Hidalgo, I.; Bøgh, S.; Arana-Arexolaleiba, N. A Review on Reinforcement Learning for Contact-Rich Robotic Manipulation Tasks. Robot. Comput.-Integr. Manuf. 2023, 81, 102517. [Google Scholar] [CrossRef]
- Zhu, H.; Gupta, A.; Rajeswaran, A.; Levine, S.; Kumar, V. Dexterous Manipulation with Deep Reinforcement Learning: Efficient, General, and Low-Cost. In Proceedings of the 2019 International Conference on Robotics and Automation, ICRA 2019, Montreal, QC, Canada, 20–24 May 2019; pp. 3651–3657. [Google Scholar] [CrossRef]
- Andrychowicz, O.M.; Baker, B.; Chociej, M.; Józefowicz, R.; McGrew, B.; Pachocki, J.; Petron, A.; Plappert, M.; Powell, G.; Ray, A.; et al. Learning Dexterous In-Hand Manipulation. Int. J. Robot. Res. 2020, 39, 3–20. [Google Scholar] [CrossRef]
- Zhao, W.; Queralta, J.P.; Westerlund, T. Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: A Survey. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020, Canberra, Australia, 1–4 December 2020; pp. 737–744. [Google Scholar] [CrossRef]
- Peng, X.B.; Andrychowicz, M.; Zaremba, W.; Abbeel, P. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation, ICRA 2018, Brisbane, Australia, 21–25 May 2018; pp. 3803–3810. [Google Scholar] [CrossRef]
- Rosette, M.; Armstrong, T.; Nave, K.; Brown, J.; DuFrene, K.; Fitter, N.; Davidson, J. Tactile-Based Autonomous Cable Manipulation: Pose Estimation and Connector Verification. In Proceedings of the ASME 2024 International Mechanical Engineering Congress and Exposition, IMECE 2024, Portland, OR, USA, 17–21 November 2024; Volume 2, pp. 1–8. [Google Scholar] [CrossRef]
- Yumbla, F.; Yi, J.S.; Abayebas, M.; Shafiyev, M.; Moon, H. Tolerance Dataset: Mating Process of Plug-in Cable Connectors for Wire Harness Assembly Tasks. Intell. Serv. Robot. 2020, 13, 159–168. [Google Scholar] [CrossRef]
- Zang, Y.; Xu, X.; Qu, M.; Dixon, R.; Ye, J.; Hajiyavand, A.; Goli, F.; Zhang, Y.; Wang, Y. Robotic Disassembly Skill Acquisition Based on Reinforcement Learning With External Knowledge Injection. IEEE Trans. Ind. Inform. 2025, 21, 4576–4585. [Google Scholar] [CrossRef]




| Lock Type | Examples | Characteristics | Disassembly Strategy |
|---|---|---|---|
| Friction Fit (No Active Lock) | USB-A, HDMI | Relies on tight tolerances and friction alone. | Gentle pulling, often with slight rocking motion. |
| Magnetic Locking | MagSafe, some medical-grade connectors. | Uses magnets for quick attach–detach. | Usually disconnects under lateral force or when cable is pulled. |
| Latch or Snap-In Mechanism | RJ45 (Ethernet), Molex, JST, ATX connectors. | Plastic tab must be depressed to release. | Use thumb or flat tool to press the latch before pulling. |
| Bayonet Lock | BNC, some circular military connectors. | Quarter-turn twist locks the connector. | Twist the coupling nut or shell to unlock. |
| Threaded Coupling | M12, SMA, MC4 (solar), aviation connectors. | Threaded outer sleeve or ring. | Use proper-sized wrenches or hands to unscrew. |
| Screw-Lock (e.g., D-subminiature) | DB9, DB25 | Uses captive screws to secure plug to socket. | Use precision screwdriver to loosen screws before unplugging. |
| Cam or Lever Lock | High-current industrial connectors, Harting HAN. | Lever actuated to compress or release the connection. | Lift lever fully before attempting to separate halves. |
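
The lock-type-to-strategy mapping above can be captured as a simple dispatch table, which is how a disassembly planner might select an action sequence once a lock type has been classified. This is a minimal illustrative sketch: the primitive names (e.g., `press_latch`, `quarter_turn`) are hypothetical placeholders for robot skills and do not come from the reviewed works.

```python
from enum import Enum, auto

class LockType(Enum):
    FRICTION_FIT = auto()
    MAGNETIC = auto()
    LATCH = auto()
    BAYONET = auto()
    THREADED = auto()
    SCREW_LOCK = auto()
    CAM_LEVER = auto()

# Hypothetical motion primitives; a real system would bind each name
# to a force-controlled robot skill or a tool change.
UNLOCK_SEQUENCES = {
    LockType.FRICTION_FIT: ["grasp", "rock", "pull"],
    LockType.MAGNETIC:     ["grasp", "lateral_pull"],
    LockType.LATCH:        ["press_latch", "hold", "pull"],
    LockType.BAYONET:      ["grasp_shell", "quarter_turn", "pull"],
    LockType.THREADED:     ["grasp_sleeve", "unscrew", "pull"],
    LockType.SCREW_LOCK:   ["tool_change:screwdriver", "loosen_screws", "pull"],
    LockType.CAM_LEVER:    ["lift_lever", "pull"],
}

def plan_unlock(lock: LockType) -> list[str]:
    """Return the ordered primitive sequence for a detected lock type."""
    return UNLOCK_SEQUENCES[lock]
```

In practice the classifier output would feed `plan_unlock`, and each primitive would be executed with its own compliance and force thresholds.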

| Approach | Main Features | Advantages | Limitations | Representative Works |
|---|---|---|---|---|
| Traditional Computer Vision | Structured image-processing pipeline (noise reduction, thresholding, edge detection, blob analysis). | Simple, explainable, low computational cost; effective in controlled conditions. | Poor robustness to lighting, background, and object variability; not suitable for unstructured e-waste. | [41,42,43] |
| Deep Learning Object Detection | Bounding box regression and classification via learned features (CNN/transformers). | High robustness to clutter, lighting changes, and object variability. | Requires labeled training data; provides only bounding boxes (no shape details). | [31,54] |
| Instance Segmentation | Pixel-level object identification combined with class prediction. | Provides precise contour and class info; handles overlapping or irregular shapes. | Computationally intensive; requires precise pixel-level annotations. | [68,69,70,71] |
| Physical Cable-Following | Follows cables physically to locate the connector at the termination. | Avoids direct visual detection of small components; useful for hidden connectors. | Limited by cable routing complexity; physical following is risky in clutter. | [76] |
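
The structured pipeline in the first row (threshold, then blob analysis) can be sketched in a few lines of pure Python. This is a toy version operating on a grayscale pixel grid, not code from the cited works; it shows why such pipelines are simple and explainable, and also why they break down outside controlled lighting (the fixed threshold).

```python
from collections import deque

def detect_blobs(img, thresh=128, min_area=2):
    """Toy traditional-CV pipeline: threshold a grayscale image, then
    group foreground pixels into 4-connected components (blobs)."""
    h, w = len(img), len(img[0])
    fg = [[img[y][x] >= thresh for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:  # flood fill collects one blob
                    py, px = queue.popleft()
                    pixels.append((py, px))
                    for ny, nx in ((py - 1, px), (py + 1, px),
                                   (py, px - 1), (py, px + 1)):
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_area:  # discard speckle noise
                    mean_y = sum(p[0] for p in pixels) / len(pixels)
                    mean_x = sum(p[1] for p in pixels) / len(pixels)
                    blobs.append({"area": len(pixels), "centroid": (mean_y, mean_x)})
    return blobs
```

Each blob's centroid would serve as a crude connector-candidate location; real pipelines add noise reduction and shape filtering before this step.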

| Architecture | Mechanism | Advantages | Limitations | Representative Works |
|---|---|---|---|---|
| Two-Stage Detectors | Region proposal + classification (e.g., Faster R-CNN, Mask R-CNN). | High precision and robustness; handles occlusions and small objects well. | Computationally expensive; slower inference; needs large annotated datasets. | [46,48] |
| One-Stage Detectors | Single-pass prediction of bounding boxes and classes (e.g., YOLO, SSD). | Fast inference, real-time capability, suitable for embedded use. | Reduced accuracy on small or occluded connectors; dataset-dependent. | [31,33,54,56,57,60] |
| Visual Transformers | Hierarchical attention mechanisms (e.g., Swin Transformer) modeling long-range dependencies. | Superior generalization and robustness to background variability; global context awareness. | High computational cost and massive data requirements for pre-training. | [65] |
| Foundation Multimodal Models | Models trained on huge datasets (e.g., millions of image-text samples) that learn a generalized representation of the world. | Detection based on text prompts; zero-shot generalization; fine-tuning possible with few samples. | Few studies address ECC detection; demanding hardware requirements for local computation. | [72,73,74] |
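
Both one- and two-stage detectors listed above rely on greedy non-maximum suppression (NMS) to prune the many overlapping box predictions a network emits. A minimal, framework-free sketch of the standard algorithm (boxes as `(x1, y1, x2, y2)` tuples, an illustrative convention):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(detections, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap it above iou_thresh, and repeat on the remainder."""
    dets = sorted(detections, key=lambda d: d["score"], reverse=True)
    kept = []
    for d in dets:
        if all(iou(d["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```

The `iou_thresh` trade-off matters for ECCs: a low threshold merges adjacent small connectors into one detection, which is one reason one-stage detectors struggle with closely spaced parts.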

| Approach | Main Features | Advantages | Limitations | Representative Works |
|---|---|---|---|---|
| Planar Approximation | Assumes connectors lie on a 2D plane; uses blob/edge analysis for X-Y position and Z-rotation. | Simple implementation using traditional CV; low computational cost. | Fails in 3D clutter; inaccurate for tilted connectors; limited to structured scenes. | [42,43] |
| 3D-Model-Based (Point Cloud) | Aligns acquired point clouds with CAD models using ICP, FPFH, or global optimization. | High accuracy; handles full 6D pose estimation. | Requires CAD models; sensitive to outliers and initial alignment. | [33,88,89] |
| 3D-Model-Based (Projection) | Matches 2D camera images with projected views of a 3D CAD model. | Accurate 6D pose recovery from mono/stereo images. | Time-consuming due to multiple projection comparisons; relies on CAD. | [57] |
| Reference Cloud Matching | Matches scene data to a previously scanned reference point cloud (no CAD); approximates shape as cuboid. | Reduces dependency on proprietary CAD models. | Still requires prior scanning/knowledge of the specific connector instance. | [90] |
| Indirect/Cable-Based | Infers connector pose by tracking the cable end and predicting rotation via neural networks. | Avoids direct CAD need; robust to connector occlusion if cable is visible. | Error accumulation from cable tracking; difficult for small/symmetrical connectors. | [34,92,93] |
| Shape Approximation | Approximates unknown objects with simplified shapes (spheres, cylinders, cuboids). | Enables pose estimation for unknown objects without prior models. | Result is only an approximation; reliable only for objects close to primitive shapes. | [94] |
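
The core of the ICP-style registration in the point-cloud row is a closed-form rigid alignment of corresponded point sets. A minimal 2D version is sketched below, assuming correspondences are already known; full ICP alternates this step with nearest-neighbor matching, and the 3D case replaces the angle formula with an SVD (Kabsch) solve.

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points onto dst,
    given known correspondences (the inner step of an ICP loop)."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n; dy = sum(q[1] for q in dst) / n
    s_dot = s_cross = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - sx, py - sy          # centered source point
        bx, by = qx - dx, qy - dy          # centered target point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)     # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = dx - (c * sx - s * sy)            # translation applied after rotation
    ty = dy - (s * sx + c * sy)
    return theta, (tx, ty)
```

The sensitivity to initial alignment noted in the table stems from the matching step, not this solve: with wrong correspondences the closed-form answer is confidently wrong, which is why outlier rejection or global optimization is layered on top.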

| Approach | Main Features | Advantages | Limitations | Representative Works |
|---|---|---|---|---|
| Rigid Custom Grippers | Geometry-matched jaws derived from CAD; simple open/close logic. | High stability and precision for specific connector types. | Lacks versatility (one tool per connector); cannot handle uncertainty or heterogeneous locks. | [32,101,102,114] |
| Passive Compliant Grippers | Uses flexible materials or mechanisms to adapt to different shapes. | Tolerates misalignment during insertion/removal without complex control. | Difficult to model; usually lacks force sensing/feedback. | [99] |
| Articulated/Multimodal End-Effectors | Multiple-DoF fingers or active tools (nails, reconfigurable tips) for in-hand manipulation. | High dexterity; capable of operating locking mechanisms. | Bulky designs reduce accessibility; complex control and hardware. | [35,43,115] |
| Surgical/Continuum End-Effectors | Flexible, snake-like end-effectors or micro-grippers inspired by medical robotics. | Excellent accessibility in confined/occluded spaces; integrates local sensing. | High cost; complex non-linear control; currently mostly teleoperated. | [116,117,118,119,120] |
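
The misalignment tolerance attributed to compliant grippers can be illustrated with a damping-only admittance law: the commanded position yields along the measured contact force, so a lateral offset during insertion decays toward zero. The sketch below uses a linearized contact model and illustrative gains (`k_env`, `b`), not values from the cited works; passive compliance achieves the same effect mechanically, without the explicit control loop.

```python
def align_by_admittance(offset, k_env=200.0, b=50.0, dt=0.01, steps=500):
    """Simulate lateral self-alignment during connector insertion.

    The environment pushes back with F = -k_env * offset (linearized
    contact), and a damping-only admittance law x_dot = F / b yields
    along that force. Stable when k_env / b * dt < 2.
    """
    for _ in range(steps):
        f = -k_env * offset          # contact force from misalignment
        offset += (f / b) * dt       # discrete admittance update
    return offset
```

Each step scales the offset by `1 - (k_env / b) * dt` (here 0.96), giving exponential decay; the damping `b` trades convergence speed against sensitivity to force-sensor noise.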
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Dall’Olio, M.; Ida’, E.; Carricato, M. Robotic Disassembly of Electrical Cable Connectors: A Critical Review. Robotics 2026, 15, 60. https://doi.org/10.3390/robotics15030060

