A Survey on Deep Learning in 3D CAD Reconstruction
Abstract
1. Introduction
- We briefly review recent progress in deep learning-based 3D CAD reconstruction, covering representative methods and research trends of the past several years.
- We summarize existing reconstruction approaches by input modality, including point clouds, sketches, and other forms, and also summarize deep learning-based CAD sketch design and generation.
- We introduce the CAD data representation formats commonly used in deep learning frameworks, which aids understanding of model structure and design consistency.
- We summarize the commonly used public CAD datasets.
- We discuss the main challenges, existing limitations, and potential directions for future research.
2. Reconstruction from Point Cloud to CAD Model
3. Reconstructing CAD Models from Sketches
4. Reconstruction of CAD Models from Other Forms
5. Design and Generation of CAD Sketches
6. CAD Representation
6.1. B-Rep (Boundary Representation)
6.2. Polygon Mesh
6.3. Sequence Representation
6.4. Constructive Solid Geometry (CSG)
6.5. Sketch Representation
7. Datasets
7.1. ShapeNetCore Dataset
7.2. CSGNet Synthetic Dataset
7.3. ABC Dataset
7.4. CC3D Dataset
7.5. CC3D-Ops Dataset
7.6. Fusion 360 Gallery Reconstruction Dataset
7.7. MFCAD++ Dataset
7.8. DeepCAD Dataset
7.9. Fusion 360 Gallery Assembly Dataset
7.10. SketchGraphs Dataset
7.11. Furniture Dataset
8. Metrics
8.1. Reconstruction from a Point Cloud to a CAD Model
8.2. Reconstruction from Sketches to CAD Models
8.2.1. From Sketches to 3D CAD Models
8.2.2. Generative Reconstruction of CAD Sketches
8.3. Reconstruction from Other Forms to CAD Models
8.3.1. Other Forms to B-Rep 3D CAD Model Reconstruction
8.3.2. Reconstruction of 3D CAD Models from Other Forms
9. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wu, J.; Zhang, C.; Xue, T.; Freeman, B.; Tenenbaum, J. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. Adv. Neural Inf. Process. Syst. 2016, 29, 82–90. [Google Scholar]
- Fan, H.; Su, H.; Guibas, L.J. A point set generation network for 3D object reconstruction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 605–613. [Google Scholar]
- Wohlers, T. 3D printing and additive manufacturing state of the industry. In Annual Worldwide Progress Report; Wohlers Associates: Fort Collins, CO, USA, 2014. [Google Scholar]
- Jamróz, W.; Szafraniec, J.; Kurek, M.; Jachowicz, R. 3D printing in pharmaceutical and medical applications–recent achievements and challenges. Pharm. Res. 2018, 35, 1–22. [Google Scholar] [CrossRef] [PubMed]
- Prince, S. Digital Visual Effects in Cinema: The Seduction of Reality; Rutgers University Press: New Brunswick, NJ, USA, 2011. [Google Scholar]
- Galeazzi, F. Towards the definition of best 3D practices in archaeology: Assessing 3D documentation techniques for intra-site data recording. J. Cult. Herit. 2016, 17, 159–169. [Google Scholar] [CrossRef]
- Chang, A.X.; Funkhouser, T.; Guibas, L.; Hanrahan, P.; Huang, Q.; Li, Z.; Savarese, S.; Savva, M.; Song, S.; Su, H.; et al. ShapeNet: An information-rich 3D model repository. arXiv 2015, arXiv:1512.03012. [Google Scholar]
- Koch, S.; Matveev, A.; Jiang, Z.; Williams, F.; Artemov, A.; Burnaev, E.; Alexa, M.; Zorin, D.; Panozzo, D. ABC: A big CAD model dataset for geometric deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 9601–9611. [Google Scholar]
- Willis, K.D.; Pu, Y.; Luo, J.; Chu, H.; Du, T.; Lambourne, J.G.; Solar-Lezama, A.; Matusik, W. Fusion 360 gallery: A dataset and environment for programmatic CAD construction from human design sequences. ACM Trans. Graph. TOG 2021, 40, 1–24. [Google Scholar] [CrossRef]
- Wu, R.; Xiao, C.; Zheng, C. DeepCAD: A deep generative network for computer-aided design models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 6772–6782. [Google Scholar]
- Sharma, G.; Goyal, R.; Liu, D.; Kalogerakis, E.; Maji, S. CSGNet: Neural shape parser for constructive solid geometry. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 5515–5523. [Google Scholar]
- Sharma, G.; Liu, D.; Maji, S.; Kalogerakis, E.; Chaudhuri, S.; Měch, R. ParSeNet: A parametric surface fitting network for 3D point clouds. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part VII 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 261–276. [Google Scholar]
- Guo, H.; Liu, S.; Pan, H.; Liu, Y.; Tong, X.; Guo, B. Complexgen: CAD reconstruction by B-rep chain complex generation. ACM Trans. Graph. TOG 2022, 41, 1–18. [Google Scholar] [CrossRef]
- Jayaraman, P.K.; Lambourne, J.G.; Desai, N.; Willis, K.D.; Sanghi, A.; Morris, N.J. SolidGen: An autoregressive model for direct B-rep synthesis. arXiv 2022, arXiv:2203.13944. [Google Scholar]
- Xu, X.; Willis, K.D.; Lambourne, J.G.; Cheng, C.Y.; Jayaraman, P.K.; Furukawa, Y. SkexGen: Autoregressive generation of CAD construction sequences with disentangled codebooks. arXiv 2022, arXiv:2207.04632. [Google Scholar]
- Jayaraman, P.K.; Sanghi, A.; Lambourne, J.G.; Willis, K.D.; Davies, T.; Shayani, H.; Morris, N. UV-Net: Learning from boundary representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, USA, 19–25 June 2021; pp. 11703–11712. [Google Scholar]
- Lambourne, J.G.; Willis, K.D.; Jayaraman, P.K.; Sanghi, A.; Meltzer, P.; Shayani, H. BRepNet: A topological message passing system for solid models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, USA, 19–25 June 2021; pp. 12773–12782. [Google Scholar]
- Chen, Z.; Tagliasacchi, A.; Zhang, H. BSP-Net: Generating compact meshes via binary space partitioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 45–54. [Google Scholar]
- Du, T.; Inala, J.P.; Pu, Y.; Spielberg, A.; Schulz, A.; Rus, D.; Solar-Lezama, A.; Matusik, W. InverseCSG: Automatic conversion of 3D models to CSG trees. ACM Trans. Graph. TOG 2018, 37, 1–16. [Google Scholar] [CrossRef]
- Kania, K.; Zieba, M.; Kajdanowicz, T. UCSG-NET: Unsupervised discovering of constructive solid geometry tree. Adv. Neural Inf. Process. Syst. 2020, 33, 8776–8786. [Google Scholar]
- Chen, Y.; Lee, H. A neural network system feature recognition for two-dimensional. Int. J. Comput. Integr. Manuf. 1998, 11, 111–117. [Google Scholar] [CrossRef]
- Ding, L.; Yue, Y. Novel ANN-based feature recognition incorporating design by features. Comput. Ind. 2004, 55, 197–222. [Google Scholar] [CrossRef]
- Henderson, M.R.; Srinath, G.; Stage, R.; Walker, K.; Regli, W. Boundary representation-based feature identification. In Manufacturing Research and Technology; Elsevier: Amsterdam, The Netherlands, 1994; Volume 20, pp. 15–38. [Google Scholar]
- Marquez, M.; Gill, R.; White, A. Application of neural networks in feature recognition of mould reinforced plastic parts. Concurr. Eng. 1999, 7, 115–122. [Google Scholar] [CrossRef]
- Cao, W.; Robinson, T.; Hua, Y.; Boussuge, F.; Colligan, A.R.; Pan, W. Graph representation of 3D CAD models for machining feature recognition with deep learning. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Virtual, 17–19 August 2020. [Google Scholar]
- Li, P.; Guo, J.; Zhang, X.; Yan, D.M. SECAD-Net: Self-supervised CAD reconstruction by learning sketch-extrude operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 16816–16826. [Google Scholar]
- Ren, D.; Zheng, J.; Cai, J.; Li, J.; Zhang, J. ExtrudeNet: Unsupervised inverse sketch-and-extrude for shape parsing. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 482–498. [Google Scholar]
- Zhou, S.; Tang, T.; Zhou, B. CADParser: A learning approach of sequence modeling for B-rep CAD. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), Macau, China, 19–25 August 2023; International Joint Conferences on Artificial Intelligence Organization: Macau, China, 2023. [Google Scholar]
- Wang, X.; Xu, Y.; Xu, K.; Tagliasacchi, A.; Zhou, B.; Mahdavi-Amiri, A.; Zhang, H. PIE-NET: Parametric inference of point cloud edges. Adv. Neural Inf. Process. Syst. 2020, 33, 20167–20178. [Google Scholar]
- Uy, M.A.; Chang, Y.Y.; Sung, M.; Goel, P.; Lambourne, J.G.; Birdal, T.; Guibas, L.J. Point2Cyl: Reverse engineering 3D objects from point clouds to extrusion cylinders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 11850–11860. [Google Scholar]
- Hu, W.; Zheng, J.; Zhang, Z.; Yuan, X.; Yin, J.; Zhou, Z. PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views with Learnt Shape Programs. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2–6 October 2023; pp. 18495–18505. [Google Scholar]
- Willis, K.D.; Jayaraman, P.K.; Lambourne, J.G.; Chu, H.; Pu, Y. Engineering sketch generation for computer-aided design. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 2105–2114. [Google Scholar]
- Buonamici, F.; Carfagni, M.; Furferi, R.; Governi, L.; Lapini, A.; Volpe, Y. Reverse engineering modeling methods and tools: A survey. Comput.-Aided Des. Appl. 2018, 15, 443–464. [Google Scholar] [CrossRef]
- Masuda, H. Topological operators and Boolean operations for complex-based nonmanifold geometric models. Comput.-Aided Des. 1993, 25, 119–129. [Google Scholar] [CrossRef]
- Birdal, T.; Busam, B.; Navab, N.; Ilic, S.; Sturm, P. Generic primitive detection in point clouds using novel minimal quadric fits. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 1333–1347. [Google Scholar] [CrossRef]
- Li, L.; Sung, M.; Dubrovina, A.; Yi, L.; Guibas, L.J. Supervised fitting of geometric primitives to 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 2652–2660. [Google Scholar]
- Sommer, C.; Sun, Y.; Bylow, E.; Cremers, D. PrimiTect: Fast continuous hough voting for primitive detection. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Paris, France, 2020; pp. 8404–8410. [Google Scholar]
- Dong, Y.; Xu, B.; Liao, T.; Yin, C.; Tan, Z. Application of local-feature-based 3-D point cloud stitching method of low-overlap point cloud to aero-engine blade measurement. IEEE Trans. Instrum. Meas. 2023, 72, 1–13. [Google Scholar] [CrossRef]
- Deng, J.; Liu, S.; Chen, H.; Chang, Y.; Yu, Y.; Ma, W.; Wang, Y.; Xie, H. A Precise Method for Identifying 3D Circles in Freeform Surface Point Clouds. IEEE Trans. Instrum. Meas. 2025. [Google Scholar] [CrossRef]
- Song, Y.; Huang, G.; Yin, J.; Wang, D. Three-dimensional reconstruction of bubble geometry from single-perspective images based on ray tracing algorithm. Meas. Sci. Technol. 2024, 36, 016010. [Google Scholar] [CrossRef]
- Xu, X.; Fu, X.; Zhao, H.; Liu, M.; Xu, A.; Ma, Y. Three-dimensional reconstruction and geometric morphology analysis of lunar small craters within the patrol range of the Yutu-2 Rover. Remote Sens. 2023, 15, 4251. [Google Scholar] [CrossRef]
- Birdal, T.; Busam, B.; Navab, N.; Ilic, S.; Sturm, P. A minimalist approach to type-agnostic detection of quadrics in point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 3530–3540. [Google Scholar]
- Paschalidou, D.; Ulusoy, A.O.; Geiger, A. Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10396–10405. [Google Scholar]
- Benko, P.; Várady, T. Segmentation methods for smooth point regions of conventional engineering objects. Comput.-Aided Des. 2004, 36, 511–523. [Google Scholar] [CrossRef]
- Xu, X.; Peng, W.; Cheng, C.Y.; Willis, K.D.; Ritchie, D. Inferring CAD modeling sequences using zone graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6062–6070. [Google Scholar]
- Atzmon, M.; Lipman, Y. SAL: Sign agnostic learning of shapes from raw data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 2565–2574. [Google Scholar]
- Park, J.J.; Florence, P.; Straub, J.; Newcombe, R.; Lovegrove, S. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 165–174. [Google Scholar]
- Cherenkova, K.; Aouada, D.; Gusev, G. Pvdeconv: Point-voxel deconvolution for autoencoding CAD construction in 3D. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: New York, NY, USA, 2020; pp. 2741–2745. [Google Scholar]
- Yang, B.; Jiang, H.; Pan, H.; Wonka, P.; Xiao, J.; Lin, G. PS-CAD: Local Geometry Guidance via Prompting and Selection for CAD Reconstruction. arXiv 2024, arXiv:2405.15188. [Google Scholar] [CrossRef]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–7 October 2023; pp. 4015–4026. [Google Scholar]
- Liu, Y.; D’Aronco, S.; Schindler, K.; Wegner, J.D. PC2WF: 3D wireframe reconstruction from raw point clouds. arXiv 2021, arXiv:2103.02766. [Google Scholar]
- Zhou, Y.; Qi, H.; Zhai, Y.; Sun, Q.; Chen, Z.; Wei, L.Y.; Ma, Y. Learning to reconstruct 3D manhattan wireframes from a single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7698–7707. [Google Scholar]
- Zhou, Y.; Qi, H.; Ma, Y. End-to-end wireframe parsing. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 962–971. [Google Scholar]
- Xue, N.; Wu, T.; Bai, S.; Wang, F.; Xia, G.S.; Zhang, L.; Torr, P.H. Holistically-attracted wireframe parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 2788–2797. [Google Scholar]
- Wu, X.; Jiang, L.; Wang, P.S.; Liu, Z.; Liu, X.; Qiao, Y.; Ouyang, W.; He, T.; Zhao, H. Point Transformer V3: Simpler, faster, stronger. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 4840–4851. [Google Scholar]
- Yin, C.; Wang, B.; Gan, V.J.; Wang, M.; Cheng, J.C. Automated semantic segmentation of industrial point clouds using ResPointNet++. Autom. Constr. 2021, 130, 103874. [Google Scholar] [CrossRef]
- Yu, W.; Shu, J.; Yang, Z.; Ding, H.; Zeng, W.; Bai, Y. Deep learning-based pipe segmentation and geometric reconstruction from poorly scanned point clouds using BIM-driven data alignment. Autom. Constr. 2025, 173, 106071. [Google Scholar] [CrossRef]
- Volk, R.; Stengel, J.; Schultmann, F. Building Information Modeling (BIM) for existing buildings—Literature review and future needs. Autom. Constr. 2014, 38, 109–127. [Google Scholar] [CrossRef]
- Tang, S.; Li, X.; Zheng, X.; Wu, B.; Wang, W.; Zhang, Y. BIM generation from 3D point clouds by combining 3D deep learning and improved morphological approach. Autom. Constr. 2022, 141, 104422. [Google Scholar] [CrossRef]
- Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 1983, 29, 551–559. [Google Scholar] [CrossRef]
- Son, H.; Kim, C. Automatic segmentation and 3D modeling of pipelines into constituent parts from laser-scan data of the built environment. Autom. Constr. 2016, 68, 203–211. [Google Scholar] [CrossRef]
- Jung, J.; Hong, S.; Yoon, S.; Kim, J.; Heo, J. Automated 3D wireframe modeling of indoor structures from point clouds using constrained least-squares adjustment for as-built BIM. J. Comput. Civ. Eng. 2016, 30, 04015074. [Google Scholar] [CrossRef]
- Avetisyan, A.; Dahnert, M.; Dai, A.; Savva, M.; Chang, A.X.; Nießner, M. Scan2CAD: Learning CAD model alignment in rgb-d scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2614–2623. [Google Scholar]
- Pu, J.; Lou, K.; Ramani, K. A 2D sketch-based user interface for 3D CAD model retrieval. Comput.-Aided Des. Appl. 2005, 2, 717–725. [Google Scholar] [CrossRef]
- Bonnici, A.; Akman, A.; Calleja, G.; Camilleri, K.P.; Fehling, P.; Ferreira, A.; Hermuth, F.; Israel, J.H.; Landwehr, T.; Liu, J.; et al. Sketch-based interaction and modeling: Where do we stand? AI EDAM 2019, 33, 370–388. [Google Scholar] [CrossRef]
- Lun, Z.; Gadelha, M.; Kalogerakis, E.; Maji, S.; Wang, R. 3D shape reconstruction from sketches via multi-view convolutional networks. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; IEEE: New York, NY, USA, 2017; pp. 67–77. [Google Scholar]
- Li, C.; Pan, H.; Bousseau, A.; Mitra, N.J. Sketch2CAD: Sequential CAD modeling by sketching in context. ACM Trans. Graph. TOG 2020, 39, 1–14. [Google Scholar] [CrossRef]
- Huang, H.; Kalogerakis, E.; Yumer, E.; Mech, R. Shape synthesis from sketches via procedural models and convolutional networks. IEEE Trans. Vis. Comput. Graph. 2016, 23, 2003–2013. [Google Scholar] [CrossRef]
- Bae, S.H.; Balakrishnan, R.; Singh, K. ILoveSketch: As-natural-as-possible sketching system for creating 3d curve models. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (UIST), Monterey, CA, USA, 19–22 October 2008; pp. 151–160. [Google Scholar]
- Han, W.; Xiang, S.; Liu, C.; Wang, R.; Feng, C. SPARE3D: A dataset for spatial reasoning on three-view line drawings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 14690–14699. [Google Scholar]
- Shin, B.S.; Shin, Y.G. Fast 3D solid model reconstruction from orthographic views. Comput.-Aided Des. 1998, 30, 63–76. [Google Scholar] [CrossRef]
- Nash, C.; Ganin, Y.; Eslami, S.A.; Battaglia, P. PolyGen: An autoregressive generative model of 3D meshes. In Proceedings of the International Conference on Machine Learning (ICML), Washington, DC, USA, 13–18 July 2020; pp. 7220–7229. [Google Scholar]
- Wang, W.; Grinstein, G.G. A survey of 3D solid reconstruction from 2D projection line drawings. Comput. Graph. Forum 1993, 12, 137–158. [Google Scholar]
- Sakurai, H.; Gossard, D.C. Solid model input through orthographic views. ACM SIGGRAPH Comput. Graph. 1983, 17, 243–252. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Advances in Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Volume 30, pp. 5998–6008. [Google Scholar]
- Fan, Z.; Zhu, L.; Li, H.; Chen, X.; Zhu, S.; Tan, P. FloorPlanCAD: A large-scale CAD drawing dataset for panoptic symbol spotting. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Conference, 11–17 October 2021; pp. 10128–10137. [Google Scholar]
- Fan, Z.; Chen, T.; Wang, P.; Wang, Z. Cadtransformer: Panoptic symbol spotting transformer for CAD drawings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 10986–10996. [Google Scholar]
- Shtof, A.; Agathos, A.; Gingold, Y.; Shamir, A.; Cohen-Or, D. Geosemantic snapping for sketch-based modeling. Comput. Graph. Forum 2013, 32, 245–253. [Google Scholar]
- Yang, G.; Huang, X.; Hao, Z.; Liu, M.Y.; Belongie, S.; Hariharan, B. PointFlow: 3D point cloud generation with continuous normalizing flows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4541–4550. [Google Scholar]
- Wang, N.; Zhang, Y.; Li, Z.; Fu, Y.; Liu, W.; Jiang, Y.G. Pixel2Mesh: Generating 3D mesh models from single rgb images. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 52–67. [Google Scholar]
- Xu, X.; Jayaraman, P.K.; Lambourne, J.G.; Willis, K.D.; Furukawa, Y. Hierarchical neural coding for controllable CAD model generation. arXiv 2023, arXiv:2307.00149. [Google Scholar]
- Yu, F.; Chen, Z.; Li, M.; Sanghi, A.; Shayani, H.; Mahdavi-Amiri, A.; Zhang, H. Capri-Net: Learning compact CAD shapes with adaptive primitive assembly. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11768–11778. [Google Scholar]
- Sharma, G.; Goyal, R.; Liu, D.; Kalogerakis, E.; Maji, S. Neural shape parsers for constructive solid geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1234–1245. [Google Scholar]
- Xu, X.; Lambourne, J.G.; Jayaraman, P.K.; Wang, Z.; Willis, K.D.; Furukawa, Y. BrepGen: A B-rep Generative Diffusion Model with Structured Latent Geometry. arXiv 2024, arXiv:2401.15563. [Google Scholar] [CrossRef]
- Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
- Jones, B.T.; Hu, M.; Kodnongbua, M.; Kim, V.G.; Schulz, A. Self-supervised representation learning for CAD. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–24 June 2023; pp. 21327–21336. [Google Scholar]
- Druc, S.; Balu, A.; Wooldridge, P.; Krishnamurthy, A.; Sarkar, S. Concept activation vectors for generating user-defined 3D shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2993–3000. [Google Scholar]
- Camba, J.D.; Contero, M.; Company, P. Parametric CAD modeling: An analysis of strategies for design reusability. Comput.-Aided Des. 2016, 74, 18–31. [Google Scholar] [CrossRef]
- Choi, G.H.; Mun, D.H.; Han, S.H. Exchange of CAD part models based on the macro-parametric approach. Int. J. CAD/CAM 2002, 2, 13–21. [Google Scholar]
- Yan, C.; Vanderhaeghe, D.; Gingold, Y. A benchmark for rough sketch cleanup. ACM Trans. Graph. TOG 2020, 39, 1–14. [Google Scholar] [CrossRef]
- Para, W.; Bhat, S.; Guerrero, P.; Kelly, T.; Mitra, N.; Guibas, L.J.; Wonka, P. SketchGen: Generating constrained CAD sketches. Adv. Neural Inf. Process. Syst. 2021, 34, 5077–5088. [Google Scholar]
- Ganin, Y.; Bartunov, S.; Li, Y.; Keller, E.; Saliceti, S. Computer-aided design as language. Adv. Neural Inf. Process. Syst. 2021, 34, 5885–5897. [Google Scholar]
- Seff, A.; Zhou, W.; Richardson, N.; Adams, R.P. Vitruvion: A generative model of parametric CAD sketches. arXiv 2021, arXiv:2109.14124. [Google Scholar]
- Xu, P.; Fu, H.; Zheng, Y.; Singh, K.; Huang, H.; Tai, C.L. Model-guided 3D sketching. IEEE Trans. Vis. Comput. Graph. 2018, 25, 2927–2939. [Google Scholar] [CrossRef]
- Karadeniz, A.S.; Mallis, D.; Mejri, N.; Cherenkova, K.; Kacem, A.; Aouada, D. DAVINCI: A Single-Stage Architecture for Constrained CAD Sketch Inference. arXiv 2024, arXiv:2410.22857. [Google Scholar]
- Sarcar, M.; Rao, K.M.; Narayan, K.L. Computer Aided Design and Manufacturing; PHI Learning Pvt. Ltd.: Delhi, India, 2008. [Google Scholar]
- Radhakrishnan, P.; Subramanyan, S.; Raju, V. Cad/Cam/Cim; New Age International: Delhi, India, 2008. [Google Scholar]
- Ansaldi, S.; De Floriani, L.; Falcidieno, B. Geometric modeling of solid objects by using a face adjacency graph representation. ACM SIGGRAPH Comput. Graph. 1985, 19, 131–139. [Google Scholar] [CrossRef]
- Ming, H.; Yanzhu, D.; Jianguang, Z.; Yong, Z. A topological enabled three-dimensional model based on constructive solid geometry and boundary representation. Clust. Comput. 2016, 19, 2027–2037. [Google Scholar] [CrossRef]
- Pottmann, H.; Leopoldseder, S.; Hofer, M.; Steiner, T.; Wang, W. Industrial geometry: Recent advances and applications in CAD. Comput.-Aided Des. 2005, 37, 751–766. [Google Scholar] [CrossRef]
- Stroud, I.; Nagy, H. Solid Modelling and CAD Systems: How to Survive a CAD System; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- Lou, Y.; Li, X.; Chen, H.; Zhou, X. BRep-BERT: Pre-training boundary representation BERT with sub-graph node contrastive learning. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–25 October 2023; pp. 1657–1666. [Google Scholar]
- Morovič, L.; Milde, J. CAD model created from polygon mesh. Appl. Mech. Mater. 2015, 808, 233–238. [Google Scholar] [CrossRef]
- Botsch, M.; Pauly, M.; Kobbelt, L.; Alliez, P.; Lévy, B.; Bischoff, S.; Rössl, C. Geometric Modeling Based on Polygonal Meshes. Technical Report, INRIA. 2007. Available online: https://hal.inria.fr/inria-00186820 (accessed on 30 March 2025).
- Guo, J.; Ding, F.; Jia, X.; Yan, D.M. Automatic and high-quality surface mesh generation for CAD models. Comput.-Aided Des. 2019, 109, 49–59. [Google Scholar] [CrossRef]
- Ebert, D.S.; Musgrave, F.K.; Peachey, D.; Perlin, K.; Worley, S. Texturing and Modeling: A Procedural Approach; Elsevier: Amsterdam, The Netherlands, 2002. [Google Scholar]
- Safdar, M.; Jauhar, T.A.; Kim, Y.; Lee, H.; Noh, C.; Kim, H.; Lee, I.; Kim, I.; Kwon, S.; Han, S. Feature-based translation of CAD models with macro-parametric approach: Issues of feature mapping, persistent naming, and constraint translation. J. Comput. Des. Eng. 2020, 7, 603–614. [Google Scholar] [CrossRef]
- Howard, E.; Musto, J. Introduction to Solid Modeling Using Solidworks; McGraw-Hill Inc.: New York, NY, USA, 2005. [Google Scholar]
- Shapiro, V.; Vossler, D.L. Construction and optimization of CSG representations. Comput.-Aided Des. 1991, 23, 4–20. [Google Scholar] [CrossRef]
- Yu, F.; Chen, Q.; Tanveer, M.; Mahdavi Amiri, A.; Zhang, H. D2CSG: Unsupervised learning of compact CSG trees with dual complements and dropouts. Adv. Neural Inf. Process. Syst. 2023, 36, 22807–22819. [Google Scholar]
- Gryaditskaya, Y.; Sypesteyn, M.; Hoftijzer, J.W.; Pont, S.C.; Durand, F.; Bousseau, A. OpenSketch: A richly-annotated dataset of product design sketches. ACM Trans. Graph. 2019, 38, 232-1. [Google Scholar] [CrossRef]
- Liao, R.; Li, Y.; Song, Y.; Wang, S.; Hamilton, W.; Duvenaud, D.K.; Urtasun, R.; Zemel, R. Efficient graph generation with graph recurrent attention networks. Adv. Neural Inf. Process. Syst. 2019, 32, 4257–4267. [Google Scholar]
- Seff, A.; Ovadia, Y.; Zhou, W.; Adams, R.P. Sketchgraphs: A large-scale dataset for modeling relational geometry in computer-aided design. arXiv 2020, arXiv:2007.08506. [Google Scholar]
- Yi, L.; Kim, V.G.; Ceylan, D.; Shen, I.C.; Yan, M.; Su, H.; Lu, C.; Huang, Q.; Sheffer, A.; Guibas, L. A scalable active framework for region annotation in 3D shape collections. ACM Trans. Graph. ToG 2016, 35, 1–12. [Google Scholar] [CrossRef]
- Dupont, E.; Cherenkova, K.; Kacem, A.; Ali, S.A.; Arzhannikov, I.; Gusev, G.; Aouada, D. Cadops-net: Jointly learning CAD operation types and steps from boundary-representations. In Proceedings of the 2022 International Conference on 3D Vision (3DV), Prague, Czech Republic, 12–15 September 2022; pp. 114–123. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
- Mo, K.; Zhu, S.; Chang, A.X.; Yi, L.; Tripathi, S.; Guibas, L.J.; Su, H. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 909–918. [Google Scholar]
- Colligan, A.R.; Robinson, T.T.; Nolan, D.C.; Hua, Y.; Cao, W. Hierarchical CADNet: Learning from B-reps for machining feature recognition. Comput.-Aided Des. 2022, 147, 103226. [Google Scholar] [CrossRef]
- MFCAD: A Dataset of 3D CAD Models with Machining Feature Labels. 2021. Available online: https://github.com/hducg/MFCAD (accessed on 14 June 2021).
- Willis, K.D.; Jayaraman, P.K.; Chu, H.; Tian, Y.; Li, Y.; Grandi, D.; Sanghi, A.; Tran, L.; Lambourne, J.G.; Solar-Lezama, A.; et al. JoinABLe: Learning bottom-up assembly of parametric CAD joints. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 15849–15860. [Google Scholar]
- Cazals, F.; Pouget, M. Estimating differential quantities using polynomial fitting of osculating jets. Comput. Aided Geom. Des. 2005, 22, 121–146. [Google Scholar] [CrossRef]
- Achlioptas, P.; Diamanti, O.; Mitliagkas, I.; Guibas, L. Learning representations and generative models for 3D point clouds. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 40–49. [Google Scholar]
- Groueix, T.; Fisher, M.; Kim, V.G.; Russell, B.C.; Aubry, M. A papier-mâché approach to learning 3D surface generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 216–224. [Google Scholar]
- Chen, Z.; Zhang, H. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 5939–5948. [Google Scholar]
The last five columns indicate the data representation formats provided by each dataset.

Dataset | Year | Model Scale | Included Features | B-Rep | Mesh | Seq. | CSG | Sketch |
---|---|---|---|---|---|---|---|---|
ShapeNetCore | 2015 | 51,300 | Category labels | – | ✓ | – | – | – |
CSGNet Synthetic Dataset | 2018 | 23,500 K | Shapes given as program expressions | – | – | – | ✓ | – |
ABC | 2019 | 1,000,000+ | Parametric descriptions | ✓ | ✓ | – | – | – |
SketchGraphs | 2020 | 15,000,000 sketches | Geometric constraint graph annotations | – | – | ✓ | – | ✓ |
CC3D | 2020 | 50,000+ | No category restrictions | ✓ | ✓ | – | – | – |
DeepCAD | 2021 | 178,238 | CAD operation command sequence annotations | – | – | ✓ | – | – |
Fusion 360 Gallery reconstruction | 2021 | 8625 | Build history | ✓ | ✓ | ✓ | – | ✓ |
MFCAD++ | 2022 | 59,655 | Machining feature annotations | ✓ | – | – | – | – |
CC3D-Ops | 2022 | 37,000+ | Operation and step labels | ✓ | ✓ | ✓ | – | – |
Fusion 360 Gallery assembly | 2022 | 8251 assemblies / 154,468 parts | Assembly information labels | ✓ | ✓ | – | – | – |
Furniture Dataset | 2024 | 6171 | Category labels | ✓ | – | – | – | – |
Dataset | Dataset Download URL |
---|---|
ShapeNetCore | http://www.shapenet.org (accessed on 30 March 2025) |
CSGNet Synthetic Dataset | https://hippogriff.github.io/CSGNet (accessed on 30 March 2025) |
ABC | https://deep-geometry.github.io/abc-dataset (accessed on 30 March 2025) |
CC3D | Download link not provided in the original paper |
CC3D-ops | https://cvi2.uni.lu/cc3d-ops/ (accessed on 30 March 2025) |
Fusion 360 Gallery Reconstruction | https://github.com/AutodeskAILab/Fusion360GalleryDataset (accessed on 30 March 2025) |
MFCAD++ | https://pure.qub.ac.uk/en/datasets/mfcad-dataset (accessed on 30 March 2025) |
DeepCAD | https://github.com/ChrisWu1997/DeepCAD (accessed on 30 March 2025) |
Fusion 360 Gallery Assembly | https://github.com/AutodeskAILab/Fusion360GalleryDataset (accessed on 30 March 2025) |
SketchGraphs | https://github.com/PrincetonLIPS/SketchGraphs (accessed on 30 March 2025) |
Furniture Dataset | https://github.com/samxuxiang/BrepGen (accessed on 30 March 2025) |
Method | Publication | Dataset | Seg. ↑ | Norm. ↓ | B.B. ↑ | E.A. ↓ | E.C. ↓ | Fit Cyl ↓ | Fit Glob ↓ |
---|---|---|---|---|---|---|---|---|---|
Point2Cyl [30] | CVPR’22 | Fusion Gallery | 0.736 | 8.547 | 0.911 | 8.137 | 0.0525 | 0.0704 | 0.0305 |
H.V.+NJ [121] | - | Fusion Gallery | 0.409 | 12.264 | 0.595 | 58.868 | 0.1248 | 0.1492 | 0.0683 |
Point2Cyl | CVPR’22 | DeepCAD | 0.833 | 8.563 | 0.919 | 7.923 | 0.0267 | 0.0758 | 0.0308 |
H.V.+NJ | - | DeepCAD | 0.540 | 13.573 | 0.577 | 59.785 | 0.0435 | 0.1664 | 0.0459 |
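In the table above, Seg. scores per-point agreement between predicted and ground-truth extrusion segments, while Norm., E.A., and E.C. report errors of the predicted per-point normals and of the extrusion axes and centers. A minimal, generic sketch of two such measures is given below, assuming numpy arrays of per-point labels and unit normals; it is illustrative only, since the cited evaluations typically match predicted segments to ground-truth segments (e.g., via the Hungarian algorithm) before scoring.

```python
import numpy as np

def mean_iou(pred_labels, gt_labels):
    """Mean intersection-over-union of per-point segment labels (a 'Seg.'-style score).

    Assumes predicted labels have already been matched to the ground-truth segment ids.
    """
    ious = []
    for lbl in np.unique(gt_labels):
        pred_m, gt_m = pred_labels == lbl, gt_labels == lbl
        union = np.logical_or(pred_m, gt_m).sum()
        if union > 0:
            ious.append(np.logical_and(pred_m, gt_m).sum() / union)
    return float(np.mean(ious))

def normal_angle_error_deg(pred_normals, gt_normals):
    """Mean angular deviation (degrees) between predicted and reference unit normals."""
    cos = np.clip(np.abs(np.sum(pred_normals * gt_normals, axis=1)), 0.0, 1.0)  # sign-invariant
    return float(np.degrees(np.arccos(cos)).mean())
```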
Method | Dataset | CD ↓ | HD ↓ | ECD ↓ | NC ↑ | IR (%) ↓ |
---|---|---|---|---|---|---|
DeepCAD [10] | DeepCAD [10] | 4.25 | 39.25 | 19.33 | 0.49 | 7.14 |
HNC-CAD [81] | DeepCAD [10] | 1.09 | 20.23 | 5.94 | 0.75 | 0.32 |
Point2Cyl [30] | DeepCAD [10] | 1.00 | 20.93 | 20.45 | 0.73 | 0.0 |
SECAD-Net [26] | DeepCAD [10] | 0.42 | 9.96 | 5.54 | 0.73 | 0.38 |
PS-CAD [49] | DeepCAD [10] | 0.21 | 8.66 | 4.65 | 0.89 | 0.43 |
HNC-CAD [81] | Fusion360 [9] | 1.38 | 20.61 | 9.24 | 0.52 | 0.33 |
SECAD-Net [26] | Fusion360 [9] | 0.69 | 12.76 | 5.15 | 0.64 | 3.82 |
PS-CAD [49] | Fusion360 [9] | 0.57 | 12.0 | 5.02 | 0.67 | 1.51 |
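CD, HD, and NC in the table above are point-set metrics computed between points sampled on the reconstructed and reference surfaces (ECD applies the same idea to points sampled along sharp edges, and IR counts outputs that are not valid solids). A minimal sketch, assuming both shapes have already been sampled into numpy arrays; the cited papers may square or rescale the distances, so absolute values are not directly comparable.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_hausdorff(pts_a, pts_b):
    """Symmetric Chamfer and Hausdorff distances between two (N, 3) point sets."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)   # nearest-neighbour distance from each point of A to B
    d_ba, _ = cKDTree(pts_a).query(pts_b)
    chamfer = d_ab.mean() + d_ba.mean()
    hausdorff = max(d_ab.max(), d_ba.max())
    return chamfer, hausdorff

def normal_consistency(pts_a, nrm_a, pts_b, nrm_b):
    """Mean absolute cosine between normals of nearest-neighbour pairs (normals assumed unit length)."""
    _, idx_ab = cKDTree(pts_b).query(pts_a)
    _, idx_ba = cKDTree(pts_a).query(pts_b)
    c_ab = np.abs(np.sum(nrm_a * nrm_b[idx_ab], axis=1)).mean()
    c_ba = np.abs(np.sum(nrm_b * nrm_a[idx_ba], axis=1)).mean()
    return 0.5 * (c_ab + c_ba)
```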
Method | Publication | Precision (%) | Recall (%) | F1 Score (%) |
---|---|---|---|---|
Hu et al. [31] | CVPR’23 | 84.12 | 82.05 | 82.62 |
Sakurai and Shin [71,74] | - | 99.64 | 26.47 | 39.31 |
Method | Parameters | Bits per Vertex | Bits per Sketch | Unique% | Valid% | Novel% |
---|---|---|---|---|---|---|
CurveGen [32] | 2,155,542 | 1.75/0.20 | 176.69/30.64 | 99.90 | 81.50 | 90.90 |
TurtleGen [32] | 2,690,310 | 2.27 | 54.54 | 86.40 | 42.90 | 80.60 |
SketchGraphs [113] | 18,621,560 | - | 99.38 | 76.20 | 65.80 | 69.10 |
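Bits per vertex and bits per sketch report the negative log-likelihood that each generative model assigns to held-out sketches, normalised per vertex or per whole sketch. If a model returns its log-likelihood in nats, the conversion is a division by ln 2, as in the hypothetical sketch below (the paired values such as 1.75/0.20 follow the cited papers' own reporting conventions and are not reproduced here).

```python
import math

def bits_per_vertex(nll_nats, num_vertices):
    """Convert a total negative log-likelihood (in nats) to bits per vertex."""
    return nll_nats / math.log(2) / num_vertices

def bits_per_sketch(nll_nats):
    """Convert a total negative log-likelihood (in nats) to bits for the whole sketch."""
    return nll_nats / math.log(2)

# Example: a sketch with 20 vertices and a total NLL of 24.5 nats.
print(bits_per_vertex(24.5, 20))  # ~1.77 bits per vertex
print(bits_per_sketch(24.5))      # ~35.35 bits for the whole sketch
```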
Method | Publication | COV% ↑ | MMD ↓ | JSD ↓ | Novel% ↑ | Unique% ↑ | Valid% ↑ |
---|---|---|---|---|---|---|---|
BrepGen [84] | CVPR’24 | 71.26 | 1.04 | 0.09 | 99.8 | 99.7 | 62.9 |
DeepCAD [10] | ICCV’21 | 65.46 | 1.29 | 1.67 | 87.4 | 89.3 | 46.1 |
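COV (coverage), MMD (minimum matching distance), and JSD (Jensen–Shannon divergence) are standard metrics for comparing a set of generated shapes against a reference set via a pairwise shape distance such as the Chamfer distance; JSD additionally compares voxel-occupancy distributions and is omitted below. A minimal sketch of COV and MMD, assuming the pairwise distance matrix has already been computed (illustrative; sample sizes and distance scaling follow the cited papers).

```python
import numpy as np

def coverage_and_mmd(dist):
    """dist[i, j]: distance (e.g., Chamfer) between generated shape i and reference shape j.

    COV: fraction of reference shapes that are the nearest neighbour of at least one
    generated shape.  MMD: mean, over reference shapes, of the distance to the closest
    generated shape.
    """
    nearest_ref = dist.argmin(axis=1)              # best reference match per generated shape
    cov = len(np.unique(nearest_ref)) / dist.shape[1]
    mmd = dist.min(axis=0).mean()                  # closest generated shape per reference shape
    return cov, mmd

# Toy usage with a random 100 (generated) x 100 (reference) distance matrix.
rng = np.random.default_rng(0)
cov, mmd = coverage_and_mmd(rng.random((100, 100)))
```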
Method | ACCcmd ↑ | ACCparam ↑ | Median CD ↓ | Invalid Ratio ↓ |
---|---|---|---|---|
DeepCAD [10] + Aug | 99.50 | 97.98 | 0.752 | 2.72 |
DeepCAD [10] | 99.36 | 97.47 | 0.787 | 3.30 |
Alt-ArcMid | 99.34 | 97.31 | 0.790 | 3.26 |
Alt-Trans | 99.33 | 97.56 | 0.792 | 3.30 |
Alt-Rel | 99.50 | 97.66 | 0.863 | 3.51 |
Alt-Regr | - | - | 2.142 | 4.32 |
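ACCcmd and ACCparam in the ablation above measure, respectively, how often the predicted CAD command type matches the ground truth and how often the quantised command parameters fall within a small tolerance of the reference values, while the invalid ratio counts generated sequences that cannot be rebuilt into a valid solid. A minimal sketch under those assumptions (illustrative only; the exact parameter masking and tolerance follow the original DeepCAD evaluation code).

```python
import numpy as np

def command_accuracy(pred_cmds, gt_cmds):
    """Fraction of sequence positions whose predicted command type matches the ground truth."""
    return float(np.mean(pred_cmds == gt_cmds))

def parameter_accuracy(pred_params, gt_params, pred_cmds, gt_cmds, tol=1):
    """Parameter accuracy, counted only where the command type is already correct.

    Parameters are assumed quantised to integers; a prediction counts as correct when it lies
    within `tol` of the ground-truth value.  Unused parameter slots are marked with -1.
    """
    correct_cmd = pred_cmds == gt_cmds
    close = np.abs(pred_params - gt_params) <= tol
    valid = correct_cmd[:, None] & (gt_params >= 0)   # ignore unused parameter slots
    return float(close[valid].mean())
```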