A Review of Optical-Based Three-Dimensional Reconstruction and Multi-Source Fusion for Plant Phenotyping
Abstract
1. Introduction
2. Three-Dimensional Reconstruction Technology
2.1. Active 3D Reconstruction Techniques
2.1.1. Structured Light Method
2.1.2. Time-of-Flight (TOF)
2.1.3. Laser Scanning Method
2.1.4. Wavelength Selection in Active 3D Reconstruction
2.2. Passive Three-Dimensional Reconstruction Techniques
2.2.1. Stereo Vision Method
2.2.2. Structure from Motion (SfM)
2.3. Deep Learning-Based 3D Reconstruction Technology
2.3.1. Neural Radiance Field (NeRF)
2.3.2. 3D Reconstruction Method Based on Convolutional Neural Networks (CNNs)
2.3.3. 3D Gaussian Splatting (3DGS)
3. Exploration of Multi-Technology Fusion for 3D Reconstruction of Plants
3.1. Multi-Sensor Fusion Techniques
3.1.1. Sensor Types and Functions
3.1.2. Fusion Methods
3.1.3. Advantages and Challenges of Multi-Sensor Fusion Techniques
- (1) Advantages
- (2) Challenges
3.2. Physical Modeling Combined with 3D Reconstruction
3.2.1. Plant Physical Modeling
- (1) Growth Model
- (2) Mechanical Model
3.2.2. Combined Method and Effect
- (1) Combination Method
- (2) Effectiveness Enhancement
3.3. Dynamic Scene Reconstruction Techniques
3.3.1. Impact of Dynamic Factors and Response Strategies
- (1) Wind Impact and Coping Strategies
- (2) Impact of Plant Growth and Motion and Coping Strategies
3.3.2. Applications of High-Speed Cameras, Event Cameras, and Other Equipment
- (1) High-Speed Camera Equipment
- (2) Event Cameras
- (3) Algorithm Applications
3.3.3. Time Dimension Information Processing
4. Future Perspectives
4.1. Technology Convergence and Innovative Applications
4.2. Intelligence and Automation
4.3. Popularization and Application of Precision Agriculture
4.4. Data Integration and Decision Support Through Cross-Domain Fusion
5. Summary
Funding
Conflicts of Interest
References
- Ju, J.; Zhang, M.; Zhang, Y.; Chen, Q.; Gao, Y.; Yu, Y.; Wu, Z.; Hu, Y.; Liu, X.; Song, J.; et al. Development of Lettuce Growth Monitoring Model Based on Three-Dimensional Reconstruction Technology. Agronomy 2024, 15, 29.
- Arshad, M.A.; Jubery, T.; Afful, J.; Jignasu, A.; Balu, A.; Ganapathysubramanian, B.; Sarkar, S.; Krishnamurthy, A. Evaluating Neural Radiance Fields for 3D Plant Geometry Reconstruction in Field Conditions. Plant Phenomics 2024, 6, 0235.
- Yang, D.; Yang, H.; Liu, D.; Wang, X. Research on automatic 3D reconstruction of plant phenotype based on Multi-View images. Comput. Electron. Agric. 2024, 220, 108866.
- Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D reconstruction of wheat plants by integrating point cloud data and virtual design optimization. Agriculture 2024, 14, 391.
- Zhang, H.; Wang, L.; Jin, X.; Bian, L.; Ge, Y. High-throughput phenotyping of plant leaf morphological, physiological, and biochemical traits on multiple scales using optical sensing. Crop J. 2023, 11, 1303–1318.
- Miao, Y.; Wang, L.; Peng, C.; Li, H.; Li, X.; Zhang, M. Banana plant counting and morphological parameters measurement based on terrestrial laser scanning. Plant Methods 2022, 18, 66.
- Stausberg, L.; Jost, B.; Klingbeil, L.; Kuhlmann, H. A 3D Surface Reconstruction Pipeline for Plant Phenotyping. Remote Sens. 2024, 16, 4720.
- Guan, H.; Liu, M.; Ma, X.; Yu, S. Three-dimensional reconstruction of soybean canopies using multisource imaging for phenotyping analysis. Remote Sens. 2018, 10, 1206.
- Hu, K.; Ying, W.; Pan, Y.; Kang, H.; Chen, C. High-fidelity 3D reconstruction of plants using Neural Radiance Fields. Comput. Electron. Agric. 2024, 220, 108848.
- Chen, H.; Liu, S.; Wang, L.; Wang, C.; Xiang, X.; Zhao, Y.; Lan, Y. Automated 3D Reconstruction and Verification of Single Crop Plants Based on Kinect V3. J. Agric. Eng. 2022, 38, 215–223.
- Gao, J.; Chen, L.; Shen, Q.; Cao, X.; Yao, Y. Progress in Differentiable Rendering Based on 3D Gaussian Splatting Technology. Prog. Laser Optoelectron. 2024, 61, 1–23.
- Cui, B.; Tao, W.; Zhao, H. High-precision 3D reconstruction for small-to-medium-sized objects utilizing line-structured light scanning: A review. Remote Sens. 2021, 13, 4457.
- Zhu, J.; Zhai, R.; Ren, H.; Xie, K.; Du, A.; He, X.; Cui, C.; Wang, Y.; Ye, J.; Wang, J.; et al. Crops3D: A diverse 3D crop dataset for realistic perception and segmentation toward agricultural applications. Sci. Data 2024, 11, 1438.
- Yadav, N.; Sidana, N. Precision Agriculture Technologies: Analysing the Use of Advanced Technologies, Such as Drones, Sensors, and GPS, in Precision Agriculture for Optimizing Resource Management, Crop Monitoring, and Yield Prediction. J. Adv. Zool. 2023, 44, 255.
- Choy, C.B.; Xu, D.; Gwak, J.; Chen, K.; Savarese, S. 3D-R2N2: A unified approach for single and multi-view 3D object reconstruction. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Part VIII, pp. 628–644.
- Xiao, S.; Fei, S.; Ye, Y.; Xu, D.; Xie, Z.; Bi, K.; Guo, Y.; Li, B.; Zhang, R.; Ma, Y. 3D reconstruction and characterization of cotton bolls in situ based on UAV technology. ISPRS J. Photogramm. Remote Sens. 2024, 209, 101–116.
- Harandi, N.; Vandenberghe, B.; Vankerschaver, J.; Depuydt, S.; Van Messem, A. How to make sense of 3D representations for plant phenotyping: A compendium of processing and analysis techniques. Plant Methods 2023, 19, 60.
- Nasiri, A.; Omid, M.; Taheri-Garavand, A.; Jafari, A. Deep learning-based precision agriculture through weed recognition in sugar beet fields. Sustain. Comput. Inform. Syst. 2022, 35, 100759.
- Jiang, Y.; Li, C.; Paterson, A.H. High throughput phenotyping of cotton plant height using depth images under field conditions. Comput. Electron. Agric. 2016, 130, 57–68.
- Yang, Y.; Wang, X.; Ren, M. Data Fusion Method for Photometric-Structured Light Measurement on Metal Surfaces Based on Multilayer Perceptron. Acta Opt. Sin. 2023, 43, 200–211.
- Meza, J.; Vargas, R.; Romero, L.A.; Zhang, S.; Marrugo, A.G. What is the best triangulation approach for a structured light system? In Proceedings of the Dimensional Optical Metrology and Inspection for Practical Applications IX, Online, 27 April–9 May 2020; Volume 11397, pp. 65–74.
- Huang, H.; Liu, G.; Deng, L.; Song, T.; Qin, F. Multi-line laser 3D reconstruction method based on spatial quadric surface and geometric estimation. Sci. Rep. 2024, 14, 23589.
- Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160.
- Zhou, W.; Jia, Y.; Fan, L.; Fan, G.; Lu, F. A MEMS-based real-time structured light 3-D measuring architecture on FPGA. J. Real-Time Image Process. 2024, 21, 98.
- Feng, Q.; Cheng, W.; Yang, Q.; Xun, Y.; Wang, X. Research on Recognition and Localization of Overlapping Tomato Fruits Based on Line Structured Light Vision. J. China Agric. Univ. 2015, 20, 100–106.
- Lv, H. Research on 3D Reconstruction and Segmentation of Plants Based on Multi-View Images. Master’s Thesis, Chongqing Jiaotong University, Chongqing, China, 2022.
- Chen, X.; Peng, Y.; Yu, S.; Hu, D. Measurement of Shape Index of Spherical Fruits Based on Binocular Structured Light 3D Reconstruction. Trans. Chin. Soc. Agric. Eng. 2024, 40, 187–194.
- Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247.
- Si, X. Application of Depth Image Processing in Precision Agriculture. Master’s Thesis, University of Science and Technology of China, Hefei, China, 2017.
- Foix, S.; Alenya, G.; Torras, C. Lock-in time-of-flight (ToF) cameras: A survey. IEEE Sens. J. 2011, 11, 1917–1926.
- Hansard, M.; Lee, S.; Choi, O.; Horaud, R.P. Time-of-Flight Cameras: Principles, Methods and Applications; Springer Science & Business Media: London, UK, 2012.
- Kazmi, W.; Foix, S.; Alenyà, G.; Andersen, H.J. Indoor and outdoor depth imaging of leaves with time-of-flight and stereo vision sensors: Analysis and comparison. ISPRS J. Photogramm. Remote Sens. 2014, 88, 128–146.
- Xia, C.; Shi, Y.; Yin, W. 3D Plant Point Cloud Data Acquisition and Denoising Using ToF Depth Sensing. Trans. Chin. Soc. Agric. Eng. 2018, 34, 168–174.
- Yang, C.; Kang, J.; Eom, D.S. Enhancing ToF Sensor Precision Using 3D Models and Simulation for Vision Inspection in Industrial Mobile Robots. Appl. Sci. 2024, 14, 4595.
- Xu, A.; Rao, L.; Fan, G.; Chen, N. Fast and high accuracy 3D point cloud registration for automatic reconstruction from laser scanning data. IEEE Access 2023, 11, 42497–42509.
- Ebrahim, M.A.B. 3D laser scanners’ techniques overview. Int. J. Sci. Res. 2015, 4, 323–331.
- Farhan, S.M.; Yin, J.; Chen, Z.; Memon, M.S. A comprehensive review of LiDAR applications in crop management for precision agriculture. Sensors 2024, 24, 5409.
- Muralikrishnan, B. Performance evaluation of terrestrial laser scanners—A review. Meas. Sci. Technol. 2021, 32, 072001.
- Suresh, A.; Mathew, A.; Dhanish, P. Factors influencing the measurement using 3D laser scanner: A designed experimental study. J. Mech. Sci. Technol. 2024, 38, 5605–5615.
- Lau, L.; Quan, Y.; Wan, J.; Zhou, N.; Wen, C.; Qian, N.; Jing, F. An autonomous ultra-wide band-based attitude and position determination technique for indoor mobile laser scanning. ISPRS Int. J. Geo-Inf. 2018, 7, 155.
- Cao, L.; Coops, N.C.; Sun, Y.; Ruan, H.; Wang, G.; Dai, J.; She, G. Estimating canopy structure and biomass in bamboo forests using airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2019, 148, 114–129.
- Micheletto, M.J.; Chesñevar, C.I.; Santos, R. Methods and applications of 3D ground crop analysis using LiDAR technology: A survey. Sensors 2023, 23, 7212.
- Freißmuth, L.; Mattamala, M.; Chebrolu, N.; Schaefer, S.; Leutenegger, S.; Fallon, M. Online tree reconstruction and forest inventory on a mobile robotic system. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 11765–11772.
- Debnath, S.; Paul, M.; Debnath, T. Applications of LiDAR in agriculture and future research directions. J. Imaging 2023, 9, 57.
- Li, Y.; Zhao, X.; Schwertfeger, S. Detection and Utilization of Reflections in LiDAR Scans through Plane Optimization and Plane SLAM. Sensors 2024, 24, 4794.
- Bleier, M.; Nüchter, A. Low-cost 3D laser scanning in air or water using self-calibrating structured light. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 105–112.
- Xie, D.; Lü, C.; Zu, M.; Cheng, H. Research Progress on Simulated Materials of Visible–Near Infrared Reflectance Spectra of Green Vegetation. Spectrosc. Spectr. Anal. 2021, 41, 1032–1038.
- Reddy, P.; Guthridge, K.M.; Panozzo, J.; Ludlow, E.J.; Spangenberg, G.C.; Rochfort, S.J. Near-infrared hyperspectral imaging pipelines for pasture seed quality evaluation: An overview. Sensors 2022, 22, 1981.
- Golla, T.; Kneiphof, T.; Kuhlmann, H.; Weinmann, M.; Klein, R. Temporal upsampling of point cloud sequences by optimal transport for plant growth visualization. Comput. Graph. Forum 2020, 39, 167–179.
- Ando, R.; Ozasa, Y.; Guo, W. Robust surface reconstruction of plant leaves from 3D point clouds. Plant Phenomics 2021.
- Zhou, L.; Wu, G.; Zuo, Y.; Chen, X.; Hu, H. A comprehensive review of vision-based 3D reconstruction methods. Sensors 2024, 24, 2314.
- Lin, H.Y.; Tsai, C.L.; Tran, V.L. Depth measurement based on stereo vision with integrated camera rotation. IEEE Trans. Instrum. Meas. 2021, 70, 1–10.
- Hou, C.; Xu, J.; Tang, Y.; Zhuang, J.; Tan, Z.; Chen, W.; Wei, S.; Huang, H.; Fang, M. Detection and localization of citrus picking points based on binocular vision. Precis. Agric. 2024, 25, 2321–2355.
- Tang, Y.; Zhou, H.; Wang, H.; Zhang, Y. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision. Expert Syst. Appl. 2023, 211, 118573.
- Dandrifosse, S.; Bouvry, A.; Leemans, V.; Dumont, B.; Mercatoris, B. Imaging wheat canopy through stereo vision: Overcoming the challenges of the laboratory to field transition for morphological features extraction. Front. Plant Sci. 2020, 11, 96.
- Ge, L.; Yang, Z.; Sun, Z.; Zhang, G.; Zhang, M.; Zhang, K.; Zhang, C.; Tan, Y.; Li, W. A method for broccoli seedling recognition in natural environment based on binocular stereo vision and Gaussian mixture model. Sensors 2019, 19, 1132.
- Yang, X.; Zhong, J.; Lin, K.; Wu, J.; Chen, J.; Si, H. Progress in Binocular Stereo Vision Technology and Its Application in Smart Agriculture. Trans. Chin. Soc. Agric. Eng. 2025, 41, 27–39.
- Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Comput. Electron. Agric. 2023, 205, 107584.
- Liang, X.; Zhou, F.; Chen, H.; Liang, B.; Xu, X.; Yang, W. 3D Reconstruction and Trait Extraction of Maize Plants Based on Structure from Motion. J. Agric. Mach. 2020, 51, 209–219.
- Gao, L.; Zhao, Y.; Han, J.; Liu, H. Research on multi-view 3D reconstruction technology based on SFM. Sensors 2022, 22, 4366.
- Zhao, Y.; He, S. Improvement of Feature Point Detection and Matching Methods in Incremental SFM. Laser J. 2020, 41, 59–66.
- Wei, K.; Liu, S.; Chen, Q.; Huang, S.; Zhong, M.; Zhang, J.; Sun, H.; Wu, K.; Fan, S.; Ye, Z.; et al. Fast Multi-View 3D reconstruction of seedlings based on automatic viewpoint planning. Comput. Electron. Agric. 2024, 218, 108708.
- Arief, M.A.A.; Nugroho, A.P.; Putro, A.W.; Sutiarso, L.; Cho, B.K.; Okayasu, T. Development and Application of a Low-Cost 3-Dimensional (3D) Reconstruction System Based on the Structure from Motion (SfM) Approach for Plant Phenotyping. J. Biosyst. Eng. 2024, 49, 326–336.
- Brandolini, F.; Patrucco, G. Structure-from-motion (SFM) photogrammetry as a non-invasive methodology to digitalize historical documents: A highly flexible and low-cost approach? Heritage 2019, 2, 2124–2136.
- Xin, C.; Xujia, Q.; Xiaogang, X. 3D Reconstruction Method for Field-grown Soybean Plants Based on SfM and Instant-NGP. Trans. Chin. Soc. Agric. Eng. 2025, 41, 171–180.
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
- Gao, K.; Gao, Y.; He, H.; Lu, D.; Xu, L.; Li, J. NeRF: Neural radiance field in 3D vision, a comprehensive review. arXiv 2022, arXiv:2210.00379.
- Wu, S.; Hu, C.; Tian, B.; Huang, Y.; Yang, S.; Li, S.; Xu, S. A 3D reconstruction platform for complex plants using OB-NeRF. Front. Plant Sci. 2025, 16, 1449626.
- Zhu, L.; Jiang, W.; Sun, B.; Chai, M.; Li, S.; Ding, Y. 3D Modeling and Phenotypic Parameter Acquisition of Seedling Vegetables Based on Neural Radiance Fields. J. Agric. Mach. 2024, 55, 184–192.
- Martin-Brualla, R.; Radwan, N.; Sajjadi, M.S.; Barron, J.T.; Dosovitskiy, A.; Duckworth, D. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7210–7219.
- Flynn, J.; Neulander, I.; Philbin, J.; Snavely, N. DeepStereo: Learning to predict new views from the world’s imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 5515–5524.
- Wang, J.; Zhen, Z.; Zhao, Y.; Ma, Y.; Zhao, Y. 3D-CNN with Multi-Scale Fusion for Tree Crown Segmentation and Species Classification. Remote Sens. 2024, 16, 4544.
- Zhang, J.; Zheng, C. 3D Reconstruction Network from Single Image Based on Multi-scale CNN-RNN. Appl. Res. Comput. 2020, 37, 3487–3491.
- Li, Y.; Si, S.; Liu, X.; Zou, L.; Wu, W.; Liu, X.; Zhang, L. Three-dimensional reconstruction of cotton plant with internal canopy occluded structure recovery. Comput. Electron. Agric. 2023, 215, 108370.
- Hou, L.J.; Shen, Y.S.; Liu, X.C.; Chen, X.M.; Jia, R.C.; Fan, G.W.; Shen, C. 3D Gaussian Splatting Super-Resolution Visual Scene Construction Algorithm. China Test 2024, 50, 13–20.
- Kerbl, B.; Kopanas, G.; Leimkühler, T.; Drettakis, G. 3D Gaussian splatting for real-time radiance field rendering. ACM Trans. Graph. 2023, 42, 139.
- Chen, J.; Li, Q.; Jiang, D. From Images to Loci: Applying 3D Deep Learning to Enable Multivariate and Multitemporal Digital Phenotyping and Mapping the Genetics Underlying Nitrogen Use Efficiency in Wheat. Plant Phenomics 2024, 6, 0270.
- Jiang, L.; Sun, J.; Chee, P.W.; Li, C.; Fu, L. Cotton3DGaussians: Multiview 3D Gaussian Splatting for boll mapping and plant architecture analysis. Comput. Electron. Agric. 2025, 234, 110293.
- Wang, H.; Wang, J.; Agapito, L. Morpheus: Neural dynamic 360° surface reconstruction from monocular RGB-D video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 20965–20976.
- Li, W.; Zhu, D.; Wang, Q. A single view leaf reconstruction method based on the fusion of ResNet and differentiable render in plant growth digital twin system. Comput. Electron. Agric. 2022, 193, 106712.
- Wang, L.; Hu, Y.; Jiang, H.; Shi, W.; Ni, X. Monitor geomatical information of plant by reconstruction 3D model based on Kinect V2. In Proceedings of the 2018 ASABE Annual International Meeting, American Society of Agricultural and Biological Engineers, Detroit, MI, USA, 29 July–1 August 2018; p. 1.
- Atencia Payares, L.K.; Gomez-del Campo, M.; Tarquis, A.M.; García, M. Thermal imaging from UAS for estimating crop water status in a Merlot vineyard in semi-arid conditions. Irrig. Sci. 2025, 43, 87–103.
- Yu, S.; Liu, X.; Tan, Q.; Wang, Z.; Zhang, B. Sensors, systems and algorithms of 3D reconstruction for smart agriculture and precision farming: A review. Comput. Electron. Agric. 2024, 224, 109229.
- Si, K.; Niu, C. Triple Multimodal Image Fusion Algorithm Based on Hybrid Differential Convolution and Efficient Visual Transformer Network. Infrared Laser Eng. 2024, 53, 331–345.
- Tran, X.T.; Do, T.; Pal, N.R.; Jung, T.P.; Lin, C.T. Multimodal fusion for anticipating human decision performance. Sci. Rep. 2024, 14, 13217.
- Xiang, S.; Zeng, Z.; Jiang, J.; Zhang, D.; Liu, N. An Efficient Dense Reconstruction Algorithm from LiDAR and Monocular Camera. Symmetry 2024, 16, 1496.
- Cai, H.; Pang, W.; Chen, X.; Wang, Y.; Liang, H. A novel calibration board and experiments for 3D LiDAR and camera calibration. Sensors 2020, 20, 1130.
- Lu, J.; Lan, Y.; Wu, Z.; Liang, X.; Chang, H.; Deng, X.; Wu, Z.; Tang, Y. Optimization of ICP Point Cloud Registration for Plant 3D Modeling. Trans. Chin. Soc. Agric. Eng. 2022, 38, 183–191.
- Wang, M.; Liu, H.; Tang, H.; Zhang, M.; Shen, X. IFNet: Data-driven multisensor estimate fusion with unknown correlation in sensor measurement noises. Inf. Fusion 2025, 115, 102750.
- Sepaskhah, A.R.; Fahandezh-Saadi, S.; Zand-Parsa, S. Logistic model application for prediction of maize yield under water and nitrogen management. Agric. Water Manag. 2011, 99, 51–57.
- Boxiang, X.; Hong, Q. Kinematic Analysis Method for Physical Modeling and Parameter Determination of Corn Leaf. J. Comput.-Aided Des. Comput. Graph. 2019, 31, 718–725.
- Yi, L.; Li, H.; Guo, J.; Deussen, O.; Zhang, X. Tree growth modelling constrained by growth equations. Comput. Graph. Forum 2018, 37, 239–253.
- Chen, H.; Liu, S.; Wang, C.; Wang, C.; Gong, K.; Li, Y.; Lan, Y. Point cloud completion of plant leaves under occlusion conditions based on deep learning. Plant Phenomics 2023, 5, 0117.
- Vinodkumar, P.K.; Karabulut, D.; Avots, E.; Ozcinar, C.; Anbarjafari, G. Deep learning for 3D reconstruction, augmentation, and registration: A review paper. Entropy 2024, 26, 235.
- Burgess, A.J.; Retkute, R.; Preston, S.P.; Jensen, O.E.; Pound, M.P.; Pridmore, T.P.; Murchie, E.H. The 4-dimensional plant: Effects of wind-induced canopy movement on light fluctuations and photosynthesis. Front. Plant Sci. 2016, 7, 1392.
- Yu, B.; Ren, J.; Han, J.; Wang, F.; Liang, J.; Shi, B. EventPS: Real-time photometric stereo using an event camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 9602–9611.
- Zhu, R.; Sun, K.; Yan, Z.; Yan, X.; Yu, J.; Shi, J.; Hu, Z.; Jiang, H.; Xin, D.; Zhang, Z.; et al. Analysing the phenotype development of soybean plants using low-cost 3D reconstruction. Sci. Rep. 2020, 10, 7055.
- Zhu, C.; Gao, Z.; Xue, W.; Tu, H.; Zhang, Q. High-speed deformation measurement with event-based cameras. Exp. Mech. 2023, 63, 987–994.
- Nkrumah, I.; Moshrefizadeh, M.; Tahri, O.; Blasch, E.; Palaniappan, K.; AliAkbarpour, H. EC-WAMI: Event camera-based pose optimization in remote sensing and wide-area motion imagery. Sensors 2024, 24, 7493.
- Xie, H.; Yao, H.; Sun, X.; Zhou, S.; Zhang, S. Pix2Vox: Context-aware 3D reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2690–2698.
- Elms, E.; Latif, Y.; Park, T.H.; Chin, T.J. Event-based structure-from-orbit. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 19541–19550.
- Logeshwaran, J.; Srivastava, D.; Kumar, K.S.; Rex, M.J.; Al-Rasheed, A.; Getahun, M.; Soufiene, B.O. Improving crop production using an agro-deep learning framework in precision agriculture. BMC Bioinform. 2024, 25, 341.
- Wang, Y.D.; Zhou, J.; Sun, J.W.; Wang, K.; Jiang, Z.Z.; Zhang, Z. Research on the Remote Operation Visualization System for Orchard Robots Based on Augmented Reality. Trans. Chin. Soc. Agric. Mach. 2023, 54, 22–31+78.
| Technology | Development Period | Maturity/Application | Key Features and Applications |
|---|---|---|---|
| Structured Light | Originated in the 1980s; algorithm optimization and system implementation in the 1990s; matured gradually after the 2010s | High precision; widely adopted in controlled environments | Precise plant morphology modeling in controlled environments (e.g., greenhouses). |
| Laser Scanning | Originated in the 1960s; widely applied across domains since the 2000s alongside advances in computing | Mature in large-scale applications | Large-scale environments (e.g., forestry, orchards), offering high precision and detailed canopy modeling. |
| ToF (Time-of-Flight) | Began in the 1990s; entered commercial use in the 2000s; high-performance compact sensors after 2010 drove further advances | Emerging in real-time applications | Fast dynamic reconstruction for precision agriculture, with strong real-time capability. |
| Stereo Vision | Basic stereo-camera 3D reconstruction developed in the 1980s; optimization algorithms introduced in the 1990s; major gains in matching accuracy and application performance in the 2010s | Cost-effective, well-established | Open-field crop monitoring under natural light, offering low-cost and flexible solutions. |
| SfM (Structure from Motion) | Developed from the 1980s; optimization algorithms introduced in the 1990s; matching accuracy and application performance improved markedly in the 2010s | Mature; widely used in large-scale field studies | Large-scale plant phenotyping via multi-view image analysis; cost-effective and adaptable. |
| NeRF (Neural Radiance Fields) | First proposed in 2020; applied across many fields and still under active development and optimization | High fidelity; emerging in controlled and static environments | Reconstructing complex plant geometry and fine texture in static or controlled environments, given large datasets. |
| CNN-based Methods | Proposed in the 1980s; developed rapidly in the early 2010s; matured and became widely applied after 2020 | Rapidly growing; highly adaptable | Highly automated and adaptable across plant types; increasingly enabling high-accuracy 3D reconstruction in dynamic environments. |
| 3D Gaussian Splatting (3DGS) | First proposed in 2023; widely discussed in academia during 2023–2024; applied across multiple fields since 2024 | Emerging; real-time rendering | High-quality 3D scenes in static environments, with relatively fast training and real-time rendering. |
| Technology Type | Imaging Technology | Principle | Advantages | Disadvantages |
|---|---|---|---|---|
| Active | Structured Light Method | Encoded structured light | High precision, fast speed, resistant to ambient light interference | Difficult reconstruction for highly reflective or transparent surfaces |
| | Time-of-Flight (ToF) Method | Time difference of reflected light | Cost-effective, simple structure, strong real-time performance | Limited resolution for small targets at close range |
| | Laser Scanning Method | Laser ranging | High precision, large coverage, non-contact measurement | High equipment cost, expensive maintenance |
| Passive | Stereo Vision Method | Analyzing image disparity | Low cost, mature technology, suitable for natural light conditions | High camera calibration requirements; accuracy affected by image alignment and synchronization |
| | Structure from Motion (SfM) | Recovering camera motion and 3D structure from multi-view images | Allows 3D reconstruction from uncalibrated moving devices; strong adaptability | Requires images from many perspectives; high image quality and quantity requirements |
| Deep Learning | Neural Radiance Fields (NeRF) | Deep learning-based light propagation simulation | Realistic rendering; handles complex lighting and fine detail | High computational resource demands; long training and rendering times |
| | CNN-based Methods | Convolutional neural network extracts features and predicts 3D structure | High automation, strong adaptability | Requires large amounts of annotated data; performance is scene-dependent |
| | 3D Gaussian Splatting (3DGS) | Represents the scene as 3D Gaussian primitives rendered through differentiable rasterization | High-quality, realistic scenes; fast, real-time rendering; relatively fast training | High memory and disk usage; poor compatibility with existing rendering pipelines; supports only static scenes |
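For the ranging methods in the table above, the stated principles reduce to two classical geometric relations: active ToF sensing converts a round-trip light travel time into depth, while passive stereo vision (and, analogously, structured-light triangulation, where a projector replaces the second camera) converts pixel disparity into depth. The sketch below is illustrative only and is not drawn from the reviewed studies; all numeric values in it are assumptions.

```python
# Minimal sketch of the two canonical range equations behind the
# active and passive families above. Numbers are illustrative.

C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_time_s: float) -> float:
    """Time-of-Flight: light travels to the target and back,
    so depth is half the round-trip distance: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo vision: depth by triangulation, Z = f * B / d,
    where d is the horizontal disparity between matched pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A leaf ~1.5 m from a ToF sensor (round trip ~10 ns):
print(f"ToF depth:    {tof_depth(10e-9):.3f} m")             # ~1.499 m
# A stereo rig with f = 800 px, 12 cm baseline, 64 px disparity:
print(f"Stereo depth: {stereo_depth(800, 0.12, 64):.3f} m")  # 1.500 m
```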
| Application Scenario | Method | Accuracy/Error | Reference |
|---|---|---|---|
| Shape measurement of spherical fruits (apple, citrus, pear) | Binocular structured light + multi-view point cloud reconstruction | Max diameter RMSE = 1.82 mm; volume RMSE = 6.01 cm³; deformation index R² = 0.97 | [27] |
| Maize plant localization | ToF camera + RANSAC + ICP registration | Position RMSE = 3.4 cm; std. dev. ±1.3 cm | [28] |
| Tree trunk diameter reconstruction in forests | Mobile LiDAR + Hough transform + SLAM | DBH RMSE = 1.93 cm | [79] |
| Wheat canopy structure extraction | Stereo vision + Semi-Global Block Matching (SGBM) + SVM segmentation | Canopy height RMSE ≈ 1.1 cm; leaf area RMSE = 0.37 cm²; angle error < 1.5° | [55] |
| Field soybean plant 3D modeling | SfM + Instant-NGP | 3D model RMSE = 0.15 | [65] |
| Seedling vegetable phenotyping (e.g., chili, tomato) | NeRF + Point Cloud Library (PCL) analysis | Plant height RMSE = 0.86 cm; stem diameter RMSE = 0.017 cm; leaf area RMSE = 0.75–3.22 cm² | [69] |
| Cotton canopy point cloud reconstruction (with occlusion recovery) | ISN + GAN + FLPRA | Leaf completion RMSE ≈ 0.395 cm; overall reconstruction accuracy = 82.70% | [74] |
| Cotton boll 3D reconstruction and trait analysis | Multi-view images + 3D Gaussian Splatting + YOLOv11x | Boll count RMSE ≈ 9.23%; plant height RMSE ≈ 2.38%; volume RMSE ≈ 8.17% | [78] |
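Accuracy figures such as the RMSE and R² values in the table above are obtained by comparing trait values extracted from a reconstruction against ground-truth measurements. The sketch below shows the standard computation; the plant-height values in it are hypothetical and are not taken from the cited studies.

```python
# Minimal sketch (illustrative, not the reviewed studies' code): computing
# RMSE and R^2 for reconstructed traits against ground-truth measurements.
import numpy as np

def rmse(truth: np.ndarray, pred: np.ndarray) -> float:
    """Root-mean-square error between ground truth and reconstruction."""
    return float(np.sqrt(np.mean((truth - pred) ** 2)))

def r_squared(truth: np.ndarray, pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((truth - pred) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical plant heights (cm): manual measurements vs. values
# extracted from a reconstructed 3D point cloud.
truth = np.array([24.1, 30.5, 27.8, 33.2, 29.0])
pred  = np.array([24.9, 29.8, 28.3, 32.5, 29.6])
print(f"height RMSE = {rmse(truth, pred):.2f} cm")  # ~0.67 cm
print(f"R^2 = {r_squared(truth, pred):.3f}")        # ~0.951
```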