End-to-End Deep Learning Approach to Automated Phenotyping of Greenhouse-Grown Plant Shoots
Abstract
1. Introduction
2. Methods
2.1. Image Data Acquisition and Preprocessing
2.2. Image Segmentation Using a Pre-Trained U-Net Model
2.3. Phenotypic Plant Traits
2.4. Plant Trait Derivation Using End-to-End Model
2.5. Performance Measures
- Coefficient of determination (R²) of the linear correlation between predicted and ground-truth trait values over all images;
- Mean squared error (MSE);
- Maximum error (MAXERR);
- Ratio of squared norms (L2RAT); see the sketch following this list.
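A minimal NumPy sketch of these four measures for a single trait is given below. It assumes MAXERR denotes the maximum absolute deviation and L2RAT the ratio of the squared norm of the predictions to that of the ground truth (conventions in the style of MATLAB's measerr); the authors' exact definitions may differ in detail.

```python
import numpy as np

def trait_metrics(pred, gt):
    """Agreement between predicted and ground-truth values of one trait.

    pred, gt: 1-D arrays with one entry per test image.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    r = np.corrcoef(pred, gt)[0, 1]              # Pearson correlation
    r2 = r ** 2                                  # R² of the linear fit
    mse = np.mean((pred - gt) ** 2)              # mean squared error
    maxerr = np.max(np.abs(pred - gt))           # maximum absolute error
    l2rat = np.sum(pred ** 2) / np.sum(gt ** 2)  # ratio of squared norms
    return r2, mse, maxerr, l2rat
```

Applied per trait and per image modality, these measures populate the model comparison reported in Section 3.2.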
2.6. Computational Implementation
3. Results
3.1. End-to-End Model Generation
3.2. Comparison and Validation of Regression and Segmentation Models vs. Ground Truth
3.3. End-to-End Model Explainability
3.4. Software Performance and Implementation
4. Discussion
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Trait Name | Description |
|---|---|
| Area | Pixel count of the plant region |
| C.hull | Pixel count of the convex hull of the plant region |
| Height | Vertical extent of the plant region |
| Width | Horizontal extent of the plant region |
| H_99 | 99th percentile of the vertical distribution of plant pixels |
| W_99 | 99th percentile of the horizontal distribution of plant pixels |
| Red | Mean red channel value of plant pixels |
| Green | Mean green channel value of plant pixels |
| Blue | Mean blue channel value of plant pixels |
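All nine traits are simple functions of a binary plant mask and the underlying RGB image. The following sketch shows one plausible derivation with NumPy and scikit-image; the percentile convention for H_99/W_99 (here, the span between the 1st and 99th percentiles of plant pixel coordinates) is an assumption rather than the authors' documented implementation.

```python
import numpy as np
from skimage.morphology import convex_hull_image

def shoot_traits(rgb, mask):
    """Derive the tabulated shoot traits from an RGB image and a binary mask.

    rgb:  (H, W, 3) array of pixel colors.
    mask: (H, W) boolean array, True for plant pixels.
    """
    rows, cols = np.nonzero(mask)
    traits = {
        "Area": int(mask.sum()),                       # plant pixel count
        "C.hull": int(convex_hull_image(mask).sum()),  # convex hull pixel count
        "Height": int(rows.max() - rows.min() + 1),    # vertical extent
        "Width": int(cols.max() - cols.min() + 1),     # horizontal extent
        # Robust extents: span between the 1st and 99th coordinate
        # percentiles (one plausible reading of H_99 / W_99).
        "H_99": float(np.percentile(rows, 99) - np.percentile(rows, 1)),
        "W_99": float(np.percentile(cols, 99) - np.percentile(cols, 1)),
    }
    # Mean color of plant pixels, one value per channel.
    traits["Red"], traits["Green"], traits["Blue"] = rgb[mask].mean(axis=0)
    return traits
```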
| Option | Value |
|---|---|
| Optimizer | Adam |
| Batch size | 32 |
| Initial learning rate | 0.005 |
| Metric | RMSE |
| Validation patience | 5 |
| Validation frequency | 35 |
| Max. number of epochs | 100 |
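The option names resemble MATLAB-style trainingOptions. As a hypothetical Python/Keras rendering of the same configuration (the model architecture and the tf.data datasets train_ds/val_ds are assumed here, and Keras evaluates validation data once per epoch rather than every 35 iterations):

```python
import tensorflow as tf

def train_e2e(model, train_ds, val_ds):
    """Fit a trait-regression model with the tabulated options."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.005),  # initial learning rate
        loss="mse",
        metrics=[tf.keras.metrics.RootMeanSquaredError()],        # RMSE metric
    )
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_root_mean_squared_error",
        patience=5,                       # validation patience
        restore_best_weights=True,
    )
    return model.fit(
        train_ds.batch(32),               # batch size 32
        validation_data=val_ds.batch(32),
        epochs=100,                       # max. number of epochs
        callbacks=[early_stop],
    )
```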
| Image Modality | # Training | # Testing | # Validation |
|---|---|---|---|
| A, top | 210 | 53 | 57 |
| B, top | 170 | 43 | 59 |
| B, side | 248 | 62 | 77 |
| M, top | 209 | 52 | 72 |
| M, side | 105 | 26 | 33 |
| Image Type | Plant Trait | R² (U-Net) | R² (e2e) | MSE (U-Net) | MSE (e2e) | MAXERR (U-Net) | MAXERR (e2e) | L2RAT (U-Net) | L2RAT (e2e) | t-test vs. GT (U-Net) | t-test vs. GT (e2e) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A, top | Area | 9.8 × 10⁻¹ | 9.9 × 10⁻¹ | 1.9 × 10⁶ | 4.1 × 10⁵ | 5.0 × 10³ | 1.8 × 10³ | 9.3 × 10⁻¹ | 1.0 | - | - |
| A, top | C.hull | 9.8 × 10⁻¹ | 9.5 × 10⁻¹ | 5.6 × 10⁶ | 4.8 × 10⁶ | 9.5 × 10³ | 9.2 × 10³ | 9.6 × 10⁻¹ | 1.0 | - | - |
| A, top | Height | 9.9 × 10⁻¹ | 9.5 × 10⁻¹ | 1.3 × 10¹ | 1.5 × 10² | 1.4 × 10¹ | 4.2 × 10¹ | 9.9 × 10⁻¹ | 9.4 × 10⁻¹ | - | - |
| A, top | Width | 9.6 × 10⁻¹ | 8.6 × 10⁻¹ | 1.9 × 10¹ | 1.5 × 10² | 2.4 × 10¹ | 3.8 × 10¹ | 9.8 × 10⁻¹ | 9.6 × 10⁻¹ | - | - |
| A, top | H_99 | 9.9 × 10⁻¹ | 8.0 × 10⁻¹ | 3.0 | 8.1 × 10¹ | 1.2 × 10¹ | 2.4 × 10¹ | 1.0 | 9.7 × 10⁻¹ | - | - |
| A, top | W_99 | 1.0 | 8.8 × 10⁻¹ | 6.0 × 10⁻¹ | 7.4 × 10¹ | 4.0 | 2.3 × 10¹ | 1.0 | 9.8 × 10⁻¹ | - | - |
| A, top | Red | 1.0 | 9.3 × 10⁻¹ | 0.0 | 8.0 × 10⁻⁴ | 2.0 × 10⁻² | 8.7 × 10⁻² | 1.0 | 9.5 × 10⁻¹ | - | - |
| A, top | Green | 9.9 × 10⁻¹ | 9.8 × 10⁻¹ | 1.0 × 10⁻⁴ | 8.0 × 10⁻⁴ | 3.3 × 10⁻² | 8.5 × 10⁻² | 1.0 | 9.5 × 10⁻¹ | - | - |
| A, top | Blue | 9.9 × 10⁻¹ | 9.7 × 10⁻¹ | 0.0 | 3.0 × 10⁻⁴ | 2.0 × 10⁻² | 5.7 × 10⁻² | 9.9 × 10⁻¹ | 9.4 × 10⁻¹ | - | - |
| B, top | Area | 9.5 × 10⁻¹ | 9.2 × 10⁻¹ | 1.8 × 10⁶ | 1.7 × 10⁶ | 5.3 × 10³ | 6.4 × 10³ | 9.3 × 10⁻¹ | 7.9 × 10⁻¹ | - | - |
| B, top | C.hull | 9.0 × 10⁻¹ | 9.3 × 10⁻¹ | 3.7 × 10⁷ | 1.8 × 10⁷ | 1.5 × 10⁴ | 1.3 × 10⁴ | 8.7 × 10⁻¹ | 9.2 × 10⁻¹ | - | - |
| B, top | Height | 9.1 × 10⁻¹ | 7.8 × 10⁻¹ | 1.5 × 10³ | 5.0 × 10² | 9.8 × 10¹ | 5.6 × 10¹ | 9.0 × 10⁻¹ | 9.6 × 10⁻¹ | - | - |
| B, top | Width | 8.9 × 10⁻¹ | 7.8 × 10⁻¹ | 1.4 × 10³ | 3.4 × 10² | 8.8 × 10¹ | 5.9 × 10¹ | 7.4 × 10⁻¹ | 9.5 × 10⁻¹ | * | - |
| B, top | H_99 | 8.9 × 10⁻¹ | 9.0 × 10⁻¹ | 1.7 × 10² | 8.7 × 10¹ | 4.0 × 10¹ | 1.9 × 10¹ | 9.3 × 10⁻¹ | 9.7 × 10⁻¹ | - | - |
| B, top | W_99 | 9.4 × 10⁻¹ | 8.0 × 10⁻¹ | 1.8 × 10² | 2.4 × 10² | 3.7 × 10¹ | 3.5 × 10¹ | 9.6 × 10⁻¹ | 9.3 × 10⁻¹ | - | - |
| B, top | Red | 8.6 × 10⁻¹ | 8.2 × 10⁻¹ | 1.5 × 10⁻³ | 8.0 × 10⁻⁴ | 1.6 × 10⁻¹ | 1.0 × 10⁻¹ | 7.9 × 10⁻¹ | 1.0 | * | - |
| B, top | Green | 9.0 × 10⁻¹ | 8.5 × 10⁻¹ | 1.2 × 10⁻³ | 1.4 × 10⁻³ | 1.1 × 10⁻¹ | 2.0 × 10⁻¹ | 8.8 × 10⁻¹ | 1.0 | - | - |
| B, top | Blue | 8.3 × 10⁻¹ | 9.1 × 10⁻¹ | 2.1 × 10⁻³ | 7.0 × 10⁻⁴ | 1.8 × 10⁻¹ | 1.3 × 10⁻¹ | 6.8 × 10⁻¹ | 9.8 × 10⁻¹ | * | - |
| B, side | Area | 9.4 × 10⁻¹ | 9.9 × 10⁻¹ | 5.6 × 10⁶ | 1.1 × 10⁵ | 6.1 × 10³ | 1.1 × 10³ | 4.8 × 10⁻¹ | 9.9 × 10⁻¹ | * | - |
| B, side | C.hull | 9.1 × 10⁻¹ | 9.7 × 10⁻¹ | 5.6 × 10⁷ | 6.4 × 10⁶ | 1.9 × 10⁴ | 9.6 × 10³ | 6.2 × 10⁻¹ | 9.8 × 10⁻¹ | * | - |
| B, side | Height | 9.3 × 10⁻¹ | 9.5 × 10⁻¹ | 8.7 × 10² | 3.5 × 10² | 8.9 × 10¹ | 5.0 × 10¹ | 8.6 × 10⁻¹ | 1.1 | - | - |
| B, side | Width | 6.8 × 10⁻¹ | 6.4 × 10⁻¹ | 8.7 × 10² | 1.3 × 10² | 6.7 × 10¹ | 2.6 × 10¹ | 8.1 × 10⁻¹ | 1.0 | * | - |
| B, side | H_99 | 7.4 × 10⁻¹ | 6.7 × 10⁻¹ | 7.2 × 10¹ | 4.4 × 10¹ | 3.4 × 10¹ | 2.3 × 10¹ | 9.7 × 10⁻¹ | 9.8 × 10⁻¹ | - | - |
| B, side | W_99 | 9.7 × 10⁻¹ | 9.1 × 10⁻¹ | 8.5 × 10¹ | 1.5 × 10² | 2.5 × 10¹ | 5.5 × 10¹ | 9.5 × 10⁻¹ | 1.0 | * | * |
| B, side | Red | 6.7 × 10⁻¹ | 9.1 × 10⁻¹ | 4.4 × 10⁻³ | 5.0 × 10⁻⁴ | 1.3 × 10⁻¹ | 6.3 × 10⁻² | 6.5 × 10⁻¹ | 1.0 | * | - |
| B, side | Green | 6.7 × 10⁻¹ | 9.1 × 10⁻¹ | 3.3 × 10⁻³ | 4.0 × 10⁻⁴ | 1.3 × 10⁻¹ | 5.8 × 10⁻² | 7.2 × 10⁻¹ | 9.7 × 10⁻¹ | * | - |
| B, side | Blue | 5.1 × 10⁻¹ | 8.7 × 10⁻¹ | 4.6 × 10⁻³ | 4.0 × 10⁻⁴ | 1.5 × 10⁻¹ | 6.5 × 10⁻² | 5.4 × 10⁻¹ | 9.9 × 10⁻¹ | * | - |
| M, top | Area | 9.8 × 10⁻¹ | 9.7 × 10⁻¹ | 1.0 × 10⁶ | 5.0 × 10⁵ | 2.8 × 10³ | 2.6 × 10³ | 6.8 × 10⁻¹ | 8.9 × 10⁻¹ | - | - |
| M, top | C.hull | 8.3 × 10⁻¹ | 8.2 × 10⁻¹ | 5.8 × 10⁷ | 2.8 × 10⁷ | 2.4 × 10⁴ | 1.5 × 10⁴ | 7.0 × 10⁻¹ | 9.1 × 10⁻¹ | - | - |
| M, top | Height | 9.4 × 10⁻¹ | 8.9 × 10⁻¹ | 4.4 × 10² | 8.7 × 10² | 5.9 × 10¹ | 9.2 × 10¹ | 9.3 × 10⁻¹ | 9.8 × 10⁻¹ | - | - |
| M, top | Width | 8.6 × 10⁻¹ | 9.0 × 10⁻¹ | 1.1 × 10³ | 2.8 × 10² | 1.2 × 10² | 5.8 × 10¹ | 8.2 × 10⁻¹ | 9.8 × 10⁻¹ | - | - |
| M, top | H_99 | 8.2 × 10⁻¹ | 8.6 × 10⁻¹ | 6.4 × 10² | 2.0 × 10² | 8.3 × 10¹ | 4.0 × 10¹ | 8.8 × 10⁻¹ | 1.0 | - | - |
| M, top | W_99 | 9.7 × 10⁻¹ | 8.0 × 10⁻¹ | 3.7 × 10¹ | 3.4 × 10² | 1.6 × 10¹ | 6.6 × 10¹ | 9.8 × 10⁻¹ | 1.0 | * | - |
| M, top | Red | 9.5 × 10⁻¹ | 7.9 × 10⁻¹ | 1.9 × 10⁻³ | 2.7 × 10⁻³ | 2.1 × 10⁻¹ | 1.8 × 10⁻¹ | 8.8 × 10⁻¹ | 9.4 × 10⁻¹ | - | - |
| M, top | Green | 9.8 × 10⁻¹ | 9.1 × 10⁻¹ | 1.3 × 10⁻³ | 2.0 × 10⁻³ | 2.1 × 10⁻¹ | 2.0 × 10⁻¹ | 9.4 × 10⁻¹ | 9.3 × 10⁻¹ | - | - |
| M, top | Blue | 9.6 × 10⁻¹ | 9.1 × 10⁻¹ | 2.5 × 10⁻³ | 2.3 × 10⁻³ | 2.5 × 10⁻¹ | 2.8 × 10⁻¹ | 8.7 × 10⁻¹ | 9.5 × 10⁻¹ | - | - |
| M, side | Area | 9.5 × 10⁻¹ | 9.9 × 10⁻¹ | 7.8 × 10⁵ | 9.2 × 10⁴ | 1.9 × 10³ | 6.8 × 10² | 5.4 × 10⁻¹ | 1.1 | - | - |
| M, side | C.hull | 8.9 × 10⁻¹ | 9.5 × 10⁻¹ | 1.1 × 10⁸ | 1.2 × 10⁷ | 2.5 × 10⁴ | 9.5 × 10³ | 4.4 × 10⁻¹ | 9.7 × 10⁻¹ | * | - |
| M, side | Height | 9.1 × 10⁻¹ | 9.2 × 10⁻¹ | 1.3 × 10³ | 3.8 × 10² | 7.7 × 10¹ | 4.9 × 10¹ | 8.9 × 10⁻¹ | 9.8 × 10⁻¹ | - | - |
| M, side | Width | 9.4 × 10⁻¹ | 9.7 × 10⁻¹ | 1.7 × 10³ | 2.4 × 10² | 1.2 × 10² | 3.1 × 10¹ | 6.7 × 10⁻¹ | 1.0 | - | - |
| M, side | H_99 | 6.2 × 10⁻² | 5.2 × 10⁻¹ | 8.4 × 10² | 4.3 × 10¹ | 7.0 × 10¹ | 1.3 × 10¹ | 9.4 × 10⁻¹ | 9.9 × 10⁻¹ | - | - |
| M, side | W_99 | 7.9 × 10⁻¹ | 9.6 × 10⁻¹ | 9.8 × 10² | 1.7 × 10² | 9.8 × 10¹ | 3.8 × 10¹ | 1.1 | 1.1 | - | - |
| M, side | Red | 8.2 × 10⁻¹ | 8.3 × 10⁻¹ | 2.0 × 10⁻³ | 1.4 × 10⁻³ | 1.4 × 10⁻¹ | 9.4 × 10⁻² | 8.5 × 10⁻¹ | 1.1 | - | - |
| M, side | Green | 8.4 × 10⁻¹ | 7.2 × 10⁻¹ | 8.0 × 10⁻⁴ | 9.0 × 10⁻⁴ | 1.0 × 10⁻¹ | 8.1 × 10⁻² | 9.2 × 10⁻¹ | 9.9 × 10⁻¹ | - | - |
| M, side | Blue | 3.4 × 10⁻¹ | 8.6 × 10⁻¹ | 2.7 × 10⁻³ | 6.0 × 10⁻⁴ | 1.3 × 10⁻¹ | 7.1 × 10⁻² | 7.8 × 10⁻¹ | 1.0 | * | - |