Article

LiDAR Platform for Acquisition of 3D Plant Phenotyping Database

Manuel G. Forero, Harold F. Murcia, Dehyro Méndez and Juan Betancourt-Lozano
Facultad de Ingeniería, Universidad de Ibagué, Ibagué 730002, Colombia
* Author to whom correspondence should be addressed.
Plants 2022, 11(17), 2199; https://doi.org/10.3390/plants11172199
Submission received: 1 July 2022 / Revised: 26 July 2022 / Accepted: 10 August 2022 / Published: 25 August 2022
(This article belongs to the Special Issue Plant Bioinformatics: Applications and Databases)

Abstract

Currently, there are no free databases of 3D point clouds and images for seedling phenotyping. Therefore, this paper describes a platform for seedling scanning using a 3D LiDAR, with which a database was acquired for use in plant phenotyping research. In total, 362 maize seedlings were recorded using an RGB camera and a SICK LMS4121R-13000 laser scanner with angular resolutions of 45° and 0.5°, respectively. The scanned plants are diverse, ranging from less than 10 cm to 40 cm in height and from 7 to 24 days after planting, and were captured under different light conditions in an indoor setting. The point clouds were processed to remove noise and imperfections with a mean absolute precision error of 0.03 cm, synchronized with the images, and time-stamped. The database includes the raw and processed data and manually assigned stem and leaf labels. As an example of a database application, a Random Forest classifier was employed to identify seedling parts based on morphological descriptors, with an accuracy of 89.41%.

1. Introduction

By 2050, the world’s population is expected to rise to almost 10 billion, while the average rate of increase in crop production is only about 1.3% per year [1]. Therefore, new technologies have been developed to improve agricultural yield without affecting the environment, in order to ensure global food sustainability. One way to improve production is to identify the most advantageous plant varieties: those that are more productive or more resistant to diseases or stress, among other traits. To distinguish them, their phenotype is determined, i.e., the specific observable aspects or visible characteristics of a plant, which result from internal factors related to the plant’s own genetics and from external factors associated with the environment and adaptation, regulated by abiotic factors [2].
The plant phenotype is formed during plant growth from the influence of the species-specific genotype and its interaction with abiotic and biotic factors. Over the years, phenotype measurement and analysis have been laborious, expensive, and time-consuming tasks, so research in this field has focused on the development of automated, multifunctional, and high-throughput phenotyping technologies to advance breeding [3]. In phenotyping studies, the measurement of the 3D morphology of a plant therefore plays an important role. Morphological traits provide a viable way to evaluate stress, yield, growth, anatomy, and overall plant development [4,5,6,7]. Plant morphology can be analyzed at three scales: canopy scale in the field, individual plant and organ scale indoors, and micro-scale in laboratories [8]. Indoor applications usually employ pot-grown plants and combine non-destructive data acquisition techniques such as RGB imaging, depth imaging, and laser scanning.
Currently, single-plant phenotyping platforms are fairly advanced [9,10,11], and their use indoors guarantees adequate light conditions for imaging and low airflow, avoiding disturbances in the measurements. Different studies address population phenotypes by looking for plants that show accelerated growth or better production, and thus find better genotypes. Other studies evaluate individual phenotypes, estimating characteristics such as height, volume, and the number of leaves to obtain parameters that determine the best phenotypes. Among the studies reviewed, none was found that separates the plant organs in order to determine the phenotype, even though the particular characteristics of each organ could be used to obtain a better approximation of the phenotype. On the other hand, field platforms for individual plants still need to be improved for more detailed feature extraction [12].
To improve the identification of varieties and their characteristics, the phenotype is analyzed in detail using Machine Learning techniques. For this purpose, it is often important to acquire a large amount of high-resolution, accurate data. Several acquisition techniques have been employed to this end, which can be grouped into two classes: the first employs multiple cameras [13] or a single camera at different angles [14,15,16]; the second uses depth sensors based on the Kinect or Light Detection and Ranging (LiDAR) [17,18]. The latter technology has a lower computational cost because it yields fewer points on the plant compared to data obtained with photographic cameras.
According to the Food and Agriculture Organization (FAO), maize is among the world’s five most important crops and is expected to absorb a significant proportion (more than 22%) of the harvested area by 2050. As a result, more than half of the increase in food demand for cereals is expected to come from maize. Therefore, many researchers have developed efficient, high-throughput phenotyping platforms and methods to acquire traits from maize plants [10,19,20,21,22,23,24]. Although several papers have been published on data acquisition methods for 3D point cloud plant phenotyping using cameras and Kinect sensors [18,25,26], few have been developed for 3D acquisition using LiDAR [27]. Despite these developments, their verification has had to be performed with closed and incomplete databases, which presents drawbacks for future research. Therefore, this study presents a new prototype for the acquisition of plant morphological data based on a LiDAR sensor, which allows the rotational and translational displacement of the seedling placed on a rotating platform and the vertical movement of the sensor. The platform was tested on a database of indoor potted maize seedlings containing 362 three-dimensional scans and 2749 images.
The structure of the paper is as follows. Section 2 presents the development of the LiDAR platform (Section 2.1) and the application of the constructed point cloud (Section 2.2). The results obtained in the project are then presented and discussed in Section 3, and conclusions are given in Section 4.

2. Materials and Methods

2.1. LiDAR Platform

The physical configuration of the developed platform consists of a turntable, driven by a stepper motor, where the plant to be scanned is placed and which allows a 360° rotation at a resolution of 0.1°. As shown in Figure 1, the scanning system consists of a LiDAR sensor (LMS4121R-13000, SICK AG) with visible red light, which emits a laser beam that scans the plant vertically. The main features of the laser sensor are presented in Table 1. The combination of rotary movement and vertical scanning creates a 3D point cloud of the plant with a 360° view.
The LiDAR measurement method is based on the phase correlation principle. The sensor emits a continuous laser beam, which is reflected when it makes contact with an object and is sensed by the scanner’s receiver. The resulting phase delay between the emitted and received signals is used to determine the distance in centimeters. This device has a scanning frequency of 600 Hz and an aperture angle of 70°. The point furthest from the center of the plant in the horizontal plane must be located at a minimum distance of 70 cm from the sensor, within the sensor’s working range. This system has some flexibility to set the appropriate LiDAR platform height and distance to the rotation stage depending on the size and shape of the plant.
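As a brief illustration of the phase correlation principle, the sketch below converts a phase delay into a one-way distance. The modulation frequency of the LMS4121R is not given here, so the 10 MHz value in the example is purely an assumption for illustration.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def phase_shift_distance(delta_phi_rad, f_mod_hz):
    """One-way distance from the phase delay between emitted and received beams.

    The beam travels to the target and back, so a round-trip phase shift of
    delta_phi at modulation frequency f_mod corresponds to a one-way distance
    of c * delta_phi / (4 * pi * f_mod).
    """
    return C * delta_phi_rad / (4 * math.pi * f_mod_hz)

# Example: a 90-degree phase delay at an assumed 10 MHz modulation frequency
print(phase_shift_distance(math.pi / 2, 10e6))  # ~3.75 m
```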
The LiDAR sensor requires a different power supply than the other devices; for this purpose, an intrinsically safe power supply was used. Communication between the computer system and the LiDAR is performed through the Transmission Control Protocol (TCP)/Internet Protocol (IP), so prior configuration of the sensor on the Local Area Network (LAN)/Wide Area Network (WAN) was necessary.
The sensor was also configured to perform the measurements according to the needs of the proposed system, using the SOPAS ET configuration software. For this purpose, a fixed IP address was initially established for the sensor and a session was initiated to make changes to the predefined parameters in this software. Within the basic configuration, sensor input 1 was set as the control signal and the median and edge detection filters were activated, as this allows better results in data acquisition. Finally, this setting was saved permanently for the continuous use of the sensor, as shown in Figure 2.
For data acquisition, a computer with an Intel Core i7 processor (12 cores at 1.10 GHz) and 16 GB of RAM was used, running the Ubuntu 18.04 operating system and the Melodic distribution of the Robot Operating System (ROS). As shown in Figure 3, the stepper motor was driven by the computer system through an Arduino Nano and a V44 A3967 power driver, synchronized with the laser sensor to acquire one profile of information at each angle of rotation of the platform. Using a high-resolution Logitech Brio camera, images of the plants were taken at different rotation angles in order to obtain their ground truth information.
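The command protocol between the computer and the Arduino Nano is not detailed in the text, so the following sketch only suggests how such a synchronization step might look using the serial library mentioned later; the port name, baud rate, and STEP command are hypothetical.

```python
import time
import serial  # pyserial

# Hypothetical exchange with the Arduino Nano driving the stepper motor.
with serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1) as arduino:
    time.sleep(2)                  # allow the Nano to reset after the port opens
    arduino.write(b'STEP 100\n')   # advance 100 motor steps (one 0.1 deg increment)
    ack = arduino.readline()       # block until the motion is acknowledged,
                                   # then trigger the next LiDAR profile acquisition
```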
The measured distance s between the LiDAR and the target, together with the beam angle θ, were obtained from the individual measurement points generated by the sensor. The angle of the rotation stage ϕ was calculated by means of a worm-and-wheel mechanism adjusted to the resolution of the stepper motor (Figure 4).
The employed motor has a reduction ratio of 1:100, a speed of 3024 RPM, and a number of steps per revolution of 10,000. The worm gear mechanism adds a reduction ratio of 1:36, so the speed is reduced to 0.084 RPM and the number of steps per revolution is increased to 360,000. Hence, the resolution of the complete mechanism is 0.001° and the selected angular resolution of the platform was 0.1°. Since this angle is very small, the vibration of the seedlings is negligible. However, to ensure that it is zero, a rest time of 5 s was given between each acquisition.
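The drive-train figures can be checked with a few lines of arithmetic; this sketch simply restates the numbers above.

```python
# Worked check of the drive-train resolution reported above.
motor_steps_per_rev = 10_000      # stepper output after its 1:100 reduction
worm_gear_ratio = 36              # additional 1:36 worm/wheel stage

steps_per_disc_rev = motor_steps_per_rev * worm_gear_ratio  # 360,000 steps
mech_resolution_deg = 360 / steps_per_disc_rev              # 0.001 deg/step

platform_resolution_deg = 0.1     # angular resolution selected for scanning
steps_per_increment = round(platform_resolution_deg / mech_resolution_deg)

print(mech_resolution_deg, steps_per_increment)  # 0.001, 100
```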
Using the coordinate systems illustrated in Figure 4, the vertical scan plane of the LiDAR passing through the center of the rotating disc O is taken as the XZ-plane, having its origin at O. The distance d between the LiDAR and the center of the rotation stage, together with the distance h and the tilt angle of the sensor φ were calculated by means of a platform calibration process. For this, a target of size 40 cm × 4.5 cm, placed on the disc, centered at O, and made of a low-reflective material to reduce laser beam scattering, was scanned beforehand. The scanning result is a vertical line, from which information about the distances and inclination between the sensor and the turntable is extracted.
The LiDAR measurements were converted into Cartesian XYZ coordinates using homogeneous transformations. Firstly, since the sensor is a 2D LiDAR scanning in the XZ plane, the homogeneous Cartesian coordinates of a measurement are defined by Equation (1).
$$\begin{bmatrix} X \\ Y \\ Z \\ \omega \end{bmatrix} = \begin{bmatrix} s\cos(\theta) \\ 0 \\ s\sin(\theta) \\ 1 \end{bmatrix}. \tag{1}$$
This plane must then be rotated around the Y axis, taking into account the tilt of the sensor in the X Z plane. For this purpose, the following homogeneous transformation is used:
$$R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(\varphi) & \sin(\varphi) & 0 \\ 0 & -\sin(\varphi) & \cos(\varphi) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ \omega \end{bmatrix}. \tag{2}$$
Then, to align the coordinate planes, the following translation transformation is used.
$$t = \begin{bmatrix} 1 & 0 & 0 & d \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & h \\ 0 & 0 & 0 & 1 \end{bmatrix} R. \tag{3}$$
Finally, to obtain the XYZ coordinates of each LiDAR scan within the reference frame, a rotation transformation given by the disk angle ϕ is performed, as shown in Equation (4). Algorithm 1 describes the process of acquisition and reconstruction of the 3D point cloud using the notation presented in Figure 4. It includes the most relevant steps, such as platform calibration, sensor data acquisition, the transformation of the data into homogeneous coordinates, and the generation of the point cloud and images.
$$\begin{bmatrix} X \\ Y \\ Z \\ \omega \end{bmatrix} = \begin{bmatrix} \cos(\phi) & -\sin(\phi) & 0 & 0 \\ \sin(\phi) & \cos(\phi) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} t. \tag{4}$$
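To make the chain of Equations (1)–(4) concrete, the sketch below composes the four transformations in NumPy. It is a minimal illustration: the function and variable names are ours, and the sign placement in the rotation matrices is an assumption, since the signs are not fully recoverable from the typeset equations.

```python
import numpy as np

def lidar_point_to_xyz(s, theta, phi_tilt, disc_angle, d, h):
    """Compose Equations (1)-(4) for a single LiDAR range sample.

    s          -- measured distance [cm]
    theta      -- beam angle within the sensor's scan plane [rad]
    phi_tilt   -- sensor tilt angle obtained from calibration [rad]
    disc_angle -- current angle of the rotating disc [rad]
    d, h       -- horizontal and vertical offsets between the LiDAR and O [cm]
    """
    # Eq. (1): homogeneous coordinates of the sample in the sensor's XZ plane
    p = np.array([s * np.cos(theta), 0.0, s * np.sin(theta), 1.0])

    # Eq. (2): compensate the tilt of the sensor (sign convention assumed)
    c, si = np.cos(phi_tilt), np.sin(phi_tilt)
    R = np.array([[1, 0,   0, 0],
                  [0, c,  si, 0],
                  [0, -si, c, 0],
                  [0, 0,   0, 1]])

    # Eq. (3): translate the scan plane onto the disc centre O
    T = np.array([[1, 0, 0, d],
                  [0, 1, 0, 0],
                  [0, 0, 1, h],
                  [0, 0, 0, 1]])

    # Eq. (4): rotate by the disc angle to place the profile in the world frame
    cp, sp = np.cos(disc_angle), np.sin(disc_angle)
    Rz = np.array([[cp, -sp, 0, 0],
                   [sp,  cp, 0, 0],
                   [0,   0,  1, 0],
                   [0,   0,  0, 1]])

    return (Rz @ T @ R @ p)[:3]

# Example: one sample at s = 100 cm, theta = 0, disc rotated by 90 degrees
print(lidar_point_to_xyz(100.0, 0.0, 0.0, np.pi / 2, 70.0, 0.0))
```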
The platform control was written in Python 3.6, using the serial, opencv, cv_bridge, and open3d libraries to process the information obtained from the devices. The distance and intensity measurements sensed by the LiDAR were obtained with the ROS node sick_lms_4xxx [28], and the images were acquired with the Logitech camera using the usb_cam node [29]. A camera calibration process was previously performed with the ROS package camera_calibration using a checkerboard as a target, obtaining the camera, distortion, rectification, and projection matrices. Using the ROS package image_proc, the image distortion was removed and the result registered in the ROS topic image_rect_color, as shown in Figure 5. Finally, through the multiplatform framework KivyMD and its respective libraries and ROS nodes (simple_gui), an interface between the different device control commands and the user was created.
The graphical interface shown in Figure 6 was implemented to allow interaction between the user and the prototype. The interface executes, in a specific order, the different ROS nodes used for the initialization of the platform devices and data collection. A calibration process is then carried out, which must be repeated each time the platform on which the plant is located is moved. The interface also allows the input of the experiment parameters, such as the angular resolution θ of the rotating platform and the initial (ω_i) and final (ω_f) scan angles. As a result of the 3D scan, three types of files are generated: a plain-text TXT file with the Cartesian coordinates of the 3D model obtained; a set of RGB images taken throughout the process; and another set of the same size in rosbag format with the synchronized information of the rostopics used during the 3D reconstruction process.
To determine the accuracy of the measurements at each coordinate, a small cube was scanned, since only the length of one of its edges needs to be detected to know its size. In this way, the lengths of the cube’s sides were obtained, as shown in Figure 7. The edges located on the top face of the cube were denoted with the letter “U”, the front ones with “L”, and those on the bottom face with “D”.
Algorithm 1: 3D model generation with the proposed LiDAR scanning platform.
Once the edge lengths were obtained, the measurement error was calculated as follows:
$$\text{Indiv. Error}\ [\%] = \frac{\lvert m - Ref \rvert}{Ref} \times 100, \tag{5}$$
where m is the measured value and Ref is the actual one. The measurement errors in the X, Y, and Z coordinates were obtained as the average of the individual errors per axis. The average accuracy is given by the mean of the errors in each coordinate, and the absolute error of all measurements by:
$$\text{Abs. Error} = \sqrt{\frac{\sum (m - \bar{m})^2}{N(N-1)}}, \tag{6}$$
where $\bar{m}$ is the mean value and N is the number of measurements.
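The two error metrics can be reproduced directly from the twelve edge lengths later listed in Table 2; the short script below implements Equations (5) and (6).

```python
import numpy as np

def individual_errors_percent(m, ref):
    """Eq. (5): per-measurement relative error in percent."""
    m = np.asarray(m, dtype=float)
    return np.abs(m - ref) / ref * 100

def absolute_error(m):
    """Eq. (6): standard error of the mean over all edge measurements."""
    m = np.asarray(m, dtype=float)
    n = m.size
    return np.sqrt(np.sum((m - m.mean()) ** 2) / (n * (n - 1)))

# Edge lengths measured on the 5.5 cm reference cube (Table 2)
m = [5.4383, 5.5573, 5.3808, 5.3692, 5.5431, 5.6141,
     5.3308, 5.3552, 5.4363, 5.4902, 5.3051, 5.6002]
print(individual_errors_percent(m, 5.5).mean())  # ~1.83 % average accuracy
print(absolute_error(m))                         # ~0.031 cm absolute error
```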
To determine the accuracy of the measurements made with the platform, 142 repetitions of a distance determined from the point cloud were compared with a reference measurement obtained manually. The CloudCompare software was used to determine the distance in the point cloud, taking into account that the measurements obtained with the LiDAR correspond to the light beam emitted at a sweep angle of zero degrees with respect to the horizon, which corresponds to the case with the minimum angle of incidence. This process was repeated four times, varying the distance between the sensor and the cube within a range between the minimum value detected by the LiDAR and the maximum length allowed by the platform, as shown in Figure 8. Once the experiment was performed, a mean dispersion of 0.071 cm was obtained. A third-degree polynomial regression was then performed, resulting in Equation (7), which represents the trend of the platform precision σ as a function of the distance d between the seedling and the laser sensor.
$$\sigma(d) = 0.0639\,d^3 + 0.1139\,d^2 + 0.0473\,d + 0.0589. \tag{7}$$
It is worth noting that the accuracy was measured using a light beam at an angle of incidence close to zero degrees and that, although not evident in the above equation, the scattering is proportional to the angle of incidence between the light beam and the illuminated object, which has a significant impact on the quality of the three-dimensional reconstruction and, consequently, on the determination of phenotypic characteristics. For this reason, each profile acquisition performed by the platform is repeated one hundred times to obtain the average value of each point at each angle, reducing measurement noise in the final reconstruction.
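The following sketch outlines the repetition-averaging step and evaluates the fitted precision trend of Equation (7); scan_fn is a stand-in for the actual profile acquisition, which the paper does not expose as an API.

```python
import numpy as np

def averaged_profile(scan_fn, n_repeats=100):
    """Average repeated LiDAR profiles at a fixed disc angle to reduce noise."""
    profiles = np.stack([scan_fn() for _ in range(n_repeats)])
    return profiles.mean(axis=0)

def sigma(d):
    """Eq. (7): fitted precision trend versus plant-to-sensor distance d.

    The coefficients come from a third-degree polynomial regression, e.g.
    np.polyfit(distances, dispersions, deg=3).
    """
    return np.polyval([0.0639, 0.1139, 0.0473, 0.0589], d)
```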
A database of maize plants at different phenological stages was constructed using the proposed platform, obtaining 362 point clouds. For this purpose, 38 seeds were planted in pots, and days after planting (DAP) were used as a parameter. The cultivated maize plants were scanned under laboratory conditions by placing the pots on the platform and measuring the light intensity in lux at the initial instant. The process was carried out with a first sowing up to the shoot stage and six more sowings up to the seedling stage, to evaluate different growth scenarios.
The angle of incidence, light conditions, material, texture, and measuring range influence the fidelity of the measurements and thus the exact determination of the phenotypic characteristics acquired from the point clouds. Some of these factors affect the quality of the measurements, producing noise. Thus, the reflectance of the material, its color, and its texture cause the laser to deflect and produce unwanted values. Noise can be removed by using classical filtering techniques such as Statistical Outlier Removal [30].
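For example, with the open3d library already used by the platform software, such a filter can be applied in a few lines; the file names and parameter values here are illustrative, not the authors’ settings.

```python
import open3d as o3d

# Statistical Outlier Removal: points whose mean distance to their neighbors
# deviates too far from the global average are discarded.
pcd = o3d.io.read_point_cloud("maize_scan_raw.ply")
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("maize_scan_filtered.ply", filtered)
```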
Due to the nature of the laser, when the structure to be reconstructed is very thin, the beam passes through it and is not detected. For this reason, some parts of the seedling will have disconnected sections. To solve this problem, different techniques based on mathematical morphology or growth by regions are used. Another type of solution consists of reconstructing the missing sections from the information obtained from photographs taken of the seedling simultaneously during the acquisition of the point cloud.

2.2. Application

To demonstrate the reconstruction reliability, a phenotypic analysis of characteristics such as height, volume, and classification of seedling organs was performed. For such phenotypes, it is first necessary to segment the point cloud. This is performed by limiting the working area on the Z-axis, eliminating the pot and outliers above the maximum allowed height of the seedling.
  • Seedling height h:
    To determine the height of the plant, the point in the cloud with the highest value on the Z-axis is required.
    $$h = \max(z). \tag{8}$$
  • Total Volume v:
    To calculate the volume of the plant, a voxelization of the points with a fixed voxel pitch is performed. The voxel count, expressed in Equation (9) as the summation over the occupancy parameter V, is then multiplied by the scale factor 2.5 × 10⁻³ associated with the voxel pitch, which was calibrated experimentally.
    $$v = 2.5 \times 10^{-3} \sum_{i=1}^{N} V_i. \tag{9}$$
  • Classification of organs:
    In order to separate the organs of the seedling, a classification is made with respect to the stem and leaves. The database is split in two, taking the scans from the first days of shoot and seedling growth, up to the third leaf, as one subset and the rest as another. In each subset, 60% of the point clouds are used for training a Random Forest classifier, 20% for tuning the classifier parameters, and the rest for model validation.
    Once this result has been obtained, leaf segments that were misclassified as stem are filtered out. To do this, a virtual ring is used that rises from the base of the plant to the highest point [31]. The radius of the ring was 0.0015, estimated and validated experimentally. A minimal sketch of this processing chain is given after this list.
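The sketch below strings the three steps together. It is only a schematic under stated assumptions: the input file name, the Z-axis limits, the voxel size, and the per-point morphological descriptors are placeholders, as the paper does not specify them exactly.

```python
import numpy as np
import open3d as o3d
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

points = np.loadtxt("maize01_day10.txt")  # hypothetical name; N x 3 XYZ in cm

# Segmentation: keep points between the pot rim and the maximum seedling height
z_min, z_max = 2.0, 40.0                  # illustrative limits [cm]
plant = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]

# Seedling height, Eq. (8)
height = plant[:, 2].max()

# Volume, Eq. (9): count occupied voxels and apply the experimental scale factor
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(plant))
grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.25)
volume = 2.5e-3 * len(grid.get_voxels())

# Stem/leaf classification with the 60/20/20 split described above. X would
# hold per-point morphological descriptors and y the manual labels shipped
# with the database; random placeholders are used here.
X = np.random.rand(len(plant), 5)
y = np.random.randint(0, 2, len(plant))
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.6)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5)
clf = RandomForestClassifier().fit(X_train, y_train)
print(height, volume, clf.score(X_test, y_test))
```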

3. Results and Discussion

As mentioned in Section 2, the point cloud of a cube-shaped reference object was used to establish the precision error of the reconstruction. The results obtained are shown in Table 2. As can be seen, the absolute error was approximately 0.03 cm with respect to the ground truth.
The constructed platform was used for 3D scanning of plants at different phenological stages. Figure 9a shows the development of a plant from the first plantation, which was scanned during three stages of its life cycle. The growth process of a plant from the second plantation is also presented (see Figure 9b). The color scale in the point clouds corresponds to the intensity values delivered by the LiDAR sensor. Each reconstruction took about 42.5 min with the default values set in Section 2.
A database containing 362 seedlings was created using the designed platform. Table 3 lists the data acquired in each planting campaign. The database, which is freely accessible, is organized into seven folders, one per campaign. Each contains a folder per plant, named with a plant identifier and the day of scanning according to its DAP. Each plant folder includes eight images of the plant taken every 45 degrees, a file with the raw point cloud taken with the LiDAR, and a file containing the processed point cloud with the stem and leaf labels.
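A hypothetical loader for a downloaded campaign folder is sketched below; the exact file names inside each plant folder are assumptions based on the description above.

```python
from pathlib import Path
import numpy as np

root = Path("campaign_1")  # local copy of one OSF campaign folder (assumed name)
for plant_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    images = sorted(plant_dir.glob("*.jpg"))   # eight views, one every 45 degrees
    for cloud_file in sorted(plant_dir.glob("*.txt")):
        xyz = np.loadtxt(cloud_file)           # one XYZ point per row
        print(plant_dir.name, cloud_file.name, xyz.shape)
```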
As mentioned in Section 2.2, the database was used to estimate the phenotypic characteristics of each plant. Figure 10 shows the percentage error obtained by comparing the actual height of eight randomly selected plants with the estimated one. Table 4 presents the estimated volume of a seedling over time and Table 5 the stem and leaf classification accuracy obtained with 70 samples, which was on average 89.41%.

4. Conclusions

This paper presents a new prototype LiDAR platform for plant phenotyping in a controlled environment. The acquisition time is relatively long (42.5 min) in order to obtain high accuracy (0.0325 cm) and precision (0.0310 cm) in the acquired points. With this equipment, a freely accessible database of 362 maize plants was constructed and used to obtain three phenotypic parameters, height, volume, and classification of stems and leaves, in order to verify the reliability of the database. The first two parameters were compared with the real values, obtaining errors of 2.3% and 7.1%, respectively.

Author Contributions

Conceptualization, H.F.M. and M.G.F.; methodology, M.G.F., H.F.M., D.M. and J.B.-L.; software, D.M. and H.F.M.; validation, H.F.M., M.G.F., D.M. and J.B.-L.; formal analysis and investigation, H.F.M., M.G.F., D.M. and J.B.-L.; data curation, D.M., M.G.F. and J.B.-L.; writing—original draft preparation H.F.M., M.G.F., D.M. and J.B.-L.; writing—review and editing H.F.M., M.G.F., D.M. and J.B.-L.; supervision H.F.M., M.G.F., D.M. and J.B.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the OMICAS program: “Optimización Multiescala In-silico de Cultivos Agrícolas Sostenibles”, anchored at the Pontificia Universidad Javeriana in Cali and funded within the Colombian Scientific Ecosystem by The World Bank, the Colombian Ministry of Science, Technology and Innovation, the Colombian Ministry of Education, the Colombian Ministry of Industry and Tourism, and ICETEX, under grant ID: FP44842-217-2018 and OMICAS Award ID: 792-61187.

Data Availability Statement

Our generated dataset is available online at: 1st campaign: https://osf.io/fcgwk/; 2nd campaign: https://osf.io/x5cn9/; 3rd campaign: https://osf.io/5vykw/; 4th campaign: https://osf.io/ks7my/; 5th campaign: https://osf.io/tnhxy/; 6th campaign: https://osf.io/h63jq/; 7th campaign: https://osf.io/7uvm4/.

Acknowledgments

We would like to thank the Universidad de Ibagué for the technical support in this work under research project 19-517-COL.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations Department of Economic and Social Affairs, Population Division. Available online: https://n9.cl/vbs5ri (accessed on 6 October 2021).
  2. Li, L.; Zhang, Q.; Huang, D. A Review of Imaging Techniques for Plant Phenotyping. Sensors 2014, 14, 20078–20111.
  3. Li, Z.; Guo, R.; Li, M.; Chen, Y.; Li, G. A review of computer vision technologies for plant phenotyping. Comput. Electron. Agric. 2020, 176, 105672.
  4. Fahlgren, N.; Gehan, M.A.; Baxter, I. Lights, camera, action: High-throughput plant phenotyping is ready for a close-up. Curr. Opin. Plant Biol. 2015, 24, 93–99.
  5. Gao, M.; Yang, F.; Wei, H.; Liu, X. Individual Maize Location and Height Estimation in Field from UAV-Borne LiDAR and RGB Images. Remote Sens. 2022, 14, 2292.
  6. Chen, Q.; Gao, T.; Zhu, J.; Wu, F.; Li, X.; Lu, D.; Yu, F. Individual Tree Segmentation and Tree Height Estimation Using Leaf-Off and Leaf-On UAV-LiDAR Data in Dense Deciduous Forests. Remote Sens. 2022, 14, 2787.
  7. Gyawali, A.; Aalto, M.; Peuhkurinen, J.; Villikka, M.; Ranta, T. Comparison of Individual Tree Height Estimated from LiDAR and Digital Aerial Photogrammetry in Young Forests. Sustainability 2022, 14, 3720.
  8. Wang, Y.; Wen, W.; Wu, S.; Wang, C.; Yu, Z.; Guo, X.; Zhao, C. Maize Plant Phenotyping: Comparing 3D Laser Scanning, Multi-View Stereo Reconstruction, and 3D Digitizing Estimates. Remote Sens. 2018, 11, 63.
  9. Zhang, X.; Huang, C.; Wu, D.; Qiao, F.; Li, W.; Duan, L.; Wang, K.; Xiao, Y.; Chen, G.; Liu, Q.; et al. High-Throughput Phenotyping and QTL Mapping Reveals the Genetic Architecture of Maize Plant Growth. Plant Physiol. 2017, 173, 1554–1564.
  10. Cabrera-Bosquet, L.; Fournier, C.; Brichet, N.; Welcker, C.; Suard, B.; Tardieu, F. High-throughput estimation of incident light, light interception and radiation-use efficiency of thousands of plants in a phenotyping platform. New Phytol. 2016, 212, 269–281.
  11. Guo, Q.; Wu, F.; Pang, S.; Zhao, X.; Chen, L.; Liu, J.; Xue, B.; Xu, G.; Li, L.; Jing, H.; et al. Crop 3D—A LiDAR based platform for 3D high-throughput crop phenotyping. Sci. China Life Sci. 2018, 61, 328–339.
  12. Young, S.N.; Kayacan, E.; Peschel, J.M. Design and field evaluation of a ground robot for high-throughput phenotyping of energy sorghum. Precis. Agric. 2019, 20, 697–722.
  13. Leotta, M.J.; Vandergon, A.; Taubin, G. Interactive 3D Scanning Without Tracking. In Proceedings of the XX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2007), Minas Gerais, Brazil, 7–10 October 2007; pp. 205–212.
  14. Quan, L.; Wang, J.; Tan, P.; Yuan, L. Image-based modeling by joint segmentation. Int. J. Comput. Vis. 2007, 75, 135–150.
  15. Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L. An automatic method for acquiring 3D models from photographs: Applications to an archaeological site. In Proceedings of the ISPRS International Workshop on Photogrammetric Measurements, Object Modeling and Documentation in Architecture and Industry, Thessaloniki, Greece, 7–9 July 1999.
  16. Leiva, F.; Vallenback, P.; Ekblad, T.; Johansson, E.; Chawade, A. Phenocave: An Automated, Standalone, and Affordable Phenotyping System for Controlled Growth Conditions. Plants 2021, 10, 1817.
  17. Murcia, H.F.; Tilaguy, S.; Ouazaa, S. Development of a Low-Cost System for 3D Orchard Mapping Integrating UGV and LiDAR. Plants 2021, 10, 2804.
  18. Murcia, H.; Sanabria, D.; Méndez, D.; Forero, M.G. A Comparative Study of 3D Plant Modeling Systems Based on Low-Cost 2D LiDAR and Kinect. In Proceedings of the Mexican Conference on Pattern Recognition, Mexico City, Mexico, 23–26 June 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 272–281.
  19. Brichet, N.; Fournier, C.; Turc, O.; Strauss, O.; Artzet, S.; Pradal, C.; Welcker, C.; Tardieu, F.; Cabrera-Bosquet, L. A robot-assisted imaging pipeline for tracking the growths of maize ear and silks in a high-throughput phenotyping platform. Plant Methods 2017, 13, 96.
  20. Reiser, D.; Vázquez-Arellano, M.; Paraforos, D.S.; Garrido-Izard, M.; Griepentrog, H.W. Iterative individual plant clustering in maize with assembled 2D LiDAR data. Comput. Ind. 2018, 99, 42–52.
  21. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 145, 235–247.
  22. Vázquez-Arellano, M.; Paraforos, D.S.; Reiser, D.; Garrido-Izard, M.; Griepentrog, H.W. Determination of stem position and height of reconstructed maize plants using a time-of-flight camera. Comput. Electron. Agric. 2018, 154, 276–288.
  23. Bao, Y.; Tang, L.; Srinivasan, S.; Schnable, P.S. Field-based architectural traits characterisation of maize plant using time-of-flight 3D imaging. Biosyst. Eng. 2019, 178, 86–101.
  24. Qiu, Q.; Sun, N.; Bai, H.; Wang, N.; Fan, Z.; Wang, Y.; Meng, Z.; Li, B.; Cong, Y. Field-Based High-Throughput Phenotyping for Maize Plant Using 3D LiDAR Point Cloud Generated With a “Phenomobile”. Front. Plant Sci. 2019, 10, 554.
  25. McCormick, R.F.; Truong, S.K.; Mullet, J.E. 3D sorghum reconstructions from depth images identify QTL regulating shoot architecture. Plant Physiol. 2016, 172, 823–834.
  26. Paulus, S.; Schumann, H.; Kuhlmann, H.; Léon, J. High-precision laser scanning system for capturing 3D plant architecture and analysing growth of cereal plants. Biosyst. Eng. 2014, 121, 1–11.
  27. Thapa, S.; Zhu, F.; Walia, H.; Yu, H.; Ge, Y. A Novel LiDAR-Based Instrument for High-Throughput, 3D Measurement of Morphological Traits in Maize and Sorghum. Sensors 2018, 18, 1187.
  28. Lehning, M.; SICK. sick_scan. Available online: https://github.com/SICKAG/sick_scan (accessed on 6 October 2021).
  29. Pitzer, B.; Toris, R. usb_cam. Available online: https://github.com/ros-drivers/usb_cam (accessed on 6 October 2021).
  30. Balta, H.; Velagic, J.; Bosschaerts, W.; De Cubber, G.; Siciliano, B. Fast statistical outlier removal based method for large 3D point clouds of outdoor environments. IFAC-PapersOnLine 2018, 51, 348–353.
  31. Gelard, W.; Devy, M.; Herbulot, A.; Burger, P. Model-based segmentation of 3D point clouds for phenotyping sunflower plants. In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 27 February 2017.
Figure 1. Illustration of the proposed platform based on a LiDAR sensor and a turntable for 3D plant reconstruction. (a) Frontal view, (b) Isometric view, (c) Top view, (d) Lateral view.
Figure 2. LiDAR configuration in the SOPAS ET software.
Figure 3. Schematic diagram of the connections for the electronic devices.
Figure 4. Schematic of the developed scanning system. The functional diagram shows the LiDAR device and the rotation platform and its reference frames.
Figure 5. Diagram of the active ros nodes during 3D reconstruction.
Figure 6. User Interface for LiDAR platform.
Figure 7. Measurement of the cube edges. (a) U1 and U2. (b) U3 and U4. (c) L1 and D1. (d) L2 and D2. (e) L3 and D3. (f) L4 and D4.
Figure 8. Diagram of the lengths used to measure the precision of the proposed device.
Figure 9. Three-dimensional (3D) point clouds of plants 01 and 08 in three growth phases. The color scale corresponds to the signal strength received by the Lidar.
Figure 10. Height estimation error of 8-point clouds.
Table 1. Main features of the LiDAR sensor.

Feature            | LMS4121R-13000
Application        | Indoor
Reading field      | Front
Light source       | Visible red light
Laser class        | 2 (IEC 60825-1:2014, EN 60825-1:2014)
Aperture angle     | 70°
Scanning frequency | 600 Hz
Angular resolution | 0.0833°
Working range      | 70 cm … 300 cm
Table 2. Measurement error obtained with the cube’s 3D point cloud (ground truth, point cloud measurements, and error calculation).

Coord. | ID | Ref [cm] | m [cm] | Indiv. Error [%] | Avg. Error [%] | m − m̄   | (m − m̄)²
X      | U1 | 5.5      | 5.4383 | 1.1218           | 1.6773         | −0.0134 | 0.0002
       | U2 | 5.5      | 5.5573 | 1.0418           |                |  0.1056 | 0.0111
       | D1 | 5.5      | 5.3808 | 2.1673           |                | −0.0709 | 0.0050
       | D2 | 5.5      | 5.3692 | 2.3782           |                | −0.0825 | 0.0068
Y      | U3 | 5.5      | 5.5431 | 0.7836           | 2.1418         |  0.0914 | 0.0084
       | U4 | 5.5      | 5.6141 | 2.0745           |                |  0.1624 | 0.0264
       | D3 | 5.5      | 5.3308 | 3.0764           |                | −0.1209 | 0.0146
       | D4 | 5.5      | 5.3552 | 2.6327           |                | −0.0965 | 0.0093
Z      | L1 | 5.5      | 5.4363 | 1.1582           | 1.6755         | −0.0154 | 0.0002
       | L2 | 5.5      | 5.4902 | 0.1782           |                |  0.0385 | 0.0015
       | L3 | 5.5      | 5.3051 | 3.5436           |                | −0.1466 | 0.0215
       | L4 | 5.5      | 5.6002 | 1.8218           |                |  0.1485 | 0.0220

Accuracy [%] = 1.8315; m̄ = 5.4517 cm; Σ(m − m̄)² = 0.1271; Abs. Error = 0.0310 cm
Table 3. Point clouds of maize plants separated by campaign, with their respective acquisition dates and number of scanned plants.

Campaign | Date                                 | # Plants | Link
First    | 7 July 2021 to 21 October 2021       | 21       | https://osf.io/fcgwk/
Second   | 21 October 2021 to 12 November 2021  | 40       | https://osf.io/x5cn9/
Third    | 2 February 2022 to 18 February 2022  | 45       | https://osf.io/5vykw/
Fourth   | 28 February 2022 to 23 March 2022    | 80       | https://osf.io/ks7my/
Fifth    | 28 March 2022 to 9 April 2022        | 55       | https://osf.io/tnhxy/
Sixth    | 25 April 2022 to 17 May 2022         | 80       | https://osf.io/h63jq/
Seventh  | 23 May 2022 to 14 June 2022          | 41       | https://osf.io/7uvm4/
Table 4. Estimation of the volume of a seedling (Maiz08) at its different temporal stages.

TAP (h) | Volume (cm³)
147.75  | 1.2068
167.91  | 1.5688
176.96  | 3.6859
192.02  | 7.3712
200.48  | 11.5586
297.30  | 19.7802
321.05  | 24.1338
345.16  | 38.5767
384.02  | 67.4274
432.32  | 100.5300
456.84  | 107.0390
Table 5. Accuracy obtained on 70 specimens by classifying their stems and leaves.

Name (Campaign 6) | Accuracy (%) | Name (Campaign 6) | Accuracy (%)
01_01 | 93.44 | 05_11 | 93.34
01_02 | 89.16 | 06_01 | 98.92
01_03 | 97.97 | 06_02 | 87.41
01_04 | 89.03 | 06_03 | 94.10
01_05 | 78.71 | 06_04 | 87.31
01_06 | 93.69 | 06_05 | 98.02
01_07 | 83.61 | 06_06 | 86.13
01_08 | 91.26 | 06_07 | 89.54
01_09 | 95.12 | 06_08 | 82.57
01_10 | 92.23 | 06_09 | 78.65
01_11 | 91.63 | 06_10 | 79.32
02_04 | 90.34 | 06_11 | 91.80
02_05 | 88.81 | 09_01 | 90.91
02_06 | 86.30 | 09_02 | 96.38
04_01 | 95.75 | 09_03 | 95.66
04_02 | 94.59 | 09_04 | 90.42
04_03 | 91.96 | 09_05 | 93.55
04_04 | 96.02 | 09_06 | 94.53
04_05 | 84.09 | 09_07 | 89.51
04_06 | 97.00 | 09_08 | 94.05
04_07 | 95.52 | 09_09 | 95.33
04_08 | 89.24 | 09_10 | 90.12
04_09 | 98.79 | 09_11 | 94.18
04_10 | 92.25 | 09_12 | 94.64
04_11 | 90.64 | 12_01 | 90.00
04_12 | 87.11 | 12_02 | 87.63
05_01 | 79.91 | 12_03 | 81.86
05_02 | 88.58 | 12_04 | 90.02
05_03 | 90.22 | 12_05 | 89.71
05_04 | 91.42 | 12_06 | 91.28
05_05 | 93.28 | 12_07 | 86.62
05_06 | 95.22 | 12_08 | 85.76
05_07 | 86.77 | 12_09 | 90.09
05_08 | 94.34 | 12_10 | 90.77
05_10 | 96.21 | 12_11 | 96.06

Average accuracy = 89.41%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
