Article

Non-Destructive Measurement of Three-Dimensional Plants Based on Point Cloud

1
College of Information and Electrical Engineering, China Agricultural University, Qinghuadonglu No. 17, HaiDian District, Beijing 100083, China
2
Engineering Practice Innovation Center, China Agricultural University, Qinghuadonglu No. 17, HaiDian District, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Plants 2020, 9(5), 571; https://doi.org/10.3390/plants9050571
Submission received: 26 March 2020 / Revised: 19 April 2020 / Accepted: 27 April 2020 / Published: 29 April 2020
(This article belongs to the Section Plant Modeling)

Abstract

In agriculture, information about the spatial distribution of plant growth is valuable for many applications. Quantitative study of plant characteristics plays an important role in research on plant growth and development, and non-destructive measurement of plant height based on machine vision technology remains one of the difficulties. In this paper we propose a methodology for three-dimensional (3D) reconstruction of growing plants with a Kinect v2.0 sensor and explore the measurement of growth parameters from the resulting 3D point cloud. The strategy includes three steps. First, the 3D point cloud data are preprocessed: plant registration is completed using point cloud outlier filtering and a surface smoothing method. Second, the locally convex connected patches method is used to segment the leaves and stem of the plant model; feature boundary points are extracted from each leaf point cloud, and a contour extraction algorithm yields the feature boundary lines. Finally, leaf length and width are calculated by Euclidean distance, leaf area by a surface integral method, and plant height by the vertical distance technique. The results show that the automatic plant-information extraction scheme is effective and that the measurement accuracy meets the required measurement standard. The established 3D plant model is the key to studying whole-plant information; it reduces the inaccuracy that occlusion introduces into the description of leaf shape and is conducive to studying the real growth status of plants.

1. Introduction

With the development of protected vegetable production, vegetable seedlings are increasingly valued by producers, and seedling production has become an important pillar of the vegetable industry. In agriculture, information about the spatial distribution of crop growth is valuable for applications such as biomass and yield estimation or increasing field work efficiency in terms of fertilizing, applying pesticides and irrigation [1,2]. Many researchers have carried out experiments on plant growth detection based on computer vision technology, mainly covering three aspects: the detection of external growth parameters, the detection of fruit maturity, and the detection of nutritional components [3].
Plant height is an important phenotypic morphology parameter that can be used not only as an indicator of overall plant growth vigor but also as a parameter to estimate traits [4,5]. The key to obtaining plant height is to identify the highest and lowest points on the image (or point cloud) and compute the height with a distance formula [6,7,8]. Using the Euclidean distance [9,10] to obtain plant height is common on 2D images: the plant skeleton is extracted on the region of interest (ROI), and the lowest and highest points of the skeleton are detected. The accuracy of pixel extraction in the region of interest of two-dimensional images is one factor affecting the accuracy of plant height measurement. Image processing is used to segment the skeleton, for example by color space calibration or by morphological methods. The height of the plant can then be obtained from marker points or by mapping the color space to the depth space. Calculating plant height by vertical distance [11,12,13,14,15] is a method used on 3D point clouds, computing the difference between the lowest and highest points along the height axis of the ROI. In studies of 3D models, some researchers computed the mesh distance [16,17] of every rasterized plant point to the ground. In three-dimensional plant height measurement, the focus is the accuracy of modeling; especially in field experiments, the spacing between plants and noise points from soil and weeds are the key factors affecting measurement accuracy. When measuring plant height indoors, the sensor is often placed horizontally facing the object, so the height coordinate is the y-axis; when measuring plant height outdoors, the sensor is placed vertically toward the ground, so the height coordinate is the z-axis.
The leaf is an important organ of vegetation: it determines the primary production of photosynthesis, plant evaporation, and the characterization of plant growth [18]. In studies of automatic measurement of plant leaves, leaf segmentation based on two-dimensional image processing is widely used, for example for tobacco [19], leafy vegetables [20], and pest-damaged leaves [21]. Stereo vision methods can also be used for leaf segmentation and measurement [18,22,23]. With the development of three-dimensional vision, more researchers segment the main organs of the plant after reconstructing the whole plant, extract the leaves, and study their morphological features. For point cloud organ segmentation, the commonly used methods are segmentation of the whole plant by the region growing algorithm [24,25,26], segmentation by locally convex connected patches (LCCP) [27,28], segmentation by a histogram clustering algorithm [29], and segmentation that starts by finding the stem in a 3D point cloud and clusters the leaves by geometric constraints [30,31].
Some researchers have also developed measurement systems that reconstruct 3D point clouds for plant growth information [32,33,34,35,36]. For example, Andújar et al. [37] explored the possibilities of using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. Yamamoto et al. [38] measured growth information of strawberry in a greenhouse through the color and depth images of a Kinect. Li et al. [39] developed a technical system for the measurement, reconstruction, and trait extraction of rice canopy architectures. Methods that use a Kinect and a turntable to obtain phenotypic information for indoor plants need to rotate the turntable multiple times at a fixed angle interval [40,41,42,43] and then use a point cloud registration algorithm for 3D reconstruction.
In this paper, a new point cloud reconstruction method for the morphological parameters of greenhouse plants based on Kinect v2.0 and a turntable is explored. The processed 3D plant model contains spatial information, from which the key growth parameters, leaf length, width, surface area, and plant height, are extracted. The specific workflow of this article is shown in Figure 1: first, the point clouds of the plant are obtained by rotating the turntable through eight 45° intervals, and the 3D plant model is reconstructed by the registration method; second, the leaves of the 3D model are segmented and their edge points determined, giving the phenotypic parameters of the leaves; finally, the plant height is calculated by vertical distance.

2. Preprocessing of a Three-Dimensional Plant Model

In this experiment, a Kinect camera and an electric turntable were used to build a 3D plant model as follows. The camera position is fixed and the plant is placed on the turntable; the height h of the Kinect is set to 0.7 m, and the distance d between the Kinect and the plant is 1.0 m. Figure 2 shows the experimental devices placed at fixed positions and marks the world coordinate system of the point cloud.
Place the plant on the turntable and record the initial point cloud of the plant. Turn the turntable clockwise by 45°, then stop it and record the data as one piece of point cloud. Repeat until the turntable has rotated 360° and eight pieces of point clouds are obtained. The original point cloud contains invalid data, so in this experiment the outlier points are removed by a statistical outlier filter [44]: the distance d_i from the query point to all its neighboring points is calculated, and a threshold standard deviation ε is set to judge whether the point is an outlier. The eight pieces of point clouds are registered by the iterative closest point (ICP) [45] algorithm to obtain the initial three-dimensional point cloud model. In Figure 3, the eight pieces of point clouds obtained from different directions correspond to the rotation angle of the turntable: angle0 means the turntable has rotated 0°, angle1 means 45°, angle2 means 90°, angle3 means 135°, angle4 means 180°, angle5 means 225°, angle6 means 270°, and angle7 means 315°; angle_n is recorded every 45° (n ∈ [0, 7]) [46]. The point cloud of the turntable and the surrounding scene is marked as p_b, and the point cloud of the ROI from each angle can be obtained by subtracting p_b. Figure 3 shows the plant point clouds obtained after rotating the turntable through the eight intervals; the three-dimensional points of the ROI are color-rendered along the x-axis for clear display. Eight directions were chosen for data collection because two adjacent point clouds can fill the holes at the edges of each other after registration, so the three-dimensional point cloud reduces the influence of holes on accuracy.
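The statistical outlier filter described above can be sketched in a few lines of NumPy. The neighborhood size k and the standard-deviation multiplier eps below are hypothetical defaults, not values from the paper:

```python
import numpy as np

def statistical_outlier_filter(points, k=8, eps=1.0):
    """Keep a point if the mean distance to its k nearest neighbours is
    within eps standard deviations of the global mean of that statistic.
    `points` is an (N, 3) array; k and eps are hypothetical defaults."""
    # brute-force pairwise distances (a k-d tree scales better for large clouds)
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    # mean distance of each point to its k nearest neighbours (excluding itself)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    mu, sigma = knn_mean.mean(), knn_mean.std()
    return points[knn_mean <= mu + eps * sigma]
```

A far-away stray point gets a huge mean neighbour distance and is dropped, while the dense cluster survives.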
Plant reconstruction is divided into two parts in this research. The first part registers angle0 with angle7, angle1, and angle2, and angle4 with angle3, angle5, and angle6, separately; the former result is called Pos0 and the latter Pos1. The second part stitches the Pos0 and Pos1 point clouds to obtain a complete 360° reconstruction model. The commonly used stitching method here is the ICP algorithm, which has many advantages: it considers not only registration between point sets but also registration from point set to model and from model to model.
The basic principle of the ICP algorithm is as follows: in the target point cloud P and the source point cloud Q, find the nearest neighbor pairs (p_i, q_i) according to certain constraints, then calculate the optimal matching parameters R and t that minimize the error function. The error function E(R, t) is given in Equation (1) [45],
E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - (R p_i + t) \right\|^2
where n is the number of nearest neighbor pairs, p_i is a point in the target point cloud P, q_i is the nearest point in the source point cloud Q, R is the rotation matrix, and t is the translation vector. The traditional ICP algorithm can be summarized in two steps: (1) compute correspondences between the two point clouds; (2) calculate the transformation that minimizes the distance between the corresponding points. The disadvantages [47] of this method are as follows: noise or abnormal data may prevent the algorithm from converging; the choice of initial value has an important impact on the final registration result; and the algorithm, focusing on the similarity between target and source and taking the minimum error as the evaluation standard, ignores whether there is a positional relationship between the target point cloud and the source point cloud.
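The two-step loop behind Equation (1) can be sketched as follows, using the closed-form SVD (Kabsch) solution for R and t at each iteration. This is a minimal illustration of the classic algorithm, not the authors' implementation:

```python
import numpy as np

def icp_step(P, Q):
    """One ICP iteration: match each point of P to its nearest neighbour in Q,
    then solve for the rigid (R, t) minimizing E(R, t) in closed form (SVD)."""
    d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    q = Q[d.argmin(axis=1)]                     # step 1: correspondences
    pc, qc = P.mean(axis=0), q.mean(axis=0)
    H = (P - pc).T @ (q - qc)                   # step 2: Kabsch alignment
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = qc - R @ pc
    return R, t

def icp(P, Q, iters=50):
    """Apply icp_step repeatedly, moving P onto Q."""
    for _ in range(iters):
        R, t = icp_step(P, Q)
        P = P @ R.T + t
    return P
```

With a small initial misalignment the nearest-neighbour correspondences are mostly correct and the loop converges; for large rotations it exhibits exactly the local-optimum failure discussed above.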
The improved ICP algorithm in this paper is more suitable for registering point clouds captured from different directions with a turntable. This stitching method first uses a rotation matrix and a translation vector to encode the relationship between adjacent point clouds, then uses the ICP algorithm to iteratively obtain R and t. The point clouds in the eight directions obtained in this paper clearly have positional relations. Because they are formed by multiple rotations of the same plant, leaves seen from different directions are similar in shape, which can cause the ICP algorithm to fall into a local optimum; the algorithm is therefore improved. According to the positional relationship of adjacent patch point clouds, the source point cloud is first rotated by angle θ1, and then ICP registration is performed on each point cloud, as given in Equation (2),
\omega_1 = \begin{pmatrix} \cos\theta_1 & 0 & \sin\theta_1 \\ 0 & 1 & 0 \\ -\sin\theta_1 & 0 & \cos\theta_1 \end{pmatrix}, \quad \sigma_1 = (x_t - x_s,\; y_t - y_s,\; z_t - z_s)

E(R_1, t_1) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - \big( R_1 (\omega_1 p_i + \sigma_1) + t_1 \big) \right\|^2
where (x_t, y_t, z_t) is the centroid of the target point cloud, (x_s, y_s, z_s) is the centroid of the source point cloud, and ω1 is the rotation matrix around the y-axis by angle θ1, whose value is based on the rotational relationship between the target and source point clouds. After obtaining the centroid vectors of the eight pieces of point clouds, the offset σ1 between the target and source point clouds is calculated; the target point cloud is taken as the registration center, and the source point cloud is moved to the target centroid according to σ1.
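The pre-alignment of Equation (2), rotation by the known turntable angle θ1 about the y-axis followed by the centroid shift σ1, can be sketched as below; array shapes and the function name are assumptions for illustration:

```python
import numpy as np

def pre_align(source, target, theta):
    """Pre-align a turntable capture before ICP, per Equation (2): rotate the
    source about the y-axis by the known turntable angle theta, then shift it
    by the centroid difference sigma so both clouds share a registration
    centre. `source` and `target` are (N, 3) arrays."""
    c, s = np.cos(theta), np.sin(theta)
    omega = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])            # rotation matrix about the y-axis
    rotated = source @ omega.T
    sigma = target.mean(axis=0) - rotated.mean(axis=0)
    return rotated + sigma
```

After this step the residual transform is small, so the subsequent ICP iterations start near the correct alignment instead of a local optimum.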
In this experiment, angle0 is taken as the target point cloud and angle7, angle1, and angle2 are rotated to register each pair respectively; angle4 is taken as the target point cloud and angle3, angle5, and angle6 are rotated to register each pair respectively. For example, take angle1 as the source point cloud and angle0 as the target point cloud in Figure 4a, where green represents angle0 and red represents angle1; angle1 is rotated 45° anticlockwise relative to angle0, so θ1 is 45°. The number of iterations for matching the target and source point clouds is 1000. The traditional ICP algorithm cannot judge the relationship between the source and target point clouds, as shown in Figure 4b: the stem and leaves do not match correctly because the positional relationships of the point clouds are neglected in the matching process, so the stitching error is large. The positional relationship between the source and target point clouds should be determined in this research, that is, the ICP algorithm is applied after rotation by θ1; the result of our method is shown in Figure 4c, where after 1000 iterations the leaves and stem match well. According to the rotation direction and angle of the turntable, the problem of ignoring the relationship between target and source point clouds in ICP registration can be solved by calculating the rotation angle in advance when registering adjacent point clouds.
Finally, the resulting point clouds are merged. The merged point cloud based on angle0 is Pos0, and the merged point cloud based on angle4 is Pos1. Pos0 is obtained using Equation (2): rotate angle1 counterclockwise 45° and match with angle0; rotate angle2 counterclockwise 90° and match with angle0; rotate angle7 clockwise 45° and match with angle0; then merge the results as Pos0. Similarly for Pos1: rotate angle5 counterclockwise 45° and match with angle4; rotate angle6 counterclockwise 90° and match with angle4; rotate angle3 clockwise 45° and match with angle4; finally, merge the results as Pos1.
After obtaining the Pos0 and Pos1 point clouds, the ICP method cannot be used to match them directly, because Pos0 and Pos1 are in different 3D coordinate systems; Pos1 must be rotated by 180° to bring it into the same coordinate system as Pos0.
For the Pos0 and Pos1 point clouds in Figure 5a, red represents Pos0 and green represents Pos1; each half of the 360° model has been registered separately, so registering them correctly directly affects the accuracy of the whole model. Because of the randomness of leaf growth, the centroids of Pos0 and Pos1 do not coincide. If ICP registration is applied directly, it falls into a local optimum and mismatches, as shown in Figure 5b, which does not reflect the true relationship between the Pos0 and Pos1 point clouds. Our approach for this situation is to compute the difference between the Pos0 and Pos1 centroids, move the target point cloud by this centroid vector difference, and then use the ICP method to obtain a complete plant point cloud model, as shown in Figure 5c. The algorithm then does not fall into a local optimum, and the registration of Pos0 and Pos1 is better. Here, σ2 is the difference between the Pos0 and Pos1 centroid vectors, as in Equation (3):
\omega_2 = \begin{pmatrix} \cos\theta_2 & 0 & \sin\theta_2 \\ 0 & 1 & 0 \\ -\sin\theta_2 & 0 & \cos\theta_2 \end{pmatrix}, \quad \sigma_2 = (x_{pos0} - x_{pos1},\; y_{pos0} - y_{pos1},\; z_{pos0} - z_{pos1})

E(R_2, t_2) = \frac{1}{n} \sum_{i=1}^{n} \left\| q_i - \big( R_2 (\omega_2 p_i + \sigma_2) + t_2 \big) \right\|^2
where (x_pos0, y_pos0, z_pos0) is the centroid of Pos0, (x_pos1, y_pos1, z_pos1) is the centroid of Pos1, ω2 is the rotation matrix around the y-axis by angle θ2, and θ2 is 180°. After obtaining the centroid vectors of Pos0 and Pos1, the offset σ2 between them is calculated; Pos0 is taken as the registration center, and Pos1 is moved to the centroid coordinate of Pos0 according to σ2. At this point, the registration and establishment of the 3D model is complete.
To create complete models, glossy surfaces and occlusions in the point cloud must be accounted for. A solution is a resampling algorithm, which attempts to recreate the missing parts of the surface through high-order polynomial interpolation between surrounding data points. By resampling, small errors can be corrected, and the double-wall artifacts generated by registering the eight angle point clouds together can be smoothed. The moving least squares (MLS) surface reconstruction [48] method can estimate normal vectors based on polynomial reconstruction and can also smooth and resample noisy data. To achieve a smooth surface, the K nearest-neighbor radius [49] of the fit is set to 0.5 mm, and the whole 360° point cloud is smoothed. As shown in Figure 6, a leaf is extracted from the whole point cloud registered from Pos0 and Pos1, and the smoothed point cloud is obtained by MLS processing. MLS smooths the surface and filters noise points.
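A first-order sketch of the MLS idea, projecting each point onto the best-fit plane of its k nearest neighbours, is shown below. The paper uses higher-order polynomial MLS, so this is only the simplest instance of the technique, with a hypothetical neighborhood size k:

```python
import numpy as np

def mls_smooth(points, k=10):
    """First-order moving-least-squares smoothing sketch: project each point
    onto the best-fit plane (via PCA/SVD) of its k nearest neighbours."""
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    out = np.empty_like(points)
    for i, idx in enumerate(nn):
        nbrs = points[idx]
        c = nbrs.mean(axis=0)
        # smallest principal axis of the neighbourhood = local surface normal
        _, _, Vt = np.linalg.svd(nbrs - c)
        n = Vt[-1]
        out[i] = points[i] - np.dot(points[i] - c, n) * n
    return out
```

Because each local plane averages the noise of k neighbours, the projected points scatter less around the true surface than the raw samples.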
Through the above steps, a whole point cloud model with less data and accurate normals and curvatures is obtained, which benefits subsequent operations such as feature point extraction and feature boundary line collection.

3. Measurement of the Phenotypic Parameters of a Plant

3.1. Extraction of Boundary Points on a Plant Leaf

In this section, we discuss how to segment leaves and stems on a complete plant model, extract the contour of each leaf, and calculate the leaf phenotype from the three-dimensional points of the contour. In this paper, a k-d tree is used to organize the leaf point cloud data and realize fast nearest-neighbor retrieval based on the fast library for approximate nearest neighbors (FLANN) [50]. The spatial topological relationship between data points is established to facilitate k-nearest-neighbor search.
Pepper plants are used as the research objects, with three-dimensional models generated from the eight pieces of point clouds. On the basis of the 3D model, the stem and leaves are divided into different parts so that the phenotypic characteristics of each leaf can be calculated. The leaves for measurement are extracted from the pepper plants by the locally convex connected patches (LCCP) method [51]. The principle of the LCCP method is, for the plant model, first to calculate the convexity-concavity relationship of adjacent patches. The relationship is judged by the extended connectivity criterion and the sanity criterion. The extended connectivity criterion uses the angles between the line connecting the patch centers x1, x2 and the normal vectors n1, n2 of the adjacent patches: let t be the center-to-center vector from patch p_j to patch p_i; the angle between t and n1 is a1, and the angle between t and n2 is a2. If a1 > a2, the relationship of the two patches p_i, p_j is concave; otherwise it is convex (equivalently, the pair is convex when (x1 − x2)·(n1 − n2) > 0). Let d be the difference of the vectors x1, x2 and s the cross product of the normals n1, n2. If the relationship of the two patches is convex, it is further checked by the sanity criterion: when the angle between d and s is greater than the threshold δ, the relationship is confirmed as convex; otherwise it is treated as concave.
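The two criteria can be combined into a single convexity test, sketched below with the dot-product form of the extended connectivity criterion written out explicitly; the default value of δ is a hypothetical choice, not a value from the paper:

```python
import numpy as np

def is_convex(x1, n1, x2, n2, delta=np.deg2rad(60)):
    """Judge the relationship of two adjacent patches with centres x1, x2 and
    unit normals n1, n2. Extended connectivity criterion: the pair is convex
    when (x1 - x2) . (n1 - n2) > 0. Sanity criterion: a convex verdict is
    kept only if the angle between the join direction d and the normals'
    cross product s exceeds delta."""
    if np.dot(x1 - x2, n1 - n2) <= 0:
        return False                             # concave
    d = (x2 - x1) / np.linalg.norm(x2 - x1)
    s = np.cross(n1, n2)
    norm_s = np.linalg.norm(s)
    if norm_s < 1e-9:                            # parallel normals: flat pair
        return True
    ang = np.arccos(np.clip(abs(np.dot(d, s / norm_s)), 0.0, 1.0))
    return bool(ang > delta)
```

On a ridge the normals lean apart and the test returns convex; in a valley they lean together and it returns concave, which is exactly the change in sign exploited at the stem-leaf junctions.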
After marking the concavity-convexity relationship of each pair of adjacent small regions, the region growing algorithm is used to cluster the small regions in Figure 7a into the larger objects in Figure 7b, represented by small blocks of different colors. This algorithm, restricted by the convexity of the small regions, can clearly distinguish the boundary between stem and leaves, as shown in Figure 7b, where red lines represent concavity and blue lines represent convexity between adjacent patches. There is an obvious concavity-convexity change at the junction of the stem and the leaves, which meets the criteria of LCCP segmentation.
The convexity-concavity relationship of the plant point cloud obtained by the LCCP algorithm is displayed in different colors. From the segmentation results, the closer a leaf is to the bottom of the plant, as shown in Figure 7b, the more obvious the relationship between leaf and stem; the closer a leaf is to the top of the plant, for example leaf 3 and leaf 4, the less obvious the relationship, so such leaves cannot be segmented correctly by the LCCP method. Because the leaves at the top of the plant are closer to the stem, they are treated as a whole, and it is necessary to segment them manually.
For each divided leaf, a boundary condition [52] is used to determine whether a three-dimensional point of the leaf is an internal point or an external contour point. For any point p, a tangent plane is established from the point and its k neighborhood points, and each neighbor is projected onto this plane; then it is judged whether the point p is a boundary feature point. This is repeated until all points are judged and the boundary feature point set is obtained. When the target point is a boundary point, few or no points appear in its upper region; this observation yields a discrimination mechanism for boundary feature points. As shown in Figure 8, the upper part of point p is divided into two areas, I and II, by the vertical and horizontal lines passing through p, and whether p is a boundary feature point can be determined by whether there are data points in these two areas. Specifically, taking k = 10 as an example, three situations arise: (1) there are no points in either area, so p is a boundary point; (2) there are no points in one of the areas, so p is a boundary point; (3) there are points in both areas; then find the vector closest to the vertical line in each of the two areas and the angle between them, and if this angle is greater than the set threshold ε, p is a boundary point (the threshold is π/2 in this paper). Points satisfying any of these cases are boundary points and are preserved. With this method, edge extraction is performed on the segmented 3D leaves.
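A common way to implement this kind of test is an angular-gap check on the neighbours projected into the tangent plane: an interior point is surrounded on all sides, while a boundary point leaves a large empty sector. The sketch below is that variant, not the paper's exact two-area formulation:

```python
import numpy as np

def is_boundary(p, neighbors, gap=np.pi / 2):
    """Boundary test sketch: project the neighbours of p onto the local
    tangent plane (fit by SVD) and check the largest angular gap between
    consecutive neighbour directions around p."""
    c = neighbors.mean(axis=0)
    _, _, Vt = np.linalg.svd(neighbors - c)
    u, v = Vt[0], Vt[1]                     # in-plane axes of the tangent plane
    rel = neighbors - p
    ang = np.sort(np.arctan2(rel @ v, rel @ u))
    # gaps between consecutive directions, including the wrap-around gap
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
    return bool(gaps.max() > gap)
```

A point ringed by neighbours has only small gaps and is classified as interior; a point whose neighbours all lie to one side shows a gap larger than π/2 and is kept as a contour point.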
The boundary points of each divided leaf are the basis for calculating its length and width. Taking a green pepper plant at the seedling stage as the research object, the LCCP algorithm is used to segment the leaves and stem, the boundary point judgment method is used to separate internal and external points, and the edge points of each leaf are then obtained; the leaves and stem are displayed in different colors. There are five leaves, and the edge points are displayed in black in Figure 9; these are the key points for obtaining the leaf's phenotypic parameters.

3.2. Calculation of the Length, Width and Surface Area of a Leaf

The length and width are calculated from the contour of each leaf. Among the three-dimensional boundary points, p_top at the tip and p_bottom at the base of the leaf are manually selected, and the Euclidean distance between them is taken as the length of the leaf; the pair of edge points p_side1 and p_side2 with the maximum distance perpendicular to the leaf length are selected, and the Euclidean distance between them is taken as the width of the leaf, as in Equation (4),
l_{leaf} = \sqrt{(x_t - x_b)^2 + (y_t - y_b)^2 + (z_t - z_b)^2}, \qquad w_{leaf} = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2},
where (x_t, y_t, z_t) is the coordinate of p_top, (x_b, y_b, z_b) is the coordinate of p_bottom, (x_1, y_1, z_1) is the coordinate of p_side1, and (x_2, y_2, z_2) is the coordinate of p_side2; l_leaf is the length of the leaf and w_leaf is its width. In Figure 10, the blue three-dimensional points are the leaf contour obtained from the boundary conditions, showing that the length and width of a leaf can be obtained by the Euclidean distance method, where length refers to the three-dimensional distance between the top and the bottom of the leaf, and width refers to the three-dimensional distance between the midpoints on the two sides of the leaf.
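Equation (4) reduces to two 3-D norms; a minimal sketch (the function name is an illustrative choice):

```python
import numpy as np

def leaf_length_width(p_top, p_bottom, p_side1, p_side2):
    """Equation (4): leaf length and width as 3-D Euclidean distances between
    manually selected boundary points (each a length-3 coordinate)."""
    length = np.linalg.norm(np.asarray(p_top) - np.asarray(p_bottom))
    width = np.linalg.norm(np.asarray(p_side1) - np.asarray(p_side2))
    return length, width
```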
In the 3D plant model, the (x, y) components of the point cloud coordinates (x, y, z) can be regarded as coordinate pairs sampled on the xOy plane. For grid sample points, two matrices of the same size are generated, with the vector x as rows and the vector y as columns: the rows of x run from the minimum value x_min to the maximum value x_max of the model, with data generated every 0.1 mm, forming matrix XI; similarly, the columns of y run from the minimum value y_min to the maximum value y_max, with data generated every 0.1 mm, forming matrix YI. XI and YI form a uniform grid: XI is built from a row vector and fixes the number of columns, and YI is built from a column vector and fixes the number of rows. The surface (XI, YI, ZI) given by z = f(x, y) is fitted to the scattered data in the vectors (x, y, z): cubic interpolation is performed at the query points specified by (XI, YI), and the interpolated values ZI generate a smooth surface. The surface passes through the data points defined by (x, y) to form a complete mesh surface.
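Building the uniform (XI, YI) grid from the x/y extent of the leaf cloud can be sketched as below (coordinates assumed in mm, with the paper's 0.1 mm step as the default; the interpolation of ZI itself is omitted here):

```python
import numpy as np

def make_grid(points, step=0.1):
    """Build the uniform (XI, YI) query grid described above from the x/y
    extent of a leaf point cloud; `points` is an (N, 3) array in mm."""
    xi = np.arange(points[:, 0].min(), points[:, 0].max() + step, step)
    yi = np.arange(points[:, 1].min(), points[:, 1].max() + step, step)
    # XI repeats xi along rows, YI repeats yi along columns, same shape
    return np.meshgrid(xi, yi)
```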
The leaf area is calculated from the three-dimensional points. For the reconstructed leaf surface, the area is calculated by a surface integral algorithm, adding the areas of small rectangular blocks to obtain the surface area. When calculating the gradient, the step size must correspond to the small rectangular block's length dx and width dy; dx and dy in Equation (5) are the grid steps along the x-axis and y-axis directions respectively, and z_x, z_y are the slopes of the surface in those directions. The area obtained by integrating the surface s over the region D_xy is:
A = \iint_{D_{xy}} \sqrt{1 + z_x^2 + z_y^2}\, \mathrm{d}x\, \mathrm{d}y.
The fitted leaf surface is shown in Figure 11; the leaf is a closed surface, and dx and dy are set to 0.0001 m, that is, the area of each small square is accumulated at every 0.1 mm step.
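Equation (5) can be approximated numerically by evaluating the integrand on the interpolated grid and summing the cells; a sketch using numerical gradients, checked against a tilted plane z = x over the unit square, whose exact area is √2:

```python
import numpy as np

def surface_area(ZI, step):
    """Approximate Equation (5): integrate sqrt(1 + z_x^2 + z_y^2) over a
    regular grid of heights ZI with spacing `step` in both x and y."""
    zy, zx = np.gradient(ZI, step)          # numerical partial derivatives
    integrand = np.sqrt(1.0 + zx ** 2 + zy ** 2)
    return integrand.sum() * step * step    # Riemann sum over the grid cells
```

On the tilted plane every cell contributes sqrt(2) * step^2, so the sum approaches the analytic value as the step shrinks.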

3.3. Measurement of the Height of a Plant

The height of the plant is obtained from the vertical distance along the y direction: by comparing the three-dimensional coordinates of all points, the maximum value y_max and minimum value y_min along the y direction are found, as in Equation (6). The measured cloud includes the flowerpot: H1 is the vertical extent of the whole point cloud, and since the flowerpot height H2 is contained in H1, the actual height of the flowerpot must be subtracted to obtain the true plant height H. In Equation (7), the height H1 of the bounding cuboid is the sum of the plant height H and the flowerpot height H2, and the flowerpot height is subtracted in this experiment.
y_{min} \le y_i \le y_{max}, \; (1 \le i \le n), \qquad H_1 = y_{max} - y_{min},

H_2 = y_{max}^{pot} - y_{min}^{pot}, \qquad H = H_1 - H_2.
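Equations (6) and (7) amount to two axis-aligned extents; a minimal sketch, with y as the height axis as in the indoor setup described above:

```python
import numpy as np

def plant_height(whole_cloud, pot_cloud):
    """Equations (6)-(7): plant height H as the vertical (y-axis) extent H1 of
    the whole cloud minus the vertical extent H2 of the flowerpot cluster.
    Both inputs are (N, 3) arrays."""
    h1 = whole_cloud[:, 1].max() - whole_cloud[:, 1].min()
    h2 = pot_cloud[:, 1].max() - pot_cloud[:, 1].min()
    return h1 - h2
```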
The height of the flowerpot H2 can be obtained from the LCCP segmentation result in Figure 7b: the concavity-convexity relationship between the flowerpot and the stem of the plant separates the two into different clusters. Therefore, the difference between the maximum value y_max^pot and the minimum value y_min^pot of the flowerpot along the y-axis gives the vertical extent of the flowerpot.
In Figure 12, the red arrow represents the x direction, the blue arrow the y direction, and the green arrow the z direction; the minimum external cuboid is drawn in the principal-direction coordinate system. Then, for the y-axis of interest in this paper, the difference between the cuboid's vertical extent and the actual flowerpot height gives the height of the plant.

4. Experiment

The experimental results are analyzed by comparing the measured values with the actual values. Plant height, leaf length, width, and surface area are the main analysis indexes. For the experiment in this paper, the absolute error (AE) of plant height and the mean absolute error (MAE) and root mean square error (RMSE) of leaf length, leaf width, and surface area of each plant are calculated. Ten seedling pepper plants are selected as the test objects, denoted plant_n (n ∈ [1, 10]).
Based on the Kinect v2.0, the 3D model of green pepper was constructed, and the plant's phenotypic parameters, including plant height and leaf length, width, and area, were measured. The AE of plant height is the difference between the actual and measured values, and the MAE of plant height is the average of the ten plants' AEs. The minimum actual plant height of the ten green peppers is 11.12 cm and the maximum is 21.09 cm. In Figure 13, the maximum AE of plant height is 0.66 cm, the minimum AE is 0.15 cm, and the MAE of plant height is 0.392 cm.
The leaf phenotype is an important growth parameter for pepper seedlings. In this paper, the pepper point clouds were segmented by the LCCP method, four leaves of each green pepper were selected for measurement, and their values recorded as measured values. The average length, width, and area errors of the four leaves of each green pepper are denoted MAE_l, MAE_w, and MAE_a. The maximum value of MAE_l is 0.365 cm and the minimum is 0.1825 cm; the maximum value of MAE_w is 0.305 cm and the minimum is 0.2175 cm; the maximum value of MAE_a is 1.1125 cm² and the minimum is 0.821 cm². The MAEs of leaf length, width, and area over these ten peppers are denoted MAE_leaf-length, MAE_leaf-width, and MAE_leaf-area. In Figure 14, MAE_leaf-length is 0.2537 cm, MAE_leaf-width is 0.2676 cm, and MAE_leaf-area is 0.956 cm². MAE_l and MAE_w correlate positively with MAE_a, because the leaf area is related to the length and width values: as MAE_l and MAE_w increase, MAE_a becomes larger.
In Table 1 the measurement accuracy of this paper is compared with other references. Among these methods, References [24,25] use binocular vision to obtain the point cloud, Reference [28] uses a 3D laser scanner, and References [3,15] use Kinect to obtain the point cloud of regions of interest (ROIs). Reference [25] used VisualSFM to register the point clouds of eggplant, pepper and cucumber, segmented the plants by the region growing method, and measured the leaf parameters. Reference [24] used 3DSOM to register the point cloud of Gossypium hirsutum, segmented the plant by region growing and a mesh method, and measured the leaf parameters. Reference [28] used a target-ball extraction algorithm [44] to register the point cloud of apple tree branches, segmented the plant by LCCP and the K-means clustering algorithm, and measured the leaf parameters. Reference [3] used high-resolution RGB images to register the point cloud of pepper and measured the leaf parameters and plant height. Reference [15] segmented cucumber plants using the Euclidean clustering method and obtained plant height with the vertical distance method. Compared with the other methods, the MAE and RMSE of our method show lower comprehensive errors, and our method has an advantage in leaf surface area measurement. The measurement errors of plant height, leaf length and leaf width are also at the same level as those of the other methods.

5. Conclusions

The research objects of this paper are plants in the early growing period, taking green pepper plants as an example. Each leaf plays an important role in the growth of the plant, and the information summed over the single leaves of the whole plant is useful for assessing the growth state. The established three-dimensional model is the key to studying whole-plant information. Using a depth camera and a turntable to complete point cloud acquisition and registration is the basic operation for plant segmentation, height measurement and extraction of a leaf's phenotypic features. This paper processes the point clouds according to the characteristics of rotation by equal intervals, which prevents the registration from falling into a local optimum and ignoring the overall information. The plant model obtained by rotation and registration reduces the inaccuracy of leaf shape description caused by different shooting angles.
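Because the turntable advances by a known equal interval (eight views, so 45° per step, is assumed here for illustration), each new view can be coarsely pre-aligned by rotating it back about the turntable axis before ICP refinement; ICP then only has a small residual to correct, which is what keeps it away from local optima. A minimal sketch of that initialization:

```python
import numpy as np

def rotation_about_y(deg):
    """Homogeneous 4x4 rotation about the turntable (y) axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[  c, 0.0,   s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def initial_guess(view_index, n_views=8):
    """Coarse alignment of view k back to view 0: rotate by -k * (360 / n)."""
    return rotation_about_y(-view_index * 360.0 / n_views)

# A point captured after the turntable advanced 45 degrees maps back near
# its view-0 position; ICP refinement then starts from this guess.
p = np.array([1.0, 0.5, 0.0, 1.0])      # homogeneous point in view 0
p_view1 = rotation_about_y(45.0) @ p    # the same point as seen at view 1
p_back = initial_guess(1) @ p_view1
print(np.allclose(p_back, p))           # True
```

In practice the pre-rotated cloud would be passed to an ICP routine as the initial transformation rather than an identity guess.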
The LCCP algorithm used to segment the plant point cloud model is a popular three-dimensional point segmentation method. It avoids the initial point selection required by region growing algorithms. The LCCP algorithm segments points according to concave-convex features, and there is a natural convexity-concavity relationship between leaves and stems; this is another reason why we chose it to segment the plant's point cloud. However, the convexity-concavity relationship between the point clouds of adjacent leaves is not always obvious, so the young leaves at the top of the plant sometimes need to be selected manually. For plants later in the growing period, the Leaf Area Index has a greater influence on the growth state. At that stage the leaves are dense and each leaf is larger, so obtaining each leaf individually is impossible: there is often occlusion between dense leaves, which cannot be reduced by changing the shooting angle. Therefore, it is necessary to find new phenotypic features that can represent plant growth instead of leaf parameters.
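The convexity check at the heart of LCCP [51] compares the normals of two adjacent supervoxels against the vector joining their centroids. The following is a simplified sketch of that criterion under our reading of it, ignoring LCCP's additional sanity criterion and noise filtering; the threshold value is illustrative:

```python
import numpy as np

def is_convex(c1, n1, c2, n2, tol_deg=10.0):
    """Simplified LCCP-style convexity test for two adjacent supervoxels.

    c1, c2: patch centroids; n1, n2: unit surface normals.
    With d = c1 - c2, the connection is treated as convex when
    angle(n1, d) - angle(n2, d) <= tol_deg, i.e. the normals bend apart
    (or the surface is nearly flat, absorbed by the tolerance).
    """
    d = np.asarray(c1, float) - np.asarray(c2, float)
    d /= np.linalg.norm(d)
    a1 = np.degrees(np.arccos(np.clip(np.dot(n1, d), -1.0, 1.0)))
    a2 = np.degrees(np.arccos(np.clip(np.dot(n2, d), -1.0, 1.0)))
    return a1 - a2 <= tol_deg

s = np.sqrt(2.0)
# Ridge (convex, like a leaf bending away from the stem): normals bend apart.
ridge = is_convex((0, 0, 0), np.array([-1, 0, 1]) / s,
                  (1, 0, 0), np.array([1, 0, 1]) / s)
# Valley (concave, like the leaf-stem junction): normals bend toward each other.
valley = is_convex((0, 0, 0), np.array([1, 0, 1]) / s,
                   (1, 0, 0), np.array([-1, 0, 1]) / s)
print(ridge, valley)  # True False
```

LCCP grows segments only across edges this test labels convex, which is why the concave leaf-stem junctions naturally become segment boundaries.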
Our experiment was carried out indoors, with the camera and the object placed horizontally, so the y-axis coordinates of the three-dimensional points were used to determine plant height. At present we have not reproduced the experiment outdoors: the outdoor scene is complicated and plants generally grow in the ground, so it is difficult to complete the 3D modeling of plants with the proposed rotation method. Nevertheless, this paper proposes a general method that can be extended to related fields of indoor 3D model building.
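The vertical distance measurement reduces to the extent of the plant's point cloud along the vertical axis, assuming the pot and ground points have already been removed. A minimal sketch:

```python
import numpy as np

def plant_height(points, vertical_axis=1):
    """Plant height as the extent of the point cloud along the vertical axis.

    points: (N, 3) plant-only point cloud (pot and ground already removed);
    vertical_axis: index of the vertical coordinate (1 = y, as in our setup).
    """
    coords = np.asarray(points, dtype=float)[:, vertical_axis]
    return coords.max() - coords.min()

# Toy cloud: lowest point at y = 0.0, highest leaf tip at y = 15.3.
cloud = np.array([[0.0, 0.0, 0.0],
                  [1.2, 7.5, 0.3],
                  [-0.8, 15.3, 1.1]])
print(plant_height(cloud))  # 15.3
```

If the camera were not level with the turntable, the vertical axis would first have to be recovered, for example from the fitted ground plane normal.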

Author Contributions

Conceptualization, Y.W. and Y.C.; Data curation, Y.W. and Y.C.; Formal analysis, Y.C.; Funding acquisition, Y.C.; Investigation, Y.W. and Y.C.; Methodology, Y.W.; Project administration, Y.C.; Resources, Y.C.; Software, Y.W. and Y.C.; Supervision, Y.C.; Validation, Y.W.; Visualization, Y.W.; Writing—Original draft preparation, Y.W. and Y.C.; Writing—Review and editing, Y.W. and Y.C. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This project is supported by the Research and Development of Greenhouse Cluster Control System, s20163081109.

Conflicts of Interest

The authors also thank the editor and anonymous reviewers for providing helpful suggestions for improving the quality of this manuscript.

Abbreviations

The following abbreviations are used in this manuscript:
3D: Three-dimensional
RGB-D: Red, Green, Blue, and Depth
TOF: Time of Flight
ROI: Region of Interest
v2.0: Version 2.0
ICP: Iterative Closest Point
MLS: Moving Least Squares
Pos: Position
FLANN: Fast Library for Approximate Nearest Neighbors
LCCP: Locally Convex Connected Patches
min: minimum
max: maximum
AE: Absolute Error
RMSE: Root Mean Squared Error
MAE: Mean Absolute Error

References

1. Chaudhury, A.; Ward, C.; Talasaz, A.; Ivanov, A.G.; Huner, N.P.; Grodzinski, B.; Patel, R.V.; Barron, J.L. Computer Vision Based Autonomous Robotic System for 3D Plant Growth Measurement. In Proceedings of the 12th Conference on Computer and Robot Vision, Halifax, NS, Canada, 3–5 June 2015.
2. Scharr, H.; Minervini, M.; French, A.P.; Klukas, C.; Kramer, D.M.; Liu, X.M.; Luengo, I.; Pape, J.M.; Polder, G.; Vukadinovic, D.; et al. Leaf segmentation in plant phenotyping: A collation study. Mach. Vis. Appl. 2016, 27, 585–606.
3. Kaiyan, L.; Lihong, X.; Junhui, W. Advances in the application of computer-vision to plant growth monitoring. Trans. Chin. Soc. Agric. Eng. 2004, 20, 279–283.
4. Mishra, K.B.; Mishra, A.; Klem, K.; Govindjee, G. Plant phenotyping: A perspective. Ind. J. Plant Physiol. 2016, 21, 514–527.
5. Qiu, R.; Wei, S.; Zhang, M.; Li, H.; Li, M. Sensors for measuring plant phenotyping: A review. Int. J. Agric. Biol. Eng. 2018, 11, 1–17.
6. Sritarapipat, T.; Rakwatin, P.; Kasetkasem, T. Automatic Rice Crop Height Measurement Using a Field Server and Digital Image Processing. Sensors 2014, 14, 900–926.
7. Hasegawa, M.; Kato, K.; Haga, S.; Misawa, T. Growth Estimation of Transplanted Paddy Rice by Digital Camera Image. Tohoku J. Crop Sci. 2001, 44, 77–78.
8. Liping, G.; Aijun, X. Tree Height Measurement Method with Intelligent Terminal. J. Northeast Forest. Univ. 2018, 46, 28–34.
9. Constantino, K.P.; Gonzales, E.J.; Lazaro, L.M.; Serrano, E.C.; Samson, B.P. Towards an Automated Plant Height Measurement and Tiller Segmentation of Rice Crops using Image Processing. In Mechatronics and Machine Vision in Practice 3; Billingsley, J., Brett, P., Eds.; Springer: Cham, Switzerland, 2018.
10. Qiu, R.; Miao, Y.; Ji, Y.; Zhang, M.; Li, H.; Liu, G. Measurement of Individual Maize Height Based on RGB-D Camera. Trans. Chin. Soc. Agric. Mach. 2017, 48, 211–219.
11. Thi Phan, A.T.; Takahashi, K.; Rikimaru, A.; Higuchi, Y. Method for estimating rice plant height without ground surface detection using laser scanner measurement. J. Appl. Remote Sens. 2016, 10, 046018.
12. Li, H.; Wang, K.; Cao, Q.; Bian, H. Measurement of plant height based on stereoscopic vision under the condition of a single camera. In Proceedings of the Intelligent Control & Automation, Jinan, China, 6–9 July 2010.
13. Young, K.C.; Zaman, Q.; Farooque, A.; Rehman, T.U.; Esau, T. An on-the-go ultrasonic plant height measurement system (UPHMS II) in the wild blueberry cropping system. Biosyst. Eng. 2017, 157, 35–44.
14. Jiang, Y.; Li, C.; Paterson, A.H. High throughput phenotyping of cotton plant height using depth images under field conditions. Comp. Electron. Agric. 2016, 130, 57–68.
15. Yang, S.; Gao, W.; Mi, J.; Wu, M.; Wang, M.; Zheng, L. Method for Measurement of Vegetable Seedlings Height Based on RGB-D Camera. Trans. Chin. Soc. Agric. Mach. 2019, 50, 128–135.
16. Vázquez-Arellano, M.; Paraforos, D.S.; Reiser, D.; Garrido-Izard, M.; Griepentrog, H.W. Determination of stem position and height of reconstructed maize plants using a time-of-flight camera. Comp. Electron. Agric. 2018, 154, 276–288.
17. Hämmerle, M.; Höfle, B. Direct derivation of maize plant and crop height from low-cost time-of-flight camera measurements. Plant Methods 2016, 12, 50.
18. Leemans, V.; Dumont, B.; Destain, M.F. Assessment of plant leaf area measurement by using stereo-vision. In Proceedings of the International Conference on 3D Imaging (IC3D), Liege, Belgium, 3–5 December 2013.
19. Xia, Y.; Xu, D.; Du, J.; Zhang, L.; Wang, A. On-line measurement of tobacco leaf area based on machine vision. Trans. Chin. Soc. Agric. Mach. 2012, 43, 167–173.
20. Lin, K.Y.; Wu, J.H.; Chen, J.; Si, H.P. Measurement of Plant Leaf Area Based on Computer Vision. In Proceedings of the Sixth International Conference on Measuring Technology and Mechatronics Automation, Zhangjiajie, China, 10–11 January 2014.
21. Zhong, Q.; Zhou, P.; Fu, B.; Liu, K. Measurement of pest-damaged area of leaf based on auto-matching of representative leaf. Trans. Chin. Soc. Agric. Eng. 2010, 26, 216–221.
22. Liu, Z.C.; Xu, L.H.; Lin, C.F. An Improved Stereo Matching Algorithm Applied to 3D Visualization of Plant Leaf. In Proceedings of the 8th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 12–13 December 2015.
23. Zhao, L.; Yang, L.L.; Cui, S.G.; Wu, X.L.; Liang, F.; Tian, L.G. Method for Non-Destructive Measurement of Leaf Area Based on Binocular Vision. Appl. Mech. Mater. 2014, 577, 664–667.
24. Paproki, A.; Sirault, X.; Berry, S.; Furbank, R.; Fripp, J. Novel mesh processing based technique for 3D plant analysis. BMC Plant Biol. 2012, 12, 63.
25. Hui, F.; Zhu, J.Y.; Hu, P.C.; Meng, L.; Zhu, B.L.; Guo, Y.; Li, B.G.; Ma, Y.T. Image-based dynamic quantification and high-accuracy 3D evaluation of canopy structure of plant populations. Ann. Bot. 2018, 121, 1079–1088.
26. Paulus, S.; Dupuis, J.; Mahlein, A.-K.; Kuhlmann, H. Surface feature based classification of plant organs from 3D laserscanned point clouds for plant phenotyping. BMC Bioinform. 2013, 14, 238.
27. Liu, H.; Liu, J.L.; Shen, Y.; Pan, C.K. Segmentation Method of Supervoxel Clusterings and Salient Map. Trans. Chin. Soc. Agric. Mach. 2018, 12, 172–179.
28. Gang, L.; Weijie, Z.; Cailing, G. Apple Leaf Point Cloud Clustering Based on Dynamic-K-threshold and Growth Parameters Extraction. Trans. Chin. Soc. Agric. Mach. 2019, 50, 163–169, 178.
29. Wahabzada, M.; Paulus, S.; Kersting, K.; Mahlein, A.K. Automated interpretation of 3D laserscanned point clouds for plant organ segmentation. BMC Bioinform. 2015, 16, 248.
30. Gélard, W.; Herbulot, A.; Devy, M.; Debaeke, P.; Mccormick, R.F.; Truong, S.K.; Mullet, J. Leaves Segmentation in 3D Point Cloud. In Proceedings of the International Conference on Advanced Concepts for Intelligent Vision Systems, Antwerp, Belgium, 18–21 September 2017.
31. Gélard, W.; Devy, M.; Herbulot, A.; Burger, P. Model-based Segmentation of 3D Point Clouds for Phenotyping Sunflower Plants. In Proceedings of the International Joint Conference on Computer Vision Imaging and Computer Graphics Theory and Applications, Porto, Portugal, 27 February–1 March 2017; pp. 459–467.
32. Vázquez-Arellano, M.; Reiser, D.; Paraforos, D.S.; Garrido-Izard, M.; Burce, M.E.C.; Griepentrog, H.W. 3-D reconstruction of maize plants using a time-of-flight camera. Comp. Electron. Agric. 2018, 145, 235–247.
33. Bao, Y.; Tang, L.; Shah, D. Robotic 3D Plant Perception and Leaf Probing with Collision-Free Motion Planning for Automated Indoor Plant Phenotyping. In Proceedings of the ASABE International Meeting, Spokane, WA, USA, 16–19 July 2017.
34. Dupuis, J.; Paulus, S.; Behmann, J.; Plumer, L.; Kuhlmann, H. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors. Sensors 2014, 14, 7563–7579.
35. Zheng, B.; Shi, L.; Ma, Y.; Deng, Q.; Li, B.; Guo, Y. Three-dimensional digitization in situ of rice canopies and virtual stratified-clipping method. Sci. Agric. Sin. 2009, 42, 1181–1189.
36. Shah, D.; Tang, L.; Gai, J.; Putta-Venkata, R. Development of a Mobile Robotic Phenotyping System for Growth Chamber-based Studies of Genotype x Environment Interactions. IFAC Papers OnLine 2016, 49, 248–253.
37. Andújar, D.; Dorado, J.; Ribeiro, A. An Approach to the Use of Depth Cameras for Weed Volume Estimation. Sensors 2016, 16, 972.
38. Yamamoto, S.; Hayashi, S.; Saito, S.; Ochiai, Y. Measurement of growth information of a strawberry plant using a natural interaction device. In Proceedings of the American Society of Agricultural and Biological Engineers Annual International Meeting, Dallas, TX, USA, 29 July–1 August 2012; pp. 5547–5556.
39. Li, X.; Wang, X.; Wei, H.; Zhu, X.; Huang, H. A technique system for the measurement, reconstruction and character extraction of rice plant architecture. PLoS ONE 2017, 12, e0177205.
40. Sun, G.; Wang, X. Three-Dimensional Point Cloud Reconstruction and Morphology Measurement Method for Greenhouse Plants Based on the Kinect Sensor Self-Calibration. Agronomy 2019, 9, 596.
41. Heming, J.; Yujia, M.; Zhikai, X.; Baizhuo, Z.; Xiaoxu, P.; Jinduo, L.I. Reconstruction of three dimensional model of plant based on point cloud stitching. Appl. Sci. Technol. 2019, 46, 19–24.
42. Yang, M.; Cui, J.; Jeong, E.S.; Cho, S.I. Development of High-resolution 3D Phenotyping System Using Depth Camera and RGB Camera. In Proceedings of the ASABE International Meeting, Spokane, WA, USA, 16–19 July 2017.
43. Hu, Y.; Wang, L.; Xiang, L.; Wu, Q.; Jiang, H. Automatic Non-Destructive Growth Measurement of Leafy Vegetables Based on Kinect. Sensors 2018, 18, 806.
44. Rusu, R.B.; Marton, Z.C.; Blodow, N.; Dolha, M. Towards 3D Point Cloud Based Object Maps for Household Environments. Robot. Auton. Syst. 2008, 56, 927–941.
45. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
46. Yawei, W.; Yifei, C. Fruit Morphological Measurement Based on Three-Dimensional Reconstruction. Agronomy 2020, 10, 455.
47. Gelfand, N.; Ikemoto, L.; Rusinkiewicz, S.; Levoy, M. Geometrically Stable Sampling for the ICP Algorithm. In Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, AB, Canada, 6–10 October 2003; pp. 260–267.
48. Lancaster, P.; Salkauskas, K. Surfaces generated by moving least squares methods. Math. Comput. 1981, 37, 141–158.
49. Available online: http://www.pointclouds.org/documentation/tutorials/resampling.php (accessed on 5 January 2020).
50. Available online: http://docs.pointclouds.org/trunk/kdtree__flann_8hpp_source.html (accessed on 24 January 2020).
51. Stein, S.C.; Wörgötter, F.; Schoeler, M. Convexity based object partitioning for robot applications. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014.
52. Hu, Z. Research on Boundary Detection and Hole Repairing Based on 3D Laser Scanning Point Cloud; China University of Mining and Technology: Xuzhou, China, 2016.
Figure 1. Flowchart of this paper for obtaining a plant's morphological parameters.
Figure 2. Measurement system of plant phenotypic parameters.
Figure 3. Color-rendered plant point clouds of angle_0 to angle_7 according to the x-axis direction. (a) point cloud of angle_0; (b) point cloud of angle_1; (c) point cloud of angle_2; (d) point cloud of angle_3; (e) point cloud of angle_4; (f) point cloud of angle_5; (g) point cloud of angle_6; (h) point cloud of angle_7.
Figure 4. Comparison of registering the point clouds of angle_0 and angle_1 by the traditional ICP algorithm and by our algorithm. (a) Point clouds of angle_0 and angle_1. (b) Registration of angle_0 and angle_1 by the traditional algorithm. (c) Registration of angle_0 and angle_1 by our algorithm.
Figure 5. Comparison of registering the Pos_0 and Pos_1 point clouds by the traditional ICP algorithm and by our algorithm. (a) Pos_0 and Pos_1 point clouds. (b) Registration of Pos_0 and Pos_1 by the traditional ICP algorithm. (c) Registration of Pos_0 and Pos_1 by our ICP algorithm.
Figure 6. Comparison of the original point cloud and the moving least squares (MLS) processed point cloud. (a) Picking a leaf's point cloud from the whole point cloud. (b) Smoothing these points with the MLS method.
Figure 7. Segmentation of leaves and stem with the locally convex connected patches (LCCP) algorithm. (a) Segmentation of the plant's point cloud into small regions. (b) Merging small regions into larger objects by the convexity-concavity relationship.
Figure 8. Three judgment conditions for obtaining the boundary points from the three-dimensional point cloud. (a) both areas have no points; (b) either one of the areas has no points; (c) the angle is smaller than ϵ.
Figure 9. Extraction of the edge points of leaves by the boundary conditions method.
Figure 10. Fitting the length and width of a leaf with the Euclidean distance algorithm.
Figure 11. Fitting the surface of a leaf with the double integral algorithm.
Figure 12. Calculation of the plant's height by the vertical distance method based on the y-axis.
Figure 13. Values of absolute error (AE) for plant height.
Figure 14. Values of mean absolute error (MAE) for leaves' phenotypic characteristics.
Table 1. MAE and RMSE of this paper and other papers in terms of plant height, leaf length, leaf width and leaf area.

| Experiment | Object | Index | Length (cm) | Width (cm) | Area (cm²) | Plant Height (cm) |
|---|---|---|---|---|---|---|
| This Paper | Pepper | RMSE | 0.2639 | 0.2735 | 0.964 | 0.417 |
| Fang [25] | Cucumber | RMSE | 0.28 | 0.32 | 4.34 | - |
| Fang [25] | Eggplant | RMSE | 0.16 | 0.23 | 3.89 | - |
| Fang [25] | Pepper | RMSE | 0.33 | 0.15 | 1.33 | - |
| Paproki [24] | Gossypium hirsutum | RMSE | 0.97 | 0.728 | - | 1.9 |
| Liu [28] | apple tree | RMSE | 0.59 | 0.538 | - | - |
| This Paper | Pepper | MAE | 0.2537 | 0.2676 | 0.957 | 0.392 |
| Kaiyan [3] | Pepper | MAE | 0.183 | 0.124 | - | 0.344 |
| Yang [15] | Cucumber | MAE | - | - | - | 0.23 |
| Liu [28] | apple tree | MAE | 0.55 | 0.51 | - | - |
