Article

Real-Time Localization Approach for Maize Cores at Seedling Stage Based on Machine Vision

1 Key Laboratory of Modern Precision Agriculture System Integration Research, Ministry of Education, China Agricultural University, Beijing 100083, China
2 Key Laboratory of Agricultural Information Acquisition Technology, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agronomy 2020, 10(4), 470; https://doi.org/10.3390/agronomy10040470
Submission received: 25 February 2020 / Revised: 24 March 2020 / Accepted: 27 March 2020 / Published: 28 March 2020
(This article belongs to the Special Issue Precision Agriculture for Sustainability)

Abstract

To enable rapid localization of maize plants, a new real-time localization approach is proposed for maize cores at the seedling stage. The approach meets the basic demands of localization and quantitative fertilization in precision agriculture, reducing both chemical fertilizer use and environmental pollution. In the first stage, images of maize at the seedling stage are taken in the field with a monocular camera, and the maize is segmented from the weedy background. The three most effective methods (i.e., minimum cross entropy, ISODATA, and the Otsu algorithm) are selected from six common segmentation algorithms by comparing the accuracy of maize extraction and the time efficiency of segmentation. In the second stage, the plant core is recognized and localized in the segmented maize image, based on the difference in brightness between the core and the rest of the plant, and the geometric center of the core is taken as the localization point. Among the methods tested, minimum cross entropy based on the gray level extracted maize cores best. Experimental validation on a large set of field images shows that, under weedy conditions on sunny days, the proposed method achieves a recognition rate of at least 88.37% for maize cores and is robust at excluding weeds.

1. Introduction

With the development and application of precision agricultural technology, precise fertilization is increasingly popular in agricultural production, being an important technological means for controlling the excessive application of chemical fertilizers. Variable-rate fertilization [1] and localization and quantitative fertilization can be used to combat existing over-fertilization and environmental pollution. For maize, localized fertilization can be realized by determining the real-time position of each plant and then applying an appropriate amount of chemical fertilizer to each plant with a fertilizer distributor at a fixed point. This improves on the current main fertilization methods of drill fertilization and broadcast application, thereby decreasing fertilizer use and enhancing efficiency.
Implementing weeding, spraying, and navigation through the recognition and localization of crops such as maize has long been a research focus in the technology and equipment of precision agriculture [2,3,4]. Previous studies have distinguished weeds by localizing crop positions with machine vision [5,6,7], enabling weeding operations [8,9] and the detection of crop diseases and spraying positions. In addition, by extracting crop rows and generating navigation lines [10], routes for agricultural machines can be conveniently planned in the field [11].
At the seedling stage, the agricultural requirements for maize are for fertilization up to 10 cm on either side of each row and to a depth of 10 cm. However, previous work on the precise recognition of crops and weeds [12,13,14,15] has mainly focused on accurately detecting the distribution of weeds and crops and intelligently determining weed conditions in the field (e.g., location, density, variety). In many cases (e.g., when leaves grow asymmetrically or are defective), the results of maize image localization [16,17] do not reflect the central position of each maize plant, and direct localization of the root position may be blocked by the leaves [18,19], strongly affecting the precision of machine-vision-based localization and fertilization. Therefore, in the present study, image information about the maize canopy is collected vertically with a monocular camera, and the central position formed by the wrapped leaves at the top of the maize stem (i.e., the core position) is used to determine the position of each maize plant.
According to the literature, there are multiple approaches to image segmentation. However, the common methods used for maize segmentation [20,21,22,23,24,25,26,27,28,29,30,31] have not yet been systematically compared with respect to their ability to recognize maize cores. To guide real-world applications of localization and fertilization for maize, this study collected a large number of images of maize at the seedling stage under different weather and field conditions (i.e., images with more weeds on sunny days (MS), images with fewer weeds on sunny days (FS), images with more weeds on cloudy days (MC), and images with fewer weeds on cloudy days (FC)). We first apply a minimum cross entropy model to recognize the maize cores, and we systematically compare different image segmentation algorithms [25,26,27,28,29,30,31] with respect to accuracy and time efficiency when recognizing the cores. We find that the minimum cross entropy model based on the gray level performs best, and we recommend it for the real-time localization of maize cores in complex field environments.

2. Materials and Methods

2.1. Design of Image Recognition System

The main parts of the image recognition system are (i) a Lenovo ThinkPad P52s graphic workstation, (ii) an LBAS-U350-60C industrial camera, (iii) a lens, and (iv) a USB cable and holder. Image processing runs on an Intel processor clocked at 2.00 GHz with 32 GB of memory. The camera resolution is 2592 × 2048 pixels, and the frame rate for image acquisition is 12 fps. When shooting the canopy vertically, the camera lens was approximately 0.5 m from the canopy. This study used an HC1205A prime lens with a field of view of 0.53 m × 0.4 m and a minimum depth of field (focus distance) of 0.1 m. The auto white balance and exposure settings of the camera system were determined from changes in illuminance at the time of exposure, to correct the color and adjust the exposure parameters. To reduce possible smearing in the video, we shortened the exposure time and increased the gain, adjusting the lens aperture within its proper range.
Images of maize at the seedling stage (the period of emergence with four to six leaves) were collected between May and June in 2017–2019. To increase the data acquisition time and data volume, CAU 86 maize was planted at different times in a 10 m × 15 m zone at the Shangzhuang Experimental Station, with a line spacing of 0.6 m and a row spacing of 0.2 m. The camera was fixed on a universal adjustable holder in a vertical high-angle configuration and collected images of the maize canopy from a mobile trolley moving through the field. Many images and videos of maize at the seedling stage and during growth were taken under different light conditions in the field. For the present experiment validating the real-time localization approach for maize cores, 219 typical images were selected from a total of 7778; see Figure 1 for an example of the original pictures. In the present study, the monocular camera collected images of the maize canopy dynamically.

2.2. Real-time Recognition Algorithm for Maize Cores at Seedling Stage

When a fertilizer distributor operates in the field, the core localization approach must achieve both a high accuracy rate and real-time performance to implement localized fertilization from the collected maize images.
First, each image is processed with the extra-green index, which strongly suppresses shadows, weeds, and soil. The vegetation zones of the image are highlighted by enhancing the G component and fading the R and B components [21]; the extra-green index thus distinguishes vegetation from background.
Next, an image segmentation algorithm is used to separate the maize zone from the weedy background, based on connected-domain analysis and gap filling [32].
Finally, the maize cores are recognized based on their different brightness from that of the rest of the maize plant. The geometric center of each maize core is considered as the localization point.
By comparing the accuracy rates and segmentation times of six common algorithms (i.e., (i) the continuous max-flow algorithm, (ii) minimum cross entropy, (iii) ISODATA, (iv) Otsu, (v) k-means, and (vi) fuzzy thresholding segmentation) [25,26,27,28,29,30,31], we select the three with the best segmentation results (i.e., minimum cross entropy, ISODATA, and the Otsu algorithm) and use them to recognize the positions of maize cores with four core brightness indexes (i.e., gray, Y, vHSV, and extra-green) [21,33,34,35], giving 12 combined strategies in total. After experimental validation on many field images, the core localization effects of the 12 strategies are compared. We find that the minimum cross entropy method based on gray extracts maize cores best and can deliver real-time, accurate fertilization in the field. The minimum cross entropy method [26] selects threshold values by minimizing the cross entropy between the original image and the segmented zones. Figure 2 shows a flowchart of the algorithm for the real-time recognition of maize cores at the seedling stage.
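To make the pipeline concrete, the following Python sketch chains the stages described above (extra-green transform, thresholding, maize zone extraction, brightness index, and centroid). The helper names are hypothetical placeholders rather than the authors' code; minimal sketches of each step appear in the corresponding subsections below.

```python
import numpy as np
from skimage.filters import threshold_li

def localize_core(bgr_frame: np.ndarray) -> tuple[float, float]:
    """End-to-end sketch of the pipeline in Figure 2.

    excess_green, binarize_vegetation, extract_maize_zone,
    brightness_indexes, and core_centroid are hypothetical helper names;
    sketches for each appear in the subsections below.
    """
    exg = excess_green(bgr_frame)                        # extra-green index
    veg = binarize_vegetation(exg)["min_cross_entropy"]  # vegetation vs. soil
    maize = extract_maize_zone(veg)                      # largest region, gaps filled
    gray = brightness_indexes(bgr_frame)["gray"]         # brightness index
    # The core is darker than the surrounding leaves, so threshold the
    # gray values inside the maize zone and keep the darker side.
    t = threshold_li(gray[maize > 0])
    core = (gray < t) & (maize > 0)
    return core_centroid(core.astype(np.uint8))
```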

2.2.1. Extraction of Maize Zone at Seedling Stage

The extra-green characteristic method distinguishes the canopy structure of maize from the soil background, exploiting the different reflection characteristics of vegetation and soil in the red, green, and blue visible bands [36]. However, maize and weeds have similar spectra, and weeds interfere with extracting the maize zone from the images [20]. Because the canopy of a maize plant covers a larger area than that of a weed, weeds can be removed by calculating the area of each image object. Therefore, the extraction method used herein first applies the extra-green index to distinguish vegetation from background (Figure 3b), then uses image segmentation to separate the vegetation from the background (Figure 3c), and finally applies connected-domain analysis and the removal of small-area objects [32] to eliminate the influence of weeds and noise (Figure 3d).

Index Structure

Vegetation and background can be distinguished in the field based on the large color difference between soil and plants. Extracting the green characteristic of maize before graying greatly reduces the computation required for subsequent graying and improves real-time performance. Exploiting the distinct green of vegetation in RGB true-color images, Woebbecke et al. [21] enhanced the G component and faded the R and B components, which strongly suppressed shadows, weeds, and soil and highlighted the vegetation zones. Common formulas for the extra-green characteristic factor include 2G − R − B [21] and 1.262G + 0.884R + 0.311B [22]. In our tests, 2G − R − B distinguished vegetation from soil better, so we use it herein.
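A minimal sketch of the extra-green transform, assuming an 8-bit BGR input as delivered by OpenCV; the file name in the usage comment is a hypothetical placeholder.

```python
import cv2
import numpy as np

def excess_green(bgr: np.ndarray) -> np.ndarray:
    """Extra-green index ExG = 2G - R - B for an 8-bit BGR image.

    Returns a uint8 grayscale image scaled to [0, 255], in which
    vegetation appears bright against soil and shadow.
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2.0 * g - r - b
    exg = np.clip(exg, 0, None)  # soil/shadow give negative responses; clip them
    return cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Usage (file name is a hypothetical placeholder):
# exg_gray = excess_green(cv2.imread("maize_canopy.png"))
```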

Image Segmentation

Current methods for vegetation segmentation remain challenged by light conditions, shadows, and complicated backgrounds. Under differing light conditions, supervised learning methods require many training samples and depend on a training stage to acquire reliable segmentation, which increases computation time and fails to meet real-time requirements; color-index segmentation, meanwhile, is strongly influenced by light conditions [20]. Therefore, in the present study we compared six common threshold-based image segmentation methods (i.e., (i) the continuous max-flow algorithm, (ii) minimum cross entropy, (iii) ISODATA, (iv) Otsu, (v) k-means, and (vi) fuzzy thresholding segmentation). Three of them were selected for core recognition and localization through a contrast experiment on the same set of sample images, considering both segmentation quality and time.
Image segmentation based on the active contour model and the level-set method improves robustness to initial values and noise interference, as well as the stability of the numerical algorithm; however, it cannot run in real time because solving the model is computationally expensive [23]. Moreover, the active contour model performs poorly under low image contrast and complicated backgrounds [24]. Therefore, this family of methods was excluded from the contrast experiment.
In the experiment, the following six common image segmentation methods were applied to the gray images obtained from the extra-green index, classifying each image into soil background and vegetation (a comparison sketch follows the list).
a. Continuous max-flow model and algorithm
Yuan et al. [25] proposed the continuous max-flow and min-cut method, which offers fast convergence and a wide choice of parameters. Based on the image structure network, this method converts the energy-functional minimization problem into a min-cut problem; max-flow/min-cut theory then converts the min-cut problem into a max-flow problem, whose solution yields the image segmentation. The continuous max-flow method has attracted wide attention because of its small measurement error and parallelizability.
b. Minimum cross entropy
Li and Lee [26] proposed image thresholding segmentation with minimum cross entropy. The method selects threshold values by minimizing the cross entropy between the original image and the segmented zones. It makes an unbiased estimate of the binary image from the perspective of information theory, so no prior knowledge of the image's gray distribution is required. This thresholding is simple to implement and runs quickly, and the binarized image is also suitable for template matching through correlation and for real-time recognition in hardware.
c. ISODATA
The ISODATA algorithm [27] uses a merging and splitting mechanism and exhibits high computational efficiency and good adaptability. Before segmentation, the number of classes can be set manually, along with at least the sample number and the maximum number of iterations. To a certain extent, this reduces blind clustering by incorporating existing knowledge and expert experience, helping to achieve better segmentation.
d. Otsu algorithm
With a simple and stable calculation process, the Otsu algorithm [28] selects threshold values automatically without manually setting other parameters. Its main concept is to divide the image into a target zone and a background zone for binarization, based on statistical characteristics, by maximizing the between-class variance of the two zones.
e. k-means clustering
The k-means clustering algorithm is a typical object-oriented, unsupervised, real-time clustering algorithm. Data are classified into a predetermined number of classes K by minimizing an error function, with cluster centers updated iteratively as sample means. Because of its conciseness and high efficiency, k-means is among the most widely used algorithms of its type [29,30].
f. Fuzzy thresholding segmentation
Aja-Fernández et al. [31] proposed a local fuzzy thresholding method for multi-region image segmentation. It eliminates the spurious shadows and noise of traditional thresholding methods, is fully automatic, and avoids manual intervention.
After segmentation of the maize canopy image, the result is binarized, with vegetation labeled 1 and background labeled 0.
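As one possible realization, scikit-image ships threshold functions matching three of the methods above: threshold_li implements Li's minimum cross entropy criterion [26], threshold_isodata the iterative-selection rule [27], and threshold_otsu the between-class variance criterion [28]. The sketch below applies them to the extra-green grayscale image; the paper's own implementations may differ in detail.

```python
import numpy as np
from skimage.filters import threshold_isodata, threshold_li, threshold_otsu

def binarize_vegetation(exg_gray: np.ndarray) -> dict[str, np.ndarray]:
    """Binarize an extra-green grayscale image with three threshold rules.

    threshold_li follows Li's minimum cross entropy criterion [26],
    threshold_isodata the iterative-selection rule [27], and
    threshold_otsu the maximum between-class variance rule [28].
    Vegetation (bright in ExG) is labeled 1, background 0.
    """
    return {
        "min_cross_entropy": (exg_gray > threshold_li(exg_gray)).astype(np.uint8),
        "isodata": (exg_gray > threshold_isodata(exg_gray)).astype(np.uint8),
        "otsu": (exg_gray > threshold_otsu(exg_gray)).astype(np.uint8),
    }
```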

Extraction of Maize Zone

After segmenting the image of the maize canopy, the connected domain with the maximum area (the light blue area in Figure 3d) is extracted as the maize zone, based on the larger area proportion of maize at the seedling stage compared with weeds. Noise and light conditions can leave gaps in the maize connected domain; these gaps are filled with morphological operations to obtain the maize zone images. Figure 4 shows a flowchart of the connected-domain analysis [32] for images after removing the soil background.
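A sketch of this step using scikit-image and SciPy, under the assumption stated above that the maize canopy is the largest connected component in the binary image:

```python
import numpy as np
from scipy import ndimage
from skimage.measure import label, regionprops

def extract_maize_zone(binary: np.ndarray) -> np.ndarray:
    """Keep the largest connected component and fill its internal gaps.

    Assumes the maize canopy covers more area than any weed left after
    thresholding, so the largest component is taken as the maize zone.
    """
    labeled = label(binary, connectivity=2)
    if labeled.max() == 0:
        return np.zeros_like(binary, dtype=np.uint8)
    largest = max(regionprops(labeled), key=lambda r: r.area)
    maize = labeled == largest.label
    # Morphological hole filling closes gaps caused by noise and lighting.
    return ndimage.binary_fill_holes(maize).astype(np.uint8)
```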

2.2.2. Recognition and Localization of Maize Cores

Analysis of the maize zone images shows that the maize cores are less bright than the other parts of the plant, because the leaves wrap around the core zone at the top of the maize stem. The maize zone segmented from the canopy image includes the maize core and the other plant parts. Herein, core recognition and localization involves three steps: (1) selection of a brightness index (i.e., gray, Y, vHSV, or extra-green); (2) extraction of the maize core, using image segmentation to separate the core from the other parts of the maize zone; and (3) core recognition and localization, eliminating noise and calculating the centroid of the core zone (Figure 5).

Selection of Brightness Index

To describe the difference in brightness between the maize core and the other parts of the plant, we selected four brightness indexes from the literature [16,17,21,33,34,35]. The most suitable index is then selected by comparing their effectiveness at core recognition.
a. Gray
Grayscale transformation converts a color image into a grayscale image, reducing the amount of color data and thereby accelerating subsequent processing. The naked eye is most sensitive to the brightness component in YCbCr [33]. To preserve the brightness difference between a maize core and its background in the color maize zone image, we use the common formula gray = 0.299R + 0.587G + 0.114B as the brightness index.
b. Y
Zheng et al. [35] designed a new method for vein extraction that transforms the gray scale using hue and intensity information. The brightness index is Y = ((H + 90)/360 + 1 − V)/2, where H and V are the hue and value (intensity) components of the pixel color in the HSV color space. This index preserves the brightness difference between a core and its background in the color maize zone image, and adapts better to the grayscale transformation than common methods during core extraction.
c. vHSV
HSV (hue, saturation, value) is a color space based on the intuitive characteristics of color. It describes color quantitatively and is applied in many image-analysis tasks [34]. Herein, the HSV color space is used to compute the color information of the color maize zone image, and the brightness value V is used as the brightness index. The three HSV components are relatively independent, and the H and S components are only slightly influenced by light conditions and shadows, which is one reason for selecting this color space for core recognition. This form of color expression is close to the Munsell color model and to the human visual perception of color; since the human eye distinguishes maize cores easily, this color model is a reasonable choice.
d. Extra-green
Comparing the reflection characteristics of maize cores and leaves in the red, green, and blue visible bands shows only slight differences among the R, G, and B values of a maize core, whereas the G value of maize leaves is much higher than the R and B values. Therefore, we use the extra-green index (2G − R − B) [21] to distinguish the maize core from the other parts of the maize zone; a combined sketch of all four indexes follows.

Extraction of Maize Core

The experimental results of segmenting the maize zone from canopy images suggest that three segmentation algorithms outperform the others (i.e., minimum cross entropy, ISODATA, and Otsu); therefore, only these three are tested here. Because the core is less bright than the other parts of the maize zone image, we classified the maize zone image into core and other plant parts, and compared the effectiveness of core extraction for the 12 combined strategies (three segmentation methods × four brightness indexes).

Core Recognition and Localization

Noise and light cause the "core sections" after image segmentation to consist of multiple non-contiguous small zones (Figure 5c). We therefore conducted connected-domain analysis [32] on the core sections and selected the zone with the largest area as the actual core zone, because noise regions are small. Finally, we calculated the centroid of the actual core zone, namely the position of the maize core, as follows.
Given an m × n binary image I(x, y), where the maize core zone is A and the background zone is B, i.e.,

$$I(x, y) = \begin{cases} 1, & (x, y) \in A \\ 0, & (x, y) \in B, \end{cases}$$

the centroid (x_0, y_0) of the core zone is defined as

$$x_0 = \frac{\sum_{(x, y) \in A} x \, I(x, y)}{\sum_{(x, y) \in A} I(x, y)}, \qquad y_0 = \frac{\sum_{(x, y) \in A} y \, I(x, y)}{\sum_{(x, y) \in A} I(x, y)}.$$
Compared with any other point, the x-coordinate x_0 of the core position minimizes the sum of squared distances to the x-coordinates of all points in the target, and likewise for y_0.
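For a binary mask, the weighted sums above reduce to the means of the core pixels' coordinates, so the centroid can be computed directly:

```python
import numpy as np

def core_centroid(core_mask: np.ndarray) -> tuple[float, float]:
    """Centroid (x0, y0) of a binary core mask, per the formula above.

    For a binary mask, the weighted sums reduce to coordinate means over
    the core pixels.
    """
    ys, xs = np.nonzero(core_mask)  # pixel coordinates of the core zone A
    if xs.size == 0:
        raise ValueError("empty core mask: no core detected")
    return float(xs.mean()), float(ys.mean())
```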

3. Results and Discussion

3.1. Effects of Segmentation for Maize Zone at Seedling Stage

In this study, the image samples were classified into four categories according to illumination intensity and the amount of weeds, to verify whether, and by how much, the accuracy of our method is affected by different degrees of illumination and weed background. The 219 maize sample images comprised 42 images with more weeds on sunny days (MS), 86 with fewer weeds on sunny days (FS), 42 with more weeds on cloudy days (MC), and 49 with fewer weeds on cloudy days (FC). The six segmentation methods (i.e., (i) the continuous max-flow algorithm, (ii) minimum cross entropy, (iii) ISODATA, (iv) Otsu, (v) k-means, and (vi) fuzzy thresholding segmentation) were used to segment the maize zone from the weedy background. By comparing accuracy rates and segmentation times, we selected the three fastest and most accurate methods; see Figure 6 for the segmentation results.
After obtaining the image processing results for the extracted maize zone, the results were classified and analyzed using Excel. The segmented maize zone images were divided into four processing results: (a) the maize zone is extracted correctly; (b) part of the weedy background is also extracted along with the maize zone; (c) part of the soil background is also extracted along with the maize zone; and (d) incorrect extraction, i.e., the maize zone is not extracted or is extracted incompletely. Such statistical classification strictly distinguishes the segmentation results and avoids the influence of subjective factors. For the subsequent analysis, we labeled (a) as "correct extractions" (CE), merged (b) and (c) as "multiple extractions" (ME), and labeled (d) as "wrong extractions" (WE). Figure 7 shows the statistical results for the six segmentation methods. The x-coordinate indicates the three processing results for each of the six methods, while the y-coordinate indicates the count of each result. Different colors represent different sample classes, i.e., images collected under different illumination and weed conditions.
According to Figure 7, all six segmentation methods extract the maize zone better from images with fewer weeds on sunny days. Compared with the other methods, the continuous max-flow algorithm has a higher accuracy rate, and in particular performs better on samples with more weeds on cloudy days. However, the six methods were also assessed on segmentation time, and the three most applicable methods were selected after comprehensive consideration.
Figure 8 shows the average time (avg) and standard deviation (σ) for processing the four classes of samples with the six segmentation methods. Because of their shorter segmentation times and smaller standard deviations, minimum cross entropy, ISODATA, and Otsu run stably and quickly. Combined with the accuracy statistics in Figure 7, we therefore selected these three segmentation methods for core recognition and localization.

3.2. Evaluation of Effectiveness of Core Localization

Based on the difference in brightness between the maize core and the other parts of the plant, we combined the three selected segmentation methods (i.e., minimum cross entropy, ISODATA, and Otsu) with the four brightness indexes (i.e., gray, Y, vHSV, and extra-green) for core recognition, and compared the 12 resulting combined strategies; see Figure 9.
Quantity statistics were compiled for the effects of core recognition and localization with the 12 combined strategies. A recognized core position that deviates from the actual position is counted as a failed recognition, and the core recognition rate is calculated for the different sample classes (i.e., MS, FS, MC, FC); see Figure 9. When collecting information about the effects of core recognition, the identification standard is whether a recognized localization point falls within the core area. Accordingly, the recognition and localization results were identified easily and visually, and the image segmentation results were strictly distinguished, avoiding the influence of subjective factors.
In this paper, the core recognition rate was calculated after counting the number of successful core recognitions, producing the histogram in Figure 10. The x-coordinate indicates the results of recognizing maize cores with the four brightness indexes (i.e., gray, Y, vHSV, and extra-green) under each of the three segmentation methods (i.e., minimum cross entropy, ISODATA, and Otsu), while the y-coordinate indicates the recognition rate; different colors indicate different sample classes (i.e., MS, FS, MC, FC). The recognition rates of the 12 combined strategies in Figure 10 were used to compare their core recognition effectiveness and anti-interference capacity.
According to the statistical results in Figure 10, minimum cross entropy based on gray extracts maize cores best, and we adopt it as the new real-time core localization method. Across the four sample categories (i.e., MS, FS, MC, FC), the recognition rates for MS and FS samples differ only slightly under the minimum cross entropy model based on gray, indicating good robustness to weed interference on sunny days. Moreover, its recognition rate on sunny days exceeds 88.37%, significantly higher than that of the other 11 combined strategies. On cloudy days, its recognition rate is 54.74–79.59%, no lower than the other results, and is influenced mainly by the quantity of weeds. Thus, under low-light conditions, the core recognition rate is strongly affected by weeds, and supplementary lighting could improve the stability of the method. Under field conditions, the minimum cross entropy model based on gray is most applicable to environments with fewer weeds on sunny days; as Figure 11 shows, core recognition may be inaccurate when there are more weeds on a cloudy day.

3.3. Spatial Orientation of the Maize Core

To comply with agronomic requirements, localized fertilization applies granular fertilizer 10 cm to the side of the maize row and 10 cm deep, through ditching and earth-covering functions, distinguishing it from traditional fertilization. Once the maize core is recognized in the canopy image, the method must calculate the relative distance L, in the projection direction of the fertilizing line, between the maize core P and the fertilization mouth F of the fertilizer distributor.
When collecting data, the camera is installed vertically at the front of the fertilizer distributor with the lens directed downward. Therefore, in the projection direction of the fertilizing line, the spatial orientation distance L is determined by the relative distance L1 between the core P and the central field-of-view point O of the camera, and the relative distance L2 from point O to the fertilization mouth F (Figure 12). Because the installation angle, height, and other camera parameters are fixed, L2 is a measurable constant. The distance L1 can be calculated using a camera calibration method based on a single-plane checkerboard [37]. Errors due to vibration can be reduced and compensated by installing a shock attenuation device and an angle stabilometer as required.
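As an illustration, OpenCV's calibration routines implement Zhang's single-plane checkerboard method [37]; the board pattern, square size, and image path below are hypothetical placeholders.

```python
import glob
import cv2
import numpy as np

def calibrate_from_checkerboard(pattern=(9, 6), square_mm=25.0,
                                image_glob="calib/*.png"):
    """Estimate camera intrinsics with Zhang's checkerboard method [37].

    The board pattern, square size, and image path are hypothetical
    placeholders; pattern counts inner corners (columns, rows).
    """
    # 3D corner coordinates on the board plane (Z = 0), in millimeters.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # K holds the focal lengths and principal point; dist the lens distortion.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```

With the intrinsics and the known object distance, a pixel offset of the recognized core from the image center converts to the ground-plane distance L1.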
Due to the consistency of the maize variety and seedling management, real measurements show that the height of a sample plant deviates by no more than 10 cm from the average plant height of 30 cm. In this study, based on the camera and lens parameters, the field of view is 53.33 cm × 40 cm at the object distance of the collected images, and the distance |PQ| in the marching direction is 40 cm. As a result, the distance L1 from the recognized maize core P to the central field-of-view point O does not exceed 20 cm. Even with large changes in plant height, because the lens remains in focus and the field-of-view angle is constant, the theory of similar triangles shows that the core localization error ΔL1 is less than 4 cm, as derived below and illustrated in Figure 13.
When the plant height is above the average but within 10 cm of it,

$$\frac{|AE|}{|OE|} = \frac{|AB|}{|OP|},$$

$$\Delta L_1 = |OP| - |AB|.$$
Similarly, when the plant height is below the average but within 10 cm of it,

$$\Delta L_1 = |CD| - |OP|.$$
If the plant spacing is 20 cm and the stem diameter is approximately 2 cm, the error ΔL1 is within the requirements for localized fertilization. The maize core is localized in front of the fertilizer device, the relative distance L is then calculated in the projection direction of the fertilizing line, and localized fertilization is finally performed by controlling the marching speed of the fertilization device.
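As a back-of-the-envelope check, assuming the calibration plane sits at the 0.5 m object distance given in Section 2.1, a simple similar-triangles bound reproduces the stated figure:

```python
def localization_error_bound(l1_max_cm: float = 20.0,
                             height_dev_cm: float = 10.0,
                             object_dist_cm: float = 50.0) -> float:
    """Similar-triangles bound on the core localization error ΔL1.

    A point measured at lateral offset l1 against a calibration plane at
    distance d from the lens actually lies at l1 * (d - dh) / d when the
    canopy is dh closer to the camera, so the error is l1 * dh / d.
    """
    return l1_max_cm * height_dev_cm / object_dist_cm

print(localization_error_bound())  # 4.0 cm, matching the < 4 cm bound above
```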

4. Conclusions

Real-time core localization for dynamically collected maize images in weedy environments can be realized by the minimum cross entropy method based on the gray level presented herein. The positions of maize plants can be localized quickly for fertilization, meeting the basic demands of localized fertilization in precision agriculture. Experimental validation shows that, in weedy environments on sunny days, the core recognition rate exceeds 88.37%. The method is most applicable to recognizing individual maize cores in environments with fewer weeds on sunny days, but recognition may be inaccurate when weeds are abundant. On cloudy days, the recognition rate is 54.74–79.59%, no lower than the other tested strategies, and is influenced mainly by the quantity of weeds. Future work will address localizing multiple maize plants in complicated environments.

Author Contributions

Conceptualization, Z.Z. and G.L.; Data curation, Z.Z.; Formal analysis, Z.Z.; Funding acquisition, G.L.; Investigation, Z.Z.; Methodology, Z.Z. and G.L.; Project administration, G.L.; Resources, Z.Z. and S.Z.; Software, Z.Z.; Supervision, Z.Z.; Validation, Z.Z.; Visualization, Z.Z.; Writing—original draft, Z.Z.; Writing—review & editing, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2016YFD0200605.

Acknowledgments

The authors thank (i) Shanjie Yang for help in collecting the images of maize in the field and (ii) the editors and anonymous reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Quebrajo, L.; Pérez-Ruiz, M.; Rodriguez-Lizana, A.; Agüera, J. An approach to precise nitrogen management using hand-held crop sensor measurements and winter wheat yield mapping in a Mediterranean environment. Sensors 2015, 15, 5504–5517.
2. de Lara, A.; Longchamps, L.; Khosla, R. Soil water content and high-resolution imagery for precision irrigation: Maize yield. Agronomy 2019, 9, 174.
3. Hou, M.J.; Tian, F.; Zhang, L.; Li, S.E.; Du, T.S.; Huang, M.S.; Yuan, Y.S. Estimating crop transpiration of soybean under different irrigation treatments using thermal infrared remote sensing imagery. Agronomy 2019, 9, 8.
4. Maes, W.H.; Steppe, K. Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture. Trends Plant Sci. 2019, 24, 152–164.
5. Bai, J.; Xu, Y.; Wei, X.; Zhang, J.; Shen, B. Weed identification from winter rape at seedling stage based on spectrum characteristics analysis. Trans. Chin. Soc. Agric. Eng. 2013, 29, 128–134.
6. Sun, J.; He, X.; Tan, W.; Wu, X.; Shen, J.; Lu, H. Recognition of crop seedling and weed recognition based on dilated convolution and global pooling in CNN. Trans. Chin. Soc. Agric. Eng. 2018, 34, 159–165.
7. Wang, C.; Li, Z. Weed recognition using SVM model with fusion height and monocular image features. Trans. Chin. Soc. Agric. Eng. 2016, 32, 165–174.
8. Utstumo, T.; Urdal, F.; Brevik, A.; Dørum, J.; Netland, J.; Overskeid, Ø.; Berge, T.W.; Gravdahl, J.T. Robotic in-row weed control in vegetables. Comput. Electron. Agric. 2018, 154, 36–45.
9. Bogue, R. Domestic robots: Has their time finally come? Ind. Robot. 2017, 44, 129–136.
10. Meng, Q.; Zhang, M.; Yang, G.; Qiu, R.; Xiang, M. Guidance line recognition of agricultural machinery based on particle swarm optimization under natural illumination. Trans. Chin. Soc. Agric. Mach. 2016, 47, 11–20.
11. Lottes, P.; Hörferlin, M.; Sander, S.; Stachniss, C. Effective vision-based classification for separating sugar beets and weeds for precision farming. J. Field Robot. 2017, 34, 1160–1178.
12. Thorp, K.R.; Tian, L.F. A review on remote sensing of weeds in agriculture. Precis. Agric. 2004, 5, 477–508.
13. Van der Weide, R.Y.; Bleeker, P.O.; Achten, V.T.J.M.; Lotz, L.A.P.; Fogelberg, F.; Melander, B. Innovation in mechanical weed control in crop rows. Weed Res. 2008, 48, 215–224.
14. Cordill, C.; Grift, T.E. Design and testing of an intra-row mechanical weeding machine for corn. Biosyst. Eng. 2011, 110, 247–252.
15. Usha, K.; Singh, B. Potential applications of remote sensing in horticulture—A review. Sci. Hortic. 2013, 153, 71–83.
16. Mao, W.; Wang, H.; Zhao, B.; Zhang, Y.; Zhou, P.; Zhang, X. Weed detection method based on the centre color of corn seedling. Trans. Chin. Soc. Agric. Eng. 2009, 25, 161–164.
17. Wei, S.; Zhang, Y.E.; Mei, S. Fast recognition method of maize core based on top view image. Trans. Chin. Soc. Agric. Mach. 2017, 48, 136–141.
18. Hu, L.; Luo, X.; Zeng, S.; Zhang, Z.; Chen, X.; Lin, C. Plant recognition and localization for intra-row mechanical weeding device based on machine vision. Trans. Chin. Soc. Agric. Eng. 2013, 29, 12–18.
19. Song, Y.; Liu, Y.; Liu, L.; Zhu, D.; Jiao, J.; Chen, L. Extraction method of navigation baseline of corn roots based on machine vision. Trans. Chin. Soc. Agric. Mach. 2017, 48, 38–44.
20. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199.
21. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269.
22. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346.
23. Luo, H.; Zhu, L.; Han, D. A survey on image segmentation using active contour and level set method. J. Image Graph. 2006, 11, 301–309.
24. Wang, X.; Fang, L. Survey of image segmentation based on active contour model. Pattern Recognit. Artif. Intell. 2013, 26, 751–760.
25. Yuan, J.; Bae, E.; Tai, X.C. A study on continuous max-flow and min-cut approaches. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2217–2224.
26. Li, C.H.; Lee, C.K. Minimum cross entropy thresholding. Pattern Recognit. 1993, 26, 617–625.
27. Ridler, T.W.; Calvard, S. Picture thresholding using an iterative selection method. IEEE Trans. Syst. Man Cybern. 1978, 8, 630–632.
28. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
29. Arthur, D.; Vassilvitskii, S. k-means++: The Advantages of Careful Seeding; SIAM: Philadelphia, PA, USA, 2007; pp. 1027–1035.
30. Gavioli, A.; de Souza, E.G.; Bazzi, C.L.; Schenatto, K.; Betzek, N.M. Identification of management zones in precision agriculture: An evaluation of alternative cluster analysis methods. Biosyst. Eng. 2019, 181, 86–102.
31. Aja-Fernández, S.; Curiale, A.H.; Vegas-Sánchez-Ferrero, G. A local fuzzy thresholding methodology for multiregion image segmentation. Knowl. Based Syst. 2015, 83, 1–12.
32. Haralick, R.M.; Shapiro, L.G. Computer and Robot Vision, Vol. I; Addison-Wesley: Boston, MA, USA, 1992; pp. 28–48.
33. Song, K.; Ren, X. Image segmentation of disease speckle of corn leaf based on YCbCr color space. Trans. Chin. Soc. Agric. Eng. 2008, 24, 202–205.
34. Li, Z.; Wang, S.; Sun, J. Image segmentation in object recognition of mature eggplant. Trans. Chin. Soc. Agric. Mach. 2009, 40, 105–108, 196.
35. Zheng, X.; Wang, X. Leaf vein extraction based on gray-scale morphology. Int. J. Image Graph. Signal Process. 2010, 2, 25–31.
36. Su, W.; Jiang, K.; Yan, A.; Liu, Z.; Zhang, M.; Wang, W. Monitoring of planted lines for breeding corn using UAV remote sensing image. Trans. Chin. Soc. Agric. Eng. 2018, 34, 92–98.
37. Zhang, Z.Y. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
Figure 1. Canopy structure of maize.
Figure 2. Flowchart of algorithm for real-time recognition of maize cores at seedling stage.
Figure 3. An example extraction of a maize zone.
Figure 4. Flowchart of connected-domain analysis.
Figure 5. An example of localization for a maize core.
Figure 6. Effects of the six segmentation methods.
Figure 7. Statistical chart of the six segmentation results for images of different classes of maize samples.
Figure 8. Statistical chart of average time and standard deviation for different methods.
Figure 9. Effects of real-time localization for maize core under the 12 combined strategies.
Figure 10. Statistical chart of recognition rate for maize core.
Figure 11. Image of maize with more weeds on cloudy day.
Figure 12. Diagram of core localization and fertilization with the fertilizer.
Figure 13. Diagram of the real localization error for a maize core.
