Article

A SVM and SLIC Based Detection Method for Paddy Field Boundary Line

School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200000, China
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(9), 2610; https://doi.org/10.3390/s20092610
Submission received: 7 April 2020 / Revised: 23 April 2020 / Accepted: 29 April 2020 / Published: 3 May 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

Vision-based route and boundary detection is a key technology in agricultural automatic navigation systems. Variable illumination and a lack of training samples degrade visual route detection in unstructured farmland environments. To improve the robustness of boundary detection under different illumination conditions, an image segmentation algorithm based on a support vector machine (SVM) was proposed. A superpixel segmentation algorithm was adopted to overcome the shortage of SVM training samples: a sufficient number of superpixels were selected and their color and texture features extracted, forming a 19-dimensional feature vector. The SVM model was then trained and used to identify the paddy field ridge in new images; the recognition F1 score reaches 90.7%. Finally, the Hough transform was used to extract the boundary of the ridge. The total running time of the proposed algorithm is within 0.8 s, which meets the real-time requirements of agricultural machinery.

1. Introduction

Different types of automation systems for farm vehicles have been invented and introduced in agricultural fields [1,2]. Although autonomous agricultural vehicles using global positioning systems (GPS) have been in use for many years [3,4], when satellite signals are poor or the farmland environment is dynamic, visual environment recognition becomes the key means of intelligent agricultural machinery navigation [5,6,7], and the vision-based navigation route detection algorithm is its core component. The navigation route mainly includes the crop line formed by cultivated crops and the boundary line of the farmland.
The vision-based navigation route detection algorithm mainly includes two stages: image segmentation and route extraction. Earlier researchers used monochrome cameras sensitive to the near-infrared spectrum to acquire images [8,9,10,11,12], because in the near-infrared band crops appear visually brighter than the land, making it relatively easy to separate crops from land. With the development of color cameras, researchers found that there is a large difference between crops and land in the green color channel, and this difference in green values can effectively divide crops and land. Compared with monochrome cameras, segmentation based on the green color difference is more robust to changes of natural light and shadows in the image [13,14]. However, during the fallow period around harvesting or sowing, crops are no longer green but brown and black, so segmentation based on the green color difference struggles to distinguish crops from land. In response to this problem, some researchers observed that crop rows show periodic changes in the image, with crop rows at the peaks of these changes; by searching for periodically varying pixel values, the farmland image is segmented and the position of crop rows determined [15,16]. Because farmland image segmentation is also susceptible to natural light, researchers have reduced the effect of illumination by converting the original image to an illumination-invariant map [17]. After crops and land are segmented, routes are determined by fitting straight lines in the segmentation map. There are many ways to fit straight lines, including the Hough transform [8], linear regression [9], fixed template matching [12], and random sample consensus (RANSAC) [18]. However, when there are many weeds in the farmland, straight-line fitting in the binary segmentation map of green plants produces many false lines. To address line misdetection caused by overgrown weeds, Bossu et al. [14] used the wavelet transform to analyze the image in the frequency domain and then located the crop rows, avoiding the errors of direct line fitting.
In the course of route detection, the core algorithm is clearly image segmentation. The color-difference route detection algorithms above are model- and knowledge-based, whereas field boundary detection based on machine learning is trained on feature data and can be applied under different lighting and detection conditions. With sufficient training samples and appropriate feature extraction and selection, machine learning algorithms can accurately classify the different parts of the scene from feature data, and the classification results are less affected by natural light. With the improvement of hardware computing power, machine learning algorithms have been widely applied in machine vision [19].
This paper proposes a paddy field boundary detection algorithm based on machine learning. The simple linear iterative clustering (SLIC) algorithm [20] was used to perform superpixel segmentation preprocessing of paddy field images; color and texture features of each superpixel were extracted to form a 19-dimensional feature vector as the input of a support vector machine (SVM) [21]. The trained SVM model was used to classify the blocks in the image and segment it. Then, the Hough transform was used to extract the paddy field boundary.

2. Materials and Methods

A ZED camera was installed on a Yanmar VP6E paddy field broadcast machine. The camera collected images at a resolution of 1280 × 720 and a rate of 30 frames per second. Color images were obtained mainly in paddy fields in the Pudong and Songjiang Districts of Shanghai.

2.1. SLIC Superpixel Segmentation for Paddy Field Pictures

SLIC superpixel segmentation of paddy field pictures is divided into two steps:
1. Initialize the cluster centers of the paddy field image.
Set the number of cluster centers to k; the vector value of the i-th cluster center is Ci = (li, ai, bi, xi, yi), where (li, ai, bi) are the CIELAB color-space channel values of the cluster center and (xi, yi) are its coordinates in the image. The initialized cluster centers are evenly distributed with step length S (Equation (1)).
$S = \sqrt{N / k},$ where N is the total number of pixels in the image.
The label of the i-th cluster center is i (i = 1, 2, ∙∙∙, k); the labels of all other pixels are initialized to −1.
2. Iterative clustering process.
Each pixel is assigned to the cluster center closest to it, searching within a 2S × 2S region around each cluster center rather than globally. After all pixels are assigned to their nearest cluster centers, the i-th cluster center vector is updated with the mean of the vectors of all pixels belonging to it. The termination condition on the number of iterations n is (Equation (2)):
$n \geq n_t,$
where nt is the preset number of iterations.
In the iterative clustering process, the distance between the pixel and the cluster center is measured by weighted Euclidean distance. The distance calculation formula is as follows:
$D = \sqrt{\left(\frac{d_c}{N_c}\right)^2 + \left(\frac{d_s}{N_s}\right)^2},$
where $d_c = \sqrt{(l - l_i)^2 + (a - a_i)^2 + (b - b_i)^2}$ and $d_s = \sqrt{(x - x_i)^2 + (y - y_i)^2}$; (l, a, b, x, y) is the vector value of pixel (x, y); Nc and Ns are weighting factors.
SLIC superpixel segmentation for a paddy field image (Figure 1a) can generate compact, nearly uniform superpixels (Figure 1b), and the algorithm runs fast.
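Section 3 notes that the SuperpixelSLIC function from OpenCV is used for this stage. A minimal sketch of the preprocessing step under that assumption follows; the parameter values (region_size, ruler, n_iters) and the file name are illustrative, not the authors' settings.

```python
import cv2

def slic_superpixels(bgr_image, region_size=32, ruler=10.0, n_iters=10):
    # SLIC clusters in the CIELAB color space, so convert from OpenCV's BGR.
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # Requires the opencv-contrib-python package (cv2.ximgproc module).
    slic = cv2.ximgproc.createSuperpixelSLIC(
        lab, algorithm=cv2.ximgproc.SLIC, region_size=region_size, ruler=ruler)
    slic.iterate(n_iters)                  # iterative clustering (n_t iterations)
    labels = slic.getLabels()              # per-pixel superpixel index
    contours = slic.getLabelContourMask()  # superpixel boundaries, as in Figure 1b
    return labels, contours

img = cv2.imread("paddy_field.jpg")        # a 1280 x 720 ZED camera frame
labels, contours = slic_superpixels(img)
```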

2.2. Paddy Field Superpixel Features Extraction

For each superpixel, a 9-dimensional color feature and a 10-dimensional texture feature are extracted and concatenated into a 19-dimensional feature vector νj (Equation (4)).
$\nu_j = (\mu_{jr}, \mu_{jg}, \mu_{jb}, \mu_{jh}, \mu_{js}, \mu_{jv}, \mu_{jm}, \sigma_{jh}, \sigma_{js}, \sigma_{jv}, h_{j1}, h_{j2}, h_{j3}, h_{j4}, h_{j5}, h_{j6}, h_{j7}, h_{j8}, h_{j9}),$
where j denotes the j-th superpixel; (μjr, μjg, μjb, μjh, μjs, μjv) are the RGB three-channel means and the HSV (hue, saturation, value) three-channel means of all pixels in the superpixel; (σjh, σjs, σjv) are the HSV three-channel variances of all pixels in the superpixel; μjm is the mean gradient amplitude of all pixels in the superpixel; and (hj1, hj2, hj3, hj4, hj5, hj6, hj7, hj8, hj9) is a 9-dimensional vector based on the histogram of pixel gradient directions in the superpixel.

2.2.1. Color Features Extraction

Color features are statistical characteristics of the pixel values in a superpixel in the RGB and HSV color spaces, obtained by calculating the RGB three-channel means, HSV three-channel means, and HSV three-channel variances of all pixels in the superpixel. The HSV color space is used because it has low sensitivity to changes in natural light [22]. The HSV per-channel variance is calculated as follows:
$\sigma^2 = \sum_{z} (z - \mu)^2 \, p(z),$
where z is a channel value of pixel (x, y) in the superpixel; μ is the mean of that channel over all pixels in the superpixel; and p(z) is the probability that the channel value equals z within the superpixel.
The super pixel color feature maps are shown in Figure 2.
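As a concrete illustration, here is a sketch of the 9-dimensional color feature for one superpixel, assuming the label map produced by the SLIC step above; the helper name color_features is ours, not the paper's.

```python
import cv2
import numpy as np

def color_features(bgr_image, labels, j):
    """9-D color feature of the j-th superpixel: RGB means, HSV means, HSV variances."""
    mask = labels == j                            # pixels of the j-th superpixel
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    bgr_pix = bgr_image[mask].astype(np.float64)  # shape: (n_pixels, 3)
    hsv_pix = hsv[mask].astype(np.float64)
    rgb_mean = bgr_pix.mean(axis=0)[::-1]         # reorder BGR -> RGB
    hsv_mean = hsv_pix.mean(axis=0)
    hsv_var = hsv_pix.var(axis=0)                 # Equation (5), per channel
    return np.concatenate([rgb_mean, hsv_mean, hsv_var])
```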

2.2.2. Texture Features Extraction

Different from color features, the texture features of an image characterize the distribution relationship between a pixel's value and those of its spatial neighborhood, describing the surface properties of the object in the image region. The texture features extracted in this paper consist of two parts: the mean gradient magnitude of all pixels in the superpixel, and a 9-dimensional vector based on the histogram of pixel gradient directions.
The calculation formula of the pixel gradient amplitude G(x, y) is as follows:
$G(x, y) = \sqrt{G_x(x, y)^2 + G_y(x, y)^2},$
where $G_x(x, y) = I(x + 1, y) - I(x, y)$ and $G_y(x, y) = I(x, y + 1) - I(x, y)$; I(x, y) is the gray value of pixel (x, y).
In actual processing, the horizontal gradient Gx and the vertical gradient Gy are computed by convolving the image I with the Sobel edge operator [23] (Equation (7)).
$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I \quad \text{and} \quad G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * I.$
The superpixel gradient amplitude mean is shown in Figure 3.
The second part of the texture feature is a 9-dimensional vector based on the histogram of pixel gradient directions in the superpixel. The gradient direction of pixel (x, y) is given by:
$\alpha(x, y) = \tan^{-1}\!\left(\frac{G_y(x, y)}{G_x(x, y)}\right),$
where α(x, y) is the gradient direction of pixel (x, y), taken over the range 0–360° (using the quadrant-aware arctangent).
Construct a 9-dimensional texture feature vector with the following rules:
  • The 0–360° range is evenly divided into 9 bins (Figure 4), each corresponding to one component of the 9-dimensional feature vector. Each component is initialized to 0.
  • Traverse all pixels in the superpixel and add a weighted value to the corresponding angle bin according to each pixel's gradient direction. That is, when the gradient direction α(x, y) of pixel (x, y) satisfies:
$\alpha(x, y) \in p_l,$
where pl is the l-th angle range, l = 1,2,3, ∙∙∙, 9.
the component hl corresponding to pl is incremented, using the pixel gradient magnitude as the weighting coefficient (a sketch combining Equations (6)–(10) follows this procedure):
$h_l \leftarrow h_l + G(x, y).$
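Under the same assumptions as the color sketch above, a sketch of the 10-dimensional texture feature: the mean Sobel gradient magnitude plus the magnitude-weighted 9-bin orientation histogram (Equations (6)–(10)). The 40° bin width follows the nine equal sectors of Figure 4.

```python
import cv2
import numpy as np

def texture_features(bgr_image, labels, j):
    """10-D texture feature: mean gradient magnitude + 9-bin orientation histogram."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)    # horizontal Sobel, Equation (7)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)    # vertical Sobel
    magnitude = np.sqrt(gx ** 2 + gy ** 2)             # Equation (6)
    direction = np.degrees(np.arctan2(gy, gx)) % 360   # Equation (8), 0-360 degrees
    mask = labels == j
    mag, ang = magnitude[mask], direction[mask]
    # h_l accumulation of Equation (10): magnitude-weighted histogram over
    # nine 40-degree bins (Figure 4)
    hist, _ = np.histogram(ang, bins=9, range=(0, 360), weights=mag)
    return np.concatenate([[mag.mean()], hist])
```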

2.3. Paddy Field Ridge Recognition Based on Support Vector Machine

2.3.1. Obtaining and Processing of Training Samples for SVM

The training sample set in this paper consists of manually labeled superpixels; the 19-dimensional feature vector of each superpixel is the training input of the SVM model, and the training labels are −1 and 1. After superpixel segmentation of the collected paddy field images, labeling software built with the OpenCV library is used to mark superpixels by mouse click. Superpixels in the ridge area are defined as positive samples (1), while superpixels in the non-ridge area are defined as negative samples (−1). The superpixels are drawn from twenty paddy field pictures collected in different paddy fields at different times of the day and annotated to obtain 1266 positive samples and 2749 negative samples, giving the training sample set (νj, mj) with j = 1, 2, 3, ∙∙∙, 4015 and mj ∈ {−1, 1}.
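The paper does not publish the labeling tool itself; the following is a hypothetical sketch of how such click-to-label annotation can be wired up with OpenCV's mouse callback. The names on_click and superpixel_labels are ours, and img and labels are assumed to come from the SLIC step above.

```python
import cv2

# labels: the per-pixel superpixel index map from the SLIC step above
superpixel_labels = {}   # superpixel index -> +1 (ridge) / -1 (non-ridge)
current_class = 1        # toggled by the annotator, e.g. via a key press

def on_click(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        j = int(param["labels"][y, x])    # superpixel under the clicked pixel
        superpixel_labels[j] = current_class

cv2.namedWindow("annotate")
cv2.setMouseCallback("annotate", on_click, {"labels": labels})
cv2.imshow("annotate", img)
cv2.waitKey(0)
```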

2.3.2. Ridge Recognition SVM Model Training

The goal of SVM training for paddy field ridge recognition is to find a hyperplane that divides the ridge and non-ridge samples while maximizing the margin between the two classes. The hyperplane can be expressed by the following linear equation:
$\omega^{T} \nu + b = 0,$
where ω is the normal vector of the hyperplane and determines its direction; b is the displacement term and determines the distance between the hyperplane and the origin; and ν is a feature vector.
To maximize the margin between the two classes, the SVM solves the following quadratic optimization problem:
$\min_{\omega, b} \ \frac{1}{2} \|\omega\|^2 \quad \text{s.t.} \quad m_j(\omega^{T} \nu_j + b) \geq 1$ for all training samples j.
After comparing the radial basis function kernel, the polynomial kernel, and the linear kernel, the linear kernel is selected for the SVM model because it yields the best classification results.
In addition, a hold-out validation procedure is used to avoid overfitting the training samples: the training samples are randomly split 9:1, with 90% used to train the SVM model and 10% used as the validation set to verify model accuracy. The model with the highest validation accuracy is kept as the output model.
During model training, the marked positive and negative samples are used as model input, the SVM training parameters are set, the linear kernel is selected, and iterative training is conducted. Finally, the optimal SVM model is selected on the validation set.
The linear kernel function is as follows:
$\kappa(x_i, x_j) = x_i^{T} x_j.$
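A sketch of this training stage with OpenCV's ml module (the library the authors report using in Section 3), assuming features is an N × 19 float32 array from the annotation step and labels is an N-vector of int32 values in {−1, 1}; the 9:1 split mirrors the validation procedure above, and the hyperparameters are illustrative.

```python
import cv2
import numpy as np

def train_ridge_svm(features, labels, val_fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    n_val = int(len(features) * val_fraction)   # 10% held out for validation
    val_idx, train_idx = idx[:n_val], idx[n_val:]

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)            # linear kernel, as selected above
    svm.train(features[train_idx], cv2.ml.ROW_SAMPLE, labels[train_idx])

    _, pred = svm.predict(features[val_idx])    # validate on the held-out 10%
    accuracy = float(np.mean(pred.ravel() == labels[val_idx]))
    return svm, accuracy
```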

2.3.3. Application and Verification of Ridge Recognition SVM Model

After SLIC superpixel segmentation, features are extracted for each superpixel, and the trained SVM model is loaded to predict each superpixel's class. Figure 5 shows the field classification results, where the red area indicates the field ridge. The classification results show that the superpixels of the paddy field are classified essentially correctly.

3. Experiment Results and Analysis

The algorithm runs on an NVIDIA Jetson TX2 (NVIDIA, Santa Clara, CA, USA) with a 2.0 GHz CPU, 8.0 GB of memory, and 32 GB of data storage, under the Ubuntu 16.04 operating system (a free, open-source operating system). In the superpixel segmentation and SVM classification stages, the SuperpixelSLIC and SVM functions from OpenCV, an open-source computer vision library, are used.
The images were collected at 9:00 a.m. and 3:00 p.m. in Songjiang District, Shanghai, and at 4:00 p.m. in Pudong District, Shanghai. From the images taken in the two districts, 50 paddy field images containing ridges were selected. There are therefore roughly a dozen images at each time point, and since each image yields about 500 superpixels, each time point contributes several thousand samples.
When the Yanmar VP6E paddy field broadcast machine travels at a speed of 0.8 m/s, the ZED camera (Stereolabs, San Francisco, CA, USA) collects paddy field pictures in real time, and the Jetson TX2 completes ridge identification for one image in 0.6 s. For the Songjiang paddy fields at different times, the ridge recognition results after SVM classification are shown in Figure 6 and Figure 7. Combined with the classification results of Figure 5, it can be seen that the algorithm identifies different types of field ridges well at different times in Songjiang District.
To verify that the ridge recognition algorithm has a certain generalization ability, it is also applied to a paddy field at 4:00 p.m. in Pudong District, Shanghai (Figure 8). The classification results show that the algorithm can effectively detect ridges in different paddy fields.
The F1 score [24] is used as the evaluation index of the paddy field ridge recognition algorithm. From the pictures taken in Songjiang and Pudong Districts, 50 paddy field pictures containing ridges were selected. The superpixel classification results were compared against the manual labels, and the precision and recall of superpixel recognition were computed. The 50 paddy field images were divided into 21,875 superpixels; after manual labeling, there were 3365 positive samples (superpixels in the ridge area) and 18,510 negative samples (superpixels in the non-ridge area).
As can be seen from Table 1, the proposed algorithm accurately segments the ridge and non-ridge areas of the paddy field, with an F1 score of 90.7%, verifying the effectiveness of the algorithm.
The route directly guides the automatic driving of agricultural machinery; therefore, after classifying a paddy field, it is necessary to further extract the ridge boundary to serve as the route.
Taking the paddy field classification result (Figure 6) as an example, the boundary extraction process is as follows. Based on the SVM classification results, the paddy field image is binarized to obtain a binary map (Figure 9a). The Canny edge detector [25] is applied to the binary map to obtain the edge map (Figure 9b). Finally, the Hough transform detects all straight lines in the edge map (Figure 9c). Because the Hough transform often detects more than one line, the line closest to the bottom center of the picture, i.e., coordinates (640, 720), is selected as the field boundary (Figure 9d). Figure 9 shows that the ridge boundary is effectively extracted.
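A sketch of this extraction chain, using OpenCV's Canny detector and probabilistic Hough transform; the thresholds are illustrative, and the paper does not specify which Hough variant or parameters were used.

```python
import cv2
import numpy as np

def extract_boundary(binary_map, anchor=(640, 720)):
    """Return the detected line (x1, y1, x2, y2) closest to the bottom center."""
    edges = cv2.Canny(binary_map, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    if lines is None:
        return None

    def dist_to_anchor(line):
        x1, y1, x2, y2 = line[0]
        # perpendicular distance from the anchor to the line through the segment
        num = abs((y2 - y1) * anchor[0] - (x2 - x1) * anchor[1] + x2 * y1 - y2 * x1)
        return num / np.hypot(y2 - y1, x2 - x1)

    return min(lines, key=dist_to_anchor)[0]
```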
As shown in Figure 9d, the green line is the manually labeled field boundary and the red line is the boundary extracted by the algorithm. The angle between the two lines is used as the criterion for boundary detection accuracy. The angle is computed in the image coordinate system, with the top-left corner of the image as the origin, the horizontal axis as the X-axis, and the vertical axis as the Y-axis; the angle of each line to the X-axis is calculated, and their difference gives the angle between them. Over the 50 selected paddy field pictures, the angle between the red and green lines has a mean of 1.63° and a variance of 0.14 (°)². Therefore, the accuracy of the field boundary extracted by the proposed algorithm meets the requirements of automatic navigation.
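A minimal sketch of this evaluation metric, with each line given by two endpoints in image coordinates; the function names are ours.

```python
import numpy as np

def line_angle_deg(x1, y1, x2, y2):
    # angle of the line to the X-axis in the image frame (origin top-left, Y down)
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def angle_between(detected, labeled):
    d = abs(line_angle_deg(*detected) - line_angle_deg(*labeled))
    return min(d, 180.0 - d)   # undirected angle between the two lines

# e.g. angle_between((100, 700, 1200, 400), (110, 705, 1190, 410)) -> small angle
```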
The Jetson TX2 completes ridge identification for one image in 0.6 s, and navigation line extraction takes under 0.2 s, so the total processing time per frame is within 0.8 s. Since the forward-looking distance of the ZED camera reaches 10 m and the paddy field broadcast machine travels at 0.8 m/s, a processing time of 0.8 s meets the real-time detection requirements.

4. Conclusions

To address the paddy field ridge detection problem, support vector machine classification is used instead of a traditional image segmentation algorithm to separate the ridge and non-ridge areas of the image, and the ridge boundary is extracted with the Hough transform. On an NVIDIA Jetson TX2 hardware platform, the total running time of the proposed algorithm is within 0.8 s, effectively meeting real-time requirements.
  • Preprocessing the original paddy field image with SLIC superpixel segmentation effectively reduces the computational load of subsequent processing and provides a large number of samples for SVM training: only 20 pictures are needed to obtain thousands of samples, alleviating the large-sample requirement of machine learning algorithms.
  • Considering the color and texture differences between ridge and non-ridge areas, a 9-dimensional color feature vector and a 10-dimensional texture feature vector are extracted in the feature extraction stage, making full use of image information and compensating for traditional segmentation algorithms' reliance on color information alone.
  • Multiple farmland images from different time periods and different plots were processed; the results show that the proposed algorithm accurately segments the ridge and non-ridge parts of a paddy field, with an F1 score of 90.7%.

Author Contributions

Conceptualization, Y.L.; methodology, Y.L.; software, D.C.; investigation, Y.L., D.C. and Y.H.; resources, Y.L.; visualization, Z.H. and D.C.; validation, Y.H. and L.G.; writing—original draft preparation, Z.H. and D.C.; writing—review and editing, Y.L., Z.H., Y.H. and L.G.; supervision, Y.L., L.G. and C.L.; project administration, Y.L. and C.L.; funding acquisition, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Plan of China under Grant No. 2016YFD0700505 and the Shanghai Agriculture Applied Technology and Development Program under Grant No. 2019-02-08-00-08-F01122.

Acknowledgments

Thanks to Shanghai Star Modern Agriculture Equipment Co., Ltd. for their support with equipment and the experiment field.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mousazadeh, H. A technical review on navigation systems of agricultural autonomous off-road vehicles. J. Terramech. 2013, 50, 211–232.
  2. Reina, G.; Milella, A.; Rouveure, R.; Nielsen, M.; Worst, R.; Blas, M.R. Ambient awareness for agricultural robotic vehicles. Biosyst. Eng. 2016, 146, 114–132.
  3. Weidong, Z.; Chun, W.; Jing, H. Development of Agriculture Machinery Aided Guidance System Based on GPS and GIS. In Proceedings of the Computer and Computing Technologies in Agriculture—Third IFIP International Conference, Beijing, China, 14–17 October 2009.
  4. Liu, J.; Yuan, J.; Cai, J.; Tao, C.; Wang, L.; Cheng, W. Autopilot system of agricultural vehicles based on GPS/INS and steer-by-wire. Trans. Chin. Soc. Agric. Eng. 2016, 1.
  5. Gerrish, J.B.; Stockman, G.C.; Mann, L.; Hu, G. Path-finding by Image Processing in Agricultural Field Operations. SAE Tech. Paper Ser. 1986, 95.
  6. Billingsley, J.; Schoenfisch, M. The successful development of a vision guidance system for agriculture. Comput. Electron. Agric. 1997, 16, 147–163.
  7. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–5 June 2014.
  8. Åstrand, B.; Baerveldt, A.-J. A vision based row-following system for agricultural field machinery. Mechatronics 2005, 15, 251–269.
  9. Han, S.; Zhang, Q.; Ni, B.; Reid, J.F. A guidance directrix approach to vision-based vehicle guidance systems. Comput. Electron. Agric. 2004, 43, 179–195.
  10. Marchant, J. Tracking of row structure in three crops using image analysis. Comput. Electron. Agric. 1996, 15, 161–179.
  11. Searcy, S.W.; Reid, J.F. Vision-Based Guidance of an Agricultural Tractor. In Proceedings of the American Control Conference, Seattle, WA, USA, 18–20 June 1986.
  12. Tillett, N.; Hague, T.; Miles, S. Inter-row vision guidance for mechanical weed control in sugar beet. Comput. Electron. Agric. 2002, 33, 163–177.
  13. Bakker, T.; Wouters, H.; Van Asselt, K.; Bontsema, J.; Tang, L.; Müller, J.; Van Straten, G. A vision based row detection system for sugar beet. Comput. Electron. Agric. 2008, 60, 87–95.
  14. Bossu, J.; Gée, C.; Jones, G.; Truchetet, F. Wavelet transform to discriminate between crop and weed in perspective agronomic images. Comput. Electron. Agric. 2009, 65, 133–143.
  15. Hague, T.; Tillett, N. A bandpass filter-based approach to crop row location and tracking. Mechatronics 2001, 11, 1–12.
  16. Søgaard, H.; Olsen, H.J. Determination of crop rows by image analysis without segmentation. Comput. Electron. Agric. 2003, 38, 141–158.
  17. Lategahn, H.; Beck, J.; Kitt, B.; Stiller, C. How to learn an illumination robust image feature for place recognition. In Proceedings of the IEEE Intelligent Vehicles Symposium, Gold Coast City, Australia, 26–28 June 2013; pp. 285–291.
  18. Hiremath, S.; Van Evert, F.K.; Ter Braak, C.; Stein, A.; Van Der Heijden, G. Image-based particle filtering for navigation in a semi-structured agricultural environment. Biosyst. Eng. 2014, 121, 85–95.
  19. Shinzato, P.; Wolf, D. A Road Following Approach Using Artificial Neural Networks Combinations; Kluwer Academic Publishers: Berlin, Germany, 2011; Volume 62, pp. 527–546.
  20. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
  21. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  22. Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Automatic crop detection under field conditions using the HSV colour space and morphological operations. Comput. Electron. Agric. 2017, 133, 97–107.
  23. Gao, W.; Yang, L.; Zhang, X.; Liu, H. An Improved Sobel Edge Detection. In Proceedings of the IEEE International Conference on Computer Science & Information Technology, Chengdu, China, 9–11 July 2010; pp. 67–71.
  24. Huang, H.; Xu, H.; Wang, X.; Silamu, W. Maximum F1-Score Discriminative Training Criterion for Automatic Mispronunciation Detection. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 787–797.
  25. Wang, B.; Fan, S.S. An Improved CANNY Edge Detection Algorithm. In Proceedings of the Second International Workshop on Computer Science and Engineering, Qingdao, China, 28–30 October 2009.
Figure 1. Effect of the simple linear iterative clustering (SLIC) segmentation process: (a) paddy field image; (b) SLIC superpixel segmentation of (a).
Figure 2. Graphic representation of superpixel color features: (a) R mean feature; (b) G mean feature; (c) B mean feature; (d) H mean feature; (e) S mean feature; (f) V mean feature; (g) H variance feature; (h) S variance feature; (i) V variance feature.
Figure 3. Graphic representation of the superpixel gradient amplitude mean feature.
Figure 4. Block diagram of the 0–360° angle bins.
Figure 5. Classification results of a paddy field with a horizontal ridge at 9:00 a.m. in Songjiang District, Shanghai.
Figure 6. Classification results of a paddy field with a vertical ridge at 9:00 a.m. in Songjiang District.
Figure 7. Classification results of a paddy field with a vertical ridge at 3:00 p.m. in Songjiang District.
Figure 8. Classification results of a paddy field with a vertical ridge at 4:00 p.m. in Pudong District.
Figure 9. Graphic representation of farmland ridge boundary extraction: (a) binary map; (b) Canny edge map; (c) Hough transform detection map; (d) boundary extraction map.
Table 1. Statistical analysis of the classification results.

Camera | TP | TN | FP | FN | Precision (%) | Recall (%) | F1 Score (%)
ZED | 3100 | 18,137 | 373 | 265 | 89.3 | 92.1 | 90.7

where TP denotes a superpixel predicted as ridge (1) that is actually ridge; TN a superpixel predicted as non-ridge (−1) that is actually non-ridge; FP a superpixel predicted as ridge that is actually non-ridge; and FN a superpixel predicted as non-ridge that is actually ridge.
