# A Novel Method of Identifying Paddy Seed Varieties

## Abstract

## 1. Introduction

## 2. Materials and Methods

#### 2.1. Image Capture System and Experimental Samples

#### 2.2. Image Segmentation

1. Red and hue band images are obtained from the original image.
2. The red and hue band images are smoothed and then converted into binary images using an optimum threshold obtained with Otsu's method [9].
3. A complete binary seed image is obtained by combining the hue and red binary images with the OR logic operator and then applying a filling operator.
4. The segmented seed image is obtained by applying the AND operator to the binary image and the original image.
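The segmentation steps can be sketched as follows (a minimal NumPy/SciPy illustration, not the authors' exact pipeline; the `red` and `hue` arrays stand in for the band images, and the filling operator is approximated with `scipy.ndimage.binary_fill_holes`):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(band):
    """Return the threshold maximizing between-class variance (Otsu [9])."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum[t - 1] / total          # weight of the background class
        w1 = 1.0 - w0                    # weight of the foreground class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / cum[t - 1]
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / (total - cum[t - 1])
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_seed(red, hue, original):
    # Step 2: smooth each band, then binarize with its Otsu threshold.
    red_s = ndimage.uniform_filter(red.astype(float), size=3)
    hue_s = ndimage.uniform_filter(hue.astype(float), size=3)
    red_bin = red_s > otsu_threshold(red_s.astype(np.uint8))
    hue_bin = hue_s > otsu_threshold(hue_s.astype(np.uint8))
    # Step 3: OR the two binary images, then fill interior holes.
    mask = ndimage.binary_fill_holes(red_bin | hue_bin)
    # Step 4: AND the mask with the original image.
    return original * mask
```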

#### 2.3. Feature Extraction

- The lemma, palea, glume, and chaff tip of a seed are illustrated in Figure 4.
- $\overline{\mathrm{AB}}$ is the longest line segment in the seed contour.
- O is the midpoint of $\overline{\mathrm{AB}}$.
- $\overline{\mathrm{CD}}$ is the perpendicular bisector of $\overline{\mathrm{AB}}$, and thus O is the intersection of $\overline{\mathrm{AB}}$ and $\overline{\mathrm{CD}}$.
- $\overline{\mathrm{CO}}$ crosses the lemma.
- $\overline{\mathrm{DO}}$ crosses the palea.
- $\overline{{\mathrm{P}}_{1}{\mathrm{P}}_{2}}$ is the line perpendicular to $\overline{\mathrm{AB}}$ at the 1/5 position along $\overline{\mathrm{AB}}$.
- $\overline{{\mathrm{P}}_{3}{\mathrm{P}}_{4}}$ is the line perpendicular to $\overline{\mathrm{AB}}$ at the 4/5 position along $\overline{\mathrm{AB}}$.
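Finding $\overline{\mathrm{AB}}$ and its midpoint O from the contour can be sketched as follows (a minimal NumPy illustration; `contour` is assumed to be an N×2 array of boundary points):

```python
import numpy as np

def longest_chord(contour):
    """Return the endpoints A, B of the longest line segment between
    contour points, and the midpoint O of AB (where CD crosses AB)."""
    pts = np.asarray(contour, dtype=float)
    # Pairwise squared distances between all contour points.
    diff = pts[:, None, :] - pts[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    a, b = pts[i], pts[j]
    o = (a + b) / 2.0
    return a, b, o
```

The O(N²) pairwise search is adequate for a single seed contour; rotating-calipers on the convex hull would be faster for large contours.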

The concaves ($R_K$ and $R_L$) to the side of the chaff tip are indicated by red curves in Figure 5. Points L, $L_u$, $L_d$, K, $K_u$, and $K_d$ (Figure 6) on the concaves were crucial for feature extraction. The hull points $L_d$, $L_u$, $K_u$, and $K_d$ of the seed contour were obtained using the convex hull algorithm [10].

1. $\overline{\mathrm{AB}}$ is the longest line segment on the seed contour.
2. $\overline{\mathrm{CD}}$ is the perpendicular bisector of $\overline{\mathrm{AB}}$.
3. The chaff-tip width ($\overline{\mathrm{LK}}$) is as illustrated in Figure 7.
4. The chaff-tip height $h_c = \max(h_i)$ is the maximum height of the chaff tip above $\overline{\mathrm{LK}}$, where $h_i$ is the distance between a point on the chaff-tip contour and $\overline{\mathrm{LK}}$, as illustrated in Figure 7.
5. The depth $d_K$ is the maximum distance between $\overline{{\mathrm{K}}_{\mathrm{u}}{\mathrm{K}}_{\mathrm{d}}}$ and the concave $R_K$ ($d_K \perp \overline{{\mathrm{K}}_{\mathrm{u}}{\mathrm{K}}_{\mathrm{d}}}$), as indicated in Figure 8; $d_K = \max(d_i)$ is attained at point K.
6. The depth $d_L$ is computed similarly, as shown in Figure 8.
7. The interior angle φ is formed by $\overline{{\mathrm{KK}}_{\mathrm{d}}}$ and $\overline{{\mathrm{LL}}_{\mathrm{d}}}$, as illustrated in Figure 9.
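The concave depths $d_K$ and $d_L$ measure how far the contour dips away from the hull chord (e.g., $\overline{{\mathrm{K}}_{\mathrm{u}}{\mathrm{K}}_{\mathrm{d}}}$). Assuming the chord endpoints and the contour points between them are known, the depth can be sketched as:

```python
import numpy as np

def concave_depth(chord_start, chord_end, arc_points):
    """Maximum perpendicular distance from the points on a concave
    arc to the hull chord joining its endpoints (cf. d_K, d_L)."""
    p, q = np.asarray(chord_start, float), np.asarray(chord_end, float)
    v = q - p
    arc = np.asarray(arc_points, float)
    # Perpendicular distance of each arc point from line PQ via the
    # 2-D cross product |v x (point - p)| / |v|.
    cross = v[0] * (arc[:, 1] - p[1]) - v[1] * (arc[:, 0] - p[0])
    d = np.abs(cross) / np.linalg.norm(v)
    k = int(np.argmax(d))
    return d[k], arc[k]  # depth and the deepest point (K or L)
```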

#### 2.4. Classifier

The extracted features (including the depths $d_K$ and $d_L$ and the interior angle φ) were applied in a BPNN [11]. The BPNN classifier consisted of input, hidden, and output layers. The input features were normalized between 0 and 1. The output layer was composed of nodes related to the three categories: TK9, TN11, and TK14. The number of nodes in the hidden layer (${n}_{h}$) was calculated using the following formula [12]:

$$n_h = n_i + n_o + k,$$

where $n_i$ and $n_o$ are the number of input and output nodes, respectively, and k = −2, 0, 2. The structure of the BPNN classifier is illustrated in Figure 10, wherein $W_{ij}$ and $b_{ij}$ are the weights and biases between the input and hidden layers, and $W_{jk}$ and $b_{jk}$ are the weights and biases between the hidden and output layers. $X_i$, $H_j$, and $O_k$ denote the input-layer, hidden-layer, and output-layer values, respectively.
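A minimal sketch of the forward pass of such a three-layer network (assuming sigmoid activations and randomly initialized weights for illustration; not the authors' trained classifier):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpnn_forward(x, W_ij, b_ij, W_jk, b_jk):
    """Forward pass: inputs X_i -> hidden H_j -> outputs O_k."""
    h = sigmoid(W_ij @ x + b_ij)   # hidden-layer values H_j
    o = sigmoid(W_jk @ h + b_jk)   # output-layer values O_k
    return o

# Example sizes: n_i input features, n_o = 3 varieties; with k = 0
# the hidden-layer formula gives n_h = n_i + n_o.
n_i, n_o = 7, 3
n_h = n_i + n_o + 0
rng = np.random.default_rng(0)
W_ij, b_ij = rng.normal(size=(n_h, n_i)), np.zeros(n_h)
W_jk, b_jk = rng.normal(size=(n_o, n_h)), np.zeros(n_o)
out = bpnn_forward(rng.random(n_i), W_ij, b_ij, W_jk, b_jk)
```

In practice the seed would be assigned to the variety whose output node has the largest value.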

## 3. Results and Discussion

## 4. Conclusions

## Acknowledgments

## Author Contributions

## References

1. MousaviRad, S.J.; Rezaee, K.; Nasri, K. A new method for identification of Iranian rice kernel varieties using optimal morphological features and an ensemble classifier by image processing. Majlesi J. Multimedia Process. **2012**, 1, 1–8.
2. Mebatsion, H.K.; Paliwal, J.; Jayas, D.S. Automatic classification of non-touching cereal grains in digital images using limited morphological and color features. Comput. Electron. Agric. **2012**, 90, 99–105.
3. Kuo, T.Y.; Chung, C.L.; Chen, S.Y.; Lin, H.A.; Kuo, Y.F. Identifying rice grains using image analysis and sparse-representation-based classification. Comput. Electron. Agric. **2016**, 127, 716–725.
4. Lee, C.Y.; Yan, L.; Wang, T.F.; Lee, S.R.; Park, C.W. Intelligent classification methods of grain kernels using computer vision analysis. Meas. Sci. Technol. **2011**, 22, 64006–64012.
5. Sun, T.H.; Tien, F.C.; Kuo, R.J. Automated thermal fuse inspection using machine vision and artificial neural networks. J. Intell. Manuf. **2016**, 27, 639–651.
6. Huang, K.Y. Detection and classification of areca nuts with machine vision. Comput. Math. Appl. **2012**, 64, 739–746.
7. Zhang, Y.; Wu, L.; Neggaz, N.; Wang, S.; Wei, G. Remote-sensing image classification based on an improved probabilistic neural network. Sensors **2009**, 9, 7516–7539.
8. Zhang, Y.; Wang, S.; Ji, G.; Phillips, P. Fruit classification using computer vision and feedforward neural network. J. Food Eng. **2014**, 143, 167–177.
9. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
10. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Finding the convex hull. In Introduction to Algorithms, 3rd ed.; The MIT Press: London, UK, 2009.
11. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; Jesus, O.D. Neural Network Design, 2nd ed.; eBook; Oklahoma State University: Stillwater, OK, USA, 2014.
12. Rocco, F.; Governi, L.; Volpe, Y. ANN-based method for olive Ripening Index automatic prediction. J. Food Eng. **2010**, 101, 318–328.

**${n}_{h}$ = 8**

| Variety | TK9 | TN11 | TK14 |
|---|---|---|---|
| TK9 | 606 | 4 | 15 |
| TN11 | 23 | 665 | 15 |
| TK14 | 27 | 11 | 640 |
| Classification accuracy (%) | 92.38 | 97.79 | 95.52 |
| Average accuracy (%) | 95.26 | | |

**${n}_{h}$ = 10**

| Variety | TK9 | TN11 | TK14 |
|---|---|---|---|
| TK9 | 608 | 5 | 13 |
| TN11 | 21 | 662 | 10 |
| TK14 | 27 | 13 | 647 |
| Classification accuracy (%) | 92.68 | 97.35 | 96.57 |
| Average accuracy (%) | 95.56 | | |

**${n}_{h}$ = 12**

| Variety | TK9 | TN11 | TK14 |
|---|---|---|---|
| TK9 | 608 | 5 | 12 |
| TN11 | 21 | 662 | 15 |
| TK14 | 27 | 13 | 643 |
| Classification accuracy (%) | 92.68 | 97.35 | 95.97 |
| Average accuracy (%) | 95.36 | | |

| Variety | TK9 | TN11 | TK14 |
|---|---|---|---|
| TK9 | 613 | 8 | 10 |
| TN11 | 25 | 651 | 10 |
| TK14 | 18 | 21 | 646 |
| Total | 656 | 680 | 670 |
| Classification accuracy (%) | 93.44 | 95.7 | 96.4 |
| Average accuracy (%) | 95.21 | | |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Huang, K.-Y.; Chien, M.-C.
A Novel Method of Identifying Paddy Seed Varieties. *Sensors* **2017**, *17*, 809.
https://doi.org/10.3390/s17040809
