# Automated Counting Grains on the Rice Panicle Based on Deep Learning Method


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Image Collections

#### 2.1.1. Description of Image Capture

#### 2.1.2. Image Sets

#### 2.2. Grain Detection Method

#### 2.2.1. Image Annotation

#### 2.2.2. Grain Detection Based on Faster R-CNN with FPN

The anchor areas {32², 64², 128², 256², 512²} correspond to the five feature levels {P2, P3, P4, P5, P6}. Each feature level generates candidate boxes with three aspect ratios: 1:1, 1:2, and 2:1. P6 is designed specifically for the RPN and is used to process the 512²-scale candidate boxes; it is obtained by down-sampling P5.
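The anchor scheme described above (one area per pyramid level, three aspect ratios each) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; rounding anchor sides to integer pixels is an assumption.

```python
# Illustrative sketch of the FPN anchor configuration described in the text:
# five areas {32^2, ..., 512^2}, one per pyramid level P2-P6, each combined
# with the three aspect ratios 1:1, 1:2, and 2:1.
def fpn_anchor_shapes():
    levels = ["P2", "P3", "P4", "P5", "P6"]
    areas = [32**2, 64**2, 128**2, 256**2, 512**2]
    ratios = [(1, 1), (1, 2), (2, 1)]  # (height : width) aspect ratios
    shapes = {}
    for level, area in zip(levels, areas):
        boxes = []
        for rh, rw in ratios:
            # For area A and ratio h/w = rh/rw: h = sqrt(A * rh / rw), w = A / h
            h = (area * rh / rw) ** 0.5
            w = area / h
            boxes.append((round(h), round(w)))
        shapes[level] = boxes
    return shapes
```

Each level thus contributes three anchor shapes of equal area, so scale is handled by the pyramid level while shape is handled by the aspect ratio.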

Each ROI is assigned to a pyramid level $k$ according to

$$k = \left\lfloor k_0 + \log_2\!\left(\frac{\sqrt{wh}}{224}\right) \right\rfloor$$

where $k_0$ is 4, $w$ is the width of the ROI, and $h$ is the height of the ROI. $k_0 = 4$ corresponds to the level of a box whose width and height are 224; if the width and height of the box are halved relative to 224, the value of $k$ is reduced by 1, and so on.
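The level-assignment rule can be expressed directly in code. This is a minimal sketch; clamping $k$ to the available levels P2–P5 is an assumption, since the text does not state how out-of-range values are handled.

```python
import math

# Sketch of the FPN ROI-to-level assignment described above:
# k = floor(k0 + log2(sqrt(w * h) / 224)), with k0 = 4.
# Clamping to [k_min, k_max] is an assumption, not stated in the text.
def assign_fpn_level(w, h, k0=4, k_min=2, k_max=5):
    k = math.floor(k0 + math.log2(math.sqrt(w * h) / 224))
    return max(k_min, min(k_max, k))
```

A 224 × 224 ROI maps to level 4, a 112 × 112 ROI to level 3, and so on, so smaller ROIs are pooled from finer pyramid levels.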

#### 2.2.3. Training of the Grain Detection Model

#### 2.3. Evaluation Metrics

The intersection over union (IoU) was computed as

$$\mathrm{IoU} = \frac{B_p \cap B_l}{B_p \cup B_l}$$

where $B_p$ and $B_l$ are the predicted bounding box and the labelled bounding box, respectively; $B_p \cap B_l$ is the intersection of the detected bounding box and the ground-truth bounding box, and $B_p \cup B_l$ is the union of the two boxes.
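For axis-aligned boxes, the IoU definition above amounts to a few lines of code. This is a minimal sketch; the `(x1, y1, x2, y2)` corner format is an assumption.

```python
# Minimal IoU sketch for axis-aligned boxes in (x1, y1, x2, y2) form,
# matching the definition above: IoU = area(Bp ∩ Bl) / area(Bp ∪ Bl).
def iou(box_p, box_l):
    ix1 = max(box_p[0], box_l[0])
    iy1 = max(box_p[1], box_l[1])
    ix2 = min(box_p[2], box_l[2])
    iy2 = min(box_p[3], box_l[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes are disjoint
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_l = (box_l[2] - box_l[0]) * (box_l[3] - box_l[1])
    union = area_p + area_l - inter
    return inter / union if union > 0 else 0.0
```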

The counting performance was evaluated by the coefficient of determination (R²), root mean square error (RMSE), relative RMSE (rRMSE), and mean absolute error (MAE). These metrics were computed using the following equations:

$$R^2 = 1 - \frac{\sum_{j=1}^{n}\left(m_j - e_j\right)^2}{\sum_{j=1}^{n}\left(m_j - \bar{m}\right)^2}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(m_j - e_j\right)^2}$$

$$\mathrm{rRMSE} = \frac{\mathrm{RMSE}}{\bar{m}} \times 100\%$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{j=1}^{n}\left|m_j - e_j\right|$$

where $m_j$ and $e_j$ are the manual count and the model-detected count of image $j$, respectively; $\bar{m}$ is the mean of the manual counts; and $n$ is the total number of the detected images.
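For concreteness, the error metrics can be computed as follows. This is a minimal sketch assuming `m` holds the manual counts and `e` the model-detected counts per image; normalizing rRMSE by the mean manual count is an assumption based on the common definition.

```python
import math

# Sketch of the counting error metrics named above (RMSE, rRMSE, MAE).
# m: manual counts per image; e: model-detected counts per image.
def count_metrics(m, e):
    n = len(m)
    rmse = math.sqrt(sum((mj - ej) ** 2 for mj, ej in zip(m, e)) / n)
    # rRMSE expressed as a percentage of the mean manual count (assumed definition)
    rrmse = rmse / (sum(m) / n) * 100.0
    mae = sum(abs(mj - ej) for mj, ej in zip(m, e)) / n
    return rmse, rrmse, mae
```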

## 3. Results and Discussion

#### 3.1. The Behavior of the Grain Detection Model during the Training Process

#### 3.2. Performance of the Grain Detection Model

#### 3.3. Comparison with Other CNN Models

#### 3.4. Further Analysis of Testing Results

#### 3.4.1. Effects of the Number of Grains

#### 3.4.2. Effects of Lighting Conditions

#### 3.5. Application of the Grain Detection Model

The model-detected grain numbers for fresh grains agreed closely with the manual counts (R² = 0.998), and the regression line was highly consistent with the 1:1 line. The model produced similar results when detecting dry grains (Figure 8b). These results demonstrate that the proposed grain detection model can be applied to another rice variety and provides accurate grain detection regardless of the grain moisture condition.
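The R² value quoted above can be reproduced from paired counts with the standard coefficient-of-determination formula. This is a minimal sketch with illustrative variable names, not the authors' analysis script.

```python
# Sketch of the coefficient of determination (R^2) used to compare
# model-detected counts against manual counts per image.
def r_squared(observed, predicted):
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))  # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)              # total sum of squares
    return 1.0 - ss_res / ss_tot
```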

## 4. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Li, R.; Li, M.; Ashraf, U.; Liu, S.; Zhang, J. Exploring the Relationships between Yield and Yield-Related Traits for Rice Varieties Released in China from 1978 to 2017. Front. Plant Sci.
**2019**, 10, 543. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Rashid, K.; Kahliq, I.; Farooq, M.O.; Ahsan, M.Z. Correlation and cluster analysis of some yield and yield related traits in Rice (Oryza sativa). J. Recent Adv. Agric.
**2014**, 2, 290–295. [Google Scholar] - Kang, K.; Shim, Y.; Gi, E.; An, G.; Paek, N.-C. Mutation of ONAC096 Enhances Grain Yield by Increasing Panicle Number and Delaying Leaf Senescence during Grain Filling in Rice. Int. J. Mol. Sci.
**2019**, 20, 5241. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Luo, Y.; Lao, L.; Ai, B.; Zhang, M.; Xie, J.; Zhang, F. Development of a drought stress-resistant rice restorer line through Oryza sativa–rufipogon hybridization. J. Genet.
**2019**, 98, 55. [Google Scholar] [CrossRef] [PubMed] - Cox, N.; Smith, L.M. A Rice Transcription Factor Controls Grain Length through Cell Number. Plant Physiol.
**2019**, 180, 1781–1783. [Google Scholar] [CrossRef] [Green Version] - Duan, L.; Yang, W.; Bi, K.; Chen, S.; Luo, Q.; Liu, Q. Fast discrimination and counting of filled/unfilled rice spikelets based on bi-modal imaging. Comput. Electron. Agric.
**2011**, 75, 196–203. [Google Scholar] [CrossRef] - Duan, L.; Yang, W.; Huang, C.; Liu, Q. A novel machine-vision-based facility for the automatic evaluation of yield-related traits in rice. Plant Methods
**2011**, 7, 44. [Google Scholar] [CrossRef] [Green Version] - Al-Tam, F.; Adam, H.; Dos Anjos, A.; Lorieux, M.; Larmande, P.; Ghesquière, A.; Jouannic, S.; Shahbazkia, H.R. P-TRAP: A Panicle Trait Phenotyping tool. BMC Plant Biol.
**2013**, 13, 122. [Google Scholar] [CrossRef] [Green Version] - Mebatsion, H.; Paliwal, J. A Fourier analysis based algorithm to separate touching kernels in digital images. Biosyst. Eng.
**2011**, 108, 66–74. [Google Scholar] [CrossRef] - Tan, S.; Ma, X.; Mai, Z.; Qi, L.; Wang, Y. Segmentation and counting algorithm for touching hybrid rice grains. Comput. Electron. Agric.
**2019**, 162, 493–504. [Google Scholar] [CrossRef] - Lin, P.; Chen, Y.; He, Y.; Hu, G. A novel matching algorithm for splitting touching rice kernels based on contour curvature analysis. Comput. Electron. Agric.
**2014**, 109, 124–133. [Google Scholar] [CrossRef] - Zhao, S.; Gu, J.; Zhao, Y.; Hassan, M.; Li, Y.; Ding, W. A method for estimating spikelet number per panicle: Integrating image analysis and a 5-point calibration model. Sci. Rep.
**2015**, 5, 16241. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Gong, L.; Lin, K.; Wang, T.; Liu, C.; Yuan, Z.; Zhang, D.; Hong, J. Image-Based On-Panicle Rice [Oryza sativa L.] Grain Counting with a Prior Edge Wavelet Correction Model. Agronomy
**2018**, 8, 91. [Google Scholar] [CrossRef] [Green Version] - Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric.
**2018**, 147, 70–90. [Google Scholar] [CrossRef] [Green Version] - Khaki, S.; Pham, H.; Han, Y.; Kuhl, A.; Kent, W.; Wang, L. Convolutional Neural Networks for Image-Based Corn Kernel Detection and Counting. Sensors
**2020**, 20, 2721. [Google Scholar] [CrossRef] [PubMed] - Jiang, B.; Wang, P.; Zhuang, S.; Li, M. Leaf Counting with Multi-Scale Convolutional Neural Network Features and Fisher Vector Coding. Symmetry
**2019**, 11, 516. [Google Scholar] [CrossRef] [Green Version] - Hasan, M.; Chopin, J.P.; Laga, H.; Miklavcic, S.J. Detection and analysis of wheat spikes using Convolutional Neural Networks. Plant Methods
**2018**, 14, 100. [Google Scholar] [CrossRef] [Green Version] - Uzal, L.C.; Grinblat, G.; Namías, R.; Larese, M.; Bianchi, J.; Morandi, E.; Granitto, P. Seed-per-pod estimation for plant breeding using deep learning. Comput. Electron. Agric.
**2018**, 150, 196–204. [Google Scholar] [CrossRef] - Desai, S.V.; Balasubramanian, V.N.; Fukatsu, T.; Ninomiya, S.; Guo, W. Automatic estimation of heading date of paddy rice using deep learning. Plant Methods
**2019**, 15, 1–11. [Google Scholar] [CrossRef] [Green Version] - Wu, W.; Liu, T.; Zhou, P.; Yang, T.; Li, C.; Zhong, X.; Sun, C.; Liu, S.; Guo, W. Image analysis-based recognition and quantification of grain number per panicle in rice. Plant Methods
**2019**, 15, 122. [Google Scholar] [CrossRef] - Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell.
**2017**, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version] - Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Darrenl. Labelimg: Labelimg is a Graphical Image Annotation Tool and Label Object Bounding Boxes in Images. 2017. Available online: https://github.com/tzutalin/labelImg (accessed on 23 November 2020).
- Fei-Fei, L.; Deng, J.; Russakovsky, O.; Berg, A.; Li, K. ImageNet Dataset. 2019. Available online: http://www.image-net.org/ (accessed on 23 November 2020).
- Padilla, R. Most Popular Metrics Used to Evaluate Object Detection Algorithms. 2018. Available online: https://github.com/rafaelpadilla/Object-Detection-Metrics (accessed on 23 November 2020).

**Figure 1.** Illustration of the image capturing process: (**a**) structure of a rice panicle; (**b**) a rice primary branch and image capture with a mobile phone; (**c**) the captured image showing the rice primary branch.

**Figure 2.** Examples of labeling grains with bounding boxes: (**a**) drawing bounding boxes using the LabelImg software; (**b**) small scale; (**c**) large scale; (**d**) blurred condition; (**e**) sunny condition with background shadows; (**f**) cloudy condition.

**Figure 4.** Changes in loss and accuracy values during the training process of the grain detection model: (**a**) loss; (**b**) accuracy.

**Figure 5.** The precision–recall curve of the grain detection model compared with the original Faster R-CNN method and the SSD method.

**Figure 6.** Three examples of grain detection results using the grain detection model, the original Faster R-CNN model, and the SSD model with a cutoff confidence value of 0.9; the text labels indicate confidence scores: (**a**,**e**) original images; (**b**,**f**) detection results of the grain detection model; (**c**,**g**) detection results of the Faster R-CNN model; (**d**,**h**) detection results of the SSD model.

**Figure 7.** Examples of images causing grain detection errors: (**a**) most of the grain area was covered and blurred; (**b**) most of the grain area was covered and under light reflection; (**c**) two bonded grains under light reflection.

**Figure 8.** Comparison between manual observation and model results for counting grains. The solid red line is the regression line; the black dashed line is the 1:1 line. (**a**) fresh grains; (**b**) dry grains.

| Image Set | Rice Variety | No. of Samples | Imaging Conditions | No. of Samples |
|---|---|---|---|---|
| Original image set | Guguangyouzhan | 796 | sunny | 378 |
| | | | cloudy | 315 |
| | | | blurred | 103 |
| Verification image set | Zhenguiai | 70 | fresh | 35 |
| | | | dry | 35 |

| Project | Content |
|---|---|
| CPU | Intel [email protected] x8 |
| RAM | 62 GB |
| GPU | GeForce GTX 1080 Ti |
| GPU memory | 11 GB |
| Operating system | Ubuntu 16.04 LTS |
| CUDA | CUDA 9.0 with cuDNN v7 |
| Data processing | Python 3.6, OpenCV, LabelImg, etc. |
| Deep learning framework | PyTorch |
| Deep learning algorithm | Faster R-CNN ResNet50 with FPN |

**Table 3.** Precision and recall of the grain detection model on testing set images at different confidence values (0.4, 0.5, 0.6, 0.7, 0.8, 0.9) set as the cutoff points.

| Confidence Value | Manual Grain Counting | Correctly Identified (True Positive) | Incorrectly Identified (False Positive) | Missed Grains (False Negative) | Precision (%) | Recall (%) | Accuracy (%) |
|---|---|---|---|---|---|---|---|
| 0.9 | 1779 | 1770 | 0 | 9 | 100.0 | 99.5 | 99.5 |
| 0.8 | 1779 | 1774 | 0 | 5 | 100.0 | 99.7 | 99.7 |
| 0.7 | 1779 | 1774 | 1 | 5 | 99.9 | 99.7 | 99.7 |
| 0.6 | 1779 | 1775 | 6 | 4 | 99.6 | 99.8 | 99.4 |
| 0.5 | 1779 | 1775 | 12 | 4 | 99.3 | 99.8 | 99.1 |
| 0.4 | 1779 | 1775 | 17 | 4 | 99.0 | 99.8 | 98.8 |
| Mean | 1779 | 1774 | 6 | 5 | 99.6 | 99.7 | 99.4 |
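The rate columns in Table 3 can be recovered from the raw counts. This is an illustrative sketch; interpreting the accuracy column as TP / (TP + FP + FN) is an inference from the tabulated values, not stated in the text.

```python
# Sketch of the detection-rate definitions consistent with Table 3:
# precision = TP / (TP + FP), recall = TP / (TP + FN),
# accuracy  = TP / (TP + FP + FN)  (inferred from the table values).
def detection_rates(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = tp / (tp + fp + fn)
    return precision, recall, accuracy
```

For the 0.9-confidence row (TP = 1770, FP = 0, FN = 9), this reproduces 100.0% precision and 99.5% recall and accuracy.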

**Table 4.** Precision, recall, and accuracy of the grain detection model (No. 1), the original Faster R-CNN method (No. 2), and the SSD method (No. 3).

| No. | Manual Grain Counting | True Positive | False Positive | False Negative | Precision (%) | Recall (%) | Accuracy (%) |
|---|---|---|---|---|---|---|---|
| 1 | 1779 | 1770 | 0 | 9 | 100.0 | 99.5 | 99.5 |
| 2 | 1779 | 1707 | 3 | 72 | 99.8 | 95.9 | 95.7 |
| 3 | 1779 | 1324 | 0 | 455 | 100.0 | 74.4 | 74.4 |

| Number of Grains in an Image | Total Number of Images | Manual Counting | True Positive | False Positive | False Negative | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|---|
| 1–9 | 71 | 500 | 500 | 0 | 0 | 100.0 | 100.0 |
| 10–14 | 47 | 578 | 575 | 0 | 3 | 100.0 | 99.5 |
| >14 | 42 | 701 | 695 | 0 | 6 | 100.0 | 99.1 |
| Total | 160 | 1779 | 1770 | 0 | 9 | 100.0 | 99.5 |

| Lighting Condition | Total Number of Images | Manual Counting | True Positive | False Positive | False Negative | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|---|
| Sunny | 97 | 1024 | 1017 | 0 | 7 | 100.0 | 99.3 |
| Cloudy | 63 | 755 | 753 | 0 | 2 | 100.0 | 99.7 |
| Total | 160 | 1779 | 1770 | 0 | 9 | 100.0 | 99.5 |

| Grain Moisture Condition | Total Number of Images | Manual Counting | True Positive | False Positive | False Negative | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|---|
| Fresh | 35 | 446 | 444 | 0 | 2 | 100.0 | 99.6 |
| Dry | 35 | 446 | 443 | 2 | 3 | 99.6 | 99.3 |
| Total | 70 | 892 | 887 | 2 | 5 | 99.7 | 99.4 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Deng, R.; Tao, M.; Huang, X.; Bangura, K.; Jiang, Q.; Jiang, Y.; Qi, L.
Automated Counting Grains on the Rice Panicle Based on Deep Learning Method. *Sensors* **2021**, *21*, 281.
https://doi.org/10.3390/s21010281
