A Lightweight Recognition Method for Rice Growth Period Based on Improved YOLOv5s
Abstract
1. Introduction
2. Materials and Methods
2.1. Image Collection
2.2. Construction of Rice Growth Period Dataset
2.3. Improved YOLOv5s Network Structure
2.3.1. Improvements of Backbone Networks
- Depthwise separable convolution (DSC) [30]. A key component of MobileNetV3 is the DSC, whose core idea is to decompose a standard convolution into a depthwise convolution followed by a pointwise convolution, as shown in Figure 6. The depthwise convolution operates entirely within a two-dimensional plane: each input channel is convolved by exactly one kernel, so the number of kernels equals the number of channels in the previous layer. As a result, it cannot combine feature information from different channels at the same spatial position. The pointwise convolution is very similar to a traditional convolution: it weights and combines the feature maps of the previous step along the depth direction to generate new feature maps [30], producing one output feature map per convolution kernel. Because the depthwise convolution uses a separate kernel for each input channel (i.e., the number of groups equals the number of channels), the computational complexity of the convolution is reduced [31]. Assuming the convolution kernel size is D_K × D_K, M is the number of input channels, N is the number of output channels, and D_F × D_F is the size of the output feature map, the computation of the standard convolution and the DSC are calculated in Formulas (1) and (2), respectively.
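The cost reduction described above can be checked numerically. The helper functions below are a minimal sketch (hypothetical names, not the paper's code) that count multiply–accumulate operations for a standard convolution and a DSC under the symbols defined above:

```python
def standard_conv_cost(d_k, m, n, d_f):
    # Standard convolution cost: D_K * D_K * M * N * D_F * D_F
    return d_k * d_k * m * n * d_f * d_f

def dsc_cost(d_k, m, n, d_f):
    # Depthwise part: one D_K x D_K kernel per input channel
    depthwise = d_k * d_k * m * d_f * d_f
    # Pointwise part: a 1 x 1 convolution combining the M channels into N
    pointwise = m * n * d_f * d_f
    return depthwise + pointwise

# The ratio DSC / standard simplifies to 1/N + 1/D_K^2,
# so a 3 x 3 DSC costs roughly 1/9 of the standard convolution
# when N is large.
ratio = dsc_cost(3, 64, 128, 56) / standard_conv_cost(3, 64, 128, 56)
```

For example, with a 3 × 3 kernel, 64 input channels, and 128 output channels, the DSC needs only about 12% of the computation of the standard convolution.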
- h-swish activation function and Squeeze-and-Excitation (SE) module. h-swish is an improvement on the swish function [32] in which the sigmoid is replaced by ReLU6(x + 3)/6. This replacement alleviates problems such as vanishing gradients caused by increasing network depth, and it also greatly reduces computational cost, improving model performance and detection efficiency. The core idea of the SE attention mechanism [29] is to model the interdependence between channels and to generate a weight for each channel, enhancing significant features and suppressing unimportant ones. The SE module consists of two steps: Squeeze and Excitation. Squeeze compresses the current feature map into a global descriptor vector by applying global average pooling (GAP) to the extracted features. Excitation obtains normalized per-channel weights through two fully connected layers; the reweighted features then serve as input to the next layer of the network. The input X has a size of H × W × C, where C is the number of feature channels and H × W is the height and width of the feature map. Fc denotes a fully connected layer, ReLU and h-swish are activation functions, and Y is obtained by multiplying the weight generated for each channel with all elements of the corresponding channel. This enhances important features, weakens unimportant ones, and makes the extracted features more discriminative. The SE attention mechanism structure is shown in Figure 7a.
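The operations above are simple enough to sketch directly. The snippet below (an illustrative sketch in plain Python, not the paper's implementation) shows h-swish, the Squeeze step as per-channel global average pooling, and the final per-channel reweighting:

```python
def relu6(x):
    # ReLU6: clamp the input to the range [0, 6]
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6, a cheap piecewise
    # approximation of swish (x * sigmoid(x))
    return x * relu6(x + 3.0) / 6.0

def squeeze(feature_maps):
    # Squeeze step: global average pooling, one scalar per channel.
    # feature_maps: list of C channels, each an H x W list of lists.
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in feature_maps]

def reweight(feature_maps, channel_weights):
    # Final SE step: scale every element of a channel by the weight
    # produced for that channel by the Excitation step.
    return [[[w * v for v in row] for row in ch]
            for ch, w in zip(feature_maps, channel_weights)]
```

The two fully connected layers of the Excitation step are omitted here; in practice they map the squeezed vector through a dimensionality reduction, a ReLU, and an h-swish (or sigmoid) to produce `channel_weights`.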
- Inverted residual structure with linear bottlenecks. The classic residual bottleneck first uses a 1 × 1 convolution to reduce dimensionality, then extracts features through a 3 × 3 convolution, and finally uses a 1 × 1 convolution to restore dimensionality, producing a structure that resembles an hourglass, narrow in the middle and wide at both ends [33]. The inverted residual structure reverses this order: dimensionality is first increased with a 1 × 1 convolution, features are then extracted with a 3 × 3 DSC, and dimensionality is finally reduced with a 1 × 1 convolution. With the order of reduction and expansion swapped and the standard convolution replaced by a DSC, the result is a spindle-shaped structure, wide in the middle and narrow at both ends. In addition, no non-linear transformation such as ReLU is applied after the dimensionality-reducing convolution layer, in order to avoid information loss as much as possible. The inverted residual structure is shown in Figure 7b.
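The contrast between the two structures can be illustrated by the channel widths each stage produces. The functions below are a hypothetical sketch (the names and default ratios are illustrative, not taken from the paper):

```python
def inverted_residual_channels(c_in, expand_ratio=6):
    # MobileNetV2/V3 inverted residual:
    # 1x1 expand -> 3x3 depthwise conv -> 1x1 project (linear, no ReLU)
    c_mid = c_in * expand_ratio  # wide middle
    return [c_in, c_mid, c_mid, c_in]

def classic_bottleneck_channels(c_in, reduce_ratio=4):
    # Classic ResNet-style bottleneck:
    # 1x1 reduce -> 3x3 conv -> 1x1 restore
    c_mid = c_in // reduce_ratio  # narrow middle
    return [c_in, c_mid, c_mid, c_in]
```

For example, `inverted_residual_channels(16, 6)` yields the spindle shape [16, 96, 96, 16], while `classic_bottleneck_channels(256, 4)` yields the hourglass shape [256, 64, 64, 256].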
2.3.2. Improvement of Neck Network
3. Experiments and Analysis
3.1. Experimental Setup
3.2. Evaluation Metrics
3.3. Self-Comparison on Improved YOLOv5s
3.3.1. Influence on MobileNetV3 and GsConv
3.3.2. Recognition Results of Five Growth Periods of Rice
3.4. Comparison of Different Models
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Alfred, R.; Obit, J.H.; Yee, C.C.P.; Haviluddin, H.; Lim, Y. Towards Paddy Rice Smart Farming: A Review on Big Data, Machine Learning and Rice Production Tasks. IEEE Access 2021, 9, 50358–50380. [Google Scholar] [CrossRef]
- Jiang, X.; Fang, S.; Huang, X.; Liu, Y.; Guo, L. Rice Mapping and Growth Monitoring Based on Time Series GF-6 Images and Red-Edge Bands. Remote Sens. 2021, 13, 579. [Google Scholar] [CrossRef]
- Yu, Z.; Cao, Z.; Wu, X.; Bai, X.; Qin, Y.; Zhuo, W.; Xiao, Y.; Zhang, X.; Xue, H. Automatic image-based detection technology for two critical growth stages of maize: Emergence and three-leaf stage. Agric. For. Meteorol. 2013, 174, 65–84. [Google Scholar] [CrossRef]
- Çağlar, K.; Taşkın, G.; Erten, E. Paddy-rice phenology classification based on machine-learning methods using multitemporal co-polar X-band SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2509–2519. [Google Scholar]
- Zheng, J.; Song, X.; Yang, G.; Du, X.; Mei, X.; Yang, X. Remote sensing monitoring of rice and wheat canopy nitrogen: A review. Remote Sens. 2022, 14, 5712. [Google Scholar] [CrossRef]
- Liu, S.; Peng, D.; Zhang, B.; Chen, Z.; Yu, L.; Chen, J.; Pan, Y.; Zheng, S.; Hu, J.; Lou, Z.; et al. The Accuracy of Winter Wheat Identification at Different Growth Stages Using Remote Sensing. Remote Sens. 2022, 14, 893. [Google Scholar] [CrossRef]
- Sapkota, B.; Singh, V.; Neely, C.; Rajan, N.; Bagavathiannan, M. Detection of Italian ryegrass in wheat and prediction of competitive interactions using remote-sensing and machine-learning techniques. Remote Sens. 2020, 12, 2977. [Google Scholar] [CrossRef]
- Wang, L.; Wang, P.; Li, L.; Xun, L.; Kong, Q.; Liang, S. Developing an integrated indicator for monitoring maize growth condition using remotely sensed vegetation temperature condition index and leaf area index. Comput. Electron. Agric. 2018, 152, 340–349. [Google Scholar] [CrossRef]
- Ji, Z.; Pan, Y.; Zhu, X.; Wang, J.; Li, Q. Prediction of Crop Yield Using Phenological Information Extracted from Remote Sensing Vegetation Index. Sensors 2021, 21, 1406. [Google Scholar] [CrossRef] [PubMed]
- Sethy, P.K.; Behera, S.K.; Kannan, N.; Narayanan, S.; Pandey, C. Smart paddy field monitoring system using deep learning and IoT. Concurr. Eng. 2021, 29, 16–24. [Google Scholar] [CrossRef]
- Rasti, S.; Bleakley, C.J.; Holden, N.; Whetton, R.; Langton, D.; O’hare, G. A survey of high resolution image processing techniques for cereal crop growth monitoring. Inf. Process. Agric. 2022, 9, 300–315. [Google Scholar] [CrossRef]
- Bai, X.; Cao, Z.; Zhao, L.; Zhang, J.; Lv, C.; Li, C.; Xie, J. Rice heading stage automatic observation by multi-classifier cascade based rice spike detection method. Agric. For. Meteorol. 2018, 259, 260–270. [Google Scholar] [CrossRef]
- Zhang, Y.; Xiao, D.; Liu, Y. Automatic identification algorithm of the rice tiller period based on PCA and SVM. IEEE Access 2021, 9, 86843–86854. [Google Scholar] [CrossRef]
- Kevin, K.; Norbert, K.; Raghav, K.; Roland, S.; Achim, W.; Helge, A. Soybean leaf coverage estimation with machine learning and thresholding algorithms for field phenotyping. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018; BMVA Press: Durham, UK, 2018; pp. 3–6. [Google Scholar]
- Zhang, Z.; Liu, H.; Meng, Z.; Chen, J. Deep learning-based automatic recognition network of agricultural machinery images. Comput. Electron. Agric. 2019, 166, 104978. [Google Scholar] [CrossRef]
- Rasti, S.; Bleakley, C.; Silvestre, G.; Holden, N.; Langton, D.; O’hare, G. Crop growth stage estimation prior to canopy closure using deep learning algorithms. Neural Comput. Appl. 2021, 33, 1733–1743. [Google Scholar] [CrossRef]
- Wang, S.; Li, Y.; Yuan, J.; Song, L.; Liu, X.; Liu, X. Recognition of cotton growth period for precise spraying based on convolution neural network. Inf. Process. Agric. 2021, 8, 219–231. [Google Scholar] [CrossRef]
- Jiehua, L.; Wenzhong, G.; Sen, L.; Chaowu, W.; Yu, Z.; Chunjiang, Z. Strawberry Growth Period Recognition Method Under Greenhouse Environment Based on Improved YOLOv4. Smart Agric. 2021, 3, 99–110. [Google Scholar]
- Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
- Roy, A.M.; Bose, R.; Bhaduri, J. A fast accurate fine-grain object detection model based on YOLOv4 deep neural network. Neural Comput. Appl. 2022, 34, 3895–3921. [Google Scholar] [CrossRef]
- Ahmed, K.R. Smart Pothole Detection Using Deep Learning Based on Dilated Convolution. Sensors 2021, 21, 8406. [Google Scholar] [CrossRef]
- Cardellicchio, A.; Solimani, F.; Dimauro, G.; Petrozza, A.; Summerer, S.; Cellini, F.; Renò, V. Detection of tomato plant phenotyping traits using YOLOv5-based single stage detectors. Comput. Electron. Agric. 2023, 207, 107757. [Google Scholar] [CrossRef]
- Guo, W.; Fukatsu, T.; Ninomiya, S. Automated characterization of flowering dynamics in rice using field-acquired time-series RGB images. Plant Methods 2015, 11, 7. [Google Scholar] [CrossRef]
- Hong, S.; Jiang, Z.; Liu, L.; Wang, J.; Zhou, L.; Xu, J. Improved Mask R-CNN Combined with Otsu Preprocessing for Rice Panicle Detection and Segmentation. Appl. Sci. 2022, 12, 11701. [Google Scholar] [CrossRef]
- Liu, G.; Hu, Y.; Chen, Z.; Guo, J.; Ni, P. Lightweight object detection algorithm for robots with improved YOLOv5. Eng. Appl. Artif. Intell. 2023, 123, 106217. [Google Scholar] [CrossRef]
- Guo, G.; Zhang, Z. Road damage detection algorithm for improved YOLOv5. Sci. Rep. 2022, 12, 15523. [Google Scholar] [CrossRef] [PubMed]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Li, H. Slim-Neck by GSConv: A Better Design Paradigm of Detector Architectures for Autonomous Vehicles. arXiv 2022, arXiv:2206.02424. [Google Scholar]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
- Howard, A.G. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Ramachandran, P. Searching for activation functions. arXiv 2017, arXiv:1710.05941. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
| Growth Period | Training | Validation |
|---|---|---|
| turning green | 548 | 137 |
| tillering | 912 | 228 |
| jointing | 834 | 208 |
| heading | 592 | 148 |
| milky | 990 | 247 |
| Model | Params (MB) | GFLOPs | P (%) | R (%) | mAP (0.5) (%) | mAP (0.5:0.95) (%) |
|---|---|---|---|---|---|---|
| YOLOv5s (Baseline) | 7.06 | 16.3 | 98.5 | 98.9 | 99.4 | 97.3 |
| YOLOv5s + MobileNetV3 | 1.39 | 2.5 | 98.2 | 97.4 | 99.3 | 89.4 |
| YOLOv5s + GsConv | 6.58 | 15.2 | 92.6 | 98.0 | 97.3 | 73.4 |
| YOLOv5s + MobileNetV3 + GsConv | 1.24 | 2.3 | 96.2 | 98.9 | 98.7 | 94.2 |
| Model | Params (MB) | GFLOPs | P (%) | R (%) | mAP (0.5) (%) | mAP (0.5:0.95) (%) |
|---|---|---|---|---|---|---|
| Faster R-CNN | 13.7 | 37.0 | 98.1 | 96.3 | 96.7 | 93.2 |
| YOLOv4 | 10.6 | 18.2 | 93.5 | 92.6 | 90.5 | 88.7 |
| YOLOv7 | 36.5 | 103.2 | 98.7 | 97.4 | 99.6 | 98.6 |
| YOLOv8 | 11.1 | 28.4 | 99.1 | 99.0 | 99.5 | 98.8 |
| Small-YOLOv5 | 1.24 | 2.3 | 96.2 | 98.9 | 98.7 | 94.2 |
| Model | Params (MB) | GFLOPs | P (%) | R (%) | mAP (0.5) (%) | mAP (0.5:0.95) (%) |
|---|---|---|---|---|---|---|
| YOLOv4-tiny | 4.6 | 5.2 | 92.1 | 90.7 | 88.9 | 86.7 |
| YOLOv7-tiny | 5.1 | 8.2 | 93.2 | 95.1 | 95.6 | 93.7 |
| YOLOv5n | 1.76 | 4.1 | 94.6 | 95.1 | 98.8 | 91.7 |
| YOLOv8n | 3.0 | 8.1 | 99.3 | 99.3 | 99.5 | 98.8 |
| Small-YOLOv5 | 1.24 | 2.3 | 96.2 | 98.9 | 98.7 | 94.2 |
Share and Cite
Liu, K.; Wang, J.; Zhang, K.; Chen, M.; Zhao, H.; Liao, J. A Lightweight Recognition Method for Rice Growth Period Based on Improved YOLOv5s. Sensors 2023, 23, 6738. https://doi.org/10.3390/s23156738