Image Recognition Technology in Smart Agriculture: A Review of Current Applications, Challenges, and Future Prospects
Abstract
1. Introduction
- Compared to other studies in the field, this review provides a comprehensive summary of agricultural IRT applications, enabling researchers to quickly and thoroughly understand the various technologies and their research foundations.
- We provide an overview of application areas such as plant disease detection, crop identification, and quality assessment. Additionally, we discuss how to optimize these applications by adopting deep learning and CV techniques.
- We present future research opportunities aimed at further improving crop recognition accuracy, achieving precision agriculture, and exploring the potential of new technologies in agricultural automation.
Year | Reference | Plant Disease Detection | Crop Identification | Crop Yield Prediction | Quality Assessment | Soil Nutrient Management | Fruit Grading |
---|---|---|---|---|---|---|---|
2018 | Patrício and Rieder [30] | √ | √ | | | | |
2019 | Jha et al. [31] | √ | √ | | | | |
2020 | Tian et al. [32] | √ | √ | √ | | | |
2022 | Dhanya et al. [33] | √ | √ | √ | | | |
2022 | Mamat et al. [34] | √ | √ | √ | | | |
2024 | Lei et al. [35] | √ | √ | | | | |
2024 | This review | √ | √ | √ | √ | √ | √ |
2. Overview of the Development of Image Recognition Technology
2.1. Image Recognition Technology
2.2. Development of Image Recognition Technology
2.2.1. Inception Stage
2.2.2. Breakthrough Stage
2.2.3. Deep Development Stage
2.2.4. Application Expansion Stage
3. Application of Image Recognition Technology in Smart Agriculture
3.1. Application of Image Recognition Technology in Plant Disease Detection and Classification
3.1.1. The Process of Plant Disease Detection
3.1.2. Deep Learning Technologies for Plant Disease Detection
- The disease identification performance of current models and algorithms in complex environments remains deficient and needs further improvement.
- Sample collection and annotation, model training and validation, and model interpretation and decision-making still rely on manual intervention, which directly affects recognition accuracy.
- The applicability and stability of these systems under different environmental and cultivation conditions are unknown, and the extensibility of new technologies across crop species, as well as their practical application, remains limited.
- CNNs require large amounts of image data for training, but the data currently available for plant disease identification research are limited [66].
Research Object | Year | Image Processing Task | Network Framework | Accuracy | Laboratory or Field Use | Notes |
---|---|---|---|---|---|---|
Banana | 2021 | Banana fungal disease detection | Artificial Neural Networks (ANN), Probabilistic Neural Networks (PNN) | 95.40% | Laboratory | Lighting, shadows, and leaf overlap can affect the extraction of texture features and classification accuracy [64]. |
Wheat | 2021 | Detection of five fungal diseases in wheat | EfficientNet-B0 | 94.20% | Field | An additional workforce is required to develop and maintain the mobile application [72]. |
Pomegranate | 2021 | Pomegranate leaf disease identification | SVM, CNN | 98.07% | Laboratory | Disease manifestations may vary across regions, and the model needs to be trained on data from a broader range of areas to enhance its generalization ability [73]. |
Tomato | 2022 | Tomato disease detection | VGG, ResNet | N/A | Field | Insufficient sample size and high pattern variation lead to confusion with other categories, resulting in false positives or lower average precision [61]. |
Rice | 2022 | Rice disease detection | FRCNN, EfficientNet-B0 | 96.43% | Laboratory | The model cannot fully cover all rice varieties and growth environments with limited collected samples [62]. |
Apple | 2022 | Apple disease detection | KNN | 99.78% | Laboratory | KNN requires calculating the similarity with all training images when querying an image, leading to high computational complexity and affecting real-time performance [74]. |
Rice | 2022 | Rice disease identification | CNN | 99.28% | Laboratory | To improve overall classification accuracy, there may be a tendency to overlook the precise differentiation of subcategories, leading to errors in subclassification [75]. |
Tomato | 2022 | Tomato disease identification | CNN | 96.30% | Laboratory | After significant changes in image acquisition conditions, the feasibility of vision transformers in image recognition tasks decreases [76]. |
Rice | 2022 | Rice disease identification and detection | MobInc-Net | 99.21% | Field | MobileNet, as a lightweight network, faces limitations in computational resources when processing high-resolution images or during real-time monitoring [77]. |
Soybean | 2023 | Soybean leaf disease identification | ImageNet, TRNet18 | 99.53% | N/A | No specific experiments and results have been provided for cross-crop applications, making the actual effectiveness of transferability unknown [78]. |
Corn | 2023 | Identification of nine corn diseases | VGGNet | 98.10–100% | Laboratory | Pre-trained models, such as VGG16, perform well on large-scale datasets but may overfit when applied to smaller datasets [79]. |
Sweet pepper | 2023 | Detection of bacterial spot disease in sweet pepper | YOLOv3 | 90.00% | Field | When detecting multiple diseases on the same leaf, features of different diseases may overlap, leading to false positives or missed detections [80]. |
Wheat | 2024 | Recognition of wheat rust | Imp-DenseNet | 98.32% | Laboratory | No specific optimization measures to address confounding factors are described [81]. |
Crop leaf disease | 2024 | Detection of 19 types of crop leaf diseases | YOLOv8 | 98.00% | Laboratory | Model demonstrates real-time, high-precision detection in complex environments [63]. |
- Current research in plant disease detection mainly focuses on the classification and analysis of single leaves, which must be oriented face-up and placed against a uniform background [82]. In an actual field environment, however, disease identification should cover all parts of the plant, since lesions are distributed in different orientations. Image acquisition should therefore capture multiple perspectives under conditions close to the natural environment to improve the model's effectiveness in practical applications.
- Most researchers extracted regions of interest directly from collected images, ignoring occlusion; this lowered recognition accuracy and limited the generalization ability of the models [83,84]. Occlusion arises from various factors, including the natural swaying of leaves, interference from branches or other plants, uneven lighting, complex backgrounds, and overlapping leaves. Different degrees of occlusion affect the recognition process, leading to incorrect identification or missed detections. Despite significant progress in DL algorithms in recent years, accurately identifying plant diseases and pests under extreme conditions (such as dimly lit environments) remains a considerable challenge.
- In actual agricultural production, plants are often affected by several diseases at once rather than just one. However, most current research focuses on detecting individual diseases, and identifying and classifying multiple diseases simultaneously remains difficult. This limitation is particularly prominent in practical applications because of the complexity of the production environment, where plants are threatened by two or more pathogens, such as fungi, bacteria, and viruses. When several diseases occur concurrently, their symptoms mix or combine, placing higher demands on the recognition capacity of the model. Better model architectures and algorithms for multimodal data fusion should be explored to address this issue; for example, integrating image recognition with other sensor data (such as temperature and humidity) could enhance the accuracy of simultaneous multi-disease identification.
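The idea of combining image features with sensor readings can be illustrated with a minimal feature-level fusion sketch. Everything here is illustrative: the feature values, the two hypothetical disease classes, and the nearest-prototype classifier standing in for a trained model:

```python
def normalize(v):
    """Min-max scale a feature vector to [0, 1] so modalities are comparable."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]

def fuse(image_feats, sensor_feats):
    """Feature-level fusion: concatenate the normalized modality vectors."""
    return normalize(image_feats) + normalize(sensor_feats)

def nearest_class(fused, prototypes):
    """Assign the class whose prototype is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(prototypes, key=lambda c: dist(fused, prototypes[c]))

image_feats = [120, 30, 10]   # e.g., lesion area, mean hue, texture score (invented)
sensor_feats = [28, 85]       # temperature (deg C), relative humidity (%)
prototypes = {                # hypothetical per-class prototypes in fused space
    "blight": [1.0, 0.2, 0.0, 0.0, 1.0],
    "mildew": [0.0, 0.5, 1.0, 1.0, 0.0],
}
predicted = nearest_class(fuse(image_feats, sensor_feats), prototypes)
```

In a real system the prototypes would be replaced by a trained multi-input network, but the fusion step, concatenating per-modality features into one vector, is the same.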
3.2. Application of Image Recognition in Crop Identification
3.2.1. Advances in Image Recognition for Crop Identification
3.2.2. Technologies for Crop Identification
- With the application of CNNs, crop species identification is no longer limited to single crops; multiple crops in the same image can be classified and recognized simultaneously, even in complex environments. Moreover, concurrent identification of multiple crops has improved owing to the integration of multispectral and hyperspectral imaging technologies: spectral information at different wavelengths can be captured, allowing different crop species to be effectively distinguished.
- Currently, the obstacles posed by the diversity of perspectives, growth conditions, and image quality when collecting datasets for crop identification deserve emphasis, as they limit the adaptability and stability of models across crop species and growing environments. Although ResNet, DenseNet, and newer models have achieved notable results in specific settings, they are still in their infancy in complex environments. Different perspectives, lighting conditions, seasonal changes, and crop growth stages all alter image features, making it difficult for models to generalize across all real-world situations.
- To achieve high crop identification accuracy, DL models typically rely on ample computing resources, which poses significant challenges for adaptability in real-time and large-scale field applications. In complex environments, particularly in resource-limited rural areas or on small farms, models with high computational demands are difficult to deploy and use effectively. Existing image recognition models therefore need further improvement to enhance their applicability and efficiency in practical agricultural production. With current technology, an effective strategy is to adopt lightweight model architectures, e.g., MobileNet and EfficientNet. These models significantly reduce compute and storage requirements while maintaining high accuracy, making them more suitable for resource-constrained complex environments. TL and data augmentation could also be applied to improve a model's generalization ability with limited labeled data and enhance applicability across crop species and growth stages.
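The parameter savings behind lightweight architectures such as MobileNet come largely from replacing standard convolutions with depthwise separable ones (a depthwise filter per input channel followed by a 1×1 pointwise convolution). The arithmetic is easy to verify:

```python
def standard_conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise (k x k per input channel) plus pointwise (1 x 1) convolution."""
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 256 input channels, 256 output channels
std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 67,840 parameters
reduction = std / sep                          # roughly 8.7x fewer parameters
```

This per-layer reduction, compounded over a whole network, is what makes such models deployable on the resource-constrained devices discussed above.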
3.3. Application of Image Recognition in Crop Yield Prediction and Quality Assessment
3.3.1. Methods for Crop Yield Forecasting and Quality Assessment
3.3.2. Technologies for Crop Yield Forecasting and Quality Assessment
3.4. Other Applications of Image Recognition in Smart Agriculture
- Soil testing and nutrient management: Jiang et al. [132,133] used hyperspectral imaging and multispectral cameras to conduct detailed analyses of soil, identifying its chemical composition, nutrient levels, moisture content, and pH value. Combined with DL, this approach enables rapid classification of soil types and detection of key nutrients (nitrogen, phosphorus, and potassium) [134,135,136]. The application of this technology provides more precise fertilization plans for agricultural production and enhances the scientific management of crop growth.
- Agricultural machinery operation quality assessment: Türköz et al. [137,138] developed a monitoring system based on CV and DL, installing cameras and sensors on agricultural machinery to capture and analyze image data of the operation process in real time. These data can be used to assess the performance of the machinery, such as harvesting precision, missed sections, or potential crop damage, thereby ensuring the consistency and accuracy of operations. In practical use, however, these systems remain subject to interference from vibration, complex lighting conditions, and changes in the field environment.
- Crop grading: CV and 3D imaging technologies are widely used to detect and classify the quality characteristics of agricultural products such as fruits and vegetables, including size, color, shape, and surface defects. Through the application of deep learning models, such systems can identify and grade products without causing any damage. A summary of representative applications of image recognition in other smart agriculture domains is presented in Table 5.
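As a toy illustration of the grading step, once a vision system has measured size and surface-defect ratio, a rule mapping those measurements to a grade might look like the following (the thresholds are invented for the example, not industry standards):

```python
def grade_fruit(diameter_mm, defect_ratio):
    """Map measured fruit attributes to a grade.

    diameter_mm:   estimated fruit diameter from the image, in millimetres
    defect_ratio:  fraction of surface area flagged as defective (0.0-1.0)
    Thresholds below are illustrative only.
    """
    if defect_ratio > 0.10:
        return "reject"
    if diameter_mm >= 75 and defect_ratio <= 0.02:
        return "grade A"
    if diameter_mm >= 60:
        return "grade B"
    return "grade C"

# A batch of measurements as a grading line might produce them
batch = [(80, 0.01), (65, 0.05), (50, 0.0), (80, 0.20)]
grades = [grade_fruit(d, r) for d, r in batch]
```

In deployed systems the measurement side (segmentation, defect detection) is the hard part handled by deep models; the grading rule itself is often this simple.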
4. Challenges and Prospects
- Scarcity of Agricultural Datasets: Current agricultural image recognition datasets originate primarily from common crop species, resulting in insufficient diversity [143,144]. For example, in the CropNet dataset (sourced from the U.S. Department of Agriculture, Sentinel-2 satellite imagery, and WRF-HRRR meteorological data), more than 75% of the data are concentrated on major crops like rice, wheat, and maize, while data on less common crops, such as peanuts and sweet potatoes, account for less than 10%. Because of this imbalance, models tend to overfit the features of common crops, yielding lower accuracy on rare or economically important crops: prediction accuracy for common crops reaches 85%, while accuracy for less common crops is only around 60%. Furthermore, the focus on common crops limits climate change research, as studies generally concentrate on crops like rice and maize, leaving other crops underrepresented in climate adaptation research [145]. The construction of agricultural image datasets faces various challenges, such as complex backgrounds and varied lighting conditions, which increase the difficulty of accurate annotation. Additionally, crops grown in complex environments, where other objects or different crop species may surround them, are harder to segment [146]. IRT used in smart agriculture must therefore adapt to different environments, which raises higher requirements for the related datasets. To address these challenges, advanced DL technologies such as transfer learning, GANs, model architecture optimization, physics-informed neural networks, and deep synthetic minority oversampling could be employed. By adopting these innovative technologies, the limitations of current IRT in smart agriculture can be effectively overcome, better meeting practical application needs.
- Multimodal Data Fusion: In addition to addressing the challenges posed by dataset scarcity, another promising direction for enhancing crop image recognition is multimodal data fusion. By combining data from different sensor types, such as visible light images, infrared imagery, and thermal imaging, recognition systems can leverage complementary information to improve accuracy and robustness. For instance, infrared and thermal images can help to detect crop disease or stress in conditions with poor lighting or overcast weather, where visible light images may lack contrast. These complementary datasets provide a more holistic view of the crop environment, allowing the model to better differentiate between healthy and diseased areas. Furthermore, fusion techniques such as deep learning-based multi-input models or feature-level fusion can enhance the model’s performance in complex environments, such as those with occlusions or varying backgrounds. The integration of multimodal data will help in improving crop disease detection in diverse environmental settings, thus strengthening the adaptability of image recognition systems for smart agriculture. Given the dynamic nature of agricultural environments, the development and application of multimodal data fusion will be key to overcoming the limitations of traditional image recognition systems, enhancing their scalability and practical application in diverse farming scenarios.
- Semi-Supervised and Unsupervised Learning: To address annotation difficulties and data scarcity, unsupervised learning has been proposed, which trains on substantial amounts of unlabeled visual data. It significantly reduces dependence on annotated datasets by exploiting the intrinsic structure and features of the data to build models without annotations. Semi-supervised learning combines the strengths of supervised and unsupervised learning, applying a degree of supervision to unannotated data. In the smart agriculture industry, particularly for rare disease detection, collecting large-scale datasets is expensive and complex, and data annotation is time-consuming for experts; semi-supervised and unsupervised learning effectively mitigate these problems [147,148,149]. Transfer learning is another critical technique: training a general image recognition model on a large number of annotated samples and then fine-tuning it on a small, application-specific agricultural sample set can enhance the model's generalization ability and make it more adaptable to different types of farming images [150]. This approach reduces reliance on large-scale annotated data and enables the model to better handle diverse application scenarios in smart agriculture.
- Application of Green AI: AI technology is developing rapidly, and the concept of Green AI is being developed to make AI technology more energy-efficient and reduce carbon emissions and resource consumption [151]. Integration of Green AI with IRT will optimize algorithms and model structures to minimize computational resource usage. It employs renewable energy and low-carbon computing resources during training and inference processes. These approaches will drive IRT’s efficient, energy-saving, and environmentally friendly application [152].
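The deep synthetic minority oversampling mentioned under dataset scarcity builds on the classic SMOTE idea: synthesizing new minority-class samples by interpolating between existing ones. A minimal sketch on toy 2-D feature vectors (real agricultural systems would interpolate in a learned feature space, not raw pixels):

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate n_new synthetic samples by interpolating random pairs
    of existing minority-class feature vectors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)       # pick a random pair
        t = rng.random()                     # interpolation factor in [0, 1)
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

# Toy minority class: three 2-D feature vectors for a rare crop
minority = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
extra = smote_like(minority, 4)  # four synthetic samples to rebalance the class
```

Every synthetic point lies on a segment between two real samples, so it stays inside the minority class's feature region.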
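The semi-supervised strategy described above can be sketched with pseudo-labeling: fit a classifier on the labelled data, assign labels to the unlabelled pool, and refit. Here a nearest-centroid classifier on toy 2-D points stands in for a deep model:

```python
def centroid(points):
    """Component-wise mean of a list of points."""
    return [sum(c) / len(points) for c in zip(*points)]

def nearest(x, centroids):
    """Label of the centroid closest to x (squared Euclidean distance)."""
    def d2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lbl: d2(x, centroids[lbl]))

def pseudo_label(labelled, unlabelled, rounds=2):
    """Fit centroids on labelled data, pseudo-label the unlabelled pool,
    and refit; repeat for a few rounds. Returns the final centroids."""
    data = {lbl: list(pts) for lbl, pts in labelled.items()}
    for _ in range(rounds):
        cents = {lbl: centroid(pts) for lbl, pts in data.items()}
        data = {lbl: list(pts) for lbl, pts in labelled.items()}
        for x in unlabelled:
            data[nearest(x, cents)].append(x)  # trust the current model's label
    return {lbl: centroid(pts) for lbl, pts in data.items()}

# Two labelled points per class; two unlabelled field samples
labelled = {"healthy": [[0.0, 0.0], [1.0, 0.0]],
            "diseased": [[10.0, 10.0], [11.0, 10.0]]}
unlabelled = [[0.5, 0.2], [10.5, 9.8]]
final_centroids = pseudo_label(labelled, unlabelled)
```

The unlabelled samples shift the centroids without any expert annotation, which is the core appeal in settings where labels are expensive.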
5. Conclusions
- (1) Insufficient biodiversity representation in training datasets: Current agricultural datasets often lack diversity, covering only a limited range of crops and environments, which hinders the model’s ability to generalize across different conditions. Additionally, many datasets fail to effectively capture the interactions and variations between different biological entities during the crop growth process [156].
- (2) Scalability limitations in real-world agricultural settings: IRT systems face difficulties when deployed in large-scale, dynamic agricultural environments. The variability in lighting, weather conditions, and terrain can severely impact the accuracy and consistency of image recognition models. Moreover, the need for real-time processing and high throughput in these environments often exceeds the capabilities of current systems, limiting their effectiveness in practical applications [157].
- (3) Inadequate multimodal data integration: Current IRT systems often rely on single-modal data, such as RGB images, which limits their ability to capture the complexity of agricultural environments fully. The integration of other modalities, such as thermal [158], hyperspectral [159], and LiDAR data [160], remains underdeveloped. Without effective multimodal data fusion, models struggle to account for various environmental factors, plant health indicators, and growth stages, resulting in suboptimal performance in diverse agricultural scenarios [161].
- (1) Development of large-scale, ecologically diverse image repositories: Future research should focus on building datasets that cover various crops, environments, and growth conditions to enhance the model’s adaptability and performance across different agricultural settings.
- (2) Implementation of lightweight transformer variants for edge computing deployment: Future research should focus on developing optimized, resource-efficient transformer models suitable for edge devices with limited computational power, specifically for image recognition tasks [162]. These models should maintain high performance while reducing computational overhead, enabling real-time image processing and decision-making in field applications where low latency is critical.
- (3) Multispectral data fusion techniques combining RGB, thermal, and hyperspectral inputs: To enhance crop monitoring accuracy, research could center on developing methods to integrate multispectral data from diverse sources effectively. For instance, by combining RGB imagery, thermal imaging, and hyperspectral data, models can capture more comprehensive information about crop health, growth stages, and environmental conditions.
6. Limitations and Future Research
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Nomenclature
Abbreviations | Definition |
AI | Artificial Intelligence |
IRT | Image Recognition Technology |
IC | Integrating Computer |
IoT | Internet of Things |
GPS | Global Positioning System |
GIS | Geographic Information System |
WSN | Wireless Sensor Networks |
CNN | Convolutional Neural Networks |
RNN | Recurrent Neural Networks |
GAN | Generative Adversarial Networks |
UAVs | Unmanned Aerial Vehicles |
SVM | Support Vector Machines |
DL | Deep Learning |
CV | Computer Vision |
KNN | K-Nearest Neighbors |
NB | Naive Bayes |
DWT | Discrete Wavelet Transform |
DTCWT | Dual-Tree Complex Wavelet Transform |
ANN | Artificial Neural Networks |
PNN | Probabilistic Neural Networks |
LACIE | Large Area Crop Inventory Experiment |
ASPP | Atrous Spatial Pyramid Pooling |
3DPNN | 3D Polynomial Neural Network |
BN | Batch Normalization |
DCN | Dual Convolutional Network |
EAI | Explainable Artificial Intelligence |
ViT | Vision Transformer |
DeiT | Data-efficient Image Transformer |
References
- Ghazal, S.; Munir, A.; Qureshi, W.S. Computer vision in smart agriculture and precision farming: Techniques and applications. Artif. Intell. Agric. 2024, 13, 64–83. [Google Scholar] [CrossRef]
- Qazi, S.; Khawaja, B.A.; Farooq, Q.U. IoT-equipped and AI-enabled next generation smart agriculture: A critical review, current challenges and future trends. IEEE Access 2022, 10, 21219–21235. [Google Scholar] [CrossRef]
- Boahen, J.; Choudhary, M. Advancements in Precision Agriculture: Integrating Computer Vision for Intelligent Soil and Crop Monitoring in the Era of Artificial Intelligence. Int. J. Sci. Res. Eng. Manag. 2024, 8, 1–5. [Google Scholar] [CrossRef]
- Fu, X.; Ma, Q.; Yang, F.; Zhang, C.; Zhao, X.; Chang, F.; Han, L. Crop pest image recognition based on the improved ViT method. Inf. Process. Agric. 2024, 11, 249–259. [Google Scholar] [CrossRef]
- Sharma, K.; Shivandu, S.K. Integrating artificial intelligence and Internet of Things (IoT) for enhanced crop monitoring and management in precision agriculture. Sens. Int. 2024, 5, 100292. [Google Scholar] [CrossRef]
- Adli, H.K.; Remli, M.A.; Wan Salihin Wong, K.N.S.; Ismail, N.A.; González-Briones, A.; Corchado, J.M.; Mohamad, M.S. Recent advancements and challenges of AIoT application in smart agriculture: A review. Sensors 2023, 23, 3752. [Google Scholar] [CrossRef]
- Raihan, A. A systematic review of Geographic Information Systems (GIS) in agriculture for evidence-based decision making and sustainability. Glob. Sustain. Res. 2024, 3, 1–24. [Google Scholar] [CrossRef]
- Ting, Y.T.; Chan, K.Y. Optimising performances of LoRa based IoT enabled wireless sensor network for smart agriculture. J. Agric. Food Res. 2024, 16, 101093. [Google Scholar] [CrossRef]
- Altalak, M.; Ammad Uddin, M.; Alajmi, A.; Rizg, A. Smart Agriculture Applications Using Deep Learning Technologies: A Survey. Appl. Sci. 2022, 12, 5919. [Google Scholar] [CrossRef]
- Mendoza-Bernal, J.; González-Vidal, A.; Skarmeta, A.F. A Convolutional Neural Network approach for image-based anomaly detection in smart agriculture. Expert Syst. Appl. 2024, 247, 123210. [Google Scholar] [CrossRef]
- Das, S.; Tariq, A.; Santos, T.; Kantareddy, S.S.; Banerjee, I. Recurrent neural networks (RNNs): Architectures, training tricks, and introduction to influential research. In Machine Learning for Brain Disorders; Humana: New York, NY, USA, 2023; pp. 117–138. [Google Scholar]
- Akkem, Y.; Biswas, S.K.; Varanasi, A. A comprehensive review of synthetic data generation in smart farming by using variational autoencoder and generative adversarial network. Eng. Appl. Artif. Intell. 2024, 131, 107881. [Google Scholar] [CrossRef]
- Joshi, K.; Kumar, V.; Anandaram, H.; Kumar, R.; Gupta, A.; Krishna, K.H. A review approach on deep learning algorithms in computer vision. In Intelligent Systems and Applications in Computer Vision; CRC Press: Boca Raton, FL, USA, 2023; pp. 1–15. [Google Scholar]
- Banik, A.; Patil, T.; Vartak, P.; Jadhav, V. Machine learning in agriculture: A neural network approach. In Proceedings of the IEEE 2023 4th International Conference for Emerging Technology (INCET), Belgaum, India, 26–28 May 2023; pp. 1–6. [Google Scholar]
- Gong, X.; Zhang, S. A High-Precision Detection Method of Apple Leaf Diseases Using Improved Faster R-CNN. Agriculture 2023, 13, 240. [Google Scholar] [CrossRef]
- Abbas, I.; Liu, J.; Amin, M.; Tariq, A.; Tunio, M.H. Strawberry Fungal Leaf Scorch Disease Identification in Real-Time Strawberry Field Using Deep Learning Architectures. Plants 2021, 10, 2643. [Google Scholar] [CrossRef]
- Tyagi, N.; Raman, B.; Garg, N. Classification of Hard and Soft Wheat Species Using Hyperspectral Imaging and Machine Learning Models. In International Conference on Neural Information Processing; Springer: Singapore, 2023; pp. 565–576. [Google Scholar]
- Gutiérrez, S.; Fernández-Novales, J.; Diago, M.P.; Tardaguila, J. On-the-go hyperspectral imaging under field conditions and machine learning for the classification of grapevine varieties. Front. Plant Sci. 2018, 9, 1102. [Google Scholar] [CrossRef]
- Ferreira, M.P.; de Almeida, D.R.A.; de Almeida Papa, D.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397. [Google Scholar] [CrossRef]
- Alanazi, A.; Wahab, N.H.A.; Al-Rimy, B.A.S. Hyperspectral Imaging for Remote Sensing and Agriculture: A Comparative Study of Transformer-Based Models. In Proceedings of the 2024 IEEE 14th Symposium on Computer Applications & Industrial Electronics (ISCAIE), Penang, Malaysia, 24–25 May 2024; pp. 129–136. [Google Scholar]
- Changjie, H.; Changhui, Y.; Shilong, Q.; Yang, X.; Bin, H.; Hanping, M. Design and experiment of double disc cotton topping device based on machine vision. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2023, 54, 1–5. [Google Scholar]
- Du, P.; Bai, X.; Tan, K.; Xue, Z.; Samat, A.; Xia, J.; Liu, W. Advances of four machine learning methods for spatial data handling: A review. J. Geovisual. Spat. Anal. 2020, 4, 1–25. [Google Scholar] [CrossRef]
- Chen, Y.; Liu, K.; Xin, Y.; Zhao, X. Soil Image Segmentation Based on Mask R-CNN. In Proceedings of the IEEE 2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 6–8 January 2023; pp. 507–510. [Google Scholar]
- Narang, G.; Galdelli, A.; Pietrini, R.; Solfanelli, F.; Mancini, A. A Data Collection Framework for Precision Agriculture: Addressing Data Gaps and Overlapping Areas with IoT and Artificial Intelligence. In Proceedings of the 2024 IEEE International Workshop on Metrology for Agriculture and Forestry (MetroAgriFor), Padua, Italy, 21–23 October 2024; pp. 580–585. [Google Scholar]
- Valenzuela, J.L. Advances in postharvest preservation and quality of fruits and vegetables. Foods 2023, 12, 1830. [Google Scholar] [CrossRef] [PubMed]
- Dhanya, V.G.; Subeesh, A.; Susmita, C.; Amaresh; Saji, S.J.; Dilsha, C.; Keerthi, C.; Nunavath, A.; Singh, A.N.; Kumar, S. High Throughput Phenotyping Using Hyperspectral Imaging for Seed Quality Assurance Coupled with Machine Learning Methods: Principles and Way Forward. Plant Physiol. Rep. 2024, 29, 749–768. [Google Scholar] [CrossRef]
- Dewi, D.A.; Kurniawan, T.B.; Thinakaran, R.; Batumalay, M.; Habib, S.; Islam, M. Efficient Fruit Grading and Selection System Leveraging Computer Vision and Machine Learning. J. Appl. Data Sci. 2024, 5, 1989–2001. [Google Scholar] [CrossRef]
- Ji, W.; Wang, J.; Xu, B.; Zhang, T. Apple Grading Based on Multi-Dimensional View Processing and Deep Learning. Foods 2023, 12, 2117. [Google Scholar] [CrossRef] [PubMed]
- Ryan, M.; Isakhanyan, G.; Tekinerdogan, B. An interdisciplinary approach to artificial intelligence in agriculture. NJAS Impact Agric. Life Sci. 2023, 95, 2168568. [Google Scholar] [CrossRef]
- Patrício, D.I.; Rieder, R. Computer Vision and Artificial Intelligence in Precision Agriculture for Grain Crops: A Systematic Review. Comput. Electron. Agric. 2018, 153, 69–81. [Google Scholar] [CrossRef]
- Jha, K.; Doshi, A.; Patel, P.; Shah, M. A Comprehensive Review on Automation in Agriculture Using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
- Tian, H.; Wang, T.; Liu, Y.; Qiao, X.; Li, Y. Computer Vision Technology in Agricultural Automation—A Review. Inf. Process. Agric. 2020, 7, 1–19. [Google Scholar] [CrossRef]
- Dhanya, V.G.; Subeesh, A.; Kushwaha, N.L.; Vishwakarma, D.K.; Kumar, T.N.; Ritika, G.; Singh, A.N. Deep Learning Based Computer Vision Approaches for Smart Agricultural Applications. Artif. Intell. Agric. 2022, 6, 211–229. [Google Scholar] [CrossRef]
- Mamat, N.; Othman, M.F.; Abdoulghafor, R.; Belhaouari, S.B.; Mamat, N.; Mohd Hussein, S.F. Advanced Technology in Agriculture Industry by Implementing Image Annotation Technique and Deep Learning Approach: A Review. Agriculture 2022, 12, 1033. [Google Scholar] [CrossRef]
- Lei, L.; Yang, Q.; Yang, L.; Shen, T.; Wang, R.; Fu, C. Deep Learning Implementation of Image Segmentation in Agricultural Applications: A Comprehensive Review. Artif. Intell. Rev. 2024, 57, 149. [Google Scholar] [CrossRef]
- Manaa, M.J.; Abbas, A.R.; Shakur, W.A. A systematic review for image enhancement using deep learning techniques. AIP Conf. Proc. 2023, 2977, 020114. [Google Scholar]
- Qi, Y.; Yang, Z.; Sun, W.; Lou, M.; Lian, J.; Zhao, W.; Ma, Y. A comprehensive overview of image enhancement techniques. Arch. Comput. Methods Eng. 2021, 29, 583–607. [Google Scholar] [CrossRef]
- Tian, Z.; Qu, P.; Li, J.; Sun, Y.; Li, G.; Liang, Z.; Zhang, W. A Survey of Deep Learning-Based Low-Light Image Enhancement. Sensors 2023, 23, 7763. [Google Scholar] [CrossRef] [PubMed]
- Wali, A.; Naseer, A.; Tamoor, M.; Gilani, S.A.M. Recent progress in digital image restoration techniques: A review. Digit. Signal Process. 2023, 141, 104187. [Google Scholar] [CrossRef]
- Archana, R.; Jeevaraj, P.E. Deep learning models for digital image processing: A review. Artif. Intell. Rev. 2024, 57, 11. [Google Scholar] [CrossRef]
- Gao, Z.; Lu, Z.; Wang, J.; Ying, S.; Shi, J. A convolutional neural network and graph convolutional network based framework for classification of breast histopathological images. IEEE J. Biomed. Health Inform. 2022, 26, 3163–3173. [Google Scholar] [CrossRef] [PubMed]
- Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object detection with deep learning: A review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef]
- Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; Zhou, X. LoFTR: Detector-free local feature matching with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 8922–8931. [Google Scholar]
- Hashemi, N.S.; Aghdam, R.B.; Ghiasi, A.S.B.; Fatemi, P. Template matching advances and applications in image analysis. arXiv 2016, arXiv:1610.07231. [Google Scholar]
- Zhao, X.; Wang, L.; Zhang, Y.; Han, X.; Deveci, M.; Parmar, M. A review of convolutional neural networks in computer vision. Artif. Intell. Rev. 2024, 57, 99. [Google Scholar] [CrossRef]
- Yang, R.; Timofte, R.; Li, B.; Li, X.; Guo, M.; Zhao, S.; Zhang, L.; Chen, Z.; Zhang, D.; Arora, Y.; et al. NTIRE 2024 Challenge on Blind Enhancement of Compressed Image: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 6524–6535. [Google Scholar]
- Lee, M.F.R.; Huang, Y.M.; Sun, J.Y.; Chen, X.Q.; Huang, T.F. Deep learning based face recognition for security robot. In Proceedings of the 2022 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Taipei, Taiwan, 28–30 November 2022; pp. 1–6. [Google Scholar]
- Kitsios, F.; Kamariotou, M.; Syngelakis, A.I.; Talias, M.A. Recent advances of artificial intelligence in healthcare: A systematic literature review. Appl. Sci. 2023, 13, 7479. [Google Scholar] [CrossRef]
- Li, S. Real-time traffic congestion detection technology in intelligent transportation systems. In Proceedings of the 2024 IEEE 13th International Conference on Communication Systems and Network Technologies (CSNT), Jabalpur, India, 6–7 April 2024; pp. 1029–1033. [Google Scholar]
- Burdon, J.J.; Barrett, L.G.; Yang, L.N.; He, D.C.; Zhan, J. Maximizing world food production through disease control. BioScience 2020, 70, 126–128. [Google Scholar] [CrossRef]
- Vishnoi, V.K.; Kumar, K.; Kumar, B. Plant disease detection using computational intelligence and image processing. J. Plant Dis. Prot. 2021, 128, 19–53. [Google Scholar] [CrossRef]
- He, D.C.; Burdon, J.J.; Xie, L.H.; Zhan, J. Triple bottom-line consideration of sustainable plant disease management: From economic, sociological and ecological perspectives. J. Integr. Agric. 2021, 20, 2581–2591. [Google Scholar] [CrossRef]
- Panchal, A.V.; Patel, S.C.; Bagyalakshmi, K.; Kumar, P.; Khan, I.R.; Soni, M. Image-based plant diseases detection using deep learning. Mater. Today Proc. 2023, 80, 3500–3506. [Google Scholar] [CrossRef]
- Mamba Kabala, D.; Hafiane, A.; Bobelin, L.; Canals, R. Image-based crop disease detection with federated learning. Sci. Rep. 2023, 13, 19220. [Google Scholar] [CrossRef] [PubMed]
- Jafar, A.; Bibi, N.; Naqvi, R.A.; Sadeghi-Niaraki, A.; Jeong, D. Revolutionizing agriculture with artificial intelligence: Plant disease detection methods, applications, and their limitations. Front. Plant Sci. 2024, 15, 1356260. [Google Scholar] [CrossRef]
- Li, Y.; Wang, D.; Yan, B. Pre-processing study of multi-sensor data. Int. J. Comput. Inf. Technol. 2023, 3, 19. [Google Scholar] [CrossRef]
- Polly, R.; Devi, E.A. A Deep Learning-based study of Crop Diseases Recognition and Classification. In Proceedings of the 2022 Second International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India, 23–25 February 2022; pp. 296–301. [Google Scholar]
- Sarkar, C.; Gupta, D.; Gupta, U.; Hazarika, B.B. Leaf Disease Detection Using Machine Learning and Deep Learning: Review and Challenges. Appl. Soft Comput. 2023, 145, 110534. [Google Scholar] [CrossRef]
- Ahmed, I.; Habib, G.; Yadav, P.K. An approach to identify and classify agricultural crop diseases using machine learning and deep learning techniques. In Proceedings of the International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India, 1–3 March 2023; pp. 1–6. [Google Scholar]
- Reddy, M.S.S.; Khatravath, P.R.; Surineni, N.K.; Mulinti, K.R. Object Detection and Action Recognition using Computer Vision. In Proceedings of the IEEE International Conference on Sustainable Computing and Smart Systems (ICSCSS), Coimbatore, India, 14–16 June 2023; pp. 874–879. [Google Scholar]
- Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef]
- Patil, R.R.; Kumar, S.; Chiwhane, S.; Rani, R.; Pippal, S.K. An artificial-intelligence-based novel rice grade model for severity estimation of rice diseases. Agriculture 2022, 13, 47. [Google Scholar] [CrossRef]
- Abid, M.S.Z.; Jahan, B.; Al Mamun, A.; Hossen, M.J.; Mazumder, S.H. Bangladeshi Crops Leaf Disease Detection Using YOLOv8. Heliyon 2024, 10, e36694. [Google Scholar] [CrossRef]
- Mathew, D.; Kumar, C.S.; Cherian, K.A. Foliar fungal disease classification in banana plants using elliptical local binary pattern on multiresolution dual tree complex wavelet transform domain. Inf. Process. Agric. 2021, 8, 581–592. [Google Scholar] [CrossRef]
- Chen, F.C.; Jahanshahi, M.R. NB-CNN: Deep learning-based crack detection using convolutional neural network and Naïve Bayes data fusion. IEEE Trans. Ind. Electron. 2017, 65, 4392–4400. [Google Scholar] [CrossRef]
- Albert, E.T.A.; Bille, N.H.; Leonard, N.M.E. Detection: A convolutional network method for plant disease recognition. Innov. Agric. 2023, 6, 2–12. [Google Scholar] [CrossRef]
- Wang, D.; Zheng, T.F. Transfer learning for speech and language processing. In Proceedings of the IEEE 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Hong Kong, 16–19 December 2015; pp. 1225–1237. [Google Scholar]
- Mehri, F.; Heshmati, A.; Ghane, E.T.; Khazaei, M.; Mahmudiono, T.; Fakhri, Y. A probabilistic health risk assessment of potentially toxic elements in edible vegetable oils consumed in Hamadan, Iran. BMC Public Health 2024, 24, 218. [Google Scholar] [CrossRef]
- Ghosal, S.; Sarkar, K. Rice leaf diseases classification using CNN with transfer learning. In Proceedings of the 2020 IEEE Calcutta Conference (Calcon), Kolkata, India, 28–29 February 2020; pp. 230–236. [Google Scholar]
- Xu, M.; Yoon, S.; Jeong, Y.; Park, D.S. Transfer learning for versatile plant disease recognition with limited data. Front. Plant Sci. 2022, 13, 1010981. [Google Scholar] [CrossRef]
- Chen, J.; Zhang, D.; Nanehkaran, Y.A. Identifying plant diseases using deep transfer learning and enhanced lightweight network. Multimed. Tools Appl. 2020, 79, 31497–31515. [Google Scholar] [CrossRef]
- Genaev, M.A.; Skolotneva, E.S.; Gultyaeva, E.I.; Orlova, E.A.; Bechtold, N.P.; Afonnikov, D.A. Image-based wheat fungi diseases identification by deep learning. Plants 2021, 10, 1500. [Google Scholar] [CrossRef]
- Madhavan, M.V.; Thanh, D.N.H.; Khamparia, A.; Pande, S.; Malik, R.; Gupta, D. Recognition and classification of pomegranate leaves diseases by image processing and machine learning techniques. Comput. Mater. Contin. 2021, 66, 2939–2955. [Google Scholar] [CrossRef]
- Gu, Y.H.; Yin, H.; Jin, D.; Zheng, R.; Yoo, S.J. Improved multi-plant disease recognition method using deep convolutional neural networks in six diseases of apples and pears. Agriculture 2022, 12, 300. [Google Scholar] [CrossRef]
- Zhou, H.; Deng, J.; Cai, D.; Lv, X.; Wu, B.M. Effects of image dataset configuration on the accuracy of rice disease recognition based on convolution neural network. Front. Plant Sci. 2022, 13, 910878. [Google Scholar] [CrossRef]
- Wang, Y.; Chen, Y.; Wang, D. Convolution network enlightened transformer for regional crop disease classification. Electronics 2022, 11, 3174. [Google Scholar] [CrossRef]
- Chen, J.; Chen, W.; Zeb, A.; Yang, S.; Zhang, D. Lightweight inception networks for the recognition and detection of rice plant diseases. IEEE Sens. J. 2022, 22, 14628–14638. [Google Scholar] [CrossRef]
- Yu, M.; Ma, X.; Guan, H. Recognition method of soybean leaf diseases using residual neural network based on transfer learning. Ecol. Inform. 2023, 76, 102096. [Google Scholar] [CrossRef]
- Fan, X.; Guan, Z. Vgnet: A lightweight intelligent learning method for corn diseases recognition. Agriculture 2023, 13, 1606. [Google Scholar] [CrossRef]
- Mahesh, T.Y.; Mathew, M.P. Detection of bacterial spot disease in bell pepper plant using YOLOv3. IETE J. Res. 2023, 70, 2583–2590. [Google Scholar] [CrossRef]
- Chang, S.; Yang, G.; Cheng, J.; Feng, Z.; Fan, Z.; Ma, X.; Zhao, C. Recognition of wheat rusts in a field environment based on improved DenseNet. Biosyst. Eng. 2024, 238, 10–21. [Google Scholar] [CrossRef]
- Sathya, R.; Senthilvadivu, S.; Ananthi, S.; Bharathi, V.C.; Revathy, G. Vision based plant leaf disease detection and recognition model using machine learning techniques. In Proceedings of the IEEE 2023 7th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 22–24 November 2023; pp. 458–464. [Google Scholar]
- Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53. [Google Scholar] [CrossRef]
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
- Khatri-Chhetri, A.; Pant, A.; Aggarwal, P.K.; Vasireddy, V.V.; Yadav, A. Stakeholders Prioritization of Climate-Smart Agriculture Interventions: Evaluation of a Framework. Agric. Syst. 2019, 174, 23–31. [Google Scholar] [CrossRef]
- Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef]
- Yu, F.; Zhang, Q.; Xiao, J.; Ma, Y.; Wang, M.; Luan, R.; Zhang, H. Progress in the application of CNN-based image classification and recognition in whole crop growth cycles. Remote Sens. 2023, 15, 2988. [Google Scholar] [CrossRef]
- Fritz, S.; See, L.; Bayas, J.C.L.; Waldner, F.; Jacques, D.; Becker-Reshef, I.; McCallum, I. A comparison of global agricultural monitoring systems and current gaps. Agric. Syst. 2019, 168, 258–272. [Google Scholar] [CrossRef]
- Kambale, G.; Bilgi, N. A survey paper on crop disease identification and classification using pattern recognition and digital image processing techniques. IOSR J. Comput. Eng. 2017, 4, 14–17. [Google Scholar]
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep learning techniques to classify agricultural crops through UAV imagery: A review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef]
- Nunes, D.; Werly, C.; Vianna, G.K.; da Cruz, S.M.S. Early discovery of tomato foliage diseases based on data provenance and pattern recognition. In Proceedings of the Provenance and Annotation of Data and Processes: 5th International Provenance and Annotation Workshop, IPAW 2014, Cologne, Germany, 9–13 June 2014; Revised Selected Papers. Springer International Publishing: Cham, Switzerland, 2015; pp. 229–231. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- Eidinger, E.; Enbar, R.; Hassner, T. Age and gender estimation of unfiltered faces. IEEE Trans. Inf. Forensics Secur. 2014, 9, 2170–2179. [Google Scholar] [CrossRef]
- Abdallah, H.B.; Henry, C.J.; Ramanna, S. Plant species recognition with optimized 3D polynomial neural networks and variably overlapping time–coherent sliding window. Multimed. Tools Appl. 2024, 83, 80667–80700. [Google Scholar] [CrossRef]
- Nasiri, A.; Taheri-Garavand, A.; Fanourakis, D.; Zhang, Y.D.; Nikoloudakis, N. Automated grapevine cultivar identification via leaf imaging and deep convolutional neural networks: A proof-of-concept study employing primary Iranian varieties. Plants 2021, 10, 1628. [Google Scholar] [CrossRef]
- Tadiparthi, P.K.; Bugatha, S.; Bheemavarapu, P.K. A review of foreground segmentation based on convolutional neural networks. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 9. [Google Scholar] [CrossRef]
- Liu, Y.; Su, J.; Shen, L.; Lu, N.; Fang, Y.; Liu, F.; Su, B. Development of a mobile application for identification of grapevine (Vitis vinifera L.) cultivars via deep learning. Int. J. Agric. Biol. Eng. 2021, 14, 172–179. [Google Scholar] [CrossRef]
- Yang, H.; Ni, J.; Gao, J.; Han, Z.; Luan, T. A novel method for peanut variety identification and classification by Improved VGG16. Sci. Rep. 2021, 11, 15756. [Google Scholar] [CrossRef]
- Wang, D.; Dong, Z.; Yang, G.; Li, W.; Wang, Y.; Wang, W.; Zhang, Y.; Lü, Z.; Qin, Y. APNet-YOLOv8s: A Real-Time Automatic Aquatic Plants Recognition Algorithm for Complex Environments. Ecol. Indic. 2024, 167, 112597. [Google Scholar] [CrossRef]
- Anami, B.S.; Malvade, N.N.; Palaiah, S. Deep learning approach for recognition and classification of yield affecting paddy crop stresses using field images. Artif. Intell. Agric. 2020, 4, 12–20. [Google Scholar] [CrossRef]
- Taheri-Garavand, A.; Nasiri, A.; Fanourakis, D.; Fatahi, S.; Omid, M.; Nikoloudakis, N. Automated in situ seed variety identification via deep learning: A case study in chickpea. Plants 2021, 10, 1406. [Google Scholar] [CrossRef]
- Tu, K.; Wen, S.; Cheng, Y.; Zhang, T.; Pan, T.; Wang, J.; Sun, Q. A non-destructive and highly efficient model for detecting the genuineness of maize variety ‘JINGKE 968’ using machine vision combined with deep learning. Comput. Electron. Agric. 2021, 182, 106002. [Google Scholar] [CrossRef]
- Liu, F.; Wang, F.; Wang, X.; Liao, G.; Zhang, Z.; Yang, Y.; Jiao, Y. Rapeseed variety recognition based on hyperspectral feature fusion. Agronomy 2022, 12, 2350. [Google Scholar] [CrossRef]
- Xu, P.; Tan, Q.; Zhang, Y.; Zha, X.; Yang, S.; Yang, R. Research on maize seed classification and recognition based on machine vision and deep learning. Agriculture 2022, 12, 232. [Google Scholar] [CrossRef]
- Ma, J.; Pang, L.; Guo, Y.; Wang, J.; Ma, J.; He, F.; Yan, L. Application of hyperspectral imaging to identify pine seed varieties. J. Appl. Spectrosc. 2023, 90, 916–923. [Google Scholar] [CrossRef]
- Pan, B.; Liu, C.; Su, B.; Ju, Y.; Fan, X.; Zhang, Y.; Jiang, J. Research on species identification of wild grape leaves based on deep learning. Sci. Hortic. 2024, 327, 112821. [Google Scholar] [CrossRef]
- Chlingaryan, A.; Sukkarieh, S.; Whelan, B. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review. Comput. Electron. Agric. 2018, 151, 61–69. [Google Scholar] [CrossRef]
- Schmidt, L.; Odening, M.; Schlanstein, J.; Ritter, M. Exploring the weather-yield nexus with artificial neural networks. Agric. Syst. 2022, 196, 103345. [Google Scholar] [CrossRef]
- Rashid, M.; Bari, B.S.; Yusup, Y.; Kamaruddin, M.A.; Khan, N. A Comprehensive Review of Crop Yield Prediction Using Machine Learning Approaches with Special Emphasis on Palm Oil Yield Prediction. IEEE Access 2021, 9, 63406–63439. [Google Scholar] [CrossRef]
- Ilyas, Q.M.; Ahmad, M.; Mehmood, A. Automated estimation of crop yield using artificial intelligence and remote sensing technologies. Bioengineering 2023, 10, 125. [Google Scholar] [CrossRef] [PubMed]
- Khaki, S.; Wang, L.; Archontoulis, S.V. A CNN-RNN Framework for Crop Yield Prediction. Front. Plant Sci. 2020, 10, 1750. [Google Scholar] [CrossRef]
- Kamilaris, A.; Prenafeta-Boldu, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
- Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
- Bazrafshan, O.; Ehteram, M.; Latif, S.D.; Huang, Y.F.; Teo, F.Y.; Ahmed, A.N.; El-Shafie, A. Predicting crop yields using a new robust Bayesian averaging model based on multiple hybrid ANFIS and MLP models. Ain Shams Eng. J. 2022, 13, 10172. [Google Scholar] [CrossRef]
- Bawa, A.; Samanta, S.; Himanshu, S.K.; Singh, J.; Kim, J.; Zhang, T.; Ale, S. A support vector machine and image processing based approach for counting open cotton bolls and estimating lint yield from UAV imagery. Smart Agric. Technol. 2023, 3, 100140. [Google Scholar] [CrossRef]
- de Oliveira, G.S.; Marcato Junior, J.; Polidoro, C.; Osco, L.P.; Siqueira, H.; Rodrigues, L.; Matsubara, E. Convolutional neural networks to estimate dry matter yield in a Guineagrass breeding program using UAV remote sensing. Sensors 2021, 21, 3971. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
- Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Reddy, J.; Niu, H.; Scott, J.L.L.; Bhandari, M.; Landivar, J.A.; Bednarz, C.W.; Duffield, N. Cotton Yield Prediction via UAV-Based Cotton Boll Image Segmentation Using YOLO Model and Segment Anything Model (SAM). Remote Sens. 2024, 16, 4346. [Google Scholar] [CrossRef]
- Rahnemoonfar, M.; Sheppard, C. Deep count: Fruit counting based on deep simulated learning. Sensors 2017, 17, 905. [Google Scholar] [CrossRef] [PubMed]
- Stein, M.; Bargoti, S.; Underwood, J. Image based mango fruit detection, localisation and yield estimation using multiple view geometry. Sensors 2016, 16, 1915. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Kumar, V. Counting apples and oranges with deep learning: A data-driven approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788. [Google Scholar] [CrossRef]
- Dorj, U.O.; Lee, M.; Yun, S.S. Yield estimation in citrus orchards via fruit detection and counting using image processing. Comput. Electron. Agric. 2017, 140, 103–112. [Google Scholar] [CrossRef]
- Fu, L.; Liu, Z.; Majeed, Y.; Cui, Y. Kiwifruit yield estimation using image processing by an Android mobile phone. IFAC-PapersOnLine 2018, 51, 185–190. [Google Scholar] [CrossRef]
- Kestur, R.; Meduri, A.; Narasipura, O. MangoNet: A deep semantic segmentation architecture for a method to detect and count mangoes in an open orchard. Eng. Appl. Artif. Intell. 2019, 77, 59–69. [Google Scholar] [CrossRef]
- Fu, Z.; Jiang, J.; Gao, Y.; Krienke, B.; Wang, M.; Zhong, K.; Liu, X. Wheat growth monitoring and yield estimation based on multi-rotor unmanned aerial vehicle. Remote Sens. 2020, 12, 508. [Google Scholar] [CrossRef]
- Shahhosseini, M.; Hu, G.; Khaki, S.; Archontoulis, S.V. Corn yield prediction with ensemble CNN-DNN. Front. Plant Sci. 2021, 12, 709008. [Google Scholar] [CrossRef]
- Abreu Júnior, C.A.M.D.; Martins, G.D.; Xavier, L.C.M.; Vieira, B.S.; Gallis, R.B.D.A.; Fraga Junior, E.F.; Lima, J.V.D.N. Estimating coffee plant yield based on multispectral images and machine learning models. Agronomy 2022, 12, 3195. [Google Scholar] [CrossRef]
- Lu, W.; Du, R.; Niu, P.; Xing, G.; Luo, H.; Deng, Y.; Shu, L. Soybean yield preharvest prediction based on bean pods and leaves image recognition using deep learning neural network combined with GRNN. Front. Plant Sci. 2022, 12, 791256. [Google Scholar] [CrossRef]
- Islam, M.D.; Di, L.; Qamer, F.M.; Shrestha, S.; Guo, L.; Lin, L.; Phalke, A.R. Rapid rice yield estimation using integrated remote sensing and meteorological data and machine learning. Remote Sens. 2023, 15, 2374. [Google Scholar] [CrossRef]
- Jiang, X.; Luo, S.; Ye, Q.; Li, X.; Jiao, W. Hyperspectral estimates of soil moisture content incorporating harmonic indicators and machine learning. Agriculture 2022, 12, 1188. [Google Scholar] [CrossRef]
- Ge, X.; Ding, J.; Jin, X.; Wang, J.; Chen, X.; Li, X.; Xie, B. Estimating agricultural soil moisture content through UAV-based hyperspectral images in the arid region. Remote Sens. 2021, 13, 1562. [Google Scholar] [CrossRef]
- Peng, Y.; Wang, L.; Zhao, L.; Liu, Z.; Lin, C.; Hu, Y.; Liu, L. Estimation of soil nutrient content using hyperspectral data. Agriculture 2021, 11, 1129. [Google Scholar] [CrossRef]
- Yu, X.; Luo, Y.; Bai, B.; Chen, X.; Lu, C.; Peng, X. Prediction Model of Nitrogen, Phosphorus, and Potassium Fertilizer Application Rate for Greenhouse Tomatoes under Different Soil Fertility Conditions. Agronomy 2024, 14, 1165. [Google Scholar] [CrossRef]
- Rathore, M.; Singh, P.N. Application of Deep Learning to Improve the Accuracy of Soil Nutrient Classification. In Proceedings of the 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon), Mysuru, India, 16–17 October 2022; pp. 1–5. [Google Scholar]
- Türköz, E.; Olcay, E.; Oksanen, T. Computer vision-based guidance assistance concept for plowing using RGB-D camera. In Proceedings of the 2021 IEEE International Conference on Imaging Systems and Techniques (IST), Kaohsiung, Taiwan, 24–26 August 2021; pp. 1–6. [Google Scholar]
- Zhang, Z.; Liu, H.; Meng, Z.; Chen, J. Deep learning-based automatic recognition network of agricultural machinery images. Comput. Electron. Agric. 2019, 166, 104978. [Google Scholar] [CrossRef]
- Mittal, S.; Dutta, M.K.; Issac, A. Non-destructive image processing based system for assessment of rice quality and defects for classification according to inferred commercial value. Measurement 2019, 148, 106969. [Google Scholar] [CrossRef]
- Sharma, A.; Jain, A.; Gupta, P.; Chowdary, V. Machine learning applications for precision agriculture: A comprehensive review. IEEE Access 2020, 9, 4843–4873. [Google Scholar] [CrossRef]
- Ismail, N.; Malik, O.A. Real-time visual inspection system for grading fruits using computer vision and deep learning techniques. Inf. Process. Agric. 2022, 9, 24–37. [Google Scholar] [CrossRef]
- Jabir, B.; Falih, N. Deep learning-based decision support system for weeds detection in wheat fields. Int. J. Electr. Comput. Eng. 2022, 12, 816. [Google Scholar] [CrossRef]
- Xu, W.; Sun, L.; Zhen, C.; Liu, B.; Yang, Z.; Yang, W. Deep learning-based image recognition of agricultural pests. Appl. Sci. 2022, 12, 12896. [Google Scholar] [CrossRef]
- Zheng, Y.Y.; Kong, J.L.; Jin, X.B.; Wang, X.Y.; Su, T.L.; Zuo, M. CropDeep: The crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors 2019, 19, 1058. [Google Scholar] [CrossRef] [PubMed]
- Lin, F.; Guillot, K.; Crawford, S.; Zhang, Y.; Yuan, X.; Tzeng, N.F. An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions. arXiv 2024, arXiv:2406.06081. [Google Scholar]
- Li, J.; Pu, F.; Chen, H.; Xu, X.; Yu, Y. Crop segmentation of unmanned aerial vehicle imagery using edge enhancement network. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
- Chen, Y.; Mancini, M.; Zhu, X.; Akata, Z. Semi-supervised and unsupervised deep visual learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 46, 1327–1347. [Google Scholar] [CrossRef] [PubMed]
- Li, S.; Kou, P.; Ma, M.; Yang, H.; Huang, S.; Yang, Z. Application of semi-supervised learning in image classification: Research on fusion of labeled and unlabeled data. IEEE Access 2024, 12, 27331–27343. [Google Scholar] [CrossRef]
- Güldenring, R.; Nalpantidis, L. Self-supervised contrastive learning on agricultural images. Comput. Electron. Agric. 2021, 191, 106510. [Google Scholar] [CrossRef]
- Gulzar, Y. Enhancing soybean classification with modified inception model: A transfer learning approach. Emir. J. Food Agric. 2024, 36, 1–9. [Google Scholar] [CrossRef]
- Silva, G.; Schulze, B.; Ferro, M. Performance and Energy Efficiency Analysis of Machine Learning Algorithms Towards Green AI: A Case Study of Decision Tree Algorithms; National Lab. for Scientific Computing: Petrópolis, Brazil, 2021. [Google Scholar]
- Entezari, A.; Aslani, A.; Zahedi, R.; Noorollahi, Y. Artificial intelligence and machine learning in energy systems: A bibliographic perspective. Energy Strategy Rev. 2023, 45, 101017. [Google Scholar] [CrossRef]
- Yang, W.; Yuan, Y.; Zhang, D.; Zheng, L.; Nie, F. An effective image classification method for plant diseases with improved channel attention mechanism aECAnet based on deep learning. Symmetry 2024, 16, 451. [Google Scholar] [CrossRef]
- Gui, J.; Chen, T.; Zhang, J.; Cao, Q.; Sun, Z.; Luo, H.; Tao, D. A survey on self-supervised learning: Algorithms, applications, and future trends. arXiv 2023, arXiv:2301.05712. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
- Drees, L.; Demie, D.T.; Paul, M.R.; Leonhardt, J.; Seidel, S.J.; Döring, T.F.; Roscher, R. Data-driven crop growth simulation on time-varying generated images using multi-conditional generative adversarial networks. Plant Methods 2024, 20, 93. [Google Scholar] [CrossRef] [PubMed]
- Agelli, M.; Corona, N.; Maggio, F.; Moi, P.V. Unmanned Ground Vehicles for Continuous Crop Monitoring in Agriculture: Assessing the Readiness of Current ICT Technology. Machines 2024, 12, 750. [Google Scholar] [CrossRef]
- Wang, Q.; Jin, P.; Wu, Y.; Zhou, L.; Shen, T. Infrared Image Enhancement: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 103. [Google Scholar] [CrossRef]
- Ahmed, M.T.; Monjur, O.; Khaliduzzaman, A.; Kamruzzaman, M. A Comprehensive Review of Deep Learning-Based Hyperspectral Image Reconstruction for Agri-Food Quality Appraisal. Artif. Intell. Rev. 2025, 58, 96. [Google Scholar] [CrossRef]
- Karim, M.R.; Reza, M.N.; Jin, H.; Haque, M.A.; Lee, K.-H.; Sung, J.; Chung, S.-O. Application of LiDAR Sensors for Crop and Working Environment Recognition in Agriculture: A Review. Remote Sens. 2024, 16, 4623. [Google Scholar] [CrossRef]
- Botero-Valencia, J.; García-Pineda, V.; Valencia-Arias, A.; Valencia, J.; Reyes-Vera, E.; Mejia-Herrera, M.; Hernández-García, R. Machine Learning in Sustainable Agriculture: Systematic Review and Research Perspectives. Agriculture 2025, 15, 377. [Google Scholar] [CrossRef]
- Pintus, M.; Colucci, F.; Maggio, F. Emerging Developments in Real-Time Edge AIoT for Agricultural Image Classification. IoT 2025, 6, 13. [Google Scholar] [CrossRef]
- Ur Rehman, W.; Koondhar, M.A.; Afridi, S.K.; Albasha, L.; Smaili, I.H.; Touti, E.; Ahmed, M.M.R. The Role of 5G Network in Revolutionizing Agriculture for Sustainable Development: A Comprehensive Review. Energy Nexus 2025, 17, 100368. [Google Scholar] [CrossRef]
- Szekely, S.; Iheme, L.O.; O’Driscoll, F.; Kypuros, D.; Pang, V.; Shmigelsky, G. Scalable Architecture and Intelligent Edge with 5G-Advanced, MEC, IoT, UAVs and AI for a Sustainable Agriculture and Food Operations. J. Eng. Archit. 2024. [Google Scholar]
Research Object | Year | Image Processing Task | Network Framework | Accuracy | Laboratory or Field Use | Notes |
---|---|---|---|---|---|---|
Rice | 2020 | Rice variety identification | VGG16 | 92.89% | Laboratory | The number of collected samples was limited, and the model failed to cover all rice cultivars and growing environments comprehensively [100]. |
Grapes | 2021 | Grape variety identification | VGG16 | 99.00% | Laboratory | Image quality was affected by factors such as ambient lighting and cluttered backgrounds when acquiring visible spectrum images [95]. |
Grapes | 2021 | Grape variety identification | GoogLeNet | 99.91% | Laboratory | Limited to 21 common varieties in vineyards, and the dataset was small [96]. |
Peanuts | 2021 | Peanut variety identification | VGG16 | 96.70% | Field | Only 12 peanut varieties were included, which was insufficient to validate the model’s applicability and robustness [98]. |
Chickpea | 2021 | Chickpea variety identification | VGG16 | 94.00% | Laboratory | A single dataset source led to poor model performance in other regions and on other species [101]. |
Corn | 2021 | Corn variety identification | VGG16 | 98.00% | Laboratory | Samples were drawn only from different years and batches, and lack of geographical diversity might lead to poor model performance in practical applications across various regions [102]. |
Rapeseed | 2022 | Rapeseed variety identification | N/A | 96.35% | Field | Hyperspectral image processing and feature extraction are highly complex, making the demands on computational resources and real-time performance in practical applications uncertain [103]. |
Corn seed | 2022 | Corn seed variety identification | P-ResNet | 99.70% | Laboratory | Only classification accuracy was analyzed, and no other evaluation indicators (such as recall and F1 score) were discussed, so the model performance could not be comprehensively evaluated [104]. |
Pine nut | 2023 | Pine nut variety identification | LeNet-5 | 99.00% | N/A | Black and white correction and area segmentation were performed, but potential errors introduced by these preprocessing steps were not addressed [105]. |
Wild grape | 2024 | Wild grape variety identification | GoogLeNet, ResNet-50, ResNet-101, VGG-16 | 98.64% | Laboratory | The lack of images under different growth stages, lighting conditions, and diverse environmental backgrounds affected the model’s performance in practical applications [106]. |
Aquatic plants | 2024 | Aquatic plant recognition | APNet-YOLOv8s | 99.10% | Laboratory | Real-time recognition designed for aquatic plant detection in complex environments and under challenging conditions [99]. |
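Several entries in the table above note that only top-1 accuracy was reported (e.g., the P-ResNet corn seed study [104]), leaving recall and F1 score undiscussed. When a study publishes its confusion matrix, those missing metrics can be recovered directly. A minimal sketch in plain Python; the confusion-matrix counts below are illustrative, not data from any cited study:

```python
# Per-class precision, recall, and F1 from a confusion matrix.
# Rows = true class, columns = predicted class; counts are hypothetical.
confusion = [
    [48, 1, 1],   # variety A
    [2, 46, 2],   # variety B
    [0, 3, 47],   # variety C
]

def per_class_metrics(cm):
    n = len(cm)
    metrics = []
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # class-k samples missed
        fp = sum(cm[i][k] for i in range(n)) - tp  # others predicted as class k
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append((precision, recall, f1))
    return metrics

m = per_class_metrics(confusion)
macro_f1 = sum(f1 for _, _, f1 in m) / len(m)
accuracy = sum(confusion[k][k] for k in range(len(confusion))) \
    / sum(map(sum, confusion))
```

Reporting macro-averaged recall and F1 alongside accuracy is particularly important for seed and variety datasets, where class imbalance can make accuracy alone misleading.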
Research Object | Year | Image Processing Task | Network Framework | Accuracy | Laboratory or Field Use | Notes |
---|---|---|---|---|---|---|
Mango | 2019 | Mango production forecasts | MangoNet | 84.00% | Field | In open orchards, mangoes are partially occluded by leaves or other fruit, which reduces detection accuracy [126]. |
Wheat | 2020 | Wheat production forecasts | N/A | N/A | Laboratory | No validation and further experiments on wheat across different ecological sites were conducted, and the results lacked generalizability [127]. |
Corn | 2022 | Corn production forecasts | CNN, DNN | 77.00% | Laboratory | No genotype information was included, and the absence of these data limits the model’s prediction accuracy and ability to generalize across different maize varieties [128]. |
Coffee beans | 2022 | Coffee bean production forecast | N/A | N/A | Field | The biennial bearing cycle of coffee crops was not sufficiently accounted for, which affected the estimates [129]. |
Soybeans | 2022 | Soybean production forecasts | GRNN, FRCNN, FPN | 97.43% | Laboratory | The data samples were primarily from soybeans grown in laboratory conditions, which could not fully reflect the complex growth conditions of soybeans in actual farmland [130]. |
Rice | 2023 | Rice yield projections | N/A | 92.00% | Field | MODIS with low spatial resolution but high temporal resolution was used in this study, which could affect the prediction accuracy [131]. |
Cotton | 2024 | Cotton yield prediction | YOLO + SAM | R² = 0.913 | Laboratory | It relies solely on image segmentation and pixel counting for yield prediction without considering the impact of cotton boll distribution, growth stages, and environmental factors on yield [120]. |
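The cotton entry above predicts yield from segmented boll pixel counts and reports R² = 0.913 [120]. The underlying idea, a least-squares regression of yield on pixel count and its R² score, can be sketched as follows; the pixel counts and yields here are illustrative values, not data from the cited study:

```python
# Ordinary least-squares fit of lint yield on segmented boll pixel count,
# plus the coefficient of determination R^2. All values are hypothetical.
pixels = [1200, 1850, 2400, 3100, 3900]   # segmented boll pixels per plot
yields = [0.95, 1.40, 1.85, 2.30, 2.95]   # lint yield, kg per plot

n = len(pixels)
mx = sum(pixels) / n
my = sum(yields) / n

# Slope and intercept of the least-squares line y = intercept + slope * x.
slope = (sum((x - mx) * (y - my) for x, y in zip(pixels, yields))
         / sum((x - mx) ** 2 for x in pixels))
intercept = my - slope * mx

# R^2 = 1 - SS_res / SS_tot measures the variance explained by the fit.
pred = [intercept + slope * x for x in pixels]
ss_res = sum((y - p) ** 2 for y, p in zip(yields, pred))
ss_tot = sum((y - my) ** 2 for y in yields)
r2 = 1 - ss_res / ss_tot
```

As the table note observes, such a pixel-count regression ignores boll distribution, growth stage, and environmental covariates, so a high R² on one dataset does not guarantee transfer to other fields or seasons.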
Fields | Year | Image Processing Task | Laboratory or Field Use | Notes |
---|---|---|---|---|
Rice quality assessment | 2019 | Defect detection and classification | Field | Rice grains are small; when multiple grains overlap, the accuracy of quality assessment degrades [139]. |
Crop irrigation management | 2020 | Crop moisture monitoring | Laboratory | Crops’ growth states are affected by seasonal, climatic, and soil changes, making it difficult for the image recognition system to adapt quickly, leading to inaccurate water supply [140]. |
Soil nutrient management | 2021 | Soil nutrient distribution detection | Field | Soil nutrient distribution is often very uneven in space, with significant regional differences [133]. |
Agricultural machinery operation quality assessment | 2021 | Operational Status Monitoring | Field | Vibrations, dust, and mud can impair image quality and affect agricultural machinery performance evaluation [137]. |
Soil testing | 2022 | Soil type classification | Laboratory | Soil’s spectral reflectance varies with moisture, particle size, and organic content, causing misidentification due to similarities under different conditions [132]. |
Fruit grading | 2022 | Defect classification and maturity judgment | Field | Certain fruits are coated with preservatives or wax after harvest, which alters their surface reflectance spectrum and interferes with image recognition accuracy, especially in color or defect detection [141]. |
Weed detection | 2022 | Weed area detection and location | Laboratory | Some weeds closely resemble crops, leading to misidentification by image recognition systems [142]. |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Jiang, C.; Miao, K.; Hu, Z.; Gu, F.; Yi, K. Image Recognition Technology in Smart Agriculture: A Review of Current Applications Challenges and Future Prospects. Processes 2025, 13, 1402. https://doi.org/10.3390/pr13051402