Search Results (203)

Search Parameters:
Keywords = pest and disease images

27 pages, 1382 KiB  
Review
Application of Non-Destructive Technology in Plant Disease Detection: Review
by Yanping Wang, Jun Sun, Zhaoqi Wu, Yilin Jia and Chunxia Dai
Agriculture 2025, 15(15), 1670; https://doi.org/10.3390/agriculture15151670 - 1 Aug 2025
Viewed by 367
Abstract
In recent years, research on plant disease detection has combined artificial intelligence, hyperspectral imaging, unmanned aerial vehicle remote sensing, and other technologies, promoting the shift of pest and disease control in smart agriculture towards digitalization and artificial intelligence. This review systematically surveys non-destructive detection techniques for plant disease identification and detection, covering two main families of methods: spectral technology and imaging technology. It details the principles and application examples of each technology and summarizes their respective advantages and disadvantages. The review shows that non-destructive detection techniques can detect plant diseases and pests quickly, accurately, and without damaging the plant. Going forward, integrating multiple non-destructive detection technologies, developing portable detection devices, and combining them with more efficient data processing methods will be the core development directions of this field. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

24 pages, 9664 KiB  
Article
Frequency-Domain Collaborative Lightweight Super-Resolution for Fine Texture Enhancement in Rice Imagery
by Zexiao Zhang, Jie Zhang, Jinyang Du, Xiangdong Chen, Wenjing Zhang and Changmeng Peng
Agronomy 2025, 15(7), 1729; https://doi.org/10.3390/agronomy15071729 - 18 Jul 2025
Viewed by 328
Abstract
In rice detection tasks, accurate identification of leaf streaks, pest and disease distribution, and spikelet hierarchies relies on high-quality images that can distinguish texture and hierarchy. However, existing images often suffer from texture blurring and contour shifting due to equipment and environmental limitations, which degrades detection performance. Since pest and disease symptoms are largely global while fine details are mostly localized, we propose a rice image reconstruction method based on an adaptive two-branch heterogeneous structure. The method consists of a low-frequency branch (LFB) that recovers global features, using orientation-aware extended receptive fields to capture streaky global patterns such as pests and diseases, and a high-frequency branch (HFB) that enhances detail edges through an adaptive enhancement mechanism to boost the clarity of local detail regions. By introducing a dynamic weight fusion mechanism (CSDW) and a lightweight gating network (LFFN), the method resolves the unbalanced fusion of frequency information that affects traditional approaches on rice images. Experiments on the 4× downsampled rice test set demonstrate that the proposed method achieves a 62% reduction in parameters compared to EDSR, 41% lower computational cost (30 G) than MambaIR-light, and an average PSNR improvement of 0.68% over the other methods in the study, while balancing memory usage (227 M) and inference speed. In downstream task validation, rice panicle maturity detection achieves a 61.5% increase in mAP50 (0.480 → 0.775) compared to interpolation methods, and leaf pest detection shows a 2.7% improvement in average mAP50 (0.949 → 0.975). This research provides an effective solution for lightweight rice image enhancement, with its dual-branch collaborative mechanism and dynamic fusion strategy establishing a new paradigm in agricultural rice image processing. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
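The image-quality gains above are reported as PSNR. For reference, a minimal sketch of the standard peak signal-to-noise ratio computation (assuming 8-bit intensities and flat pixel lists; this is generic, not the authors' code):

```python
import math

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities (8-bit by default)."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# A reconstruction that is off by 1 at every pixel (MSE = 1):
ref = [10, 20, 30, 40]
out = [11, 21, 31, 41]
print(round(psnr(ref, out), 2))  # 10*log10(255^2) ≈ 48.13
```

Higher is better; a fixed per-pixel error of 1 on 8-bit data gives the ceiling of roughly 48 dB used as a common reference point.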

19 pages, 1957 KiB  
Article
Resource-Efficient Cotton Network: A Lightweight Deep Learning Framework for Cotton Disease and Pest Classification
by Zhengle Wang, Heng-Wei Zhang, Ying-Qiang Dai, Kangning Cui, Haihua Wang, Peng W. Chee and Rui-Feng Wang
Plants 2025, 14(13), 2082; https://doi.org/10.3390/plants14132082 - 7 Jul 2025
Cited by 2 | Viewed by 429
Abstract
Cotton is the most widely cultivated natural fiber crop worldwide, yet it is highly susceptible to various diseases and pests that significantly compromise both yield and quality. To enable rapid and accurate diagnosis of cotton diseases and pests—thus supporting the development of effective control strategies and facilitating genetic breeding research—we propose a lightweight model, the Resource-efficient Cotton Network (RF-Cott-Net), alongside an open-source image dataset, CCDPHD-11, encompassing 11 disease categories. Built upon the MobileViTv2 backbone, RF-Cott-Net integrates an early exit mechanism and quantization-aware training (QAT) to enhance deployment efficiency without sacrificing accuracy. Experimental results on CCDPHD-11 demonstrate that RF-Cott-Net achieves an accuracy of 98.4%, an F1-score of 98.4%, a precision of 98.5%, and a recall of 98.3%. With only 4.9 M parameters, 310 M FLOPs, an inference time of 3.8 ms, and a storage footprint of just 4.8 MB, RF-Cott-Net delivers outstanding accuracy and real-time performance, making it highly suitable for deployment on agricultural edge devices and providing robust support for in-field automated detection of cotton diseases and pests. Full article
(This article belongs to the Special Issue Precision Agriculture in Crop Production)
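The reported 4.8 MB footprint for 4.9 M parameters is what one would expect if quantization-aware training stores weights at one byte each. A quick sanity check (my arithmetic and assumptions, not figures from the paper):

```python
def model_size_mb(n_params, bytes_per_param):
    """Approximate weight-storage size in megabytes (1 MB = 10^6 bytes)."""
    return n_params * bytes_per_param / 1e6

fp32_mb = model_size_mb(4.9e6, 4)  # float32 weights: ~19.6 MB
int8_mb = model_size_mb(4.9e6, 1)  # int8 weights after QAT: ~4.9 MB
print(fp32_mb, int8_mb)
```

The ~4.9 MB int8 estimate sits right next to the reported 4.8 MB, consistent with 8-bit quantization; the exact on-disk figure depends on serialization overhead.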

24 pages, 1991 KiB  
Article
Robust Deep Neural Network for Classification of Diseases from Paddy Fields
by Karthick Mookkandi and Malaya Kumar Nath
AgriEngineering 2025, 7(7), 205; https://doi.org/10.3390/agriengineering7070205 - 1 Jul 2025
Viewed by 387
Abstract
Agriculture in India supports millions of livelihoods and is a major force behind economic expansion. Challenges in modern agriculture stem from environmental factors (such as soil quality and climate variability) and biotic factors (such as pests and diseases). These challenges can be addressed by advancements in technology (such as sensors, the internet of things, and communication) and data-driven approaches (such as machine learning (ML) and deep learning (DL)), which can help with crop yield and sustainability in agriculture. This study introduces an innovative deep neural network (DNN) approach for identifying leaf diseases in paddy crops at an early stage. The proposed neural network is a hybrid DL model comprising feature extraction, channel attention, inception with residual, and classification blocks. Channel attention and inception with residual help extract comprehensive information about the crops and potential diseases. The classification module uses softmax to obtain the score for different classes. The importance of each block is analyzed via an ablation study. To understand the feature extraction ability of the modules, features extracted at different stages are fed to an SVM classifier to obtain the classification accuracy. The technique was evaluated on eight classes with 7857 paddy crop images obtained from local paddy fields and freely available open sources. The classification performance of the proposed technique is evaluated in terms of accuracy, sensitivity, specificity, F1 score, MCC, area under the curve (AUC), and receiver operating characteristic (ROC). The model was fine-tuned by setting the hyperparameters (such as batch size, learning rate, optimizer, epochs, and train/test ratio). Training, validation, and testing accuracies of 99.91%, 99.87%, and 99.49%, respectively, were obtained for 20 epochs with a learning rate of 0.001 and the SGDM optimizer. The robustness of the proposed network was studied via an ablation study and with noisy data, and its classification performance was also evaluated on other agricultural data (such as mango, maize, and wheat diseases). These research outcomes can empower farmers with smarter agricultural practices and contribute to economic growth. Full article
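Of the metrics listed in this abstract, MCC (Matthews correlation coefficient) is the least familiar; a minimal implementation from a binary confusion matrix (illustrative, not the authors' code; the counts below are made-up toy values):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a binary confusion matrix.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate matrix: no information
    return (tp * tn - fp * fn) / denom

# 90 true positives, 85 true negatives, 5 false positives, 10 false negatives:
print(round(mcc(90, 85, 5, 10), 4))  # ≈ 0.8433
```

Unlike plain accuracy, MCC stays informative when classes are imbalanced, which is why it is often reported alongside sensitivity and specificity.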

17 pages, 8706 KiB  
Article
Rice Canopy Disease and Pest Identification Based on Improved YOLOv5 and UAV Images
by Gaoyuan Zhao, Yubin Lan, Yali Zhang and Jizhong Deng
Sensors 2025, 25(13), 4072; https://doi.org/10.3390/s25134072 - 30 Jun 2025
Viewed by 370
Abstract
Traditional monitoring methods rely on manual field surveys, which are subjective, inefficient, and unable to meet the demand for large-scale, rapid monitoring. By using unmanned aerial vehicles (UAVs) to capture high-resolution images of rice canopy diseases and pests, combined with deep learning (DL) techniques, accurate and timely identification of diseases and pests can be achieved. We propose a method for identifying rice canopy diseases and pests using an improved YOLOv5 model (YOLOv5_DWMix). By incorporating deep separable convolutions, the MixConv module, attention mechanisms, and optimized loss functions into the YOLOv5 backbone, the model’s speed, feature extraction capability, and robustness are significantly enhanced. Additionally, to tackle the challenges posed by complex field environments and small datasets, image augmentation is employed to train the YOLOv5_DWMix model for the recognition of four common rice canopy diseases and pests. Results show that the improved YOLOv5 model achieves 95.6% average precision in detecting these diseases and pests, a 4.8% improvement over the original YOLOv5 model. The YOLOv5_DWMix model is effective and advanced in identifying rice diseases and pests, offering a solid foundation for large-scale, regional monitoring. Full article
(This article belongs to the Section Smart Agriculture)
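Average-precision scores like the 95.6% above count a predicted box as correct when it overlaps the ground truth by a sufficient intersection-over-union (IoU). A sketch of the underlying IoU computation (generic, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes do not intersect.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping by half their width:
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # intersection 2, union 6 → 0.333...
```

mAP50, as used in this listing's YOLO papers, averages precision over recall at an IoU threshold of 0.5.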

27 pages, 3134 KiB  
Article
A Hybrid Deep Learning Approach for Cotton Plant Disease Detection Using BERT-ResNet-PSO
by Chetanpal Singh, Santoso Wibowo and Srimannarayana Grandhi
Appl. Sci. 2025, 15(13), 7075; https://doi.org/10.3390/app15137075 - 23 Jun 2025
Viewed by 478
Abstract
Cotton is one of the most valuable non-food agricultural products in the world. However, cotton production is often hampered by disease. In most cases, these plant diseases result from insect or pest infestations, which can have a significant impact on production if not addressed promptly. It is, therefore, crucial to accurately identify leaf diseases in cotton plants to prevent any negative effects on yield. This paper presents a hybrid deep learning approach based on Bidirectional Encoder Representations from Transformers with a Residual network and particle swarm optimization (BERT-ResNet-PSO) for detecting cotton plant diseases. The approach starts with image pre-processing; the pre-processed images are linearly embedded as patches and passed to a BERT-like encoder, which segregates the disease regions. The encoded features are then passed to a ResNet-based architecture for feature extraction and further optimized by PSO to increase the classification accuracy. The approach is tested on a cotton dataset drawn from the Plant Village dataset, where the experimental results show the effectiveness of this hybrid deep learning approach, achieving an accuracy of 98.5%, a precision of 98.2%, and a recall of 98.7%, outperforming existing deep learning approaches such as ResNet50, VGG19, InceptionV3, and ResNet152V2. This study shows that the hybrid deep learning approach can deal with the cotton plant disease detection problem effectively, and suggests that the proposed approach can help avoid large-scale crop losses and support effective farm management practices. Full article
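The PSO stage in BERT-ResNet-PSO follows the standard particle swarm update: each particle is pulled toward its own best position and the swarm's global best. A minimal, generic PSO on a toy objective (the coefficients, bounds, and sphere objective are illustrative assumptions, not the paper's settings):

```python
import random

def pso(objective, dim, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimizer: particles track a personal best
    and share a global best that attracts the whole swarm."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social weights (assumed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize the sphere function; the optimum is 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)  # close to 0
```

In the paper's pipeline the objective would instead score classifier configurations; here the sphere function simply demonstrates convergence.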

24 pages, 9889 KiB  
Article
An Intelligent Management System and Advanced Analytics for Boosting Date Production
by Shaymaa E. Sorour, Munira Alsayyari, Norah Alqahtani, Kaznah Aldosery, Anfal Altaweel and Shahad Alzhrani
Sustainability 2025, 17(12), 5636; https://doi.org/10.3390/su17125636 - 19 Jun 2025
Viewed by 689
Abstract
The date palm industry is a vital pillar of agricultural economies in arid and semi-arid regions; however, it remains vulnerable to challenges such as pest infestations, post-harvest diseases, and limited access to real-time monitoring tools. This study applied the baseline YOLOv11 model and its optimized variant, YOLOv11-Opt, to automate the detection, classification, and monitoring of date fruit varieties and disease-related defects. The models were trained on a curated dataset of real-world images collected in Saudi Arabia and enhanced through advanced data augmentation techniques, dynamic label assignment (SimOTA++), and extensive hyperparameter optimization. The experimental results demonstrated that YOLOv11-Opt significantly outperformed the baseline YOLOv11, achieving an overall classification accuracy of 99.04% for date types and 99.69% for disease detection, with ROC-AUC scores exceeding 99% in most cases. The optimized model effectively distinguished visually complex diseases, such as scale insect and dry date skin, across multiple date types, enabling high-resolution, real-time inference. Furthermore, a visual analytics dashboard was developed to support strategic decision-making by providing insights into production trends, disease prevalence, and varietal distribution. These findings underscore the value of integrating optimized deep learning architectures and visual analytics for intelligent, scalable, and sustainable precision agriculture. Full article
(This article belongs to the Special Issue Sustainable Food Processing and Food Packaging Technologies)

21 pages, 3278 KiB  
Article
Enhancing Bee Mite Detection with YOLO: The Role of Data Augmentation and Stratified Sampling
by Hong-Gu Lee, Jeong-Yong Shin, Su-Bae Kim, Min-Jee Kim, Moon S. Kim, Hoyoung Lee and Changyeun Mo
Agriculture 2025, 15(11), 1221; https://doi.org/10.3390/agriculture15111221 - 3 Jun 2025
Viewed by 646
Abstract
Beekeeping is facing a serious crisis due to climate change and diseases such as bee mites (Varroa destructor), which have led to declining populations, collapsing colonies, and reduced beekeeping productivity. Bee mites are small, reddish-brown, and difficult to distinguish from bees. Rapid bee mite detection techniques are essential for overcoming this crisis. This study developed a technology for recognizing bee mites and beekeeping objects in beecombs using the You Only Look Once (YOLO) object detection algorithm. The dataset was constructed by acquiring RGB images of beecombs containing mites. Regions of interest with a size of 640 × 640 pixels centered on the bee mites were extracted and labeled across seven classes, including bee mites, bees, mite-infected bees, larvae, abnormal larvae, and cells. Image processing, data augmentation, and stratified data distribution methods were applied to enhance the object recognition performance. Four datasets were constructed using different augmentation and distribution strategies, including random and stratified sampling. The datasets were partitioned into training, testing, and validation sets in a 7:2:1 ratio. A YOLO-based model for the detection of bee mites and seven beekeeping-related objects was developed for each dataset. The F1 scores for the detection of bee mites and the seven beekeeping-related objects using the YOLO model trained on the original datasets were 94.1% and 91.9%, respectively. The model that applied data augmentation and stratified sampling achieved the highest performance, with F1 scores of 97.4% and 96.4% for the detection of bee mites and the seven beekeeping-related objects, respectively. The results underscore the efficacy of using the YOLO architecture on RGB images of beecombs for simultaneously detecting bee mites and various beekeeping-related objects. This advanced mite detection method is expected to contribute significantly to the early identification of pests and disease outbreaks, offering a valuable tool for enhancing beekeeping practices. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
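The 7:2:1 stratified partition described above can be sketched as follows; this is a generic illustration (the class names, counts, and seed are made up, not the study's data):

```python
import random
from collections import defaultdict

def stratified_split(labels, ratios=(0.7, 0.2, 0.1), seed=42):
    """Split sample indices into train/test/validation sets so that each
    class appears in roughly the given proportions (7:2:1 by default)."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    rng = random.Random(seed)
    splits = ([], [], [])
    for label, idxs in by_class.items():
        rng.shuffle(idxs)                     # shuffle within each class
        n = len(idxs)
        a = int(n * ratios[0])                # end of the training slice
        b = a + int(n * ratios[1])            # end of the testing slice
        splits[0].extend(idxs[:a])
        splits[1].extend(idxs[a:b])
        splits[2].extend(idxs[b:])
    return splits

# A toy imbalanced dataset: 100 "mite" samples and 50 "bee" samples.
labels = ["mite"] * 100 + ["bee"] * 50
train, test, val = stratified_split(labels)
print(len(train), len(test), len(val))  # 105 30 15
```

Unlike plain random sampling, each split keeps the 2:1 class ratio of the full dataset, which matters for rare classes like infected bees.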

22 pages, 640 KiB  
Review
A Review of Optical-Based Three-Dimensional Reconstruction and Multi-Source Fusion for Plant Phenotyping
by Songhang Li, Zepu Cui, Jiahang Yang and Bin Wang
Sensors 2025, 25(11), 3401; https://doi.org/10.3390/s25113401 - 28 May 2025
Viewed by 901
Abstract
In the context of the booming development of precision agriculture and plant phenotyping, plant 3D reconstruction technology has become a research hotspot, with widespread applications in plant growth monitoring, pest and disease detection, and smart agricultural equipment. Given the complex geometric and textural characteristics of plants, traditional 2D image analysis methods struggle to meet the modeling requirements, highlighting the growing importance of 3D reconstruction technology. This paper reviews active vision techniques (such as structured light, time-of-flight, and laser scanning methods), passive vision techniques (such as stereo vision and structure from motion), and deep learning-based 3D reconstruction methods (such as NeRF, CNN, and 3DGS). These technologies enhance crop analysis accuracy from multiple perspectives, provide strong support for agricultural production, and significantly promote the development of the field of plant research. Full article
(This article belongs to the Section Smart Agriculture)

18 pages, 2795 KiB  
Article
Study on the Detection of Chlorophyll Content in Tomato Leaves Based on RGB Images
by Xuehui Zhang, Huijiao Yu, Jun Yan and Xianyong Meng
Horticulturae 2025, 11(6), 593; https://doi.org/10.3390/horticulturae11060593 - 26 May 2025
Viewed by 917
Abstract
Chlorophyll is a key substance in plant photosynthesis, and its content detection methods are of great significance in the field of agricultural AI. These methods provide important technical support for crop growth monitoring, pest and disease identification, and yield prediction, playing a crucial role in improving agricultural productivity and the level of intelligence in farming. This paper aims to explore an efficient and low-cost non-destructive method for detecting chlorophyll content (SPAD) and investigate the feasibility of smartphone image analysis technology in predicting chlorophyll content in greenhouse tomatoes. This study uses greenhouse tomato leaves as the experimental object and analyzes the correlation between chlorophyll content and image color features. First, leaf images are captured using a smartphone, and 42 color features based on the red, green, and blue (R, G, B) color channels are constructed to assess their correlation with chlorophyll content. The experiment selects eight color features most sensitive to chlorophyll content, including B, (2G − R − B)/(2G + R + B), GLA, RGBVI, g, g − b, ExG, and CIVE. Based on this, this study constructs and evaluates the predictive performance of multiple models, including multiple linear regression (MLR), ridge regression (RR), support vector regression (SVR), random forest (RF), and the Stacking ensemble learning model. The experimental results indicate that the Stacking ensemble learning model performs the best in terms of prediction accuracy and stability (R2 = 0.8359, RMSE = 0.8748). The study confirms the feasibility of using smartphone image analysis for estimating chlorophyll content, providing a convenient, cost-effective, and efficient technological approach for crop health monitoring and precision agriculture management. This method helps agricultural workers to monitor crop growth in real-time and optimize management decisions. Full article
(This article belongs to the Section Vegetable Production Systems)
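Several of the color features listed in this abstract follow well-known formulas over the R, G, B channels. A sketch using their common definitions (the paper's exact variants may differ, and the pixel values below are illustrative):

```python
def color_features(R, G, B):
    """A few RGB-derived colour indices, using their common definitions
    (the paper's exact formulas may differ slightly)."""
    total = (R + G + B) or 1            # avoid dividing by zero on black pixels
    r, g, b = R / total, G / total, B / total
    ndi = (2 * G - R - B) / ((2 * G + R + B) or 1)
    return {
        "g": g,
        "g-b": g - b,
        "ExG": 2 * g - r - b,           # excess green index
        "(2G-R-B)/(2G+R+B)": ndi,
        "CIVE": 0.441 * r - 0.811 * g + 0.385 * b + 18.78745,
    }

feats = color_features(60, 120, 40)     # a green-dominant leaf pixel
print(round(feats["ExG"], 4))
```

Features like these would form the input vectors for the MLR, RR, SVR, RF, and stacking models the study compares.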

21 pages, 5571 KiB  
Article
YOLOv11-RDTNet: A Lightweight Model for Citrus Pest and Disease Identification Based on an Improved YOLOv11n
by Qiufang Dai, Shiyao Liang, Zhen Li, Shilei Lyu, Xiuyun Xue, Shuran Song, Ying Huang, Shaoyu Zhang and Jiaheng Fu
Agronomy 2025, 15(5), 1252; https://doi.org/10.3390/agronomy15051252 - 21 May 2025
Viewed by 888
Abstract
Citrus pests and diseases severely impact fruit yield and quality. However, existing object detection models face limitations in complex backgrounds, target occlusion, and small-target recognition, and they are difficult to deploy efficiently on resource-constrained devices. To address these issues, this study proposes a lightweight pest and disease detection model, YOLOv11-RDTNet, based on the improved YOLOv11n. The model integrates multi-scale features and attention mechanisms to enhance recognition performance in complex scenarios, while adopting a lightweight design to reduce computational costs and improve deployment adaptability. The model introduces three key enhancements. First, shallow RFD (SRFD) and deep RFD (DRFD) downsampling modules replace traditional convolution modules, improving the accuracy and robustness of image feature extraction. Second, the Dynamic Group Shuffle Transformer (DGST) module replaces the original C3k2 module, reducing the model's parameter count and computational demand and further enhancing efficiency and performance. Lastly, the lightweight Task Align Dynamic Detection Head (TADDH) replaces the original detection head, significantly reducing the parameter count and improving accuracy in small-object detection. After processing the collected images, we obtained 1382 images and constructed a dataset containing five types of citrus pests and diseases: anthracnose, canker, yellow vein disease, coal pollution disease, and leaf miner moth. We applied data augmentation to the dataset and conducted experimental validation. Experimental results showed that the YOLOv11-RDTNet model had a parameter count of 1.54 M, an mAP50 of 87.0%, and a model size of 3.4 MB. Compared to the original YOLOv11 model, YOLOv11-RDTNet reduced the parameter count by 40.3%, improved mAP50 by 4.8%, and shrank the model size from 5.5 MB to 3.4 MB. The model not only improved detection accuracy and reduced computational load but also balanced performance, size, and speed, making it more suitable for deployment on mobile devices. Additionally, the findings provide an effective tool for citrus pest and disease detection with small sample sizes, offering valuable insights for agricultural practice. Full article
(This article belongs to the Special Issue Smart Pest Control for Building Farm Resilience)

30 pages, 10124 KiB  
Review
Innovations in Sensor-Based Systems and Sustainable Energy Solutions for Smart Agriculture: A Review
by Md. Mahadi Hasan Sajib and Abu Sadat Md. Sayem
Encyclopedia 2025, 5(2), 67; https://doi.org/10.3390/encyclopedia5020067 - 20 May 2025
Viewed by 1577
Abstract
Smart agriculture is transforming traditional farming by integrating advanced sensor-based systems, intelligent control technologies, and sustainable energy solutions to meet the growing global demand for food while reducing environmental impact. This review presents a comprehensive analysis of recent innovations in smart agriculture, focusing on the deployment of IoT-based sensors, wireless communication protocols, energy-harvesting methods, and automated irrigation and fertilization systems. Furthermore, the paper explores the role of artificial intelligence (AI), machine learning (ML), computer vision, and big data analytics in monitoring and managing key agricultural parameters such as crop health, pest and disease detection, soil conditions, and water usage. Special attention is given to decision-support systems, precision agriculture techniques, and the application of remote and proximal sensing technologies like hyperspectral imaging, thermal imaging, and NDVI-based indices. By evaluating the benefits, limitations, and emerging trends of these technologies, this review aims to provide insights into how smart agriculture can enhance productivity, resource efficiency, and sustainability in modern farming systems. The findings serve as a valuable reference for researchers, practitioners, and policymakers working towards sustainable agricultural innovation. Full article
(This article belongs to the Section Engineering)
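The NDVI-based indices mentioned above reduce to a normalized difference of near-infrared and red reflectance. A one-line sketch (the reflectance values are illustrative, not from the review):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red
    reflectance; healthy vegetation reflects strongly in NIR."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Dense green canopy: high NIR, low red reflectance → NDVI near +0.7.
print(ndvi(0.5, 0.1))
```

Values range from -1 to +1, with bare soil near zero and water negative, which is what makes NDVI a convenient crop-health proxy in remote sensing.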

37 pages, 4964 KiB  
Review
A Comprehensive Review of Deep Learning Applications in Cotton Industry: From Field Monitoring to Smart Processing
by Zhi-Yu Yang, Wan-Ke Xia, Hao-Qi Chu, Wen-Hao Su, Rui-Feng Wang and Haihua Wang
Plants 2025, 14(10), 1481; https://doi.org/10.3390/plants14101481 - 15 May 2025
Cited by 7 | Viewed by 1413
Abstract
Cotton is a vital economic crop in global agriculture and the textile industry, contributing significantly to food security, industrial competitiveness, and sustainable development. Traditional technologies such as spectral imaging and machine learning have improved cotton cultivation and processing, yet their performance often falls short in complex agricultural environments. Deep learning (DL), with its superior capabilities in data analysis, pattern recognition, and autonomous decision-making, offers transformative potential across the cotton value chain. This review highlights DL applications in seed quality assessment, pest and disease detection, intelligent irrigation, autonomous harvesting, and fiber classification, among others. DL enhances accuracy, efficiency, and adaptability, promoting the modernization of cotton production and precision agriculture. However, challenges remain, including limited model generalization, high computational demands, environmental adaptability issues, and costly data annotation. Future research should prioritize lightweight, robust models, standardized multi-source datasets, and real-time performance optimization. Integrating multi-modal data, such as remote sensing, weather, and soil information, can further boost decision-making. Addressing these challenges will enable DL to play a central role in driving the intelligent, automated, and sustainable transformation of the cotton industry. Full article
(This article belongs to the Section Plant Modeling)

22 pages, 26533 KiB  
Article
A Hybrid Machine Learning Approach for Detecting and Assessing Zyginidia pullula Damage in Maize Leaves
by Havva Esra Bakbak, Caner Balım and Aydogan Savran
Appl. Sci. 2025, 15(10), 5432; https://doi.org/10.3390/app15105432 - 13 May 2025
Viewed by 458
Abstract
This study presents a novel approach for the detection and severity assessment of pest-induced damage in maize plants, focusing on the Zyginidia pullula pest. A newly developed dataset is utilized, where maize plant images are initially classified into two primary categories: healthy and infected. Subsequently, infected samples are categorized into three distinct severity levels: low, medium, and high. Both traditional and deep learning-based feature extraction techniques are employed to achieve this. Specifically, hand-crafted feature extraction methods, including Gabor filters, Gray Level Co-occurrence Matrix, and Hue-Saturation-Value color space, are combined with CNN-based models such as ResNet-50, DenseNet-201, and EfficientNet-B2. The maize images undergo preprocessing and segmentation using Contrast Limited Adaptive Histogram Equalization and U2Net, respectively. Extracted features are then fused and subjected to Principal Component Analysis for dimensionality reduction. The classification task is performed using Support Vector Machines, Random Forest, and Artificial Neural Networks, ensuring robust and accurate detection. The experimental results demonstrate that the proposed hybrid approach outperforms individual feature extraction methods, achieving a classification accuracy of up to 92.55%. Furthermore, integrating multiple feature representations significantly enhances the model’s ability to differentiate between varying levels of pest damage. Unlike previous studies that primarily focus on plant disease detection, this research uniquely addresses the quantification of pest-induced damage, offering a valuable tool for precision agriculture. The findings of this study contribute to the development of automated, scalable, and efficient pest monitoring systems, which are crucial for minimizing yield losses and improving agricultural sustainability. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
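The fusion step this abstract describes (hand-crafted features concatenated with CNN features, before PCA) can be sketched as follows; the helper names and toy values are illustrative, not from the paper:

```python
def zscore(column):
    """Standardize one feature column to zero mean and unit variance."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = var ** 0.5 or 1.0             # guard against constant columns
    return [(x - mean) / std for x in column]

def fuse(handcrafted, cnn):
    """Concatenate hand-crafted and CNN feature vectors per sample, then
    standardize each fused column so neither feature family dominates."""
    fused = [h + c for h, c in zip(handcrafted, cnn)]   # per-sample concat
    cols = [zscore(list(col)) for col in zip(*fused)]   # per-feature scaling
    return [list(row) for row in zip(*cols)]

hc = [[0.2, 5.0], [0.4, 7.0]]   # e.g. Gabor / GLCM features (toy values)
cnn = [[10.0], [30.0]]          # e.g. a CNN embedding (toy values)
print(fuse(hc, cnn))
```

Standardizing before PCA is the usual precaution when mixing feature families whose raw scales differ by orders of magnitude, as hand-crafted texture statistics and CNN activations typically do.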

27 pages, 411 KiB  
Systematic Review
Artificial Neural Networks for Image Processing in Precision Agriculture: A Systematic Literature Review on Mango, Apple, Lemon, and Coffee Crops
by Christian Unigarro, Jorge Hernandez and Hector Florez
Informatics 2025, 12(2), 46; https://doi.org/10.3390/informatics12020046 - 6 May 2025
Viewed by 1534
Abstract
Precision agriculture is an approach that uses information technologies to improve and optimize agricultural production. It is based on the collection and analysis of agricultural data to support decision making in agricultural processes. In recent years, Artificial Neural Networks (ANNs) have demonstrated significant benefits in addressing precision agriculture needs, such as pest detection, disease classification, crop state assessment, and soil quality evaluation. This article aims to perform a systematic literature review on how ANNs with an emphasis on image processing can assess if fruits such as mango, apple, lemon, and coffee are ready for harvest. These specific crops were selected due to their diversity in color and size, providing a representative sample for analyzing the most commonly employed ANN methods in agriculture, especially for fruit ripening, damage, pest detection, and harvest prediction. This review identifies Convolutional Neural Networks (CNNs), including commonly employed architectures such as VGG16 and ResNet50, as highly effective, achieving accuracies ranging between 83% and 99%. Additionally, it discusses the integration of hardware and software, image preprocessing methods, and evaluation metrics commonly employed. The results reveal the notable underuse of vegetation indices and infrared imaging techniques for detailed fruit quality assessment, indicating valuable opportunities for future research. Full article