Search Results (53)

Search Parameters:
Keywords = GhostNetV2

21 pages, 5022 KB  
Article
GLL-YOLO: A Lightweight Network for Detecting the Maturity of Blueberry Fruits
by Yanlei Xu, Haoxu Li, Yang Zhou, Yuting Zhai, Yang Yang and Daping Fu
Agriculture 2025, 15(17), 1877; https://doi.org/10.3390/agriculture15171877 - 3 Sep 2025
Cited by 1 | Viewed by 682
Abstract
The traditional detection of blueberry maturity relies on human experience, which is inefficient and highly subjective. Although deep learning methods have improved accuracy, they require large models and complex computations, making real-time deployment on resource-constrained edge devices difficult. To address these issues, a GLL-YOLO method based on the YOLOv8 network is proposed to deal with problems such as fruit occlusion and complex backgrounds in mature blueberry detection. This approach utilizes the GhostNetV2 network as the backbone, and an LIMC module is proposed to replace the original C2f module. Meanwhile, a Lightweight Shared Convolution Detection Head (LSCD) module is designed to build the GLL-YOLO model. This model can accurately detect blueberries at three maturity stages: unripe, semi-ripe, and ripe. It significantly reduces the number of model parameters and floating-point operations while maintaining high accuracy. Experimental results show that GLL-YOLO outperforms the original YOLOv8 model in terms of accuracy, with mAP improvements of 4.29%, 1.67%, and 1.39% for unripe, semi-ripe, and ripe blueberries, reaching 94.51%, 91.72%, and 93.32%, respectively. Compared to the original model, GLL-YOLO improves accuracy, recall, and mAP by 2.3%, 5.9%, and 1%, respectively, while reducing parameters, FLOPs, and model size by 50%, 39%, and 46.7%. With its small model size, high accuracy, and good detection performance, this method provides reliable support for intelligent blueberry harvesting. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
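Several results in this listing build on GhostNet-style backbones (GhostNetV2, GhostConv, GhostBlock), whose savings come from generating part of each layer's feature maps with cheap depthwise operations instead of full convolutions. As a rough sketch of the parameter arithmetic only (the channel counts, kernel sizes, and ratio s = 2 below are illustrative assumptions, not figures from any listed paper):

```python
def conv_params(c_in, c_out, k):
    # Weight count of a standard k x k convolution (bias omitted).
    return c_in * c_out * k * k

def ghost_module_params(c_in, c_out, k=3, d=3, s=2):
    # GhostNet-style module: a primary convolution produces c_out / s
    # "intrinsic" feature maps; the remaining (s - 1) * c_out / s channels
    # are generated from them by cheap depthwise d x d operations.
    intrinsic = c_out // s
    primary = c_in * intrinsic * k * k   # ordinary convolution
    cheap = (s - 1) * intrinsic * d * d  # depthwise "ghost" branches
    return primary + cheap

std = conv_params(256, 256, 3)
ghost = ghost_module_params(256, 256, k=3, d=3, s=2)
print(std, ghost, round(std / ghost, 2))
```

With s = 2 the module needs roughly half the weights of the standard convolution, which is the order of reduction the GhostNet family targets; the abstracts above stack further techniques on top of this to reach their reported savings.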

22 pages, 66579 KB  
Article
Cgc-YOLO: A New Detection Model for Defect Detection of Tea Tree Seeds
by Yuwen Liu, Hao Li, Kefan Yu, Hui Zhu, Binjie Zhang, Wangyu Wu and Hongbo Mu
Sensors 2025, 25(17), 5446; https://doi.org/10.3390/s25175446 - 2 Sep 2025
Viewed by 628
Abstract
Tea tree seeds are highly sensitive to dehydration and cannot be stored for extended periods, making surface defect detection crucial for preserving their germination rate and overall quality. To address this challenge, we propose Cgc-YOLO, an enhanced YOLO-based model specifically designed to detect small-scale and complex surface defects in tea seeds. A high-resolution imaging system was employed to construct a dataset encompassing five common types of tea tree seeds, capturing diverse defect patterns. Cgc-YOLO incorporates two key improvements: (1) GhostBlock, derived from GhostNetV2, embedded in the Backbone to enhance computational efficiency and long-range feature extraction; and (2) the CPCA attention mechanism, integrated into the Neck, to improve sensitivity to local textures and boundary details, thereby boosting segmentation and localization accuracy. Experimental results demonstrate that Cgc-YOLO achieves 97.6% mAP50 and 94.9% mAP50–95, surpassing YOLO11 by 2.3% and 3.1%, respectively. Furthermore, the model retains a compact size of only 8.5 MB, delivering an excellent balance between accuracy and efficiency. This study presents a robust and lightweight solution for nondestructive detection of tea seed defects, contributing to intelligent seed screening and storage quality assurance. Full article
(This article belongs to the Section Sensing and Imaging)

16 pages, 2990 KB  
Article
Walnut Surface Defect Classification and Detection Model Based on Enhanced YOLO11n
by Xinyi Ma, Zhongjia Hao, Shuangyin Liu and Jingbin Li
Agriculture 2025, 15(15), 1707; https://doi.org/10.3390/agriculture15151707 - 7 Aug 2025
Cited by 1 | Viewed by 606
Abstract
Aiming at the challenges in practical production lines, including the difficulty in accurately capturing external defects on continuously rolling walnuts, distinguishing subtle defects, and differentiating narrow fissures from natural walnut textures, this paper proposes an improved walnut external defect detection model named YOLO11-GME, based on YOLO11n. Firstly, the original backbone network is replaced with the lightweight GhostNetV1 network, enhancing model precision while meeting real-time detection speed requirements. Secondly, a Mixed Local Channel Attention (MLCA) mechanism is incorporated into the neck to strengthen the network’s ability to capture features of subtle defects, thereby improving defect recognition accuracy. Finally, the EIoU loss function is adopted to enhance the model’s localization capability for irregularly shaped defects and reduce false detection rates by improving the scale sensitivity of bounding box regression. Experimental results demonstrate that the improved YOLO11-GME model achieves a mean Average Precision (mAP) of 96.2%, representing improvements of 8.6%, 7%, and 5.8% compared to YOLOv5n, YOLOv8n, and YOLOv10n, respectively, and a 5.9% improvement over the original YOLO11. Precision rates for the normal, fissure, and inferior categories increased by 8.7%, 5.3%, and 3.7%, respectively. The frame rate remains at 43.92 FPS, approaching the original model’s 51.02 FPS. These results validate that the YOLO11-GME model enhances walnut external defect detection accuracy while maintaining real-time detection speed, providing robust technical support for defect detection and classification in industrial walnut production. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

22 pages, 6645 KB  
Article
Visual Detection on Aircraft Wing Icing Process Using a Lightweight Deep Learning Model
by Yang Yan, Chao Tang, Jirong Huang, Zhixiong Cen and Zonghong Xie
Aerospace 2025, 12(7), 627; https://doi.org/10.3390/aerospace12070627 - 12 Jul 2025
Viewed by 604
Abstract
Aircraft wing icing significantly threatens aviation safety, causing substantial losses to the aviation industry each year. The high transparency and blurred edges of icing areas in wing images pose challenges for machine-vision-based icing detection. To address these challenges, this study proposes a detection model, Wing Icing Detection DeeplabV3+ (WID-DeeplabV3+), for efficient and precise detection of aircraft wing leading-edge icing under natural lighting conditions. WID-DeeplabV3+ adopts the lightweight MobileNetV3 as its backbone network to enhance the extraction of edge features in icing areas. Ghost Convolution and Atrous Spatial Pyramid Pooling modules are incorporated to reduce model parameters and computational complexity. The model is optimized using transfer learning, where pre-trained weights are utilized to accelerate convergence and enhance performance. Experimental results show that WID-DeeplabV3+ segments the icing edge in a 1920 × 1080 image within 0.03 s. The model achieves an accuracy of 97.15%, an IoU of 94.16%, a precision of 97%, and a recall of 96.96%, representing respective improvements of 1.83%, 3.55%, 1.79%, and 2.04% over DeeplabV3+. The number of parameters and the computational complexity are reduced by 92% and 76%, respectively. With high accuracy, superior IoU, and fast inference speed, WID-DeeplabV3+ provides an effective solution for wing-icing detection. Full article
(This article belongs to the Section Aeronautics)

31 pages, 20469 KB  
Article
YOLO-SRMX: A Lightweight Model for Real-Time Object Detection on Unmanned Aerial Vehicles
by Shimin Weng, Han Wang, Jiashu Wang, Changming Xu and Ende Zhang
Remote Sens. 2025, 17(13), 2313; https://doi.org/10.3390/rs17132313 - 5 Jul 2025
Cited by 3 | Viewed by 1650
Abstract
Unmanned Aerial Vehicles (UAVs) face a significant challenge in balancing high accuracy and high efficiency when performing real-time object detection tasks, especially amidst intricate backgrounds, diverse target scales, and stringent onboard computational resource constraints. To tackle these difficulties, this study introduces YOLO-SRMX, a lightweight real-time object detection framework specifically designed for infrared imagery captured by UAVs. Firstly, the model utilizes ShuffleNetV2 as an efficient lightweight backbone and integrates the novel Multi-Scale Dilated Attention (MSDA) module. This strategy not only facilitates a substantial 46.4% reduction in parameter volume but also, through the flexible adaptation of receptive fields, boosts the model’s robustness and precision in multi-scale object recognition tasks. Secondly, within the neck network, multi-scale feature extraction is facilitated through the design of novel composite convolutions, ConvX and MConv, based on a “split–differentiate–concatenate” paradigm. Furthermore, the lightweight GhostConv is incorporated to reduce model complexity. By synthesizing these principles, a novel composite receptive field lightweight convolution, DRFAConvP, is proposed to further optimize multi-scale feature fusion efficiency and promote model lightweighting. Finally, the Wise-IoU loss function is adopted to replace the traditional bounding box loss. This is coupled with a dynamic non-monotonic focusing mechanism formulated using the concept of outlier degrees. This mechanism intelligently assigns elevated gradient weights to anchor boxes of moderate quality by assessing their relative outlier degree, while concurrently diminishing the gradient contributions from both high-quality and low-quality anchor boxes. Consequently, this approach enhances the model’s localization accuracy for small targets in complex scenes. Experimental evaluations on the HIT-UAV dataset corroborate that YOLO-SRMX achieves an mAP50 of 82.8%, representing a 7.81% improvement over the baseline YOLOv8s model; an F1 score of 80%, marking a 3.9% increase; and a substantial 65.3% reduction in computational cost (GFLOPs). YOLO-SRMX demonstrates an exceptional trade-off between detection accuracy and operational efficiency, thereby underscoring its considerable potential for efficient and precise object detection on resource-constrained UAV platforms. Full article
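Wise-IoU above, like the Shape-IoU variant used elsewhere in these results, extends plain bounding-box IoU with additional penalty or focusing terms. A minimal sketch of the underlying IoU computation those losses build on (the corner-coordinate box format is an assumed convention here, not taken from either paper):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2) corner coordinates. Plain IoU =
    # intersection area / union area; losses such as Wise-IoU or
    # Shape-IoU add extra terms on top of this base quantity.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # partially overlapping boxes
```

For the two unit-overlap boxes in the example the result is 1/7, since one unit of area is shared out of seven units covered in total.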

18 pages, 2924 KB  
Article
Nondestructive Detection and Quality Grading System of Walnut Using X-Ray Imaging and Lightweight WKNet
by Xiangpeng Fan and Jianping Zhou
Foods 2025, 14(13), 2346; https://doi.org/10.3390/foods14132346 - 1 Jul 2025
Cited by 1 | Viewed by 584
Abstract
Internal quality detection of walnuts is extremely important. To solve the challenges of walnut quality detection, we present the first comprehensive investigation of a walnut quality detection method using X-ray imaging and a deep learning model. An X-ray machine vision system was designed, and a walnut kernel detection (WKD) dataset was constructed. Then, an effective walnut kernel detection network (WKNet) was developed by adding Transformer, GhostNet, and criss-cross attention (CCA) modules to the YOLO v5s model, aiming to solve time-consumption and parameter-redundancy issues. WKNet achieved an mAP_0.5 of 0.9869, a precision of 0.9779, and a recall of 0.9875 for walnut kernel detection, with an inference time of only 11.9 ms per image. Extensive comparison experiments with state-of-the-art (SOTA) deep learning models demonstrated the advanced nature of WKNet. The online test of walnut internal quality detection also shows satisfactory performance. The innovative combination of X-ray imaging and WKNet provides significant implications for walnut quality control. Full article
(This article belongs to the Section Food Analytical Methods)

15 pages, 4176 KB  
Article
Wind Turbine Surface Crack Detection Based on YOLOv5l-GCB
by Feng Hu, Xiaohui Leng, Chao Ma, Guoming Sun, Dehong Wang, Duanxuan Liu and Zixuan Zhang
Energies 2025, 18(11), 2775; https://doi.org/10.3390/en18112775 - 27 May 2025
Viewed by 434
Abstract
The wind tower is a fundamental element of the wind power generation system, so the timely detection and rectification of surface cracks and other defects are imperative to ensure the stable function of the entire system. A new wind tower surface crack detection model, You Only Look Once version 5l GhostNetV2-CBAM-BiFPN (YOLOv5l-GCB), is proposed to accomplish the accurate classification of wind tower surface cracks. Ghost Network Version 2 (GhostNetV2) is integrated into the backbone of YOLOv5l to make the backbone lightweight, which simplifies the model and enhances inference speed; the Convolutional Block Attention Module (CBAM) is added to strengthen the model’s attention to the target region; and the bidirectional feature pyramid network (BiFPN) is adopted to enhance the model’s detection accuracy in complex scenes. The proposed improvement strategy is verified through ablation experiments. The experimental results indicate that the precision, recall, F1 score, and mean average precision of YOLOv5l-GCB reach 91.6%, 99.0%, 75.0%, and 84.6%, which are 4.7%, 2%, 1%, and 10.4% higher than those of YOLOv5l. The model can accurately recognize multiple types of cracks and detects an average of 28 images per second, improving detection speed. Full article

19 pages, 5134 KB  
Article
A Garbage Detection and Classification Model for Orchards Based on Lightweight YOLOv7
by Xinyuan Tian, Liping Bai and Deyun Mo
Sustainability 2025, 17(9), 3922; https://doi.org/10.3390/su17093922 - 27 Apr 2025
Cited by 3 | Viewed by 1225
Abstract
The disposal of orchard garbage (including pruning branches, fallen leaves, and non-biodegradable materials such as pesticide containers and plastic film) poses major difficulties for horticultural production and soil sustainability. Unlike general agricultural garbage, orchard garbage often contains both biodegradable organic matter and hazardous pollutants, which complicates efficient recycling. Traditional manual sorting methods are labour-intensive and inefficient in large-scale operations. To this end, we propose a lightweight YOLOv7-based detection model tailored to the orchard environment. By replacing the CSPDarknet53 backbone with MobileNetV3 and GhostNet, a mean average precision (mAP) of 84.4% is achieved with only 16% of the original model’s computational load. Meanwhile, a supervised comparative learning strategy further strengthens feature discrimination between horticulturally relevant categories and can distinguish compostable pruning residues from toxic materials. Experiments on a dataset containing 16 orchard-specific garbage types (e.g., pineapple shells, plastic mulch, and fertiliser bags) show that the model achieves high classification accuracy, especially for materials commonly found in tropical orchards. The lightweight nature of the algorithm allows for real-time deployment on edge devices such as drones or robotic platforms, and future integration with robotic arms for automated collection and sorting. By converting garbage into a compostable resource and separating contaminants, the technology is aligned with the country’s garbage segregation initiatives and global sustainability goals, providing a scalable pathway to reconcile ecological preservation and horticultural efficiency. Full article

23 pages, 19606 KB  
Article
Lubricating Grease Thickness Classification of Steel Wire Rope Surface Based on GEMR-MobileViT
by Ruqing Gong, Yuemin Wang, Fan Zhou and Binghui Tang
Sensors 2025, 25(9), 2738; https://doi.org/10.3390/s25092738 - 26 Apr 2025
Viewed by 594
Abstract
Proper surface lubrication with optimal grease thickness is essential for extending steel wire rope service life. To achieve automated lubrication quality control and address challenges like variable lighting and motion blur that degrade recognition accuracy in practical settings, this paper proposes an improved lightweight model, GEMR-MobileViT. The model is designed to identify the grease thickness on steel wire rope surfaces while mitigating the high parameter counts and computational complexity of existing models. In this model, part of the standard convolution is replaced by GhostConv, a novel efficient multi-scale attention (EMA) module is introduced into the local expression part of the MobileViT block, and the structure of residual connections within the MobileViT block is redesigned. A transfer learning method is then employed, and a custom dataset of steel wire rope lubrication images was constructed for model training. The experimental results demonstrated that GEMR-MobileViT achieved a recognition accuracy of 96.63% across five grease thickness categories, with 4.19 M parameters and 1.31 GFLOPs of computational complexity. Compared to the pre-improvement version, recognition accuracy improved by 4.4%, while its parameters and computational complexity were reduced by 15.2% and 10.3%, respectively. When compared with current mainstream classification models such as ConvNeXtV2, EfficientNetV2, EdgeNeXt, NextViT, and MobileNetV4, GEMR-MobileViT achieved superior recognition accuracy and a significantly smaller parameter count, striking a good balance between recognition precision and model size. The proposed model facilitates deployment at steel wire rope lubrication working sites, enabling the real-time monitoring of surface grease thickness and thereby offering a novel approach to automating steel wire rope maintenance. Full article
(This article belongs to the Section Sensing and Imaging)

30 pages, 24057 KB  
Article
Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics
by Tabinda Naz Syed, Jun Zhou, Imran Ali Lakhiar, Francesco Marinello, Tamiru Tesfaye Gemechu, Luke Toroitich Rottok and Zhizhen Jiang
Agriculture 2025, 15(8), 827; https://doi.org/10.3390/agriculture15080827 - 10 Apr 2025
Cited by 7 | Viewed by 1420
Abstract
Autonomous navigation in agricultural environments requires precise obstacle classification to ensure collision-free movement. This study proposes a convolutional neural network (CNN)-based model designed to enhance obstacle classification for agricultural robots, particularly in orchards. Building upon a previously developed YOLOv8n-based real-time detection system, the model incorporates Ghost Modules and Squeeze-and-Excitation (SE) blocks to enhance feature extraction while maintaining computational efficiency. Obstacles are categorized as “Real”—those that physically impact navigation, such as tree trunks and persons—and “Fake”—those that do not, such as tall weeds and tree branches—allowing for precise navigation decisions. The model was trained on separate orchard and campus datasets, fine-tuned using Hyperband optimization, and evaluated on an external test set to assess generalization to unseen obstacles. The model’s robustness was tested under varied lighting conditions, including low-light scenarios, to ensure real-world applicability. Computational efficiency was analyzed based on inference speed, memory consumption, and hardware requirements. Comparative analysis against state-of-the-art classification models (VGG16, ResNet50, MobileNetV3, DenseNet121, EfficientNetB0, and InceptionV3) confirmed the proposed model’s superior precision (p), recall (r), and F1-score, particularly in complex orchard scenarios. The model maintained strong generalization across diverse environmental conditions, including varying illumination and previously unseen obstacles. Furthermore, computational analysis revealed that the orchard-combined model achieved the highest inference speed at 2.31 FPS while maintaining a strong balance between accuracy and efficiency. When deployed in real-time, the model achieved 95.0% classification accuracy in orchards and 92.0% in campus environments. The real-time system demonstrated a false positive rate of 8.0% in the campus environment and 2.0% in the orchard, with a consistent false negative rate of 8.0% across both environments. These results validate the model’s effectiveness for real-time obstacle differentiation in agricultural settings. Its strong generalization, robustness to unseen obstacles, and computational efficiency make it well-suited for deployment in precision agriculture. Future work will focus on enhancing inference speed, improving performance under occlusion, and expanding dataset diversity to further strengthen real-world applicability. Full article

18 pages, 12151 KB  
Article
LGR-Net: A Lightweight Defect Detection Network Aimed at Elevator Guide Rail Pressure Plates
by Ruizhen Gao, Meng Chen, Yue Pan, Jiaxin Zhang, Haipeng Zhang and Ziyue Zhao
Sensors 2025, 25(6), 1702; https://doi.org/10.3390/s25061702 - 10 Mar 2025
Cited by 1 | Viewed by 911
Abstract
In elevator systems, pressure plates secure guide rails and limit displacement, but defects compromise their performance under stress. Current detection algorithms face challenges in achieving high localization accuracy and computational efficiency when detecting small defects in guide rail pressure plates. To overcome these limitations, this paper proposes a lightweight defect detection network (LGR-Net) for guide rail pressure plates based on the YOLOv8n algorithm. To solve the problem of excessive model parameters in the original algorithm, we enhance the baseline model’s backbone network by incorporating the lightweight MobileNetV3 and optimize the neck network using the Ghost convolution module (GhostConv). To improve the localization accuracy for small defects, we add a high-resolution small object detection layer (P2 layer) and integrate the Convolutional Block Attention Module (CBAM) to construct a four-scale feature fusion network. This study employs various data augmentation methods to construct a custom dataset for guide rail pressure plate defect detection. The experimental results show that LGR-Net outperforms other YOLO-series models in terms of overall performance, achieving optimal results in terms of precision (P = 98.7%), recall (R = 98.9%), mAP (99.4%), and parameter count (2,412,118). LGR-Net achieves low computational complexity and high detection accuracy, providing an efficient and effective solution for defect detection in elevator guide rail pressure plates. Full article
(This article belongs to the Section Sensing and Imaging)

24 pages, 3772 KB  
Article
A Lightweight Network Based on Dynamic Split Pointwise Convolution Strategy for Hyperspectral Remote Sensing Images Classification
by Jing Liu, Meiyi Wu, KangXin Li and Yi Liu
Remote Sens. 2025, 17(5), 888; https://doi.org/10.3390/rs17050888 - 2 Mar 2025
Viewed by 1116
Abstract
For reducing the parameters and computational complexity of networks while improving the classification accuracy of hyperspectral remote sensing images (HRSIs), a dynamic split pointwise convolution (DSPC) strategy is presented, and a lightweight convolutional neural network (CNN), i.e., CSM-DSPCss-Ghost, is proposed based on DSPC. A channel switching module (CSM) and a dynamic split pointwise convolution Ghost (DSPC-Ghost) module are presented by combining the presented DSPC with channel shuffling and the Ghost strategy, respectively. CSM replaces the first expansion pointwise convolution in the MobileNetV2 bottleneck module to reduce the parameter number and relieve the increasing channel correlation caused by the original channel expansion pointwise convolution. DSPC-Ghost replaces the second pointwise convolution in the MobileNetV2 bottleneck module, which can further reduce the number of parameters based on DSPC and extract the depth spectral and spatial features of HRSIs successively. Finally, the CSM-DSPCss-Ghost bottleneck module is presented by introducing a squeeze excitation module and a spatial attention module after the CSM and the depthwise convolution, respectively. The presented CSM-DSPCss-Ghost network consists of seven successive CSM-DSPCss-Ghost bottleneck modules. Experiments on four measured HRSIs show that, compared with 2D CNN, 3D CNN, MobileNetV2, ShuffleNet, GhostNet, and Xception, CSM-DSPCss-Ghost can significantly improve classification accuracy and running speed while reducing the number of parameters. Full article
(This article belongs to the Section Remote Sensing Image Processing)

19 pages, 13021 KB  
Article
GLS-YOLO: A Lightweight Tea Bud Detection Model in Complex Scenarios
by Shanshan Li, Zhe Zhang and Shijun Li
Agronomy 2024, 14(12), 2939; https://doi.org/10.3390/agronomy14122939 - 10 Dec 2024
Cited by 2 | Viewed by 1217
Abstract
The efficiency of tea bud harvesting has been greatly enhanced, and human labor intensity significantly reduced, through the mechanization and intelligent management of tea plantations. A key challenge for harvesting machinery is ensuring both the freshness of tea buds and the integrity of the tea plants. However, achieving precise harvesting requires complex computational models, which can limit practical deployment. To address the demand for high-precision yet lightweight tea bud detection, this study proposes the GLS-YOLO detection model, based on YOLOv8. The model leverages GhostNetV2 as its backbone network, replacing standard convolutions with depthwise separable convolutions, resulting in substantial reductions in computational load and memory consumption. Additionally, the C2f-LC module is integrated into the improved model, combining cross-covariance fusion with a lightweight contextual attention mechanism to enhance feature recognition and extraction quality. To tackle the challenges posed by varying poses and occlusions of tea buds, Shape-IoU was employed as the loss function to improve the scoring of similarly shaped objects, reducing false positives and false negatives while improving the detection of non-rectangular or irregularly shaped objects. Experimental results demonstrate the model’s superior performance, achieving an AP@0.5 of 90.55%. Compared to the original YOLOv8, the model size was reduced by 38.85%, and the number of parameters decreased by 39.95%. This study presents innovative advances in agricultural robotics by significantly improving the accuracy and efficiency of tea bud harvesting, simplifying the configuration process for harvesting systems, and effectively lowering the technological barriers for real-world applications. Full article
(This article belongs to the Section Precision and Digital Agriculture)
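GLS-YOLO above replaces standard convolutions with depthwise separable ones. The parameter saving can be sketched in back-of-the-envelope form (the channel counts and kernel size below are illustrative assumptions, not figures from the paper):

```python
def standard_conv_params(c_in, c_out, k):
    # Standard k x k convolution: every filter mixes all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k stage (one filter per input channel) followed by a
    # pointwise 1 x 1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

std_params = standard_conv_params(128, 128, 3)
sep_params = depthwise_separable_params(128, 128, 3)
print(std_params, sep_params, round(std_params / sep_params, 2))
```

The ratio here (about 8.4x for 128 channels and k = 3) matches the well-known 1/c_out + 1/k² reduction factor from the MobileNet line of work, which is why these backbones shrink model size so sharply at a modest accuracy cost.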

22 pages, 4876 KB  
Article
Innovative Ghost Channel Spatial Attention Network with Adaptive Activation for Efficient Rice Disease Identification
by Yang Zhou, Yang Yang, Dongze Wang, Yuting Zhai, Haoxu Li and Yanlei Xu
Agronomy 2024, 14(12), 2869; https://doi.org/10.3390/agronomy14122869 - 1 Dec 2024
Cited by 3 | Viewed by 1423
Abstract
To address the computational complexity and deployment challenges of traditional convolutional neural networks in rice disease identification, this paper proposes an efficient and lightweight model: Ghost Channel Spatial Attention ShuffleNet with Mish-ReLU Adaptive Activation Function (GCA-MiRaNet). Based on ShuffleNet V2, we effectively reduced the model’s parameter count by streamlining convolutional layers, decreasing stacking depth, and optimizing output channels. Additionally, the model incorporates the Ghost Module as a replacement for traditional 1 × 1 convolutions, further reducing computational overhead. We also introduce a Channel Spatial Attention Mechanism (CSAM) that significantly enhances feature extraction and generalization for rice disease detection. By combining the advantages of Mish and ReLU, we designed the Mish-ReLU Adaptive Activation Function (MAAF), improving the model’s generalization capacity and convergence speed. Through transfer learning and ElasticNet regularization, the model’s accuracy improved notably while overfitting was effectively avoided. Extensive experimental results indicate that GCA-MiRaNet attains a precision of 94.76% on the rice disease dataset, with a 95.38% reduction in model parameters and a compact size of only 0.4 MB. Compared to traditional models such as ResNet50 and EfficientNet V2, GCA-MiRaNet demonstrates significant advantages in overall performance, especially on embedded devices. This model not only enables efficient and accurate real-time disease monitoring but also provides a viable solution for rice field protection drones and Internet of Things management systems, advancing contemporary smart agricultural management. Full article
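The two activations the MAAF design draws on are standard and can be written in a few lines. The `maaf` blend below is a hypothetical illustration of how such a combination might look; the paper’s actual adaptive formulation is not given in the abstract, and `alpha` is an assumed mixing weight:

```python
import math

def relu(x):
    """ReLU: cheap to compute, but zero gradient for all negative inputs."""
    return max(0.0, x)

def mish(x):
    """Mish(x) = x * tanh(softplus(x)); smooth and non-monotonic,
    allowing small negative values to pass through."""
    return x * math.tanh(math.log1p(math.exp(x)))

def maaf(x, alpha=0.5):
    """Hypothetical adaptive blend of Mish and ReLU -- illustrative only;
    alpha would be learned or scheduled in a real adaptive variant."""
    return alpha * mish(x) + (1.0 - alpha) * relu(x)

print(round(mish(1.0), 4))  # → 0.8651
```

The intuition behind such a blend is to keep ReLU’s fast convergence for strongly positive activations while retaining Mish’s smoother gradients near zero.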
16 pages, 3285 KB  
Article
Research on the Classification of Sun-Dried Wild Ginseng Based on an Improved ResNeXt50 Model
by Dongming Li, Zhenkun Zhao, Yingying Yin and Chunxi Zhao
Appl. Sci. 2024, 14(22), 10613; https://doi.org/10.3390/app142210613 - 18 Nov 2024
Cited by 2 | Viewed by 1167
Abstract
Ginseng is a common medicinal herb whose unique medicinal properties give it high value. Traditional methods for classifying ginseng rely heavily on manual judgment, which is time-consuming and subjective. In contrast, deep learning methods can objectively learn the features of ginseng, saving both labor and time. This study proposes a ginseng-grade classification model based on an improved ResNeXt50. First, each convolutional layer in the Bottleneck structure is replaced with the corresponding Ghost module, reducing the model’s computational complexity and parameter count without compromising performance. Second, the SE attention mechanism is added, allowing the model to capture feature information more accurately and precisely. Next, the ELU activation function replaces the original ReLU activation function. The dataset is then augmented and divided into four grade categories for model training, yielding a model suited to ginseng grade classification. Compared with the classic convolutional neural network models ResNet50, AlexNet, iResNet, and EfficientNet_v2_s, accuracy improved by 10.22%, 5.92%, 4.63%, and 3.4%, respectively. The proposed model achieved the best results, with a validation accuracy of up to 93.14% and a loss value as low as 0.105. Experiments show that the method is effective for recognition and can be used in ginseng grade classification research. Full article
(This article belongs to the Special Issue Deep Learning and Digital Image Processing)
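The ELU-for-ReLU swap described above can be sketched directly: ELU matches ReLU on positive inputs but gives negative inputs a smooth, bounded response instead of a hard zero, which keeps mean activations closer to zero. The `alpha` below is the common default, not necessarily the paper’s setting:

```python
import math

def elu(x, alpha=1.0):
    """ELU passes positive inputs unchanged and smoothly saturates
    negative inputs toward -alpha."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def relu(x):
    """ReLU zeroes all negative inputs, which can stall gradient flow."""
    return max(0.0, x)

print(elu(1.0), round(elu(-1.0), 3))  # → 1.0 -0.632
```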