Search Results (1,599)

Search Parameters:
Keywords = improved YOLO

15 pages, 3332 KB  
Article
YOLOv11-XRBS: Enhanced Identification of Small and Low-Detail Explosives in X-Ray Backscatter Images
by Baolu Yang, Zhe Yang, Xin Wang, Baozhong Mu, Jie Xu and Hong Li
Sensors 2025, 25(19), 6130; https://doi.org/10.3390/s25196130 - 3 Oct 2025
Abstract
Identifying concealed explosives in X-ray backscatter (XRBS) imagery remains a critical challenge, primarily due to low image contrast, cluttered backgrounds, small object sizes, and limited structural details. To address these limitations, we propose YOLOv11-XRBS, an enhanced detection framework tailored to the characteristics of XRBS images. A dedicated dataset (SBCXray) comprising over 10,000 annotated images of simulated explosive scenarios under varied concealment conditions was constructed to support training and evaluation. The proposed framework introduces three targeted improvements: (1) adaptive architectural refinement to enhance multi-scale feature representation and suppress background interference, (2) a Size-Aware Focal Loss (SaFL) strategy to improve the detection of small and weak-feature objects, and (3) a recomposed loss function with scale-adaptive weighting to achieve more accurate bounding box localization. Experiments demonstrated that YOLOv11-XRBS outperforms both existing YOLO variants and classical detection models such as Faster R-CNN, SSD512, RetinaNet, DETR, and VGGNet, achieving a mean average precision (mAP) of 94.8%. These results confirm the robustness and practicality of the proposed framework, highlighting its potential for deployment in XRBS-based security inspection systems.
(This article belongs to the Special Issue Advanced Spectroscopy-Based Sensors and Spectral Analysis Technology)
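The Size-Aware Focal Loss (SaFL) is only named in the abstract, not specified. As a hedged sketch of the general idea, the standard focal loss can be modulated by a size-dependent weight so that small boxes contribute more to training; the weighting scheme below is an illustrative assumption, not the paper's formula:

```python
import math

def focal_loss(p, alpha=0.25, gamma=2.0):
    # Standard focal loss for a positive example with predicted probability p:
    # well-classified examples (p near 1) are down-weighted by (1 - p)**gamma.
    return -alpha * (1.0 - p) ** gamma * math.log(p)

def size_aware_focal_loss(p, box_area, img_area, alpha=0.25, gamma=2.0):
    # Hypothetical size-aware variant: scale the loss up for small objects.
    # The linear weight below is an assumption for illustration only.
    w = 2.0 - box_area / img_area  # tiny boxes -> weight near 2, huge -> near 1
    return w * focal_loss(p, alpha, gamma)
```

Under such a scheme, a small box at the same confidence incurs a larger loss, pushing the detector toward the small, low-detail objects the paper targets.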

37 pages, 10380 KB  
Article
FEWheat-YOLO: A Lightweight Improved Algorithm for Wheat Spike Detection
by Hongxin Wu, Weimo Wu, Yufen Huang, Shaohua Liu, Yanlong Liu, Nannan Zhang, Xiao Zhang and Jie Chen
Plants 2025, 14(19), 3058; https://doi.org/10.3390/plants14193058 - 3 Oct 2025
Abstract
Accurate detection and counting of wheat spikes are crucial for yield estimation and variety selection in precision agriculture. However, challenges such as complex field environments, morphological variations, and small target sizes hinder the performance of existing models in real-world applications. This study proposes FEWheat-YOLO, a lightweight and efficient detection framework optimized for deployment on agricultural edge devices. The architecture integrates four key modules: (1) FEMANet, a mixed aggregation feature enhancement network with Efficient Multi-scale Attention (EMA) for improved small-target representation; (2) BiAFA-FPN, a bidirectional asymmetric feature pyramid network for efficient multi-scale feature fusion; (3) ADown, an adaptive downsampling module that preserves structural details during resolution reduction; and (4) GSCDHead, a grouped shared convolution detection head for reduced parameters and computational cost. Evaluated on a hybrid dataset combining GWHD2021 and a self-collected field dataset, FEWheat-YOLO achieved a COCO-style AP of 51.11%, AP@50 of 89.8%, and AP scores of 18.1%, 50.5%, and 61.2% for small, medium, and large targets, respectively, with an average recall (AR) of 58.1%. In wheat spike counting tasks, the model achieved an R2 of 0.941, MAE of 3.46, and RMSE of 6.25, demonstrating high counting accuracy and robustness. The proposed model requires only 0.67 M parameters, 5.3 GFLOPs, and 1.6 MB of storage, while achieving an inference speed of 54 FPS. Compared to YOLOv11n, FEWheat-YOLO improved AP@50, AP_s, AP_m, AP_l, and AR by 0.53%, 0.7%, 0.7%, 0.4%, and 0.3%, respectively, while reducing parameters by 74%, computation by 15.9%, and model size by 69.2%. These results indicate that FEWheat-YOLO provides an effective balance between detection accuracy, counting performance, and model efficiency, offering strong potential for real-time agricultural applications on resource-limited platforms.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
22 pages, 32792 KB  
Article
MRV-YOLO: A Multi-Channel Remote Sensing Object Detection Method for Identifying Reclaimed Vegetation in Hilly and Mountainous Mining Areas
by Xingmei Li, Hengkai Li, Jingjing Dai, Kunming Liu, Guanshi Wang, Shengdong Nie and Zhiyu Zhang
Forests 2025, 16(10), 1536; https://doi.org/10.3390/f16101536 - 2 Oct 2025
Abstract
Leaching mining of ion-adsorption rare earths degrades soil organic matter and hampers vegetation recovery. High-resolution UAV remote sensing enables large-scale monitoring of reclamation, yet vegetation detection accuracy is constrained by key challenges. Conventional three-channel detection struggles with terrain complexity, illumination variation, and shadow effects. Fixed UAV altitude and missing topographic data further cause resolution inconsistencies, posing major challenges for accurate vegetation detection in reclaimed land. To enhance multi-spectral vegetation detection, the model input is expanded from the traditional three channels to six channels, enabling full utilization of multi-spectral information. Furthermore, the Channel Attention and Global Pooling SPPF (CAGP-SPPF) module is introduced for multi-scale feature extraction, integrating global pooling and channel attention to capture multi-channel semantic information. In addition, the C2f_DynamicConv module replaces conventional convolutions in the neck network to strengthen high-dimensional feature transmission and reduce information loss, thereby improving detection accuracy. On the self-constructed reclaimed vegetation dataset, MRV-YOLO outperformed YOLOv8, with mAP@0.5 and mAP@0.5:0.95 increasing by 4.6% and 10.8%, respectively. Compared with RT-DETR, YOLOv3, YOLOv5, YOLOv6, YOLOv7, YOLOv7-tiny, YOLOv8-AS, YOLOv10, and YOLOv11, mAP@0.5 improved by 6.8%, 9.7%, 5.3%, 6.5%, 6.4%, 8.9%, 4.6%, 2.1%, and 5.4%, respectively. The results demonstrate that multi-channel inputs incorporating near-infrared and dual red-edge bands significantly enhance detection accuracy for reclaimed vegetation in rare earth mining areas, providing technical support for ecological restoration monitoring.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

23 pages, 24448 KB  
Article
YOLO-SCA: A Lightweight Potato Bud Eye Detection Method Based on the Improved YOLOv5s Algorithm
by Qing Zhao, Ping Zhao, Xiaojian Wang, Qingbing Xu, Siyao Liu and Tianqi Ma
Agriculture 2025, 15(19), 2066; https://doi.org/10.3390/agriculture15192066 - 1 Oct 2025
Abstract
Bud eye identification is a critical step in the intelligent seed cutting process for potatoes. This study focuses on the challenges of low testing accuracy and excessive weighted memory in testing models for potato bud eye detection. It proposes an improved potato bud eye detection method based on YOLOv5s, referred to as the YOLO-SCA model, which synergistically optimizes three main modules. The improved model introduces the ShuffleNetV2 module to reconstruct the backbone network. The channel shuffling mechanism reduces the model’s weighted memory and computational load, while enhancing bud eye features. Additionally, the CBAM attention mechanism is embedded at specific layers, using dual-path feature weighting (channel and spatial) to enhance sensitivity to key bud eye features in complex contexts. Then, the Alpha-IoU function is used to replace the CIoU function as the bounding box regression loss function. Its single-parameter control mechanism and adaptive gradient amplification characteristics significantly improve the accuracy of bud eye positioning and strengthen the model’s anti-interference ability. Finally, we conduct pruning based on the channel evaluation after sparse training, accurately removing redundant channels, significantly reducing the amount of computation and weighted memory, and achieving real-time performance of the model. This study aims to address how potato bud eye detection models can achieve high-precision real-time detection under the conditions of limited computational resources and storage space. The improved YOLO-SCA model has a size of 3.6 MB, which is 35.3% of the original model; the number of parameters is 1.7 M, which is 25% of the original model; and the average accuracy rate is 95.3%, which is a 12.5% improvement over the original model. This study provides theoretical support for the development of potato bud eye recognition technology and intelligent cutting equipment.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
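For reference, the Alpha-IoU loss that replaces CIoU here is, in its basic form, a power generalization of the IoU loss, L = 1 − IoU^α. A minimal sketch for axis-aligned boxes (α = 3 is the commonly cited default) might look like:

```python
def iou(b1, b2):
    # IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def alpha_iou_loss(b_pred, b_gt, alpha=3.0):
    # Basic Alpha-IoU: raising IoU to alpha > 1 amplifies the loss
    # gradient for high-IoU boxes, sharpening final localization.
    return 1.0 - iou(b_pred, b_gt) ** alpha
```

The published method also extends the power transform to penalized variants such as CIoU; this sketch shows only the core single-parameter mechanism the abstract describes.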

22 pages, 5982 KB  
Article
YOLO-FDLU: A Lightweight Improved YOLO11s-Based Algorithm for Accurate Maize Pest and Disease Detection
by Bin Li, Licheng Yu, Huibao Zhu and Zheng Tan
AgriEngineering 2025, 7(10), 323; https://doi.org/10.3390/agriengineering7100323 - 1 Oct 2025
Abstract
As a global staple ensuring food security, maize incurs 15–20% annual yield loss from pests/diseases. Conventional manual detection is inefficient (>7.5 h/ha) and subjective, while existing YOLO models suffer from >8% missed detections of small targets (e.g., corn armyworm larva) in complex fields due to feature loss and poor multi-scale fusion. We propose YOLO-FDLU, a YOLO11s-based framework: LAD (Light Attention-Downsampling)-Conv preserves small-target features; C3k2_DDC (DilatedReparam–DilatedReparam–Conv) enhances cross-scale fusion; Detect_FCFQ (Feature-Corner Fusion and Quality Estimation) optimizes bounding box localization; UIoU (Unified-IoU) loss reduces high-IoU regression bias. Evaluated on a 25,419-sample dataset (6 categories, 3 public sources + 1200 compliant web images), it achieves 91.12% Precision, 92.70% mAP@0.5, 78.5% mAP@0.5–0.95, and 20.2 GFLOPs/15.3 MB. It outperforms YOLOv5-s to YOLO12-s, supporting precision maize pest/disease monitoring.

18 pages, 1859 KB  
Article
A Study on the Detection Method for Split Pin Defects in Power Transmission Lines Based on Two-Stage Detection and Mamba-YOLO-SPDC
by Wenjie Zhu, Faping Hu, Xuehao He, Luping Dong, Haixin Yu and Hai Tian
Appl. Sci. 2025, 15(19), 10625; https://doi.org/10.3390/app151910625 - 30 Sep 2025
Abstract
Detecting small split pins on transmission lines poses significant challenges, including low accuracy in complex backgrounds and slow inference speeds. To address these limitations, this study proposes a novel two-stage collaborative detection framework. The first stage utilizes a Yolo11x-based model to localize and crop components containing split pins from high-resolution images. This procedure transforms the difficult small-object detection problem into a more manageable, conventional detection task on a simplified background. For the second stage, a new high-performance detector, Mamba-YOLO-SPDC, is introduced. This model enhances the Yolo11 backbone by incorporating a Vision State Space (VSS) block, which leverages Mamba, a State Space Model (SSM) with linear computational complexity, to efficiently capture global context. Furthermore, a Space-to-Depth Convolution (SPD-Conv) module is integrated into the neck to mitigate the loss of fine-grained feature information during downsampling. Experimental results confirm the efficacy of the two-stage strategy. On the cropped dataset, the Mamba-YOLO-SPDC model achieves a mean Average Precision (mAP) of 61.9%, a 238% improvement over the 18.3% mAP obtained by the baseline Yolo11s on the original images. Compared to the conventional SAHI framework, the proposed method provides superior accuracy with a substantial increase in inference speed. This work demonstrates that the ‘localize first, then detect’ strategy, powered by the Mamba-YOLO-SPDC model, offers an effective balance between accuracy and efficiency for small object detection.
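The space-to-depth step inside SPD-Conv can be illustrated independently of the detector. A minimal NumPy sketch (channel-last layout assumed) that halves spatial resolution without discarding any activations:

```python
import numpy as np

def space_to_depth(x, scale=2):
    # Rearrange an (H, W, C) feature map into (H/s, W/s, C*s*s):
    # each s-by-s spatial block becomes s*s channel slices, so the
    # downsampling step drops no pixels (the idea behind SPD-Conv).
    h, w, c = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(h // scale, scale, w // scale, scale, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // scale, w // scale, c * scale * scale)
```

A stride-1 convolution then mixes the stacked channels; because nothing is discarded, fine split-pin details survive downsampling better than with strided convolution or pooling.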

23 pages, 3731 KB  
Article
ELS-YOLO: Efficient Lightweight YOLO for Steel Surface Defect Detection
by Zhiheng Zhang, Guoyun Zhong, Peng Ding, Jianfeng He, Jun Zhang and Chongyang Zhu
Electronics 2025, 14(19), 3877; https://doi.org/10.3390/electronics14193877 - 29 Sep 2025
Abstract
Detecting surface defects in steel products is essential for maintaining manufacturing quality. However, existing methods struggle with significant challenges, including substantial defect size variations, diverse defect types, and complex backgrounds, leading to suboptimal detection accuracy. This work introduces ELS-YOLO, an advanced YOLOv11n-based algorithm designed to tackle these limitations. A C3k2_THK module is first introduced that combines a partial convolution, a heterogeneous kernel selection protocol, and the SCSA attention mechanism to improve feature extraction while reducing computational overhead. Additionally, the Staged-Slim-Neck module is developed, which employs dual and dilated convolutions at different stages while integrating GMLCA attention to enhance feature representation and reduce computational complexity. Furthermore, an MSDetect detection head is designed to boost multi-scale detection performance. Experimental validation shows that ELS-YOLO outperforms YOLOv11n in detection accuracy while achieving 8.5% and 11.1% reductions in the number of parameters and computational cost, respectively, demonstrating strong potential for real-world industrial applications.
(This article belongs to the Section Artificial Intelligence)

18 pages, 6231 KB  
Article
Optical Coherence Imaging Hybridized Deep Learning Framework for Automated Plant Bud Classification in Emasculation Processes: A Pilot Study
by Dasun Tharaka, Abisheka Withanage, Nipun Shantha Kahatapitiya, Ruvini Abhayapala, Udaya Wijenayake, Akila Wijethunge, Naresh Kumar Ravichandran, Bhagya Nathali Silva, Mansik Jeon, Jeehyun Kim, Udayagee Kumarasinghe and Ruchire Eranga Wijesinghe
Photonics 2025, 12(10), 966; https://doi.org/10.3390/photonics12100966 - 29 Sep 2025
Abstract
A vision-based autonomous system for emasculating okra enhances agriculture by enabling precise flower bud identification, overcoming the labor-intensive, error-prone challenges of traditional manual methods with improved accuracy and efficiency. This study presents a framework for an adaptive, automated bud identification method to assist the emasculation process, hybridized with optical coherence tomography (OCT). Three YOLOv8 variants were evaluated for accuracy, detection speed, and frame rate to identify the most efficient model. To strengthen the findings, YOLO was hybridized with OCT, enabling non-invasive sub-surface verification and precise quantification of the emasculated depth of both sepal and petal layers of the flower bud. To establish a solid benchmark, gold-standard color histograms and a digital imaging-based method under optimal lighting conditions with confidence scoring were also employed. The results demonstrated that the proposed method significantly outperformed these conventional frameworks, providing superior accuracy and layer differentiation during emasculation. Hence, the developed YOLOv8-hybridized OCT method for flower bud identification and emasculation offers a powerful tool to significantly improve both the precision and efficiency of crop breeding practices. This framework sets the stage for implementing scalable, artificial intelligence (AI)-driven strategies that can modernize and optimize traditional crop breeding workflows.

25 pages, 6044 KB  
Article
Computer Vision-Based Multi-Feature Extraction and Regression for Precise Egg Weight Measurement in Laying Hen Farms
by Yunxiao Jiang, Elsayed M. Atwa, Pengguang He, Jinhui Zhang, Mengzui Di, Jinming Pan and Hongjian Lin
Agriculture 2025, 15(19), 2035; https://doi.org/10.3390/agriculture15192035 - 28 Sep 2025
Abstract
Egg weight monitoring provides critical data for calculating the feed-to-egg ratio and improving poultry farming efficiency. Installing a computer vision monitoring system in egg collection systems enables efficient and low-cost automated egg weight measurement. However, its accuracy is compromised by egg clustering during transportation and low-contrast edges, which limits the widespread adoption of such methods. To address this, we propose an egg measurement method based on computer vision and a multi-feature extraction and regression approach. The proposed pipeline integrates two artificial neural networks: Central differential-EfficientViT YOLO (CEV-YOLO) and Egg Weight Measurement Network (EWM-Net). CEV-YOLO is an enhanced version of YOLOv11, incorporating central differential convolution (CDC) and efficient Vision Transformer (EfficientViT), enabling accurate pixel-level egg segmentation in the presence of occlusions and low-contrast edges. EWM-Net is a custom-designed neural network that utilizes the segmented egg masks to perform advanced feature extraction and precise weight estimation. Experimental results show that CEV-YOLO outperforms other YOLO-based models in egg segmentation, with a precision of 98.9%, a recall of 97.5%, and an Average Precision (AP) at an Intersection over Union (IoU) threshold of 0.9 (AP90) of 89.8%. EWM-Net achieves a mean absolute error (MAE) of 0.88 g and an R2 of 0.926 in egg weight measurement, outperforming six mainstream regression models. This study provides a practical and automated solution for precise egg weight measurement in real production scenarios, which is expected to improve the accuracy and efficiency of feed-to-egg ratio measurement in laying hen farms.
(This article belongs to the Section Agricultural Product Quality and Safety)
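The regression metrics reported above (MAE of 0.88 g, R2 of 0.926) are standard; a minimal reference implementation of both:

```python
def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 minus residual error relative
    # to a mean-only baseline; 1.0 means a perfect fit.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Because R2 compares against the mean-only baseline, the reported 0.926 means the model explains roughly 93% of the egg-weight variance.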

19 pages, 7875 KB  
Article
SATSN: A Spatial-Adaptive Two-Stream Network for Automatic Detection of Giraffe Daily Behaviors
by Haiming Gan, Xiongwei Wu, Jianlu Chen, Jingling Wang, Yuxin Fang, Yuqing Xue, Tian Jiang, Huanzhen Chen, Peng Zhang, Guixin Dong and Yueju Xue
Animals 2025, 15(19), 2833; https://doi.org/10.3390/ani15192833 - 28 Sep 2025
Abstract
The daily behavioral patterns of giraffes reflect their health status and well-being. Behaviors such as licking, walking, standing, and eating are not only essential components of giraffes’ routine activities but also serve as potential indicators of their mental and physiological conditions. This is particularly relevant in captive environments such as zoos, where certain repetitive behaviors may signal underlying well-being concerns. Therefore, developing an efficient and accurate automated behavior detection system is of great importance for scientific management and welfare improvement. This study proposes a multi-behavior automatic detection method for giraffes based on YOLO11-Pose and the spatial-adaptive two-stream network (SATSN). Firstly, YOLO11-Pose is employed to detect giraffes and estimate the keypoints of their mouths. Observation-Centric SORT (OC-SORT) is then used to track individual giraffes across frames, ensuring temporal identity consistency based on the keypoint positions estimated by YOLO11-Pose. In the SATSN, we propose a region-of-interest extraction strategy for licking behavior to extract local motion features and perform daily behavior classification. In this network, the original 3D ResNet backbone in the slow pathway is replaced with a video transformer encoder to enhance global spatiotemporal modeling, while a Temporal Attention (TA) module is embedded in the fast pathway to improve the representation of fast motion features. To validate the effectiveness of the proposed method, a giraffe behavior dataset consisting of 420 video clips (10 s per clip) was constructed, with 336 clips used for training and 84 for validation. Experimental results show that for the detection tasks of licking, walking, standing, and eating behaviors, the proposed method achieves a mean average precision (mAP) of 93.99%. This demonstrates the strong detection performance and generalization capability of the approach, providing robust support for automated multi-behavior detection and well-being assessment of giraffes. It also lays a technical foundation for building intelligent behavioral monitoring systems in zoos.

20 pages, 1860 KB  
Article
An Improved YOLOv11n Model Based on Wavelet Convolution for Object Detection in Soccer Scenes
by Yue Wu, Lanxin Geng, Xinqi Guo, Chao Wu and Gui Yu
Symmetry 2025, 17(10), 1612; https://doi.org/10.3390/sym17101612 - 28 Sep 2025
Abstract
Object detection in soccer scenes serves as a fundamental task for soccer video analysis and target tracking. This paper proposes WCC-YOLO, a symmetry-enhanced object detection framework based on YOLOv11n. Our approach integrates symmetry principles at multiple levels: (1) The novel C3k2-WTConv module synergistically combines conventional convolution with wavelet decomposition, leveraging the orthogonal symmetry of Haar wavelet quadrature mirror filters (QMFs) to achieve balanced frequency-domain decomposition and enhance multi-scale feature representation. (2) The Channel Prior Convolutional Attention (CPCA) mechanism incorporates symmetrical operations, using average-max pooling pairs in channel attention and multi-scale convolutional kernels in spatial attention, to automatically learn to prioritize semantically salient regions through channel-wise feature recalibration, thereby enabling balanced feature representation. Coupled with InnerShape-IoU for refined bounding box regression, WCC-YOLO achieves a 4.5% improvement in mAP@0.5:0.95 and a 5.7% gain in mAP@0.5 compared to the baseline YOLOv11n while simultaneously reducing the number of parameters and maintaining near-identical inference latency (δ < 0.1 ms). This work demonstrates the value of explicit symmetry-aware modeling for sports analytics.
(This article belongs to the Section Computer)
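The Haar quadrature mirror filters mentioned above can be demonstrated with a one-level 2D decomposition. This NumPy sketch uses the orthonormal Haar pair and is only an illustration of the frequency split, not the paper's C3k2-WTConv implementation:

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2D Haar decomposition of an (H, W) array with even H, W.
    # The low-pass (1,1)/sqrt(2) and high-pass (1,-1)/sqrt(2) filters are
    # quadrature mirror filters: together they split the spectrum evenly.
    lo = np.array([1.0, 1.0]) / np.sqrt(2.0)
    hi = np.array([1.0, -1.0]) / np.sqrt(2.0)

    def filt(a, f):
        # Filter along axis 1 with stride-2 downsampling.
        return np.stack([a[:, i] * f[0] + a[:, i + 1] * f[1]
                         for i in range(0, a.shape[1], 2)], axis=1)

    row_lo, row_hi = filt(x, lo), filt(x, hi)
    ll = filt(row_lo.T, lo).T  # approximation subband
    lh = filt(row_lo.T, hi).T  # horizontal detail
    hl = filt(row_hi.T, lo).T  # vertical detail
    hh = filt(row_hi.T, hi).T  # diagonal detail
    return ll, lh, hl, hh
```

Because the pair is orthonormal, the four subbands partition the signal energy exactly, which is the balanced frequency-domain decomposition the abstract cites.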

26 pages, 10666 KB  
Article
FALS-YOLO: An Efficient and Lightweight Method for Automatic Brain Tumor Detection and Segmentation
by Liyan Sun, Linxuan Zheng and Yi Xin
Sensors 2025, 25(19), 5993; https://doi.org/10.3390/s25195993 - 28 Sep 2025
Abstract
Brain tumors are highly malignant diseases that severely threaten the nervous system and patients’ lives. MRI is a core technology for brain tumor diagnosis and treatment due to its high resolution and non-invasiveness. However, existing YOLO-based models face challenges in brain tumor MRI image detection and segmentation, such as insufficient multi-scale feature extraction and high computational resource consumption. This paper proposes an improved lightweight brain tumor detection and instance segmentation model named FALS-YOLO, based on YOLOv8n-Seg and integrating three key modules: FLRDown, AdaSimAM, and LSCSHN. FLRDown enhances multi-scale tumor perception, AdaSimAM suppresses noise and improves feature fusion, and LSCSHN achieves high-precision segmentation with reduced parameters and computational burden. Experiments on the tumor-otak dataset show that FALS-YOLO achieves Precision (B) of 0.892, Recall (B) of 0.858, and mAP@0.5 (B) of 0.912 in detection, and Precision (M) of 0.899, Recall (M) of 0.863, and mAP@0.5 (M) of 0.917 in segmentation, outperforming YOLOv5n-Seg, YOLOv8n-Seg, YOLOv9s-Seg, YOLOv10n-Seg and YOLOv11n-Seg. Compared with YOLOv8n-Seg, FALS-YOLO reduces parameters by 31.95%, computation by 20.00%, and model size by 32.31%. It provides an efficient, accurate and practical solution for the automatic detection and instance segmentation of brain tumors in resource-limited environments.
(This article belongs to the Special Issue Emerging MRI Techniques for Enhanced Disease Diagnosis and Monitoring)

25 pages, 6078 KB  
Article
Stoma Detection in Soybean Leaves and Rust Resistance Analysis
by Jiarui Feng, Shichao Wu, Rong Mu, Huanliang Xu, Zhaoyu Zhai and Bin Hu
Plants 2025, 14(19), 2994; https://doi.org/10.3390/plants14192994 - 27 Sep 2025
Abstract
Stomata play a crucial role in plant immune responses, with their morphological characteristics closely linked to disease resistance. Accurate detection and analysis of stomatal phenotypic parameters are essential for soybean disease resistance research and variety breeding. However, traditional stoma detection methods are challenged by complex backgrounds and leaf vein structures in soybean images. To address these issues, we proposed a Soybean Stoma-YOLO (You Only Look Once) model (SS-YOLO) by incorporating large separable kernel attention (LSKA) in the Spatial Pyramid Pooling-Fast (SPPF) module of YOLOv8 and Deformable Large Kernel Attention (DLKA) in the Neck part. These architectural modifications enhanced YOLOv8’s ability to extract multi-scale and irregular stomatal features, thus improving detection accuracy. Experimental results showed that SS-YOLO achieved a detection accuracy of 98.7%. SS-YOLO can effectively extract the stomatal features (e.g., length, width, area, and orientation) and calculate related indices (e.g., density, area ratio, variance, and distribution). Across different soybean rust disease stages, the variety Dandou21 (DD21) exhibited less variation in length, width, area, and orientation compared with Fudou9 (FD9) and Huaixian5 (HX5). Furthermore, DD21 demonstrated greater uniformity in stomatal distribution (SEve: 1.02–1.08) and a stable stomatal area ratio (0.06–0.09). The analysis results indicate that DD21 maintained stable stomatal morphology with rust disease resistance. This study demonstrates that SS-YOLO significantly improved stoma detection and provided valuable insights into the relationship between stomatal characteristics and soybean disease resistance, offering a novel approach for breeding and plant disease resistance research.
(This article belongs to the Section Plant Modeling)

17 pages, 2172 KB  
Article
GLDS-YOLO: An Improved Lightweight Model for Small Object Detection in UAV Aerial Imagery
by Zhiyong Ju, Jiacheng Shui and Jiameng Huang
Electronics 2025, 14(19), 3831; https://doi.org/10.3390/electronics14193831 - 27 Sep 2025
Abstract
To enhance small object detection in UAV aerial imagery suffering from low resolution and complex backgrounds, this paper proposes GLDS-YOLO, an improved lightweight detection model. The model integrates four core modules: Group Shuffle Attention (GSA) to strengthen small-scale feature perception, Large Separable Kernel Attention (LSKA) to capture global semantic context, DCNv4 to enhance feature adaptability with reduced parameters, and a novel Small-object-enhanced Multi-scale and Structure Detail Enhancement (SMSDE) module that enhances edge-detail representation of small objects while maintaining lightweight efficiency. Experiments on VisDrone2019 and DOTA1.0 demonstrate that GLDS-YOLO achieves superior detection performance. On VisDrone2019, it improves mAP@0.5 and mAP@0.5:0.95 by 12.1% and 7%, respectively, compared with YOLOv11n, while maintaining competitive results on DOTA. These results confirm the model’s effectiveness, robustness, and adaptability for complex small object detection tasks in UAV scenarios.

15 pages, 1868 KB  
Article
Utility of Same-Modality, Cross-Domain Transfer Learning for Malignant Bone Tumor Detection on Radiographs: A Multi-Faceted Performance Comparison with a Scratch-Trained Model
by Joe Hasei, Ryuichi Nakahara, Yujiro Otsuka, Koichi Takeuchi, Yusuke Nakamura, Kunihiro Ikuta, Shuhei Osaki, Hironari Tamiya, Shinji Miwa, Shusa Ohshika, Shunji Nishimura, Naoaki Kahara, Aki Yoshida, Hiroya Kondo, Tomohiro Fujiwara, Toshiyuki Kunisada and Toshifumi Ozaki
Cancers 2025, 17(19), 3144; https://doi.org/10.3390/cancers17193144 - 27 Sep 2025
Abstract
Background/Objectives: Developing high-performance artificial intelligence (AI) models for rare diseases like malignant bone tumors is limited by scarce annotated data. This study evaluates same-modality cross-domain transfer learning by comparing an AI model pretrained on chest radiographs with a model trained from scratch for detecting malignant bone tumors on knee radiographs. Methods: Two YOLOv5-based detectors differed only in initialization (transfer vs. scratch). Both were trained/validated on institutional data and tested on an independent external set of 743 radiographs (268 malignant, 475 normal). The primary outcome was AUC; prespecified operating points were high-sensitivity (≥0.90), high-specificity (≥0.90), and Youden-optimal. Secondary analyses included PR/F1, calibration (Brier, slope), and decision curve analysis (DCA). Results: AUC was similar (YOLO-TL 0.954 [95% CI 0.937–0.970] vs. YOLO-SC 0.961 [0.948–0.973]; DeLong p = 0.53). At the high-sensitivity point (both sensitivity = 0.903), YOLO-TL achieved higher specificity (0.903 vs. 0.867; McNemar p = 0.037) and PPV (0.840 vs. 0.793; bootstrap p = 0.030), reducing ~17 false positives among 475 negatives. At the high-specificity point (~0.902–0.903 for both), YOLO-TL showed higher sensitivity (0.798 vs. 0.764; p = 0.0077). At the Youden-optimal point, sensitivity favored YOLO-TL (0.914 vs. 0.892; p = 0.041) with a non-significant specificity difference. Conclusions: Transfer learning may not improve overall AUC but can enhance practical performance at clinically crucial thresholds. By maintaining high detection rates while reducing false positives, the transfer learning model offers superior clinical utility. Same-modality cross-domain transfer learning is an efficient strategy for developing robust AI systems for rare diseases, supporting tools more readily acceptable in real-world screening workflows.
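The prespecified operating points can be illustrated with a small sketch: sweep thresholds over the two score distributions and pick, for example, the Youden-optimal point. This is illustrative, not the study's evaluation code:

```python
def operating_points(scores_pos, scores_neg, thresholds):
    # For each threshold, compute sensitivity (positives scored >= t)
    # and specificity (negatives scored < t), then return the point
    # maximizing Youden's J = sensitivity + specificity - 1.
    best = None
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1.0
        if best is None or j > best[0]:
            best = (j, t, sens, spec)
    return best  # (J, threshold, sensitivity, specificity)
```

The high-sensitivity and high-specificity points are chosen the same way, but by first restricting to thresholds with sensitivity ≥ 0.90 (or specificity ≥ 0.90) and then optimizing the other metric.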
