Search Results (6,121)

Search Parameters:
Keywords = YOLOv5

20 pages, 30829 KB  
Article
Crop-IRM: An Intelligent Recognition and Management System for Organ Characteristics of Crop Germplasm Resources
by Jie Zhang, Chenyao Yang, Hailin Peng, Xintong Wei, Jiaqi Zou, Shiyu Wang, Zhaohong Lu, Xianming Tan and Feng Yang
Agriculture 2026, 16(9), 996; https://doi.org/10.3390/agriculture16090996 - 30 Apr 2026
Abstract
The traditional methods of field-based phenotypic data collection for crop germplasm resources are often inefficient and highly subjective. As the foundation for breeding innovation, these resources require precise identification of phenotypic traits for effective evaluation and utilization. Therefore, efficient and standardized management of germplasm data is critical during the breeding process. To address this, we have developed an intelligent recognition and management system focused on the crop’s organ characteristics. The system consists of a web client for overall project management and data download, and a WeChat Mini Program for data collection and uploading. Both components are integrated with image analysis models. Using a soybean variety screening experiment as a case study, we have constructed multiple high-definition datasets for soybean phenotypic traits, and employed YOLOv11 series models for object detection, image classification, instance segmentation, and pose estimation to build analytical models for each of these traits. All models achieved a mean average precision (mAP@0.5) exceeding 94%, along with a top-1 accuracy of 0.999. In practical evaluations, all models took between 0.71 and 3.03 s to make predictions for 100 images, achieving an accuracy rate of over 98%. This system delivers a comprehensive solution for field phenotypic identification of crop germplasm resources, substantially enhancing the efficiency and objectivity of data collection and analysis. It serves as a valuable decision-support tool for precision breeding and digital agriculture. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
23 pages, 138069 KB  
Article
Instance Segmentation of Ship Images Based on Multi-Branch Adaptive Feature Fusion and Occluded Region Decoupling in Occluded Scenes
by Yuwei Zhu, Wentao Xue, Wei Liu, Hui Ye and Yaohua Shen
J. Mar. Sci. Eng. 2026, 14(9), 841; https://doi.org/10.3390/jmse14090841 - 30 Apr 2026
Abstract
Instance segmentation accurately extracts the position and outline of ships, serving as the foundation for maritime safety tasks such as multi-object tracking, sensor fusion, and collision warning. This study focuses on single-frame segmentation and aims to address the challenge of multi-scale ship occlusion in congested ports, providing reliable observational data through high-precision recognition to ensure navigation safety. Existing methods suffer from performance degradation in complex maritime environments due to factors such as multi-scale distribution, low resolution of distant targets, and frequent occlusions. Among these, ship occlusion is particularly problematic as it leads to feature confusion between adjacent instances and inaccurate boundary segmentation. To address these challenges, we propose a novel instance segmentation algorithm (MAF-ORDNet) based on Multi-branch Adaptive Feature Fusion and Occluded Region Decoupling. Firstly, a multi-branch adaptive feature fusion module is designed to capture contextual information through different receptive fields and dynamically fuse multi-scale features, thereby restoring occluded semantics and enhancing robustness. Secondly, an occlusion region decoupling module is constructed to accurately localize occluded regions and enhance contour responses via adaptive sampling, achieving refined boundary processing. In addition, we constructed and annotated the Occlusion ShipSeg dataset, which contains 1969 real occlusion images, 2150 simulated occlusion images, and 1132 images under adverse weather conditions, totaling 17,352 fine instance annotations. Experimental results show that, compared with PatchDCT, YOLOv11s, and Mask2Former, our method improves AP by 2.7%, 3.2%, and 2.4%, respectively, while maintaining a comparable inference speed to YOLOv8s. These results confirm that MAF-ORDNet achieves a favorable balance between accuracy and efficiency in multi-scale occluded ship segmentation tasks. Full article
(This article belongs to the Section Ocean Engineering)
28 pages, 2497 KB  
Article
Research on the Application of Time-Frequency Characteristics of GPR in Railway Mud Pumping Intelligent Detection
by Wenxing Shi, Shilei Wang, Feng Yang, Chi Zhang, Fanruo Li and Suping Peng
Remote Sens. 2026, 18(9), 1393; https://doi.org/10.3390/rs18091393 - 30 Apr 2026
Abstract
Ground penetrating radar (GPR), as an efficient non-destructive testing technique, plays a crucial role in the structural condition assessment and defect identification of railway ballast. Typical defects such as mud pumping generally exhibit characteristics in B-scan images including weak reflections, blurred boundaries, and irregular structures, which pose significant challenges for stable detection and precise localization using existing methods that rely primarily on spatial feature modeling. Most current deep learning approaches focus on modeling spatial or temporal information, while lacking effective utilization of frequency-domain features, thereby limiting their discriminative capability under complex electromagnetic environments. To address these issues, this paper proposes a single-stage object detection framework, termed YOLO-DGW, based on time-frequency collaborative modeling. Built upon YOLOv8, the proposed method introduces a structure-aware spatial enhancement module to improve the representation of continuous GPR echo structures. Meanwhile, frequency-domain information is incorporated as a modulation prior to guide spatial feature learning, enhancing the model’s sensitivity to weak reflections and complex-shaped targets. In addition, an A-CIoU loss function is designed to improve localization accuracy and stability for defect regions of varying scales. Experimental results demonstrate that YOLO-DGW achieves an F1-score of 63.06% and an AP@0.50 of 62.07%, representing improvements of approximately 7.41% and 2.8%, respectively, over the strongest baseline method. Compared with several mainstream object detection models, the proposed approach exhibits superior performance in both detection accuracy and cross-region generalization capability. These findings indicate that integrating frequency-domain information into spatial feature learning through a modulation mechanism can effectively enhance the model’s ability to discriminate weak-reflection anomalies, providing a novel time-frequency collaborative modeling paradigm for railway GPR defect detection. Full article
16 pages, 2473 KB  
Article
Incorporating Crop-Centric Segmentation and Enhanced YOLOv10 for Indirect Weed Detection in Bok Choy Fields
by Weili Li, Wenpeng Zhu, Qianyu Wang, Feng Gao, Kang Han and Xiaojun Jin
Agronomy 2026, 16(9), 907; https://doi.org/10.3390/agronomy16090907 - 30 Apr 2026
Abstract
Weed infestation poses a significant threat to bok choy (Brassica rapa subsp. chinensis) cultivation, reducing crop yield and quality through resource competition and pest facilitation. Traditional weed detection methods face two major bottlenecks: one is data annotation, arising from the need for extensive, species-diverse datasets, and the other is visual discrimination, due to the high morphological similarity between crops and weeds at certain growth stages. To address these challenges, this study proposed an indirect weed detection framework that combines an optimized You Only Look Once version 10 (YOLOv10) model for crop detection with Excess Green (ExG)-based segmentation of residual vegetation. The model incorporates RFD and C2f-WDBB modules to improve feature preservation and multi-scale fusion. Compared with baseline YOLOv10, the final proposed RCW-YOLOv10 reduced the number of parameters by 1.04 million and improved detection performance, achieving increases of 3.5, 1.5, and 1.1 percentage points in Precision, Recall, and mAP50, respectively, under field conditions. The system initially detected bok choy plants, subsequently localizing weeds by masking crop regions and thresholding residual ExG signals in the uncovered areas. The detected weed coordinates were used to construct a distribution map that may support targeted control in precision agriculture. This approach simplifies weed identification under the tested bok choy field conditions and may be adaptable to other crops after further validation. Full article
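The ExG index used above for residual-vegetation segmentation is a standard formulation, ExG = 2G − R − B on channel-normalized values. A minimal Python sketch; the 0.1 threshold is an illustrative assumption, not a value taken from the paper:

```python
def excess_green(r, g, b):
    """Excess Green index ExG = 2g - r - b on channel-normalized values."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

def vegetation_mask(pixels, threshold=0.1):
    """Mark pixels whose ExG exceeds the threshold as vegetation."""
    return [excess_green(r, g, b) > threshold for (r, g, b) in pixels]

# Green foliage scores high; brownish soil and grey background score low.
pixels = [(40, 120, 30), (110, 100, 90), (200, 200, 200)]
print(vegetation_mask(pixels))  # [True, False, False]
```

In the described pipeline this thresholding would run only on the image regions left after masking out detected bok choy boxes, so whatever green remains is attributed to weeds.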
29 pages, 23221 KB  
Article
FMSD-YOLO12: An Efficient and Lightweight Network for Surface Defect Detection of Ferrite Permanent Magnets
by Chuanyu Zhan, Haiting Yu, Ruize Wu and Junfeng Li
Electronics 2026, 15(9), 1900; https://doi.org/10.3390/electronics15091900 - 30 Apr 2026
Abstract
To address micro-break and edge-chipping defects in ferrite magnetic sheets, as well as the difficulty of balancing detection accuracy and deployment cost under complex grinding-texture interference, this paper proposes FMSD-YOLO12, an efficient and lightweight defect detection model based on YOLOv12. The proposed method follows a task-oriented design for three coupled challenges in ferrite magnetic sheet inspection, namely texture-interfered feature extraction, cross-scale feature inconsistency, and lightweight yet accurate defect localization. Specifically, a Spatially Re-weighted Convolution (SR-Conv) is introduced into the C3k2 backbone module to suppress repetitive grinding-texture noise and enhance the response contrast of subtle defect regions. A Context and Spatial Feature Calibration Network (CSFCN) is further developed to improve semantic consistency and spatial alignment during multi-scale feature fusion. In addition, a Lightweight Shared Detail-Enhanced Convolutional Detection head (LSDECD) is designed to strengthen weak-edge localization while reducing parameter redundancy through re-parameterization. Experimental results show that, with a comparable number of parameters, FMSD-YOLO12 improves mAP@50 by 2.40%, mAP@75 by 3.71%, and mAP@50-95 by 3.03% on the magnetic sheet defect dataset. These results demonstrate that the proposed model achieves a favorable balance between detection accuracy and computational efficiency for irregular defect detection under complex industrial backgrounds. Full article
(This article belongs to the Section Artificial Intelligence)
32 pages, 91311 KB  
Article
From Geometric Exploration to Semantic Completion: Scene Exploration Convolution and Large Format Perception for Adverse-Weather UAV Aerial Object Detection
by Yize Zhao, Bo Wang and Jialei Zhan
Sensors 2026, 26(9), 2802; https://doi.org/10.3390/s26092802 - 30 Apr 2026
Abstract
Object detection from unmanned aerial vehicle (UAV) imagery is essential for applications such as traffic monitoring, disaster response, and urban surveillance, yet most existing methods are developed and evaluated under clear-sky conditions. In real-world UAV operations, adverse weather including fog, rain, and snow introduces severe image degradation that simultaneously disrupts both the geometric and photometric properties of targets. This paper identifies two fundamental bottlenecks underlying this performance collapse: the lack of geometric invariance in standard convolutional operators and the inability of fixed receptive fields to reconstruct features corrupted by atmospheric interference. To address these bottlenecks, we propose SELPNet (Scene Exploration and Large Format Perception Network), a unified framework that integrates geometric alignment and multi-scale contextual perception into the YOLOv13 head. SELPNet consists of two key modules: (1) The Scene Exploration Convolution (SEC) leverages affine Lie group theory to construct a discrete manifold of rotation and scale transformations, actively probing multiple geometric views and selecting the most coherent response via a Maxout mechanism. (2) The Large Format Perception Module (LPM) introduces a dynamic dilation strategy with depthwise separable convolutions, progressively enlarging the receptive field from fine-grained edge preservation to scene-level contextual perception for semantic completion of degraded regions. We further construct and release AWU-OBB, a large-scale benchmark containing over 18,000 oriented bounding box-annotated UAV images across four representative scene categories. Ablation experiments demonstrate that SEC and LPM yield complementary gains, achieving a combined improvement of +4.26% mAP50 over the YOLOv13-n baseline with only 0.11 M additional parameters and 0.2 extra GFLOPs. The source code will be publicly released upon acceptance of this paper. Full article
(This article belongs to the Section Intelligent Sensors)
16 pages, 13549 KB  
Article
YOLO-ALD: An Efficient and Robust Lightweight Model for Apple Leaf Disease Detection in Complex Orchard Environments
by Lei Liu, Yinyin Li, Qingyu Liu, Huihui Sun, Yeguo Sun and Xiaobo Shen
Horticulturae 2026, 12(5), 550; https://doi.org/10.3390/horticulturae12050550 - 30 Apr 2026
Abstract
Real-time detection of apple leaf diseases in orchard environments faces ongoing challenges, particularly in preserving fine-grained disease features with limited computing resources. To address these issues, we propose a high-precision lightweight model based on YOLOv10n, called YOLO-ALD. First, we introduce Spatial and Channel Reconstruction Convolution into deeper backbone networks to replace standard downsampling layers and convolutions. This suppresses spatial and channel redundancy caused by environmental noise and optimizes feature representation. Second, we design a new C2f-Faster-SimAM module for the neck network. This module combines the inference efficiency of FasterNet with a parameter-free 3D attention mechanism to adaptively focus on early lesions, effectively distinguishing them from leaf veins without increasing model complexity. Third, in the detection head section, we use the Focaler-ShapeIoU loss function to optimize bounding box regression. It utilizes a dynamic focusing mechanism and geometric constraints to ensure the localization accuracy of irregular shapes and hard-to-detect samples. Experimental results on our self-built dataset covering four specific diseases and healthy leaves showed that, compared with YOLOv10n, the mAP@0.5 of YOLO-ALD reached 92.1%, achieving a 2.1% increase. In addition, the model has an inference speed of 105 FPS, with only 2.1 M parameters and 5.6 GFLOPs. Therefore, YOLO-ALD achieves a good balance between efficiency and robustness, showing strong theoretical potential for resource-constrained mobile agriculture diagnosis. Full article
(This article belongs to the Special Issue Emerging Technologies in Smart Agriculture)
24 pages, 4665 KB  
Article
Human Fall Detection with Infrared Imaging: A Comparison of Graph Convolutional Networks and YOLO
by Karol Perliński, Artur Faltyński and Aleksandra Świetlicka
Sensors 2026, 26(9), 2794; https://doi.org/10.3390/s26092794 - 30 Apr 2026
Abstract
This paper presents a comparative study of two artificial intelligence approaches—graph convolutional networks (GCNs) and the YOLO object detection algorithm—for analyzing human fall events using infrared imaging. From the AI perspective, the study introduces a GCN model that achieves over 99% classification accuracy by modeling 2D and 3D skeletal data as graph structures and evaluates the real-time detection capabilities of YOLOv8 on infrared video frames. On the engineering side, the research addresses practical challenges in elderly care and healthcare monitoring systems by demonstrating how these AI methods can accurately detect and classify fall directions under infrared conditions. The results highlight each model’s strengths and propose a hybrid framework combining YOLO’s spatial localization with GCN’s motion-pattern analysis for future real-world applications. Full article
(This article belongs to the Section Sensing and Imaging)
20 pages, 12707 KB  
Article
SWUAV-DANet: A Severe-Weather UAV Dataset and Dynamic AlignAir Network for Robust Aerial Vehicle Detection
by Longze Zhang and Yihong Li
Sensors 2026, 26(9), 2793; https://doi.org/10.3390/s26092793 - 30 Apr 2026
Abstract
Unmanned aerial vehicle (UAV) aerial object detection is increasingly important for traffic monitoring, emergency rescue, and environmental perception. However, vehicle detection in heavy rain, dense fog, blizzards, and backlit night scenes suffers from target information loss, feature misalignment, and unstable performance. We, therefore, construct a new severe-weather UAV dataset, Severe-Weather UAV (SWUAV), and propose the real-time Dynamic AlignAir Network (DANet). SWUAV contains 18,195 red–green–blue (RGB) aerial images covering 12 adverse weather/illumination conditions with 236,392 vehicle instances. After the high-resolution backbone features, we insert a cross-scale adaptive alignment module that performs adaptive channel calibration, contrastive self-attention, and geometric/semantic remapping to reduce scale drift/mismatch, suppress noise, and strengthen degraded target cues; we then design a dynamic adaptive alignment head (DAAH) with a shared encoder and a deformable regression branch to mitigate classification–regression mismatch under adverse conditions while further reducing complexity. On SWUAV, DANet raises the YOLOv11-s baseline average precision (AP)/AP50 (AP at intersection over union, IoU = 0.50) from 43.9%/62.6% to 46.9%/64.8%, with only 8.65 M parameters, 22.7 giga floating-point operations (GFLOPs), and a 323.47 frames-per-second (FPS) end-to-end throughput (3.09 ms per image at batch size 16), outperforming EdgeYOLO-s and RT-DETR. The dataset and code are publicly available. Full article
(This article belongs to the Section Vehicular Sensing)
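The AP50 criterion quoted above counts a detection as correct when its intersection over union with a ground-truth box reaches 0.50. A self-contained sketch of the IoU computation for axis-aligned `(x1, y1, x2, y2)` boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by half their width overlap by a third.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...: below the 0.50 cutoff
```

AP then averages precision over recall levels at this threshold; the plain AP figure additionally averages over IoU thresholds from 0.50 to 0.95.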
18 pages, 2135 KB  
Article
A Non-Destructive Early Sex Identification Method for Chicken Embryos Based on Improved MobileViT-V3
by Qian Yan, Chengyu Yu, Zhoushi Tan, Zesheng Wang and Qiaohua Wang
Animals 2026, 16(9), 1377; https://doi.org/10.3390/ani16091377 - 30 Apr 2026
Abstract
The global poultry hatching industry faces severe challenges of resource waste and animal ethics issues due to the routine culling of day-old male chicks. Meanwhile, early sex identification of 4-day-incubated chicken embryos is limited by low accuracy, as embryos at this stage have weak, low-contrast blood vessels that are highly susceptible to interference from the eggshell’s texture. To address these issues, this paper proposes a non-destructive early sex identification method for chicken embryos based on an improved MobileViT-V3 model. Taking the lightweight hybrid architecture MobileViT-V3 as the backbone, we embedded a Micro Feature Enhancement module (MFE-Module) in Stage 3 to strengthen the extraction of fine vascular details, and a Multi-Scale Adaptive Attention Fusion module (MSAAF-Module) in Stage 4 to realize adaptive weighted screening of multi-source features. Experiments on the self-constructed dataset of 4-day-incubated embryos show that the improved model achieves a test set classification accuracy of 92.26%, with an F1-score of 92.15%, a recall rate of 92.12%, and a Kappa coefficient of 0.845. It outperforms mainstream models such as YOLOv12, ShuffleNetV2, ConvNeXt-T, ResNet, and Swin-ViT, with only 2.98 M parameters and an inference speed of 97.6 FPS, well exceeding the 30 FPS real-time requirement of industrial sorting lines and showing high potential for practical industrial deployment. This method provides a new scheme for non-destructive, high-precision, and high-efficiency early sex identification in poultry hatching. Full article
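The Kappa coefficient reported above measures classification agreement corrected for chance. A minimal sketch of Cohen's kappa for a binary confusion matrix; the counts below are illustrative, not the paper's data:

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from binary confusion-matrix counts."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                            # observed agreement
    p_pos = ((tp + fp) / n) * ((tp + fn) / n)     # chance agreement on positives
    p_neg = ((fn + tn) / n) * ((fp + tn) / n)     # chance agreement on negatives
    pe = p_pos + p_neg
    return (po - pe) / (1 - pe)

# Illustrative: 92% accuracy on a balanced male/female split.
print(round(cohens_kappa(tp=46, fp=4, fn=4, tn=46), 3))  # 0.84
```

On a balanced two-class problem chance agreement is 0.5, so 92% raw accuracy corresponds to a kappa of 0.84, close to the 0.845 reported.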
14 pages, 3627 KB  
Article
Efficient YOLOv11 with a FasterNet Backbone and Attention for Multi-Class Underwater Object Detection in Nearshore Waters
by Yinghao He, Wenjie Yin, Ruomiao Song, Siyi Zhou, Shimin Shan and Shuo Liu
J. Mar. Sci. Eng. 2026, 14(9), 827; https://doi.org/10.3390/jmse14090827 - 29 Apr 2026
Abstract
Underwater multi-class object detection in nearshore waters is essential for intelligent cleaning operations and ecological monitoring. However, strong reflection and scattering interference, color attenuation, frequent occlusion, and non-rigid deformation often cause fine-grained information loss and feature misalignment in conventional detectors, leading to missed and false detections. To address these challenges, we propose an enhanced YOLOv11 framework integrating FasterNet and attention mechanisms. Specifically, we replace the YOLOv11 baseline backbone with FasterNet to improve fine-grained feature preservation while reducing computational redundancy. Furthermore, a Deformable Underwater Attention Module (DUAM) is introduced to capture local texture variations and deformation-aware features, enhancing discrimination among heterogeneous categories. Additionally, a Submerged Occlusion-Aware Head (SOAH) is designed to recalibrate features based on occlusion visibility, improving the detection of small-scale and partially occluded objects in the high-resolution P2 layer. Performance gains mainly stem from the recalibration strategy and its synergy with multi-scale optimization objectives. Experiments on a nearshore underwater multi-class dataset (8610 images across 40 classes) show that the proposed method increases mAP from 66.9% to 82.3%, achieving a 15.4-point improvement over baseline YOLOv11, with superior robustness under complex backgrounds. Full article
(This article belongs to the Special Issue Assessment and Monitoring of Coastal Water Quality)
34 pages, 36077 KB  
Article
Modular Multi-Attribute Vehicle Analysis by Color, License Plate, Make and Sub-Model Using YOLO and OCR: A Benchmark Across YOLO Versions
by Cristian Japhet Islas-Yañez, Viridiana Hernández-Herrera and Moisés Márquez-Olivera
Sensors 2026, 26(9), 2785; https://doi.org/10.3390/s26092785 - 29 Apr 2026
Abstract
We present a modular multi-attribute vehicle analysis pipeline that integrates YOLO-based models and an OCR engine into a single workflow. The system detects vehicles, classifies color, recognizes make and sub-model, detects license plates, and extracts plate characters to generate a structured vehicle record. Vehicle detection is reported with standard metrics (precision, recall, and mAP@0.5), while license plate detection is reported at IoU = 0.3 to reflect the small-object nature of plates and downstream OCR usability. Among the evaluated versions, YOLOv8 provides the most balanced overall performance across modules, while maintaining real-time-equivalent throughput of approximately 18–22 FPS for the full pipeline on recorded traffic videos, depending on scene complexity. We emphasize module-level evaluation and runtime benchmarking; instance-level end-to-end identification across unique vehicles is defined as future work once track-based ground truth becomes available. Full article
(This article belongs to the Topic Deep Visual Recognition: Methods, and Applications)
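Reporting plate detection at a relaxed IoU = 0.3 follows the usual protocol of matching confidence-ranked detections to ground truth at a chosen overlap threshold. A simplified single-image, single-class sketch; the greedy matching strategy here is a common convention, not necessarily the authors' exact matcher:

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_detections(dets, gts, thr=0.3):
    """Greedily match (confidence, box) detections to ground-truth boxes
    at IoU >= thr; each ground truth may be matched at most once.
    Returns (true positives, false positives, false negatives)."""
    unmatched = list(gts)
    tp = 0
    for _, box in sorted(dets, key=lambda d: d[0], reverse=True):
        best = max(unmatched, key=lambda g: iou(box, g), default=None)
        if best is not None and iou(box, best) >= thr:
            unmatched.remove(best)
            tp += 1
    return tp, len(dets) - tp, len(unmatched)

dets = [(0.9, (0, 0, 10, 10)), (0.8, (20, 20, 30, 30)), (0.3, (100, 100, 110, 110))]
gts = [(1, 1, 11, 11), (21, 21, 31, 31)]
print(match_detections(dets, gts, thr=0.3))  # (2, 1, 0)
```

Precision and recall then follow directly as TP/(TP+FP) and TP/(TP+FN); lowering the threshold to 0.3 tolerates the loose localization typical of small plate boxes while still feeding usable crops to the OCR stage.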
26 pages, 54080 KB  
Article
MPES-YOLO: A Multi-Scale Lightweight Framework with Selective Edge Enhancement for Loess Landslide Detection
by Hanyu Cheng, Jiali Su, Jiangbo Xi, Haixing Shang, Zhen Zhang, Bingkun Wang and Pan Li
Remote Sens. 2026, 18(9), 1374; https://doi.org/10.3390/rs18091374 - 29 Apr 2026
Abstract
Loess landslides in northwestern China are highly unstable and difficult to distinguish due to sparse vegetation and their spectral and morphological similarity to the surrounding terrain. These landslides demonstrate considerable diversity in manifestation, encompassing shallow translational slides, small-scale features, partially obscured formations, and instances with irregular or poorly defined boundaries. To address the above issues, we propose MPES-YOLO, a multi-scale lightweight YOLO-based framework with selective edge enhancement to detect loess landslides. This model is based on the YOLOv8 architecture and incorporates a multi-scale partial convolution and exponential moving average (MPCE) module to improve multi-scale feature representation while reducing computational cost and enhancing small-target sensitivity. Additionally, to address ambiguous boundaries, a selective edge enhancement (SEE) module is introduced to extract authentic object edges from original images and inject them into key training layers, improving boundary perception. Finally, SIoU is adopted to improve geometric consistency for irregular landslide boundary localization. This paper first verified the basic detection performance of MPES-YOLO on the publicly available Bijie landslide dataset. Then, an experimental study was conducted in the loess landslides of Yan’an City, Shaanxi Province. The mAP@0.5 was 91.9%, and the parameter quantity was reduced by 23.3% compared with the baseline model. A generalization experiment was also carried out on the landslides in the Ningxia region, with the mAP@0.5 being 97.4%. The results show that MPES-YOLO achieves a strong balance between detection accuracy and computational efficiency, providing an effective and scalable solution for automated loess landslide detection and geological disaster early warning. Full article
20 pages, 5162 KB  
Article
Toward Intelligent Emergency Triage: A Feasibility Study of Real-Time Facial Expression-Based Chest Pain Intensity Assessment
by Yu-Tse Tsan, Rita Wiryasaputra, Yi-Jun Hsieh, Qi-Xiang Zhang, Hsing-Hung Liu and Chao-Tung Yang
Diagnostics 2026, 16(9), 1346; https://doi.org/10.3390/diagnostics16091346 - 29 Apr 2026
Abstract
Objectives: Ensuring an effective triage to treat patients with chest pain in emergency settings is critical, but it can often be challenging, particularly when patients wear face masks or are unable to clearly communicate their pain. To address this limitation, this study presents a real-time facial expression–based system for chest pain intensity assessment as an initial step toward realizing intelligent emergency triage. The proposed system integrates deep learning with real-time video analysis to provide objective and rapid pain level recognition. Methods: A YOLOv12-based facial expression recognition model was trained using annotated facial images of patients experiencing chest pain, and the model categorizes pain into three intensity levels: no pain, slight pain, and moderate to severe pain. Multiple YOLOv12 variants were systematically evaluated to identify an optimal configuration for potential clinical use. The developed system supports two operational modes: real-time recognition, which analyzes continuous video streams and delivers immediate visual feedback through an interactive interface, and a manual upload mode for offline video analysis, review of results, and playback. Additional usability features, including error prompts and data reset functions, were implemented to enhance system stability and user experience. Results: Among the evaluated models, the YOLOv12-L model achieved the best performance with an accuracy of 98.81%, sensitivity of 98.76%, specificity of 98.79%, precision of 98.04%, and an F1-score of 98.41%, demonstrating stable and accurate recognition. The proposed system is designed to support the triage process of assessing patients with chest pain, particularly in cases where patients wear masks or cannot clearly express their pain. By providing real-time and objective pain intensity assessment, the system shows potential to assist healthcare professionals in identifying patients who may require priority attention and to serve as a supportive tool for emergency triage workflows. Conclusions: Future work will incorporate edge computing with a lightweight model to enable real-time pain assessment in ambulances, facilitating faster intervention and treatment. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
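As a quick sanity check on the reported metrics above, the F1-score is the harmonic mean of precision and recall (sensitivity). Recomputing from the reported 98.04% precision and 98.76% sensitivity gives approximately 98.40%, consistent with the reported 98.41% once rounding of the input figures is taken into account:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported YOLOv12-L figures: precision 98.04%, sensitivity (recall) 98.76%
f1 = f1_score(0.9804, 0.9876) * 100  # ~98.40, matching the reported 98.41 up to rounding
```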

26 pages, 4074 KB  
Article
Early Diagnosis of Blood Disorders via Enhanced Image Preprocessing and Deep Learning Modeling
by Alpamis Kutlimuratov, Dilshod Eshmurodov, Fotima Tulaganova, Akhmet Utegenov, Piratdin Allayarov, Jamshid Khamzaev, Islambek Saymanov and Fazliddin Makhmudov
BioMedInformatics 2026, 6(3), 25; https://doi.org/10.3390/biomedinformatics6030025 - 29 Apr 2026
Abstract
Background: Accurate and early detection of hematological disorders from microscopic peripheral blood smear images remains a technically challenging task due to inherent imaging limitations, including noise contamination, low contrast, staining variability, and significant cellular overlap. Conventional deep learning-based object detection frameworks often exhibit limited robustness under such conditions and demonstrate reduced sensitivity to small-scale morphological structures, particularly platelets and abnormal cell variants. Methods: To address these challenges, this study proposes a hybrid detection framework that integrates a fuzzy logic-driven image preprocessing module with the YOLOv11 object detection architecture. The proposed preprocessing pipeline employs adaptive fuzzy membership functions to normalize pixel intensity distributions, suppress high-frequency noise, and enhance edge-defined cellular boundaries. This transformation produces a structurally optimized feature representation, improving downstream feature extraction and localization performance. The proposed framework was evaluated on a curated dataset of 3000 annotated microscopic blood smear images spanning five hematological classes. Results: Experimental results show that the fuzzy logic module improves mAP@0.5 by +3.4% and mAP@0.5:0.95 by +3.6%, confirming its effectiveness in enhancing both classification and localization accuracy. Conclusions: These findings demonstrate the robustness and practical applicability of the proposed hybrid approach under challenging imaging conditions. Full article
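The fuzzy preprocessing step described above maps pixel intensities to membership values and back, stretching contrast around a crossover point. The paper's exact adaptive membership functions are not given here; the following is a minimal, hypothetical sketch using a sigmoid membership function centered on the mean intensity, with `fuzzifier` as an illustrative steepness parameter:

```python
import math

def fuzzy_enhance(pixels, fuzzifier=20.0):
    """Sketch of fuzzy membership-based contrast enhancement.

    Maps 8-bit intensities to [0, 1], computes a sigmoid membership
    centered on the mean intensity, then rescales memberships back to
    the full 8-bit range. Not the paper's exact pipeline.
    """
    xs = [p / 255.0 for p in pixels]
    c = sum(xs) / len(xs)  # crossover point: mean normalized intensity
    mu = [1.0 / (1.0 + math.exp(-fuzzifier * (x - c))) for x in xs]
    lo, hi = min(mu), max(mu)
    return [round((m - lo) / (hi - lo + 1e-9) * 255) for m in mu]
```

On a low-contrast input (e.g., intensities confined to 100–150), the sigmoid stretches the occupied range toward the full 0–255 span while preserving the intensity ordering, which is the property the preprocessing module relies on before detection.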
