Search Results (3,103)

Search Parameters:
Keywords = background noise

34 pages, 5548 KB  
Article
Impact of Simulated Artifacts on the Classification Performance of Apical Views in Transthoracic Echocardiography Using Convolutional Neural Networks
by Gabriela Bernadeta Orzeł-Łomozik, Łukasz Łomozik, Maciej Podolski, Martyna Rożek, Kalina Światlak, Weronika Radwan, Zuzanna Przybylska, Paulina Michalska, Maciej Pruski and Katarzyna Mizia-Stec
Bioengineering 2026, 13(5), 522; https://doi.org/10.3390/bioengineering13050522 - 30 Apr 2026
Abstract
Background: In recent years, artificial intelligence (AI) methods, including deep convolutional neural networks (CNNs), have gained increasing importance in supporting the automated analysis of echocardiograms. The aim of this study was to evaluate the impact of selected image artifacts—motion blur, acoustic shadowing, and speckle noise—on the performance of automatic classification of standard transthoracic echocardiographic (TTE) views using deep learning models. Methods: The analysis included 217 TTE video clips (2170 frames) covering apical views: two-chamber (A2C), three-chamber (A3C), four-chamber (A4C), and five-chamber (A5C). Two convolutional neural network architectures—ResNet-18 and ResNet-34—were applied, initialized with weights pretrained on the ImageNet dataset (transfer learning). In a limited comparative scope, EfficientNet-B0, a ViT model used as a frozen feature extractor combined with Logistic Regression, and a classical HOG + SVM model, were also included as reference methods. Classification performance was evaluated under conditions of controlled image degradation caused by motion blur, acoustic shadowing, and speckle noise. Results: All analyzed artifacts reduced classification performance, although the magnitude of this effect depended on artifact type. Speckle noise proved to be the most destructive, causing performance collapse across all evaluated methods at high severity. Motion blur and acoustic shadowing produced more differentiated degradation profiles. The ResNet models achieved the highest performance under reference conditions; however, after degradation, the ranking of models was no longer stable. In the comparative analysis, HOG + SVM showed the smallest relative performance loss under motion blur and the highest balanced accuracy under severe acoustic shadowing, whereas severe speckle remained critical for all models. 
Conclusions: Image quality degradation significantly impairs TTE view classification performance, and evaluation based solely on reference-quality images does not fully reflect model robustness to artifacts. These findings indicate the need to complement standard model evaluation with a structured robustness analysis under degraded imaging conditions and highlight the importance of training and validation settings that better reflect real clinical practice. Full article
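The controlled degradations evaluated above (speckle noise and motion blur) can be injected into grayscale frames with a few lines of NumPy and SciPy. This is a generic sketch of such artifact simulation, not the paper's actual degradation pipeline; the function names and severity parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def add_speckle(img, sigma=0.3, rng=None):
    """Multiplicative speckle: each pixel is scaled by a random gain
    drawn around 1.0, mimicking coherent ultrasound noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.clip(img * rng.normal(1.0, sigma, img.shape), 0.0, 1.0)

def motion_blur(img, length=9, horizontal=True):
    """Linear motion blur: convolve with a normalized line kernel."""
    k = np.zeros((length, length))
    if horizontal:
        k[length // 2, :] = 1.0 / length
    else:
        k[:, length // 2] = 1.0 / length
    return convolve(img, k, mode="reflect")

frame = np.random.default_rng(1).random((64, 64))  # stand-in for a TTE frame
degraded = motion_blur(add_speckle(frame), length=7)
```

Sweeping `sigma` and `length` over a severity grid and re-running the classifier at each step gives the kind of robustness profile the study reports.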

16 pages, 2409 KB  
Article
Unsupervised Reference Modeling of Nanopore Signals for DNA/RNA Modification Detection
by Yongji Zou, Mian Umair Ahsan and Kai Wang
Genes 2026, 17(5), 525; https://doi.org/10.3390/genes17050525 - 29 Apr 2026
Abstract
Background: Nanopore sequencing produces ionic current signals that are sensitive to chemical modifications in DNA and RNA molecules. However, accurate modification detection remains challenging due to limited labeled data and variability across experimental conditions. Methods: We present a scalable unsupervised framework for modification discovery that learns reference signal distributions from unmodified sequences using a CNN–Transformer variational autoencoder (VAE). The model is trained on large-scale data via streaming sampling and k-mer-aware soft balancing to ensure robust signal representation. At inference, candidate nucleotides are scored using the VAE reconstruction error, and read-level signals are aggregated to produce site-level modification evidence. Results: On controlled DNA oligonucleotide datasets, models trained on unmodified sequences achieve strong discrimination when evaluated on modified oligos. In contrast, performance decreases in cell line samples when models trained on unmodified whole-genome-amplified (WGA) DNA and in vitro-transcribed (IVT) RNA are evaluated on natively modified (5mC/m6A) data, reflecting the impacts of biological noise and heterogeneity. Despite reduced classification accuracy, site-level anomaly score profiles exhibit peak-like patterns that correspond to known modification-enriched regions. Conclusions: These findings demonstrate the feasibility of large-scale unsupervised reference modeling for de novo modification detection, while underscoring the challenges in translating models built from synthetic oligo datasets into robust genome-wide modification detection. Full article
(This article belongs to the Section Bioinformatics)
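The inference step described above, scoring reads by reconstruction error and aggregating to site level, can be sketched as follows. The `reconstruct` callable here is a stand-in smoother, not the paper's CNN–Transformer VAE, and all names are illustrative:

```python
import numpy as np

def reconstruction_error(signals, reconstruct):
    """Read-level anomaly score: mean squared error between each signal
    window and its reconstruction under the reference model."""
    return np.array([np.mean((s - reconstruct(s)) ** 2) for s in signals])

def site_level_evidence(read_scores, site_ids):
    """Aggregate read-level scores into per-site modification evidence."""
    return {int(s): float(read_scores[site_ids == s].mean())
            for s in np.unique(site_ids)}

# Stand-in "reference model": a smoother that reproduces broad signal
# shape, so deviations from typical current levels score high.
reconstruct = lambda s: np.convolve(s, np.ones(5) / 5.0, mode="same")

rng = np.random.default_rng(0)
signals = [rng.normal(0.0, 1.0, 100) for _ in range(6)]   # 6 reads
site_ids = np.array([0, 0, 0, 1, 1, 1])                   # 2 candidate sites
scores = reconstruction_error(signals, reconstruct)
evidence = site_level_evidence(scores, site_ids)
```

With a model trained only on unmodified signals, modified nucleotides reconstruct poorly and accumulate elevated site-level evidence, which is the peak-like pattern the abstract describes.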

26 pages, 4074 KB  
Article
Early Diagnosis of Blood Disorders via Enhanced Image Preprocessing and Deep Learning Modeling
by Alpamis Kutlimuratov, Dilshod Eshmurodov, Fotima Tulaganova, Akhmet Utegenov, Piratdin Allayarov, Jamshid Khamzaev, Islambek Saymanov and Fazliddin Makhmudov
BioMedInformatics 2026, 6(3), 25; https://doi.org/10.3390/biomedinformatics6030025 - 29 Apr 2026
Abstract
Background: Accurate and early detection of hematological disorders from microscopic peripheral blood smear images remains a technically challenging task due to inherent imaging limitations, including noise contamination, low contrast, staining variability, and significant cellular overlap. Conventional deep learning-based object detection frameworks often exhibit limited robustness under such conditions and demonstrate reduced sensitivity to small-scale morphological structures, particularly platelets and abnormal cell variants. Methods: To address these challenges, this study proposes a hybrid detection framework that integrates a fuzzy logic-driven image preprocessing module with the YOLOv11 object detection architecture. The proposed preprocessing pipeline employs adaptive fuzzy membership functions to normalize pixel intensity distributions, suppress high-frequency noise, and enhance edge-defined cellular boundaries. This transformation produces a structurally optimized feature representation, improving downstream feature extraction and localization performance. The proposed framework was evaluated on a curated dataset of 3000 annotated microscopic blood smear images spanning five hematological classes. Results: Experimental results show that the fuzzy logic module improves mAP@0.5 by +3.4% and mAP@0.5:0.95 by +3.6%, confirming its effectiveness in enhancing both classification and localization accuracy. Conclusions: These findings demonstrate the robustness and practical applicability of the proposed hybrid approach under challenging imaging conditions. Full article
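The preprocessing idea above, adaptive membership functions that normalize intensity and enhance boundaries, can be illustrated with a simple sigmoidal membership transform. This is a minimal sketch of the general fuzzy-normalization idea, not the paper's actual module; the function name and the percentile-based adaptation are assumptions:

```python
import numpy as np

def fuzzy_normalize(img, k=8.0):
    """Illustrative fuzzy-style preprocessing: push each pixel through a
    sigmoidal membership function centered on the image's own intensity
    spread, stretching mid-range contrast and damping extremes."""
    low, high = np.percentile(img, [5.0, 95.0])
    mid, width = (low + high) / 2.0, max(high - low, 1e-6)
    return 1.0 / (1.0 + np.exp(-k * (img - mid) / width))

rng = np.random.default_rng(0)
smear = rng.normal(0.5, 0.1, (64, 64)).clip(0.0, 1.0)  # low-contrast stand-in
enhanced = fuzzy_normalize(smear)
```

Because the membership function adapts to each image's own percentiles, staining variability between smears is partially normalized before the detector ever sees the data.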

11 pages, 1145 KB  
Article
Evaluation of Posture-Dependent Signal Intensity and Contrast Alterations in Low-Field Brain Magnetic Resonance Imaging
by Chang-Soo Yun, Changheun Oh, Kyuseok Kim, Seong-Hyeon Kang, Hajin Kim, Youngjin Lee, Jun-Young Chung and Gun Choi
Diagnostics 2026, 16(9), 1333; https://doi.org/10.3390/diagnostics16091333 - 29 Apr 2026
Abstract
Background/Objectives: Most brain magnetic resonance imaging (MRI) is performed in the supine position, although posture may influence cerebrovascular signal characteristics through gravity-related physiological changes. However, posture-dependent vascular signal alterations on low-field MRI have not been sufficiently quantified. This study aimed to quantify posture-related internal carotid artery (ICA) signal alterations using low-field MRI by comparing seated and supine images with intensity-, noise-, and texture-based metrics. Methods: Nine healthy adults (20–69 years old; one female) underwent 0.25 T tilting MRI in supine and seated postures. 3D gradient echo T1-weighted images were obtained. The bilateral ICA regions of interest (ROI) and adjacent reference ROI were manually delineated. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), signal intensity ratio (SIR), and gray-level co-occurrence matrix (GLCM) texture features (contrast, correlation, energy, and homogeneity) were extracted and compared between postures using Wilcoxon signed-rank tests. Results: Seated posture produced significantly higher ICA signal intensity metrics than the supine posture, with increased SNR (median 17.11 vs. 13.48), CNR (median 21.94 vs. 18.36), and SIR (median 10.84 vs. 9.54) (p = 0.004). GLCM texture analysis demonstrated a significant decrease in contrast in the seated position (median 62.01 vs. 145.92; p = 0.004), whereas correlation, energy, and homogeneity showed no significant between-posture differences. Conclusions: Low-field MRI was sensitive to posture-dependent ICA signal alterations. ICA-based metrics may provide quantitative markers of gravity-related cerebrovascular adaptation. Full article
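The GLCM contrast metric and the paired Wilcoxon comparison used above can be sketched compactly. This hand-rolled GLCM (horizontal neighbor offset, symmetric, normalized) is only illustrative; the synthetic paired values stand in for the nine subjects' ROI measurements:

```python
import numpy as np
from scipy.stats import wilcoxon

def glcm_contrast(img, levels=8):
    """GLCM contrast for horizontal neighbor pairs (offset (0, 1)),
    symmetric and normalized: sum over i, j of P(i, j) * (i - j)**2."""
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1.0
        glcm[j, i] += 1.0  # symmetric pairing
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    return float((glcm * (ii - jj) ** 2).sum())

# Paired per-subject contrast values for the two postures (synthetic
# stand-ins), compared with a paired Wilcoxon signed-rank test.
rng = np.random.default_rng(0)
seated = [glcm_contrast(rng.random((32, 32)) * 0.5) for _ in range(9)]
supine = [glcm_contrast(rng.random((32, 32))) for _ in range(9)]
stat, p = wilcoxon(seated, supine)
```

A uniform patch gives zero contrast by construction, which is a quick sanity check on the implementation.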

20 pages, 13767 KB  
Article
Geothermal Resource Exploration Using Multi-Temporal Infrared Remote Sensing Data Based on Annual Temperature Variation Model
by Meihua Wei, Guangzheng Jiang, Luyu Zou, Xiaoyi Wen and Zhenyu Li
Remote Sens. 2026, 18(9), 1362; https://doi.org/10.3390/rs18091362 - 28 Apr 2026
Abstract
Thermal infrared remote sensing offers a cost-effective means of regional geothermal reconnaissance, yet a fundamental challenge remains: isolating the weak geothermal surface signal (typically 1–3 °C) from dominant surface noise introduced by seasonal temperature cycles (annual amplitude > 20 °C), topographic variability, land cover heterogeneity, and irregular cloud-affected satellite sampling. Conventional single-scene or arithmetic-mean approaches are highly susceptible to these confounding factors and frequently produce pseudo-anomalies that obscure genuine geothermal targets. To overcome this limitation, we propose a physics-based time-series framework in which a nonlinear annual temperature variation model, T(t) = T0 + A·sin(2πt/τ + φ), is fitted to multi-temporal Landsat 8 thermal infrared data via the Levenberg–Marquardt algorithm. Applied to ~50 cloud-free scenes (2021–2022) processed on the Google Earth Engine over the Shanxi Graben System, northern China, the model simultaneously retrieves the background temperature parameter T0 and seasonal amplitude A—two physically interpretable quantities that encode distinct geothermal signatures more robustly than simple temporal statistics. Sub-regional corrections for the elevation (−4 °C/100 m above 800 m), aspect (R2 > 0.95 in piecewise linear segments), and slope further suppress topographic pseudo-anomalies prior to anomaly extraction. Over known high-temperature geothermal fields (Tianzhen and Yanggao; >100 °C at 100 m depth), the method reveals clear T0 offsets of +1–2 °C (3–5% relative) and amplitude deficits of ~2 K (5–10% relative) relative to the background, with model-fitted T0 values averaging ~2 °C higher than arithmetic means due to the correction for seasonal sampling bias. Combined with 5 km fault-proximity buffers, extracted anomaly zones align well spatially with known geothermal sites and major structural corridors of the graben system. 
However, deeper low-temperature systems (45–50 °C at 300–500 m depth) produce ambiguous signals below the ~1.5 K detection threshold, indicating inherent limitations for deeply buried resources. The fully reproducible, training-data-free workflow is implementable via open satellite archives and cloud computing platforms, making it a transferable low-cost tool for structurally controlled geothermal reconnaissance across extensional basins worldwide. Full article
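The per-pixel fit of T(t) = T0 + A·sin(2πt/τ + φ) can be reproduced with SciPy, whose `curve_fit` defaults to the Levenberg–Marquardt algorithm for unbounded problems. This is a minimal single-pixel sketch on synthetic, irregularly sampled data, not the paper's Google Earth Engine workflow:

```python
import numpy as np
from scipy.optimize import curve_fit

def annual_model(t, T0, A, phi, tau=365.0):
    """Annual temperature cycle T(t) = T0 + A*sin(2*pi*t/tau + phi):
    T0 is the background temperature, A the seasonal amplitude,
    phi the phase, tau the period in days."""
    return T0 + A * np.sin(2.0 * np.pi * t / tau + phi)

# Synthetic land-surface-temperature series with irregular, cloud-gapped
# sampling over two years (stand-in for ~50 cloud-free Landsat 8 scenes).
rng = np.random.default_rng(0)
t = np.sort(rng.choice(np.arange(730), size=50, replace=False)).astype(float)
y = annual_model(t, T0=15.0, A=12.0, phi=0.3) + rng.normal(0.0, 0.5, t.size)

# curve_fit uses Levenberg-Marquardt by default for unbounded problems.
(T0_hat, A_hat, phi_hat), _ = curve_fit(
    lambda t, T0, A, phi: annual_model(t, T0, A, phi), t, y, p0=[10.0, 5.0, 0.0]
)
```

Because T0 is estimated jointly with the seasonal term, it is insensitive to how the cloud-free scenes happen to cluster within the year, which is exactly the seasonal sampling bias that makes a plain arithmetic mean unreliable.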

24 pages, 29548 KB  
Article
DEMC: A Diffusion-Enhanced Mutual Consistency Framework for Cross-Domain Object Detection in Optical and SAR Imagery
by Cheng Luo, Yueting Zhang, Jiayi Guo, Guangyao Zhou, Hongjian You, Peifeng Li and Xia Ning
Remote Sens. 2026, 18(9), 1358; https://doi.org/10.3390/rs18091358 - 28 Apr 2026
Abstract
Cross-domain object detection from optical to Synthetic Aperture Radar (SAR) imagery addresses the challenges of SAR data scarcity and high annotation costs, enabling crucial capabilities for persistent maritime surveillance and reconnaissance. However, the substantial modality gap resulting from distinct imaging mechanisms and severe coherent speckle noise significantly hampers knowledge transfer. Existing Unsupervised Domain Adaptation (UDA) methods, which primarily rely on adversarial feature alignment or static pseudo-labeling, struggle to replicate the physical backscattering properties of SAR data and often fall prey to confirmation bias due to intense background clutter. To overcome these limitations, this paper introduces the Diffusion-Enhanced Mutual Consistency (DEMC) framework. DEMC introduces a novel two-stage adaptation paradigm. The first stage, the Diffusion-Based Domain Alignment (DBDA) module, generates a physics-aware intermediate domain. By integrating step-efficient diffusion generation with physical refinement, this module effectively reduces the cross-modal visual discrepancy while preserving the semantic structure of the optical source. In the second stage, this paper tackles the pervasive issue of pseudo-label noise with the Dual-Student Mutual Verification (DSMV) mechanism. Guided by Cross-Agent Spatial Consensus (CASC) and Adaptive Thresholding (AIT), this mechanism dynamically refines pseudo-labels through geometric overlap validation, effectively recovering faint, low-contrast targets that would typically be discarded by standard thresholds. Extensive evaluations across four benchmark tasks (HRSC2016/ShipRSImageNet to SSDD/HRSID) demonstrate that DEMC establishes a new state-of-the-art. Notably, the framework significantly enhances detection recall and reduces omission errors in complex coastal environments, offering a robust solution for zero-tolerance, all-weather surveillance tasks. Full article

29 pages, 10384 KB  
Article
OShipNet: Occlusion Ship Detection Based on Multidomain Fusion and Multiscale Refinement
by Shengying Yang, Haowei Luo, Zhenyu Xu, Jing Yang and Wei Zhang
J. Mar. Sci. Eng. 2026, 14(9), 804; https://doi.org/10.3390/jmse14090804 - 28 Apr 2026
Abstract
The growth in international trade has precipitated operational demands on port facilities, mandating the development of advanced intelligent monitoring systems. Existing ship detection algorithms struggle with feature confusion and difficulty in extracting contextual features under occlusion, which reduces the discriminability between object features and background noise. This leads to positional misalignment and mismatching of similar targets, which reduce the detection accuracy. To resolve this, we propose OShipNet, an architecture engineered to optimize feature fusion and refinement for occluded ship detection. First, we design the OShipNeXt backbone network, which provides complementary feature representation in frequency and spatial domains. This approach enables the reconstruction of global–local semantic associations for occluded objects, enhancing feature representation and improving detection accuracy. Secondly, to further refine target boundaries, we develop a Multiscale Pooling Attention Module (MSPAM) to enhance contextual awareness and better capture occluded edge features. Furthermore, we propose a dual-path cooperative loss function that mitigates the effects of low-quality bounding boxes. Comprehensive evaluations on the MVDD13 dataset demonstrate the robustness of OShipNet, which achieved 94.98% mAP@50 and 84.37% mAP@50-95, demonstrating advantages over existing object detection methods and establishing an effective framework for intelligent port monitoring. Full article

28 pages, 9414 KB  
Article
FCDNet: An Efficient and Cost-Effective Strawberry Disease Detection Model for Smart Farming Management
by Ruoyu Ouyang, Junying Jiang, Yujia Shao, Jialei Zhan and Xiaoyu Zhang
Plants 2026, 15(9), 1341; https://doi.org/10.3390/plants15091341 - 28 Apr 2026
Abstract
With the rapid development of precision agriculture and smart farming management, accurate crop disease detection has become a critical tool for optimizing agricultural resource allocation, controlling operational costs, and supporting scientific plant protection strategies. However, real-world field environments are often characterized by strong background interference, multiple concurrent diseases, and fine-grained lesion differences, posing significant challenges to existing detection methods in practical agricultural Internet of Things (IoT) applications. In this paper, we propose the Freq-Spatial Context Dynamic Network (FCDNet), an efficient and cost-effective detection model tailored for multi-category strawberry disease recognition in complex field management scenarios. The proposed model integrates a Freq-Spatial Feature Module (FSFM), a Context Guide Fusion Module (CGFM), and a Task Align Dynamic Detection Head (TADDH), enabling enhanced expression of high-frequency micro-lesions, adaptive filtering of field background noise, and spatial alignment of classification and regression tasks, while maintaining a lightweight architecture suitable for low-cost agricultural edge devices. Extensive experiments conducted on the newly constructed Strawberry Disease Dataset-7 (S7DD) demonstrate that FCDNet consistently outperforms existing mainstream methods, achieving an F1-score of 91.0% and an mAP@0.5 of 94.6%. The model’s architectural robustness and capacity for generalization are further substantiated by evaluations on additional agricultural datasets, PlantDoc and ALDOD. Ultimately, FCDNet provides a practical and cost-effective tool for real-time detection of strawberry diseases, directly supporting more accurate yield forecasting and risk management in smart agriculture systems. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research—2nd Edition)

24 pages, 10494 KB  
Article
ECG-Gated 4D-CTA Assessment of Intracranial Aneurysm Wall Dynamics and Longitudinal Size Change: An Exploratory Study
by Peter Jankovič, Kamil J. Chodzyński, Axel E. Vanrossomme, Karim Zouaoui Boudjeltia, Andrej Šteňo, Christian R. Wirtz, Ján Šulaj and Andrej Paľa
Neurol. Int. 2026, 18(5), 81; https://doi.org/10.3390/neurolint18050081 - 27 Apr 2026
Abstract
Background: The risk stratification of unruptured intracranial aneurysms (UIAs) relies largely on static clinical and morphological parameters, which may not fully capture aneurysm-specific wall behavior. ECG-gated four-dimensional computed tomography angiography (4D-CTA) enables the time-resolved assessment of aneurysm wall motion, but reliable interpretation requires the differentiation of biological motion from measurement uncertainty. Methods: In this prospective exploratory pilot study, ECG-gated 4D-CTA was used to evaluate the longitudinal aneurysm size change, global volumetric pulsation (GVP), spatial wall pulsation (SWP), intrinsic wall deformability and variability. Size change and pulsation were defined using predefined resolution- and noise-based thresholds. Spatial wall motion was assessed using phase-resolved three-dimensional displacement maps. Harmonic modeling isolated periodic pulsation, and residual variability exceeding empirically derived uncertainty limits was conservatively interpreted as deformability. Associations with aneurysm growth and ELAPSS scores were analyzed using exploratory statistics. Results: Eleven UIAs in ten patients were followed for 4.3 ± 1.1 years. A longitudinal size change occurred in six aneurysms (54.5%). Baseline GVP was present in eight aneurysms (73%) and SWP in nine (82%). GVP was not associated with a size change (p = 1.00). All aneurysms with a size change exhibited baseline SWP, whereas no size change was observed in aneurysms without SWP; however, this association did not reach statistical significance in this small exploratory cohort (p = 0.18). Conservative variability metrics were not associated with growth but correlated with baseline shape irregularity, particularly the undulation index (Spearman’s ρ up to ~0.90). 
Conclusions: In this small exploratory pilot cohort, spatial wall pulsation showed a descriptive directional pattern with longitudinal aneurysm size changes, whereas global volumetric pulsation did not. These findings are preliminary, should be interpreted cautiously, and require confirmation in larger, adequately powered longitudinal studies before clinical application. Full article
(This article belongs to the Section Brain Tumor and Brain Injury)

26 pages, 15962 KB  
Article
LECloud: Efficient Cloud and Cloud-Shadow Segmentation Based on Windowed State Space Model and Lightweight Attention Mechanism
by Ao Lu, Junzhe Wang, Tengyue Guo, Zhiwei Wang and Min Xia
Remote Sens. 2026, 18(9), 1341; https://doi.org/10.3390/rs18091341 - 27 Apr 2026
Abstract
Accurate cloud and cloud-shadow segmentation is a crucial step in optical remote sensing image preprocessing, playing a significant role in subsequent applications such as land-cover classification and change detection. However, the complexity of cloud/shadow shapes and noise interference (e.g., snow and ice, buildings, complex backgrounds, and atmospheric optics) make this task challenging. Although existing deep learning methods have achieved remarkable results in cloud segmentation tasks, a better balance between computational efficiency and segmentation accuracy is still needed. Traditional deep learning models have good detail and generalization capabilities due to their local feature extraction ability and spatial invariance, but they are relatively weak in processing global context information, leading to false positives and false negatives in complex scenarios. Encoders based on state space models (such as VMamba) can effectively capture global context through long-range dependency modeling, but there is still room for optimization in computational efficiency. Additionally, complex attention mechanisms (such as CBAM) can improve feature representation capability, but the large number of parameters limits the deployment efficiency of models. This paper conducts a systematic architectural exploration of the MCloudX cloud segmentation network, seeking a balance between efficiency and accuracy from three directions: backbone network modernization, encoder efficiency optimization, and attention mechanism lightweighting. Through comprehensive ablation experiments on SPARCS and L8-Biome datasets, we systematically evaluate the independent and synergistic effects of each component and validate them on Biome_3 and SPARCS datasets. Experimental results show that the proposed optimization configuration (ResNet50+LocalMamba+ECA-Net) significantly improves computational efficiency while maintaining comparable accuracy to the baseline. 
We name this optimization configuration LECloud, providing valuable empirical references for future research on efficient remote sensing segmentation architectures. Full article

27 pages, 1343 KB  
Article
A Conformer-Based Time–Frequency Decoupling Network for Pig Vocalization Behavior Classification
by Jianping Wang, Yuqing Liu, Siao Geng, Feng Wei, Haoyu Wu, Yuzhen Song, Yingying Lv, Shugang Li and Qian Li
Animals 2026, 16(9), 1337; https://doi.org/10.3390/ani16091337 - 27 Apr 2026
Abstract
Continuous monitoring of pig behavior is essential for timely health management and welfare assessment in commercial production systems. Although vision-based methods have been widely studied, their practical application in commercial barns is often limited by variable lighting, frequent occlusion, and high stocking density. Acoustic sensing offers a non-contact alternative that is independent of lighting conditions; however, reliable behavior classification from pig vocalizations remains challenging in commercial environments because of background noise and temporal variability in sound patterns. In this study, an attention-guided acoustic framework, termed ATF-Conformer, was developed for pig vocalization classification under farm conditions. A five-class vocalization dataset was collected from finishing Landrace pigs and multiparous sows on a commercial farm, including cough, scream, estrus, feeding, and normal behavior sounds. The proposed framework combined spectrogram denoising with interactive attention to enhance behavior-related acoustic information, while a time-frequency-decoupled Conformer encoder was introduced to improve feature representation under noisy conditions. Final classification was performed using mask-based temporal pooling with an additive angular margin Softmax objective. In five-fold grouped cross-validation, ATF-Conformer achieved an accuracy of 97.34% ± 0.42 and outperformed several existing acoustic models across multiple evaluation metrics. A similar accuracy of 97.38% was obtained on an independent test set, indicating stable performance across datasets. These results suggest that the proposed method can support continuous, non-invasive pig vocalization-based behavior monitoring and may assist farm owners or workers in pen-level screening of frequent cough or abnormal vocal events, thereby supporting targeted on-site inspection in precision livestock farming. Full article
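The mask-based temporal pooling step named above, averaging frame embeddings only over valid time steps so padded frames do not dilute the utterance representation, can be sketched in a few lines. This is a generic illustration, not the ATF-Conformer implementation, and the shapes are assumptions:

```python
import numpy as np

def masked_temporal_pool(frames, mask):
    """Mask-based temporal pooling: average per-frame embeddings over
    valid (unmasked) time steps only, so padded frames cannot leak in."""
    w = mask.astype(float)[:, None]          # (T, 1) validity weights
    return (frames * w).sum(axis=0) / w.sum()

T, D = 10, 4
frames = np.ones((T, D))
frames[6:] = 99.0                   # padded tail that must be ignored
mask = np.array([1] * 6 + [0] * 4)  # 1 = real frame, 0 = padding
pooled = masked_temporal_pool(frames, mask)
```

The pooled vector would then feed the additive angular margin Softmax classifier; the pooling itself is what keeps variable-length vocalizations comparable in a single batch.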
36 pages, 9864 KB  
Article
Orchard-YOLO: A Robust Deep Learning Framework for Fruit Detection under Complex Optical and Environmental Degradation
by Yichen Wang, Hongjun Tian, Yuhan Zhou, Yang Xiong, Yichen Li, Manlin Wang, Yijie Yin, Xiaoyin Guo, Jiani Wu, Jiesen Zhang, Ying Tang and Shuai Huang
Photonics 2026, 13(5), 429; https://doi.org/10.3390/photonics13050429 - 27 Apr 2026
Abstract
Accurate target perception in unstructured outdoor environments remains a fundamental challenge in computational imaging and machine vision, primarily due to severe optical degradation caused by variable illumination, specular highlights, and dense foliage occlusion. Existing optical sensing systems often struggle to maintain robustness under these physical constraints, especially when deployed on edge devices with strict computational limits. To address these challenges, this paper proposes Orchard-YOLO, a lightweight, computationally efficient object detection network designed to maintain robustness against environmental and optical noise in complex orchard environments. Unlike generic architectures, Orchard-YOLO introduces three architectural enhancements for robust detection: (1) a High-Resolution P2 Detection Head to preserve high-frequency optical details and fine-grained texture cues often lost during digital downsampling; (2) Coordinate Attention (CA) mechanisms integrated into the feature fusion pathway to filter out background optical interference and enhance spatial discrimination for heavily occluded targets; and (3) a Ghost-convolution-based backbone to optimize the inference pipeline for real-time edge processing. Evaluated on a comprehensive multi-fruit dataset under simulated optical stress (including ±50% illumination variation and up to 70% occlusion), Orchard-YOLO achieves 94.8% mAP@0.5. It shows improved robustness under illumination variation and occlusion compared to baseline models, while achieving up to 25 FPS on an NVIDIA Jetson Nano edge device. These results suggest that Orchard-YOLO offers a detection framework suitable for resource-constrained orchard perception. Full article
(This article belongs to the Special Issue Computational Imaging: Photonics and Optical Applications)
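The Coordinate Attention mechanism cited in the abstract factorizes spatial pooling along the two image axes so that each attention gate retains positional information along the other axis. A minimal NumPy sketch of that gating idea follows; it is our illustration, not the paper's code, and the module's learned 1×1 convolutions are replaced by a fixed random linear bottleneck for brevity.

```python
import numpy as np

def coordinate_attention(x, reduction=8):
    """Minimal coordinate-attention gating on a (C, H, W) feature map.

    Pools along each spatial axis separately, so the resulting gates
    keep positional information along the other axis -- the property
    used to localize heavily occluded targets. The learned 1x1
    convolutions of the original module are stood in for by a fixed
    random linear bottleneck (sketch only).
    """
    C, H, W = x.shape
    # Direction-aware pooling: average over W gives a (C, H) descriptor,
    # average over H gives a (C, W) descriptor.
    pool_h = x.mean(axis=2)           # (C, H)
    pool_w = x.mean(axis=1)           # (C, W)

    # Shared bottleneck (stand-in for the 1x1 conv + nonlinearity).
    rng = np.random.default_rng(0)
    Cr = max(C // reduction, 1)
    W1 = rng.standard_normal((Cr, C)) / np.sqrt(C)
    W2 = rng.standard_normal((C, Cr)) / np.sqrt(Cr)

    def gate(d):                       # d: (C, L) -> sigmoid gate (C, L)
        z = np.maximum(W1 @ d, 0.0)    # ReLU bottleneck
        return 1.0 / (1.0 + np.exp(-(W2 @ z)))

    g_h = gate(pool_h)[:, :, None]     # (C, H, 1) gate, varies along H
    g_w = gate(pool_w)[:, None, :]     # (C, 1, W) gate, varies along W
    return x * g_h * g_w               # position-aware reweighting
```

Because the two gates are outer-multiplied, a feature at (h, w) is suppressed whenever either its row or its column descriptor looks like background, which is how the mechanism filters background optical interference.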

20 pages, 5788 KB  
Article
YOLO-ESO: A Lightweight YOLOv10-Based Model for Individual Pig Identification in Complex Farming Environments
by Juanhua Zhu, Lele Song, Tong Fu, Yan Wang, Miao Wang and Ang Wu
Information 2026, 17(5), 421; https://doi.org/10.3390/info17050421 - 27 Apr 2026
Abstract
In intensive farming, contactless individual pig identification is crucial for precision feeding and health monitoring. However, real-world barn conditions—such as fluctuating illumination, severe occlusions, non-rigid poses, and high inter-individual similarity—pose significant challenges. Existing models struggle to balance high accuracy with lightweight deployment. To address this, we propose YOLO-ESO, an optimized detection framework based on YOLOv10n. YOLO-ESO introduces three core innovations: (1) integrating the C2f_ODConv module into the backbone to strengthen feature learning under complex poses via dynamic convolution; (2) redesigning the neck with a Semantics and Detail Infusion (SDI) module to improve multi-scale fusion while suppressing background noise; and (3) embedding an Efficient Multi-Scale Attention (EMA) mechanism before the detection head to capture fine-grained identity cues such as texture and contours. Evaluated on a real-world pig dataset, YOLO-ESO achieves an mAP@0.5 of 96.6%, an mAP@0.5:0.95 of 71.1%, and an F1 of 92.0%. YOLO-ESO surpasses state-of-the-art detectors including YOLOv8, YOLOv11, and RT-DETR, while requiring only 8.7 GFLOPs and 3.48 million parameters. Overall, the proposed YOLO-ESO provides an accurate and lightweight solution for robust individual pig identification in complex farming environments, showing strong potential for practical deployment in precision livestock farming.
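The dynamic convolution underlying the C2f_ODConv module replaces a fixed filter with an input-dependent mixture of candidate filters. ODConv itself attends over four dimensions (spatial, input channel, output channel, kernel number); the sketch below, in NumPy and written by us as an assumption of the general mechanism, shows only the simplest kernel-number attention (the CondConv-style case) on a 1-D signal.

```python
import numpy as np

def dynamic_conv1d(x, kernels, att_weights, temperature=1.0):
    """CondConv-style dynamic convolution on a 1-D signal.

    `kernels` holds K candidate filters; an input-dependent softmax
    mixes them into one effective filter before a single convolution
    is applied. ODConv extends this idea with further attentions over
    spatial and channel dimensions, omitted here for brevity.
    """
    K, ksize = kernels.shape
    # Input-dependent attention: a pooled descriptor of x is mapped
    # to K logits, then softmax-normalized.
    descriptor = np.array([x.mean(), x.std()])       # global context (2,)
    logits = att_weights @ descriptor / temperature  # (K,)
    logits -= logits.max()                           # numerical stability
    alpha = np.exp(logits) / np.exp(logits).sum()    # (K,), sums to 1

    # Aggregate one effective kernel, then convolve once.
    kernel = alpha @ kernels                         # (ksize,)
    return np.convolve(x, kernel, mode="same"), alpha

# Example: two candidate kernels (a smoother and an edge detector);
# the attention decides their mix per input.
rng = np.random.default_rng(1)
kernels = np.array([[1 / 3, 1 / 3, 1 / 3],
                    [-1.0, 0.0, 1.0]])
att_weights = rng.standard_normal((2, 2))
y, alpha = dynamic_conv1d(np.sin(np.linspace(0, 6, 32)), kernels, att_weights)
```

The key design point is that the mixing happens in kernel space, so inference still costs one convolution regardless of K, which is why dynamic convolution fits a lightweight backbone.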

28 pages, 12735 KB  
Article
FMW-YOLO: A Frequency-Enhanced and Multi-Scale Context-Aware Framework for PCB Defect Detection
by Yuguo Li, Shuo Tian, Wenzheng Sun, Longfa Chen, Jian Li, Junkai Hu and Na Meng
Micromachines 2026, 17(5), 531; https://doi.org/10.3390/mi17050531 - 27 Apr 2026
Abstract
High-precision, efficient surface defect detection for printed circuit boards (PCBs) is critical to ensuring the reliability of electronic systems. However, the presence of complex circuit backgrounds and the small scale of defects often limit the precision and effectiveness of conventional inspection approaches. To address these challenges, this paper proposes FMW-YOLO, a lightweight and accurate detection framework based on YOLO11n. Specifically, a Frequency-Enhanced Channel-Transposed and Local Feature backbone network is developed to improve feature extraction. By designing a Dual-Frequency and Channel Attention Aggregation module and a Lightweight Edge-Gaussian Block, the original C3k2 structure is refined to suppress noise interference while preserving high-frequency details, thereby enhancing feature representation. Furthermore, a neck network incorporating a Multi-Scale Context-Aware Enhancement mechanism is constructed, in which an Attention-Integrated Feature Pyramid is employed to facilitate more effective cross-scale feature interaction. In addition, a Dilated Reparam Residual Module is embedded into the C3k2 structure to expand the receptive field without significantly increasing computational burden. Finally, Wise-IoU is adopted to optimize bounding box regression by assigning greater importance to anchors of moderate quality. Extensive experiments conducted on the HRIPCB and DeepPCB datasets demonstrate that FMW-YOLO improves mAP50 by 2.1% and 0.3%, respectively, while reducing the number of parameters by 23%. These results indicate that the proposed method achieves improved detection accuracy and demonstrates strong potential for practical industrial applications.
(This article belongs to the Topic AI Sensors and Transducers)
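The Wise-IoU regression loss mentioned in the abstract scales the plain IoU loss by a distance-based focusing factor built from the smallest enclosing box, which boosts the gradient contribution of moderate-quality anchors. A hedged NumPy sketch of the v1 form of the loss follows (the paper may use the gradient-gain variants v2/v3; this is an illustration, not the authors' implementation).

```python
import numpy as np

def wise_iou_v1(pred, gt):
    """Wise-IoU v1 loss for axis-aligned boxes given as (x1, y1, x2, y2).

    L_WIoU = R_WIoU * (1 - IoU), where R_WIoU is a focusing factor
    based on the center distance normalized by the smallest enclosing
    box. Sketch of the v1 formulation only.
    """
    # Intersection and union.
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(ix2 - ix1, 0.0) * max(iy2 - iy1, 0.0)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + 1e-9)

    # Smallest enclosing box and squared center distance.
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2

    # Focusing factor; in the original formulation the denominator is
    # detached (treated as a constant) during backpropagation.
    r_wiou = np.exp(rho2 / (wg ** 2 + hg ** 2 + 1e-9))
    return r_wiou * (1.0 - iou)
```

A perfectly matched box yields a loss near zero (R_WIoU = 1, 1 − IoU = 0), while a displaced box is penalized by both terms, so anchors of moderate quality receive amplified, informative gradients without over-penalizing already-good ones.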

16 pages, 6518 KB  
Article
Optimization of a Range Walk Error Correction for Underwater Photon Counting LiDAR Under Low-Photon Conditions
by Zunhui Wang, Yicheng Wang, Qingli Ma and Yanhua Wu
Photonics 2026, 13(5), 427; https://doi.org/10.3390/photonics13050427 - 27 Apr 2026
Abstract
Underwater gated time-correlated single-photon-counting (TCSPC) LiDAR is advantageous when weak target echoes coexist with strong backscatter. However, under the first-photon-triggering and SPAD dead-time mechanism, the estimated time of flight becomes dependent on the return strength, thereby producing a range walk error (RWE). This paper develops a condition-calibrated correction framework for accumulated-histogram underwater ranging in the low-photon regime. A non-homogeneous Poisson first-arrival model that jointly includes gate-limited signal photons and in-gate background triggering yields a computable expression for the total trigger probability and the conditional first-arrival time. A first-order expansion around N_pe → 0 leads to an approximately linear RWE–N_pe relation under the present system–water condition. A density-based signal-window localization method and a noise-occlusion-compensated estimator of N_pe are combined with reference-plane differential calibration. Experiments in a 10 m clear-freshwater tank at 9.11 m show that the mean absolute error is reduced from 39.205 mm to 2.130 mm, corresponding to a 94.57% improvement. Compared with a quadratic model used under higher-photon conditions, the proposed linear model yields an order-of-magnitude smaller residual error in the low-photon region (N_pe < 1.6).
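The first-arrival statistics behind the range walk error can be sketched with a standard non-homogeneous Poisson model. Notation is ours, not necessarily the paper's: λ(t) is the combined signal-plus-background photon arrival rate inside the gate [t_g, t_g + T].

```latex
% First-photon statistics under a non-homogeneous Poisson model
% (sketch; lambda(t) = signal + background rate within the gate).
\begin{align}
  p_{\mathrm{first}}(t) &= \lambda(t)\,
      \exp\!\Big(-\int_{t_g}^{t} \lambda(s)\,\mathrm{d}s\Big),
      \qquad t \in [t_g,\, t_g + T],\\
  P_{\mathrm{trig}} &= 1 - \exp\!\Big(-\int_{t_g}^{t_g+T}
      \lambda(s)\,\mathrm{d}s\Big),\\
  \mathbb{E}[t \mid \mathrm{trig}] &= \frac{1}{P_{\mathrm{trig}}}
      \int_{t_g}^{t_g+T} t\, p_{\mathrm{first}}(t)\,\mathrm{d}t .
\end{align}
```

The exponential attenuation factor weights early arrivals more heavily as the return strength grows, so the conditional mean arrival time shifts earlier with increasing N_pe; that shift is the RWE, and expanding it to first order in N_pe near zero gives the approximately linear RWE–N_pe relation the correction exploits.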
