Search Results (311)

Search Parameters:
Keywords = reconstruction kernel

23 pages, 4208 KB  
Article
Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution
by Huadong Liu, Haifeng Liang and Qian Wang
Sensors 2026, 26(4), 1362; https://doi.org/10.3390/s26041362 - 20 Feb 2026
Abstract
To address the difficulty of reconstructing high-resolution hyperspectral images under dynamic degradation, the poor adaptability of traditional static degradation models, and oversimplified noise modeling, this paper proposes a degradation-aware dynamic Fourier network (DADFN) for hyperspectral super-resolution. The method employs a dual-channel split module to decouple and encode spectral and spatial degradation information, realizes independent mapping of spectral and spatial features via a multi-layer perceptron module, and integrates a spectral–spatial dynamic cross-attention fusion module to generate 3D dynamic blur kernels tailored to different bands and spatial positions. A multi-scale spectral–spatial collaborative constraint (MSSCC) loss function ensures the coordinated optimization of modeling rationality, spectral continuity, and spatial detail fidelity. Experiments on the CAVE and Harvard benchmark datasets demonstrate that DADFN outperforms the baseline methods on all evaluation metrics, indicating strong robustness in real-world complex degradation scenarios. The method offers a solution that balances physical interpretability and performance for hyperspectral image super-resolution and holds significant value for applications in remote sensing monitoring, precision agriculture, and other related fields.
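The core idea of generating a different blur kernel per spectral band can be illustrated in miniature: given per-band widths (in the paper these come from the learned degradation encoder; here they are plain inputs), build one normalized Gaussian kernel per band. A hedged sketch, not the paper's implementation — the Gaussian form and all names are assumptions.

```python
import numpy as np

def band_kernel_bank(sigmas, size=7):
    """One normalized Gaussian blur kernel per spectral band,
    stacked into a (bands, size, size) array."""
    ax = np.arange(size, dtype=float) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernels = []
    for sigma in sigmas:
        k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        kernels.append(k / k.sum())  # each kernel integrates to 1
    return np.stack(kernels)
```

A narrow sigma yields a sharply peaked kernel (mild blur), a wide sigma a flat one (strong blur), so varying sigma across bands mimics band-dependent degradation.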
11 pages, 457 KB  
Article
Virtual Non-Iodine Coronary Calcium Scoring on Photon-Counting CT: Patient- and Plaque-Level Analysis
by Müjgan Orman, Deniz Alis, Mehmet Onur Önal, Mustafa Ege Seker, Ahmet Akyol, Cem Alhan and Ercan Karaarslan
Diagnostics 2026, 16(4), 599; https://doi.org/10.3390/diagnostics16040599 - 17 Feb 2026
Abstract
Background/Objectives: Whether PCCT-derived virtual non-iodine (VNI) images can replace true non-contrast (TNC) for coronary artery calcium scoring (CACS) remains uncertain, particularly for small, low-density plaques. We aimed to evaluate agreement between VNI and TNC for CACS at the patient and lesion levels and to quantify risk-category reclassification. Methods: In this retrospective single-center sample (May 2024–May 2025), 211 patients without prior coronary intervention and with nonzero CAC on TNC underwent PCCT. VNI (55 keV, QIR 1; 60 keV, QIR 4; PureCalcium) and TNC were reconstructed with matched section thickness/increment and kernel. Agatston and total calcified volume were recorded. Paired comparisons used Wilcoxon tests; reclassification across CAC categories (0, 1–99, 100–399, ≥400) and lesion-level false negatives (FNs) were assessed with TNC as the reference. Results: Low-keV VNIs (55–60 keV) underestimated CAC versus TNC. The median Agatston score decreased from 35.9 (IQR, 10.3–121.2) on TNC to 23.6 at 55 keV (p = 0.0006) and 22.2 at 60 keV (p = 0.0003); the total volume declined from 37.8 mm³ to 20.2 mm³ (p = 0.001) and 18.3 mm³ (p < 0.0001), respectively. More than half of patients were reassigned to a lower CAC category; although no patient had CAC = 0 on TNC, 46.9% (55 keV) and 47.4% (60 keV) were labeled CAC = 0 on VNI. Because this study deliberately included only patients with nonzero CAC on the TNC reference, these CAC = 0 rates on VNI represent misclassification within a CAC-positive sample and should not be interpreted as population-level prevalence. At the lesion level, 95% of patients had ≥1 FN plaque (430 FN plaques total), typically small (median 8 mm³) and of low density (median Agatston 6).
Conclusions: In this single-center sample with relatively low-burden calcification, low-keV VNI (55–60 keV) significantly underestimates CAC and down-classifies patients, with frequent “false-zero” assignments (defined as CAC_VNI = 0 despite CAC_TNC > 0) driven predominantly by small, low-density plaques.
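The risk-category assignment and down-classification counting used in this analysis can be sketched in a few lines. The category cut-offs (0, 1–99, 100–399, ≥400) come from the abstract; the function and variable names are illustrative.

```python
def cac_category(agatston: float) -> int:
    """Map an Agatston score to a risk-category index: 0, 1-99, 100-399, >=400."""
    if agatston == 0:
        return 0
    if agatston < 100:
        return 1
    if agatston < 400:
        return 2
    return 3

def count_downclassified(tnc_scores, vni_scores):
    """Count patients whose VNI-based category falls below their TNC reference category."""
    return sum(
        1
        for tnc, vni in zip(tnc_scores, vni_scores)
        if cac_category(vni) < cac_category(tnc)
    )
```

For example, a patient scoring 120 on TNC but 90 on VNI drops from category 2 to 1 and is counted as down-classified; a "false zero" is the special case where the VNI category is 0 despite a nonzero TNC score.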
(This article belongs to the Special Issue Advances in Cardiovascular Diseases: Diagnosis and Management)

20 pages, 1113 KB  
Article
Experimental Cross-Domain Bearing Fault Diagnosis Method Based on Local Mean Decomposition and Improved Transfer Component Analysis
by Jia-Peng Liu, Zi-Hang Lv, Jia-Li Wang, Xin-Cheng Yang, Zhen-Kun He and Run-Sen Zhang
Machines 2026, 14(2), 216; https://doi.org/10.3390/machines14020216 - 12 Feb 2026
Abstract
To address the issue of reduced fault diagnosis accuracy caused by insufficient samples in laboratory datasets, this study proposes an improved Transfer Component Analysis (TCA) algorithm with dynamic kernel parameter adjustment, combined with Local Mean Decomposition (LMD). Firstly, the original signals are decomposed using LMD, and representative signal components are reconstructed based on Pearson's correlation coefficient to enhance feature representativeness. Then, multidimensional features, including Root Mean Square (RMS), kurtosis, and main frequency (MF), are extracted from the reconstructed signals to comprehensively reflect signal characteristics in terms of energy distribution, impact properties, and frequency structure. Subsequently, a dynamic kernel parameter adjustment strategy is incorporated into TCA to adaptively optimize the kernel parameters, effectively reducing the distribution discrepancy between the source and target domains and enhancing the generalization capability of cross-domain feature transfer. Finally, a Least Squares Support Vector Machine (LSSVM) classifier is employed to perform fault diagnosis on the reconstructed features. The experimental results demonstrate that the proposed method achieves significantly higher diagnostic accuracy than traditional approaches under various operating conditions, especially when signals are complex and distribution differences are large, showing strong robustness and adaptability.
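As a rough illustration of the selection-and-feature step (not the authors' code; the correlation threshold and helper names are assumptions), one can keep the decomposition components that correlate strongly with the raw signal and compute RMS and kurtosis on the reconstruction:

```python
import numpy as np

def select_components(signal, components, corr_thresh=0.3):
    """Sum the components whose Pearson correlation with the raw
    signal exceeds the threshold, giving the reconstructed signal."""
    keep = [c for c in components
            if abs(np.corrcoef(signal, c)[0, 1]) >= corr_thresh]
    return np.sum(keep, axis=0)

def rms(x):
    """Root Mean Square of a 1-D signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def kurtosis(x):
    """Fourth standardized moment (no excess correction)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4))
```

In this sketch a dominant oscillation survives the correlation filter while a weak incidental component is discarded, which is the intended effect of the Pearson-based reconstruction.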
(This article belongs to the Section Machines Testing and Maintenance)

22 pages, 2046 KB  
Article
Progressive Upsampling Generative Adversarial Network with Collaborative Attention for Single-Image Super-Resolution
by Haoxiang Lu, Jing Zhang, Mengyuan Jing, Ziming Wang and Wenhao Wang
J. Imaging 2026, 12(2), 79; https://doi.org/10.3390/jimaging12020079 - 11 Feb 2026
Abstract
Single-image super-resolution (SISR) is an essential low-level visual task that aims to produce high-resolution images from low-resolution inputs. However, most existing SISR methods heavily rely on ideal degradation kernels and rarely consider the actual noise distribution. To tackle these issues, this paper presents a progressive upsampling generative adversarial network with a collaborative attention mechanism, called PUGAN. Specifically, residual multiscale blocks (RMBs) based on stacked mixed-pooling multiscale structures (MPMSs) are designed to make full use of multiscale global–local hierarchical features, and the frequency collaborative attention mechanism (CAM) is used to fully exploit high- and low-frequency characteristics. Meanwhile, we design a progressive upsampling strategy to better guide the model’s learning while reducing the model’s complexity. Finally, the discriminator is also used to evaluate the reconstructed high-resolution images, balancing super-resolution reconstruction and detail enhancement. PUGAN yields competitive PSNR/SSIM/LPIPS values on the NTIRE 2020, Urban 100, and B100 datasets: 33.987/0.9673/0.1210, 32.966/0.9483/0.1431, and 33.627/0.9546/0.1354 at scale factor ×2, and 26.349/0.8721/0.1975, 26.110/0.8614/0.1983, and 26.306/0.8803/0.1978 at scale factor ×4, respectively. Extensive experiments demonstrate that PUGAN outperforms state-of-the-art SISR methods in qualitative and quantitative assessments. Additionally, PUGAN shows potential benefits for pathological image super-resolution.
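Of the metrics quoted above, PSNR is simple enough to sketch directly (SSIM and LPIPS need more machinery); a minimal version, assuming intensities on a 0–255 scale via `data_range`:

```python
import numpy as np

def psnr(reference, reconstructed, data_range=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images of equal shape."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(reconstructed, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Higher is better: a worst-case reconstruction (every pixel off by the full range) scores 0 dB, while the ×2 results above sit around 33–34 dB.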
(This article belongs to the Section Image and Video Processing)

28 pages, 12700 KB  
Article
Enhancing Drought Prediction in Semi-Arid Climates: A Synthetic Data and Neural Network Approach Applied to Karaman Region, Turkey
by Akin Duvan and Sadik Alper Yildizel
Atmosphere 2026, 17(2), 172; https://doi.org/10.3390/atmos17020172 - 6 Feb 2026
Abstract
This study develops a practical framework for forecasting long-term drought conditions in Karaman Province, a semi-arid region of Turkey, where accurate climate information is vital for water planning and agriculture. Since the area has limited rainfall records and strong year-to-year fluctuations, traditional modeling approaches often fall short. To better capture local conditions, drought intensity was defined using a simple monthly wetness anomaly measure based directly on precipitation; here, positive values indicate wetter months and negative values indicate drier ones. This makes the method suitable for regions where detailed hydrological data are scarce. Rainfall observations from 1965 to 2011 were expanded using a combination of kernel density estimation and Cholesky-based correlation reconstruction. These steps preserved the main statistical and temporal patterns of the original data while increasing sample diversity. The enriched dataset was then used to train artificial neural networks to predict both precipitation and drought intensity. The models reached R² values of 0.76 and 0.72, with mean absolute errors of 12.8 mm and 28.4%, which represents an improvement of roughly 10–15% over traditional statistical methods. They were also able to capture the seasonal and year-to-year variability that strongly affects drought conditions in the region. To understand what drives the predictions, the model was examined with LIME, which consistently highlighted lagged rainfall and seasonal indicators as the most influential inputs. A walk-forward validation approach was also used to mimic real forecasting conditions and demonstrated that the model remains stable when projecting into the future. Overall, the proposed framework offers a reliable and practical basis for early-warning efforts and drought-management strategies in semi-arid regions like Karaman.
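The data-expansion step (KDE sampling plus Cholesky-based correlation reconstruction) can be sketched as follows; the bandwidth, the standardized inputs, and the two-stage composition are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_sample(data, n, bandwidth):
    """Draw n synthetic values from a Gaussian kernel density estimate
    fitted to 1-D observations: resample a center, then jitter by the bandwidth."""
    centers = rng.choice(np.asarray(data, dtype=float), size=n, replace=True)
    return centers + rng.normal(0.0, bandwidth, size=n)

def impose_correlation(independent_cols, target_corr):
    """Mix independent standardized columns through the Cholesky factor of the
    target correlation matrix so the output columns carry that correlation."""
    L = np.linalg.cholesky(target_corr)
    return independent_cols @ L.T
```

Sampling each variable independently via KDE preserves its marginal distribution; the Cholesky mixing then restores the cross-variable correlation structure of the original record.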

27 pages, 2785 KB  
Article
HAFNet: Hybrid Attention Fusion Network for Remote Sensing Pansharpening
by Dan Xu, Jinyu Zhang, Wenrui Li, Xingtao Wang, Penghong Wang and Xiaopeng Fan
Remote Sens. 2026, 18(3), 526; https://doi.org/10.3390/rs18030526 - 5 Feb 2026
Abstract
Deep learning–based pansharpening methods for remote sensing have advanced rapidly in recent years. However, current methods still face three limitations that directly affect reconstruction quality. Content adaptivity is often implemented as an isolated step, which prevents effective interaction across scales and feature domains. Dynamic multi-scale mechanisms also remain constrained, since their scale selection is usually guided by global statistics and ignores regional heterogeneity. Moreover, frequency and spatial cues are commonly fused in a static manner, leading to an imbalance between global structural enhancement and local texture preservation. To address these issues, we design three complementary modules. We utilize the Adaptive Convolution Unit (ACU) to generate content-aware kernels through local feature clustering, thereby achieving fine-grained adaptation to diverse ground structures. We also develop the Multi-Scale Receptive Field Selection Unit (MSRFU), a module providing flexible scale modeling by selecting informative branches at varying receptive fields. Meanwhile, we incorporate the Frequency–Spatial Attention Unit (FSAU), designed to dynamically fuse spatial representations with frequency information. This effectively strengthens detail reconstruction while minimizing spectral distortion. Building on these modules, we propose the Hybrid Attention Fusion Network (HAFNet), which employs the Hybrid Attention-Driven Residual Block (HARB) as its fundamental building block to dynamically integrate the three specialized components. This design enables dynamic content adaptivity, multi-scale responsiveness, and cross-domain feature fusion within a unified framework. Experiments on public benchmarks confirm the effectiveness of each component and demonstrate HAFNet's state-of-the-art performance.

27 pages, 6439 KB  
Article
Contrastive–Transfer-Synergized Dual-Stream Transformer for Hyperspectral Anomaly Detection
by Lei Deng, Jiaju Ying, Qianghui Wang, Yue Cheng and Bing Zhou
Remote Sens. 2026, 18(3), 516; https://doi.org/10.3390/rs18030516 - 5 Feb 2026
Abstract
Hyperspectral anomaly detection (HAD) aims to identify pixels that significantly differ from the background without prior knowledge. While deep learning-based reconstruction methods have shown promise, they often suffer from limited feature representation, inefficient training cycles, and sensitivity to imbalanced data distributions. To address these challenges, this paper proposes a novel contrastive–transfer-synergized dual-stream transformer for hyperspectral anomaly detection (CTDST-HAD). The framework integrates contrastive learning and transfer learning within a dual-stream architecture, comprising a spatial stream and a spectral stream, which are pre-trained separately and synergistically fine-tuned. Specifically, the spatial stream leverages general visual and hyperspectral-view datasets with adaptive elastic weight consolidation (EWC) to mitigate catastrophic forgetting. The spectral stream employs a variational autoencoder (VAE) enhanced with the RossThick–LiSparseR (R-L) physical-kernel-driven model for spectrally realistic data augmentation. During fine-tuning, spatial and spectral features are fused for pixel-level anomaly detection, with focal loss addressing class imbalance. Extensive experiments on nine real hyperspectral datasets demonstrate that CTDST-HAD outperforms state-of-the-art methods in detection accuracy and efficiency, particularly in complex backgrounds, while maintaining competitive inference speed.

15 pages, 2423 KB  
Article
Infrared Image Super Resolution Method Based on Stochastic Degradation Modeling
by Lihong Yang, Kai Hu, Hang Ge, Zhi Zeng and Shurui Ge
Photonics 2026, 13(2), 155; https://doi.org/10.3390/photonics13020155 - 5 Feb 2026
Abstract
Infrared images hold significant application value in fields such as military reconnaissance, security surveillance, and medical diagnosis. However, issues like low resolution, high noise, and complex degradation characteristics severely hinder their practical application effectiveness. This paper introduces an infrared super-resolution reconstruction algorithm based on a random degradation model and Generative Adversarial Networks (GANs), addressing the diversity of infrared image degradation processes. The primary contribution lies in explicitly modeling key degradation parameters of infrared images (such as blur kernel and noise distribution) using a random degradation model, generating diverse low-resolution to high-resolution image pairs, and significantly enhancing the model's generalization ability for complex degradations. Experiments on the Airo infrared dataset and a self-built infrared dataset at ×2, ×4, and ×8 upscaling show notable advantages in texture detail and noise suppression. In particular, in ×4 super-resolution reconstruction, compared to three typical deep learning algorithms, our algorithm improves the Peak Signal-to-Noise Ratio (PSNR) by 3.046 dB, 1.8489 dB, and 0.2108 dB, respectively, and the Structural Similarity Index (SSIM) by 0.0387 (4.76%), 0.0287 (3.48%), and 0.0131 (1.56%), respectively, with perceptual similarity decreasing by 0.2465, 0.13344, and 0.0514 (lower values indicate better perceptual quality). Subjective visual assessments further validate the algorithm's advantages in noise reduction and weak texture restoration, underscoring the method's theoretical and practical value in complex infrared degradation scenarios.
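A minimal version of such a random degradation model (blur with a randomly sized Gaussian kernel, downsample, add Gaussian noise) might look like this; the kernel size, parameter ranges, and edge padding are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size, dtype=float) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def conv2d_same(img, kernel):
    """Same-size 2-D convolution with edge padding (loop version for clarity)."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kernel.shape[0],
                                      j:j + kernel.shape[1]] * kernel)
    return out

def degrade(hr, scale=2, sigma_range=(0.5, 2.0), noise_range=(0.0, 5.0)):
    """Random blur -> downsample -> additive Gaussian noise."""
    sigma = rng.uniform(*sigma_range)
    noise_sigma = rng.uniform(*noise_range)
    blurred = conv2d_same(np.asarray(hr, dtype=float),
                          gaussian_kernel(7, sigma))
    low_res = blurred[::scale, ::scale]
    return low_res + rng.normal(0.0, noise_sigma, size=low_res.shape)
```

Each call draws fresh blur and noise parameters, so repeatedly degrading the same HR image yields the diverse LR–HR training pairs the abstract describes.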

37 pages, 24393 KB  
Article
Denoising of CT and MRI Images Using Decomposition-Based Curvelet Thresholding and Classical Filtering Techniques
by Mahmoud Nasr, Krzysztof Brzostowski, Rafał Obuchowicz and Adam Piórkowski
Appl. Sci. 2026, 16(3), 1335; https://doi.org/10.3390/app16031335 - 28 Jan 2026
Abstract
Medical image denoising is crucial for enhancing the diagnostic accuracy of CT and MRI images. This paper presents a modular hybrid framework that combines multiscale decomposition techniques (Empirical Mode Decomposition, Variational Mode Decomposition, Bidimensional EMD, and Multivariate EMD) with curvelet transform thresholding and traditional spatial filters. The methodology was assessed using a phantom dataset containing regulated Rician noise, clinical CT images rebuilt with sharp (B50f) and medium (B46f) kernels, and MRI scans obtained at various GRAPPA acceleration factors. In phantom trials, MEMD–Curvelet attained the highest SSIM (0.964) and PSNR (28.35 dB), while preserving commendable perceptual scores (NIQE approximately 7.55, BRISQUE around 38.8). In CT images, VMD–Curvelet and MEMD–Curvelet consistently outperformed classical filters, achieving SSIM values over 0.95 and PSNR values above 28 dB, even with sharp-kernel reconstructions. In MRI datasets, MEMD–Curvelet and BEMD–Curvelet reduced perceptual distortion, decreasing NIQE by up to 15% and BRISQUE by 20% compared to Gaussian and median filtering. Classical and deep learning baselines validated the framework's competitiveness: BM3D attained high fidelity but necessitated 6.65 s per slice, while DnCNN delivered equivalent SSIM (0.958) with a diminished runtime of 2.33 s. The results indicate that the proposed framework excels at noise reduction and structure preservation across various imaging settings, surpassing independent filtering and transform-only methods. Its versatility and efficiency underscore its potential for clinical integration in situations necessitating high-quality denoising under limited acquisition conditions.
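Hard thresholding in a transform domain — the operation at the heart of the curvelet step — is generic enough to sketch, with the FFT standing in for the curvelet transform (a real implementation would use a dedicated curvelet library; the threshold value is an assumption):

```python
import numpy as np

def hard_threshold_denoise(img, threshold):
    """Zero out transform coefficients below a magnitude threshold, then invert.
    The FFT stands in for the curvelet transform in this sketch."""
    coeffs = np.fft.fft2(np.asarray(img, dtype=float))
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return np.real(np.fft.ifft2(coeffs))
```

The premise is that signal energy concentrates in a few large coefficients while noise spreads thinly across many small ones, so discarding small coefficients removes mostly noise.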

21 pages, 3656 KB  
Article
Characterization of the Physical Image Quality of a Clinical Photon-Counting Computed Tomography Scanner Across Multiple Acquisition and Reconstruction Settings
by Patrizio Barca, Luigi Masturzo, Luca De Masi, Antonio Traino, Filippo Cademartiri and Marco Giannelli
Appl. Sci. 2026, 16(3), 1322; https://doi.org/10.3390/app16031322 - 28 Jan 2026
Abstract
This phantom study presents a thorough characterization of the physical image quality of a clinical whole-body photon-counting computed tomography (PCCT) scanner. Multiple quality metrics—noise, noise power spectrum (NPS), task transfer function (TTF), and detectability index (d′)—were analyzed across a range of reconstruction algorithms (filtered back projection, FBP, and Quantum Iterative Reconstruction, QIR, with strength levels Q1–Q4), and varying reconstruction kernels (Br40/Br60/Br76/Br98). Both standard (STD, 0.4 mm slice thickness) and high-resolution (HR, 0.2 mm slice thickness) reconstruction modes were assessed. QIR significantly reduced image noise (60–95%) compared to FBP, particularly with sharper kernels. Spatial resolution improved with increasing QIR strength level for smoother kernels and was further enhanced using HR mode with sharp kernels. HR mode exhibited better noise performance than STD with sharper reconstructions, due to the small pixel effect. While STD mode showed higher d′ values for larger objects, HR mode outperformed it for smaller objects and sharper kernels. Compared to a conventional energy-integrating computed tomography system, the PCCT scanner showed superior d′ values under similar settings. Overall, this study highlights the complex interplay of acquisition and reconstruction parameters in shaping image quality, confirms the potential of PCCT technology, and underscores the need for further clinical validation.
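Among the metrics above, the noise power spectrum is compact enough to sketch: average the mean-subtracted periodograms of uniform-region ROIs. This is a standard textbook form; exact normalization conventions vary between papers, so treat the scaling here as one reasonable choice.

```python
import numpy as np

def nps_2d(rois, pixel_size=1.0):
    """Ensemble-averaged 2-D noise power spectrum from square, uniform-region ROIs:
    subtract each ROI mean, take |DFT|^2, average over ROIs, normalize by ROI area."""
    rois = [np.asarray(r, dtype=float) for r in rois]
    n = rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in rois:
        spectrum = np.fft.fftshift(np.fft.fft2(roi - roi.mean()))
        acc += np.abs(spectrum) ** 2
    return acc / len(rois) * (pixel_size ** 2) / (n * n)
```

A handy sanity check: by Parseval's theorem the spectrum sums back to the ROI noise variance (up to the pixel-area factor).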
(This article belongs to the Special Issue Advances in Diagnostic Radiology)

25 pages, 4008 KB  
Article
SLD-YOLO11: A Topology-Reconstructed Lightweight Detector for Fine-Grained Maize–Weed Discrimination in Complex Field Environments
by Meichen Liu and Jing Gao
Agronomy 2026, 16(3), 328; https://doi.org/10.3390/agronomy16030328 - 28 Jan 2026
Abstract
Precise identification of weeds at the maize seedling stage is pivotal for implementing Site-Specific Weed Management and minimizing herbicide environmental pollution. However, the performance of existing lightweight detectors is severely bottlenecked by unstructured field environments, characterized by the “green-on-green” spectral similarity between crops and weeds, diminutive seedling targets, and complex mutual occlusion of leaves. To address these challenges, this study proposes SLD-YOLO11, a topology-reconstructed lightweight detection model tailored for complex field environments. First, to mitigate the feature loss of tiny targets, a Lossless Downsampling Topology based on Space-to-Depth Convolution (SPD-Conv) is constructed, transforming spatial information into depth channels to preserve fine-grained features. Second, a Decomposed Large Kernel Attention (D-LKA) mechanism is designed to mimic the wide receptive field of human vision. By modeling long-range spatial dependencies with decomposed large-kernel attention, it enhances discrimination under severe occlusion by leveraging global structural context. Third, the DySample operator is introduced to replace static interpolation, enabling content-aware feature flow reconstruction. Experimental results demonstrate that SLD-YOLO11 achieves an mAP@0.5 of 97.4% on a self-collected maize field dataset, significantly outperforming YOLOv8n, YOLOv10n, YOLOv11n, and mainstream lightweight variants. Notably, the model achieves Zero Inter-class Misclassification between maize and weeds, establishing high safety standards for weeding operations. To further bridge the gap between visual perception and precision operations, a Visual Weed-Crop Competition Index (VWCI) is innovatively proposed. By integrating detection bounding boxes with species-specific morphological correction coefficients, the VWCI quantifies field weed pressure with low cost and high throughput. 
Regression analysis reveals a high consistency (R² = 0.70) between the automated VWCI and manual ground-truth coverage. This study not only provides a robust detector but also offers a reliable decision-making basis for real-time variable-rate spraying by intelligent weeding robots.
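The abstract does not give the VWCI formula, so the following is only a plausible reading: morphology-corrected weed box area relative to total detected box area, with per-species correction coefficients. All names, the crop label, and the formula itself are assumptions for illustration.

```python
def box_area(box):
    """Area of an axis-aligned (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def vwci(detections, coeffs, crop_label="maize"):
    """Hypothetical weed-crop competition index: morphology-corrected weed area
    divided by total (weed + crop) detected area, in [0, 1]."""
    weed = sum(coeffs.get(label, 1.0) * box_area(box)
               for label, box in detections if label != crop_label)
    crop = sum(box_area(box) for label, box in detections if label == crop_label)
    total = weed + crop
    return weed / total if total > 0.0 else 0.0
```

Raising a species' coefficient models a weed whose foliage competes more aggressively than its bounding-box footprint suggests, pushing the index (and hence the spray decision) upward.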
(This article belongs to the Section Farming Sustainability)

27 pages, 6867 KB  
Article
Recovering Gamma-Ray Burst Redshift Completeness Maps via Spherical Generalized Additive Models
by Zsolt Bagoly and Istvan I. Racz
Universe 2026, 12(2), 31; https://doi.org/10.3390/universe12020031 - 24 Jan 2026
Abstract
We present an advanced statistical framework for estimating the relative intensity of astrophysical event distributions (e.g., Gamma-Ray Bursts, GRBs) on the sky to facilitate population studies and large-scale structure analysis. In contrast to the traditional approach based on the ratio of Kernel Density Estimation (KDE), which is characterized by numerical instability and bandwidth sensitivity, this work applies a logistic regression embedded in a Bayesian framework to directly model selection effects. It reformulates the problem as a logistic regression task within a Generalized Additive Model (GAM) framework, utilizing isotropic Splines on the Sphere (SOS) to map the conditional probability of redshift measurement. The model complexity and smoothness are objectively optimized using Restricted Maximum Likelihood (REML) and the Akaike Information Criterion (AIC), ensuring a data-driven bias-variance trade-off. We benchmark this approach against an Adaptive Kernel Density Estimator (AKDE) using von Mises–Fisher kernels and Abramson's square root law. The comparative analysis reveals strong statistical evidence in favor of this Preconditioned (Precon) Estimator, yielding a log-likelihood improvement of ΔL ≈ 74.3 (Bayes factor > 10³⁰) over the adaptive method. We show that this Precon Estimator acts as a spectral bandwidth extender, effectively decoupling the wideband exposure map from the narrowband selection efficiency. This provides a tool for cosmologists to recover high-frequency structural features—such as sharp cutoffs—that are mathematically irresolvable by direct density estimators due to the bandwidth limitation inherent in sparse samples. The methodology ensures that reconstructions of the cosmic web are stable against Poisson noise and consistent with observational constraints.
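At its core, the reformulation models the probability that a burst has a measured redshift as a logistic function of sky-position features. Setting aside the spherical-spline and REML machinery, the logistic part reduces to something like this gradient-descent fit (all names, the optimizer, and its settings are illustrative):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Fit P(y=1 | x) = sigmoid(w0 + w.x) by batch gradient descent."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    w = np.zeros(X.shape[1])
    y = np.asarray(y, dtype=float)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the log-loss
    return w

def predict_proba(w, X):
    """Selection probability under the fitted weights."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

Labeling each event 1 if its redshift was measured and 0 otherwise turns the completeness map into an ordinary supervised problem, which is what makes the estimate stable where a KDE ratio would divide two noisy densities.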
(This article belongs to the Section Astroinformatics and Astrostatistics)

26 pages, 55590 KB  
Article
Adaptive Edge-Aware Detection with Lightweight Multi-Scale Fusion
by Xiyu Pan, Kai Xiong and Jianjun Li
Electronics 2026, 15(2), 449; https://doi.org/10.3390/electronics15020449 - 20 Jan 2026
Abstract
In object detection, boundary blurring caused by occlusion and background interference often hinders effective feature extraction. To address this challenge, we propose Edge Aware-YOLO, a novel framework designed to enhance edge awareness and efficient feature fusion. Our method integrates three key contributions. First, the Variable Sobel Compact Inverted Block (VSCIB) employs convolution kernels with adjustable orientation and size, enabling robust multi-scale edge adaptation. Second, the Spatial Pyramid Shared Convolution (SPSC) replaces standard pooling with shared dilated convolutions, minimizing detail loss during feature reconstruction. Finally, the Efficient Downsampling Convolution (EDC) utilizes a dual-branch architecture to balance channel compression with semantic preservation. Extensive evaluations on public datasets demonstrate that Edge Aware-YOLO significantly outperforms state-of-the-art models. On MS COCO, it achieves 56.3% mAP50 and 40.5% mAP50–95 (gains of 1.5% and 1.0%) with only 2.4M parameters and 5.8 GFLOPs, surpassing advanced models like YOLOv11.
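The orientation-adjustable kernels in VSCIB echo the classic steerable-filter idea: a first-derivative kernel at any angle is a cosine/sine mix of the horizontal and vertical kernels. A toy version with plain Sobel kernels, not the paper's implementation:

```python
import numpy as np

SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])  # responds to vertical edges
SOBEL_Y = SOBEL_X.T                     # responds to horizontal edges

def oriented_sobel(theta):
    """Steer the Sobel derivative kernel to angle theta (radians)."""
    return np.cos(theta) * SOBEL_X + np.sin(theta) * SOBEL_Y
```

Convolving with `oriented_sobel(theta)` responds most strongly to intensity edges perpendicular to the angle theta, which is what lets a single learned kernel family adapt to edges at arbitrary orientations.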
(This article belongs to the Topic Intelligent Image Processing Technology)

24 pages, 69667 KB  
Article
YOLO-ELS: A Lightweight Cherry Tomato Maturity Detection Algorithm
by Zhimin Tong, Yu Zhou, Changhao Li, Changqing Cai and Lihong Rong
Appl. Sci. 2026, 16(2), 1043; https://doi.org/10.3390/app16021043 - 20 Jan 2026
Viewed by 209
Abstract
Within the domain of intelligent picking robotics, fruit recognition and positioning are essential. Challenging conditions such as varying light, occlusion, and limited edge-computing power compromise fruit maturity detection. To tackle these issues, this paper proposes a lightweight algorithm, YOLO-ELS, based on YOLOv8n. Specifically, [...] Read more.
Within the domain of intelligent picking robotics, fruit recognition and positioning are essential. Challenging conditions such as varying light, occlusion, and limited edge-computing power compromise fruit maturity detection. To tackle these issues, this paper proposes a lightweight algorithm, YOLO-ELS, based on YOLOv8n. Specifically, we reconstruct the backbone by replacing the bottlenecks in the C2f structure with Edge-Information-Enhanced Modules (EIEM) to prioritize morphological cues and filter background redundancy. Furthermore, a Large Separable Kernel Attention (LSKA) mechanism is integrated into the SPPF layer to expand the effective receptive field for multi-scale targets. To mitigate occlusion-induced errors, a Spatially Enhanced Attention Module (SEAM) is incorporated into the decoupled detection head to enhance feature responses in obscured regions. Finally, the Inner-GIoU loss is adopted to refine bounding box regression and accelerate convergence. Experimental results demonstrate that compared to the YOLOv8n baseline, the proposed YOLO-ELS achieves a 14.8% reduction in GFLOPs and a 2.3% decrease in parameters, while attaining precision, recall, and mAP@50 of 92.7%, 83.9%, and 92.0%, respectively. When compared with mainstream models such as DETR, Faster-RCNN, SSD, TOOD, YOLOv5s, and YOLO11n, the mAP@50 is improved by 7.0%, 4.7%, 11.4%, 8.6%, 3.1%, and 3.2%. Deployment tests on the NVIDIA Jetson Orin Nano Super edge platform yield an inference latency of 25.2 ms and a detection speed of 28.2 FPS, successfully meeting the real-time operational requirements of automated harvesting systems. These findings confirm that YOLO-ELS effectively balances high detection accuracy with lightweight architecture, providing a robust technical foundation for intelligent fruit picking in resource-constrained greenhouse environments. Full article
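The GIoU term underlying the Inner-GIoU loss adopted here can be computed in a few lines. This sketch shows plain GIoU for boxes in (x1, y1, x2, y2) format; the "Inner" variant's auxiliary scaled boxes are omitted, and a training loss would typically be 1 − GIoU.

```python
def giou(a, b):
    """Generalized IoU of two boxes (x1, y1, x2, y2): IoU minus the
    fraction of the smallest enclosing box not covered by the union.
    Lies in [-1, 1]; unlike plain IoU, disjoint boxes still get a
    distance-dependent penalty, hence a useful gradient."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest axis-aligned box enclosing both inputs.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c_area - union) / c_area

print(giou((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 1.0
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes -> negative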
(This article belongs to the Section Agricultural Science and Technology)

18 pages, 4205 KB  
Article
Research on Field Weed Target Detection Algorithm Based on Deep Learning
by Ziyang Chen, Le Wu, Zhenhong Jia, Jiajia Wang, Gang Zhou and Zhensen Zhang
Sensors 2026, 26(2), 677; https://doi.org/10.3390/s26020677 - 20 Jan 2026
Viewed by 255
Abstract
Weed detection algorithms based on deep learning are crucial for smart agriculture, and the YOLO series is widely adopted for its efficiency. However, existing YOLO algorithms struggle to maintain high accuracy while keeping parameter counts and computational cost low when detecting occluded [...] Read more.
Weed detection algorithms based on deep learning are crucial for smart agriculture, and the YOLO series is widely adopted for its efficiency. However, existing YOLO algorithms struggle to maintain high accuracy while keeping parameter counts and computational cost low when detecting occluded or overlapping weeds. To address this challenge, this paper proposes SSS-YOLO, a target detection algorithm based on YOLOv9t. First, the SCB (Spatial Channel Conv Block) module uses large-kernel convolution to capture long-range dependencies, associates occluded weed regions with unobstructed areas, and enhances features of unobstructed regions through inter-channel relationships. Second, the SPPF EGAS (Spatial Pyramid Pooling Fast Edge Gaussian Aggregation Super) module applies multi-scale max pooling to extract hierarchical contextual features, leverages large receptive fields to capture background information around occluded objects, and infers features of weed regions obscured by crops. Finally, the EMSN (Efficient Multi-Scale Spatial-Feedforward Network) module reconstructs semantic information of occluded regions through contextual reasoning, suppressing background vegetation interference while preserving visible regional details. Experiments on both our self-built dataset and the publicly available Cotton WeedDet12 dataset demonstrate that the proposed method achieves significant performance improvements over existing algorithms. Full article
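The multi-scale max pooling that SPPF-style modules (including SPPF EGAS) build on has a neat property worth sketching: applying a stride-1 k×k max pool repeatedly is equivalent to single pools with ever larger kernels (twice with k=5 equals once with 9×9, three times equals 13×13), so a cascade yields growing receptive fields cheaply, and the intermediate maps are concatenated as multi-scale context. This NumPy toy demonstrates only that pooling cascade, not the module's Edge Gaussian Aggregation additions.

```python
import numpy as np

def maxpool_same(x, k=5):
    """Stride-1 max pool with -inf edge padding (keeps spatial size)."""
    p = k // 2
    xp = np.pad(x, p, constant_values=-np.inf)
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

x = np.zeros((11, 11)); x[5, 5] = 1.0   # single central activation
p1 = maxpool_same(x)                     # 5x5 support around the peak
p2 = maxpool_same(p1)                    # equivalent to one 9x9 pool
p3 = maxpool_same(p2)                    # equivalent to one 13x13 pool
feat = np.stack([x, p1, p2, p3])         # SPPF-style multi-scale stack
```

Tracking how far the single activation spreads (25, then 81, then the full clipped grid) makes the receptive-field growth of the cascade concrete, which is exactly what lets such modules gather background context around occluded objects.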
(This article belongs to the Section Smart Agriculture)
