Search Results (145)

Search Parameters:
Keywords = Gaussian process prior

14 pages, 1214 KB  
Article
Microwave-Enabled Two-Step Scheme for Continuous Variable Quantum Communications in Integrated Superconducting
by Yun Mao, Lei Mao, Wanyi Wang, Yijun Wang, Hang Zhang and Ying Guo
Mathematics 2025, 13(20), 3263; https://doi.org/10.3390/math13203263 - 12 Oct 2025
Viewed by 142
Abstract
Quantum secure direct communication (QSDC) enables the direct transmission of secure messages between two participants without a prior key exchange, offering a clear advantage in transmission security. Traditional implementations usually focus on discrete-variable (DV) systems, whereas the continuous-variable (CV) counterpart has attracted much attention due to its compatibility with existing optical infrastructure. To address practical deployment in harsh environments, we propose a microwave-based scheme for CV-QSDC that leverages entangled microwave quantum states sent through free-space channels in cryogenic environments. The two-step scheme is designed for secure direct communication: classical messages are encoded using Gaussian modulation and then transmitted via displacement operations on microwave quantum states. The data processing procedure involves microwave entangled state generation, channel detection, and parameter estimation, among other steps. Simulation results demonstrate the feasibility of the microwave-based CV-QSDC, highlighting its potential for secure communication in integrated superconducting and solid-state quantum technologies. Full article
(This article belongs to the Special Issue Quantum Information, Cryptography and Computation)
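A minimal numpy sketch of the Gaussian-modulation step the abstract describes: classical values are encoded as Gaussian-distributed displacements, sent through a lossy, noisy channel, and the receiver estimates the channel parameters from the correlations. The modulation variance, transmittance, and excess-noise values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper)
V_mod = 4.0          # Gaussian modulation variance (shot-noise units)
T = 0.6              # channel transmittance
xi = 0.05            # excess noise at the channel input
n_symbols = 100_000

# Alice encodes classical data as Gaussian-distributed displacements (x, p)
x_a = rng.normal(0.0, np.sqrt(V_mod), n_symbols)
p_a = rng.normal(0.0, np.sqrt(V_mod), n_symbols)

# Lossy, noisy channel: attenuation plus shot noise and excess noise
noise_var = 1.0 + T * xi
x_b = np.sqrt(T) * x_a + rng.normal(0.0, np.sqrt(noise_var), n_symbols)
p_b = np.sqrt(T) * p_a + rng.normal(0.0, np.sqrt(noise_var), n_symbols)

# Parameter estimation step: recover transmittance and excess noise from the data
T_hat = (np.cov(x_a, x_b)[0, 1] / np.var(x_a)) ** 2
xi_hat = (np.var(x_b) - T_hat * np.var(x_a) - 1.0) / T_hat
print(f"estimated T = {T_hat:.3f}, estimated excess noise = {xi_hat:.3f}")
```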

24 pages, 34370 KB  
Article
A Semi-Automatic and Visual Leaf Area Measurement System Integrating Hough Transform and Gaussian Level-Set Method
by Linjuan Wang, Chengyi Hao, Xiaoying Zhang, Wenfeng Guo, Zhifang Bi, Zhaoqing Lan, Lili Zhang and Yuanhuai Han
Agriculture 2025, 15(19), 2101; https://doi.org/10.3390/agriculture15192101 - 9 Oct 2025
Viewed by 264
Abstract
Accurate leaf area measurement is essential for plant growth monitoring and ecological research; however, it is often challenged by perspective distortion and color inconsistencies resulting from variations in shooting conditions and plant status. To address these issues, this study proposes a visual and semi-automatic measurement system. The system utilizes Hough transform-based perspective transformation to correct perspective distortions and incorporates manually sampled points to obtain prior color information, effectively mitigating color inconsistency. Based on this prior knowledge, the level-set function is automatically initialized. The leaf extraction is achieved through level-set curve evolution that minimizes an energy function derived from a multivariate Gaussian distribution model, and the evolution process allows visual monitoring of the leaf extraction progress. Experimental results demonstrate robust performance under diverse conditions: the standard deviation remains below 1 cm², the relative error is under 1%, the coefficient of variation is less than 3%, and processing time is under 10 s for most images. Compared to the traditional labor-intensive and time-consuming manual photocopy-weighing approach, as well as OpenPheno (which lacks parameter adjustability) and ImageJ 1.54g (whose results are highly operator-dependent), the proposed system provides a more flexible, controllable, and robust semi-automatic solution. It significantly reduces operational barriers while enhancing measurement stability, demonstrating considerable practical application value. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
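A small numpy sketch of the prior-color idea in the abstract: manually sampled leaf pixels fit a multivariate Gaussian color model whose log-likelihood could seed the level-set initialization. This is not the authors' implementation; the image, sampled region, and threshold are placeholders.

```python
import numpy as np

def gaussian_color_model(samples):
    """Fit a multivariate Gaussian to manually sampled leaf pixels (N x 3 RGB)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)   # regularize covariance
    cov_inv = np.linalg.inv(cov)
    logdet = np.linalg.slogdet(cov)[1]
    def log_likelihood(image):
        diff = image.reshape(-1, 3) - mu
        maha = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # Mahalanobis distances
        ll = -0.5 * (maha + logdet + 3 * np.log(2 * np.pi))
        return ll.reshape(image.shape[:2])
    return log_likelihood

# Usage sketch: initialize the level-set region from high-likelihood pixels
rng = np.random.default_rng(1)
image = rng.random((240, 320, 3))                 # stand-in for a perspective-corrected photo
samples = image[100:110, 150:160].reshape(-1, 3)  # stand-in for user-clicked leaf points
ll = gaussian_color_model(samples)(image)
initial_mask = ll > np.quantile(ll, 0.9)          # illustrative threshold
```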

44 pages, 3213 KB  
Systematic Review
A Systematic Literature Review of Machine Learning Techniques for Observational Constraints in Cosmology
by Luis Rojas, Sebastián Espinoza, Esteban González, Carlos Maldonado and Fei Luo
Galaxies 2025, 13(5), 114; https://doi.org/10.3390/galaxies13050114 - 9 Oct 2025
Viewed by 284
Abstract
This paper presents a systematic literature review focusing on the application of machine learning techniques for deriving observational constraints in cosmology. The goal is to evaluate and synthesize existing research to identify effective methodologies, highlight gaps, and propose future research directions. Our review identifies several key findings: (1) Various machine learning techniques, including Bayesian neural networks, Gaussian processes, and deep learning models, have been applied to cosmological data analysis, improving parameter estimation and handling large datasets. However, models achieving significant computational speedups often exhibit worse confidence regions compared to traditional methods, emphasizing the need for future research to enhance both efficiency and measurement precision. (2) Traditional cosmological methods, such as those using Type Ia Supernovae, baryon acoustic oscillations, and cosmic microwave background data, remain fundamental, but most studies focus narrowly on specific datasets. We recommend broader dataset usage to fully validate alternative cosmological models. (3) The reviewed studies mainly address the H0 tension, leaving other cosmological challenges—such as the cosmological constant problem, warm dark matter, phantom dark energy, and others—unexplored. (4) Hybrid methodologies combining machine learning with Markov chain Monte Carlo offer promising results, particularly when machine learning techniques are used to solve differential equations, such as Einstein Boltzmann solvers, prior to Markov chain Monte Carlo models, accelerating computations while maintaining precision. (5) There is a significant need for standardized evaluation criteria and methodologies, as variability in training processes and experimental setups complicates result comparability and reproducibility. (6) Our findings confirm that deep learning models outperform traditional machine learning methods for complex, high-dimensional datasets, underscoring the importance of clear guidelines to determine when the added complexity of learning models is warranted. Full article
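A minimal scikit-learn sketch of the kind of Gaussian-process reconstruction the review surveys: nonparametric regression of H(z) from mock expansion-rate data, with the extrapolated value at z = 0 read off as an H0 estimate. The data points, uncertainties, and kernel choice are illustrative, not a real compilation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# Mock cosmic-chronometer-style data (illustrative values only)
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.2, 1.5, 1.8])
Hz = np.array([69.0, 75.0, 83.0, 92.0, 102.0, 116.0, 133.0, 150.0])
sigma = np.full_like(Hz, 5.0)

kernel = C(100.0, (1e-1, 1e4)) * RBF(length_scale=1.0, length_scale_bounds=(0.1, 10.0))
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2, normalize_y=True)
gp.fit(z.reshape(-1, 1), Hz)

# Reconstruct H(z) on a grid and read off H0 = H(z = 0) with its uncertainty
z_grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
mean, std = gp.predict(z_grid, return_std=True)
print(f"GP-reconstructed H0 = {mean[0]:.1f} +/- {std[0]:.1f} km/s/Mpc")
```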

20 pages, 1520 KB  
Article
Sensor-Driven Localization of Airborne Contaminant Sources via the Sandpile–Advection Model and (1 + 1)-Evolution Strategy
by Miroslaw Szaban and Anna Wawrzynczak
Sensors 2025, 25(19), 6215; https://doi.org/10.3390/s25196215 - 7 Oct 2025
Viewed by 433
Abstract
The primary aim of this study is to develop an effective decision-support system for managing crises related to the release of hazardous airborne substances. Such incidents, which can arise from industrial accidents or intentional releases, necessitate the rapid identification of contaminant sources to enable timely response measures. This work focuses on a novel approach that integrates a modified Sandpile model with advection and employs the (1 + 1)-Evolution Strategy to solve the inverse problem of source localization. The initial section of this paper reviews existing methods for simulating atmospheric dispersion and reconstructing source locations. In the following sections, we describe the architecture of the proposed system, the modeling assumptions, and the experimental framework. A key feature of the method presented here is its reliance solely on concentration measurements obtained from a distributed network of sensors, eliminating the need for prior knowledge of the source location, release time, or emission strength. The system was validated through a two-stage process using synthetic data generated by a Gaussian dispersion model. Preliminary experiments were conducted to support model calibration and refinement, followed by formal tests to evaluate localization accuracy and robustness. Each test case was completed in under 20 min on a standard laptop, demonstrating the algorithm’s high computational efficiency. The results confirm that the proposed (1 + 1)-ES Sandpile model can effectively reconstruct source parameters, staying within the resolution limits of the sensor grid. The system’s speed, simplicity, and reliance exclusively on sensor data make it a promising solution for real-time environmental monitoring and emergency response applications. Full article
(This article belongs to the Collection Sensors for Air Quality Monitoring)
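A toy sketch of the inverse problem the abstract describes: a simplified Gaussian dispersion forward model generates synthetic sensor readings, and a (1 + 1)-Evolution Strategy with 1/5-success-rule step adaptation searches for the source position and strength. The plume formula, sensor layout, and noise level are assumptions, not the paper's Sandpile-advection model.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_plume(src_x, src_y, q, sensors, sigma=15.0):
    """Simplified 2-D Gaussian dispersion: concentration at each sensor location."""
    d2 = (sensors[:, 0] - src_x) ** 2 + (sensors[:, 1] - src_y) ** 2
    return q * np.exp(-d2 / (2.0 * sigma ** 2))

# Synthetic "measurements" from a hidden source, used to test the inversion
sensors = rng.uniform(0, 100, size=(30, 2))
true_params = np.array([62.0, 38.0, 5.0])              # x, y, emission strength
observed = gaussian_plume(*true_params, sensors) + rng.normal(0, 0.02, 30)

def misfit(p):
    return np.sum((gaussian_plume(*p, sensors) - observed) ** 2)

# (1 + 1)-Evolution Strategy with 1/5-success-rule step-size adaptation
parent = np.array([50.0, 50.0, 1.0])
step = np.array([20.0, 20.0, 2.0])
f_parent = misfit(parent)
for _ in range(2000):
    child = parent + step * rng.standard_normal(3)
    f_child = misfit(child)
    if f_child < f_parent:                  # successful mutation: accept and widen the search
        parent, f_parent = child, f_child
        step *= 1.1
    else:                                   # failure: shrink the mutation step
        step *= 0.97
print("estimated (x, y, q):", np.round(parent, 2))
```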

23 pages, 2257 KB  
Article
A Deviation Correction Technique Based on Particle Filtering Combined with a Dung Beetle Optimizer with the Improved Model Predictive Control for Vertical Drilling
by Abobaker Albabo, Guojun Wen, Siyi Cheng, Asaad Mustafa and Wangde Qiu
Appl. Sci. 2025, 15(19), 10773; https://doi.org/10.3390/app151910773 - 7 Oct 2025
Viewed by 210
Abstract
This study addresses trajectory deviation in vertical drilling, where measurement and process errors remain the primary sources of error and can easily drive the inclination angle beyond the desired bounds. Existing methods have complementary weaknesses: Extended Kalman Filters (EKFs) estimate non-Gaussian noise poorly, while classical particle filters (PFs) struggle to handle large measurement errors. To solve these problems, we develop a new deviation correction mechanism that combines a dung beetle optimizer particle filter (DBOPF) with an improved Model Predictive Controller (MPC). The DBOPF uses prior knowledge and an optimization process to improve the precision of state estimation and reduces noise more effectively than traditional filters. The improved MPC introduces flexible constraints and sigmoid-based weight adjustments that keep solutions available when the inclination angle exceeds its threshold and dynamically prioritize the control objectives. Simulation results show that, in a noisy drilling environment, the approach corrects the trajectory and controls the inclination angle more effectively than conventional MPC and other optimization-based filters such as PSO and SSA. Full article
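For context, a generic bootstrap particle filter for a scalar inclination-angle state, the kind of estimator the DBOPF builds on before its optimizer-based refinement. The random-walk process model, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(measurements, n_particles=500, process_std=0.05, meas_std=0.5):
    """Bootstrap particle filter for a slowly drifting inclination angle (degrees)."""
    particles = rng.normal(0.0, 1.0, n_particles)     # initial spread around vertical
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        # Propagate with a random-walk process model
        particles += rng.normal(0.0, process_std, n_particles)
        # Reweight by the measurement likelihood
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights += 1e-300
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Systematic resampling to avoid weight degeneracy
        positions = (np.arange(n_particles) + rng.random()) / n_particles
        cum = np.cumsum(weights)
        cum[-1] = 1.0
        particles = particles[np.searchsorted(cum, positions)]
        weights.fill(1.0 / n_particles)
    return np.array(estimates)

# Usage: noisy inclination measurements around a slow drift
true_angle = np.cumsum(rng.normal(0.0, 0.05, 200))
estimates = particle_filter(true_angle + rng.normal(0.0, 0.5, 200))
```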

20 pages, 611 KB  
Article
An Adjusted CUSUM-Based Method for Change-Point Detection in Two-Phase Inverse Gaussian Degradation Processes
by Mei Li, Tian Fu and Qian Li
Mathematics 2025, 13(19), 3167; https://doi.org/10.3390/math13193167 - 2 Oct 2025
Viewed by 210
Abstract
Degradation data plays a crucial role in the reliability assessment and condition monitoring of engineering systems. The stage-wise changes in degradation rates often signal turning points in system performance or potential fault risks. To address the issue of structural changes during the degradation process, this paper constructs a degradation modeling framework based on a two-stage Inverse Gaussian (IG) process and proposes a change-point detection method based on an adjusted CUSUM (cumulative sum) statistic to identify potential stage changes in the degradation path. This method does not rely on complex prior information and constructs statistics by accumulating deviations, utilizing a binary search approach to achieve accurate change-point localization. In simulation experiments, the proposed method demonstrated superior detection performance compared to the classical likelihood ratio method and modified information criterion, verified through a combination of experiments with different change-point positions and degradation rates. Finally, the method was applied to real degradation data of a hydraulic piston pump, successfully identifying two structural change points during the degradation process. Based on these change points, the degradation stages were delineated, thereby enhancing the model’s ability to characterize the true degradation path of the equipment. Full article
(This article belongs to the Special Issue Reliability Analysis and Statistical Computing)
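A plain CUSUM scan over simulated two-phase inverse-Gaussian degradation increments, showing the core idea of accumulating deviations to locate a rate change. The paper's adjusted statistic and binary-search localization are more refined; the simulation parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def cusum_change_point(increments):
    """Locate the most likely change in mean degradation rate via a CUSUM scan."""
    x = np.asarray(increments, dtype=float)
    n = len(x)
    s = np.cumsum(x - x.mean())                    # cumulative deviations from the overall mean
    k_hat = int(np.argmax(np.abs(s[:-1]))) + 1     # candidate change point (split index)
    stat = np.max(np.abs(s[:-1])) / (x.std(ddof=1) * np.sqrt(n))
    return k_hat, stat

# Two-phase inverse Gaussian increments: the degradation rate rises after increment 60
phase1 = rng.wald(mean=1.0, scale=4.0, size=60)
phase2 = rng.wald(mean=2.5, scale=4.0, size=40)
k, stat = cusum_change_point(np.concatenate([phase1, phase2]))
print(f"estimated change point at increment {k}, scaled CUSUM statistic {stat:.2f}")
```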

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Viewed by 413
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images, that provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the issue of blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirement, significantly lowering the adoption barrier for 3D VTON technology. Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)

10 pages, 761 KB  
Proceeding Paper
Nonparametric FBST for Validating Linear Models
by Rodrigo F. L. Lassance, Julio M. Stern and Rafael B. Stern
Phys. Sci. Forum 2025, 12(1), 2; https://doi.org/10.3390/psf2025012002 - 24 Sep 2025
Viewed by 166
Abstract
In Bayesian analysis, testing for linearity requires placing a prior on the entire space of potential regression functions. This poses a problem for many standard tests, as assigning positive prior probability to such a hypothesis is challenging. The Full Bayesian Significance Test (FBST) sidesteps this issue, standing out for also being logically coherent and offering a measure of evidence against H0, although its application to nonparametric settings is still limited. In this work, we use Gaussian process priors to derive FBST procedures that evaluate general linearity assumptions, such as testing the adherence of data to linear models and performing variable selection within them. We also make use of pragmatic hypotheses to verify whether the data might be compatible with a linear model once factors such as measurement errors or utility judgments are accounted for. This contribution extends the theory of the FBST, allowing for its application in nonparametric settings and requiring, at most, simple optimization procedures to reach the desired conclusion. Full article
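A rough scikit-learn sketch of one ingredient the paper builds on: posterior draws from a Gaussian-process regression compared against the best linear fit, with the fraction of draws lying inside a tolerance band serving as a crude stand-in for a pragmatic-hypothesis check. This is not the FBST e-value computation; the data, kernel, and tolerance are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)

# Toy data: nearly linear with mild curvature and noise
x = np.linspace(0, 1, 40).reshape(-1, 1)
y = 2.0 * x.ravel() + 0.1 * np.sin(6 * x.ravel()) + rng.normal(0, 0.1, 40)

# GP posterior over regression functions (the nonparametric prior of the paper)
gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(0.01), normalize_y=True).fit(x, y)
draws = gp.sample_y(x, n_samples=500, random_state=0)        # shape (40, 500)

# Best linear fit and a tolerance epsilon defining a pragmatic linear hypothesis
beta = np.polyfit(x.ravel(), y, 1)
linear = np.polyval(beta, x.ravel())[:, None]
eps = 0.15                                                   # illustrative tolerance
within = np.all(np.abs(draws - linear) <= eps, axis=0)
print(f"posterior draws compatible with a linear model: {within.mean():.2%}")
```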

23 pages, 5458 KB  
Article
Global Prior-Guided Distortion Representation Learning Network for Remote Sensing Image Blind Super-Resolution
by Guanwen Li, Ting Sun, Shijie Yu and Siyao Wu
Remote Sens. 2025, 17(16), 2830; https://doi.org/10.3390/rs17162830 - 14 Aug 2025
Viewed by 2996
Abstract
Most existing deep learning-based super-resolution (SR) methods for remote sensing images rely on predefined degradation assumptions (e.g., bicubic downsampling). However, when real-world degradations deviate from these assumptions, their performance deteriorates significantly. Moreover, explicit degradation estimation approaches based on iterative schemes inevitably lead to accumulated estimation errors and time-consuming processes. In this paper, instead of explicitly estimating degradation types, we first innovatively introduce an MSCN_G coefficient to capture global prior information corresponding to different distortions. Subsequently, distortion-enhanced representations are implicitly estimated through contrastive learning and embedded into a super-resolution network equipped with multiple distortion decoders (D-Decoder). Furthermore, we propose a distortion-related channel segmentation (DCS) strategy that reduces the network’s parameters and computation (FLOPs). We refer to this Global Prior-guided Distortion-enhanced Representation Learning Network as GDRNet. Experiments on both synthetic and real-world remote sensing images demonstrate that our GDRNet outperforms state-of-the-art blind SR methods for remote sensing images in terms of overall performance. Under the experimental condition of anisotropic Gaussian blurring without added noise, with a kernel width of 1.2 and an upscaling factor of 4, the super-resolution reconstruction of remote sensing images on the NWPU-RESISC45 dataset achieves a PSNR of 28.98 dB and SSIM of 0.7656. Full article
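A short sketch of the standard mean-subtracted contrast-normalized (MSCN) computation that distortion-aware coefficients of this kind build on; the paper's global MSCN_G variant is not reproduced here, and the Gaussian window width is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7.0 / 6.0, c=1e-3):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                       # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2       # local variance
    sigma_local = np.sqrt(np.clip(var, 0.0, None))
    return (image - mu) / (sigma_local + c)

# Usage: distortion-sensitive statistics of the MSCN map (e.g., variance, kurtosis)
rng = np.random.default_rng(6)
patch = rng.random((128, 128))
print("MSCN variance:", mscn_coefficients(patch).var())
```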

25 pages, 434 KB  
Article
The Impact of Digitalization on Carbon Emission Efficiency: An Intrinsic Gaussian Process Regression Approach
by Yongtong Hu, Jiaqi Xu and Tao Liu
Sustainability 2025, 17(14), 6551; https://doi.org/10.3390/su17146551 - 17 Jul 2025
Cited by 1 | Viewed by 632
Abstract
This study introduces an intrinsic Gaussian Process Regression (iGPR) model for the first time, which incorporates non-Euclidean spatial covariates via a Gaussian process prior to analyzing the relationship between digitalization and carbon emission efficiency. The iGPR model’s hierarchical design embeds a Gaussian process as a flexible spatial random effect with a heat-kernel-based covariance function to capture the manifold geometry of spatial features. To enable tractable inference, we employ a penalized maximum-likelihood estimation (PMLE) approach to jointly estimate regression coefficients and covariance hyperparameters. Using a panel dataset linking a national digitalization (modernization) index to carbon emission efficiency, the empirical analysis demonstrates that digitalization has a significantly positive impact on carbon emission efficiency while accounting for spatial heterogeneity. The iGPR model also exhibits superior predictive accuracy compared to state-of-the-art machine learning methods (including XGBoost, random forest, support vector regression, ElasticNet, and a standard Gaussian process regression), achieving the lowest mean squared error (MSE = 0.0047) and an average prediction error near zero. Robustness checks include instrumental-variable GMM estimation to address potential endogeneity across the efficiency distribution and confirm the stability of the estimated positive effect of digitalization. Full article
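A compact numpy/scipy sketch of the estimation idea described in the abstract: a linear fixed effect plus a Gaussian-process spatial random effect, with regression coefficients profiled out and kernel hyperparameters estimated by penalized maximum likelihood. A squared-exponential kernel on synthetic coordinates stands in for the paper's heat-kernel covariance on the manifold; all data and the penalty weight are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(7)
n = 120
S = rng.uniform(0, 1, (n, 2))                              # spatial coordinates (manifold stand-in)
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # intercept + digitalization index
d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
K_true = 0.3 * np.exp(-d2 / (2 * 0.2 ** 2))
y = X @ np.array([0.5, 0.8]) + rng.multivariate_normal(np.zeros(n), K_true + 0.05 * np.eye(n))

def neg_pml(theta, lam=1e-2):
    """Penalized negative log-likelihood in (log tau2, log ell, log noise)."""
    tau2, ell, noise = np.exp(theta)
    K = tau2 * np.exp(-d2 / (2 * ell ** 2)) + noise * np.eye(n)
    cf = cho_factor(K)
    XtKi = cho_solve(cf, X).T                               # X^T K^{-1}
    beta = np.linalg.solve(XtKi @ X, XtKi @ y)              # profile out beta by GLS
    r = y - X @ beta
    nll = 0.5 * (r @ cho_solve(cf, r) + 2 * np.sum(np.log(np.diag(cf[0]))))
    return nll + lam * np.sum(theta ** 2)                   # ridge penalty on log-hyperparameters

res = minimize(neg_pml, x0=np.log([1.0, 0.5, 0.1]), method="L-BFGS-B")
print("estimated (tau2, ell, noise):", np.round(np.exp(res.x), 3))
```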

17 pages, 3854 KB  
Article
Research on Signal Processing Algorithms Based on Wearable Laser Doppler Devices
by Yonglong Zhu, Yinpeng Fang, Jinjiang Cui, Jiangen Xu, Minghang Lv, Tongqing Tang, Jinlong Ma and Chengyao Cai
Electronics 2025, 14(14), 2761; https://doi.org/10.3390/electronics14142761 - 9 Jul 2025
Viewed by 480
Abstract
Wearable laser Doppler devices are susceptible to complex noise interferences, such as Gaussian white noise, baseline drift, and motion artifacts, with motion artifacts significantly impacting clinical diagnostic accuracy. Addressing the limitations of existing denoising methods—including traditional adaptive filtering that relies on prior noise information, modal decomposition techniques that depend on empirical parameter optimization and are prone to modal aliasing, wavelet threshold functions that struggle to balance signal preservation with smoothness, and the high computational complexity of deep learning approaches—this paper proposes an ISSA-VMD-AWPTD denoising algorithm. This innovative approach integrates an improved sparrow search algorithm (ISSA), variational mode decomposition (VMD), and adaptive wavelet packet threshold denoising (AWPTD). The ISSA is enhanced through cubic chaotic mapping, butterfly optimization, and sine–cosine search strategies, targeting the minimization of the envelope entropy of modal components for adaptive optimization of VMD’s decomposition levels and penalty factors. A correlation coefficient-based selection mechanism is employed to separate target and mixed modes effectively, allowing for the efficient removal of noise components. Additionally, an exponential adaptive threshold function is introduced, combining wavelet packet node energy proportion analysis to achieve efficient signal reconstruction. By leveraging the rapid convergence property of ISSA (completing parameter optimization within five iterations), the computational load of traditional VMD is reduced while maintaining the denoising accuracy. Experimental results demonstrate that for a 200 Hz test signal, the proposed algorithm achieves a signal-to-noise ratio (SNR) of 24.47 dB, an improvement of 18.8% over the VMD method (20.63 dB), and a root-mean-square-error (RMSE) of 0.0023, a reduction of 69.3% compared to the VMD method (0.0075). The processing results for measured human blood flow signals achieve an SNR of 24.11 dB, a RMSE of 0.0023, and a correlation coefficient (R) of 0.92, all outperforming other algorithms, such as VMD and WPTD. This study effectively addresses issues related to parameter sensitivity and incomplete noise separation in traditional methods, providing a high-precision and low-complexity real-time signal processing solution for wearable devices. However, the parameter optimization still needs improvement when dealing with large datasets. Full article
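Envelope entropy is the fitness the ISSA minimizes when tuning VMD's decomposition level and penalty factor; below is a plain numpy/scipy computation of that quantity for a candidate mode. The VMD and ISSA themselves are not reproduced, and the test signal is only loosely modeled on the paper's 200 Hz example.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_entropy(mode):
    """Shannon entropy of the normalized Hilbert envelope of one decomposed mode."""
    envelope = np.abs(hilbert(mode))
    p = envelope / envelope.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Usage: among candidate VMD parameter settings, prefer the decomposition whose
# modes have the smallest (sparsest) envelope entropy.
t = np.arange(0.0, 1.0, 1.0 / 1000.0)
clean = np.sin(2 * np.pi * 200 * t)               # 200 Hz component
noisy = clean + 0.5 * np.random.default_rng(8).standard_normal(t.size)
print(f"entropy clean: {envelope_entropy(clean):.3f}  noisy: {envelope_entropy(noisy):.3f}")
```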

13 pages, 1700 KB  
Article
A Simple Yet Powerful Hybrid Machine Learning Approach to Aid Decision-Making in Laboratory Experiments
by Bernardo Campos Diocaretz, Ágota Tűzesi and Andrei Herdean
Mach. Learn. Knowl. Extr. 2025, 7(3), 60; https://doi.org/10.3390/make7030060 - 25 Jun 2025
Viewed by 979
Abstract
High-dimensional experimental spaces and resource constraints challenge modern science. We introduce a hybrid machine-learning (ML) framework that combines Ordinary Least Squares (OLS) for global surface estimation, Gaussian Process (GP) regression for uncertainty modelling, expected improvement (EI) for active learning, and K-means clustering for diversifying conditions. We applied this approach to published growth-rate data of the diatom Thalassiosira pseudonana, originally measured across 25 phosphate–temperature conditions. Using the nutrient–temperature model as a simulator, our ML framework located the optimal growth conditions in only 25 virtual experiments—matching the original study’s outcome. Sensitivity analyses further revealed that fewer iterations and controlled batch sizes maintain accuracy even with higher data variability. This demonstrates that ML-guided experimentation can achieve expert-level decision-making without extensive prior data, reducing experimental burden while preserving rigour. Our results highlight the promise of algorithm-assisted experimentation in biology, agriculture, and medicine, marking a shift toward smarter, data-driven scientific workflows. Full article
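A short scikit-learn sketch of the acquisition loop the framework describes: a GP surrogate scores candidate conditions by expected improvement, and K-means keeps each proposed batch diverse. The growth-rate simulator, condition ranges, and batch size are illustrative stand-ins, not the published dataset.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)

def simulator(x):
    """Stand-in growth-rate surface over (phosphate, temperature), both scaled to [0, 1]."""
    return np.exp(-8 * ((x[:, 0] - 0.6) ** 2 + (x[:, 1] - 0.4) ** 2))

X = rng.uniform(0, 1, (5, 2))                       # seed experiments
y = simulator(X)

for _ in range(5):                                  # five batches of virtual experiments
    gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, (2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    z = imp / (sd + 1e-12)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)       # expected improvement
    top = cand[np.argsort(ei)[-200:]]               # high-EI pool
    batch = KMeans(n_clusters=4, n_init=10, random_state=0).fit(top).cluster_centers_
    X = np.vstack([X, batch])
    y = np.concatenate([y, simulator(batch)])

print("best condition found:", X[np.argmax(y)], "growth rate:", round(float(y.max()), 3))
```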

21 pages, 2550 KB  
Article
Enhancing Neural Network Interpretability Through Deep Prior-Guided Expected Gradients
by Su-Ying Guo and Xiu-Jun Gong
Appl. Sci. 2025, 15(13), 7090; https://doi.org/10.3390/app15137090 - 24 Jun 2025
Viewed by 812
Abstract
The increasing adoption of DNNs in critical domains such as healthcare, finance, and autonomous systems underscores the growing importance of explainable artificial intelligence (XAI). In these high-stakes applications, understanding the decision-making processes of models is essential for ensuring trust and safety. However, traditional DNNs often function as “black boxes,” delivering accurate predictions without providing insight into the factors driving their outputs. Expected gradients (EG) is a prominent method for making such explanations by calculating the contribution of each input feature to the final decision. Despite its effectiveness, conventional baselines used in state-of-the-art implementations of EG often lack a clear definition of what constitutes “missing” information. This study proposes DeepPrior-EG, a deep prior-guided EG framework for leveraging prior knowledge to more accurately align with the concept of missingness and enhance interpretive fidelity. It resolves the baseline misalignment by initiating gradient path integration from learned prior baselines, which are derived from the deep features of CNN layers. This approach not only mitigates feature absence artifacts but also amplifies critical feature contributions through adaptive gradient aggregation. This study further introduces two probabilistic prior modeling strategies: a multivariate Gaussian model (MGM) to capture high-dimensional feature interdependencies and a Bayesian nonparametric Gaussian mixture model (BGMM) that autonomously infers mixture complexity for heterogeneous feature distributions. An explanation-driven model retraining paradigm is also implemented to validate the robustness of the proposed framework. Comprehensive evaluations across various qualitative and quantitative metrics demonstrate its superior interpretability. The BGMM variant achieves competitive performance in attribution quality and faithfulness against existing methods. DeepPrior-EG advances the interpretability of complex models within the XAI landscape and unlocks their potential in safety-critical applications. Full article
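A compact PyTorch sketch of the baseline expected-gradients attribution the paper starts from: baselines sampled from a reference set, gradients averaged along random interpolation points. The learned deep-feature baselines and the Gaussian/BGMM priors of DeepPrior-EG are not reproduced; the toy CNN and random references are placeholders.

```python
import torch

def expected_gradients(model, x, baselines, n_samples=50):
    """Expected-gradients attribution for a single input x (shape [C, H, W])."""
    attr = torch.zeros_like(x)
    for _ in range(n_samples):
        base = baselines[torch.randint(len(baselines), (1,))].squeeze(0)  # sampled reference
        alpha = torch.rand(1)                                             # random path position
        point = (base + alpha * (x - base)).clone().requires_grad_(True)
        out = model(point.unsqueeze(0))
        out[0, out.argmax()].backward()                                   # top-class score
        attr += (x - base) * point.grad                                   # path integrand
    return attr / n_samples

# Usage sketch with a toy CNN and random reference images
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10),
)
x = torch.rand(3, 32, 32)
baselines = torch.rand(16, 3, 32, 32)
attributions = expected_gradients(model, x, baselines)
```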

35 pages, 8283 KB  
Article
PIABC: Point Spread Function Interpolative Aberration Correction
by Chanhyeong Cho, Chanyoung Kim and Sanghoon Sull
Sensors 2025, 25(12), 3773; https://doi.org/10.3390/s25123773 - 17 Jun 2025
Cited by 1 | Viewed by 752
Abstract
Image quality in high-resolution digital single-lens reflex (DSLR) systems is degraded by Complementary Metal-Oxide-Semiconductor (CMOS) sensor noise and optical imperfections. Sensor noise becomes pronounced under high-ISO (International Organization for Standardization) settings, while optical aberrations such as blur and chromatic fringing distort the signal. Optical and sensor-level noise are distinct and hard to separate, but prior studies suggest that improving optical fidelity can suppress or mask sensor noise. Upon this understanding, we introduce a framework that utilizes densely interpolated Point Spread Functions (PSFs) to recover high-fidelity images. The process begins by simulating Gaussian-based PSFs as pixel-wise chromatic and spatial distortions derived from real degraded images. These PSFs are then encoded into a latent space to enhance their features and used to generate refined PSFs via similarity-weighted interpolation at each target position. The interpolated PSFs are applied through Wiener filtering, followed by residual correction, to restore images with improved structural fidelity and perceptual quality. We compare our method—based on pixel-wise, physical correction, and densely interpolated PSF at pre-processing—with post-processing networks, including deformable convolutional neural networks (CNNs) that enhance image quality without modeling degradation. Evaluations on DIV2K and RealSR-V3 confirm that our strategy not only enhances structural restoration but also more effectively suppresses sensor-induced artifacts, demonstrating the benefit of explicit physical priors for perceptual fidelity. Full article
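A minimal numpy sketch of the Wiener-filtering step applied once a PSF is available, here with a single synthetic Gaussian PSF standing in for the paper's densely interpolated, pixel-wise PSFs. The noise-to-signal ratio and PSF width are illustrative.

```python
import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter: conj(H) / (|H|^2 + NSR) applied to the blurred image."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Usage: blur a test image with the PSF, add sensor-like noise, then restore it
rng = np.random.default_rng(10)
image = rng.random((128, 128))
psf = gaussian_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
restored = wiener_deconvolve(blurred + rng.normal(0, 0.01, image.shape), psf)
```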

23 pages, 31418 KB  
Article
Sparse Inversion of Gravity and Gravity Gradient Data Using a Greedy Cosine Similarity Search Algorithm
by Luofan Xiong, Zhengyuan Jia, Gang Zhang and Guibin Zhang
Remote Sens. 2025, 17(12), 2060; https://doi.org/10.3390/rs17122060 - 15 Jun 2025
Viewed by 763
Abstract
Joint inversion of gravity and gravity gradient data are of paramount importance in geophysical exploration, as the integration of these datasets enhances subsurface resolution and facilitates the accurate delineation of ore body shapes and boundaries. Conventional regularization methods, such as the L2-norm, frequently yield excessively smooth solutions, which complicates the recovery of sharp boundaries. Furthermore, disparities in data units, magnitudes, and noise levels introduce additional complexities in selecting appropriate weighting functions and inversion parameters. To address these challenges, this study proposes a greedy inversion method based on cosine similarity, which identifies the most relevant cells and reduces the complexity involved in data weighting and parameter selection. Additionally, it incorporates prior information on density limits to achieve a high-resolution and sparse solution. To further enhance the stability and accuracy of the inversion process, a pruning mechanism is introduced to dynamically detect and remove erroneously selected cells, thereby suppressing error propagation. Synthetic model experiments demonstrate that incorporating the pruning mechanism significantly improves inversion accuracy. The method not only accurately resolves models of varying volumes while avoiding local convergence issues in the presence of major anomalies, but also exhibits strong robustness against noise, successfully delineating clear boundaries even when applied to complex composite models contaminated with 10% Gaussian noise. Finally, when applied to the joint inversion of measured gravity and gravity gradient tensor data from the Vinton salt dome, the results closely align with previous studies and actual geological observations. Full article
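A toy matching-pursuit-style loop showing the core idea of selecting forward-operator columns by cosine similarity with the data residual and clipping recovered densities to prior bounds. The real gravity/gradient kernels and the paper's pruning mechanism are not reproduced; the linear system and bounds are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy linear forward problem d = A m: 80 data values, 300 cells, sparse true model
A = rng.normal(size=(80, 300))
A /= np.linalg.norm(A, axis=0)                # unit-norm sensitivity columns
m_true = np.zeros(300)
m_true[[40, 41, 200]] = [0.8, 0.6, -0.5]      # a few anomalous cells (density contrasts)
d = A @ m_true + rng.normal(0, 0.01, 80)

def greedy_cosine_inversion(A, d, n_iter=10, bounds=(-1.0, 1.0)):
    m = np.zeros(A.shape[1])
    support = []
    for _ in range(n_iter):
        r = d - A @ m
        cos = A.T @ r / (np.linalg.norm(r) + 1e-12)   # cosine similarity (unit-norm columns)
        j = int(np.argmax(np.abs(cos)))
        if j not in support:
            support.append(j)
        # Re-fit only the selected cells and clip to the prior density limits
        m_sub, *_ = np.linalg.lstsq(A[:, support], d, rcond=None)
        m = np.zeros(A.shape[1])
        m[support] = np.clip(m_sub, *bounds)
    return m

m_est = greedy_cosine_inversion(A, d)
print("recovered support:", np.flatnonzero(np.abs(m_est) > 0.05))
```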