Search Results (914)

Search Parameters:
Keywords = noise and regularization

21 pages, 1102 KB  
Article
Research on the Application of the Joint Algorithm of Improved Wavelet Denoising and Improved UKF in Radar Measurement Data Processing
by Baolu Yang and Liangming Wang
Modelling 2026, 7(3), 79; https://doi.org/10.3390/modelling7030079 - 23 Apr 2026
Abstract
To address the insufficient parameter estimation accuracy and poor filtering convergence caused by noise in radar trajectory measurement data, this paper proposes a joint framework combining SW-STPSO adaptive wavelet denoising and an improved Unscented Kalman Filter (UKF). First, SW-STPSO preprocesses noisy data using a sliding-window strategy and improved particle swarm optimization to adapt wavelet parameters to local noise characteristics. Then, the improved UKF adopts exponential-decay adaptive Q adjustment and covariance matrix positive-definite regularization to achieve high-precision estimation of ballistic parameters, including position, velocity, and ballistic coefficient. Simulation results show that: (1) SW-STPSO denoising improves subsequent parameter-estimation accuracy by more than 60% compared with the case without denoising; (2) the improved UKF achieves 37% faster convergence and 42% higher stability than the traditional UKF; and (3) the joint scheme reduces the position RMSE, velocity RMSE, and ballistic-coefficient RMSE to 0.92 m, 0.256 m/s, and 0.023 m²/kg, respectively. These results indicate that the proposed method is effective for radar trajectory data processing under the adopted simulation conditions.
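
The two UKF modifications named above are easy to illustrate. A minimal numpy sketch of an exponential-decay process-noise schedule and an eigenvalue-floor covariance regularizer follows; the decay constant, floor value, and function names are our assumptions, not the authors' implementation.

```python
import numpy as np

def regularize_covariance(P, eps=1e-9):
    """Clip eigenvalues so the state covariance stays symmetric positive
    definite (one common way to stabilize a UKF; the paper's exact
    regularization scheme may differ)."""
    P = 0.5 * (P + P.T)            # enforce symmetry first
    w, V = np.linalg.eigh(P)
    w = np.maximum(w, eps)         # floor the eigenvalues
    return V @ np.diag(w) @ V.T

def adaptive_Q(Q0, k, decay=0.05):
    """Exponential-decay process-noise schedule: a large Q early for fast
    convergence, a smaller Q later for steady-state accuracy
    (illustrative constants)."""
    return Q0 * np.exp(-decay * k)

# usage inside a filtering loop
Q0 = np.eye(6)
for k in range(100):
    Qk = adaptive_Q(Q0, k)
    # ... UKF predict/update with Qk, then:
    # P = regularize_covariance(P)
```
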
17 pages, 4066 KB  
Article
An Impact Load History Reconstruction Method for Composite Structures Based on FBG Sensing Data and the GCV Principle
by Jie Zeng, Jihong Xu, Yuntao Xu, Xin Zhao, Shiao Wang, Yanwei Zhou and Yuxun Wang
Sensors 2026, 26(9), 2601; https://doi.org/10.3390/s26092601 - 23 Apr 2026
Abstract
Accurately and promptly acquiring the load history characteristics of impact events on composite aircraft structures is crucial for identifying impact-induced damage and developing high-fidelity digital twin models. To address this need, we propose a method for reconstructing the impact load history on composite structures, leveraging Generalized Cross-Validation (GCV) and a Fiber Bragg Grating (FBG) pattern. An equivalent expansion technique based on discretized time-domain sparse strain sampling is developed to mitigate the local distortion of impact response signals, a common issue arising from the low sampling rates of quasi-distributed FBG. By incorporating Tikhonov regularization, the ill-posed nature of the impact frequency response matrix is effectively managed. Furthermore, an adaptive optimization method based on the GCV criterion is introduced to overcome the limitations of manually selecting regularization parameters and the associated constraints on noise suppression. The results show that the proposed GCV-based reconstruction method achieves an average peak relative error of 11.4% and an average root mean square error of 0.36 N for the reconstructed impact load, demonstrating that the proposed method synergistically enhances both the reconstruction of the overall impact load waveform profile and the precise characterization of transient details, even with low-rate sampling. This provides robust technical support for health monitoring and condition-based maintenance of composite structures.
(This article belongs to the Section Optical Sensors)
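
The GCV criterion for choosing the Tikhonov parameter is standard and compact. The sketch below solves a generic ill-posed least-squares system via SVD and scans lambda by GCV; applying it to an impact frequency-response matrix, as the paper does, is assumed rather than reproduced.

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    """Choose the Tikhonov parameter by Generalized Cross-Validation:
    GCV(lam) = n * ||A x_lam - b||^2 / trace(I - A (A^T A + lam I)^-1 A^T)^2,
    evaluated cheaply through the SVD filter factors f = s^2 / (s^2 + lam)."""
    n = A.shape[0]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    best = (np.inf, None, None)
    for lam in lambdas:
        f = s**2 / (s**2 + lam)
        x = Vt.T @ ((f / s) * Utb)                     # regularized solution
        resid = np.sum(((1 - f) * Utb)**2) + (b @ b - Utb @ Utb)
        gcv = n * resid / (n - f.sum())**2
        if gcv < best[0]:
            best = (gcv, lam, x)
    return best   # (GCV value, chosen lambda, solution)

# usage on a synthetic ill-conditioned system
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
A[:, 1:] += 0.95 * A[:, :-1]                           # correlated columns
b = A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(200)
_, lam, x_hat = tikhonov_gcv(A, b, np.logspace(-6, 2, 50))
```
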

15 pages, 4324 KB  
Article
How Coupling and Noise Transform Quiescent Neurons into Complex Chaotic Oscillations
by Irina Bashkirtseva and Lev Ryashko
Mathematics 2026, 14(8), 1335; https://doi.org/10.3390/math14081335 - 16 Apr 2026
Viewed by 211
Abstract
This paper is devoted to the problem of identifying the mechanisms of hard excitation of oscillations in coupled systems of equilibrium neurons. In this study, a system of two coupled Chialvo neurons is used. For the deterministic model, we studied how increased coupling causes an abrupt transformation of the quiescent neurons into complex oscillations, both regular and chaotic. We show that even in the case when the deterministic system is in equilibrium, similar spike oscillations can be generated by noise. The important role of fractal basins of short and long deterministic transients is discussed. The potential of the principal directions and confidence domain methods for analyzing noise-induced excitation is demonstrated. The phenomena of coherence resonance and the global transition from order to chaos are explored.
(This article belongs to the Section C1: Difference and Differential Equations)
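
The underlying model is the two-dimensional Chialvo map. A minimal simulation of two diffusively coupled, noise-driven copies looks like this; the parameter values and coupling form are illustrative, not the paper's exact regime.

```python
import numpy as np

def coupled_chialvo(n_steps, k=0.03, a=0.89, b=0.6, c=0.28,
                    eps=0.01, sigma=0.0, seed=0):
    """Two diffusively coupled Chialvo map neurons with additive noise:
        x[t+1] = x[t]^2 * exp(y[t] - x[t]) + k + coupling + noise
        y[t+1] = a*y[t] - b*x[t] + c
    These constants put a single neuron near a stable equilibrium
    (illustrative values)."""
    rng = np.random.default_rng(seed)
    x, y = np.array([0.1, 0.2]), np.array([0.1, 0.1])
    traj = np.empty((n_steps, 2))
    for t in range(n_steps):
        coupling = eps * (x[::-1] - x)            # diffusive coupling term
        noise = sigma * rng.standard_normal(2)    # additive white noise
        x, y = x**2 * np.exp(y - x) + k + coupling + noise, a * y - b * x + c
        traj[t] = x
    return traj

quiet = coupled_chialvo(5000, sigma=0.0)     # stays near equilibrium
noisy = coupled_chialvo(5000, sigma=0.02)    # noise may trigger spike trains
```
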

23 pages, 4096 KB  
Article
Prediction of the Surface Quality Obtained by Milling Using Artificial Intelligence Methods
by Andrei Osan, Mihai Banica and Cornel Florian
Coatings 2026, 16(4), 478; https://doi.org/10.3390/coatings16040478 - 16 Apr 2026
Viewed by 265
Abstract
The paper explores the use of artificial neural networks for predicting the surface roughness parameter Ra when finish-milling flat surfaces of C45 steel with a toroidal milling cutter. The experiments were conducted on a 5-axis CNC center, varying three main parameters: cutting speed, feed per tooth, and tool axis tilt angle. In total, 70 surfaces were processed, with multiple measurements of Ra roughness. The data were preprocessed in MATLAB (noise reduction by Z-score and augmentation to 630 values) and used to train an artificial feedforward neural network with Bayesian regularization. The resulting model showed good performance on the dataset and was experimentally validated on three new parameter combinations, processed and measured independently with a 3D scanner. The results confirm the network’s ability to estimate Ra roughness based on varying process parameters. The paper proposes the model as a useful tool for assessing surface quality in finishing milling and recommends extending the experimental base as the main direction of continuation.
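
The preprocessing and model translate naturally into a short script. Below is a hedged Python stand-in: Z-score filtering plus a small MLP with a fixed L2 penalty, since MATLAB's Bayesian regularization (trainbr) adapts its penalty automatically and has no one-line scikit-learn equivalent. The synthetic data and parameter bounds are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def zscore_filter(X, y, thresh=3.0):
    """Drop samples whose target lies more than `thresh` standard deviations
    from the mean (our reading of the paper's noise-reduction step)."""
    z = np.abs((y - y.mean()) / y.std())
    return X[z < thresh], y[z < thresh]

# stand-in data: [cutting speed, feed per tooth, tilt angle] -> Ra
rng = np.random.default_rng(1)
X = rng.uniform([80, 0.05, 0], [220, 0.20, 30], size=(70, 3))
y = 0.4 + 0.002 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2] \
    + 0.05 * rng.standard_normal(70)

X, y = zscore_filter(X, y)
# A fixed L2 penalty (alpha) approximates what trainbr tunes adaptively.
model = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2,
                     max_iter=5000, random_state=0).fit(X, y)
print(model.predict([[150, 0.10, 15]]))   # predicted Ra
```
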

87 pages, 1849 KB  
Article
Statistical Inference for Drift Parameters in Gaussian White Noise Models Driven by Caputo Fractional Dynamics Under Discrete Observation Schemes
by Abdelmalik Keddi and Salim Bouzebda
Symmetry 2026, 18(4), 655; https://doi.org/10.3390/sym18040655 - 14 Apr 2026
Viewed by 197
Abstract
This paper develops a rigorous inferential framework for a class of Gaussian stochastic processes driven by white noise with constant drift, whose temporal evolution is governed by a Caputo fractional derivative of order α ∈ (1/2, 1). The model belongs to the family of fractional Volterra processes, where memory is generated by the dynamics themselves rather than by correlated noise. We derive explicit analytical expressions for the mean, variance, and covariance structure of the solution, thereby characterizing in a precise manner how the fractional order α governs both variance growth and the strength of temporal dependence. In particular, the process exhibits correlated increments and a power-law variance scaling of order t^(2α−1), highlighting the dual role of α as a regularity and memory parameter. Building on this structural analysis, we address the statistical problem of estimating the parameter vector (μ, σ, α) from discrete-time observations. Two complementary procedures are proposed for the estimation of the fractional order: a variance-growth method based on log–log regression of empirical variances, and a wavelet-based estimator exploiting multi-scale scaling properties of the process. For the drift and diffusion parameters (μ, σ), we construct explicit Gaussian pseudo-maximum likelihood estimators derived from the Volterra covariance structure of the increment process. We establish unbiasedness, L²-convergence, strong consistency, and asymptotic normality for all estimators. Furthermore, we derive Berry–Esseen type bounds that quantify the rate of convergence toward the Gaussian law, providing sharp distributional approximations in a genuinely fractional and non-Markovian setting. A Monte Carlo study is carried out, using high-resolution Volterra discretizations, large-scale simulation budgets, covariance-structured linear algebra, and multi-scale diagnostic tools. The numerical experiments confirm the theoretical convergence rates, demonstrate the finite-sample reliability of the estimators, and illustrate the sensitivity of the process dynamics to the fractional order α: smaller values of α produce stronger memory effects and higher variability, while values closer to one lead to smoother and more stable trajectories. The proposed methodology unifies statistical inference for long-memory Gaussian processes with fractional differential stochastic dynamics, offering a coherent analytical and computational framework applicable in areas such as quantitative finance, anomalous diffusion in physics, hydrology, and engineering systems with hereditary effects.
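
The variance-growth estimator is the easier of the two proposed procedures to demonstrate. The sketch below simulates a Riemann–Liouville-type Volterra proxy for the model and recovers α from the slope of log Var X(t) against log t; the discretization is our simplification, not the paper's high-resolution scheme.

```python
import numpy as np

def simulate_volterra(alpha, T=1.0, n=1000, n_paths=500, seed=0):
    """Volterra proxy X(t) = integral of (t - s)^(alpha-1) dW(s), whose
    variance grows like t^(2*alpha - 1)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.arange(1, n + 1) * dt
    dW = rng.standard_normal((n_paths, n)) * np.sqrt(dt)
    lag = t[:, None] - t[None, :] + dt               # t_i - t_j + dt
    K = np.where(lag > 0, lag, 1.0) ** (alpha - 1)   # singular kernel
    K[lag <= 0] = 0.0                                # causality: j <= i only
    return t, dW @ K.T

def estimate_alpha(t, X, skip=50):
    """Log-log regression of empirical variances on time:
    slope = 2*alpha - 1, hence alpha = (slope + 1) / 2. Early times are
    skipped to reduce discretization bias."""
    v = X.var(axis=0)
    slope, _ = np.polyfit(np.log(t[skip:]), np.log(v[skip:]), 1)
    return (slope + 1) / 2

t, X = simulate_volterra(alpha=0.7)
print(estimate_alpha(t, X))   # approximately 0.7
```
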

18 pages, 844 KB  
Article
EGD: Error-Entropy-Guided Distillation for Noisy Multi-View Classification
by Xiaoyu Yang, Yanan Li, Shilin Xu and Yuan Sun
Electronics 2026, 15(8), 1596; https://doi.org/10.3390/electronics15081596 - 10 Apr 2026
Viewed by 276
Abstract
In recent years, multi-view learning has received extensive research interest. Most existing multi-view learning methods rely on well-annotated data to improve decision accuracy, yet noisy labels are ubiquitous in multi-view data due to imperfect annotations. Although some methods achieve promising performance through robust-loss designs and implicit regularization, they neither explicitly model the reliability of the supervision signal nor dynamically correct noisy labels during training, which largely constrains their performance ceiling. To deal with this problem, we propose an Error-Entropy-Guided Distillation network (EGD) for noisy multi-view classification. In this framework, we first design an Error-Entropy (EE) metric to explicitly evaluate the reliability of sample-wise supervision, which serves as the basis for identifying and filtering noisy labels. On this basis, we adopt an EE-guided distillation paradigm: the teacher model provides the student with soft label distributions that are less affected by noisy labels in the early training stage, and periodic memory-clearing and supervision-signal-update strategies prevent the teacher from memorizing errors and accumulating confirmation bias. Meanwhile, the student model learns from the soft supervision of the teacher to capture structured inter-class relationships, and a consistency module enhances the consistency of the student across multiple views. Extensive experiments on five benchmark datasets demonstrate that EGD consistently outperforms state-of-the-art multi-view learning methods under various noise levels.
(This article belongs to the Special Issue Applications in Computer Vision and Pattern Recognition)
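
The exact Error-Entropy formula is the paper's; under our assumptions, its general shape can be sketched as a per-sample reliability score built from error and entropy terms, used to gate hard-label supervision against teacher distillation.

```python
import torch
import torch.nn.functional as F

def reliability_scores(logits, labels):
    """Hypothetical EE-style score: per-sample cross-entropy (error term)
    plus predictive entropy; high scores flag likely-noisy supervision.
    This is an illustrative variant, not the paper's definition."""
    p = logits.softmax(dim=1)
    ce = F.cross_entropy(logits, labels, reduction="none")
    ent = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
    return ce + ent

def distill_loss(student_logits, teacher_logits, labels, scores,
                 tau=2.0, q=0.7):
    """Trust hard labels only for the q most reliable samples; elsewhere
    learn from the teacher's temperature-softened distribution."""
    trusted = scores < scores.quantile(q)
    hard = F.cross_entropy(student_logits[trusted], labels[trusted])
    soft = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                    F.softmax(teacher_logits / tau, dim=1),
                    reduction="batchmean") * tau**2
    return hard + soft
```
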

34 pages, 2037 KB  
Article
Stock Forecasting Based on Informational Complexity Representation: A Framework of Wavelet Entropy, Multiscale Entropy, and Dual-Branch Network
by Guisheng Tian, Chengjun Xu and Yiwen Yang
Entropy 2026, 28(4), 424; https://doi.org/10.3390/e28040424 - 10 Apr 2026
Viewed by 195
Abstract
Stock price sequences are characterized by pronounced nonlinearity, non-stationarity, and multi-scale volatility. They are further influenced by complex, multi-source factors, such as macroeconomic conditions and market behavior, making high-precision forecasting highly challenging. Existing approaches are limited by noise and multi-dimensional market features, as well as difficulties in balancing prediction accuracy with model complexity. To address these challenges, we propose Wavelet Entropy and Cross-Attention Network (WECA-Net), which combines wavelet decomposition with a multimodal cross-attention mechanism. From an information-theoretic perspective, stock price dynamics reflect the time-varying uncertainty and informational complexity of the market. We employ wavelet entropy to quantify the dispersion and uncertainty of energy distribution across frequency bands, and multiscale entropy to measure the scale-dependent complexity and regularity of the time series. These entropy-derived descriptors provide an interpretable prior of “information content” for cross-modal attention fusion, thereby improving robustness and generalization under non-stationary market conditions. Experiments on Chinese stock indices, A-Share, and CSI 300 component stock datasets demonstrate that WECA-Net consistently outperforms mainstream models in Mean Absolute Error (MAE) and R² across all datasets. Notably, on the CSI 300 dataset, WECA-Net achieves an R² of 0.9895, underscoring its strong predictive accuracy and practical applicability. This framework is also well aligned with sensor data fusion and intelligent perception paradigms, offering a robust solution for financial signal processing and real-time market state awareness.
(This article belongs to the Section Complexity)
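
Both entropy descriptors are standard and compact to implement. The sketch below computes wavelet entropy from relative sub-band energies (via PyWavelets) and multiscale entropy by coarse-graining plus sample entropy; the parameters are conventional defaults, not the paper's settings.

```python
import numpy as np
import pywt

def sample_entropy(x, m=2, r=None):
    """Compact O(n^2) sample entropy for short series."""
    r = 0.15 * np.std(x) if r is None else r
    def pairs(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return ((d < r).sum() - len(emb)) / 2   # matched pairs, no self-matches
    return -np.log(pairs(m + 1) / pairs(m))

def wavelet_entropy(x, wavelet="db4", level=5):
    """Shannon entropy of relative wavelet energy per sub-band: high when
    energy is spread across scales, low when concentrated (regular)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    e = np.array([np.sum(c**2) for c in coeffs])
    p = e / e.sum()
    return -np.sum(p * np.log(p + 1e-12))

def multiscale_entropy(x, scales=range(1, 6)):
    """Sample entropy of coarse-grained copies of the series, one per scale."""
    return [sample_entropy(x[: len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]
```
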

37 pages, 1897 KB  
Article
A Bayesian Feature Weighting Model with Simplex-Constrained Dirichlet and Contamination-Aware Priors for Noisy Medical Data
by Mehmet Ali Cengiz, Zeynep Öztürk and Abdulmohsen Alharthi
Mathematics 2026, 14(8), 1243; https://doi.org/10.3390/math14081243 - 8 Apr 2026
Viewed by 381
Abstract
Feature weighting plays a central role in medical classification by enhancing predictive accuracy, interpretability, and clinical trust through the explicit quantification of variable relevance. Despite their widespread use, existing filter-, wrapper-, and embedded-based feature weighting methods are predominantly deterministic and exhibit pronounced sensitivity to label noise and outliers, which are pervasive in real-world medical data. This often results in unstable importance estimates and unreliable clinical interpretations. In this work, we introduce a novel Bayesian feature weighting model that fundamentally departs from existing approaches by jointly integrating simplex-constrained Dirichlet priors for global feature weights, hierarchical shrinkage priors for coefficient regularization, and contamination-aware priors for explicit modeling of label noise within a single coherent probabilistic framework. Unlike conventional Bayesian feature selection or robust classification models, the proposed formulation yields globally interpretable feature weights defined on the probability simplex, while simultaneously providing full posterior uncertainty quantification and robustness to both mislabeled observations and aberrant feature values through principled influence control. Comprehensive simulation studies across diverse contamination scenarios, together with applications to multiple real-world medical datasets, demonstrate that the proposed model consistently outperforms classical and state-of-the-art baselines in terms of discrimination, probabilistic calibration, and stability of feature-importance estimates. These results highlight the practical and methodological significance of the proposed framework as a robust, uncertainty-aware, and interpretable solution for medical decision making under noisy data conditions.
(This article belongs to the Special Issue Statistical Machine Learning: Models and Its Applications)
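
As a rough illustration of how the named ingredients fit together, the sketch below writes a MAP objective with softmax-parameterized simplex weights, a Dirichlet log-prior, and an ε-contamination likelihood. This is our reading of the abstract, not the authors' model; the hierarchical shrinkage component is omitted.

```python
import numpy as np
from scipy.special import expit, gammaln
from scipy.optimize import minimize

def log_posterior(theta, X, y, alpha0=2.0, eps=0.05):
    """Simplex weights w = softmax(theta) with a Dirichlet(alpha0) prior,
    a logistic likelihood, and an eps-contamination mixture that caps the
    influence of mislabeled observations (all hypothetical choices)."""
    w = np.exp(theta - theta.max()); w /= w.sum()      # point on the simplex
    p = expit(X @ w)                                   # feature-weighted score
    lik = (1 - eps) * np.where(y == 1, p, 1 - p) + eps * 0.5
    a = np.full_like(w, alpha0)
    log_prior = (gammaln(a.sum()) - gammaln(a).sum()
                 + ((a - 1) * np.log(w + 1e-12)).sum())
    return np.log(lik).sum() + log_prior

# MAP weights on toy data with 10% flipped labels
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(float)
flip = rng.random(300) < 0.10
y[flip] = 1 - y[flip]
res = minimize(lambda th: -log_posterior(th, X, y), np.zeros(5))
w_map = np.exp(res.x - res.x.max()); w_map /= w_map.sum()
print(w_map.round(3))   # feature 0 should dominate
```
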

67 pages, 7738 KB  
Review
An Overview of Complex Time Series Analysis
by Alejandro Ramírez-Rojas, Leonardo Di G. Sigalotti, Luciano Telesca and Fidel Cruz
Mathematics 2026, 14(7), 1231; https://doi.org/10.3390/math14071231 - 7 Apr 2026
Viewed by 410
Abstract
Different methodologies have been developed for the analysis and study of dynamical systems, including both theoretical models and natural systems. Examples span a wide range of applications, such as astronomy, financial and economic time series, biophysical systems, physiological phenomena, and Earth sciences, including seismicity and climatic processes. The study of these complex systems is commonly based on the analysis of the signals they generate, using mathematical tools to extract relevant information. A broad spectrum of mathematical disciplines converges in this context, including stochastic, probability and statistical theory, entropic and informational measures, fractal and multifractal analysis, natural time analysis, modeling of non-linearity and recurrence methods, generalized entropies, non-extensive systems, machine learning, and high-dimensional and multivariate complexity. Research in this area is largely focused on the characterization of complex systems, providing indicators of determinism or stochasticity, distinguishing between regularity, chaos, and noise, and identifying topological as well as disorder-regularity features. In addition, short- and long-term forecasting, together with the identification of short- and long-range correlations, play a central role in such characterization. To address these objectives, numerous mathematical tools have been developed for the analysis of time series and point processes, each designed to capture specific signal properties. In this work, many of the most important tools used in time series analysis are compiled and reviewed, highlighting their main characteristics and the different types of complex systems to which they have been applied.
(This article belongs to the Special Issue Recent Advances in Time Series Analysis, 2nd Edition)
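
As one concrete example of the tools such a review covers for separating regularity, chaos, and noise, here is Bandt–Pompe permutation entropy in a few lines (an illustrative choice on our part; the review surveys many more methods).

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized Bandt-Pompe permutation entropy in [0, 1]: the Shannon
    entropy of ordinal-pattern frequencies over embedded windows."""
    n = len(x) - (m - 1) * tau
    windows = [x[i:i + (m - 1) * tau + 1:tau] for i in range(n)]
    patterns = np.argsort(windows, axis=1)           # ordinal patterns
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

rng = np.random.default_rng(0)
print(permutation_entropy(np.sin(0.1 * np.arange(3000))))   # regular: low
print(permutation_entropy(rng.standard_normal(3000)))       # noise: near 1
```
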

34 pages, 8819 KB  
Article
Mitigating Overfitting and Physical Inconsistency in Flood Susceptibility Mapping: A Physics-Constrained Evolutionary Machine Learning Framework for Ungauged Alpine Basins
by Chuanjie Yan, Lingling Wu, Peng Huang, Jiajia Yue, Haowen Li, Chun Zhou, Congxiang Fan, Yinan Guo and Li Zhou
Water 2026, 18(7), 882; https://doi.org/10.3390/w18070882 - 7 Apr 2026
Viewed by 429
Abstract
Flood susceptibility mapping in high-altitude ungauged basins faces a structural dichotomy: physically based models often suffer from systematic biases due to uncertain satellite precipitation, whereas data-driven models are prone to overfitting and lack physical consistency in data-scarce regions. To resolve this, this study proposes a Physically constrained Particle Swarm Optimization–Random Forest (P-PDRF) framework, validated in the Lhasa River Basin. The core innovation lies in coupling a hydrological model with statistical learning by utilizing the maximum daily runoff depth as a “Relative Hydraulic Intensity Index.” This approach leverages the topological correctness of physical simulations to circumvent absolute forcing errors. Furthermore, a Physiographically Constrained Negative Sampling (PCNS) strategy and a PSO-optimized “Shallow Tree” configuration are introduced to enforce structural regularization against stochastic noise. Empirical results demonstrate that P-PDRF achieves superior generalization (AUC = 0.942), significantly outperforming standard Random Forest, Support Vector Machine, and Analytic Hierarchy Process models. Ablation studies confirm that the dynamic index outweighs the static Topographic Wetness Index in feature importance, effectively correcting topographic artifacts where static models misclassify arid depressions as high-risk zones. This study offers a scalable Physics-Informed Machine Learning solution for the global “Prediction in Ungauged Basins” initiative.
(This article belongs to the Special Issue Urban Flood Risk Assessment and Management)
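
The PSO-optimized "Shallow Tree" configuration can be miniaturized as follows: a small particle swarm searching (max_depth, n_estimators) for cross-validated AUC. The bounds, swarm constants, and scoring choice are our assumptions, not the study's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pso_tune_rf(X, y, n_particles=8, iters=10, seed=0):
    """Tiny PSO over (max_depth, n_estimators), maximizing 3-fold CV AUC;
    the shallow depth bounds act as the structural regularization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([2.0, 20.0]), np.array([8.0, 300.0])
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    def fitness(p):
        clf = RandomForestClassifier(max_depth=int(p[0]),
                                     n_estimators=int(p[1]), random_state=0)
        return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[pbest_f.argmax()].copy()
    return {"max_depth": int(g[0]), "n_estimators": int(g[1])}
```
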

28 pages, 4886 KB  
Article
Equivariant Transition Matrices for Explainable Deep Learning: A Lie Group Linearization Approach
by Pavlo Radiuk, Oleksander Barmak, Leonid Bedratyuk and Iurii Krak
Mach. Learn. Knowl. Extr. 2026, 8(4), 92; https://doi.org/10.3390/make8040092 - 6 Apr 2026
Viewed by 300
Abstract
Deep learning systems deployed in regulated settings require explanations that are accurate and stable under nuisance transformations, yet classical post hoc transition matrices rely on fidelity-only fitting that fails to guarantee consistent explanations under spatial rotations or other group actions. In this work, we propose Equivariant Transition Matrices, a post hoc approach that augments transition matrices with Lie-group-aware structural constraints to bridge this research gap. Our method estimates infinitesimal generators in the formal and mental feature spaces, enforces an approximate intertwining relation at the Lie algebra level, and solves the resulting convex Least-Squares problem via singular value decomposition for small networks or implicit operators for large systems. We introduce diagnostics for symmetry validation and an unsupervised strategy for regularization weight selection. On a controlled synthetic benchmark, our approach reduces the symmetry defect from 13,100 to 0.0425 while increasing the mean squared error marginally from 0.00367 to 0.00524. On the MNIST dataset, the symmetry defect decreases by 72.6 percent (141.19 to 38.65) with changes in structural similarity and peak signal-to-noise ratio below 0.03 percent and 0.06 percent, respectively. These results demonstrate that explanation-level equivariance can be reliably imposed post-training, providing geometrically consistent interpretations for fixed deep models.
(This article belongs to the Special Issue Trustworthy AI: Integrating Knowledge, Retrieval, and Reasoning)
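
The constrained fit described here, fidelity plus an intertwining penalty at the Lie-algebra level, is a linear least-squares problem once vectorized. Below is a dense small-scale sketch using Kronecker products; the paper's SVD and implicit-operator solvers for large systems are not reproduced.

```python
import numpy as np

def equivariant_transition_matrix(X, Y, Gx, Gy, lam=1.0):
    """Minimize ||W X - Y||_F^2 + lam * ||W Gx - Gy W||_F^2 over W, where
    Gx, Gy are the estimated infinitesimal generators. With column-stacking
    vec(): vec(W X) = (X^T kron I) vec(W) and
    vec(W Gx - Gy W) = (Gx^T kron I - I kron Gy) vec(W)."""
    m, n = Y.shape[0], X.shape[0]
    Im, In = np.eye(m), np.eye(n)
    A_fid = np.kron(X.T, Im)
    A_sym = np.kron(Gx.T, Im) - np.kron(In, Gy)
    A = np.vstack([A_fid, np.sqrt(lam) * A_sym])
    rhs = np.concatenate([Y.flatten("F"), np.zeros(m * n)])
    w, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return w.reshape(m, n, order="F")

def symmetry_defect(W, Gx, Gy):
    """Diagnostic: how far W is from intertwining the two generators."""
    return np.linalg.norm(W @ Gx - Gy @ W)
```
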

40 pages, 6859 KB  
Article
Safe Cooperative Decision-Making for Multi-UAV Pursuit–Evasion Games via Opponent Intent Inference
by Wenxin Li, Yongxin Feng and Wenbo Zhang
Sensors 2026, 26(7), 2243; https://doi.org/10.3390/s26072243 - 4 Apr 2026
Viewed by 374
Abstract
Cooperative multi-UAV pursuit–evasion under occlusions and sensor noise is challenged by intermittent observability of the evader, varying observation-window lengths, and non-stationary evader tactics, all of which destabilize prediction and undermine safety-constrained cooperation. To address these challenges, we propose a safe decision-making framework that uses behavior mode and subgoal inference as intermediate representations for interpretable, uncertainty-aware cooperation. Specifically, an observation-driven generative intent–subgoal model infers the evader’s behavior mode and subgoal from short observation windows. Building on this model, a length-agnostic trajectory predictor is trained via multi-window knowledge distillation and consistency regularization to produce future trajectory predictions with calibrated uncertainty for arbitrary observation-window lengths, thereby reducing cross-window inference inconsistency and lowering online computational cost. Based on these predictions, we derive belief and risk features and develop a belief–risk-gated hierarchical multi-agent policy based on soft actor-critic with a safety projection layer, enabling adaptive strategy switching and a controllable trade-off between efficiency and safety. Experiments in obstacle-rich pursuit–evasion environments with randomized layouts and diverse obstacle configurations demonstrate more stable cooperative capture, safer maneuvering, and lower decision variance than representative baselines, indicating strong robustness and real-time feasibility. Specifically, across different observation-window settings, the proposed method improves the normalized expected return by approximately 5–7% over the strongest baseline and reduces pursuer losses by roughly 22–25%. Moreover, its end-to-end decision latency consistently remains within the 50 ms control cycle.
(This article belongs to the Section Sensors and Robotics)
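
Of the components listed, the safety projection layer is the most self-contained to sketch. The toy below sequentially projects a raw policy action onto half-space constraints G a ≤ h; the paper's actual layer and constraint construction are not shown here, so everything below is an assumption.

```python
import numpy as np

def safety_projection(a, G, h, sweeps=20, tol=1e-9):
    """Cheap stand-in for a QP-based safety layer: repeatedly take an
    orthogonal step onto the boundary of the most-violated constraint."""
    a = a.copy()
    for _ in range(sweeps):
        v = G @ a - h                       # constraint violations
        i = int(np.argmax(v))
        if v[i] <= tol:
            break
        a -= (v[i] / (G[i] @ G[i])) * G[i]  # project onto constraint i
    return a

# e.g. cap per-axis speed limits derived from the risk features
a_raw = np.array([1.2, 0.8])
G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([1.0, 1.0])
print(safety_projection(a_raw, G, h))       # -> [1.0, 0.8]
```
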

24 pages, 11445 KB  
Article
SIMRET: A Similarity-Guided Retinex Approach for Low-Light Enhancement
by Abdülmuttalip Öztürk and Ferzan Katırcıoğlu
Appl. Sci. 2026, 16(7), 3517; https://doi.org/10.3390/app16073517 - 3 Apr 2026
Viewed by 225
Abstract
Standard Retinex-based algorithms typically rely on gradient constraints to decompose an image, assuming that illumination is spatially smooth while reflectance contains sharp details. However, strictly gradient-based priors frequently produce halo artifacts or over-smoothing because they are unable to differentiate between intrinsic structural edges and high-frequency noise. In this paper, we propose a novel Similarity Image-Guided Retinex (SIMRET) model that fundamentally diverges from traditional derivative-based regularization. We present a color-based pixel-level similarity analysis to build a global guidance matrix rather than merely depending on local gradients. This Similarity Image functions as a reliable weight map during the decomposition process by mathematically encoding the chromatic relationships and spatial coherence between pixels. The model strictly maintains consistency across structural boundaries to avoid halo effects while adaptively enforcing smoothness in homogeneous regions to suppress noise by incorporating this similarity guidance into the optimization objective. We solve the proposed SIMRET model using an alternating optimization framework, where the similarity constraints effectively regularize the ill-posed decomposition problem. Extensive tests on various low-light datasets show that the suggested model successfully overcomes the trade-off between noise reduction and detail preservation, achieving better visual naturalness and signal fidelity than state-of-the-art techniques.
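
The core ingredient, a similarity weight that is high inside homogeneous regions and low across structural edges, can be sketched per neighbor pair as follows. SIMRET's guidance matrix is global and built from its own color analysis; this local Gaussian-of-color-distance form is only a stand-in.

```python
import numpy as np

def neighbor_similarity_weights(img, sigma=0.1):
    """For each pixel, w = exp(-||c_i - c_j||^2 / (2 sigma^2)) against its
    down and right neighbors: near 1 in homogeneous regions (smooth the
    illumination there), near 0 across color edges (avoid halos)."""
    weights = {}
    for name, (dy, dx) in {"down": (1, 0), "right": (0, 1)}.items():
        shifted = np.roll(img, (-dy, -dx), axis=(0, 1))
        d2 = ((img - shifted) ** 2).sum(axis=2)     # squared color distance
        weights[name] = np.exp(-d2 / (2 * sigma**2))
    return weights

# These maps would multiply the illumination-smoothness term of a Retinex
# objective so that smoothing is strong only where neighboring colors agree.
```
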

25 pages, 11059 KB  
Article
Few-Shot Open-Set Object Detection with a Synthesized Monument Guided by Contrastive Distilled Prompts
by Hao Chen and Ying Chen
Appl. Sci. 2026, 16(7), 3474; https://doi.org/10.3390/app16073474 - 2 Apr 2026
Viewed by 307
Abstract
Few-shot open-set object detection (FS-OSOD) remains challenging in real-world scenarios, where detectors must accurately recognize known objects from few examples while reliably rejecting vast unknown categories. Under this setting, decision boundaries between known and unknown classes are easily distorted by data scarcity and background clutter, leading to severe overfitting on base classes and overconfident misclassification of unknowns. Recent research attempts to alleviate these issues by regularizing detection heads to suppress base-class bias, or by leveraging vision–language priors through open-vocabulary alignment and prompt tuning to enhance semantic transferability. However, these solutions often overlook explicit modeling of truly out-of-set unknowns and the instability of prompt adaptation in low-data regimes, which can cause boundary drifts and cause unknown proposals to be absorbed by similar seen classes or even suppressed as background. To alleviate these issues, a guided prompt–monument network (GPMN) is proposed that jointly enhances prompt learning and feature representation learning for FS-OSOD. First, the contrastive distilled prompts (CDP) module employs a teacher–student prompt framework to decouple optimization across base, novel, and unknown classes. This strategy preserves transferability between zero-shot and few-shot settings while enhancing discrimination on base categories. Second, a synthesized monument module (SMM) maintains class-centered memory with momentum-updated prototypes and a non-parametric classifier, which compresses the overlap between seen and unseen distributions and provides a stable rejection margin for unknowns under strong co-occurrence and background noise. Compared with existing head-regularization and open-vocabulary prompt-tuning pipelines, GPMN explicitly targets both base-class bias and seen–unseen overlap at the region level. Extensive experiments on VOC10-5-5 and VOC-COCO benchmarks demonstrate that GPMN consistently improves unknown recall and few-shot mAP over representative FS-OSOD baselines. These results suggest that prompt-level decoupling mitigates base-class bias, whereas memory-anchored regularization enlarges the seen–unseen margin, jointly supporting reliable unknown rejection in scarce-supervision regimes.
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
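
The synthesized monument module's mechanics, class-centered memory with momentum-updated prototypes and a non-parametric classifier that can reject unknowns, follow a well-known pattern, sketched below in PyTorch; the dimensions, momentum, and rejection threshold are our assumptions.

```python
import torch
import torch.nn.functional as F

class PrototypeMemory:
    """Momentum-updated class prototypes with cosine-similarity
    classification and a max-similarity unknown-rejection rule."""
    def __init__(self, n_classes, dim, momentum=0.99):
        self.protos = F.normalize(torch.randn(n_classes, dim), dim=1)
        self.m = momentum

    @torch.no_grad()
    def update(self, feats, labels):
        for c in labels.unique():
            mean = F.normalize(feats[labels == c].mean(0), dim=0)
            self.protos[c] = F.normalize(
                self.m * self.protos[c] + (1 - self.m) * mean, dim=0)

    def classify(self, feats, tau=0.07, reject=0.3):
        sim = F.normalize(feats, dim=1) @ self.protos.T
        probs = (sim / tau).softmax(dim=1)
        unknown = sim.max(dim=1).values < reject   # low similarity -> unknown
        return probs, unknown
```
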

26 pages, 2802 KB  
Article
Dual-Channel Controllable Diffusion Network Based on Hybrid Representations
by Yue Tian, Tianyi Xu, Yinan Hao, Guojun Yang, Hongda Qi and Qin Zhao
Mathematics 2026, 14(7), 1144; https://doi.org/10.3390/math14071144 - 29 Mar 2026
Viewed by 258
Abstract
Traditional social recommendation methods often focus on static representations of users and items, neglecting dynamic changes in user interests and item attractiveness over time, which makes it challenging to adapt to temporal variations in user interests. Additionally, the propagation of information along explicit social relationships tends to over-smooth features and weaken individual preferences, while static implicit relationships may increase short-term noise. Thus, a Dual-channel Controllable Diffusion Network based on Hybrid Representations (HR-DCDN) is proposed for social recommendation. The HR-DCDN first incorporates temporal factors by combining dynamic and static representations to capture changes in user interests and item attractiveness. Then, our method proposes a dual-channel aggregation mechanism to obtain higher-order representations of users and items. Explicit social relationships serve as the social-influence channel, while implicit social relationships discovered via dynamic implicit relationship mining constitute the preference-homophily channel. In addition, a learnable polynomial spectral filter incorporates residual connections and dual-channel fusion information at each propagation step, stabilizing deep propagation and alleviating representation homogenization to a limited extent while preserving high-frequency preference information. Finally, we jointly optimize a cross-layer InfoNCE objective on the perturbed interaction branch with the supervised rating loss, which provides an additional empirical regularization effect, improves robustness, and helps preserve representation diversity without altering the graph structure. Experimental results demonstrate that our model outperforms baseline methods on two real-life social datasets.
(This article belongs to the Section E1: Mathematics and Computer Science)
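
The cross-layer InfoNCE objective is a standard contrastive loss; a generic form over two embedding views is shown below. How layers and branches are paired, and the loss weight, are the paper's design choices and only assumed here.

```python
import torch
import torch.nn.functional as F

def infonce(z1, z2, tau=0.2):
    """InfoNCE between two views of the same nodes (e.g. the perturbed
    interaction branch vs. the main branch): matching rows are positives,
    every other row in the batch is a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                  # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# joint objective sketch: supervised rating loss + contrastive regularizer
# loss = rating_loss + 0.1 * infonce(z_main, z_perturbed)
```
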
