Search Results (1,691)

Search Parameters:
Keywords = deep uncertainties

28 pages, 2277 KB  
Article
Enhancing the Safety of UAV Object Detection: A Novel OOD Algorithm for Filtering Prediction Uncertainties in Low-Altitude Airspace
by Xi Chen, Xinru Shi, Lei Dong, Jiachen Liu and Guoqiang Yuan
Drones 2026, 10(5), 328; https://doi.org/10.3390/drones10050328 (registering DOI) - 27 Apr 2026
Abstract
With the rapid development of AI technology in the low-altitude field, the predictive uncertainty of ML models, particularly object-detection models, has become a key bottleneck restricting their airworthiness and safe deployment. A typical manifestation of this is that when facing test samples in open-world scenarios, object-detection models exhibit overconfidence in erroneous predictions. To address this issue, this paper proposes an anomaly-scoring algorithm for out-of-distribution (OOD) evaluation based on augmented deep network features, named PCA-HBOS. By integrating the high-dimensional semantic features extracted by deep networks, the algorithm can score sample distributions, thereby enabling the identification of both in-distribution and out-of-distribution samples. Through comparisons with mainstream OOD algorithms, the superiority of PCA-HBOS in low-altitude scenarios is validated. Experimental results on three multi-sensor [...]
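The abstract does not spell out the PCA-HBOS pipeline. Under the common reading of the name, PCA-compressed deep features scored by per-dimension histogram densities, a minimal sketch might look like this; the helpers `fit_pca` and `hbos_score`, the bin count, and the toy feature vectors are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_pca(X, k):
    # Center the in-distribution features and keep the top-k principal axes.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def hbos_score(Z_train, Z_query, bins=10):
    # Histogram-Based Outlier Score: for each PCA dimension, take -log of
    # the training-density bin the query falls into, then sum dimensions.
    scores = np.zeros(len(Z_query))
    for d in range(Z_train.shape[1]):
        hist, edges = np.histogram(Z_train[:, d], bins=bins, density=True)
        idx = np.clip(np.searchsorted(edges, Z_query[:, d]) - 1, 0, bins - 1)
        scores += -np.log(np.maximum(hist[idx], 1e-12))  # guard empty bins
    return scores

# Toy demo: in-distribution features vs. a much wider "OOD" distribution.
rng = np.random.default_rng(0)
X_id = rng.normal(0.0, 1.0, (500, 8))            # stand-in for deep features
mu, V = fit_pca(X_id, 3)
Z_id = (X_id - mu) @ V.T
Z_in = (rng.normal(0.0, 1.0, (50, 8)) - mu) @ V.T
Z_out = (rng.normal(0.0, 8.0, (50, 8)) - mu) @ V.T
```

Higher scores flag likely out-of-distribution samples; in a detector, the features would typically come from a penultimate layer.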
37 pages, 2442 KB  
Review
Ground Penetrating Radar for Subsurface Utility Detection: Methods, Challenges, and Future Directions
by Sijie Gao and Da Hu
Sensors 2026, 26(9), 2708; https://doi.org/10.3390/s26092708 (registering DOI) - 27 Apr 2026
Abstract
Ground-penetrating radar (GPR) has applications across many domains, including archaeology, mining, and infrastructure inspection. This review is specifically focused on urban subsurface utility mapping, where accurate detection of buried pipelines, cables, and conduits is critical for excavation safety and infrastructure management. Within this scope, two major barriers are identified: event–utility mismatch and the synthetic–field domain gap. Bibliometric analysis shows increasing reliance on deep learning, yet most methods remain limited to event-level hyperbola detection rather than utility-level inference. In real urban environments, radar responses are often affected by orientation-dependent signatures, clutter, overlapping reflections, and non-utility anomalies, making detected events difficult to map directly to physical infrastructure. In parallel, models trained on synthetic data frequently show limited field generalization because simulated radargrams do not fully reproduce soil heterogeneity, acquisition variability, and system artifacts. The review argues that future progress in urban utility mapping requires a shift toward utility-level reasoning supported by multi-sensor fusion, physics-guided learning, hybrid simulation–field datasets, and uncertainty-aware interpretation. Such advances are essential for making GPR outputs more reliable and actionable in urban engineering practice.
(This article belongs to the Special Issue Radars, Sensors and Applications for Applied Geophysics)
22 pages, 11494 KB  
Article
Wind-Radiation Data-Driven Modelling Using Derivative Transform, Deep-LSTM, and Stochastic Tree AI Learning in 2-Layer Meteo-Patterns
by Ladislav Zjavka
Modelling 2026, 7(3), 82; https://doi.org/10.3390/modelling7030082 (registering DOI) - 27 Apr 2026
Abstract
Self-contained local forecasting of wind and solar series can improve operational planning of wind farms and photovoltaic (PV) plant day-cycles in addition to numerical models, which mostly lag behind in time due to high simulation costs. Unstable electricity production requires balancing the availability of renewable energy (RE) with unpredictable user consumption to achieve effective usage. Artificial intelligence (AI) predictive modelling can minimise the intermittent uncertainty in wind and solar resources by addressing specific problems in RE-detached system reliability and optimal utilisation. The proposed 24 h day-training and prediction scheme comprises the initial detection and subsequent similarity re-assessment of sampling day-series intervals. Two-point professional weather stations record standard meteorological variables, of which the most relevant are selected as optimal model inputs. Automatic two-layer altitude observation captures key relationships between hill- and lowland-level data, which comply with pattern progress. A new biologically inspired differential learning (DfL) method is designed and developed to integrate adaptive neurocomputing (evolving node tree components) with customised numerical procedures of operator calculus (OC) based on derivative transforms. DfL enables the representation of uncertain dynamics related to local weather patterns. Angular and frequency data (wind azimuth, temperature, irradiation) are processed together with the amplitudes to solve simple 2-variable partial differential equations (PDEs) in binomial nodes. Differentiated data provide the information necessary to model upcoming changes over mid-term day horizons. Additional PDE components in periodic form improve the modelling of hidden complex patterns in cycle data. The efficiency of DfL was demonstrated in statistical experiments, in comparison with a variety of established AI techniques enhanced by selective difference input preprocessing. LSTM-deep and stochastic tree learning show slightly inferior model performance, notably in day-ahead estimation of chaotic 24 h wind series, and slightly better approximation of alternative 8 h solar cycles. Free parametric C++ software with the applied archive data is available for additional comparative and reproducible experiments.
(This article belongs to the Section Modelling in Artificial Intelligence)
30 pages, 871 KB  
Article
Deep Active Learning for Label-Efficient Refactoring Prediction
by Abdulmajeed Alameer and Amal Alazba
Information 2026, 17(5), 420; https://doi.org/10.3390/info17050420 (registering DOI) - 27 Apr 2026
Abstract
Software refactoring improves the maintainability of code and reduces technical debt, but constructing a labeled refactoring dataset is a costly and labor-intensive process. To make refactoring prediction more deployable under limited annotation budgets, this paper introduces a Deep Active Learning (DAL) pipeline that iteratively trains a deep neural classifier on software-metric representations and selectively queries labels for the most informative unlabeled entities. The proposed approach is evaluated in a pool-based setting across class-, method-, and variable-level refactoring datasets (multiple refactoring types) using a consistent training protocol and a broad set of query strategies. Results show that DAL can recover near full-data effectiveness with substantially fewer labels: on average, reaching the target performance requires 11.4% labeled data for class-level, 25.0% for method-level, and 20.0% for variable-level refactorings, corresponding to roughly 75–89% labeling savings and demonstrating improved data efficiency for refactoring prediction. Moreover, uncertainty-based and dropout-enhanced strategies were the most consistently effective query strategies across refactoring types and labeling budgets.
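The uncertainty-based query strategies the abstract highlights can be illustrated with least-confident sampling over a pool of softmax outputs. This is a generic sketch of pool-based active learning selection, not the paper's specific classifier or strategy set:

```python
import numpy as np

def least_confident_query(probs, k):
    # Pool-based uncertainty sampling: rank unlabeled samples by
    # 1 - max class probability and return the k most uncertain indices
    # (those closest to the decision boundary get labeled first).
    uncertainty = 1.0 - probs.max(axis=1)
    return np.argsort(-uncertainty, kind="stable")[:k]

# Toy pool of softmax outputs: sample 1 is nearly a coin flip.
pool = np.array([[0.90, 0.10],
                 [0.55, 0.45],
                 [0.60, 0.40]])
picked = least_confident_query(pool, k=2)
```

In a full DAL loop, the picked indices would be sent to an annotator, added to the labeled set, and the classifier retrained before the next query round.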
18 pages, 5694 KB  
Article
Preference-Conditioned MADDPG for Risk-Aware Multi-Agent Siting of Urban EV Charging Stations Under Coupled Traffic-Distribution Constraints
by Yifei Qi and Bo Wang
Mathematics 2026, 14(9), 1464; https://doi.org/10.3390/math14091464 (registering DOI) - 27 Apr 2026
Abstract
The public deployment of electric vehicle charging stations must simultaneously balance construction economics, user accessibility, queueing pressure, feeder security, tail risk under demand uncertainty, and spatial fairness. These criteria are strongly coupled, yet most existing studies either rely on static optimization with limited behavioral realism or use multi-agent reinforcement learning for short-term charging operation rather than for long-term siting. This paper proposes a preference-conditioned multi-agent deep deterministic policy gradient (PC-MADDPG) framework for the urban charging station siting problem in a coupled traffic–distribution environment. Candidate charging sites are modeled as cooperative agents under centralized training and decentralized execution. Each agent outputs a continuous pile-allocation action, which is repaired into an integer expansion plan under a budget constraint. The environment evaluates each plan through attraction-based demand assignment, queue approximation, LinDistFlow-style feeder analysis, and a six-objective performance vector, including annual net cost, travel burden, service inconvenience, grid penalty, CVaR of unmet charging demand, and equity loss. On a reproducible benchmark with 12 demand zones, 10 candidate sites, an 11-bus radial feeder, and 16 stochastic daily scenarios, the proposed framework generates a non-dominated archive with 42 unique feasible plans. A representative PC-MADDPG solution opens 5 of 10 candidate sites and installs 20 fast-charging piles, achieving 99.88% mean demand coverage with an annual profit of 2.083 M$ and a maximum line utilization of 0.999. Relative to the NoGrid ablation, the selected full model reduces grid penalty by 23.87% and equity Gini by 51.08%, with only a 0.35% profit concession. Relative to the NoRisk ablation, the CVaR of unmet demand is lowered by 69.70%. Compared with a demand-greedy baseline, the proposed method reduces grid penalty by 11.72% and equity Gini by 25.19% while preserving similar demand coverage. These results provide proof-of-concept evidence, on a reproducible coupled benchmark, that preference-conditioned multi-agent learning can serve as a practical many-objective siting engine for charging-infrastructure planning when coupled traffic and feeder constraints are explicitly modeled.
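The CVaR-of-unmet-demand objective rests on a standard discrete tail-mean estimator, which can be sketched as follows; the `cvar` helper and its sample-based cut are an illustrative formulation, not taken from the paper:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    # Conditional Value-at-Risk: the mean loss over the worst (1 - alpha)
    # fraction of sampled scenarios, a common tail-risk objective term.
    losses = np.sort(np.asarray(losses, dtype=float))
    cut = int(np.ceil(alpha * len(losses)))
    tail = losses[cut:]
    return float(tail.mean()) if tail.size else float(losses[-1])
```

In a scenario-based planner, `losses` would be the unmet-demand values across the stochastic daily scenarios, and minimising their CVaR penalises the worst-case tail rather than the average.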
26 pages, 24594 KB  
Article
Deep Learning-Driven Adaptive-Weight Kalman Filtering for Low-Cost GNSS in Challenging Environments
by Hongxin Zhang, Sizhe Shen, Longjiang Li, Jinglei Zhang, Haobo Li, Dingyi Liu, Zhe Li, Zhiqiang Zhang and Xiaoming Wang
Sensors 2026, 26(9), 2694; https://doi.org/10.3390/s26092694 (registering DOI) - 27 Apr 2026
Abstract
The quality of Global Navigation Satellite System (GNSS) observations on smartphones is highly susceptible to multipath and non-line-of-sight (NLOS) effects in urban environments, resulting in complex and highly variable observation errors. These challenges highlight the necessity of a reliable stochastic model to ensure robust and unbiased parameter estimation. However, conventional empirical stochastic models, such as elevation-dependent or signal-to-noise ratio (SNR)-based weighting schemes, are often insufficient to capture the rapidly changing stochastic behavior of observations in dense urban environments. To overcome this limitation, an adaptive GNSS stochastic model based on a deep neural network (DNN) is developed by integrating SNR, satellite elevation angle, and post-fit pseudorange residuals, which provide a strong indicator of observation quality and environmental context. Specifically, a fully connected DNN is designed to use SNR, satellite elevation angle, and post-fit pseudorange residual as input features, representing signal strength, satellite geometry, and residual information, respectively, and to learn their nonlinear relationship with measurement uncertainty. The network output is then used to adaptively update the diagonal elements of the measurement noise covariance matrix, thereby realizing epoch-wise adaptive weighting within the Kalman filtering process. The proposed DNN-based stochastic model, together with several conventional models, was evaluated using GNSS observations collected by a low-cost u-blox ZED-F9P receiver (u-blox AG, Thalwil, Switzerland) and a Samsung Galaxy S21+ smartphone (Samsung Electronics Co., Ltd., Suwon, Republic of Korea) during vehicle experiments in dense urban canyons. The code-based single point positioning (SPP) results demonstrate that the DNN-based model consistently outperforms traditional stochastic models under both open-sky and urban conditions. The improvement is particularly pronounced for smartphone observations in severely obstructed environments. The proposed DNN-based model reduces the 3D RMSE from 14.25 m, 13.68 m, and 13.05 m, obtained with the elevation-, SNR-, and integrated elevation–SNR-based models, respectively, to 8.94 m, representing an improvement of approximately 35%. A similar improvement is observed for the u-blox ZED-F9P receiver, where the 3D RMSE decreases from 5.71 m, 4.69 m, and 5.15 m to 3.10 m. These results suggest the effectiveness of the proposed DNN-based stochastic model in mitigating complex observation errors and improving positioning accuracy, providing a promising solution for reliable positioning of low-cost GNSS receivers in challenging urban environments.
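The epoch-wise adaptive weighting described above amounts to updating the diagonal of the measurement noise covariance inside an ordinary Kalman measurement update. A minimal sketch follows, with a hand-written quality heuristic standing in for the trained DNN; `weight_from_quality` and its constants are assumptions for illustration, not the paper's model:

```python
import numpy as np

def weight_from_quality(snr_dbhz, elev_deg, resid_m):
    # Hypothetical stand-in for the learned model: measurement variance
    # shrinks with SNR and elevation and grows with residual magnitude.
    base = (50.0 / np.maximum(snr_dbhz, 1.0)) ** 2
    geom = 1.0 / np.sin(np.radians(np.clip(elev_deg, 5.0, 90.0))) ** 2
    return base * geom + resid_m ** 2

def kalman_update(x, P, z, H, r_diag):
    # Standard Kalman measurement update; r_diag is the per-epoch,
    # per-satellite measurement noise variance on the diagonal of R.
    R = np.diag(r_diag)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Two pseudorange-like measurements: one clean, one degraded.
x0, P0 = np.zeros(2), np.eye(2) * 10.0
H, z = np.eye(2), np.array([1.0, -1.0])
r = np.array([weight_from_quality(45.0, 60.0, 0.5),
              weight_from_quality(25.0, 10.0, 3.0)])
x1, P1 = kalman_update(x0, P0, z, H, r)
```

The clean measurement receives a small variance and therefore a large gain; the degraded one is down-weighted automatically, which is the point of the adaptive scheme.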
18 pages, 6586 KB  
Article
Automatic Grade Classification in Prostate Histopathological Images Using EfficientNet and Ordinal Focal Loss
by Woshington Valdeci de Sousa Rodrigues, Armando Luz, José Denes Lima Araújo, João Diniz and Antonio Oseas
Bioengineering 2026, 13(5), 503; https://doi.org/10.3390/bioengineering13050503 (registering DOI) - 26 Apr 2026
Abstract
The automatic classification of ISUP (International Society of Urological Pathology) grade groups in prostate histopathological images remains challenging due to the high similarity between adjacent classes, class imbalance, and label noise. In this work, we propose a deep learning pipeline based on EfficientNet convolutional neural networks combined with a hybrid loss function that integrates ordinal regression and Focal Loss to better capture the ordered nature of ISUP grades. A noise-filtering strategy based on the entropy of predictions from multiple EfficientNet models was first applied to identify and remove high-uncertainty samples from the training set. The problem was then reformulated as an ordinal regression task to explicitly model the hierarchical relationship among grades. Experiments conducted on the PANDA dataset demonstrate that removing noisy samples improved performance from κ=0.826 to κ=0.833. Incorporating ordinal loss further increased performance to κ=0.851. The best configuration, combining ordinal regression and Focal Loss, achieved κ=0.857 and an accuracy of 0.669, while reducing severe misclassifications and concentrating errors among adjacent classes. These results indicate that explicitly modeling ordinal structure and mitigating label noise are effective strategies for improving prostate cancer grading systems.
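Casting grading as ordinal regression with focal weighting can be sketched as K-1 cumulative binary thresholds, where sigmoid k predicts P(grade > k) and hard thresholds receive more weight. The paper's exact loss may differ; `ordinal_focal_loss` and `decode_grade` below are an illustrative formulation:

```python
import numpy as np

def ordinal_focal_loss(logits, label, gamma=2.0):
    # Ordinal regression as K-1 cumulative binary tasks: threshold k has
    # target 1 iff label > k. Focal weighting (1 - p_t)^gamma down-weights
    # easy, well-classified thresholds.
    p = 1.0 / (1.0 + np.exp(-logits))                    # shape (K-1,)
    t = (np.arange(len(logits)) < label).astype(float)   # cumulative targets
    p_t = np.where(t == 1.0, p, 1.0 - p)
    return float((-(1.0 - p_t) ** gamma * np.log(p_t + 1e-12)).sum())

def decode_grade(logits):
    # Predicted grade = number of thresholds with P(grade > k) > 0.5,
    # which keeps predictions consistent with the class ordering.
    return int((1.0 / (1.0 + np.exp(-logits)) > 0.5).sum())
```

Because errors on distant thresholds accumulate, predicting grade 0 when the label is 5 costs far more than confusing adjacent grades, matching the stated goal of concentrating errors among neighbours.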
66 pages, 1148 KB  
Review
Explainability and Trust in Deep Learning for Cancer Imaging: Systematic Barriers, Clinical Misalignment, and a Translational Roadmap
by Surekha Borra, Nilanjan Dey, Simon Fong, R. Simon Sherratt and Fuqian Shi
Cancers 2026, 18(9), 1361; https://doi.org/10.3390/cancers18091361 - 24 Apr 2026
Viewed by 460
Abstract
Deep learning (DL) has transformed cancer imaging by enabling automated tumour detection, classification, and risk prediction. Despite impressive diagnostic performance, limited explainability and poor uncertainty calibration continue to restrict clinical integration. This review is guided by five research questions that examine the challenges, impact, and translational implications of explainable artificial intelligence (XAI) in oncology imaging. We identify key barriers to trust, including dataset bias, shortcut learning, opacity of convolutional neural networks, and workflow misalignment. Evidence suggests that explainable models can increase clinician confidence, reduce false positives, and improve collaborative decision-making when explanations are faithful, semantically meaningful, and uncertainty aware. We evaluate architectural strategies that embed interpretability, such as concept-bottleneck models, prototype-based learning, and attention regularization, along with post hoc techniques. Beyond performance metrics, we examine how interpretable AI aligns with clinical reasoning processes and analyse regulatory, ethical, and medico-legal considerations influencing deployment. The findings indicate that explainability alone is insufficient; durable trust requires epistemic alignment, prospective validation, lifecycle governance, and equity-focused evaluation. By reframing explainability as a structural design principle rather than a supplementary feature, this review outlines a pathway toward accountable and clinically dependable AI systems in oncology.
(This article belongs to the Section Cancer Informatics and Big Data)
31 pages, 6114 KB  
Article
A Multi-Stage YOLOv11-Based Deep Learning Framework for Robust Instance Segmentation and Material Quantification of Mixed Plastic Waste
by Andrew N. Shafik, Mohamed H. Khafagy, Alber S. Aziz and Shereen A. Hussein
Computers 2026, 15(5), 271; https://doi.org/10.3390/computers15050271 - 24 Apr 2026
Viewed by 87
Abstract
Instance segmentation in heterogeneous waste scenes remains challenging due to object variability, deformable shapes, partial occlusion, and large appearance differences across packaging types. This study presents a YOLOv11-based deep learning framework for mixed plastic waste instance segmentation, developed to connect visual perception with reliable material quantification. The framework integrates curated instance-level annotations, strict split isolation, multi-stage optimization, training strategy ablation, and seed-robustness analysis to support reproducible model selection. Experimental results on a held-out test set show that the optimized model achieves a mask mAP@50:95 of 0.9337, indicating strong segmentation performance under heterogeneous waste-scene conditions. To extend the analysis beyond standard vision metrics, the framework incorporates a physics-informed mask-to-mass module that converts predicted masks into class-specific mass estimates using geometric calibration and material priors. Applied to a representative stream of 1253 detected objects, the system estimated a total plastic mass of 15.48 ± 1.08 kg, corresponding to a theoretical H2 potential of 0.41 ± 0.04 kg and a greenhouse-gas avoidance of 34.57 ± 4.15 kg CO2e. Overall, the proposed framework extends waste-scene understanding beyond vision-level assessment toward physically grounded, data-driven decision support for smart material recovery systems.
(This article belongs to the Special Issue Machine Learning: Innovation, Implementation, and Impact)
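The mask-to-mass idea, converting predicted mask area into class-specific mass via geometric calibration and material priors, reduces to a thin-sheet approximation. The density table, thickness, and calibration constant below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

# Illustrative material priors (g/cm^3) and an assumed sheet thickness (cm);
# real values would come from the system's calibration step.
DENSITY_G_CM3 = {"PET": 1.38, "HDPE": 0.95, "PP": 0.90}
THICKNESS_CM = 0.03

def mask_to_mass_g(mask, px_per_cm, material):
    # Pixel area -> physical area via geometric calibration, then
    # mass = area * thickness * density (thin-sheet approximation).
    area_cm2 = mask.sum() / px_per_cm ** 2
    return area_cm2 * THICKNESS_CM * DENSITY_G_CM3[material]

# A 100x100 all-foreground mask at 10 px/cm covers 100 cm^2.
demo_mass = mask_to_mass_g(np.ones((100, 100)), px_per_cm=10.0,
                           material="PET")
```

Summing such per-instance estimates over a detected stream, with per-class priors, yields the kind of aggregate mass figure the abstract reports.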
23 pages, 6904 KB  
Article
Efficient Uncertainty-Aware Dual-Attention Network for Brain Tumor Detection
by Sitara Afzal and Jong Ha Lee
Mathematics 2026, 14(9), 1421; https://doi.org/10.3390/math14091421 - 23 Apr 2026
Viewed by 87
Abstract
Brain tumor detection from magnetic resonance imaging (MRI) is fundamental to computer-aided diagnosis, yet automated models must remain robust to heterogeneous imaging conditions. Despite strong recent progress, many deep learning and transformer-based approaches primarily optimize accuracy without explicitly improving feature selectivity and spatial localization, and they typically produce deterministic output without uncertainty estimates, which limits reliability. We introduce UA-EffNet-DA, an uncertainty-aware EfficientNet framework that addresses these limitations through three complementary components. First, EfficientNet-B4 serves as an efficient backbone for discriminative feature extraction. Second, lightweight dual attention modules, comprising channel and spatial attention in parallel, are applied to emphasize which discriminative features to attend to ("what") and where they occur ("where") within MRI slices. Third, Monte Carlo dropout is employed during inference to quantify predictive uncertainty and enable confidence-aware decisions. Experiments on two public benchmarks demonstrate strong performance, yielding accuracies of 98.73% on the Figshare dataset and 99.23% on the Kaggle dataset. In addition, explainable AI analysis using Gradient-weighted Class Activation Mapping (Grad-CAM) further indicates that the proposed model concentrates on diagnostically relevant tumor regions rather than background structures, supporting transparent decision-making. Ablation studies confirm the complementary contribution of dual attention refinement and uncertainty-aware inference. Overall, the proposed UA-EffNet-DA framework offers an efficient and interpretable approach for brain tumor detection that supports more reliable clinical decision support through uncertainty-aware predictions.
(This article belongs to the Special Issue Recent Advances and Applications of Artificial Neural Networks)
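Monte Carlo dropout, the uncertainty mechanism named in the abstract, keeps dropout active at inference, averages repeated stochastic passes, and uses the predictive entropy of the averaged distribution as the confidence signal. A minimal numpy sketch with toy weights, not the UA-EffNet-DA architecture:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p=0.5, T=100, seed=0):
    # Monte Carlo dropout: dropout stays ACTIVE at inference; average T
    # stochastic softmax passes and report predictive entropy.
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
        mask = rng.random(h.shape) > p         # random unit dropout
        h = h * mask / (1.0 - p)               # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    mean = np.mean(probs, axis=0)
    entropy = float(-(mean * np.log(mean + 1e-12)).sum())
    return mean, entropy

# Toy 2-class "classifier" with random weights.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 2))
mean, ent = mc_dropout_predict(rng.normal(size=16), W1, W2)
```

High-entropy predictions can then be routed to a clinician rather than acted on automatically, which is what "confidence-aware decisions" amounts to in practice.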
24 pages, 1346 KB  
Article
Physics-Informed TD3 Scheduling for PEMFC-Based Building CCHP Systems with Hybrid Electrical–Thermal Storage Under Load Uncertainty
by Qi Cui, Chengwei Huang, Zhenyu Shi, Hongxin Li, Kechao Xia, Xin Li and Shanke Liu
Sustainability 2026, 18(9), 4203; https://doi.org/10.3390/su18094203 - 23 Apr 2026
Viewed by 106
Abstract
This study addresses the optimal scheduling of a proton exchange membrane fuel cell (PEMFC)-based building combined cooling, heating, and power (CCHP) system, aiming to improve operational efficiency and flexibility under coupled electricity–thermal–cooling demands and load uncertainty. A physics-informed scheduling environment was developed using component models and multi-energy balance constraints, including a PEMFC with waste-heat recovery, a lithium bromide absorption chiller, a reversible heat pump with condenser heat recovery to thermal storage, a battery energy storage system, and a hot-water thermal storage tank. The dispatch problem was formulated as a Markov decision process and solved using deep reinforcement learning with TD3; performance was evaluated on typical summer and winter days, and robustness was tested by generating 100 scenarios with 30% demand perturbations. The results show that TD3 learns coordinated multi-energy dispatch patterns consistent with seasonal operation and reduces hydrogen consumption relative to a rule-based strategy under uncertainty while requiring millisecond-level inference time. Dynamic programming achieved slightly lower hydrogen consumption but incurred orders-of-magnitude higher computation time. Overall, TD3 provides a practical trade-off between near-optimal performance, robustness, and real-time applicability for PEMFC-based building CCHP scheduling.
(This article belongs to the Special Issue Advances in Sustainable Hydrogen Energy and Fuel Cell Research)
32 pages, 2211 KB  
Article
An Automated Vision-Based Inspection System for Metallic Lock Surface Defects Using a Transformer-Enhanced U-Net
by Hong-Dar Lin, Shun-Yan Li and Chou-Hsien Lin
Sensors 2026, 26(9), 2608; https://doi.org/10.3390/s26092608 - 23 Apr 2026
Viewed by 128
Abstract
Surface defect inspection of metallic lock components remains challenging due to strong specular reflections, low-contrast defect patterns, and geometric variability, which limit the consistency of manual inspection and conventional automated optical inspection (AOI) systems. This study presents an integrated visual inspection framework that combines controlled image acquisition with deep learning-based semantic segmentation to enable reliable and repeatable defect detection. A standardized rotational fixture with ring illumination was developed to stabilize imaging geometry, reduce reflection variability, and support consistent multi-view acquisition. A region-of-interest (ROI) masking strategy was further applied to suppress background interference and isolate the effective inspection region. At the algorithmic level, a Transformer-enhanced U-Net (TransU-Net) architecture was employed to jointly model local spatial features and global contextual dependencies, thereby improving boundary delineation and the detection of irregular surface anomalies. In addition, a boundary-aware weighted evaluation scheme was introduced to provide a more robust and application-relevant assessment by accounting for annotation uncertainty near defect edges. Experimental results demonstrate that the proposed method achieved an F1-score of 85.15%, with an average inference time of 0.3357 s per image for model prediction. Considering additional processes such as multi-view image acquisition, mechanical rotation, and preprocessing, the overall system-level inspection time is expected to be on the order of seconds per component in practical deployment.
27 pages, 13300 KB  
Article
Information-Entropic Deep Learning with Gaussian Process Regularisation for Uncertainty-Aware Quantitative Trading
by Feng Lin and Huaping Sun
Entropy 2026, 28(5), 485; https://doi.org/10.3390/e28050485 - 23 Apr 2026
Viewed by 107
Abstract
Quantitative trading systems require predictive models that simultaneously deliver accurate forecasts, calibrated uncertainty quantification, and actionable risk measures. This paper proposes an information-theoretic semiparametric regression framework combining a convolutional neural network–Transformer (CNN–Transformer) model for nonlinear temporal dependencies with a Gaussian process (GP) prior for residual autocorrelation and calibrated predictive distributions. Three theoretical results are established: an identifiability theorem guaranteeing joint recoverability of the nonparametric and GP components; a consistency theorem showing that the penalised maximum likelihood estimator converges at a rate of n^(1/(2+d_eff)); and a coverage theorem proving asymptotic nominal coverage of the GP's credible intervals. The framework enables an entropy-regulated trading module where predictive differential entropy informs position sizing via an uncertainty-penalised Kelly criterion, Kullback–Leibler divergence quantifies model uncertainty, and CVaR-constrained optimisation controls the tail risk. Simulations show the method outperforms the CNN, long short-term memory (LSTM), Transformer, XGBoost, random forest, least absolute shrinkage and selection operator (LASSO), and standard GP regression approaches. Backtesting on four Chinese A-share stocks yielded annualised returns of 15.9–22.4% with Sharpe ratios of 0.49–0.62, maximum drawdowns below 15%, and daily 95% CVaR reductions of 28–31% relative to a full-Kelly baseline, confirming both predictive accuracy and risk management effectiveness.
(This article belongs to the Special Issue Entropy, Artificial Intelligence and the Financial Markets)
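The uncertainty-penalised Kelly sizing described in the abstract can be sketched as a fractional Kelly rule shrunk by the predictive Gaussian's differential entropy. The function below is an illustrative sketch, not the paper's implementation; the penalty weight `lam`, the position cap, and the exact shrinkage form are assumptions.

```python
import math

def entropy_penalised_kelly(mu, sigma2, lam=1.0, cap=1.0):
    """Position fraction from an uncertainty-penalised Kelly rule.

    mu: predicted excess return; sigma2: predictive variance of the
    Gaussian forecast N(mu, sigma2). All parameter names and the
    shrinkage form are illustrative assumptions.
    """
    # Raw Kelly fraction for a Gaussian return forecast.
    raw = mu / sigma2
    # Differential entropy of the predictive Gaussian: H = 0.5*ln(2*pi*e*sigma2).
    h = 0.5 * math.log(2.0 * math.pi * math.e * sigma2)
    # exp(2H) is proportional to the predictive variance, so the
    # shrinkage grows monotonically with predictive entropy.
    shrink = 1.0 / (1.0 + lam * math.exp(2.0 * h))
    f = raw * shrink
    # Cap leverage, as a full-Kelly baseline would otherwise allow
    # extreme positions when the variance estimate is small.
    return max(-cap, min(cap, f))
```

With the same forecast mean, a higher predictive variance yields a smaller position, which is the qualitative behaviour the abstract attributes to entropy-regulated sizing.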
41 pages, 2440 KB  
Article
Dismantling Binary Opposition in Fraud Detection: A Fuzzy Deep Learning Framework for Imbalanced Transaction Data
by Reham M. Essa, Yasser El-Kassrawy, Amer Alaya and Nevien El-Kassrawy
Risks 2026, 14(5), 98; https://doi.org/10.3390/risks14050098 - 23 Apr 2026
Viewed by 93
Abstract
In the context of behavioral finance, detecting credit card fraud remains a critical challenge, particularly when dealing with highly imbalanced datasets and ambiguous transaction patterns. This complexity highlights the limitations of traditional fraud detection models, which rely on a rigid binary distinction between “fraudulent” and “legitimate” transactions. Such a perspective restricts analysts’ ability to capture the nuanced and uncertain nature of fraudulent behavior, underscoring the need for a more flexible and practical approach. Accordingly, this study draws on Derrida’s deconstructive philosophy of binary oppositions to challenge the dominant dichotomy underlying conventional detection systems. This perspective provides a theoretical foundation for rethinking fraud detection by operationalizing deconstructive principles through the integration of fuzzy rules and machine learning architectures. The proposed approach is designed to address uncertainty, class imbalance, and semantic instability in financial transaction data. By combining fuzzy logic with deep learning, the framework deconstructs the rigid binary classification of transactions, enabling interpretation along a spectrum of legitimacy rather than as mutually exclusive categories. Deep learning techniques identify complex, nonlinear patterns that reveal overlaps between fraudulent and legitimate behaviors, while fuzzy membership functions model uncertainty and capture borderline cases that cannot be effectively handled by binary classification. Full article
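The spectrum-of-legitimacy idea can be illustrated with a minimal fuzzy membership function that replaces the hard fraud/legitimate dichotomy with overlapping degrees. The thresholds `low` and `high` and the linear ramp are hypothetical choices for illustration, not taken from the paper.

```python
def fuzzy_memberships(score, low=0.3, high=0.7):
    """Map a model risk score in [0, 1] to fuzzy membership degrees.

    Below `low` a transaction is fully 'legitimate', above `high`
    fully 'fraudulent'; in between, the two memberships overlap,
    capturing borderline cases instead of forcing a 0/1 decision.
    Thresholds are illustrative assumptions.
    """
    if score <= low:
        fraud = 0.0
    elif score >= high:
        fraud = 1.0
    else:
        # Linear ramp between the two thresholds.
        fraud = (score - low) / (high - low)
    return {"legitimate": 1.0 - fraud, "fraudulent": fraud}
```

A borderline score such as 0.5 yields partial membership in both classes, which is the kind of semantic overlap the abstract argues binary classifiers cannot express.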
19 pages, 2352 KB  
Article
Interval Prediction of Remaining Useful Life Based on Uncertainty Quantification with Bayesian Convolutional Neural Networks Featuring Dual-Output Units
by Zhendong Qu, Jialong He, Yan Liu, Song Mao and Xiaowu Han
Sensors 2026, 26(9), 2592; https://doi.org/10.3390/s26092592 - 22 Apr 2026
Viewed by 287
Abstract
Existing remaining useful life (RUL) prediction methods do not fully account for the uncertainties caused by data scarcity and inherent noise, and they also suffer from low reliability of RUL point estimates. To tackle these challenges, this paper proposes a Bayesian convolutional neural network with dual-output units for RUL interval predictions. The network employs the negative log-likelihood as the loss function. Thanks to its dual-output structure, it not only provides point estimates but also quantifies the aleatoric uncertainty inherent in the data. During the training process, the CNN is reformulated using Bayesian principles, and the Bayes-by-backprop method is applied to train the network. This transformation converts model parameters from fixed values into random variables. As a result, epistemic uncertainty caused by model inaccuracies and limited data can be quantified. Experimental validation on the IEEE PHM Challenge 2012 dataset showed that the proposed method achieved higher prediction accuracy than state-of-the-art uncertainty-aware prediction approaches, demonstrating better applicability in engineering practice. Full article
(This article belongs to the Special Issue Sensing Technologies in Industrial Defect Detection)
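The dual-output idea in this abstract, where one head predicts a point estimate and another its variance, can be sketched with a generic Gaussian negative log-likelihood. The function below is a minimal sketch under the common convention of predicting a log-variance for numerical stability; names and shapes are assumptions, not the paper's code.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood loss for a dual-output head.

    The network predicts both a mean `mu` (the RUL point estimate)
    and a log-variance `log_var` (aleatoric uncertainty). Training
    with this loss lets the variance output absorb inherent data
    noise: large errors can be explained by predicting a larger
    variance, at the cost of a log-variance penalty.
    """
    var = np.exp(log_var)
    # Per-sample NLL of y under N(mu, var), averaged over the batch.
    nll = 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
    return float(np.mean(nll))
```

Because the loss trades the squared-error term against the log-variance term, the variance head is driven toward the actual noise level rather than collapsing to zero.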