Search Results (25)

Search Parameters:
Keywords = evidential deep learning

23 pages, 1306 KB  
Review
DNA Mixture Deconvolution: A Four-Strategy Framework from Physical Separation to Database Searching
by Qiang Zhu, Zhigang Mao and Ji Zhang
Genes 2026, 17(4), 434; https://doi.org/10.3390/genes17040434 - 9 Apr 2026
Viewed by 182
Abstract
DNA mixture interpretation remains one of the most technically demanding challenges in forensic genetics. While probabilistic genotyping (PG) systems have substantially advanced likelihood ratio (LR) evaluation, comparatively less attention has been devoted to the systematic reconstruction of contributor genotypes, particularly in no-suspect and database-search contexts. This review synthesizes recent developments in DNA mixture deconvolution through a four-strategy framework: (i) physical and biological separation, (ii) high-information genetic markers, (iii) continuous probabilistic algorithms, and (iv) integration with database searching infrastructures. Upstream approaches, including single-cell isolation and sequencing, reduce mixture complexity at the molecular level. Marker innovations such as microhaplotypes, MiniHaps and DIP-STRs increase per-locus information content and enhance resistance to degradation. Downstream probabilistic models—extended from STRs to SNPs and microhaplotypes—leverage quantitative signal data to infer contributor genotypes, with recent advances in Hamiltonian Monte Carlo, variational inference, and deep learning improving inferential stability and reconstruction accuracy. Importantly, genotype deconvolution and LR evaluation represent mathematically distinct objectives, requiring different validation metrics and potentially separate architectural optimization. The convergence of molecular innovation, algorithmic refinement, and LR-based database searching is progressively transforming mixture interpretation from a purely evidential assessment into an integrated investigative framework. Future progress will depend on standardized marker panels, deconvolution-specific performance metrics, and scalable LR-enabled database infrastructures. Full article
(This article belongs to the Special Issue Advances in Forensic Genetics and DNA)
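The distinction the review draws between LR evaluation and genotype reconstruction can be made concrete with a toy calculation. The sketch below computes a single-locus likelihood ratio for a two-person mixture under a deliberately simplified unconditioned model; the allele frequencies, the `lr_two_person` helper, and the omission of peak heights, drop-out, and subpopulation correction are all illustrative assumptions, not any published probabilistic genotyping system.

```python
# Toy single-locus likelihood ratio (LR) for a two-person DNA mixture.
# Illustrative only: invented allele frequencies, no peak heights,
# no drop-out/drop-in, no theta correction.

def genotype_prob(g, freqs):
    """Hardy-Weinberg probability of an unordered genotype (a, b)."""
    a, b = g
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

def explains_mixture(g1, g2, alleles):
    """Two genotypes explain the mixture if their alleles cover it exactly."""
    return set(g1) | set(g2) == set(alleles)

def lr_two_person(mixture_alleles, suspect, freqs):
    """LR = P(E | suspect + one unknown) / P(E | two unknowns)."""
    gts = [(a, b) for i, a in enumerate(freqs) for b in list(freqs)[i:]]
    # Hp: suspect is one contributor; sum over unknown second contributors.
    num = sum(genotype_prob(g, freqs)
              for g in gts if explains_mixture(suspect, g, mixture_alleles))
    # Hd: two unknown contributors (ordered pairs).
    den = sum(genotype_prob(g1, freqs) * genotype_prob(g2, freqs)
              for g1 in gts for g2 in gts
              if explains_mixture(g1, g2, mixture_alleles))
    return num / den

freqs = {"A": 0.1, "B": 0.2, "C": 0.3, "D": 0.4}   # invented frequencies
print(lr_two_person(("A", "B", "C", "D"), ("A", "B"), freqs))  # LR ~ 4.17 here
```

Deconvolution, by contrast, would ask which contributor genotypes best explain the evidence, a different objective needing different metrics, as the review emphasizes.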

36 pages, 3551 KB  
Article
Early Detection of Short-Term Performance Degradation in Electric Vehicle Lithium-Ion Batteries via Physics-Guided Multi-Sensor Fusion and Deep Learning
by David Chunhu Li
Batteries 2026, 12(4), 116; https://doi.org/10.3390/batteries12040116 - 27 Mar 2026
Viewed by 317
Abstract
Early detection of battery degradation is essential for ensuring the safety and reliability of electric vehicle (EV) systems under real-world operating variability. This paper proposes a physics-guided multi-sensor learning framework, termed SensorFusion-Former (SFF), for early warning of short-term EV battery performance degradation. The proposed approach integrates a physics-based baseline model for operational normalization, a multi-sensor fusion attention mechanism to model cross-modality interactions, and a lightweight transformer architecture for efficient temporal representation learning. Weak supervision is derived from physics-consistent residual analysis with temporal smoothing, enabling scalable training without dense manual annotations. To support reliable deployment, evidential uncertainty modeling and conformal calibration are incorporated to obtain statistically controlled decision thresholds. Experiments conducted on a real driving cycle dataset from IEEE DataPort demonstrate that SFF consistently outperforms classical machine learning methods, deep neural networks, and standard transformer models in terms of early-warning lead time, false alarm rate, and inference efficiency while maintaining competitive discriminative performance. Cross-scenario evaluations under diverse thermal conditions further confirm the robustness and generalization capability of the proposed framework. Full article
(This article belongs to the Section Energy Storage System Aging, Diagnosis and Safety)
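The "conformal calibration" step mentioned in the abstract can be illustrated with a minimal split-conformal threshold: pick the finite-sample-valid quantile of calibration scores so that, on exchangeable data, the false-alarm rate stays below a chosen level. The calibration scores, the alpha level, and the `conformal_threshold` helper below are invented for illustration; SFF's actual residual pipeline is not reproduced.

```python
# Minimal split-conformal decision threshold for an anomaly score.
import math

def conformal_threshold(cal_scores, alpha):
    """Flag a new score s as degraded when s > threshold; on exchangeable
    data the false-alarm rate is at most alpha."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))   # rank of the conformal quantile
    return sorted(cal_scores)[min(k, n) - 1]

# Invented residual scores from healthy operation.
cal = [0.1, 0.4, 0.2, 0.3, 0.5, 0.15, 0.25, 0.35, 0.45, 0.05]
tau = conformal_threshold(cal, alpha=0.2)
print(tau)  # residuals above tau would trigger an early warning
```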

25 pages, 3042 KB  
Article
Quantifying Epistemic Uncertainty in Multimodal Long-Tailed Classification: A Belief Entropy-Based Evidential Fusion Framework
by Guorui Zhu
Entropy 2026, 28(3), 343; https://doi.org/10.3390/e28030343 - 19 Mar 2026
Viewed by 325
Abstract
Deep multimodal learning has excelled in tasks involving vision, language, and audio modalities. Nevertheless, its performance on tail classes degrades significantly under the long-tailed distributions common in real-world data; meanwhile, existing fusion schemes often provide only limited treatment of modality-specific uncertainty and rarely incorporate explicit mechanisms for class-level fairness. To address these gaps, we present a framework that integrates evidential reasoning with deep learning: Uncertainty-Quantified Multimodal Learning for Long-Tailed Classification (UMuLT). The framework includes: (i) an uncertainty-gated evidential fusion module that adaptively down-weights unreliable modalities; (ii) an exponential moving average (EMA) fairness regularizer that dynamically amplifies tail-class gradients; and (iii) a cross-modal consistency regularizer optimized in two stages: tail specialization with lightweight adapters on tail-class data to obtain a balanced initialization, followed by end-to-end fine-tuning. The effectiveness and practicality of our method are verified on three long-tailed benchmarks for multimodal classification. Experiments show consistent gains over strong baselines in overall metrics, calibration, and tail-subset performance. Statistical significance tests confirm the superiority of the proposed framework. Full article
(This article belongs to the Section Signal and Data Analysis)
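Uncertainty-gated evidential fusion can be sketched generically: each modality emits non-negative evidence over K classes, its Dirichlet uncertainty u = K/S (with Dirichlet strength S = Σ evidence + K) is computed, and modalities are down-weighted by their own uncertainty before their evidence is combined. This is a minimal illustration with invented evidence vectors, not UMuLT's actual module.

```python
# Generic sketch of uncertainty-gated evidential fusion (illustrative only).

def dirichlet_uncertainty(evidence):
    """u = K / S for Dir(alpha = evidence + 1); vacuous evidence gives u = 1."""
    k = len(evidence)
    return k / (sum(evidence) + k)

def gated_fusion(evidences):
    """Down-weight each modality by its own uncertainty, then sum evidence."""
    weights = [1.0 - dirichlet_uncertainty(e) for e in evidences]
    k = len(evidences[0])
    return [sum(w * e[j] for w, e in zip(weights, evidences)) for j in range(k)]

vision = [9.0, 1.0, 0.0]    # confident modality
audio  = [0.4, 0.5, 0.1]    # nearly vacuous modality
fused = gated_fusion([vision, audio])
print(fused)  # the confident modality dominates the fused evidence
```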

19 pages, 1298 KB  
Article
Evidential Deep Learning for Quantification of Uncertainty in Lithium-Ion Batteries Remaining Useful Life Estimation
by Luca Martiri and Loredana Cristaldi
Energies 2026, 19(6), 1513; https://doi.org/10.3390/en19061513 - 18 Mar 2026
Viewed by 287
Abstract
Lithium-ion batteries are widely used across diverse applications due to their high energy density, long cycle life, and fast charging capabilities. As battery-powered systems become increasingly critical, accurate estimation of the Remaining Useful Life (RUL) is essential for ensuring reliability, safety, and effective maintenance planning. This work investigates Evidential Deep Learning (EDL) for data-driven RUL estimation and introduces a novel risk-aware loss function designed to enhance both predictive accuracy and uncertainty quantification in the End-of-Life (EoL) region, where precise and trustworthy predictions are most needed. Using a publicly available dataset of lithium iron phosphate (LFP) cells, we benchmark the proposed approach against a baseline Conv–LSTM model, Monte Carlo (MC) Dropout, and Deep Ensembles. The results show that integrating the risk-aware loss into the EDL framework substantially improves the calibration of predictive uncertainty while achieving state-of-the-art accuracy near EoL. Unlike MC Dropout and Deep Ensembles, which exhibit increasing or unstable uncertainty as degradation accelerates, the proposed EDL model demonstrates a consistent reduction in uncertainty and significantly higher reliability in late-stage predictions. The findings indicate that the risk-aware evidential framework offers a reliable and computationally efficient solution for battery RUL estimation, enabling more informed decision-making in both safety-critical and consumer-oriented applications. Full article
(This article belongs to the Special Issue Advances in Battery Modelling, Applications, and Technology)
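For readers unfamiliar with evidential regression, the standard Normal-Inverse-Gamma (NIG) negative log-likelihood, in the style of Amini et al.'s deep evidential regression, is the base objective on which a risk-aware term like the one proposed here could be layered. The sketch below shows only that base NLL with invented parameter values; the paper's risk-aware loss is not reproduced.

```python
# Standard deep evidential regression NLL for one sample. A network would
# output (gamma, nu, alpha, beta) per input; values here are invented.
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of target y under the NIG evidential
    distribution; gamma is the predicted mean, nu the virtual evidence."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

# A target near the predicted mean should cost less than one far away.
print(nig_nll(0.0, 0.0, 1.0, 2.0, 1.0), nig_nll(3.0, 0.0, 1.0, 2.0, 1.0))
```

A risk-aware variant would reweight this NLL near end-of-life, which is the region the abstract says matters most.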

34 pages, 8947 KB  
Article
Lightweight Evidential Time Series Imputation Method for Bridge Structural Health Monitoring
by Die Liu, Jianxi Yang, Lihua Chen, Tingjun Xu, Youjia Zhang, Lei Zhou and Jingyuan Shen
Buildings 2026, 16(5), 1076; https://doi.org/10.3390/buildings16051076 - 9 Mar 2026
Viewed by 373
Abstract
Long-term data loss resulting from sensor malfunctions, communication interruptions, and other factors in Structural Health Monitoring (SHM) significantly undermines the reliability of damage identification and safety assessment. Existing methods—ranging from statistical approaches and low-rank matrix completion to traditional machine learning and deep learning imputation techniques—often suffer from either limited accuracy or excessive model size and slow inference, making deployment in resource-constrained scenarios difficult. To address these challenges, this paper proposes TEFN–Imputation, a lightweight and efficient time-series imputation model. The model utilizes observation-driven non-stationary normalization to mitigate the impact of time-varying characteristics and dimensional discrepancies, employs linear projection for temporal length alignment, and constructs BPA-style mass representations from the dual perspectives of time and channel. Furthermore, it replaces strict Dempster–Shafer belief combination with an expectation-based evidential aggregation (readout), significantly reducing computational overhead while providing uncertainty-aware evidential indicators for interpretation; no direct accuracy gain is claimed from the uncertainty modeling itself. The observed accuracy and robustness improvements are primarily attributed to the normalization and the dual temporal–channel modeling design under the same lightweight readout. Systematic experiments on two real-world bridge monitoring datasets, Z24 and Hell Bridge, demonstrate that TEFN consistently maintains a low Mean Absolute Error (MAE) and minimal volatility across various combinations of training and testing missing rates, exhibiting high robustness to variations in missing rates and train–test mismatches. Concurrently, compared to RNN and large-scale Transformer baselines, TEFN reduces parameter count and CPU inference time by one to two orders of magnitude, achieving a superior trade-off among accuracy, efficiency, and model scale that makes it highly suitable for online SHM and imputation tasks in practical engineering applications. Across the settings on Z24, TEFN achieves a mean MAE of 0.218 with a standard deviation of 0.002, while using only 0.02 MB of parameters and 2.73 ms per-batch CPU inference. Full article
(This article belongs to the Section Building Structures)

26 pages, 3681 KB  
Article
Intelligent Acquisition of Dynamic Targets via Multi-Source Information: A Fusion Framework Integrating Deep Reinforcement Learning with Evidence Theory
by Jiyao Yu, Bin Zhu, Yi Chen, Bo Xie, Xuanling Feng, Hongfei Yan, Jian Zeng and Runhua Wang
Remote Sens. 2026, 18(5), 689; https://doi.org/10.3390/rs18050689 - 26 Feb 2026
Viewed by 283
Abstract
Accurate acquisition of low-observable targets with a minimal radar cross-section (RCS) poses a significant challenge for multi-source remote sensing systems, such as integrated radar–electro-optical (REO) platforms, particularly in complex electromagnetic environments characterized by strong noise interference and a high false-alarm rate. Conventional methods, which often treat data association and fusion from heterogeneous sensors as separate, offline processes, struggle with the dynamic uncertainties and real-time decision requirements of such scenarios. To address these limitations, this paper proposes a novel Evidence–Reinforcement Learning-based Decision and Control (ERL-DC) framework. It operates through a closed-loop architecture consisting of three core modules: a static assessment model for initial target prioritization, a Dempster–Shafer (D–S) evidence-based multi-source data decision generator for dynamic information fusion and uncertainty-aware target selection, and a Deep Reinforcement Learning (DRL) controller for noise-robust sensor steering. A high-fidelity simulation environment was developed to model the multi-source data stream, encompassing radar detection with clutter and false targets, as well as the physical constraints of the electro-optical (EO) servo system. Based on the averaged results from multiple Monte Carlo simulations, the proposed ERL-DC framework reduced the Average Decision Time (ADT) from 7.51 s to 4.53 s, an absolute reduction of 2.98 s compared to the conventional method integrating threshold logic with Model Predictive Control (MPC). Furthermore, the Net Discrimination Accuracy (NDA), derived from the statistical outcomes across all simulation runs, exhibited an absolute increase of 37.8 percentage points, rising from 57.8% to 95.6%. These results indicate that ERL-DC achieves a more favorable trade-off among scheduling efficiency, decision robustness, and resource utilization. The primary contribution is an intelligent, closed-loop architecture that tightly couples high-level evidential reasoning for multi-source data fusion with low-level adaptive control. Within the simulated environment characterized by clutter, false targets, and angular measurement noise, ERL-DC demonstrates improved target discrimination accuracy and decision efficiency compared to conventional methods. Future work will focus on online parameter adaptation and validation on physical platforms. Full article
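The D–S evidential step such a decision generator builds on can be shown in a few lines: two sensors each assign mass to focal sets over a small frame of discernment, and Dempster's rule renormalizes away the conflicting mass. The frame and the mass values below are invented for illustration and do not reproduce the paper's BPAs.

```python
# Minimal Dempster-Shafer combination of two sensor mass functions.

def dempster_combine(m1, m2):
    """Combine mass functions keyed by frozenset focal elements; mass on
    empty intersections is the conflict, renormalized away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

T, F = frozenset({"target"}), frozenset({"false"})
TF = T | F                                  # total ignorance
radar = {T: 0.6, F: 0.1, TF: 0.3}           # invented sensor masses
eo    = {T: 0.7, F: 0.2, TF: 0.1}
print(dempster_combine(radar, eo))  # belief in "target" strengthens
```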

23 pages, 2117 KB  
Article
Inferring Cosmological Parameters with Evidential Physics-Informed Neural Networks
by Hai Siong Tan
Universe 2025, 11(12), 403; https://doi.org/10.3390/universe11120403 - 5 Dec 2025
Cited by 3 | Viewed by 625
Abstract
We examine the use of a novel variant of Physics-Informed Neural Networks to predict cosmological parameters from recent supernovae and baryon acoustic oscillations (BAO) datasets. Our machine learning framework generates uncertainty estimates for target variables and the inferred unknown parameters of the underlying PDE descriptions. Built upon a hybrid of the principles of Evidential Deep Learning, Physics-Informed Neural Networks, Bayesian Neural Networks, and Gaussian Processes, our model enables learning the posterior distribution of the unknown PDE parameters through standard gradient-descent-based training. We apply our model to an up-to-date BAO dataset (Bousis et al. 2024) calibrated with the CMB-inferred sound horizon, and the Pantheon+ SNe Ia distances (Scolnic et al. 2018), examining the relative effectiveness and mutual consistency among the standard ΛCDM, wCDM and ΛsCDM models. Unlike previous results arising from the standard approach of minimizing an appropriate χ2 function, the posterior distributions for parameters in various models trained purely on Pantheon+ data were found to be largely contained within the 2σ contours of their counterparts trained on BAO data. Our study illustrates how a data-driven machine learning approach can be suitably adapted for cosmological parameter inference. Full article
(This article belongs to the Section Cosmology)

26 pages, 3269 KB  
Article
DiagNeXt: A Two-Stage Attention-Guided ConvNeXt Framework for Kidney Pathology Segmentation and Classification
by Hilal Tekin, Şafak Kılıç and Yahya Doğan
J. Imaging 2025, 11(12), 433; https://doi.org/10.3390/jimaging11120433 - 4 Dec 2025
Cited by 1 | Viewed by 790
Abstract
Accurate segmentation and classification of kidney pathologies from medical images remain a major challenge in computer-aided diagnosis due to complex morphological variations, small lesion sizes, and severe class imbalance. This study introduces DiagNeXt, a novel two-stage deep learning framework designed to overcome these challenges through an integrated use of attention-enhanced ConvNeXt architectures for both segmentation and classification. In the first stage, DiagNeXt-Seg employs a U-Net-based design incorporating Enhanced Convolutional Blocks (ECBs) with spatial attention gates and Atrous Spatial Pyramid Pooling (ASPP) to achieve precise multi-class kidney segmentation. In the second stage, DiagNeXt-Cls utilizes the segmented regions of interest (ROIs) for pathology classification through a hierarchical multi-resolution strategy enhanced by Context-Aware Feature Fusion (CAFF) and Evidential Deep Learning (EDL) for uncertainty estimation. The main contributions of this work include: (1) enhanced ConvNeXt blocks with large-kernel depthwise convolutions optimized for 3D medical imaging, (2) a boundary-aware compound loss combining Dice, cross-entropy, focal, and distance transform terms to improve segmentation precision, (3) attention-guided skip connections preserving fine-grained spatial details, (4) hierarchical multi-scale feature modeling for robust pathology recognition, and (5) a confidence-modulated classification approach integrating segmentation quality metrics for reliable decision-making. Extensive experiments on a large kidney CT dataset comprising 3847 patients demonstrate that DiagNeXt achieves 98.9% classification accuracy, outperforming state-of-the-art approaches by 6.8%. The framework attains near-perfect AUC scores across all pathology classes (Normal: 1.000, Tumor: 1.000, Cyst: 0.999, Stone: 0.994) while offering clinically interpretable uncertainty maps and attention visualizations. The superior diagnostic accuracy, computational efficiency (6.2× faster inference), and interpretability of DiagNeXt make it a strong candidate for real-world integration into clinical kidney disease diagnosis and treatment planning systems. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

34 pages, 2006 KB  
Article
Selective Learnable Discounting in Deep Evidential Semantic Mapping
by Dongfeng Hu, Zhiyuan Li, Junhao Chen and Jian Xu
Electronics 2025, 14(23), 4602; https://doi.org/10.3390/electronics14234602 - 24 Nov 2025
Viewed by 626
Abstract
In autonomous driving and mobile robotics applications, constructing accurate and reliable three-dimensional semantic maps poses significant challenges in resolving conflicts and uncertainties among multi-frame observations in complex environments. Traditional deterministic fusion methods struggle to effectively quantify and process uncertainties in observations, while existing evidential deep learning approaches, despite providing uncertainty modeling frameworks, still exhibit notable limitations when dealing with spatially varying observation quality. This paper proposes a selective learnable discounting method for deep evidential semantic mapping that introduces a lightweight selective α-Net network based on the EvSemMap framework proposed by Kim and Seo. The network adaptively detects noisy regions and predicts pixel-level discounting coefficients from input image features. Unlike traditional global discounting strategies, this work employs a theoretically principled scaling discounting formula, ẽk(x) = α(x)·ek(x), that conforms to Dempster–Shafer theory, implementing a selective adjustment mechanism that reduces evidence reliability only in noisy regions while preserving original evidence strength in clean regions. Theoretical proofs verify three core properties of the proposed method: preservation of classification under discounting (ensuring no loss of accuracy), validity of the uncertainty redistribution (effectively suppressing overconfidence in noisy regions), and optimality of the discount coefficients (matching the theoretical optimum α*(x) = 1/N(x)). Experimental results demonstrate that the method achieves a 43.1% improvement in Expected Calibration Error (ECE) for noisy regions and a 75.4% improvement overall, with α-Net attaining an IoU of 1.0 against the noise masks on the constructed synthetic dataset—which includes common real-scenario noise types (e.g., motion blur, abnormal illumination, and sensor noise) and in which RGB features correlate with observation quality—thereby fully realizing the selective discounting design objective. Combined with additional optimization via temperature calibration, the method provides an effective uncertainty management solution for deep evidential semantic mapping in complex scenarios. Full article
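The scaling discount ẽk(x) = α(x)·ek(x) stated in the abstract is easy to sketch: shrinking an evidence vector toward zero raises the Dirichlet uncertainty u = K/S while leaving the predicted class unchanged. α-Net itself is not reproduced here; the evidence values and the discount coefficient below are invented for illustration.

```python
# Scaling discount on Dirichlet evidence: uncertainty rises in "noisy"
# pixels while the argmax class is preserved (illustrative values).

def discount_evidence(evidence, alpha):
    """e~_k = alpha * e_k for every class k."""
    return [alpha * e for e in evidence]

def dirichlet_probs_and_uncertainty(evidence):
    k = len(evidence)
    s = sum(evidence) + k                      # Dirichlet strength S
    probs = [(e + 1.0) / s for e in evidence]  # expected class probabilities
    return probs, k / s                        # u = K / S

clean = [8.0, 1.0, 1.0]                        # invented per-pixel evidence
noisy = discount_evidence(clean, alpha=0.2)    # pixel flagged as noisy
p0, u0 = dirichlet_probs_and_uncertainty(clean)
p1, u1 = dirichlet_probs_and_uncertainty(noisy)
print(u0, u1)  # uncertainty rises after discounting; argmax is unchanged
```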

19 pages, 2646 KB  
Article
A Comprehensive Study of MCS-TCL: Multi-Functional Sampling for Trustworthy Compressive Learning
by Fuma Kimishima, Jian Yang and Jinjia Zhou
Information 2025, 16(9), 777; https://doi.org/10.3390/info16090777 - 7 Sep 2025
Viewed by 772
Abstract
Compressive Learning (CL) is an emerging paradigm that allows machine learning models to perform inference directly from compressed measurements, significantly reducing sensing and computational costs. While existing CL approaches have achieved competitive accuracy compared to traditional image-domain methods, they typically rely on reconstruction to address information loss and often neglect uncertainty arising from ambiguous or insufficient data. In this work, we propose MCS-TCL, a novel and trustworthy CL framework based on Multi-functional Compressive Sensing Sampling. Our approach unifies sampling, compression, and feature extraction into a single operation by leveraging the compatibility between compressive sensing and convolutional feature learning. This joint design enables efficient signal acquisition while preserving discriminative information, leading to feature representations that remain robust across varying sampling ratios. To enhance the model’s reliability, we incorporate evidential deep learning (EDL) during training. EDL estimates the distribution of evidence over output classes, enabling the model to quantify predictive uncertainty and assign higher confidence to well-supported predictions. Extensive experiments on image classification tasks show that MCS-TCL outperforms existing CL methods, achieving state-of-the-art accuracy at a low sampling rate of 6%. Additionally, our framework reduces model size by 85.76% while providing meaningful uncertainty estimates, demonstrating its effectiveness in resource-constrained learning scenarios. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)

29 pages, 8811 KB  
Article
Evidential Interpretation Approach for Deep Neural Networks in High-Frequency Electromagnetic Wave Processing
by Xueliang Li, Ming Su, Yu Zhu, Shansong Ma, Shifu Liu and Zheng Tong
Electronics 2025, 14(16), 3277; https://doi.org/10.3390/electronics14163277 - 18 Aug 2025
Cited by 1 | Viewed by 689
Abstract
Despite the widespread adoption of deep neural networks (DNNs) in high-frequency electromagnetic wave (HF-EMW) processing, the networks remain largely black boxes. Interpreting the semantics behind the high-dimensional representations of a DNN is crucial for gaining insight into the network. This study proposes an evidential representation fusion approach that interprets the high-dimensional representations of a DNN as HF-EMW semantics, such as time- and frequency-domain signal features and their physical interpretation. In this approach, an evidential discrete model based on Dempster–Shafer theory (DST) converts a subset of DNN representations into a mass function over a class set, indicating whether the subset contains HF-EMW semantic information. An interpretable continuous DST-based model then maps the subset into HF-EMW semantics via representation fusion. Finally, the two DST-based models are extended to interpret the learning processes of high-dimensional DNN representations. Experiments on two datasets with 2680 and 4000 groups of HF-EMWs demonstrate that the approach can find and interpret representation subsets as HF-EMW semantics, achieving an absolute fractional output change of 39.84% when 10% of the most important feature elements are removed. The interpretations can be applied to visual learning evaluation, to semantic-guided reinforcement learning with a 4.23% improvement in classification accuracy, and even to HF-EMW full-waveform inversion. Full article
(This article belongs to the Section Artificial Intelligence)

31 pages, 1942 KB  
Article
An Evidential Solar Irradiance Forecasting Method Using Multiple Sources of Information
by Mohamed Mroueh, Moustapha Doumiati, Clovis Francis and Mohamed Machmoum
Energies 2024, 17(24), 6361; https://doi.org/10.3390/en17246361 - 18 Dec 2024
Cited by 1 | Viewed by 1651
Abstract
In the context of global warming, renewable energy sources, particularly wind and solar power, have garnered increasing attention in recent decades. Accurate forecasting of the energy output in microgrids (MGs) is essential for optimizing energy management, reducing maintenance costs, and prolonging the lifespan of energy storage systems. This study proposes an innovative approach to solar irradiance forecasting based on the theory of belief functions, introducing a novel and flexible evidential method for short-to-medium-term predictions. The proposed machine learning model is designed to effectively handle missing data and make optimal use of available information. By integrating multiple predictive models, each focusing on different meteorological factors, the approach enhances forecasting accuracy. The Yager combination method and pignistic transformation are utilized to aggregate the individual models. Applied to a publicly available dataset, the method achieved promising results, with an average root mean square error (RMSE) of 27.83 W/m2 calculated from eight distinct forecast days. This performance surpasses the best reported result of 30.21 W/m2 from recent comparable studies for one-day-ahead solar irradiance forecasting. Comparisons with deep learning-based methods, such as long short-term memory (LSTM) networks and recurrent neural networks (RNNs), demonstrate that the proposed approach is competitive with state-of-the-art techniques, delivering reliable predictions with significantly less training data. The full potential and limitations of the proposed approach are also discussed. Full article
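The two aggregation tools named in the abstract can be sketched in a few lines: Yager's combination rule assigns conflicting mass to total ignorance rather than renormalizing it away, and the pignistic transformation spreads each mass uniformly over the singletons of its focal set to obtain decision-ready probabilities. The frame ("low"/"high" irradiance) and the masses below are invented for illustration.

```python
# Yager's rule and the pignistic transformation on a two-element frame.

def yager_combine(m1, m2, frame):
    """Like Dempster's rule, but conflict mass is added to the full frame
    (total ignorance) instead of being renormalized away."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    combined[frame] = combined.get(frame, 0.0) + conflict
    return combined

def pignistic(m):
    """BetP(x) = sum over focal sets A containing x of m(A) / |A|."""
    bet = {}
    for focal, w in m.items():
        for x in focal:
            bet[x] = bet.get(x, 0.0) + w / len(focal)
    return bet

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
frame = LOW | HIGH
m = yager_combine({LOW: 0.6, frame: 0.4}, {HIGH: 0.5, frame: 0.5}, frame)
print(pignistic(m))  # decision-ready probabilities over {low, high}
```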

18 pages, 1074 KB  
Article
LogEDL: Log Anomaly Detection via Evidential Deep Learning
by Yunfeng Duan, Kaiwen Xue, Hao Sun, Haotong Bao, Yadong Wei, Zhangzheng You, Yuantian Zhang, Xiwei Jiang, Sangning Yang, Jiaxing Chen, Boya Duan and Zhonghong Ou
Appl. Sci. 2024, 14(16), 7055; https://doi.org/10.3390/app14167055 - 12 Aug 2024
Cited by 7 | Viewed by 9402
Abstract
With advancements in digital technologies such as 5G communications, big data, and cloud computing, the components of network operation systems have become increasingly complex, significantly complicating system monitoring and maintenance. Correspondingly, automated log anomaly detection has become a crucial means to ensure stable network operation and protect networks from malicious attacks or failures. Conventional machine learning and deep learning methods assume consistent distributions between the training and testing data, adhering to a closed-set recognition paradigm. Nevertheless, in realistic scenarios, systems may encounter new anomalies that were not present in the training data, especially in log anomaly detection. Inspired by evidential learning, we propose a novel anomaly detector called LogEDL, which supervises the training of the model through an evidential loss function. Unlike traditional loss functions, the evidential loss function not only focuses on correct classification but also quantifies the uncertainty of predictions. This enhances the robustness and accuracy of the model in handling anomaly detection tasks while achieving functionality similar to open-set recognition. To evaluate the proposed LogEDL method, we conduct extensive experiments on three datasets, i.e., HDFS, BGL, and Thunderbird, to detect anomalous log sequences. The experimental results demonstrate that our proposed LogEDL achieves state-of-the-art performance in anomaly detection. Full article
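A common form of the evidential loss the abstract refers to is the Dirichlet expected-MSE objective of classic evidential deep learning (Sensoy et al.-style); LogEDL's exact formulation may differ, and the evidence vectors below are invented. The point of the sketch: confidently wrong evidence is penalized far more than confidently correct evidence, and vacuous evidence (high uncertainty) lands in between, which is what enables open-set-like behavior.

```python
# Dirichlet expected-MSE evidential loss for one sample (illustrative).

def edl_mse_loss(evidence, onehot):
    """Expected squared error of class probabilities drawn from
    Dir(alpha = evidence + 1), integrated over the Dirichlet."""
    alpha = [e + 1.0 for e in evidence]
    s = sum(alpha)                              # Dirichlet strength
    loss = 0.0
    for a, y in zip(alpha, onehot):
        p = a / s                               # expected probability
        loss += (y - p) ** 2 + p * (1.0 - p) / (s + 1.0)  # error + variance
    return loss

confident_right = edl_mse_loss([10.0, 0.0], [1.0, 0.0])
confident_wrong = edl_mse_loss([0.0, 10.0], [1.0, 0.0])
vacuous = edl_mse_loss([0.0, 0.0], [1.0, 0.0])  # "I don't know"
print(confident_right, vacuous, confident_wrong)
```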
19 pages, 11792 KB  
Article
Multi-View Scene Classification Based on Feature Integration and Evidence Decision Fusion
by Weixun Zhou, Yongxin Shi and Xiao Huang
Remote Sens. 2024, 16(5), 738; https://doi.org/10.3390/rs16050738 - 20 Feb 2024
Cited by 9 | Viewed by 3140
Abstract
Leveraging multi-view remote sensing images in scene classification tasks significantly enhances classification precision. This approach, however, poses challenges: the simultaneous use of multi-view images often leads to misalignment between visual content and semantic labels, complicating classification. In addition, as the number of image viewpoints increases, image-quality problems further limit the effectiveness of multi-view classification. Traditional scene classification methods predominantly employ SoftMax-based deep learning techniques, which can neither assess the quality of remote sensing images nor provide explicit explanations for the network's predictions. To address these issues, this paper introduces a novel end-to-end multi-view decision fusion network specifically designed for remote sensing scene classification. The network integrates information from multi-view remote sensing images under the guidance of image credibility and uncertainty; when the multi-view fusion process encounters conflicting evidence, it substantially alleviates the conflict and yields more reasonable and credible predictions. Initially, multi-scale features are extracted from the multi-view images using convolutional neural networks (CNNs). Following this, an asymptotic adaptive feature fusion module (AAFFM) gradually integrates these multi-scale features, and an adaptive spatial fusion method assigns different spatial weights to the multi-scale feature maps, significantly enhancing the model's feature discrimination capability. Finally, an evidence decision fusion module (EDFM), built on evidence theory and the Dirichlet distribution, quantitatively assesses the uncertainty of the multi-view classification process and, by fusing the multi-view image information, provides a rational explanation for the prediction results. The efficacy of the proposed method was validated through experiments on the AiRound and CV-BrCT datasets. The results show that our method not only improves single-view scene classification but also advances multi-view remote sensing scene classification by accurately characterizing the scene and mitigating conflicts in the fusion process. Full article
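Evidence-theoretic decision fusion of Dirichlet outputs, as described above, is typically realized with a reduced form of Dempster's rule over subjective-logic belief masses. The sketch below illustrates that mechanism under that assumption (the function names are illustrative, not the paper's API): each view's evidence becomes per-class belief plus an uncertainty mass, and combining two views discounts conflicting beliefs while shrinking the joint uncertainty when the views agree.

```python
def dirichlet_to_mass(evidence):
    """Convert per-class evidence to subjective-logic belief masses
    plus an uncertainty mass; beliefs and uncertainty sum to 1."""
    k = len(evidence)
    strength = sum(evidence) + k          # Dirichlet strength S
    beliefs = [e / strength for e in evidence]
    return beliefs, k / strength

def ds_combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for two views (illustrative sketch)."""
    # mass assigned to contradictory class pairs across the two views
    conflict = sum(b1[i] * b2[j]
                   for i in range(len(b1))
                   for j in range(len(b2)) if i != j)
    scale = 1.0 - conflict                # renormalize after removing conflict
    b = [(b1[k] * b2[k] + b1[k] * u2 + b2[k] * u1) / scale
         for k in range(len(b1))]
    u = u1 * u2 / scale                   # agreement shrinks joint uncertainty
    return b, u

# Two views that both favor class 0 reinforce each other:
b1, u1 = dirichlet_to_mass([8.0, 1.0])
b2, u2 = dirichlet_to_mass([6.0, 2.0])
b, u = ds_combine(b1, u1, b2, u2)
```

In this example the fused belief in class 0 exceeds either view's individual belief, and the fused uncertainty drops below both input uncertainties, which is the behavior that makes the fused prediction interpretable.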
(This article belongs to the Special Issue Advances in Deep Learning Approaches in Remote Sensing)
19 pages, 917 KB  
Article
Hybrid Uncertainty Calibration for Multimodal Sentiment Analysis
by Qiuyu Pan and Zuqiang Meng
Electronics 2024, 13(3), 662; https://doi.org/10.3390/electronics13030662 - 5 Feb 2024
Cited by 8 | Viewed by 4113
Abstract
In open environments, multimodal sentiment analysis (MSA) often suffers from low-quality data and can be disrupted by noise, inherent defects, and outliers. In some cases, unreasonable multimodal fusion methods can perform worse than unimodal methods. Another challenge of MSA is enabling the model to provide accurate predictions when it is confident and to indicate high uncertainty when its prediction is likely to be inaccurate. In this paper, we propose an uncertain-aware late fusion method based on hybrid uncertainty calibration (ULF-HUC). First, we conduct in-depth research on the sentiment polarity distribution of MSA datasets, establishing a foundation for an uncertain-aware late fusion method that facilitates the organic fusion of modalities. Then, we propose a hybrid uncertainty calibration method based on evidential deep learning (EDL) that balances accuracy and uncertainty, reducing the uncertainty of each modality of the model. Finally, we add two common types of noise to validate the effectiveness of our proposed method. We evaluate our model on three publicly available MSA datasets (MVSA-Single, MVSA-Multiple, and MVSA-Single-Small). Our method outperforms state-of-the-art approaches in terms of accuracy, weighted F1 score, and expected uncertainty calibration error (UCE), demonstrating the effectiveness of the proposed method. Full article
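The UCE metric reported above measures how well a model's predicted uncertainty tracks its actual error rate; a common way to compute it is to bin predictions by estimated uncertainty and average the gap between each bin's mean uncertainty and its empirical error rate. The following is a hypothetical minimal sketch of that computation, not the paper's implementation.

```python
def expected_uce(uncertainties, errors, n_bins=5):
    """Expected uncertainty calibration error (illustrative sketch).

    uncertainties : predicted uncertainty in [0, 1] for each sample
    errors        : 1 if the corresponding prediction was wrong, else 0
    Returns the bin-weighted average of |error rate - mean uncertainty|.
    """
    n = len(uncertainties)
    bins = [[] for _ in range(n_bins)]
    for u, e in zip(uncertainties, errors):
        idx = min(int(u * n_bins), n_bins - 1)   # clamp u == 1.0 into last bin
        bins[idx].append((u, e))
    uce = 0.0
    for samples in bins:
        if not samples:
            continue
        mean_u = sum(u for u, _ in samples) / len(samples)
        mean_e = sum(e for _, e in samples) / len(samples)
        uce += len(samples) / n * abs(mean_e - mean_u)
    return uce

# Perfectly calibrated: 20% predicted uncertainty, 20% observed errors
perfect = expected_uce([0.2] * 5, [1, 0, 0, 0, 0])
# Badly calibrated: fully confident (u = 0) yet always wrong
worst = expected_uce([0.0] * 4, [1, 1, 1, 1])
```

A score of 0 means the model's uncertainty estimates match its error rates exactly; larger values indicate over- or under-confidence.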