Search Results (36)

Search Parameters:
Keywords = counterfactual explanation

23 pages, 1458 KiB  
Article
From Meals to Marks: Modeling the Impact of Family Involvement on Reading Performance with Counterfactual Explainable AI
by Myint Swe Khine, Nagla Ali and Othman Abu Khurma
Educ. Sci. 2025, 15(7), 928; https://doi.org/10.3390/educsci15070928 - 21 Jul 2025
Viewed by 171
Abstract
This study investigates the impact of family engagement on student reading achievement in the United Arab Emirates (UAE) using counterfactual explainable artificial intelligence (CXAI) analysis. Drawing data from 24,600 students in the UAE PISA dataset, the analysis employed Gradient Boosting, SHAP (SHapley Additive exPlanations), and counterfactual simulations to model and interpret the influence of ten parental involvement variables. The results identified time spent talking with parents, frequency of family meals, and encouragement to achieve good marks as the strongest predictors of reading performance. Counterfactual analysis revealed that increasing the time spent talking with parents and frequency of family meals from their minimum (1) to maximum (5) levels, while holding other variables constant at their medians, could increase the predicted reading score from the baseline of 358.93 to as high as 448.68, marking an improvement of nearly 90 points. These findings emphasize the educational value of culturally compatible parental behaviors. The study also contributes to methodological advancement by integrating interpretable machine learning with prescriptive insights, demonstrating the potential of XAI for educational policy and intervention design. Implications for educators, policymakers, and families highlight the importance of promoting high-impact family practices to support literacy development. The approach offers a replicable model for leveraging AI to understand and enhance student learning outcomes across diverse contexts. Full article
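
The workflow the abstract describes (a gradient-boosted model, SHAP attributions, and counterfactual sweeps with the remaining variables held at their medians) can be sketched roughly as follows; the data loader and column names such as talk_with_parents are hypothetical placeholders rather than the study's PISA variable codes.

```python
# Minimal sketch, not the paper's pipeline. Feature names (e.g., "talk_with_parents")
# and the loader are hypothetical placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_family_involvement_data()   # hypothetical: 10 parental-involvement features, reading score

model = GradientBoostingRegressor().fit(X, y)

# Global attribution: mean |SHAP| value per feature.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns).sort_values(ascending=False))

# Counterfactual-style sweep: vary one feature from its minimum (1) to its maximum (5)
# while holding every other feature at its median, and read off the predicted score.
baseline = X.median().to_frame().T
for level in range(1, 6):
    scenario = baseline.copy()
    scenario["talk_with_parents"] = level
    print(level, model.predict(scenario)[0])
```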

25 pages, 1344 KiB  
Article
Customer-Centric Decision-Making with XAI and Counterfactual Explanations for Churn Mitigation
by Simona-Vasilica Oprea and Adela Bâra
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 129; https://doi.org/10.3390/jtaer20020129 - 3 Jun 2025
Viewed by 910
Abstract
In this paper, we propose a methodology designed to deliver actionable insights that help businesses retain customers. While Machine Learning (ML) techniques predict whether a customer is likely to churn, this alone is not enough. Explainable Artificial Intelligence (XAI) methods, such as SHapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), highlight the features influencing the prediction, but businesses need strategies to prevent churn. Counterfactual (CF) explanations bridge this gap by identifying the minimal changes in the business–customer relationship that could shift an outcome from churn to retention, offering steps to enhance customer loyalty and reduce losses to competitors. These explanations might not fully align with business constraints; however, alternative scenarios can be developed to achieve the same objective. Among the six classifiers used to detect churn cases, the Balanced Random Forest classifier was selected for its superior performance, achieving the highest recall score of 0.72. After classification, Diverse Counterfactual Explanations with ML (DiCEML) through Mixed-Integer Linear Programming (MILP) is applied to obtain the required changes in the features, as well as in the range permitted by the business itself. We further apply DiCEML to uncover potential biases within the model, calculating the disparate impact of some features. Full article
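
A rough sketch of this kind of churn counterfactual using the open-source dice-ml package is shown below. It does not reproduce the paper's DiCEML/MILP formulation; the method is dice-ml's generic "random" search, a plain random forest stands in for the Balanced Random Forest, and the loader, feature names, and permitted ranges are hypothetical.

```python
# Sketch with the dice-ml package; the paper's DiCEML/MILP formulation is not
# reproduced here. Loader, feature names, and ranges are hypothetical, and a plain
# RandomForestClassifier stands in for the Balanced Random Forest.
import dice_ml
from sklearn.ensemble import RandomForestClassifier

df = load_churn_data()                          # hypothetical: customer features + binary "churn" column
clf = RandomForestClassifier().fit(df.drop(columns="churn"), df["churn"])

data = dice_ml.Data(dataframe=df,
                    continuous_features=["tenure_months", "monthly_charges"],
                    outcome_name="churn")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Three counterfactuals that flip a churner to "retained", with changes restricted
# to ranges the business is willing to act on.
query = df.drop(columns="churn").iloc[[0]]
cfs = explainer.generate_counterfactuals(query, total_CFs=3, desired_class="opposite",
                                         permitted_range={"monthly_charges": [20, 80]},
                                         features_to_vary=["monthly_charges", "contract_type"])
cfs.visualize_as_dataframe()
```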

30 pages, 3401 KiB  
Article
Explainable AI Assisted IoMT Security in Future 6G Networks
by Navneet Kaur and Lav Gupta
Future Internet 2025, 17(5), 226; https://doi.org/10.3390/fi17050226 - 20 May 2025
Viewed by 684
Abstract
The rapid integration of the Internet of Medical Things (IoMT) is transforming healthcare through real-time monitoring, AI-driven diagnostics, and remote treatment. However, the growing reliance on IoMT devices, such as robotic surgical systems, life-support equipment, and wearable health monitors, has expanded the attack surface, exposing healthcare systems to cybersecurity risks like data breaches, device manipulation, and potentially life-threatening disruptions. While 6G networks offer significant benefits for healthcare, such as ultra-low latency, extensive connectivity, and AI-native capabilities, as highlighted in the ITU 6G (IMT-2030) framework, they are expected to introduce new and potentially more severe security challenges. These advancements put critical medical systems at greater risk, highlighting the need for more robust security measures. This study leverages AI techniques to systematically identify security vulnerabilities within 6G-enabled healthcare environments. Additionally, the proposed approach strengthens AI-driven security through the use of multiple XAI techniques cross-validated against each other. Drawing on the insights provided by XAI, we tailor our mitigation strategies to the ITU-defined 6G usage scenarios, with a focus on their applicability to medical IoT networks. We propose that these strategies will effectively address potential vulnerabilities and enhance the security of medical systems leveraging IoT and 6G networks. Full article
(This article belongs to the Special Issue Toward 6G Networks: Challenges and Technologies)
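
One way to read "multiple XAI techniques cross-validated against each other" is to compare the feature rankings that different explainers assign to the same detector. The sketch below illustrates that idea only; it is not the paper's pipeline, and the dataset loader, model choice, and feature names are assumptions.

```python
# Illustration only, not the paper's pipeline: cross-check two explainers by comparing
# the features each emphasizes for the same intrusion detector. Loader, model choice,
# and feature names are assumptions.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

X_train, X_test, y_train, y_test, feature_names = load_iomt_traffic()   # hypothetical loader

clf = GradientBoostingClassifier().fit(X_train, y_train)

# SHAP: global ranking from mean |SHAP| over the test set (binary GBM yields one 2-D array).
shap_vals = shap.TreeExplainer(clf).shap_values(X_test)
shap_rank = np.argsort(-np.abs(shap_vals).mean(axis=0))
print("SHAP top-5:", [feature_names[i] for i in shap_rank[:5]])

# LIME: local ranking for one flagged flow.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["benign", "attack"])
lime_exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba, num_features=10)
print("LIME top-5:", [name for name, _ in lime_exp.as_list()[:5]])
```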

23 pages, 2394 KiB  
Article
Diverse Counterfactual Explanations (DiCE) Role in Improving Sales and e-Commerce Strategies
by Simona-Vasilica Oprea and Adela Bâra
J. Theor. Appl. Electron. Commer. Res. 2025, 20(2), 96; https://doi.org/10.3390/jtaer20020096 - 8 May 2025
Viewed by 811
Abstract
Pricing strategy is a critical challenge in e-commerce, where businesses must balance competitive pricing with profitability. Traditional pricing models rely on historical data and statistical methods but often lack interpretability and adaptability. In this study, we propose a novel approach that leverages Diverse Counterfactual Explanations (DiCE) to enhance pricing strategies for mobile phones. Unlike previous research that applied counterfactual analysis in customer segmentation, energy forecasting, and retail pricing, our method directly integrates explainability into product-level pricing decisions. Our approach identifies actionable product features, such as improved hardware specifications, that can be modified to increase the predicted price. By generating counterfactual explanations, we provide insights into how businesses can optimize product attributes to maximize revenue while maintaining transparency in pricing decisions. This framework bridges explainable AI with pricing strategies, allowing companies to justify price points and improve market positioning dynamically. Furthermore, we identify other features that could lead to the same price goal. The linear regression model achieved an R2 score of 96.15% on the test set, along with a mean absolute error (MAE) of 108.31 and mean absolute percentage error (MAPE) of 5.43%, indicating strong predictive performance. Through DiCE, the model identified actionable modifications (e.g., increasing front camera resolution and battery capacity) that effectively raise predicted prices by 15–20%. This insight is particularly valuable for product design and pricing optimization. The model provided a ranking of features based on their impact on price increases, revealing that front camera and battery capacity are more influential than RAM in driving pricing changes. Full article
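
A minimal sketch of product-level price counterfactuals with dice-ml is shown below; the catalogue loader, feature names, and target range are assumptions, and the paper's exact DiCE configuration may differ.

```python
# Sketch of regression counterfactuals with dice-ml; the catalogue loader, feature
# names, and the exact configuration used in the paper are assumptions.
import dice_ml
from sklearn.linear_model import LinearRegression

phones = load_phone_catalogue()                 # hypothetical: spec columns + "price"
reg = LinearRegression().fit(phones.drop(columns="price"), phones["price"])

data = dice_ml.Data(dataframe=phones,
                    continuous_features=["battery_mah", "front_camera_mp", "ram_gb"],
                    outcome_name="price")
model = dice_ml.Model(model=reg, backend="sklearn", model_type="regressor")
explainer = dice_ml.Dice(data, model, method="random")

# Which attainable spec changes would lift this phone's predicted price by ~15-20%?
query = phones.drop(columns="price").iloc[[0]]
current = reg.predict(query)[0]
cfs = explainer.generate_counterfactuals(query, total_CFs=3,
                                         desired_range=[current * 1.15, current * 1.20],
                                         features_to_vary=["battery_mah", "front_camera_mp", "ram_gb"])
cfs.visualize_as_dataframe()
```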

30 pages, 604 KiB  
Article
XGate: Explainable Reinforcement Learning for Transparent and Trustworthy API Traffic Management in IoT Sensor Networks
by Jianian Jin, Suchuan Xing, Enkai Ji and Wenhe Liu
Sensors 2025, 25(7), 2183; https://doi.org/10.3390/s25072183 - 29 Mar 2025
Viewed by 600
Abstract
The rapid proliferation of Internet of Things (IoT) devices and their associated application programming interfaces (APIs) has significantly increased the complexity of sensor network traffic management, necessitating more sophisticated and transparent control mechanisms. In this paper, we introduce XGate, a novel explainable reinforcement learning framework designed specifically for API traffic management in sensor networks. XGate addresses the critical challenge of balancing optimal routing decisions with the interpretability demands of network administrators operating large-scale IoT deployments. Our approach integrates transformer-based attention mechanisms with counterfactual reasoning to provide human-comprehensible explanations for each traffic management decision across distributed sensor data streams. Through extensive experimentation on three large-scale sensor API traffic datasets, we demonstrate that XGate achieves 23.7% lower latency and 18.5% higher throughput compared to state-of-the-art black-box reinforcement learning approaches. More importantly, our user studies with sensor network administrators (n=42) reveal that XGate’s explanation capabilities improve operator trust by 67% and reduce intervention time by 41% during anomalous sensor traffic events. The theoretical analysis further establishes probabilistic guarantees on explanation fidelity while maintaining computational efficiency suitable for real-time sensor data management. XGate represents a significant advancement toward trustworthy AI systems for critical IoT infrastructure, providing transparent decision making without sacrificing performance in dynamic sensor network environments. Full article
(This article belongs to the Section Sensor Networks)

24 pages, 1761 KiB  
Article
Info-CELS: Informative Saliency Map-Guided Counterfactual Explanation for Time Series Classification
by Peiyu Li, Omar Bahri, Pouya Hosseinzadeh, Soukaïna Filali Boubrahimi and Shah Muhammad Hamdi
Electronics 2025, 14(7), 1311; https://doi.org/10.3390/electronics14071311 - 26 Mar 2025
Viewed by 578
Abstract
As the demand for interpretable machine learning approaches continues to grow, there is an increasing necessity for human involvement in providing informative explanations for model decisions. This is necessary for building trust and transparency in AI-based systems, leading to the emergence of the Explainable Artificial Intelligence (XAI) field. Recently, a novel counterfactual explanation model, CELS, has been introduced. CELS learns a saliency map for the interests of an instance and generates a counterfactual explanation guided by the learned saliency map. While CELS represents the first attempt to exploit learned saliency maps not only to provide intuitive explanations for the reason behind the decision made by the time series classifier but also to explore post hoc counterfactual explanations, it sacrifices validity for the sake of ensuring high proximity and sparsity. In this paper, we present an enhanced approach that builds upon CELS. While the original model achieved promising results in terms of sparsity and proximity, it faced limitations in terms of validity. Our proposed method addresses this limitation by removing mask normalization to provide more informative and valid counterfactual explanations. Through extensive experimentation on datasets from various domains, we demonstrate that our approach outperforms the CELS model, achieving higher validity and producing more informative explanations. Full article
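
The general idea of a saliency-guided counterfactual for time series (not the CELS/Info-CELS optimization itself) can be sketched as blending the query toward a neighbour from the target class only where saliency is high, then checking validity against the classifier:

```python
# Generic illustration, not the CELS/Info-CELS optimization. All inputs are hypothetical.
import numpy as np

def saliency_guided_cf(x, saliency, nun, classifier, target_class, steps=20):
    """x, nun: 1-D series of equal length; saliency: per-time-step weights in [0, 1];
    nun is a nearest neighbour of x drawn from the target class."""
    for alpha in np.linspace(0.0, 1.0, steps):
        cf = x + alpha * saliency * (nun - x)           # perturb mostly where saliency is high
        if classifier.predict(cf[np.newaxis, :])[0] == target_class:
            return cf, alpha                            # valid counterfactual: label flipped
    return nun, 1.0                                     # fall back to the full neighbour

# Usage (hypothetical objects):
# cf, alpha = saliency_guided_cf(x_query, saliency_map, nearest_unlike_neighbour,
#                                trained_classifier, target_class=1)
```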

30 pages, 11982 KiB  
Article
LIMETREE: Consistent and Faithful Surrogate Explanations of Multiple Classes
by Kacper Sokol and Peter Flach
Electronics 2025, 14(5), 929; https://doi.org/10.3390/electronics14050929 - 26 Feb 2025
Cited by 1 | Viewed by 648
Abstract
Explainable artificial intelligence provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class. When generating explanations for several classes, reasoning over them to obtain a comprehensive view may be difficult since they can present competing or contradictory evidence. To address this challenge, we introduce the novel paradigm of multi-class explanations. We outline the theory behind such techniques and propose a local surrogate model based on multi-output regression trees—called LIMETREE—that offers faithful and consistent explanations of multiple classes for individual predictions while being post-hoc, model-agnostic and data-universal. On top of strong fidelity guarantees, our implementation delivers a range of diverse explanation types, including counterfactual statements favored in the literature. We evaluate our algorithm with respect to explainability desiderata, through quantitative experiments and via a pilot user study, on image and tabular data classification tasks, comparing it with LIME, which is a state-of-the-art surrogate explainer. Our contributions demonstrate the benefits of multi-class explanations and the wide-ranging advantages of our method across a diverse set of scenarios. Full article
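
The multi-output surrogate idea can be illustrated with a single scikit-learn regression tree fitted to the black box's class probabilities in the neighbourhood of one instance; this is a rough sketch of the concept, not the authors' LIMETREE implementation.

```python
# Rough sketch of the multi-output surrogate idea, not the authors' implementation.
# The black-box model and the query instance are hypothetical inputs.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def multiclass_surrogate(black_box, instance, feature_names, n_samples=2000, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Sample a local neighbourhood around the instance with Gaussian perturbations.
    neighbourhood = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    probs = black_box.predict_proba(neighbourhood)      # (n_samples, n_classes)

    # One tree, several outputs: a single, consistent explanation for every class.
    tree = DecisionTreeRegressor(max_depth=3).fit(neighbourhood, probs)
    print(export_text(tree, feature_names=list(feature_names)))
    return tree

# Usage (hypothetical): multiclass_surrogate(trained_classifier, X_test[0], feature_names)
```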

19 pages, 1611 KiB  
Article
Improving Crowdfunding Decisions Using Explainable Artificial Intelligence
by Andreas Gregoriades and Christos Themistocleous
Sustainability 2025, 17(4), 1361; https://doi.org/10.3390/su17041361 - 7 Feb 2025
Viewed by 1832
Abstract
This paper investigates points of vulnerability in the decisions made by backers and campaigners in crowdfund pledges in an attempt to facilitate a sustainable entrepreneurial ecosystem by increasing the rate of good projects being funded. In doing so, this research examines factors that contribute to the success or failure of crowdfunding campaign pledges using eXplainable AI methods (SHapley Additive exPlanations and Counterfactual Explanations). A dataset of completed Kickstarter campaigns was used to train two binary classifiers. The first model used textual features from the campaigns’ descriptions, and the second used categorical, numerical, and textual features. Findings identify textual terms, such as “stretch goals”, that convey both elements of risk and ambitiousness to be strongly correlated with success, contrary to transparent communications of risks that bring forward worries that would have otherwise remained dormant for backers. Short sentence length, in conjunction with high term complexity, is also associated with campaign success. We link the latter to signaling theory and the campaigners’ projection of knowledgeability of the domain. Certain numerical data, such as the project’s duration, frequency of post updates, and use of images, confirm previous links to campaign success. We enhance implications through the use of Counterfactual Explanations and generate actionable recommendations on how failed projects could become successful while proposing new policies, in the form of nudges, that shield backers from points of vulnerability. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)
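
The text-feature side of such an analysis might look like the sketch below: TF-IDF terms from campaign descriptions, a linear success/failure classifier, and SHAP values to surface influential terms such as "stretch goals". The loader, column names, and model choice are assumptions, not the paper's exact setup.

```python
# Illustrative sketch of the text-feature model; loader, column names, and the use of
# a linear classifier (rather than the paper's exact models) are assumptions.
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

campaigns = load_kickstarter_campaigns()        # hypothetical: "description" text + "successful" label
vec = TfidfVectorizer(max_features=5000, ngram_range=(1, 2))
X = vec.fit_transform(campaigns["description"])
clf = LogisticRegression(max_iter=1000).fit(X, campaigns["successful"])

# Rank terms (e.g., "stretch goals") by mean |SHAP| on a small dense sample.
X_sample = X[:500].toarray()
shap_values = shap.LinearExplainer(clf, X_sample).shap_values(X_sample)
terms = np.array(vec.get_feature_names_out())
print(terms[np.argsort(-np.abs(shap_values).mean(axis=0))[:15]])
```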

33 pages, 4288 KiB  
Article
An Interpretable and Generalizable Machine Learning Model for Predicting Asthma Outcomes: Integrating AutoML and Explainable AI Techniques
by Salman Mahmood, Raza Hasan, Saqib Hussain and Rochak Adhikari
World 2025, 6(1), 15; https://doi.org/10.3390/world6010015 - 14 Jan 2025
Cited by 4 | Viewed by 2182
Abstract
Asthma remains a prevalent chronic condition, impacting millions globally and presenting significant clinical and economic challenges. This study develops a predictive model for asthma outcomes, leveraging automated machine learning (AutoML) and explainable AI (XAI) to balance high predictive accuracy with interpretability. Using a comprehensive dataset of demographic, clinical, and respiratory function data, we employed AutoGluon to automate model selection, optimization, and ensembling, resulting in a model with 98.99% accuracy and a 0.9996 ROC-AUC score. SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) were applied to provide both global and local interpretability, ensuring that clinicians can trust and understand model predictions. Additionally, counterfactual analysis enabled hypothetical scenario exploration, supporting personalized asthma management by allowing clinicians to assess potential interventions for individual patient risk profiles. To facilitate clinical adoption, a Streamlit v1.41.0 application was developed for real-time access to predictions and interpretability. This study addresses key gaps in asthma prediction, notably in model transparency and generalizability, while providing a practical tool for enhancing personalized care. Future research could expand the validation across diverse patient populations to reinforce the model’s robustness in broader clinical environments. Full article
(This article belongs to the Special Issue AI-Powered Horizons: Shaping Our Future World)
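
A minimal sketch of the AutoML step with AutoGluon, plus a model-agnostic SHAP explanation through a predict-function wrapper, is given below; the file names and the asthma_outcome label column are hypothetical.

```python
# Minimal sketch of the AutoML step plus a model-agnostic SHAP wrapper; file names and
# the "asthma_outcome" label column are hypothetical.
import pandas as pd
import shap
from autogluon.tabular import TabularPredictor

train = pd.read_csv("asthma_train.csv")                 # hypothetical files
test = pd.read_csv("asthma_test.csv")

predictor = TabularPredictor(label="asthma_outcome", eval_metric="roc_auc").fit(train)
print(predictor.leaderboard(test))

# AutoGluon ensembles are not a single tree model, so explain them model-agnostically:
# KernelExplainer over the predicted probability of the positive class.
features = [c for c in train.columns if c != "asthma_outcome"]
predict_fn = lambda data: predictor.predict_proba(pd.DataFrame(data, columns=features)).iloc[:, 1].values
background = shap.sample(train[features], 50)
shap_values = shap.KernelExplainer(predict_fn, background).shap_values(test[features].iloc[:20])
```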

31 pages, 6140 KiB  
Article
Towards Transparent Diabetes Prediction: Combining AutoML and Explainable AI for Improved Clinical Insights
by Raza Hasan, Vishal Dattana, Salman Mahmood and Saqib Hussain
Information 2025, 16(1), 7; https://doi.org/10.3390/info16010007 - 26 Dec 2024
Cited by 3 | Viewed by 3485
Abstract
Diabetes is a global health challenge that requires early detection for effective management. This study integrates Automated Machine Learning (AutoML) with Explainable Artificial Intelligence (XAI) to improve diabetes risk prediction and enhance model interpretability for healthcare professionals. Using the Pima Indian Diabetes dataset, we developed an ensemble model with 85.01% accuracy leveraging AutoGluon’s AutoML framework. To address the “black-box” nature of machine learning, we applied XAI techniques, including SHapley Additive exPlanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients (IG), Attention Mechanism (AM), and Counterfactual Analysis (CA), providing both global and patient-specific insights into critical risk factors such as glucose and BMI. These methods enable transparent and actionable predictions, supporting clinical decision-making. An interactive Streamlit application was developed to allow clinicians to explore feature importance and test hypothetical scenarios. Cross-validation confirmed the model’s robust performance across diverse datasets. This study demonstrates the integration of AutoML with XAI as a pathway to achieving accurate, interpretable models that foster transparency and trust while supporting actionable clinical decisions. Full article
(This article belongs to the Special Issue Medical Data Visualization)
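
The local-explanation step can be sketched with LIME on the Pima data as follows; the CSV path, the upstream model, and the assumption that the label column is named Outcome are all placeholders.

```python
# Sketch of the LIME step on the Pima data; the CSV path, the upstream model, and the
# "Outcome" label column follow the common Pima layout and are assumptions here.
import pandas as pd
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("pima_diabetes.csv")                   # hypothetical local copy
X, y = df.drop(columns="Outcome"), df["Outcome"]
model = train_diabetes_model(X, y)                      # hypothetical: any classifier exposing predict_proba

explainer = LimeTabularExplainer(X.values, feature_names=list(X.columns),
                                 class_names=["no diabetes", "diabetes"],
                                 discretize_continuous=True)
exp = explainer.explain_instance(X.values[0], model.predict_proba, num_features=6)
print(exp.as_list())                                    # e.g., weights for Glucose, BMI, Age, ...
```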

48 pages, 13957 KiB  
Article
Enhancing Explainable Artificial Intelligence: Using Adaptive Feature Weight Genetic Explanation (AFWGE) with Pearson Correlation to Identify Crucial Feature Groups
by Ebtisam AlJalaud and Manar Hosny
Mathematics 2024, 12(23), 3727; https://doi.org/10.3390/math12233727 - 27 Nov 2024
Cited by 2 | Viewed by 1229
Abstract
The ‘black box’ nature of machine learning (ML) approaches makes it challenging to understand how most artificial intelligence (AI) models make decisions. Explainable AI (XAI) aims to provide analytical techniques to understand the behavior of ML models. XAI utilizes counterfactual explanations that indicate how variations in input features lead to different outputs. However, existing methods must also highlight the importance of features to provide more actionable explanations that would aid in the identification of key drivers behind model decisions—and, hence, more reliable interpretations—ensuring better accuracy. The method we propose utilizes feature weights obtained through adaptive feature weight genetic explanation (AFWGE) with the Pearson correlation coefficient (PCC) to determine the most crucial group of features. The proposed method was tested on four real datasets with nine different classifiers for evaluation against a nonweighted counterfactual explanation method (CERTIFAI) and the original feature values’ correlation. The results show significant enhancements in accuracy, precision, recall, and F1 score for most datasets and classifiers; this indicates the superiority of the feature weights selected via AFWGE with the PCC over CERTIFAI and the original data values in determining the most important group of features. Focusing on important feature groups elaborates the behavior of AI models and enhances decision making, resulting in more reliable AI systems. Full article
(This article belongs to the Special Issue Machine Learning Theory and Applications)
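
The flavour of combining learned feature weights with the Pearson correlation coefficient to rank a feature group can be sketched as below; this is not the AFWGE genetic algorithm itself, and the inputs are hypothetical.

```python
# Not the AFWGE genetic algorithm itself, only the flavour of pairing per-feature
# weights with the Pearson correlation coefficient to score a feature group.
import numpy as np
from scipy.stats import pearsonr

def pcc_weighted_ranking(X, y, feature_names, weights):
    """X: (n, d) array; y: outcomes; weights: per-feature weights (e.g., from AFWGE)."""
    scores = []
    for j, name in enumerate(feature_names):
        r, _ = pearsonr(X[:, j], y)
        scores.append((name, abs(r) * weights[j]))      # learned weight scales the correlation
    return sorted(scores, key=lambda s: -s[1])

# Usage (hypothetical arrays):
# top_group = [name for name, _ in pcc_weighted_ranking(X_train, y_train, names, afwge_weights)[:5]]
```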

26 pages, 3572 KiB  
Article
Prediction of Students’ Adaptability Using Explainable AI in Educational Machine Learning Models
by Leonard Chukwualuka Nnadi, Yutaka Watanobe, Md. Mostafizer Rahman and Adetokunbo Macgregor John-Otumu
Appl. Sci. 2024, 14(12), 5141; https://doi.org/10.3390/app14125141 - 13 Jun 2024
Cited by 7 | Viewed by 5014
Abstract
As the educational landscape evolves, understanding and fostering student adaptability has become increasingly critical. This study presents a comparative analysis of XAI techniques to interpret machine learning models aimed at classifying student adaptability levels. Leveraging a robust dataset of 1205 instances, we employed several machine learning algorithms with a particular focus on Random Forest, which demonstrated the highest accuracy at 91%. The models’ precision, recall and F1-score were also evaluated, with Random Forest achieving a precision of 0.93, a recall of 0.94, and an F1-score of 0.94. Our study utilizes SHAP, LIME, Anchors, ALE, and Counterfactual explanations to reveal the specific contributions of various features impacting adaptability predictions. SHAP values highlighted the significance of ‘Class Duration’ (mean SHAP value: 0.175); LIME explained socio-economic and institutional factors’ intricate influence. Anchors provided high-confidence rule-based explanations (confidence: 97.32%), emphasizing demographic characteristics. ALE analysis underscored the importance of ‘Financial Condition’ with a positive slope, while Counterfactual scenarios highlighted the impact of slight feature variations, such as a 0.5 change in ‘Class Duration’. Consistently, ‘Class Duration’ and ‘Financial Condition’ emerge as key factors, while the study also underscores the subtle effects of ‘Institution Type’ and ‘Load-shedding’. This multi-faceted interpretability approach bridges the gap between machine learning performance and educational relevance, presenting a model that not only predicts but also explains the dynamic factors influencing student adaptability. The synthesized insights advocate for educational policies accommodating socioeconomic factors, instructional time, and infrastructure stability to enhance student adaptability. The implications extend to informed and personalized educational interventions, fostering an adaptable learning environment. This methodical research contributes to responsible AI application in education, promoting predictive and interpretable models for equitable and effective educational strategies. Full article
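
Two of the compared techniques, a global SHAP ranking for the Random Forest and a manual counterfactual probe on 'Class Duration', can be sketched as follows; the data loader and encoding are assumptions.

```python
# Sketch of two of the compared techniques; the loader and the numeric encoding of
# 'Class Duration' are assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

X, y = load_adaptability_data()                 # hypothetical: encoded features, adaptability level
clf = RandomForestClassifier(n_estimators=300).fit(X, y)

# Global ranking: mean |SHAP| per feature. The feature axis is located explicitly so
# the same code works whether SHAP returns a per-class list or one 3-D array.
sv = np.abs(np.asarray(shap.TreeExplainer(clf).shap_values(X)))
feat_axis = sv.shape.index(X.shape[1])
mean_abs = sv.mean(axis=tuple(i for i in range(sv.ndim) if i != feat_axis))
print(sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5])

# Manual counterfactual probe: shift 'Class Duration' by +0.5 for one student.
probe = X.iloc[[0]].copy()
probe["Class Duration"] += 0.5
print(clf.predict_proba(X.iloc[[0]]), clf.predict_proba(probe))
```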

18 pages, 2721 KiB  
Article
An Advanced Explainable Belief Rule-Based Framework to Predict the Energy Consumption of Buildings
by Sami Kabir, Mohammad Shahadat Hossain and Karl Andersson
Energies 2024, 17(8), 1797; https://doi.org/10.3390/en17081797 - 9 Apr 2024
Cited by 4 | Viewed by 2678
Abstract
The prediction of building energy consumption is beneficial to utility companies, users, and facility managers to reduce energy waste. However, due to various drawbacks of prediction algorithms, such as non-transparent output, ad hoc explanation by post hoc tools, low accuracy, and the inability to deal with data uncertainties, such prediction has limited applicability in this domain. As a result, domain knowledge-based explainability with high accuracy is critical for making energy predictions trustworthy. Motivated by this, we propose an advanced explainable Belief Rule-Based Expert System (eBRBES) with domain knowledge-based explanations for the accurate prediction of energy consumption. We optimize BRBES’s parameters and structure to improve prediction accuracy while dealing with data uncertainties using its inference engine. To predict energy consumption, we take into account floor area, daylight, indoor occupancy, and building heating method. We also describe how a counterfactual output on energy consumption could have been achieved. Furthermore, we propose a novel Belief Rule-Based adaptive Balance Determination (BRBaBD) algorithm for determining the optimal balance between explainability and accuracy. To validate the proposed eBRBES framework, a case study based on Skellefteå, Sweden, is used. BRBaBD results show that our proposed eBRBES framework outperforms state-of-the-art machine learning algorithms in terms of optimal balance between explainability and accuracy by 85.08%. Full article
(This article belongs to the Section G: Energy and Buildings)

20 pages, 4924 KiB  
Article
Explainable Approaches for Forecasting Building Electricity Consumption
by Nikos Sakkas, Sofia Yfanti, Pooja Shah, Nikitas Sakkas, Christina Chaniotakis, Costas Daskalakis, Eduard Barbu and Marharyta Domnich
Energies 2023, 16(20), 7210; https://doi.org/10.3390/en16207210 - 23 Oct 2023
Cited by 11 | Viewed by 2034
Abstract
Building electric energy is characterized by a significant increase in its uses (e.g., vehicle charging), a rapidly declining cost of all related data collection, and a proliferation of smart grid concepts, including diverse and flexible electricity pricing schemes. Not surprisingly, an increased number of approaches have been proposed for its modeling and forecasting. In this work, we place our emphasis on three forecasting-related issues. First, we look at the forecasting explainability, that is, the ability to understand and explain to the user what shapes the forecast. To this extent, we rely on concepts and approaches that are inherently explainable, such as the evolutionary approach of genetic programming (GP) and its associated symbolic expressions, as well as the so-called SHAP (SHapley Additive eXplanations) values, which is a well-established model agnostic approach for explainability, especially in terms of feature importance. Second, we investigate the impact of the training timeframe on the forecasting accuracy; this is driven by the realization that fast training would allow for faster deployment of forecasting in real-life solutions. And third, we explore the concept of counterfactual analysis on actionable features, that is, features that the user can really act upon and which therefore present an inherent advantage when it comes to decision support. We have found that SHAP values can provide important insights into the model explainability. In our analysis, GP models demonstrated superior performance compared to neural network-based models (with a 20–30% reduction in Root Mean Square Error (RMSE)) and time series models (with a 20–40% lower RMSE), but a rather questionable potential to produce crisp and insightful symbolic expressions, allowing a better insight into the model performance. We have also found and reported here on an important potential, especially for practical, decision support, of counterfactuals built on actionable features, and short training timeframes. Full article
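
A sketch of the symbolic-regression idea with gplearn, followed by a counterfactual probe on an actionable feature, is given below; the loader, the feature names (including setpoint_temp), and the GP settings are assumptions rather than the paper's configuration.

```python
# Sketch with gplearn; the loader, feature names (e.g., "setpoint_temp"), and GP
# settings are assumptions rather than the paper's configuration.
from gplearn.genetic import SymbolicRegressor

X_train, X_test, y_train, y_test, feature_names = load_building_load_data()   # hypothetical

gp = SymbolicRegressor(population_size=2000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001, random_state=0,
                       feature_names=feature_names)
gp.fit(X_train, y_train)
print(gp._program)                              # the evolved symbolic expression: the explainable part

# Counterfactual on an actionable feature: what if the set-point were one degree lower?
idx = feature_names.index("setpoint_temp")
x_cf = X_test[0].copy()
x_cf[idx] -= 1.0
print(gp.predict([X_test[0]])[0], gp.predict([x_cf])[0])
```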

15 pages, 21134 KiB  
Article
Explainable Image Similarity: Integrating Siamese Networks and Grad-CAM
by Ioannis E. Livieris, Emmanuel Pintelas, Niki Kiriakidou and Panagiotis Pintelas
J. Imaging 2023, 9(10), 224; https://doi.org/10.3390/jimaging9100224 - 14 Oct 2023
Cited by 16 | Viewed by 5904
Abstract
With the proliferation of image-based applications in various domains, the need for accurate and interpretable image similarity measures has become increasingly critical. Existing image similarity models often lack transparency, making it challenging to understand the reasons why two images are considered similar. In this paper, we propose the concept of explainable image similarity, where the goal is the development of an approach, which is capable of providing similarity scores along with visual factual and counterfactual explanations. Along this line, we present a new framework, which integrates Siamese Networks and Grad-CAM for providing explainable image similarity and discuss the potential benefits and challenges of adopting this approach. In addition, we provide a comprehensive discussion about factual and counterfactual explanations provided by the proposed framework for assisting decision making. The proposed approach has the potential to enhance the interpretability, trustworthiness and user acceptance of image-based systems in real-world image similarity applications. Full article
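
The integration the abstract describes can be sketched in PyTorch as a Siamese encoder whose cosine-similarity score is explained with Grad-CAM; the ResNet-18 backbone and the target layer are assumptions, not the authors' architecture.

```python
# Compact PyTorch sketch of the idea, not the authors' code; the ResNet-18 backbone
# and the choice of layer4 as the Grad-CAM target are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()               # pooled features serve as the embedding
backbone.eval()

activations, gradients = {}, {}
backbone.layer4.register_forward_hook(lambda m, i, o: activations.update(out=o))
backbone.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(out=go[0]))

def similarity_with_cam(img_a, img_b):          # tensors of shape (1, 3, 224, 224)
    emb_b = backbone(img_b).detach()            # reference image: no gradient needed
    emb_a = backbone(img_a)                     # hooks now hold img_a's conv maps
    score = F.cosine_similarity(emb_a, emb_b).sum()
    backbone.zero_grad()
    score.backward()                            # gradients of the similarity w.r.t. conv maps
    weights = gradients["out"].mean(dim=(2, 3), keepdim=True)       # GAP of the gradients
    cam = F.relu((weights * activations["out"]).sum(dim=1))         # weighted sum of maps
    return score.item(), cam / (cam.max() + 1e-8)                   # score + normalized heatmap
```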
