1. Introduction
Transportation infrastructure forms the backbone of modern economies, enabling the continuous movement of goods, services, and people. Its effective management is therefore a matter of both economic stability and public safety [1]. In recent years, predictive maintenance—the proactive identification and mitigation of potential failures—has gained prominence in Transportation Infrastructure Management (TIM) due to its potential to reduce downtime, optimize asset life cycles, and prevent catastrophic failures [2,3]. The increasing availability of sensor networks, inspection imagery, and operational data has created fertile ground for Artificial Intelligence (AI) to support these maintenance strategies by detecting patterns that signal emerging defects [4,5,6]. However, despite AI’s ability to achieve high predictive accuracy, its widespread adoption in safety-critical domains such as TIM is hindered by the opacity of modern machine learning models [7,8,9].
Explainable Artificial Intelligence (XAI) offers a path forward by producing human-understandable justifications for AI decisions [10,11]. Unlike conventional AI models, XAI frameworks are designed to bridge the gap between algorithmic outputs and human comprehension, enabling stakeholders—from engineers and data scientists to policymakers and the general public—to verify, challenge, and act upon AI-driven recommendations [12,13]. In TIM, this capability is critical: a prediction that a rail fastener will fail within days has limited operational value unless accompanied by a clear explanation of the contributing factors, whether they be abnormal vibration patterns, temperature fluctuations, or historical wear trends [14,15]. The promise of XAI in TIM lies not only in enhancing trust but also in improving decision quality, ensuring compliance with safety regulations, and reducing the risk of unintended biases in maintenance scheduling [16,17,18]. Yet, despite the potential benefits, the application of XAI in predictive maintenance for TIM remains underdeveloped [19,20]. The existing literature is fragmented, with many studies focusing on generic explainable techniques without adapting them to TIM’s multi-modal, heterogeneous data or stringent operational constraints [21,22,23]. Moreover, the absence of standardized interpretability metrics makes it difficult to compare the effectiveness of different XAI approaches in real-world infrastructure settings.
The integration of Explainable Artificial Intelligence (XAI) into predictive maintenance for Transportation Infrastructure Management (TIM) has become increasingly important due to the safety-critical nature of infrastructure decision-making and the growing reliance on AI-driven predictions. In domains such as railway inspection, bridge health monitoring, and tunnel maintenance, advanced AI models frequently demonstrate high predictive accuracy; however, their opaque decision-making processes limit stakeholder trust and hinder large-scale adoption [24]. Despite rapid advancements in XAI research, three persistent gaps remain evident in the context of TIM: the overemphasis on generic XAI theory without domain-specific adaptation [25]; the absence of standardized and reproducible evaluation metrics for interpretability [26,27]; and the limited capacity of existing approaches to handle TIM’s heterogeneous, noisy, and multi-modal data environments [28,29,30]. These gaps highlight the need for context-aware Explainable AI frameworks tailored specifically to infrastructure systems rather than the direct transplantation of general-purpose XAI tools.
Predictive maintenance in TIM relies on diverse data sources, including vibration time-series, environmental parameters, and high-resolution inspection imagery. While AI models can forecast structural degradation or component failure with considerable accuracy, they often fail to provide actionable justifications [31]. For example, a model predicting rail track deterioration may not clearly distinguish whether the primary contributing factor is load-induced stress, environmental exposure, or material fatigue. Such ambiguity weakens proactive maintenance planning and complicates regulatory transparency. Although XAI techniques such as SHAP, LIME, Grad-CAM, and emerging Transformer-based interpretability methods offer mechanisms to link predictions to interpretable evidence [32,33], their practical deployment in TIM remains limited. High computational costs, latency constraints in real-time systems, cybersecurity concerns related to model transparency [34,35], and integration challenges with legacy infrastructure platforms [36,37] continue to impede operational scalability. Moreover, current studies rarely evaluate explanation robustness under noisy or incomplete data conditions, which are common in real-world TIM environments [38,39].
The heterogeneous and multi-modal nature of infrastructure datasets further complicates Explainable AI. Sensor streams may contain missing intervals, inspection images may suffer from occlusion or poor resolution, and environmental measurements often differ in scale and sampling frequency [40,41]. Existing XAI methods typically operate within isolated data modalities, and standardized cross-modal interpretability frameworks are largely absent [25]. Additionally, interpretability assessment is frequently based on subjective expert feedback rather than objective, reproducible metrics [42], limiting cross-study comparability. Ethical considerations, including bias in historical maintenance data and uneven resource allocation, remain underexplored despite XAI’s potential to expose such inequities [43]. Addressing these methodological, operational, and governance challenges is essential to advancing transparent, trustworthy, and scalable predictive maintenance systems in transportation infrastructure [44].
This paper addresses these gaps by critically examining the integration of XAI into predictive maintenance workflows for TIM. It evaluates both model-agnostic methods, such as SHAP and LIME, and model-specific techniques, such as Grad-CAM and Transformer-based attention visualizations, through the lens of their applicability to transportation asset monitoring. Using a case study on railway fastener inspection, this review demonstrates how XAI can reveal actionable insights that improve transparency, enhance stakeholder confidence, and inform maintenance prioritization. The specific objectives of this study are to:
Identify current gaps and challenges in applying XAI to predictive maintenance in TIM.
Assess the role of XAI in fostering trust, accountability, and actionable decision-making in infrastructure management.
Recommend practical strategies for deploying XAI-driven solutions that are technically robust, ethically sound, and operationally feasible.
By combining a systematic literature review with critical analysis and domain-specific case evidence, this paper contributes to bridging the gap between theoretical advances in Explainable AI and their practical deployment in transportation infrastructure. The remainder of this article is organized as follows:
Section 2 introduces the 4W framework for understanding XAI;
Section 3 outlines the methodology for literature selection and analysis;
Section 4 presents a taxonomy of XAI techniques;
Section 5 discusses real-world applications;
Section 6 addresses implementation challenges;
Section 7 examines decision making;
Section 8 explores future trends; and
Section 9 concludes with key findings and future directions.
2. Explainable AI: The 4Ws
This section critically synthesizes the literature on Explainable Artificial Intelligence (XAI) through the lenses of What, When, Why, and Whom. Rather than positioning XAI as a peripheral technical extension, several scholars conceptualize it as a foundational requirement for trustworthy AI systems. Holzinger [45] argues that the transition from traditional machine learning to Explainable AI reflects a broader paradigm shift toward human-centered intelligence. Expanding this perspective, Holzinger and Kieseberg [46] emphasize knowledge extraction as a bridge between algorithmic performance and interpretability. More recently, Holzinger et al. [47] extend this discussion beyond Explainable AI toward “xxAI,” advocating integrative approaches that combine robustness, causality, and transparency.
Within this evolving discourse, XAI is increasingly framed as an enabling layer for safety-critical and high-impact applications. Although much of the foundational work is domain-agnostic, its implications are particularly significant for Transportation Infrastructure Management (TIM), where predictive outputs influence operational safety and resource allocation decisions.
2.1. What: What Does XAI Provide in TIM?
Holzinger [45] identifies the black-box nature of deep learning models as a central challenge motivating the development of explainable methods. In parallel, Doran et al. [48] question what Explainable AI truly entails, distinguishing between inherently interpretable systems and those that require post hoc explanation mechanisms. Rousseau et al. [49] further differentiate interpretability from Explainable AI, arguing that understanding outputs is distinct from understanding the reasoning process that produced them [50].
Bennetot et al. [51] provide a practical taxonomy of explanation techniques, including feature attribution, surrogate modeling, and visualization-based approaches. Their tutorial-based synthesis demonstrates that explanation methods vary in scope, assumptions, and fidelity. Collectively, these contributions indicate that XAI provides structured mechanisms to reveal feature relevance, model sensitivity, and decision pathways, rather than merely improving transparency at a superficial level.
However, as Díez et al. [52] caution in their broader philosophical treatment of explanation, not all explanatory accounts satisfy epistemic rigor. This insight suggests that explanation quality must be critically evaluated, particularly when applied to high-stakes systems such as infrastructure management. The literature, therefore, frames XAI not simply as a technical enhancement, but as a methodological layer requiring validation and contextual alignment [53,54].
2.2. When: When Is Explainable AI Most Critical?
Historically, interpretable reasoning was embedded in symbolic and rule-based systems. With the rise of deep neural networks, however, opacity became the norm. Gunning and Aha [55], through the DARPA XAI program, formalized the need for Explainable AI in AI systems operating in real-world environments, particularly where human–machine teaming is essential.
Doran et al. [48] argue that Explainable AI becomes most relevant when decisions influence consequential outcomes. Rather than advocating universal transparency at every computational stage, Sztaki [56] suggests that explanation should be strategically integrated at critical decision junctures. This perspective aligns with the broader understanding that explanation is most valuable when human oversight, accountability, or intervention is required.
Goebel [57] further emphasizes that Explainable AI should be embedded in system design rather than appended after deployment. The convergence of these views supports the notion that explanation timing is context-dependent and most impactful at points where automated decisions intersect with safety, governance, or financial risk.
2.3. Why: Why Is XAI Necessary Beyond Accuracy?
The motivation for XAI extends beyond model performance metrics. Hoffman et al. [58] identify trust, usability, and stakeholder alignment as central dimensions of explainable systems. Similarly, Laato et al. [59], in their systematic review, argue that explanation enhances user acceptance and facilitates meaningful interaction between humans and AI systems.
From a legal and ethical standpoint, Kirat et al. [60] analyze the intersection of fairness and Explainable AI in automated decision-making, emphasizing accountability requirements in regulated environments. Díaz-Rodríguez et al. [61] connect AI principles and governance frameworks to responsible AI system design, demonstrating that transparency is integral to trustworthy deployment. Hamon et al. [62] further highlight robustness and Explainable AI as complementary pillars in AI assurance strategies.
Beyond governance, Sarabia [63] reflects on the philosophical necessity of interpretability in maintaining human agency over automated systems. Collectively, these studies reinforce that XAI supports ethical compliance, bias detection, stakeholder confidence, and responsible system governance—dimensions that extend well beyond predictive accuracy [64].
2.4. Whom: Who Benefits from XAI in TIM?
Stakeholder specificity emerges as a recurring theme in the literature. Hoffman et al. [58] explicitly analyze the roles and expectations of different stakeholders interacting with explainable systems. Their framework demonstrates that engineers, domain experts, regulators, and end users require distinct forms of explanation.
Laato et al. [59] similarly argue that explanation effectiveness depends on user background and technical literacy. Díaz-Rodríguez et al. [61] extend this argument to regulatory and governance actors, emphasizing that responsible AI requires multi-level stakeholder alignment. Mater et al. [65] and related legal scholarship further examine how algorithmic governance frameworks demand transparency tailored to policy and legal stakeholders.
Taken together, the literature indicates that explanation is not universally standardized but must be adapted to stakeholder needs. Effective XAI, therefore, requires role-sensitive explanation design, aligning technical outputs with engineering validation, regulatory oversight, and public accountability structures.
3. Methodology
This review was designed and conducted as a systematic, protocol-driven investigation of the integration of Explainable Artificial Intelligence (XAI) into Transportation Infrastructure Management (TIM), with a specific focus on predictive maintenance. The methodological structure follows a PRISMA 2020-style workflow to ensure transparency, reproducibility, and traceability throughout the identification, screening, eligibility assessment, and synthesis stages. The overall research flow is illustrated in Figure 1 (PRISMA-based workflow), while Figure 2 summarizes the conceptual methodology adopted in this study.
3.1. Review Plan and Research Protocol
The review was guided by predefined research objectives to prevent subjective or ad hoc inclusion decisions. The central aim was to examine how Explainable AI techniques are being integrated into machine learning and deep learning models for transportation infrastructure monitoring, maintenance prioritization, and safety management.
Four guiding research questions were established:
How is XAI being applied in predictive maintenance and condition monitoring of transportation infrastructure assets such as railways, bridges, pavements, tunnels, and road networks?
Which Explainable AI techniques are most frequently adopted in infrastructure-related Artificial Intelligence models?
How does Explainable AI contribute to transparency, trust, risk assessment, and engineering decision support?
What methodological gaps, deployment limitations, and research challenges remain in this domain?
The review period was defined from 2015 to March 2025. The starting year was selected because explainable machine learning approaches began gaining formal research momentum after 2015, particularly in safety-critical and engineering applications. The final database search was completed in March 2025 to ensure inclusion of the most recent studies.
3.2. Literature Search Strategy
The literature search was conducted using two major peer-reviewed scientific databases: Web of Science (WoS) and Scopus. These databases were selected because of their strict indexing standards, high-quality journal coverage, and comprehensive inclusion of engineering and Artificial Intelligence research.
Search queries were carefully constructed using combinations of domain-specific keywords and Boolean operators to ensure both breadth and precision. The keywords were grouped into three thematic clusters:
- (1) Explainability-related terms: “Explainable Artificial Intelligence”, “XAI”, “Explainable Machine Learning”, “Explainable Deep Learning”, “SHAP”, “LIME”, “Interpretability”, “Saliency Map”, “Counterfactual Explanation”.
- (2) Maintenance-related terms: “Predictive Maintenance”, “Condition Monitoring”, “Anomaly Detection”, “Fault Diagnosis”, “Risk Assessment”.
- (3) Infrastructure-related terms: “Transportation Infrastructure”, “Railway”, “Bridge”, “Tunnel”, “Pavement”, “Road Network”, “Civil Infrastructure”.
Representative search logic applied in the databases followed the structure:
(“Explainable AI” OR “XAI” OR “Interpretability”) AND (“Predictive Maintenance” OR “Condition Monitoring”) AND (“Railway” OR “Bridge” OR “Transportation Infrastructure” OR “Civil Infrastructure”).
The search was limited to peer-reviewed journal articles and conference papers published in English between 2015 and March 2025. Editorials, theses, patents, book chapters, and non-peer-reviewed documents were excluded at the search stage.
3.3. Screening and Study Selection Process
The study selection process was conducted in multiple stages to ensure systematic refinement of relevant literature. The workflow is summarized in Figure 1.
First, duplicate records across the two databases were identified and removed manually. A total of 45 duplicate records were eliminated at this stage.
Second, a preliminary title and abstract review was conducted to remove clearly irrelevant studies. Fifteen records were excluded during this stage because they did not relate to transportation infrastructure or did not involve Artificial Intelligence methodologies.
Following duplicate removal and preliminary exclusion, 390 records remained for detailed screening.
In the next stage, full-text eligibility assessment was performed. A total of 301 reports were sought for retrieval, and all were successfully obtained (reports not retrieved: n = 0). Each full-text article was carefully examined against the predefined inclusion and exclusion criteria.
During eligibility assessment, 174 reports were excluded for specific reasons. Of these, 138 studies were excluded because they did not specifically address civil or transportation infrastructure systems, and 36 studies were excluded because they were not directly related to infrastructure management or maintenance decision-making frameworks.
After completion of all screening stages, a total of 163 studies were retained for final analysis and synthesis.
3.4. Inclusion and Exclusion Criteria
The inclusion criteria were defined to ensure conceptual consistency and domain specificity. A study was included if it met all of the following conditions:
The study addressed transportation or civil infrastructure assets such as railways, bridges, pavements, tunnels, or road systems.
The study employed machine learning or deep learning methods for predictive maintenance, anomaly detection, condition monitoring, or structural health assessment.
The study incorporated Explainable AI, interpretability, or transparency mechanisms within the modeling framework.
The study was published as a peer-reviewed journal article or conference paper in English.
The publication date fell between 2015 and March 2025.
The exclusion criteria were as follows:
Studies focused solely on prediction accuracy without incorporating Explainable AI techniques.
Studies unrelated to infrastructure systems (e.g., healthcare, finance, manufacturing without infrastructure context).
Editorials, commentary articles, theses, patents, and non-peer-reviewed publications.
Purely theoretical XAI papers without application to infrastructure or maintenance contexts.
Duplicate records across databases.
These criteria ensured that the final dataset remained tightly aligned with the research objectives of XAI-driven predictive maintenance in transportation infrastructure.
3.5. Data Extraction Process
A structured data extraction framework was developed to maintain consistency across all selected studies. Each of the 163 included articles was reviewed in full text and manually recorded in a structured spreadsheet database.
For each study, the following variables were extracted:
Bibliographic information (authors, year, journal, country/region).
Type of infrastructure asset (railway, bridge, pavement, tunnel, road network).
Data modality (image-based, sensor-based, vibration data, acoustic emission, IoT signals, etc.).
Predictive task (fault detection, degradation prediction, anomaly detection, risk assessment).
Machine learning or deep learning model architecture.
Explainable AI technique applied (e.g., SHAP, LIME, attention mechanisms, saliency maps, counterfactual reasoning).
Validation strategy (cross-validation, field validation, experimental validation).
Deployment context (laboratory-based, simulation-based, real-world implementation).
This structured extraction ensured traceability and allowed comparative analysis across different infrastructure domains and methodological approaches.
3.6. Quality Assessment and Bias Evaluation
To ensure robustness of the synthesized findings, a qualitative quality assessment was conducted. Each study was evaluated based on methodological clarity, transparency of model description, appropriateness of evaluation metrics, and clarity of explainable integration.
The following quality indicators were considered:
Clear description of dataset characteristics and size.
Transparent explanation of model architecture and training process.
Explicit justification for chosen Explainable AI technique.
Use of appropriate evaluation metrics (e.g., accuracy, F1-score, AUC, precision-recall).
Discussion of limitations or real-world deployment constraints.
Studies lacking methodological transparency or failing to clearly describe Explainable AI integration were critically examined during synthesis to avoid overgeneralization of findings.
Potential sources of bias were also considered, including dataset imbalance, synthetic-only validation, limited geographic representation, and absence of field deployment evidence. These biases are discussed in Section 3.7 to contextualize the findings appropriately.
3.7. Synthesis of Findings
The synthesis of findings was conducted using thematic and objective-based categorization rather than statistical meta-analysis, due to heterogeneity in datasets, infrastructure types, modeling architectures, and evaluation metrics.
The 163 selected studies were classified according to dominant research objectives, resulting in four primary thematic categories:
Enhancing predictive maintenance accuracy through explainable modeling.
Improving safety and risk management via transparent decision support.
Supporting proactive and scenario-based maintenance planning using interpretable insights.
Strengthening transparency and trustworthiness in automated infrastructure management systems.
The distribution of articles across these objectives is presented in Figure 3. Additionally, temporal growth trends from 2015 to 2025 were analyzed to identify research evolution patterns, as illustrated in Figure 4. Regional and country-level contributions were also examined, as shown in Figure 5 and Figure 6.
4. Taxonomy of XAI Techniques
The rapid adoption of Artificial Intelligence (AI) in predictive maintenance for Transportation Infrastructure Management (TIM) has brought interpretability to the forefront of operational requirements. As AI models grow more complex—particularly in image-based defect detection for railway tracks, bridge components, and tunnel linings—ensuring that decision-making processes are transparent, auditable, and actionable has become critical. A structured taxonomy of Explainable Artificial Intelligence (XAI) techniques provides a systematic framework for selecting, applying, and evaluating methods in high-stakes TIM applications. This taxonomy organizes XAI approaches into four primary categories:
Model-Specific Methods—tightly coupled with particular model architectures (e.g., CNNs, decision trees), offering fine-grained insight into feature–prediction relationships [66,67,68].
Model-Agnostic Methods—independent of the underlying architecture, treating the AI as a “black box” and inferring explanations through input–output analysis [69,70,71,72].
Human-Centered Approaches—prioritizing interpretability for the end-user through visual, interactive, or natural language explanations, making outputs accessible to non-technical stakeholders [10,73].
Hybrid Methods—integrating model-specific and model-agnostic techniques to balance precision and versatility [74,75,76].
The rationale for this categorization is two-fold. First, different stages of TIM workflows require different interpretability levels: engineers may need pixel-level heatmaps for defect localization, while policymakers may require high-level feature contribution summaries. Second, operational constraints (e.g., latency in field inspections, limited computing resources on edge devices) dictate which methods are feasible for deployment. The taxonomy categories are explained in Table 1 with examples.
4.1. Core XAI Techniques
The practical value of XAI in TIM depends on selecting methods that balance interpretability, accuracy, and operational feasibility. The following techniques represent the most relevant tools for predictive maintenance in transportation assets. Table 2 provides comprehensive details of XAI core techniques.
4.1.1. XAI Core Techniques’ Implementation
To demonstrate the practical application of XAI techniques in Transportation Infrastructure Management, a dedicated dataset of railway fastener images was collected from the railway track test facility at the Central South University Railway Campus, China. This dataset served as a representative real-world case for applying various interpretability methods, allowing a systematic comparison of their explanatory capabilities.
Gradient-weighted Class Activation Mapping (Grad-CAM) produces heatmaps overlaid on the input images, illustrating the spatial areas most relevant to the model’s classification. In this dataset, Grad-CAM effectively isolates regions such as defect zones, wear points, or irregular surface textures, offering a localized and comprehensible interpretation of high-level feature activations in convolutional neural networks (CNNs) [86].
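As a concrete illustration, the following is a minimal Grad-CAM sketch in PyTorch. The model, target layer, and input tensor are illustrative stand-ins (a generic torchvision ResNet), not the exact implementation used for the fastener dataset described above.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for a trained fastener classifier
model.eval()

store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output.detach()           # feature maps of the target layer

def bwd_hook(module, grad_input, grad_output):
    store["grad"] = grad_output[0].detach()  # gradients flowing into that layer

target_layer = model.layer4[-1]              # last convolutional block
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)          # placeholder fastener image tensor
logits = model(image)
class_idx = int(logits.argmax(dim=1))
logits[0, class_idx].backward()              # gradient of the predicted class score

weights = store["grad"].mean(dim=(2, 3), keepdim=True)    # pooled channel weights
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized [0, 1] heatmap
```

The resulting `cam` tensor can be overlaid on the input image to highlight the regions driving the classification.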
Guided backpropagation focuses on fine-grained pixel-level details that contribute to the model’s output. This method is particularly valuable for identifying subtle patterns or micro-defects in fastener surfaces—features that might not be visible to the naked eye but are crucial for predictive maintenance decisions.
Layer-wise Relevance Propagation (LRP) decomposes the model’s final prediction into feature-level contributions, tracing relevance scores backward through the network’s layers. In practical terms, LRP helps determine exactly which visual attributes—such as bolt head shape, surface wear, or discoloration—had the greatest influence on the prediction, thereby offering a granular, engineering-relevant explanation.
Integrated Gradients quantify the cumulative contribution of each input feature by integrating gradients from a baseline input to the actual image. Applied to the fastener dataset, this method clarifies how different image regions progressively build toward the final prediction, providing a more stable and theoretically grounded attribution compared to simple gradient methods.
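A minimal Integrated Gradients sketch (PyTorch) under the same assumptions is given below; the all-black baseline and the 50-step integration path are common defaults rather than choices prescribed by the studies reviewed.

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate IG by averaging gradients along the baseline-to-input path."""
    if baseline is None:
        baseline = torch.zeros_like(image)            # all-black baseline image
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interpolated = baseline + alpha * (image - baseline)
        interpolated.requires_grad_(True)
        score = model(interpolated)[0, target_class]
        grad, = torch.autograd.grad(score, interpolated)
        total_grads += grad
    avg_grads = total_grads / steps
    return (image - baseline) * avg_grads             # per-pixel attributions
```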
By applying these techniques to a real-world railway fastener dataset, we not only showcase their technical feasibility but also illustrate their practical value in supporting explainable, accountable decision-making within transportation infrastructure maintenance.
4.1.2. Critical Observations from TIM Applications
Model-specific methods like Grad-CAM excel in high-resolution image analysis for defect detection but cannot be applied to non-visual sensor data, limiting their use in vibration or load monitoring systems.
Model-agnostic methods such as SHAP and LIME offer broader applicability but face scalability challenges for real-time decision-making in large infrastructure networks.
Human-centered approaches are essential for operational acceptance, yet they are underrepresented in the TIM literature; current studies often stop at generating explanations without testing their interpretability for end-users.
Hybrid methods present a promising avenue for TIM, combining visual localization with quantitative attribution, but few studies have validated them in field conditions.
4.1.3. Comparative Assessment
To guide method selection, Table 3 contrasts key techniques in terms of strengths, weaknesses, and TIM applications.
4.1.4. Role of Taxonomy in TIM Deployment
A taxonomy-driven approach ensures that Explainable AI tools are chosen based on data modality, operational constraints, and end-user requirements; a minimal selection sketch follows the list below. For example:
High-resolution inspection imagery benefits most from model-specific visual methods (e.g., Grad-CAM, LRP).
Multi-sensor fusion systems may require hybrid strategies to integrate image-based and sensor-based explanations.
Real-time monitoring on edge devices calls for lightweight model-agnostic tools, despite potential precision trade-offs.
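To make this selection logic concrete, the following is a hedged Python sketch of a taxonomy-driven selector. The rules simply mirror the guidance above; the function and its categories are illustrative of the decision framework, not a standardized API.

```python
def recommend_xai_method(modality: str, realtime: bool) -> str:
    """Map data modality and latency constraints to a method family from Table 1."""
    if modality == "image":
        return "model-specific visual methods (Grad-CAM, LRP)"
    if modality == "multi-sensor":
        return "hybrid methods (visual localization + SHAP-style attribution)"
    # tabular or time-series sensor streams
    if realtime:
        return "lightweight model-agnostic methods (e.g., LIME)"
    return "model-agnostic methods (e.g., SHAP)"

print(recommend_xai_method("image", realtime=False))
```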
By systematically categorizing and critically evaluating these methods, this taxonomy provides a decision-making framework for researchers and practitioners. It supports not only the technical selection of XAI methods but also the operational integration of Explainable AI into the broader maintenance ecosystem—a prerequisite for trustworthy, accountable, and efficient Transportation Infrastructure Management [81,82].
4.2. Advanced Techniques in Explainable AI (XAI)
While core XAI techniques offer foundational interpretability, advanced methods provide deeper, more dynamic insights—particularly relevant in the complex, safety-critical domain of Transportation Infrastructure Management (TIM). These techniques are designed to enhance the granularity, adaptability, and decision-relevance of explanations generated by predictive maintenance models. However, their integration into TIM is not without challenges, and a critical understanding of their capabilities and limitations is essential for informed adoption.
4.2.1. Attention Mechanisms
Attention mechanisms, particularly within transformer architectures, have redefined the interpretability landscape by enabling models to assign varying weights to input features, thereby focusing on the most salient information for a given task. In predictive maintenance, attention maps can visually highlight critical features such as localized defects in rail fasteners or stress concentration zones in bridge imagery [87]. However, the reliability of attention weights as direct indicators of true causal importance remains contested. Empirical studies have demonstrated that high attention scores do not always correlate with actual feature contribution to a model’s decision, raising questions about whether attention is an explanation or merely an alignment mechanism [88,89]. This limitation is compounded by the fact that attention often requires extensive fine-tuning to produce meaningful, domain-relevant outputs, which can be resource-intensive in large-scale TIM deployments [90].
Mathematically, attention mechanisms compute a weighted sum of values, with weights determined by the relevance of keys to a query. The most common approaches include Dot-Product Attention, represented as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

where $Q \in \mathbb{R}^{n \times d_k}$, $K \in \mathbb{R}^{m \times d_k}$, and $V \in \mathbb{R}^{m \times d_v}$.

Additive Attention computes the attention score using a feed-forward neural network, which is a function of the query and key:

$$e_{ij} = v_a^{\top} \tanh\!\left(W_q q_i + W_k k_j\right)$$

where $W_q$ and $W_k$ are weight matrices, and $v_a$ is a weight vector.

Multi-Head Attention extends the attention mechanism by using multiple attention heads, each with its own set of weight matrices:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^{O}, \quad \text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V}).$$
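For completeness, a minimal PyTorch sketch of the scaled dot-product formulation above is given below; the tensor dimensions and the use of torch.nn.MultiheadAttention for the multi-head case are illustrative assumptions.

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # relevance of keys to queries
    weights = torch.softmax(scores, dim=-1)            # attention weights (rows sum to 1)
    return weights @ V, weights

# Multi-head self-attention via PyTorch's built-in module:
mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=8, batch_first=True)
x = torch.randn(2, 10, 64)                             # (batch, sequence, features)
out, attn_weights = mha(x, x, x)                       # weights can be inspected as a
                                                       # (contested) saliency proxy
```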
In TIM contexts, attention mechanisms are best deployed in combination with other interpretability methods—for example, cross-validating attention maps with SHAP feature attributions—to avoid misleading conclusions.
4.2.2. Disentangled Representations
Disentangled representation learning aims to decompose complex data into independent, semantically meaningful factors of variation, each corresponding to a distinct physical or operational property. In predictive maintenance, this can mean isolating variables such as material degradation, load-induced wear, and environmental corrosion into separate latent dimensions, enabling engineers to interpret and control model outputs more directly. Despite its appeal, true disentanglement remains elusive [91,92,93,94]. The field lacks a universally accepted definition, and many approaches depend on strong inductive biases that may not hold across diverse infrastructure contexts [95,96]. This can lead to overfitting, limited transferability, and misleadingly “clean” latent factors that do not correspond to real-world phenomena. Additionally, quantitative evaluation of disentanglement is challenging, often requiring hybrid metrics that combine statistical independence with human judgment [97].
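As one common (though contested) route toward disentanglement, a minimal β-VAE objective sketch in PyTorch is shown below; the encoder/decoder are assumed to exist elsewhere, and the β value is an assumption. As noted above, a larger β encourages—but does not guarantee—independent latent factors.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Reconstruction term plus a β-weighted KL divergence to an isotropic prior.

    Increasing beta pressures latent dimensions toward statistical independence,
    the mechanism behind (partial) disentanglement in β-VAE-style models.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```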
4.2.3. Perturbation-Based Methods
Perturbation-based interpretability methods determine feature importance by systematically modifying or occluding parts of the input and observing the resulting change in model output [98]. This process is intuitive, often producing clear and actionable explanations for stakeholders. For example, masking regions of a bridge image can reveal which visual features drive crack detection, or removing specific sensor channels can indicate which measurements are most critical for predicting track failures.
Mathematically, given an input $x$ and a perturbation function $\pi(\cdot, i)$ that modifies or occludes feature $i$, the importance $I_i$ of feature $i$ can be computed as:

$$I_i = f(x) - f\big(\pi(x, i)\big)$$

where $f(\cdot)$ denotes the model’s output.
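The formula above can be implemented directly as patch occlusion. The following PyTorch sketch assumes an image classifier and a 16-pixel occlusion patch, both illustrative choices.

```python
import torch

def occlusion_importance(model, image, target_class, patch=16):
    """Slide a gray patch over the image; importance = drop in target-class score."""
    model.eval()
    with torch.no_grad():
        base = model(image)[0, target_class].item()
        _, _, H, W = image.shape
        heatmap = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                perturbed = image.clone()
                perturbed[:, :, i:i + patch, j:j + patch] = 0.5  # occlude one patch
                score = model(perturbed)[0, target_class].item()
                heatmap[i // patch, j // patch] = base - score   # I_i for this region
    return heatmap
```

Note that this requires one forward pass per patch, which previews the computational-cost concern discussed next.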
While effective, perturbation-based approaches face significant computational overhead, particularly in high-dimensional image or sensor datasets [99,100]. Moreover, results can be sensitive to the perturbation strategy—for example, masking vs. adding noise—leading to instability in the generated explanations. These limitations suggest that perturbation-based techniques should be used as part of a broader interpretability pipeline rather than as a sole explanatory tool [101].
4.2.4. Critical Insights
Advanced XAI methods hold significant promise for TIM predictive maintenance, but their deployment should be strategically selective:
Attention mechanisms are powerful but risk being misinterpreted; pairing them with causal attribution techniques can strengthen validity.
Disentangled representations offer interpretable latent spaces but remain technically and conceptually fragile; domain-specific constraints are essential for practical adoption.
Perturbation-based methods provide intuitive results but struggle with scalability, making them more suitable for offline model audits than for continuous real-time monitoring.
A hybrid interpretability framework—where advanced techniques complement, rather than replace, core XAI methods—offers the most reliable path toward transparent, trustworthy, and operationally viable predictive maintenance systems in transportation infrastructure.
4.3. Key Evaluation Metrics for XAI Techniques
The effectiveness of XAI techniques in predictive maintenance is primarily determined by their ability to enhance decision-making, improve model performance, and build trust with users. The key evaluation metrics for assessing XAI’s impact on Transportation Infrastructure Management (TIM) are outlined in Table 4 below.
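As an illustration of how interpretability can be scored reproducibly rather than subjectively, the following sketch computes a deletion-style faithfulness curve—one common metric of this kind, not a standard mandated by the literature reviewed. The predict function, attribution vector, and fill value are assumptions.

```python
import numpy as np

def deletion_curve(predict_fn, x, attributions, fill=0.0):
    """Return target-class scores as features are deleted in attribution order."""
    order = np.argsort(-np.abs(attributions))  # most important features first
    x_del = x.copy()
    scores = [predict_fn(x_del)]
    for idx in order:
        x_del[idx] = fill                      # "delete" the feature
        scores.append(predict_fn(x_del))
    return np.array(scores)                    # steep drop => faithful explanation

# Faithfulness can then be summarized as the area under this curve (lower is better).
```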
5. Applications of XAI for Transportation Infrastructure Management
5.1. The Need for Explainable AI in Transportation Predictive Maintenance
The increasing complexity of transportation infrastructure, including systems like railways, bridges, and tunnels, presents significant challenges in terms of maintenance and operational efficiency. Traditional maintenance strategies, often reactive, are no longer sufficient to address the demands of modern infrastructure management, where failure to detect early signs of wear or damage can lead to catastrophic consequences, costly repairs, and extended downtimes. In this context, predictive maintenance—the practice of using AI models to forecast potential failures before they occur—has emerged as a critical solution. By leveraging AI and machine learning models, transportation agencies can predict infrastructure degradation based on sensor data, usage patterns, environmental conditions, and historical records, enabling proactive maintenance and resource allocation.
However, despite the promise of AI-powered predictive maintenance, the widespread adoption of these models is hindered by the lack of transparency and interpretability in many of the AI systems used. Most advanced AI models, especially deep learning and neural networks, operate as “black boxes” that offer little insight into the reasoning behind their predictions. This opacity creates a barrier to trust, making it difficult for engineers, maintenance teams, and policymakers to understand how decisions are made and, more importantly, to ensure the validity and reliability of those decisions [98]. When predictive models flag a railway track or bridge component for maintenance, stakeholders need to understand why a specific component is considered at risk—this understanding is vital for decision-making, especially in high-stakes applications where human lives and large-scale infrastructure investments are at stake.
This is where Explainable AI (XAI) becomes indispensable. XAI aims to make AI models not only more transparent but also interpretable by providing human-readable explanations for their predictions. For example, in a predictive maintenance system, XAI can identify which sensor readings, such as vibration, temperature, or stress levels, contributed most to a prediction of impending failure in a railway track or bridge. By explaining how these features influenced the model’s decision, XAI helps engineers and maintenance teams verify the accuracy of AI predictions and align them with expert knowledge. This transparency is crucial because it allows stakeholders to trust the AI’s recommendations, facilitating informed decision-making and ensuring that AI-driven insights are consistent with engineering judgment and established safety protocols [102].
Furthermore, the interpretability provided by XAI helps in accountability and regulatory compliance. In the context of transportation infrastructure, AI-driven maintenance decisions often need to meet strict safety standards. Regulatory bodies and the public demand reassurance that AI models are operating fairly, consistently, and without bias. XAI enables infrastructure managers to provide clear, traceable justifications for their predictive maintenance actions. For instance, if an AI model recommends replacing a component, XAI can explain which data points led to that conclusion, thereby ensuring that the decision-making process is transparent and auditable. This is especially important when models are deployed in critical infrastructure systems where decisions can have serious consequences for both safety and public trust.
In addition, XAI aids in debugging and improving AI models. By making the inner workings of AI models more understandable, engineers can more easily identify where models might be going wrong. If a model is consistently flagging the wrong components for maintenance, XAI can provide insights into why the model is failing and suggest areas for improvement. This ability to fine-tune models based on interpretable feedback is essential for continuously enhancing predictive maintenance systems, ensuring that they remain accurate and reliable over time [103]. Moreover, XAI facilitates human–AI collaboration. In predictive maintenance, AI is a tool to augment human expertise, not replace it. Engineers and maintenance teams have valuable domain-specific knowledge that can enhance AI-driven decision-making. By making AI’s reasoning transparent, XAI allows these experts to collaborate with AI systems more effectively, offering insights that improve the quality of predictions and the overall decision-making process. This collaboration ensures that AI models remain grounded in real-world conditions, increasing their relevance and reliability.
5.2. Enhancing Transparency and Trust
Digital twins and predictive maintenance represent transformative advancements across numerous industries, offering real-time monitoring, simulation, and optimization of complex systems. At the heart of these innovations are AI models that power the decision-making processes, enabling predictive capabilities and system insights. However, the “black-box” nature of traditional AI models—where internal decision processes remain opaque—presents a significant challenge to the acceptance, trust, and effective deployment of these technologies. Explainable AI (XAI) is emerging as the critical tool to address this opacity, ensuring transparency, interpretability, and trust across both digital twins and predictive maintenance systems [104].
5.3. XAI’s Role in Predictive Maintenance Within Industry 5.0
Industry 5.0 ushers in a new era of industrial advancement, where human intelligence and creativity synergize with cutting-edge technologies like Artificial Intelligence (AI), robotics, and automation. This transformation focuses on creating more personalized, sustainable, and human-centric approaches, particularly in sectors like manufacturing, healthcare, and transportation [105]. Within this context, Explainable Artificial Intelligence (XAI) plays a crucial role in ensuring that AI systems are not only efficient but also transparent, interpretable, and trustworthy—qualities that are essential in high-stakes applications, such as predictive maintenance for infrastructure. As Industry 5.0 strives to optimize the interaction between humans and AI systems, the need for explainable models becomes increasingly important, especially for predictive maintenance. While traditional AI models, such as deep learning networks, have made significant progress in predicting equipment failures and optimizing maintenance schedules, their “black-box” nature can be problematic. In predictive maintenance for transportation infrastructure—such as railways and bridges—AI models that forecast potential failures or recommend maintenance actions must be understandable. Without Explainable AI, engineers may find it difficult to verify the AI model’s reasoning, which is critical for ensuring safety and operational reliability [106].
XAI enhances the interpretability of AI systems, particularly in predictive maintenance, by offering transparent explanations for model predictions. Techniques such as SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-Agnostic Explanations), and Grad-CAM (Gradient-weighted Class Activation Mapping) provide interpretable insights into which input features, such as sensor data, environmental conditions, or historical performance, contributed most to the AI’s decision [107]. For example, in predictive maintenance for a railway system, XAI can explain how factors like temperature or vibration levels influence the prediction of track degradation, enabling engineers to make informed, evidence-based decisions about which assets to prioritize for maintenance.
Within Industry 5.0, XAI not only supports better decision-making but also enhances human–AI collaboration. Rather than replacing human expertise, AI works as a support tool. For instance, in smart transportation systems, XAI allows maintenance teams to understand why a particular infrastructure component, such as a bridge or rail track, is flagged for repair. This transparency improves trust and ensures the decisions align with human expectations, thus fostering a more cooperative relationship between engineers and AI systems. This human-centric approach improves efficiency, safety, and job satisfaction by providing clarity and insight into AI-driven decisions. The integration of XAI with predictive maintenance systems in real-time applications has significantly improved decision-making speed. For instance, real-time anomaly detection models were able to flag critical infrastructure issues, with XAI explanations helping maintenance teams understand the root causes of these anomalies in seconds, thereby accelerating repair responses.
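To ground this, the following is a minimal SHAP sketch for a tabular predictive-maintenance model; the synthetic sensor features (vibration, temperature, load), the labels, and the random-forest model are illustrative assumptions rather than data from the studies reviewed.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(500, 3)                        # columns: vibration, temperature, load
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # synthetic "degradation" label
model = RandomForestClassifier(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)             # fast SHAP values for tree ensembles
shap_values = explainer.shap_values(X[:10])       # per-feature contribution per sample
# shap.summary_plot(shap_values, X[:10],
#                   feature_names=["vibration", "temperature", "load"])
```

Per-sample attributions of this kind are what allow an engineer to see, for instance, that vibration rather than temperature drove a particular degradation prediction.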
Additionally, XAI is essential for ethical decision-making and regulatory compliance in predictive maintenance applications within Industry 5.0. As AI increasingly drives automation in infrastructure management, it is crucial to ensure that these systems adhere to ethical standards and comply with regulations. By providing transparent explanations, XAI helps mitigate the risk of bias and ensures that AI models operate within ethical boundaries [73,108]. In predictive maintenance, for example, XAI ensures that AI recommendations align with safety regulations and ethical standards, promoting fair and safe infrastructure management.
As predictive maintenance evolves within Industry 5.0, real-time decision support systems powered by XAI will become increasingly prevalent. For instance, IoT sensors embedded in transportation infrastructure can provide real-time data that is analyzed by AI models to predict potential failures. XAI’s role is to provide engineers with actionable insights and explain why certain data points—such as vibrations or environmental stress—contributed to a predicted failure [109]. This understanding enables quicker, more effective responses, preventing system breakdowns before they occur.
XAI’s role in predictive maintenance within Industry 5.0 is pivotal. By making AI models more transparent, interpretable, and accountable, XAI enables better decision-making, enhances human–AI collaboration, and ensures AI-driven processes comply with ethical and regulatory standards. As predictive maintenance continues to evolve in transportation infrastructure, the integration of XAI will be crucial in ensuring that AI systems function safely, effectively, and in harmony with human expertise.
5.4. Current State of Predictive Maintenance in Transportation Infrastructure
Predictive maintenance has gained significant traction in the field of Transportation Infrastructure Management due to its potential to improve safety, reduce costs, and enhance operational efficiency. By leveraging advanced data analytics, machine learning models, and sensor technologies, predictive maintenance systems can forecast equipment failures before they occur, allowing for timely interventions that prevent downtime and catastrophic failures. In the context of transportation infrastructure—such as railways, bridges, tunnels, and highways—the adoption of predictive maintenance has the potential to revolutionize how maintenance activities are planned and executed.
Currently, the state of predictive maintenance in transportation infrastructure is characterized by the widespread use of condition-based monitoring systems. These systems utilize IoT sensors to collect real-time data on various infrastructure components, including vibration, temperature, strain, and wear. Data collected from these sensors is then analyzed to detect patterns or anomalies that might indicate impending failures. Traditional maintenance approaches, which are typically time-based (i.e., scheduled maintenance regardless of the asset’s condition), are increasingly being replaced by predictive systems that only recommend interventions when the condition of the infrastructure warrants it [25]. This shift allows for more efficient allocation of resources, better utilization of maintenance teams, and significant cost savings by avoiding unnecessary repairs.
While significant progress has been made in implementing predictive maintenance in certain sectors, such as railways and bridges, there are still several challenges to overcome. One of the primary challenges is the data integration and quality issue. Predictive maintenance systems often rely on vast amounts of data, including sensor readings, environmental conditions, and historical performance. However, the quality of the data can vary, and integrating data from different sources (e.g., old maintenance logs, new sensor data, and weather data) can be complex. Moreover, data sparsity—especially in cases where the historical failure data is insufficient, or the sensors are sparse—remains a barrier to effective model training, making predictions less accurate and reliable [110,111].
Additionally, the use of machine learning models for predictive maintenance, particularly deep learning and neural networks, while powerful, presents challenges due to the black-box nature of many AI models. These models, although capable of providing high-accuracy predictions, often lack interpretability, making it difficult for maintenance engineers and other stakeholders to understand the reasoning behind the predictions. This lack of transparency can limit trust in AI-driven systems, particularly when decisions are critical to safety [12]. To address these challenges, the integration of Explainable AI (XAI) techniques has become crucial, as they provide insights into the factors driving the predictions, thereby increasing transparency and fostering trust among users.
The application of predictive maintenance in transportation infrastructure is also hindered by the high computational requirements of some machine learning models. Predicting failures in large-scale infrastructure networks requires processing vast amounts of real-time sensor data. This is particularly challenging for systems with low-latency requirements, such as railway networks, where predictions must be made in real-time to ensure the safety of operations. In such environments, edge computing—where data is processed closer to the source—offers potential solutions by reducing the need to transmit large volumes of data to central servers.
The current state of predictive maintenance in transportation infrastructure is evolving rapidly, with significant advancements in sensor technology, machine learning models, and digital twin systems. However, challenges such as data quality, interpretability of AI models, computational demands, and cost still need to be addressed. The integration of Explainable AI can play a pivotal role in overcoming these challenges, improving the transparency and reliability of predictive maintenance systems, and driving the next phase of innovation in Transportation Infrastructure Management. Moving forward, there is a strong potential for increased adoption of predictive maintenance technologies, particularly as models become more explainable, scalable, and accessible for all types of infrastructure [21,46].
6. Challenges in Implementing XAI in TIM
The integration of Explainable Artificial Intelligence (XAI) into predictive maintenance (PdM) for Transportation Infrastructure Management (TIM) holds significant promise for improving safety, optimizing resources, and enabling data-driven decision-making. However, its implementation in operational environments faces technical, operational, and socio-ethical challenges that must be addressed to ensure both utility and adoption. This section critically examines these challenges in light of the current literature, identifying key barriers and research gaps that hinder the deployment of XAI-enabled PdM in TIM.
Most state-of-the-art PdM systems in TIM rely on complex deep learning architectures such as convolutional neural networks (CNNs), graph neural networks (GNNs), and transformer-based models [112,113]. These architectures achieve strong predictive performance but expose little of their internal reasoning. In TIM, where public safety and infrastructure resilience are at stake, such opacity is a critical barrier. Maintenance engineers, inspection authorities, and policymakers need to understand why a model predicts imminent failure, not just that it does. Without interpretable reasoning, even accurate predictions may be met with skepticism, potentially delaying critical interventions or leading stakeholders to fall back on human judgment rather than AI assistance.
A recurring theme in XAI research is the performance–interpretability trade-off. Techniques such as LIME or SHAP provide valuable post hoc explanations but introduce computational overhead and, in some cases, slightly reduce model efficiency [114]. In TIM’s PdM applications—such as real-time rail defect detection or bridge health monitoring—low-latency decision-making is essential. Even modest delays in generating explanations can be unacceptable when operational crews require immediate, actionable insights for safety-critical interventions.
Predictive maintenance in TIM typically integrates heterogeneous datasets—including time-series sensor data, inspection images, maintenance logs, traffic loads, and environmental variables. These data vary not only in modality but also in quality, completeness, and temporal resolution. Many XAI techniques were developed for single-modal datasets and are not readily adaptable to multi-source data fusion. Additionally, infrastructure datasets are often fragmented across agencies or private operators, creating challenges in building unified, domain-specific models. An effective XAI system for TIM must be able to explain predictions across different subsystems—for example, relating a vibration sensor anomaly in a bridge to historical traffic and weather patterns—while maintaining cross-domain interpretability [115,116].
In AI, predictive performance metrics such as accuracy, precision, recall, and F1-score are standardized and widely adopted. However, in XAI, evaluation criteria remain inconsistent and context-dependent [115,116]. For TIM, where explanations must influence high-stakes operational decisions, effectiveness cannot be assessed solely by technical correctness—it must also account for user comprehension, trust, and actionability. Without standardized, domain-specific evaluation frameworks, comparing and validating different XAI techniques is challenging. Moreover, user satisfaction studies—critical in assessing whether explanations truly support decision-making—are rare in the TIM literature.
TIM involves a multi-stakeholder ecosystem, including engineers, field inspectors, maintenance crews, regulatory agencies, and policymakers. Each group requires explanations tailored to their expertise and responsibilities:
Maintenance engineers need granular, feature-level insights to pinpoint the cause of a defect.
Regulators may require higher-level summaries linking explanations to compliance and safety standards.
Budget authorities might prioritize explanations tied to cost–benefit trade-offs.
Designing XAI systems that adapt explanations to different cognitive and operational needs—while preserving accuracy and avoiding information overload—is a complex challenge. This is compounded by the need to ensure fairness and bias mitigation, especially in under-resourced or less-monitored infrastructure systems [
117].
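One possible pattern for stakeholder-adaptive explanation is to render a single underlying attribution vector at different granularities. The sketch below is illustrative only; the attribution values, the 70% action threshold, and the cost figures are invented for the example.

```python
# Sketch of stakeholder-adaptive explanation rendering: the same attribution
# vector is summarized differently for engineers, regulators, and budget
# authorities. All thresholds and cost figures are illustrative assumptions.
attribution = {"vibration_rms": 0.42, "wear_index": 0.31, "axle_load": 0.18,
               "temperature": 0.07}
failure_prob, repair_cost, failure_cost = 0.83, 12_000, 250_000  # hypothetical

def engineer_view():
    # Granular, feature-level ranking for root-cause analysis
    return sorted(attribution.items(), key=lambda kv: -kv[1])

def regulator_view():
    # High-level summary tied to a (hypothetical) safety action threshold
    status = "exceeds" if failure_prob > 0.7 else "is within"
    return f"Predicted failure risk {failure_prob:.0%} {status} the 70% action threshold."

def budget_view():
    # Expected-cost framing for prioritization decisions
    expected_loss = failure_prob * failure_cost
    return f"Preventive repair ({repair_cost:,}) vs. expected failure loss ({expected_loss:,.0f})."

print(engineer_view())
print(regulator_view())
print(budget_view())
```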
Explanations that are technically accurate but cognitively misaligned with human reasoning may fail to support decision-making. Many current XAI methods provide outputs in the form of heatmaps, feature rankings, or perturbation effects, which require specialized interpretation skills.
Bridging the gap between algorithmic outputs and human cognitive models requires research into:
- How maintenance personnel interpret AI outputs in real-time operational contexts.
- How to design visualization interfaces and natural language explanations that enhance usability.
This alignment challenge is particularly pressing in TIM, where time-constrained decisions must be made by personnel with varying levels of AI literacy.
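As a small illustration of the natural-language direction, a template-based verbalization layer can convert a feature ranking into a sentence for field personnel; the feature names, contextual phrases, and prediction below are hypothetical.

```python
# Minimal sketch of a template-based natural-language explanation layer,
# aimed at time-constrained field personnel rather than heatmap-literate
# analysts. Feature names, contexts, and the prediction are hypothetical.
top_factors = [("vibration RMS", 0.42, "3.1x baseline"),
               ("wear index", 0.31, "above the 85th percentile")]
prediction = "fastener failure within 14 days"

clauses = [f"{name} is {context} (contribution {weight:.0%})"
           for name, weight, context in top_factors]
print(f"The model predicts {prediction} mainly because "
      + " and ".join(clauses) + ".")
```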
TIM operates in environments subject to dynamic and unpredictable conditions—including extreme weather, seismic activity, and unplanned load stresses. AI models that lack robustness to such variability may yield unstable or misleading explanations [
118]. For example, a rail defect detector trained under normal weather conditions might misattribute features during a heavy snow event, misleading maintenance crews. Robustness in XAI is therefore two-fold: the underlying model must remain reliable under shifting operating conditions, and the explanations themselves must remain stable and faithful under those same conditions.
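A minimal sketch of such a two-fold check follows, using an assumed linear model with input-times-weight attributions: it measures both the prediction shift under input noise and the stability of the top-ranked features via Jaccard overlap.

```python
# Sketch of a two-fold robustness check under input noise: (1) prediction
# stability, (2) explanation stability as top-k Jaccard overlap of feature
# attributions. The linear model and attribution rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.5, -0.8, 0.3, 0.05])          # hypothetical model weights
x = np.array([1.0, 2.0, -1.0, 0.5])           # clean sensor reading
x_noisy = x + rng.normal(scale=0.2, size=4)   # e.g., heavy-snow conditions

def top_k(attr, k=2):
    return set(np.argsort(-np.abs(attr))[:k])

attr_clean, attr_noisy = w * x, w * x_noisy   # input-times-weight attribution
overlap = top_k(attr_clean) & top_k(attr_noisy)
union = top_k(attr_clean) | top_k(attr_noisy)
print(f"prediction shift: {abs(w @ x - w @ x_noisy):.3f}")
print(f"explanation stability (Jaccard@2): {len(overlap) / len(union):.2f}")
```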
Most current XAI methods describe associations between features and predictions rather than causal relationships. In predictive maintenance for TIM, however, understanding causality is essential. For instance, determining whether increased axle load causes accelerated bridge fatigue—or is merely correlated with it—affects both maintenance prioritization and resource allocation. Developing XAI techniques that can capture and communicate causal mechanisms would enable infrastructure managers to transition from reactive to preventive and prescriptive maintenance strategies [
119,
120].
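The toy simulation below illustrates the distinction for the axle-load example: traffic intensity is assumed to confound both axle load and fatigue, so a naive regression slope overstates the effect while a confounder-adjusted regression recovers the assumed causal coefficient. The data-generating process is entirely synthetic.

```python
# Toy illustration of correlation vs. causation: traffic intensity (a
# confounder) drives both axle load and fatigue. Adjusting for it recovers
# the assumed true causal effect (0.3). All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
traffic = rng.normal(size=n)                       # confounder
axle_load = 0.9 * traffic + rng.normal(scale=0.5, size=n)
fatigue = 0.3 * axle_load + 1.2 * traffic + rng.normal(scale=0.5, size=n)

# Naive (unadjusted) slope conflates the confounder's contribution
naive = np.polyfit(axle_load, fatigue, 1)[0]

# Adjusted estimate: OLS regression of fatigue on axle load AND traffic
X = np.column_stack([axle_load, traffic, np.ones(n)])
adjusted = np.linalg.lstsq(X, fatigue, rcond=None)[0][0]

print(f"naive slope:    {naive:.2f}")    # biased upward by traffic
print(f"adjusted slope: {adjusted:.2f}") # close to the assumed effect 0.3
```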
Effective deployment of XAI in TIM requires expertise spanning machine learning, transportation engineering, human factors, and ethics. Current research often focuses on algorithmic development without adequately involving domain experts in the design and validation process. This disciplinary siloing risks producing explanations that are mathematically correct but operationally irrelevant [
121,
122].
Collaboration between AI developers and TIM practitioners is critical to ensure that explanations not only satisfy technical metrics but also align with field realities.
The introduction of XAI into TIM also brings forward questions of liability, accountability, and regulatory compliance. If an XAI system misguides a maintenance decision that leads to infrastructure failure, determining responsibility becomes complex. Furthermore, transparency requirements in public-sector infrastructure projects may necessitate explanations that meet legal disclosure standards, adding another layer of design complexity.
The challenges facing XAI integration in TIM’s predictive maintenance are multi-dimensional (
Table 5). They span algorithmic limitations, data heterogeneity, evaluation gaps, stakeholder complexity, and governance concerns.
Overcoming these challenges requires coordinated research and development efforts that:
- Integrate causal inference with XAI to better capture failure mechanisms in TIM.
- Develop real-time, low-latency interpretability frameworks suitable for high-stakes PdM tasks.
- Create multi-modal XAI architectures that explain fused sensor, imagery, and environmental datasets.
- Establish domain-specific evaluation standards for explanation effectiveness.
- Foster interdisciplinary collaborations to align algorithmic outputs with operational needs.
Addressing these areas will enable XAI not just to justify AI predictions, but to actively enhance decision-making—paving the way for safer, more resilient, and cost-effective Transportation Infrastructure Management.
6.1. Novelty and Positioning Relative to Existing XAI and Predictive Maintenance Surveys
Several existing surveys have reviewed Explainable Artificial Intelligence techniques in the context of machine learning, industrial systems, or generic predictive maintenance [
123,
124]. These works typically focus on methodological categorization—such as model-specific versus model-agnostic explanations—and discuss conceptual challenges including faithfulness, stability, and user trust. However, they do not explicitly address the operational, regulatory, and decision-centric constraints inherent to Transportation Infrastructure Management (TIM) [
125].
This review distinguishes itself by positioning XAI within infrastructure-specific management workflows, where explanations are not only required to interpret model behavior but also to support safety-critical decisions, regulatory accountability, and long-term asset management. Unlike prior surveys that emphasize algorithmic properties in isolation, this study integrates Explainable AI with asset-level decision processes, stakeholder roles, and deployment constraints characteristic of transportation infrastructure systems [
126,
127].
Key differentiating aspects of this review include its exclusive focus on TIM assets, systematic consideration of multimodal infrastructure data, and explicit discussion of Explainable AI under domain shift, real-time monitoring, and edge deployment conditions common in field inspections [
128]. Furthermore, this work emphasizes Explainable AI as a mechanism for documentation, auditing, and prioritization rather than solely as a tool for model debugging [
129].
By embedding Explainable AI within Transportation Infrastructure Management workflows and constraints, this review advances existing taxonomies from a purely methodological classification toward a domain-aware synthesis. The resulting perspective highlights gaps that are specific to infrastructure systems, including Explainable AI under environmental variability, long-term asset degradation, and safety-driven accountability, thereby establishing a distinct contribution beyond prior XAI and predictive maintenance surveys [
130,
131,
132].
6.2. Conceptual Properties to Measurable Validation Practices
The evaluation of Explainable Artificial Intelligence (XAI) in Transportation Infrastructure Management must be distinguished from conventional model performance assessment. Based on the reviewed literature, three complementary evaluation dimensions can be identified: predictive performance, explanation quality, and decision-level outcomes [
133].
Predictive performance evaluation follows standard machine learning practices and includes metrics such as accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve, depending on the specific task (e.g., defect detection or condition classification). These metrics assess the correctness of predictions but do not evaluate the reliability or usefulness of explanations [
54].
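For concreteness, the snippet below computes these standard metrics with scikit-learn for a hypothetical defect detector; the labels and scores are illustrative.

```python
# Standard predictive-performance evaluation for a hypothetical defect
# detector: accuracy, precision, recall, F1, and ROC-AUC as named above.
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true  = [0, 1, 1, 0, 1, 0, 1, 0]                  # ground-truth defect labels
y_score = [0.1, 0.8, 0.6, 0.3, 0.9, 0.2, 0.4, 0.1]  # model probabilities
y_pred  = [int(s >= 0.5) for s in y_score]          # thresholded decisions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("ROC-AUC  :", roc_auc_score(y_true, y_score))
```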
Explanation quality evaluation focuses on the validity and stability of the generated explanations. In the reviewed TIM literature, commonly applied practices include perturbation-based tests, where input features or image regions are systematically modified to assess changes in model output, and sensitivity analysis to examine whether explanations are consistent under small input variations or random noise. Several studies further assess explanation stability by comparing outputs across multiple model initializations or random seeds. However, the application of formal sanity checks and out-of-distribution validation remains limited in current infrastructure-focused studies [
134,
135].
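The sketch below shows one common form of perturbation-based test, a deletion curve: features are removed in the order given by the explanation and the drop in model output is recorded, with a faithful explanation expected to produce a steep early drop. The stand-in model and attribution ranking are assumptions for illustration.

```python
# Sketch of a perturbation-based faithfulness (deletion) test: remove
# features in the explanation's ranked order and record the output drop.
# The scoring function and attribution ranking are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=8)

def model(x):                          # stand-in black-box scoring function
    return float(np.tanh(w @ x))

x = rng.normal(size=8)
ranking = np.argsort(-np.abs(w * x))   # explanation's feature ranking
baseline = np.mean([model(rng.normal(size=8)) for _ in range(200)])

masked = x.copy()
print(f"full input score: {model(x):.3f}")
for i, feat in enumerate(ranking, start=1):
    masked[feat] = 0.0                 # "delete" feature (replace with 0)
    print(f"after removing top-{i}: {model(masked):.3f}")
print(f"reference score on random inputs: {baseline:.3f}")
```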
Decision-level evaluation examines how explanations influence human understanding and operational decisions. In transportation infrastructure applications, this evaluation is most often conducted through qualitative expert validation, where domain engineers assess whether the explanations align with known failure mechanisms, structural behavior, or maintenance logic. While quantitative measures such as time-to-decision or error reduction are rarely reported, expert agreement and qualitative consistency are commonly used as proxies for explanation usefulness in safety-critical contexts [
136,
137].
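Where two or more experts rate explanation plausibility, chance-corrected agreement offers a simple quantitative proxy; the sketch below computes Cohen's kappa over hypothetical ratings.

```python
# Decision-level proxy measure: chance-corrected agreement (Cohen's kappa)
# between two domain engineers rating whether each explanation matches
# known failure mechanisms. The ratings are hypothetical.
from sklearn.metrics import cohen_kappa_score

# 1 = explanation consistent with engineering judgment, 0 = inconsistent
engineer_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
engineer_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

print(f"Cohen's kappa: {cohen_kappa_score(engineer_a, engineer_b):.2f}")
```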
Overall, the reviewed literature demonstrates that explanation evaluation in TIM remains primarily qualitative and task-driven, with increasing but still limited adoption of systematic validation practices that go beyond conceptual properties such as interpretability or trust [
138,
139].
7. XAI Within Transportation Infrastructure Management Decision Workflows
To move beyond a purely algorithmic perspective, this review explicitly frames Explainable Artificial Intelligence (XAI) within Transportation Infrastructure Management (TIM) decision workflows. In TIM practice, predictive models are not ends in themselves; they support a sequence of operational and strategic decisions ranging from inspection planning to budgeting and regulatory reporting. Accordingly, the value of XAI depends on how effectively explanations align with these decision points and the needs of different stakeholders [
53].
In inspection and monitoring stages, XAI supports inspection triage by identifying spatial or temporal regions that contribute most strongly to predicted anomalies or defects. For asset engineers and inspectors, localized explanations—such as saliency maps or attention weights—enable efficient allocation of field inspections to high-risk components rather than uniform inspection schedules [
92,
102,
140].
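As an illustration of localization-driven triage, the sketch below computes an occlusion-based saliency map over a synthetic inspection image: each cell is masked in turn and the drop in a stand-in anomaly score is recorded, highlighting where an inspector should look first. The image and scoring function are assumptions, not a trained detector.

```python
# Illustrative occlusion-based saliency for inspection triage: mask each
# region of a (synthetic) inspection image and record how much the anomaly
# score drops, localizing the regions that drive the prediction.
import numpy as np

rng = np.random.default_rng(4)
image = rng.normal(size=(8, 8))
image[2:4, 5:7] += 3.0                     # synthetic "defect" patch

def anomaly_score(img):                    # stand-in for a trained model
    return float(np.abs(img).max())

base = anomaly_score(image)
saliency = np.zeros_like(image)
for r in range(8):
    for c in range(8):
        occluded = image.copy()
        occluded[r, c] = 0.0               # occlude one cell
        saliency[r, c] = base - anomaly_score(occluded)

peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print(f"highest-saliency cell (inspect first): {peak}")
```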
For defect localization and condition state estimation, XAI methods provide feature-level or region-level reasoning that links sensor readings or image patterns to structural condition states. Such explanations assist engineers in validating automated assessments and distinguishing between true deterioration and sensor noise or environmental effects.
At the planning level, deterioration forecasting and treatment selection require explanations that reveal temporal trends, influential variables, and uncertainty drivers. Maintenance planners benefit from model-agnostic and surrogate-based explanations that clarify how predicted degradation trajectories respond to usage intensity, environmental exposure, or prior interventions [
141,
142].
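One way to realize such surrogate-based explanations is to fit a shallow decision tree that mimics the black-box forecaster and report both its fidelity and its rules; everything in the sketch below (feature names, the stand-in black box) is illustrative.

```python
# Sketch of a surrogate-based explanation for a deterioration forecaster:
# a shallow interpretable tree is fitted to mimic a stand-in black-box
# model, exposing which variables drive the predicted degradation.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(5)
X = rng.normal(size=(3000, 3))  # columns: usage intensity, exposure, age

def black_box(X):               # stand-in for a deep forecasting model
    return np.tanh(1.2 * X[:, 0] + 0.4 * X[:, 1]) + 0.1 * X[:, 2] ** 2

surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box(X))

print(f"surrogate fidelity (R^2): {surrogate.score(X, black_box(X)):.2f}")
print(export_text(surrogate, feature_names=["usage", "exposure", "age"]))
```

Reporting the surrogate's fidelity alongside its rules matters: a low-fidelity surrogate may mislead planners about the black box it claims to explain.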
For prioritization and budgeting, infrastructure managers require aggregated and stable explanations rather than instance-level detail. In this context, global importance measures and sensitivity analyses help justify why certain assets are prioritized within constrained budgets and enable transparent communication with funding authorities [
54].
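A common global measure of this kind is permutation importance averaged over repeats, which yields stable, aggregate rankings suitable for budget-level reporting; the data, model, and feature names below are illustrative.

```python
# Sketch of a global, stability-oriented importance measure for asset
# prioritization: permutation importance averaged over repeats, suitable
# for aggregate reporting. Data, model, and feature names are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(6)
X = rng.normal(size=(1500, 4))  # e.g., age, traffic, climate, maintenance lag
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

names = ["age", "traffic", "climate", "maintenance_lag"]
for i in np.argsort(-result.importances_mean):
    print(f"{names[i]:>16}: {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")
```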
Finally, documentation, auditing, and regulatory compliance demand explanations that are traceable, reproducible, and interpretable at a policy level. Regulators and oversight bodies require high-level, human-centered explanations that support accountability, post-decision audits, and long-term asset management reporting. Details are given in
Table 6.
By explicitly linking XAI techniques to concrete infrastructure management decisions and stakeholder roles, this review reframes Explainable AI as an operational enabler rather than a purely technical attribute. This decision-centric perspective highlights how explanation granularity, stability, and format must be tailored to specific TIM objectives to effectively support safe, transparent, and accountable infrastructure management.
8. Future Trends in XAI for Transportation Infrastructure Management (TIM)
Future research on Explainable Artificial Intelligence (XAI) in Transportation Infrastructure Management (TIM) is expected to progress from conceptual interpretability toward deployment-ready, decision-aligned solutions. Based on the reviewed literature, these developments can be structured into near-term, mid-term, and long-term priorities, reflecting increasing levels of technical maturity and operational integration [
92].
8.1. Near-Term Directions: Standardization and Practical Adoption
In the near term, the primary research priority lies in improving the comparability, transparency, and operational relevance of XAI studies in TIM [
66,
143,
144]. This includes the development of standardized reporting practices for Explainable AI, evaluation protocols, and asset-specific benchmarks. Shared datasets combining laboratory experiments, simulated damage, and in-service monitoring data will enable consistent evaluation of both predictive performance and explanation behavior [
145].
Another immediate direction is the refinement of hybrid XAI models that combine deep learning accuracy with interpretable components. Such models support anomaly detection and condition assessment while enabling cross-verification between intrinsic and post hoc explanations [
98,
146,
147]. This balance is particularly important for safety-critical assets, where Explainable AI is required to justify maintenance actions and inspection priorities [
94].
8.2. Mid-Term Directions: Multimodal, Uncertainty-Aware, and Edge-Capable XAI
Mid-term research is expected to focus on multimodal Explainable AI, reflecting the heterogeneous data sources used in infrastructure monitoring, including images, sensor streams, environmental data, and historical inspection records. XAI frameworks capable of integrating and explaining these modalities jointly will improve situational awareness and reduce ambiguity in maintenance decisions [
148].
In parallel, incorporating uncertainty awareness into explanations will become increasingly important. Predictive maintenance decisions benefit not only from point estimates but also from understanding confidence levels and risk margins associated with predictions. Explaining uncertainty alongside model outputs can improve prioritization under budget and resource constraints [
91,
93,
119,
149,
150].
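As one concrete pattern for uncertainty-aware outputs, quantile regression can attach a predictive interval to each point forecast; the sketch below trains quantile gradient-boosting models on synthetic degradation data, with all variables and noise structure assumed for illustration.

```python
# Sketch of uncertainty-aware prediction: quantile gradient boosting yields
# a predictive interval alongside the point forecast, so risk margins can
# be communicated with the prediction. All data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, size=(2000, 1))                      # e.g., asset age (years)
y = 0.5 * X[:, 0] + rng.normal(scale=0.5 + 0.1 * X[:, 0])   # noise grows with age

models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       random_state=0).fit(X, y)
          for q in (0.1, 0.5, 0.9)}

x_new = np.array([[8.0]])
lo, med, hi = (models[q].predict(x_new)[0] for q in (0.1, 0.5, 0.9))
print(f"predicted degradation: {med:.2f} (80% interval [{lo:.2f}, {hi:.2f}])")
```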
As edge computing becomes more prevalent, edge-capable XAI pipelines will be required to deliver low-latency, interpretable insights directly at data collection points. Lightweight explanation methods suitable for on-device execution will support real-time decision-making in remote or bandwidth-limited environments, enhancing system resilience and operational continuity [
151,
152].
8.3. Long-Term Directions: Causality and Digital Twin Integration
In the long term, XAI in TIM is expected to evolve toward causality-aware and simulation-integrated systems. Moving beyond correlation-based explanations, causal reasoning will enable maintenance teams to understand how specific factors—such as load patterns, environmental exposure, and material fatigue—contribute to infrastructure degradation. This shift supports targeted interventions and proactive risk mitigation rather than reactive maintenance [
153,
154,
155].
The integration of XAI with digital twin technologies represents a major long-term opportunity. Explainable digital twins will allow engineers and decision-makers to explore “what-if” scenarios, assess intervention strategies, and forecast long-term performance while maintaining transparency and traceability. Such systems will be particularly valuable for complex and high-risk assets, including tunnels, long-span bridges, and high-speed rail infrastructure, where interpretability supports both engineering validation and regulatory compliance [
156,
157].
9. Conclusions
This study conducted a systematically structured review of Explainable Artificial Intelligence (XAI) applications in Transportation Infrastructure Management (TIM), synthesizing evidence from 163 peer-reviewed studies published between 2015 and March 2025. The conclusions presented in this work are derived from a transparent, reproducible, and multi-stage screening process that included database identification, duplicate removal, title and abstract screening, full-text eligibility assessment, structured data extraction, and thematic synthesis. By applying predefined inclusion and exclusion criteria and documenting each screening decision, the review ensures that the findings are grounded in methodologically consistent evidence selection rather than narrative summarization.
The systematic classification of selected studies enabled the identification of four dominant thematic patterns: (1) enhancement of predictive maintenance accuracy through Explainable AI integration, (2) improved safety and risk assessment transparency, (3) proactive and scenario-based decision support, and (4) trust-building and accountability in AI-driven infrastructure management. These themes emerged from qualitative thematic synthesis of extracted study characteristics, including modeling techniques, explainable methods, infrastructure domains, and deployment settings. Temporal trend analysis further revealed a sharp growth in XAI-related infrastructure research after 2020, reflecting increasing awareness of regulatory, operational, and ethical requirements in safety-critical systems.
The evidence synthesis was primarily qualitative and thematic, supported by quantitative trend aggregation (e.g., frequency of Explainable AI techniques, infrastructure domain distribution, and regional publication patterns). This mixed analytical approach enabled identification of recurring methodological gaps across studies. Notably, the review highlights persistent challenges, including heterogeneous sensor and image data integration, absence of standardized evaluation metrics for explanation quality, limited validation under real-world noise conditions, insufficient benchmarking across datasets, and computational overhead in edge or real-time environments. Additionally, many studies remain laboratory-based, with limited integration into legacy infrastructure management systems.
The robustness of these conclusions stems directly from the structured review protocol. By limiting the dataset to peer-reviewed studies within defined temporal and domain boundaries, applying consistent eligibility criteria, and extracting comparable variables across all selected papers, the review minimizes selection bias and enhances reproducibility. The transparent documentation of screening numbers and exclusion reasons strengthens confidence that the identified patterns are not incidental but reflective of the broader research landscape.
Looking forward, the systematic analysis indicates that future progress in XAI-driven TIM will depend on the development of domain-specific interpretability benchmarks, hybrid inherently interpretable architectures, causal explanation frameworks, cross-modal reasoning mechanisms, and digital twin-integrated validation environments. Edge-enabled Explainable AI is expected to play a pivotal role in achieving scalable, real-time, and regulation-ready predictive maintenance aligned with Industry 5.0 principles.
Therefore, this review demonstrates that XAI represents not merely an auxiliary enhancement to predictive maintenance models, but a foundational requirement for trustworthy, accountable, and scalable infrastructure management systems. The systematic methodology applied in this study ensures that these conclusions are evidence-based, reproducible, and reflective of the current state and future trajectory of research in this rapidly evolving domain.
Author Contributions
The authors confirm contribution to the paper as follows: Youwen Hu: Conceptualization, Methodology, Formal Analysis, Writing—Original Draft, Review & Editing; Tariq Ur Rahman: Data Collection, Resources, Validation, Methodology, Formal Analysis; Zunaira Atta: Data Collection, Methodology, Visualization, Preprocessing, Validation; Shi Qiu: Funding Acquisition, Resources, Supervision, Writing—Review & Editing; Wei Wei: Data Collection, Validation, Formal Analysis, Writing and Review; Zhiyu Liang: Data Collection, Supervision, Review & Feedback, Writing—Review & Editing; Jin Wang: Funding Acquisition, Resources, Supervision, Writing—Review & Editing; Qasim Zaheer: Supervision, Conceptualization, Methodology, Data Curation, Formal Analysis, Visualization, Writing, Review & Editing, Original Draft. All authors have read and agreed to the published version of the manuscript.
Funding
The research is supported by the National Natural Science Foundation of China (No. U1734208).
Data Availability Statement
The data used to support the findings of this study are available from the corresponding author upon request.
Acknowledgments
Thanks to all authors.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| Abbreviation | Full Term |
|---|---|
| AI | Artificial Intelligence |
| XAI | Explainable Artificial Intelligence |
| TIM | Transportation Infrastructure Management |
| PdM | Predictive Maintenance |
| SHM | Structural Health Monitoring |
| DL | Deep Learning |
| ML | Machine Learning |
| CNN | Convolutional Neural Network |
| SHAP | Shapley Additive Explanations |
| LIME | Local Interpretable Model-Agnostic Explanations |
| CAM | Class Activation Mapping |
| GDPR | General Data Protection Regulation |
| DSS | Decision Support System |
| IoT | Internet of Things |
References
- Pickard, W.F. Smart grids versus the achilles’ heel of renewable energy: Can the needed storage infrastructure be constructed before the fossil fuel runs out? Proc. IEEE 2014, 102, 1094–1105. [Google Scholar] [CrossRef]
- Aslam, F.; Aimin, W.; Li, M.; Rehman, K.U. Innovation in the era of IoT and industry 5.0: Absolute innovation management (AIM) framework. Information 2020, 11, 124. [Google Scholar] [CrossRef]
- Van Erp, T.; Carvalho, N.G.P.; Gerolamo, M.C.; Gonçalves, R.; Rytter, N.G.M.; Gladysz, B. Industry 5.0: A new strategy framework for sustainability management and beyond. J. Clean. Prod. 2024, 461, 142271. [Google Scholar] [CrossRef]
- Buttazzo, G. Can We Trust AI-Powered Real-Time Embedded Systems? In Third Workshop on Next Generation Real-Time Embedded Systems (NG-RES 2022); Open Access Series Informatics; Schloss Dagstuhl—Leibniz-Zentrum für Informatik: Wadern, Germany, 2022; Volume 98, pp. 1:1–1:14. [Google Scholar] [CrossRef]
- Wang, R.; Cheng, R.; Ford, D.; Zimmermann, T. Investigating and Designing for Trust in AI-powered Code Generation Tools. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, Rio de Janeiro, Brazil, 3–6 June 2024. [Google Scholar] [CrossRef]
- Żywiołek, J. Empirical Examination of AI-Powered Decision Support Systems: Ensuring Trust and Transparency in Information and Knowledge Security; Silesian University of Technology Publishing House: Gliwice, Poland, 2024. [Google Scholar] [CrossRef]
- Ehsan, U.; Liao, Q.V.; Muller, M.; Riedl, M.O.; Weisz, J.D. Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Virtual, 8–13 May 2021. [Google Scholar] [CrossRef]
- Zhuoma, C.; Kasamatsu, K.; Ainoya, T. User Experience Analysis for Visual Expression Aiming at Creating Experience Value According to Time Spans. In International Conference on Human-Computer Interaction; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12424, ISBN 9783030601164. [Google Scholar]
- Ehsan, U.; Saha, K.; De Choudhury, M.; Riedl, M.O. Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–32. [Google Scholar] [CrossRef]
- Zolanvari, M.; Yang, Z.; Khan, K.; Jain, R.; Meskin, N. TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security. IEEE Internet Things J. 2023, 10, 2967–2978. [Google Scholar] [CrossRef]
- Wei, W.; Ding, F.; Ali, S.U.; Rahman, T.U.; Qiu, S.; Khan, M.; Wang, J.; Zaheer, Q. Potential of Deep Learning Models for Point Cloud-Based Infrastructure Management. Electronics 2026, 15, 672. [Google Scholar] [CrossRef]
- Mirza, A.U. Exploring the Frontiers of Artificial Intelligence and Machine Learning Technologies; San International Scientific Publications: Chennai, India, 2024; ISBN 9788197045790. [Google Scholar]
- Wang, W.; Zaheer, Q.; Qiu, S.; Wang, W.; Ai, C.; Wang, J.; Wang, S.; Hu, W. Introduction to Digital Twin Technologies in Transportation Infrastructure Management (TIM). In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024; pp. 1–25. [Google Scholar] [CrossRef]
- Liao, Q.V.; Varshney, K.R. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. arXiv 2021, arXiv:2110.10790. [Google Scholar]
- Ali, S.; Abuhmed, T.; El-Sappagh, S.; Muhammad, K.; Alonso-Moral, J.M.; Confalonieri, R.; Guidotti, R.; Del Ser, J.; Díaz-Rodríguez, N.; Herrera, F. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Inf. Fusion 2023, 99, 101805. [Google Scholar] [CrossRef]
- Pappaterra, M.J.; Flammini, F.; Vittorini, V.; Bešinović, N. A systematic review of artificial intelligence public datasets for railway applications. Infrastructures 2021, 6, 136. [Google Scholar] [CrossRef]
- Khalil, R.A.; Safelnasr, Z.; Yemane, N.; Kedir, M.; Shafiqurrahman, A.; Saeed, N. Advanced Learning Technologies for Intelligent Transportation Systems: Prospects and Challenges. IEEE Open J. Veh. Technol. 2024, 5, 397–427. [Google Scholar] [CrossRef]
- Oseni, A.; Moustafa, N.; Creech, G.; Sohrabi, N.; Strelzoff, A.; Tari, Z.; Linkov, I. An Explainable Deep Learning Framework for Resilient Intrusion Detection in IoT-Enabled Transportation Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1000–1014. [Google Scholar] [CrossRef]
- Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognit. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
- Dazeley, R.; Vamplew, P.; Cruz, F. Explainable reinforcement learning for broad-XAI: A conceptual framework and survey. Neural Comput. Appl. 2023, 35, 16893–16916. [Google Scholar] [CrossRef]
- Aliman, N.M.; Kester, L.; Yampolskiy, R. Transdisciplinary ai observatory—Retrospective analyses and future-oriented contradistinctions. Philosophies 2021, 6, 6. [Google Scholar] [CrossRef]
- Kawakami, A.; Sivaraman, V.; Cheng, H.F.; Stapleton, L.; Cheng, Y.; Qing, D.; Perer, A.; Wu, Z.S.; Zhu, H.; Holstein, K. Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022. [Google Scholar] [CrossRef]
- Colavizza, G.; Blanke, T.; Jeurgens, C.; Noordegraaf, J. Archives and AI: An Overview of Current Debates and Future Perspectives. J. Comput. Cult. Herit. 2022, 15, 1–15. [Google Scholar] [CrossRef]
- Mohseni, S.; Zarei, N.; Ragan, E.D. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–45. [Google Scholar] [CrossRef]
- Ali, P.A. Explainable AI: Examining Challenges and Opportunities in Developing Explainable AI Systems for Transparent Decision-Making. J. Artif. Intell. Res. 2024, 4, 1–13. [Google Scholar]
- Khan, A.; Ali, A.; Khan, J.; Khan, M.A.; Ullah, F. A Systematic Literature Review of Explainable Artificial Intelligence (XAI) in Software Engineering (SE). Preprint 2023. [Google Scholar] [CrossRef]
- Love, P.E.D.; Fang, W.; Matthews, J.; Porter, S.; Luo, H.; Ding, L. Explainable artificial intelligence (XAI): Precepts, models, and opportunities for research in construction. Adv. Eng. Inform. 2023, 57, 102024. [Google Scholar] [CrossRef]
- Lahby, M.; Kose, U.; Bhoi, A.K. Explainable Artificial Intelligence for Smart Cities; CRC Press: Boca Raton, FL, USA, 2021; pp. 1–349. [Google Scholar] [CrossRef]
- Embarak, O. An adaptive paradigm for smart education systems in smart cities using the internet of behaviour (IoB) and explainable artificial intelligence (XAI). In Proceedings of the 2022 8th International Conference on Information Technology Trends (ITT), Dubai, United Arab Emirates, 25–26 May 2022. [Google Scholar] [CrossRef]
- Trindade Neves, F.; Aparicio, M.; de Castro Neto, M. The Impacts of Open Data and eXplainable AI on Real Estate Price Predictions in Smart Cities. Appl. Sci. 2024, 14, 2209. [Google Scholar] [CrossRef]
- Batra, I.; Malik, A.; Sharma, S.; Sharma, C.; Sanwar Hosen, A.S.M. Explainable Artificial Intelligence into Cyber-Physical System Architecture of Smart Cities: Technologies, Challenges, and Opportunities. J. Electr. Syst. 2024, 20, 2343–2362. [Google Scholar] [CrossRef]
- Nascita, A.; Montieri, A.; Aceto, G.; Ciuonzo, D.; Persico, V.; Pescape, A. XAI Meets Mobile Traffic Classification: Understanding and Improving Multimodal Deep Learning Architectures. IEEE Trans. Netw. Serv. Manag. 2021, 18, 4225–4246. [Google Scholar] [CrossRef]
- Price, E.; Masood, A.; Aroraa, G. Azure Machine Learning. In Hands-on Azure Cognitive Services; Apress: Berkeley, CA, USA, 2021; pp. 321–354. [Google Scholar] [CrossRef]
- Ahmad, K.; Maabreh, M.; Ghaly, M.; Khan, K.; Qadir, J.; Al-Fuqaha, A. Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges. arXiv 2020, arXiv:2012.09110. [Google Scholar]
- Mariyam, A.A.; Dharanidharan, M.; Agnus, S. Applications of Artificial Intelligence Technology in the Development of Smart Cities. Int. Res. J. Mod. Eng. Technol. Sci. 2023, 5, 1992–2001. [Google Scholar] [CrossRef]
- Das, A.; Rad, P. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. arXiv 2020, arXiv:2006.11371. [Google Scholar] [CrossRef]
- Srivastava, G.; Jhaveri, R.H.; Bhattacharya, S.; Pandya, S.; Rajeswari; Maddikunta, P.K.R.; Yenduri, G.; Hall, J.G.; Alazab, M.; Gadekallu, T.R. XAI for Cybersecurity: State of the Art, Challenges, Open Issues and Future Directions. arXiv 2022, arXiv:2206.03585. [Google Scholar] [CrossRef]
- Tritscher, J.; Krause, A.; Hotho, A. Feature relevance XAI in anomaly detection: Reviewing approaches and challenges. Front. Artif. Intell. 2022, 6, 1099521. [Google Scholar] [CrossRef]
- Fagan, D.; Martín-Vide, C.; Neill, M.O.; Vega-Rodríguez, M.A.; Hutchison, D. (Eds.) Theory and Practice of Natural Computing; Springer: Cham, Switzerland, 2018; ISBN 9783030040697. [Google Scholar] [CrossRef]
- Cai, C.J.; Winter, S.; Steiner, D.; Wilcox, L.; Terry, M. “Hello Ai”: Uncovering the onboarding needs of medical practitioners for human–AI collaborative decision-making. Proc. ACM Hum.-Comput. Interact. 2019, 3, 1–24. [Google Scholar] [CrossRef]
- Ai, C.; Wang, J.; Wang, S.; Hu, W. Digital Twins Technologies. In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
- Lebovitz, S. Diagnostic doubt and artificial intelligence: An inductive field study of radiology work. In Proceedings of the 40th International Conference on Information Systems (ICIS), Munich, Germany, 15–18 December 2019. [Google Scholar]
- Fouladgar, N.; Främling, K. XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory. arXiv 2020, arXiv:2012.09636. [Google Scholar]
- Qiu, S.; Zaheer, Q.; Ali, F.; Wajid, S.; Chen, H.; Ai, C.; Wang, J. Exploring the impact of digital twin technology in infrastructure management: A comprehensive review. J. Civ. Eng. 2025, 31, 395–417. [Google Scholar] [CrossRef]
- Holzinger, A. From machine learning to explainable AI. In Proceedings of the 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), Košice, Slovakia, 23–25 August 2018. [Google Scholar] [CrossRef]
- Holzinger, A.; Kieseberg, P. Machine Learning and Knowledge Extraction; Springer: Cham, Switzerland, 2020; ISBN 978-3-319-99739-1. [Google Scholar] [CrossRef]
- Holzinger, A.; Goebel, R.; Fong, R.; Moon, T.; Müller, K.-R.; Samek, W. xxAI-Beyond Explainable AI; Springer: Cham, Switzerland, 2022; ISBN 9783031040825. [Google Scholar]
- Doran, D.; Schulz, S.; Besold, T.R. What does explainable AI really mean? A new conceptualization of perspectives. arXiv 2018, arXiv:1710.00794. [Google Scholar] [CrossRef]
- Rousseau, A.J.; Geubbelmans, M.; Valkenborg, D.; Burzykowski, T. Explainable artificial intelligence. Am. J. Orthod. Dentofac. Orthop. 2024, 165, 491–494. [Google Scholar] [CrossRef] [PubMed]
- Ehsan, H.; Qiu, S.; Wang, J.; Qasim, Z. Scalable and Accurate Crack Segmentation for Infrastructure Health Monitoring Using EfficientNetB0 and a Context-Aware Attention Mechanism. J. Infrastruct. Syst. 2026, 32, 1–12. [Google Scholar] [CrossRef]
- Bennetot, A.; Donadello, I.; El Qadi El Haouari, A.; Dragoni, M.; Frossard, T.; Wagner, B.; Sarranti, A.; Tulli, S.; Trocan, M.; Chatila, R.; et al. A Practical tutorial on Explainable AI Techniques. ACM Comput. Surv. 2024, 57, 1–44. [Google Scholar] [CrossRef]
- Díez, J.; Khalifa, K.; Leuridan, B. General theories of explanation: Buyer beware. Synthese 2013, 190, 379–396. [Google Scholar] [CrossRef]
- Wang, W.; Ehsan, H.; Ahmed, S.M.; Shah, H. TrinityNet: A Novel Cross-Modality Generative Reasoning Framework for Zero-Label Railway Fastener Tightness Evaluation. IET Intell. Transp. Syst. 2026, 20, e70164. [Google Scholar] [CrossRef]
- Wang, W.; Ehsan, H.; Qiu, S.; Rahman, T.U.; Wang, J.; Zaheer, Q. Evolution and Emerging Frontiers in Point Cloud Technology. Electronics 2026, 15, 341. [Google Scholar] [CrossRef]
- Gunning, D.; Aha, D.W. DARPA’s explainable artificial intelligence program. AI Mag. 2019, 40, 44–58. [Google Scholar] [CrossRef]
- Helmi, W.; Bridgelall, R.; Askarzadeh, T. Remote Sensing and Machine Learning for Safer Railways: A Review. Appl. Sci. 2024, 14, 3573. [Google Scholar] [CrossRef]
- Goebel, R. (Ed.) Natural Language Processing and Chinese Computing: 8th CCF International Conference, NLPCC 2019, Dunhuang, China, 9–14 October 2019, Proceedings, Part II; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar] [CrossRef]
- Hoffman, R.R.; Mueller, S.T.; Klein, G.; Jalaeian, M.; Tate, C. Explainable AI: Roles and stakeholders, desirements and challenges. Front. Comput. Sci. 2023, 5, 1117848. [Google Scholar] [CrossRef]
- Laato, S.; Tiainen, M.; Najmul Islam, A.K.M.; Mäntymäki, M. How to explain AI systems to end users: A systematic literature review and research agenda. Internet Res. 2021, 32, 1–31. [Google Scholar] [CrossRef]
- Kirat, T.; Tambou, O.; Do, V.; Tsoukiàs, A. Fairness and explainability in automatic decision-making systems. A challenge for computer science and law. EURO J. Decis. Process. 2023, 11, 100036. [Google Scholar] [CrossRef]
- Díaz-Rodríguez, N.; Del Ser, J.; Coeckelbergh, M.; López de Prado, M.; Herrera-Viedma, E.; Herrera, F. Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Inf. Fusion 2023, 99, 101896. [Google Scholar] [CrossRef]
- Hamon, R.; Junklewitz, H.; Sanchez, I. Robustness and Explainability of Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2020; ISBN 9789276146605. [Google Scholar]
- Sarabia, S. Reflections on artificial intelligence. Rev. Neuropsiquiatr. 2023, 86, 159–160. [Google Scholar] [CrossRef]
- Wang, W.; Zaheer, Q.; Qiu, S.; Wang, W.; Ai, C.; Wang, J.; Wang, S.; Hu, W. Digital Twin in TIM. In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024; ISBN 9789819958047. [Google Scholar] [CrossRef]
- Kaewunruen, S.; AbdelHadi, M.; Kongpuang, M.; Pansuk, W.; Remennikov, A.M. Digital Twins for Managing Railway Bridge Maintenance, Resilience, and Climate Change Adaptation. Sensors 2023, 23, 252. [Google Scholar] [CrossRef]
- Sattarzadeh, S.; Sudhakar, M.; Plataniotis, K.N. Integrated grad-cam: Sensitivity-aware visual explanation of deep convolutional networks via integrated gradient-based scoring. In Proceedings of the ICASSP 2021—2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021. [Google Scholar]
- Bhat, A.; Assoa, A.S.; Raychowdhury, A. Gradient Backpropagation based Feature Attribution to Enable Explainable-AI on the Edge. In Proceedings of the 2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC), Patras, Greece, 3–5 October 2022. [Google Scholar] [CrossRef]
- Panati, C.; Wagner, S.; Brüggenwirth, S. Feature Relevance Evaluation using Grad-CAM, LIME and SHAP for Deep Learning SAR Data Classification. In Proceedings of the 2022 23rd International Radar Symposium (IRS), Gdansk, Poland, 12–14 September 2022. [Google Scholar]
- Ali, A.; Schnake, T.; Eberle, O.; Wolf, L. XAI for Transformers: Better Explanations through Conservative Propagation. In Proceedings of the 39th International Conference on Machine Learning, Baltimore MD, USA, 17–23 July 2022. [Google Scholar]
- Chitty-Venkata, K.T.; Emani, M.; Vishwanath, V.; Somani, A.K. Neural Architecture Search for Transformers: A Survey. IEEE Access 2022, 10, 108374–108412. [Google Scholar] [CrossRef]
- Playout, C.; Duval, R.; Carole, M.; Cheriet, F. Focused Attention in Transformers for interpretable classification of retinal images. Med. Image Anal. 2022, 82, 102608. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Zeng, Z.; Sharma, P.K.; Alfarraj, O.; Tolba, A.; Zhang, J.; Wang, L. Dual-path network combining CNN and transformer for pavement crack segmentation. Autom. Constr. 2024, 158, 105217. [Google Scholar] [CrossRef]
- Bramhall, S.; Horn, H.; Tieu, M.; Lohia, N. QLIME-A Quadratic Local Interpretable Model-Agnostic Explanation Approach. SMU Data Sci. Rev. 2020, 3, 4. [Google Scholar]
- Mane, D.; Magar, A.; Khode, O.; Koli, S.; Bhat, K.; Korade, P. Unlocking Machine Learning Model Decisions: A Comparative Analysis of LIME and SHAP for Enhanced Interpretability. J. Electr. Syst. 2024, 20, 1252–1267. [Google Scholar] [CrossRef]
- Kawakura, S.; Hirafuji, M.; Ninomiya, S.; Shibasaki, R. Analyses of Diverse Agricultural Worker Data with Explainable Artificial Intelligence: XAI based on SHAP, LIME, and LightGBM. Eur. J. Agric. Food Sci. 2022, 4, 11–19. [Google Scholar] [CrossRef]
- Gawde, S.; Patil, S.; Kumar, S.; Kamat, P.; Kotecha, K.; Alfarhood, S. Explainable Predictive Maintenance of Rotating Machines Using LIME, SHAP, PDP, ICE. IEEE Access 2024, 12, 29345–29361. [Google Scholar] [CrossRef]
- Martins, T.; De Almeida, A.M.; Cardoso, E.; Nunes, L. Explainable Artificial Intelligence (XAI): A Systematic Literature Review on Taxonomies and Applications in Finance. IEEE Access 2024, 12, 618–629. [Google Scholar] [CrossRef]
- Schwalbe, G.; Finzel, B. A comprehensive taxonomy for explainable artificial intelligence: A systematic survey of surveys on methods and concepts. Data Min. Knowl. Discov. 2023, 38, 3043–3101. [Google Scholar] [CrossRef]
- Jiarpakdee, J.; Tantithamthavorn, C.K.; Dam, H.K.; Grundy, J. An Empirical Study of Model-Agnostic Techniques for Defect Prediction Models. IEEE Trans. Softw. Eng. 2022, 48, 166–185. [Google Scholar] [CrossRef]
- Saeed, W.; Omlin, C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl.-Based Syst. 2023, 263, 110273. [Google Scholar] [CrossRef]
- Chromik, M.; Schuessler, M. A taxonomy for human subject evaluation of black-box explanations in XAI. In Proceedings of the IUI Workshop on Explainable Smart Systems and Algorithmic Transparency in Emerging Technologies (ExSS-ATEC’20), Cagliari, Italy, 17 March 2020. [Google Scholar]
- Ponzoni, I.; Páez Prosper, J.A.; Campillo, N.E. Explainable artificial intelligence: A taxonomy and guidelines for its application to drug discovery. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2023, 13, e1681. [Google Scholar] [CrossRef]
- Hu, B.; Tunison, P.; Richardwebster, B.; Hoogs, A. Xaitk-Saliency: An Open Source Explainable AI Toolkit for Saliency. Proc. AAAI Conf. Artif. Intell. 2023, 37, 15760–15766. [Google Scholar] [CrossRef]
- Lamba, K.; Rani, S. A novel approach of brain-computer interfacing (BCI) and Grad-CAM based explainable artificial intelligence: Use case scenario for smart healthcare. J. Neurosci. Methods 2024, 408, 110159. [Google Scholar] [CrossRef]
- Saleh, R.A.A.; Al-Areqi, F.; Konyar, M.Z.; Kaplan, K.; Öngir, S.; Ertunc, H.M. Advancing Tire Safety: Explainable Artificial Intelligence-Powered Foreign Object Defect Detection with Xception Networks and Grad-CAM Interpretation. Appl. Sci. 2024, 14, 4267. [Google Scholar] [CrossRef]
- Qiu, S.; Zaheer, Q.; Hassan Shah, S.M.A.; Shah, S.F.H.; Wang, W.; Ai, C.; Wang, J. Efficient and Interpretable Image Processing Approach for Crack Segmentation. J. Infrastruct. Syst. 2026, 32, 1–18. [Google Scholar] [CrossRef]
- Kotipalli, B.; Ai, E. The Role of Attention Mechanisms in Enhancing Transparency and Interpretability of Neural Network Models in Explainable AI; Digital Commons at Harrisburg University; Harrisburg University of Science and Technology: Harrisburg, PA, USA, 2024. [Google Scholar]
- Gao, Y.; Gu, S.; Jiang, J.; Hong, S.R.; Yu, D.; Zhao, L. Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning. ACM Comput. Surv. 2024, 56, 1–39. [Google Scholar] [CrossRef]
- Ntrougkas, M.; Gkalelis, N.; Mezaris, V. TAME: Attention Mechanism Based Feature Fusion for Generating Explanation Maps of Convolutional Neural Networks. In Proceedings of the 2022 IEEE International Symposium on Multimedia (ISM), Naples, Italy, 5–7 December 2022. [Google Scholar] [CrossRef]
- El Houda Dehimi, N.; Tolba, Z. Attention Mechanisms in Deep Learning: Towards Explainable Artificial Intelligence. In Proceedings of the 2024 6th International Conference on Pattern Analysis and Intelligent Systems (PAIS), El Oued, Algeria, 24–25 April 2024. [Google Scholar] [CrossRef]
- Wang, W.; Zaheer, Q.; Qiu, S.; Wang, W.; Ai, C.; Wang, J.; Wang, S.; Hu, W. Transportation Infrastructure Management. In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024. [Google Scholar] [CrossRef]
- Ehsan, H.; Wang, W.; Faizan, S.; Shah, H.; Ai, C.; Wang, J.; Qiu, S. DA-NGF: A Domain-Adaptive Neurogenerative Framework for Few-Shot Railway Fastener Defect Synthesis and Transferable Representation Learning. Appl. Intell. 2026, 56, 32. [Google Scholar]
- Wang, W.; Zaheer, Q.; Qiu, S.; Wang, W.; Ai, C.; Wang, J.; Wang, S.; Hu, W. Future Digital Twin in Infrastructure Management. In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024; pp. 205–222. [Google Scholar] [CrossRef]
- Wang, W.; Zaheer, Q.; Qiu, S.; Wang, W.; Ai, C.; Wang, J.; Wang, S.; Hu, W. Digital Twins in Design and Construction. In Digital Twin Technologies in Transportation Infrastructure Management; Springer: Singapore, 2024; pp. 147–178. [Google Scholar] [CrossRef]
- Karpagam, G.R.; Varma, A.; Samrddhi, M.; Shri Shivathmika, V. Understanding, Visualizing and Explaining XAI Through Case Studies. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25–26 March 2022. [Google Scholar] [CrossRef]
- Na, S.; Nam, W.; Lee, S. Toward practical and plausible counterfactual explanation through latent adjustment in disentangled space. Expert Syst. Appl. 2023, 233, 120982. [Google Scholar] [CrossRef]
- Chormai, P.; Herrmann, J.; Müller, K.-R.; Montavon, G. Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 7283–7299. [Google Scholar] [CrossRef]
- Anwar, S.; Griffiths, N.; Bhalerao, A.; Popham, T.; Bell, M. CHILLI: A data context-aware perturbation method for XAI. arXiv 2024, arXiv:2407.07521. [Google Scholar] [CrossRef]
- Qiu, L.; Yang, Y.; Cao, C.C.; Zheng, Y.; Ngai, H.; Hsiao, J.; Chen, L. Generating Perturbation-based Explanations with Robustness to Out-of-Distribution Data. In Proceedings of the ACM Web Conference 2022, Lyon, France, 25–29 April 2022. [Google Scholar] [CrossRef]
- Ivanovs, M.; Kadikis, R.; Ozols, K. Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognit. Lett. 2021, 150, 228–234. [Google Scholar] [CrossRef]
- Sun, J.; Shi, W.; Wang, M.D.; Giuste, F.O. Improving explainable AI with patch perturbation-based evaluation pipeline: A COVID-19 X-ray image analysis case study. Sci. Rep. 2023, 13, 19488. [Google Scholar] [CrossRef]
- Mohamed, E.; Sirlantzis, K.; Howells, G. A review of visualisation-as-explanation techniques for convolutional neural networks and their evaluation. Displays 2022, 73, 102239. [Google Scholar] [CrossRef]
- Njoku, J.N.; Nwakanma, C.I.; Kim, D.S. Explainable Data-driven Digital Twins for Predicting Battery States in Electric Vehicles. IEEE Access 2024, 12, 83480–83501. [Google Scholar] [CrossRef]
- Fernandez, I.A.; Neupane, S.; Chakraborty, T.; Mitra, S.; Mittal, S.; Pillai, N.; Chen, J.; Rahimi, S. A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics. In Proceedings of the 2024 IEEE 10th International Conference on Collaboration and Internet Computing (CIC), Washington, DC, USA, 28–30 October 2024. [Google Scholar]
- Rožanec, J.M.; Lu, J.; Rupnik, J.; Škrjanc, M.; Mladenić, D.; Fortuna, B.; Zheng, X.; Kiritsis, D. Actionable cognitive twins for decision making in manufacturing. Int. J. Prod. Res. 2022, 60, 452–478. [Google Scholar] [CrossRef]
- Kobayashi, K.; Kumar, D.; Alam, S.B. AI-driven non-intrusive uncertainty quantification of advanced nuclear fuels for digital twin-enabling technology. Prog. Nucl. Energy 2024, 172, 105177. [Google Scholar] [CrossRef]
- Hajgató, G.; Wéber, R.; Szilágyi, B.; Tóthpál, B.; Gyires-Tóth, B.; Hős, C. PredMaX: Predictive maintenance with explainable deep convolutional autoencoders. Adv. Eng. Inform. 2022, 54, 101778. [Google Scholar] [CrossRef]
- Kong, X.; Xing, Y.; Tsourdos, A.; Wang, Z.; Guo, W.; Perrusquia, A.; Wikander, A. Explainable Interface for Human-Autonomy Teaming: A Survey. arXiv 2024, arXiv:2405.02583. [Google Scholar] [CrossRef]
- Yang, C.; Ferdousi, R.; El Saddik, A.; Li, Y.; Liu, Z.; Liao, M. Lifetime Learning-enabled Modelling Framework for Digital Twin. In Proceedings of the 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), Mexico City, Mexico, 20–24 August 2022. [Google Scholar] [CrossRef]
- Coussement, K.; Abedin, M.Z.; Kraus, M.; Maldonado, S.; Topuz, K. Explainable AI for enhanced decision-making. Decis. Support Syst. 2024, 184, 114276. [Google Scholar] [CrossRef]
- Dai, X.; Keane, M.T.; Shalloo, L.; Ruelle, E.; Byrne, R.M.J. Counterfactual Explanations for Prediction and Diagnosis in XAI. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, Oxford, UK, 1–3 August 2022. [Google Scholar] [CrossRef]
- Zaheer, Q.; Tan, Y.; Qamar, F. Literature review of bridge structure’s optimization and it’s development over time. Int. J. Simul. Multidiscip. Des. Optim. 2022, 13, 5. [Google Scholar] [CrossRef]
- Yan, B.; Yang, F.; Qiu, S.; Wang, J.; Cai, B.; Wang, S.; Zaheer, Q.; Wang, W.; Chen, Y.; Hu, W. Digital twin in transportation infrastructure management: A systematic review. Intell. Transp. Infrastruct. 2023, 2, liad024. [Google Scholar] [CrossRef]
- Ye, W.; Ren, J.; Lu, C.; Zhang, A.A.; Zhan, Y.; Liu, J. Intelligent detection of fastener defects in ballastless tracks based on deep learning. Autom. Constr. 2024, 159, 105280. [Google Scholar] [CrossRef]
- Xu, D.; Wang, Y.; Xu, S.; Zhu, K.; Zhang, N.; Zhang, X. Infrared and visible image fusion with a generative adversarial network and a residual network. Appl. Sci. 2020, 10, 554. [Google Scholar] [CrossRef]
- Qi, H.; Xu, T.; Wang, G.; Cheng, Y.; Chen, C. MYOLOv3-Tiny: A new convolutional neural network architecture for real-time detection of track fasteners. Comput. Ind. 2020, 123, 103303. [Google Scholar] [CrossRef]
- An, C.; Study, E.; Makhloufi, E. AI Application in Transport and Logistics Opportunities and Challenges (An Exploratory Study); Amsterdam University of Applied Sciences: Amsterdam, The Netherlands, 2023. [Google Scholar]
- Phusakulkajorn, W.; Núñez, A.; Wang, H.; Jamshidi, A.; Zoeteman, A.; Ripke, B.; Dollevoet, R.; De Schutter, B.; Li, Z. Artificial intelligence in railway infrastructure: Current research, challenges, and future opportunities. Intell. Transp. Infrastruct. 2023, 2, liad016. [Google Scholar] [CrossRef]
- Wei, X.; Wang, J.; Ai, C.; Liu, X.; Qiu, S.; Wang, J.; Luo, Y.; Zaheer, Q.; Li, N. Terrestrial laser scanning-assisted roughness assessment for initial support of railway tunnel. J. Civ. Struct. Health Monit. 2024, 14, 781–800. [Google Scholar] [CrossRef]
- Qiu, S.; Zaheer, Q.; Hassan Shah, S.M.A.; Shah, S.F.H.; Wang, W.; Ai, C.; Wang, J. LiDAR-Simulated Multimodal & Self-Supervised Contrastive Digital Twin Approach for Probabilistic Point Clouds Generation of Rail Fasteners. J. Comput. Civ. Eng. 2025, 39, 04025001. [Google Scholar] [CrossRef]
- Wang, J.; Ahmed, S.M.; Shah, H.; Ehsan, H.; Faizan, S.; Shah, H.; Ai, C.; Kuang, J.; Wang, W.; Qiu, S. Self-supervised contrastive anomaly detection in railway fasteners using point clouds and deep metric learning for imbalance dataset. J. Civ. Struct. Health Monit. 2025, 15, 2861–2886. [Google Scholar] [CrossRef]
- Zaheer, Q.; Qiu, S.; Hassan Shah, S.M.A.; Ai, C.; Wang, J. Intelligent Multitasking Framework for Boundary-Preserving Semantic Segmentation, Width Estimation, and Propagation Modeling of Concrete Cracks. J. Infrastruct. Syst. 2025, 31, 04025009. [Google Scholar] [CrossRef]
- Gürdür Broo, D.; Bravo-Haro, M.; Schooling, J. Design and implementation of a smart infrastructure digital twin. Autom. Constr. 2022, 136, 104171. [Google Scholar] [CrossRef]
- Olugbade, S.; Ojo, S.; Imoize, A.L.; Isabona, J.; Alaba, M.O. A Review of Artificial Intelligence and Machine Learning for Incident Detectors in Road Transport Systems. Math. Comput. Appl. 2022, 27, 77. [Google Scholar] [CrossRef]
- Zaheer, Q.; Ahmed, S.M.; Shah, H.; Wang, W.; Ehsan, H.; Ai, C.; Wang, J.; Qiu, S. Multimodal graph neural network framework for railway fastener tightness assessment from high-resolution point clouds. Eng. Appl. Artif. Intell. 2026, 167, 113829. [Google Scholar] [CrossRef]
- Singh, P.; Elmi, Z.; Krishna, V.; Pasha, J.; Dulebenets, M.A. Cleaner Logistics and Supply Chain Internet of Things for sustainable railway transportation: Past, present, and future. Clean. Logist. Supply Chain 2022, 4, 100065. [Google Scholar] [CrossRef]
- Jing, H.; Meng, X.; Slatcher, N.; Hunter, G. Efficient point cloud corrections for mobile monitoring applications using road/rail-side infrastructure. Surv. Rev. 2021, 53, 235–251. [Google Scholar] [CrossRef]
- Fikar, C.; Hirsch, P.; Posset, M.; Gronalt, M. Impact of transalpine rail network disruptions: A study of the Brenner Pass. JTRG 2016, 54, 122–131. [Google Scholar] [CrossRef]
- Ai, C.; Wang, H.; Liang, Z.; Qiu, S. Attention-guided triplet framework for synthetic data generation and semantic segmentation of railway fasteners under data scarcity and disparity. Appl. Soft Comput. 2025, 184, 113784. [Google Scholar] [CrossRef]
- Terven, J.R.; Cordova-esparza, D.M.; Romero-González, J.A. A comprehensive review of yolo architectures in computer vision: From yolov1 to yolov8 and yolo-nas. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716. [Google Scholar] [CrossRef]
- Summerhays, K.D.; Henke, R.P.; Baldwin, J.M.; Cassou, R.M.; Brown, C.W. Optimizing discrete point sample patterns and measurement data analysis on internal cylindrical surfaces with systematic form deviations. Precis. Eng. 2002, 26, 105–121. [Google Scholar] [CrossRef]
- Bouraima, M.B.; Qiu, Y.; Yusupov, B.; Ndjegwes, C.M. A study on the development strategy of the railway transportation system in the West African Economic and Monetary Union (WAEMU) based on the SWOT/AHP technique. Sci. Afr. 2020, 8, e00388. [Google Scholar] [CrossRef]
- Iderawumi, M.I.; Adebayo, A.A. COVID-19 Pandemic’s Impacts on Nigerian Educational System and Social-Economic Activities. Eur. J. Sci. Innov. Technol. 2021, 1, 28–38. [Google Scholar]
- Wang, X.; Song, Y.; Yang, H.; Wang, H.; Lu, B. A time-frequency dual-domain deep learning approach for high-speed pantograph-catenary dynamic performance prediction. Mech. Syst. Signal Process. 2025, 238, 113258. [Google Scholar] [CrossRef]
- Alelaimat, A.; Ghose, A.; Dam, H.K. XPlaM: A toolkit for automating the acquisition of BDI agent-based Digital Twins of organizations. Comput. Ind. 2023, 145, 103805. [Google Scholar] [CrossRef]
- Baruwa, A. AI powered infrastructure efficiency: Enhancing US transportation networks for a sustainable future. Int. J. Eng. Technol. Res. Manag. (IJETRM) 2023, 7, 329–350. [Google Scholar]
- Zeb, S.; Mahmood, A.; Khowaja, S.A.; Dev, K.; Hassan, S.A.; Qureshi, N.M.F.; Gidlund, M.; Bellavista, P. Industry 5.0 is Coming: A Survey on Intelligent NextG Wireless Networks as Technological Enablers. arXiv 2022, arXiv:2205.09084. [Google Scholar] [CrossRef]
- Shukla, B.; Fan, I.-S.; Jennions, I. Opportunities for Explainable Artificial Intelligence in Aerospace Predictive Maintenance. PHM Soc. Eur. Conf. 2020, 5, 11. [Google Scholar] [CrossRef]
- Jones, D.; Tamiz, M. International Series in Operations Research & Management Science; Springer: New York, NY, USA, 2010; Volume 139. [Google Scholar]
- Obuzor, P.C. Improving Predictive Process Analytics with Deep Learning and XAI. Doctoral Dissertation, University of Salford, Salford, UK, 2023. [Google Scholar]
- Sadaf, M.; Iqbal, Z.; Javed, A.R.; Saba, I.; Krichen, M.; Majeed, S.; Raza, A. Connected and Automated Vehicles: Infrastructure, Applications, Security, Critical Challenges, and Future Aspects. Technologies 2023, 11, 117. [Google Scholar] [CrossRef]
- Blücher, S.; Vielhaben, J.; Strodthoff, N. PredDiff: Explanations and interactions from conditional expectations. Artif. Intell. 2022, 312, 103774. [Google Scholar] [CrossRef]
- Ghosh, D.P. Intelligent Infrastructure Delivery: AI-Driven Solutions for Lifecycle Design and Engineering Management. Preprint 2025. [Google Scholar] [CrossRef]
- Qiu, S.; Li, X.; Chen, Y.; Wang, W.; Wang, J.; Cheng, R.; Zaheer, Q. Knowledge graph-based operation and maintenance risk analysis and early warning approach for railway traction power supply systems. Eng. Appl. Artif. Intell. 2026, 166, 113564. [Google Scholar] [CrossRef]
- Lin, Y.W.; Hsieh, C.C.; Huang, W.H.; Hsieh, S.L.; Hung, W.H. Railway Track Fasteners Fault Detection using Deep Learning. In Proceedings of the 2019 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), Yunlin, Taiwan, 3–6 October 2019. [Google Scholar] [CrossRef]
- Rane, N.; Choudhary, S.; Rane, J. Transforming the civil engineering sector with generative artificial intelligence, such as ChatGPT or Bard. SSRN Electron. J. 2024. [Google Scholar] [CrossRef]
- Zeb, S.; Mahmood, A.; Khowaja, S.A.; Dev, K.; Hassan, S.A.; Gidlund, M.; Bellavista, P. Towards defining industry 5.0 vision with intelligent and softwarized wireless network architectures and services: A survey. J. Netw. Comput. Appl. 2024, 223, 103796. [Google Scholar] [CrossRef]
- Ning, H.; Chang, C.; Zaheer, Q.; Long, X. Investigation on the mechanical properties and failure mechanism of high-strength recycled concrete enhanced with basalt fiber and nano-calcium carbonate under impact loading. Constr. Build. Mater. 2025, 505, 144700. [Google Scholar] [CrossRef]
- Qiu, S.; Zaheer, Q.; Shah, S.M.A.H.; Ai, C.; Wang, J.; Zhan, Y. Vector-Quantized Variational Teacher and Multimodal Collaborative Student for Crack Segmentation via Knowledge Distillation. J. Comput. Civ. Eng. 2025, 39, 04025030. [Google Scholar] [CrossRef]
- Wang, W.; Yin, Q.; Ai, C.; Wang, J.; Zaheer, Q.; Niu, H.; Cai, B.; Qiu, S.; Peng, J. Automation railway fastener tightness detection based on instance segmentation and monocular depth estimation. Eng. Struct. 2025, 322, 119229. [Google Scholar] [CrossRef]
- Zaheer, Q.; Ehsan, H.; Wang, W.; Malik, M.; Wang, J. Real-time synthetic data generation and segmentation of railway fasteners using an attention guided dual-phase diffusion-aided model. Measurement 2026, 261, 119874. [Google Scholar] [CrossRef]
- Qiu, S.; Xiao, C.; Wei, X.; Zaheer, Q.; Wang, W.; Guan, J.; Cheng, R.; Luo, Y.; Kuang, J.; Wang, J. Flatness assessment for NATM tunnel shotcrete lining quality monitoring based on point cloud and 2D discrete wavelet transform. J. Civ. Struct. Health Monit. 2025, 15, 3841–3860. [Google Scholar] [CrossRef]
- De Donato, L.; Flammini, F.; Member, S.; Goverde, R.M.P.; Lin, Z.; Liu, R.; Marrone, S.; Nardone, R.; Tang, T.; Vittorini, V. Artificial Intelligence in Railway Transport: Taxonomy, Regulations, and Applications. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14011–14024. [Google Scholar] [CrossRef]
- Soroush, M.; Braatz, R.D. Artificial Intelligence in Manufacturing; Elsevier: Amsterdam, The Netherlands, 2024; ISBN 9780323991346. [Google Scholar]
- Tamilselvi, M.; Rajeshwari, R.; Chandra Kala, R.; Nagaraju, N.; Sasirekha, D. Design and Development of a Novel Sensor Assisted Vehicle Count Prediction using Modulated Deep Learning Principles. In Proceedings of the 2024 Ninth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 4–5 April 2024. [Google Scholar] [CrossRef]
- Malik, M.; Ehsan, H.; Wang, W. Trends and perspectives in structural health monitoring through edge computing: A review with zero-shot natural language processing categorization. J. Railw. Sci. Technol. 2025, 2, 130–200. [Google Scholar] [CrossRef]
- Zaheer, Q.; Wang, J.; Muhammad Ahmed Hassan Shah, S.; Atta, Z.; Wang, W.; Malik, M.; Ai, C.; Qiu, S. An efficient PointNet-based multifaceted autoencoder (PMAE) for denoising rail track fastener point clouds. J. Civ. Struct. Health Monit. 2025, 15, 3995–4015. [Google Scholar] [CrossRef]