Review

Real-Time AI-Driven Prognostics and Health Management in Robotics

by Mohad Tanveer, Muhammad Haris Yazdani, Rana Talal Ahmad Khan and Heung Soo Kim *
Department of Mechanical, Robotics and Energy Engineering, Dongguk University-Seoul, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(7), 3441; https://doi.org/10.3390/app16073441
Submission received: 10 March 2026 / Revised: 27 March 2026 / Accepted: 31 March 2026 / Published: 1 April 2026
(This article belongs to the Special Issue Deep Learning and Predictive Maintenance in Industrial Applications)

Abstract

The increasing deployment of robotic systems in complex and high-stakes environments, such as advanced manufacturing, healthcare, space exploration, and service robotics, requires robust strategies to ensure operational reliability, safety, and predictive maintenance. Real-time prognostics and health management (PHM), supported by recent advances in artificial intelligence (AI), has emerged as a powerful approach for monitoring system health, detecting faults, and predicting failures before they occur. Unlike earlier review studies that mainly summarize traditional machine learning applications, the novelty of this paper lies in presenting a comprehensive taxonomy and critical synthesis of state-of-the-art AI-driven PHM techniques designed specifically for robotic systems. We evaluate a wide range of approaches, beginning with conventional machine learning models and extending to recent deep learning advancements, including transformers, vision transformers, and self-supervised learning frameworks. Furthermore, a novel contribution of this study is the rigorous benchmarking of their real-time feasibility, computational complexity, scalability, and performance trade-offs in practical robotic applications. In addition, this review introduces widely used benchmark datasets and highlights representative industrial case studies that demonstrate the practical effectiveness of AI-enabled PHM systems. The study also discusses important research gaps, including challenges related to model interpretability addressed through eXplainable AI, data privacy supported by federated learning, and the integration of cloud and edge computing within cloud robotics frameworks. Through a comprehensive gap matrix and quantitative comparative evaluations, this review provides insights to support the development of resilient, interpretable, and intelligent PHM systems for next-generation robotic applications.

1. Introduction

Robots and other devices perform a significant proportion of tasks in contemporary enterprises. These robots consist of various components that collaborate to execute specific functions. Due to their constant operation, these components are susceptible to wear, which may occasionally lead to catastrophic failures. Robotics technology has seen exceptional progress in recent decades, profoundly transforming sectors such as manufacturing, aerospace, healthcare, and service applications [1,2,3]. Advances in sensors, actuation, and control technologies have enabled robots to execute increasingly complex tasks with enhanced precision and adaptability [4,5,6,7,8]. As these systems become more vital to critical operations, ensuring their reliability and longevity has become crucial. The global robotics market has experienced significant growth, reflecting the rising adoption of robotic technologies across varied sectors. In 2021, the market was valued at approximately USD 100.59 billion and is projected to reach USD 178.63 billion by 2030, growing at a compound annual growth rate (CAGR) of 12.17% during the forecast period [9]. The increased deployment of robotic systems in complex and high-stakes environments, such as advanced manufacturing, healthcare, space exploration, and aerial robotics, requires robust strategies to ensure operational reliability and safety [10]. In particular, aerial systems face unique challenges where sensor faults and obstacle avoidance must be managed simultaneously through fault-tolerant formation-containment control and advanced maneuver generation using transfer-learning-based deep reinforcement learning [11].
Beyond improving component reliability, AI (artificial intelligence)-driven prognostics and health management (PHM) has a significant impact on modern manufacturing environments. In smart factories and Industry 4.0 production systems, predictive maintenance enabled by AI models allows robotic platforms to continuously monitor their operational health and anticipate potential failures before they occur [12]. This capability directly contributes to reduced unplanned downtime, improved equipment utilization, and enhanced production continuity. By integrating PHM systems with manufacturing execution systems and industrial Internet of Things (IoT) infrastructures, maintenance decisions can be coordinated at the system level, enabling optimized scheduling, adaptive production planning, and more efficient allocation of maintenance resources. As a result, AI-driven PHM plays a critical role in improving productivity, lowering operational costs, and supporting the transition toward fully autonomous and intelligent manufacturing ecosystems.
However, the increasing complexity and deployment scale of robotic systems necessitate advanced maintenance strategies [13]. Traditional, time-based maintenance methods are inadequate for managing the complex dynamics and operational demands of modern robotics. Unexpected failures can lead to significant downtime, safety hazards, and expensive repairs, underscoring the need for proactive maintenance approaches that maintain optimal performance and minimize lifecycle costs. PHM has recently emerged as a promising methodology for system health monitoring, diagnostics, remaining useful life (RUL) prediction, and prognostics. It is considered an effective method capable of providing comprehensive, accurate, and tailored solutions for managing system health [14].
PHM encompasses three primary functions: early fault detection, fault isolation and identification, and prediction of the RUL of a component or system. These functions can be applied at either the component or system level. At the component level, PHM focuses on developing monitoring strategies for electromechanical components such as electric motors, electronic circuits, bearings, and gear reducers. It assesses whether a component’s condition has deteriorated over time due to environmental influences, operational stresses, or performance-related factors [15,16]. Conversely, at the system level, PHM evaluates the overall health status of the system by considering its operational behavior, design characteristics, and process features [17]. The reliability of the entire PHM framework hinges on the accurate identification of issues; incorrect fault identification may compromise the system’s effectiveness. Recently, physics-based methods [18,19] and data-driven approaches [20,21,22] have driven significant advances in fault detection and diagnosis within PHM systems.
Traditionally, RUL is defined as the time-to-failure conditioned on the current observation history as follows:
$$\mathrm{RUL}(t_c) = t_f - t_c, \qquad t_f = \inf\{\, t > t_c : H(t) \geq H_{\mathrm{thr}} \,\},$$
where $H(t)$ is a health indicator and $H_{\mathrm{thr}}$ is a failure threshold.
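As an illustration, the threshold-crossing definition above can be implemented directly on a sampled health indicator; the indicator values, sampling grid, and threshold below are hypothetical:

```python
# Hypothetical sketch: estimating RUL from a sampled health indicator.
# The health trajectory and threshold are illustrative, not from a real system.

def estimate_rul(times, health, h_thr, t_c):
    """Return t_f - t_c, where t_f is the first time after t_c at which
    the health indicator H(t) reaches the failure threshold H_thr."""
    for t, h in zip(times, health):
        if t > t_c and h >= h_thr:
            return t - t_c  # first threshold crossing defines t_f
    return None  # no failure predicted within the observed horizon

# Example: a linearly degrading health indicator sampled once per hour.
times = list(range(10))
health = [0.1 * t for t in times]                     # H(t) grows toward failure
print(estimate_rul(times, health, h_thr=0.7, t_c=2))  # crosses at t=7 -> RUL = 5
```

In practice the future trajectory of $H(t)$ is of course not observed and must itself be forecast, which is exactly what the prognostics models surveyed later provide.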
Data-driven approaches have gained significant traction due to their ability to dynamically adapt to changing conditions in real time. Supported by advances in computational power and sensor technologies, these methods have improved the efficiency of data acquisition. Typically, these approaches employ machine learning and deep learning models, either through the utilization of handcrafted features or by using deep neural networks with multiple layers to autonomously learn and discern significant patterns from the data. That said, practically deploying a PHM system using contemporary data-driven methods necessitates comprehensive and real-time data acquisition, involving measurements such as vibration, acoustic emission, laser displacement, temperature, rotational speed, and electric current [23,24,25,26,27,28].
Previous PHM approaches have focused on extracting statistical features from audio signals and employing conventional machine learning models for classification. Classifiers such as support vector machines (SVMs) and artificial neural networks (ANNs) have been trained on six statistical metrics: mean, range, standard deviation, skewness, kurtosis, and crest factor [29]. A vibration-based system has been developed to facilitate autonomous operation in mining environments [30]. To process and analyze vibration signals, one study employed a low-pass Butterworth filter and extracted eight features from the time domain, eight from the frequency domain, and five using Morlet wavelet analysis [31]. Another study detailed a method for classifying audio signals using Mel-frequency cepstrum coefficients (MFCCs) and an SVM, comparing MFCC data against a feature library to determine whether a motorcycle engine was operating correctly or exhibiting faults. In comparison, [32] developed a machine learning-driven approach to identify and diagnose defects in electric current signals, adopting a novel method for feature selection, extraction, and integration. Within the domain of deep learning, [33] introduced a diagnostic method that integrated sensor data fusion with a convolutional neural network (CNN) to assess centrifugal pump health. Likewise, [34] presented a technique for detecting sensor faults by simulating defective sensor data and transforming the signal recognition task into an image classification problem using the continuous wavelet transform (CWT). To generate the time–frequency scalograms used by the CNN-based PHM, the CWT was computed as
$$W_x(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt,$$
where $a$ and $b$ denote scale and translation, and $\psi^{*}$ is the complex conjugate of the mother wavelet.
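As a hedged sketch (not the implementation used in the cited studies), the CWT integral can be approximated numerically on a sampling grid; the Ricker ("Mexican hat") mother wavelet and the scale values are illustrative choices:

```python
import numpy as np

# Illustrative numerical CWT: approximate the integral by a Riemann sum
# on the sampling grid. The Ricker wavelet and scales are assumptions.

def ricker(t):
    """Ricker (Mexican hat) mother wavelet psi(t); real-valued, so psi* = psi."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(x, t, scales):
    """W(a, b) = (1/sqrt(a)) * integral x(t) psi*((t - b)/a) dt."""
    dt = t[1] - t[0]
    out = np.empty((len(scales), len(t)))
    for i, a in enumerate(scales):
        for j, b in enumerate(t):
            out[i, j] = (x * ricker((t - b) / a)).sum() * dt / np.sqrt(a)
    return out  # rows: scales a, columns: translations b

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 10 * t)                         # 10 Hz test tone
scalogram = np.abs(cwt(x, t, scales=[0.01, 0.05, 0.1]))
print(scalogram.shape)                                  # (3, 200)
```

The resulting 2D scalogram is exactly the kind of time–frequency image fed to CNN classifiers in the studies above; production code would typically use an optimized library routine rather than this double loop.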
Recent research in PHM has demonstrated promising results by converting raw data, such as audio signals, into visual formats and using these to train advanced deep learning models, such as CNNs, to address acoustic emission-related machine faults [35,36,37,38]. This is because deep learning models often perform substantially better on 2D image representations, which preserve fine-grained signal structure that would otherwise be lost during normalization and other preprocessing steps. Similarly, studies referenced in [39,40] introduced approaches for detecting tool wear using deep learning applied to multi-channel cutting force time-series data, along with an entropy-based sparsity metric to predict bearing defects.
An implementation framework for integrating PHM into robotic systems is shown in Figure 1. The process initiates with the identification of equipment suitable for PHM application, such as robotic manipulators or autonomous systems operating in mission-critical environments. These components are typically susceptible to wear, fatigue, or unexpected faults, making them ideal for health monitoring. Subsequently, establishing a decision-making framework for information and communication technology (ICT) infrastructure is crucial to facilitate seamless data acquisition, communication, and storage. This process involves setting up a robust ICT network that supports sensor integration, data transmission, and system interoperability. Once the infrastructure is established, PHM algorithms are deployed. These algorithms analyze incoming data using ML, statistical models, or physics-based methods to detect anomalies, assess health status, and forecast potential failures. The insights generated by the PHM layer inform decision-making algorithms, which turn predictive analytics into actionable maintenance strategies.
To formalize real-time sensing, the measured signal from each modality can be represented as a noisy observation of the latent system state:
$$y_t = h(x_t) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, R),$$
where $x_t$ is the latent state, $h(\cdot)$ is the sensor observation function, and $R$ is the measurement noise covariance.
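A minimal simulation of this observation model, with a hypothetical tanh sensor response and an illustrative noise covariance, might look like:

```python
import numpy as np

# Minimal sketch of the observation model: a sensor reading is a nonlinear
# function of the latent state plus Gaussian noise. h() and R are assumptions.
rng = np.random.default_rng(0)

def observe(x, R=0.01):
    h = np.tanh(x)                                   # hypothetical response h(x)
    eps = rng.normal(0.0, np.sqrt(R), size=np.shape(x))
    return h + eps                                   # y_t = h(x_t) + eps_t

x = np.linspace(-1, 1, 5)                            # latent system states
y = observe(x)
print(y.shape)                                       # (5,)
```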
To better understand the PHM framework, it is first necessary to understand the tasks involved in PHM. This is represented in Figure 2, where we can see the process beginning with a robotic platform equipped with multiple sensors, continuously capturing various types of operational signals such as vibration, acoustic, or current data. These signals are then processed through a multi-stage PHM pipeline. The first stage involves anomaly detection, where patterns in the sensor data are analyzed to identify deviations from normal behavior. In the depicted clustering diagram, blue triangles represent healthy conditions, while red triangles indicate anomalous states. Once an anomaly is detected, the system progresses to the degradation level assessment stage, categorizing the severity of the detected fault. This is represented by discrete health states (I, II, and III), corresponding to increasing degrees of degradation as shown. Finally, the prognostics module estimates the RUL of the component or system. This calculation, determined as the difference between the predicted failure time (tf) and the current time (tc), aids in failure anticipation and enhances maintenance decision-making.
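The three stages just described can be caricatured as a rule-based sketch; all thresholds, the signal-energy feature, and the predicted failure time below are invented purely for illustration:

```python
# Toy sketch of the three PHM stages: anomaly detection, degradation-level
# assessment (I/II/III), and RUL estimation. All values are illustrative.

def phm_step(signal_energy, t_c, predicted_t_f):
    # Stage 1: anomaly detection against a baseline band
    if signal_energy <= 1.5:
        return {"state": "healthy"}
    # Stage 2: discrete degradation level (I, II, III)
    if signal_energy < 2.5:
        level = "I"
    elif signal_energy < 4.0:
        level = "II"
    else:
        level = "III"
    # Stage 3: prognostics -> RUL = t_f - t_c
    return {"state": "degraded", "level": level, "rul": predicted_t_f - t_c}

print(phm_step(3.0, t_c=100, predicted_t_f=160))
# {'state': 'degraded', 'level': 'II', 'rul': 60}
```

Real systems replace each hand-set rule with a learned model, but the staged structure of the pipeline is the same.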
Recently, researchers have increasingly delved into real-time monitoring for robotics [41,42,43,44,45,46]. This trend is visible in the exponential increase in related publications over the past two decades, as illustrated in Figure 3. This review presents the state of the art in real-time, AI-driven PHM for robotics, illuminating the entire PHM workflow from sensor integration and data preprocessing to advanced fault detection, prognostics, and decision support system design. Much of the existing literature provides descriptive aggregations of machine learning and deep learning applications without critically evaluating model complexity, real-time latency constraints, or interpretability.
To address this gap, this paper introduces a novel taxonomy classifying robotic PHM across four critical dimensions: (1) sensing modality and environmental complexity; (2) algorithmic architecture spanning traditional ML to recent transformers and self-supervised learning; (3) real-time deployment constraints such as edge vs. cloud latency using architectural floating point operations (FLOPs); and (4) trust and security through techniques like eXplainable AI and federated learning. Unlike previous reviews, this manuscript benchmarks architectural complexities, presents a formalized gap matrix highlighting underexplored areas, and integrates recent advancements in attention mechanisms and multimodal monitoring.
The insights provided herein aim to assist researchers and practitioners in developing robust, efficient, and intelligent PHM systems tailored to meet the escalating demands of robotic technologies. To ensure transparency and methodological rigor, this review followed a structured literature selection procedure inspired by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. A systematic search was conducted across major scientific databases, including Scopus, Web of Science, IEEE Xplore, and Google Scholar, to identify relevant publications related to artificial intelligence-driven prognostics and health management (PHM) in robotics. The search used combinations of keywords such as “robotics PHM,” “predictive maintenance in robotics,” “remaining useful life prediction,” “fault diagnosis in robots,” “machine learning for PHM,” and “deep learning prognostics.”
The initial search focused on publications between 2005 and 2026, reflecting the rapid development of AI-based prognostics methods during this period. After duplicate records were removed, titles and abstracts were screened to exclude studies unrelated to robotics, predictive maintenance, or AI-based health monitoring. The remaining articles were assessed through full-text review to determine their relevance to the scope of this paper. Studies were included if they satisfied at least one of the following criteria:
(i) Application of machine learning or deep learning for robotic fault detection, diagnosis, or RUL prediction;
(ii) Development of PHM frameworks or architectures for robotic systems;
(iii) Experimental evaluation of AI-based prognostics using industrial or benchmark datasets.
Studies focusing exclusively on non-robotic systems, purely theoretical reliability models without AI integration, or articles lacking sufficient methodological detail were excluded. Following this screening process, the selected literature formed the basis for the taxonomy, comparative analysis, and critical discussion presented throughout this review.
The remainder of the article is structured as follows: Section 2 discusses sensor integration and data preprocessing relevant to robotics. Section 3 explores various AI methods applied to PHM in robotic systems. Section 4 addresses real-time fault detection and prognostics. Section 5 highlights current challenges and future directions for implementing PHM in robotics. Finally, Section 6 offers concluding remarks and summarizes the critical insights of the review.

2. Sensor Integration and Data Preprocessing in Robotics

Effective PHM in robotics relies heavily on sophisticated sensor technologies to monitor and assess the condition of robotic systems in real time. These sensors collect essential data enabling the detection of anomalies, fault diagnosis, and failure prediction, thus ensuring optimal performance and minimizing downtime. Table 1 summarizes the commonly utilized sensors in robots, their functions, and the issues they help detect.
The integration of these diverse sensors enables comprehensive monitoring of both the internal state and external interactions of robotic systems. The data gathered is crucial in devising AI-driven models that predict failures before they occur, facilitating proactive maintenance strategies. For example, vibration data from accelerometers can be utilized to identify patterns indicative of wear and tear, while thermal data might identify overheating components that could fail if not addressed. Advancements in sensor technology, including miniaturization and enhanced wireless communication capabilities, have further improved the feasibility and efficiency of real-time health monitoring in robotics. The development of intelligent sensors capable of preprocessing data and seamless communication with central diagnostic systems has streamlined the implementation of sophisticated PHM solutions. The next crucial step is transforming raw sensor data into meaningful insights through preprocessing. This stage ensures that the data fed into PHM algorithms is both accurate and informative. It involves cleaning and organizing raw sensor outputs, which often contain noise, artifacts, and inconsistencies. Techniques such as filtering, e.g., low-pass, high-pass, or band-pass; normalization; and outlier removal are utilized to improve data quality. Interpolation methods may also be applied to address missing data points, ensuring a continuous and reliable dataset for analysis. Feature extraction then distills this refined data into lower-dimensional representations that capture the essential characteristics of the system’s behavior. To address this challenge, dimensionality reduction techniques are often applied to retain the most informative aspects of the data while reducing computational complexity and redundancy. One of the most widely used techniques for this purpose is Principal Component Analysis (PCA). 
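The cleaning steps mentioned above, low-pass filtering and interpolation of missing samples, can be sketched minimally in plain Python; the window length and the toy signal (with a gap and a spike) are illustrative assumptions:

```python
# Hedged sketch of two cleaning steps: a moving-average low-pass filter
# and linear interpolation of missing (None) samples. Values are toy data.

def moving_average(x, k=3):
    """Simple low-pass filter: average each sample with its neighbors."""
    half = k // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def interpolate_gaps(x):
    """Fill None entries by linear interpolation between known neighbors."""
    y = list(x)
    for i, v in enumerate(y):
        if v is None:
            lo = max(j for j in range(i) if y[j] is not None)
            hi = min(j for j in range(i + 1, len(y)) if y[j] is not None)
            y[i] = y[lo] + (y[hi] - y[lo]) * (i - lo) / (hi - lo)
    return y

raw = [1.0, None, 3.0, 4.0, 100.0, 4.0]   # gap at index 1, spike at index 4
filled = interpolate_gaps(raw)             # -> [1.0, 2.0, 3.0, 4.0, 100.0, 4.0]
print(moving_average(filled))              # spike is smeared out by the filter
```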
Table 2 compares the major machine learning and deep learning techniques for robotic PHM. PCA transforms the original feature space into a set of orthogonal components that capture the directions of maximum variance in the data. By projecting the data onto a smaller subset of these principal components, PCA preserves the most significant patterns while discarding less informative variations, thereby improving the efficiency and robustness of downstream learning models used in PHM. Dimensionality reduction via PCA can be expressed as a linear projection:
$$z = W^{\top}(x - \mu), \qquad W = [\,w_1, \ldots, w_d\,],$$
where W contains the top d eigenvectors of the covariance matrix.
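Under these definitions, the PCA projection can be sketched from an eigendecomposition of the sample covariance; the synthetic data and the choice of $d = 2$ are assumptions made for illustration:

```python
import numpy as np

# Sketch of the PCA projection z = W^T (x - mu), where W holds the top-d
# eigenvectors of the sample covariance. The data here is synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))              # 200 samples, 6 sensor features
X[:, 1] = 3.0 * X[:, 0] + 0.1 * X[:, 1]    # inject correlation between features

mu = X.mean(axis=0)
cov = np.cov(X - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
W = eigvecs[:, ::-1][:, :2]                # top d=2 principal directions

Z = (X - mu) @ W                           # projected (reduced) features
print(Z.shape)                             # (200, 2)
```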
To ensure comparable feature scales across sensors, z-score normalization is applied:
$$\tilde{y}_t = \frac{y_t - \mu}{\sigma}, \qquad \mu = \frac{1}{T}\sum_{t=1}^{T} y_t, \qquad \sigma = \sqrt{\frac{1}{T}\sum_{t=1}^{T} \left(y_t - \mu\right)^2}.$$
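A direct implementation of this normalization over a window of $T$ samples might look like the following; the sample values are illustrative:

```python
import math

# Minimal z-score normalization over a window of T samples, matching the
# definitions of mu and sigma above. The input values are illustrative.

def zscore(y):
    T = len(y)
    mu = sum(y) / T
    sigma = math.sqrt(sum((v - mu) ** 2 for v in y) / T)
    return [(v - mu) / sigma for v in y]

y = [2.0, 4.0, 6.0, 8.0]
print(zscore(y))   # mean 5, std sqrt(5): values symmetric around 0
```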
For non-stationary signals, the short-time Fourier transform (STFT) is defined as follows:
$$X(\tau, \omega) = \sum_{t=-\infty}^{\infty} x(t)\, w(t - \tau)\, e^{-j\omega t},$$
where $w(\cdot)$ is the analysis window.
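A naive (non-FFT) sketch of this discrete STFT, with an assumed Hann analysis window and illustrative frame parameters, could be:

```python
import numpy as np

# Direct (non-FFT) sketch of the discrete STFT: slide a window w over the
# signal and correlate with complex exponentials. Parameters are assumptions.

def stft(x, win_len=64, hop=32, n_freqs=33):
    w = np.hanning(win_len)                        # analysis window w(.)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * w
        n = np.arange(win_len)
        freqs = np.arange(n_freqs)
        # X(tau, omega_k) = sum_t x(t) w(t - tau) e^{-j omega_k t}
        spec = seg @ np.exp(-2j * np.pi * np.outer(n, freqs) / win_len)
        frames.append(spec)
    return np.array(frames)                         # (num_frames, n_freqs)

x = np.sin(2 * np.pi * 0.1 * np.arange(256))        # test tone
print(stft(x).shape)                                # (7, 33)
```

Library routines compute each frame with an FFT instead of the explicit matrix product, but the windowing structure is the same.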
By integrating these preprocessing and feature extraction techniques, robotic PHM systems can effectively interpret sensor data, facilitating early fault detection and enabling proactive maintenance strategies.
Despite the rapid advancements in complex deep learning architectures, traditional statistical methods and probabilistic reliability models remain the foundational industry standard for many high-stakes robotic environments. Frameworks based on the Weibull distribution for lifecycle survival analysis and Markov Decision Processes (MDPs) for state-based degradation modeling offer distinct advantages that modern neural networks often lack, namely, strict mathematical transparency and minimal computational overhead [63,64]. In safety-critical applications such as collaborative manufacturing and autonomous navigation, practitioners frequently rely on these statistical approaches because they provide highly interpretable risk assessments and explicit uncertainty quantification and require significantly less training data. Furthermore, these traditional models are increasingly being integrated with data-driven techniques to form hybrid probabilistic networks, leveraging the physical consistency of statistical reliability while benefiting from the adaptive feature extraction of AI [65]. Therefore, a holistic robotic PHM strategy must not discard these traditional frameworks but rather utilize them as robust baseline models against which the performance and computational trade-offs of advanced neural architectures can be accurately evaluated.
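For concreteness, the Weibull survival and hazard functions used in such statistical baselines can be computed directly; the shape and scale parameters below are hypothetical:

```python
import math

# Illustrative Weibull reliability computations of the kind used as
# statistical PHM baselines. Shape/scale values are hypothetical.

def weibull_reliability(t, k, lam):
    """Survival function R(t) = exp(-(t/lambda)^k)."""
    return math.exp(-((t / lam) ** k))

def weibull_hazard(t, k, lam):
    """Hazard rate h(t) = (k/lambda) * (t/lambda)^(k-1)."""
    return (k / lam) * (t / lam) ** (k - 1)

# k > 1 models wear-out: the hazard grows with operating time.
k, lam = 2.0, 1000.0                                   # shape, scale (hours)
print(round(weibull_reliability(500.0, k, lam), 4))    # 0.7788
print(weibull_hazard(500.0, k, lam) < weibull_hazard(900.0, k, lam))  # True
```

The closed-form hazard is one reason these models remain attractive as interpretable baselines: risk at any operating time is available analytically, with no training data required beyond fitting two parameters.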

3. AI Approaches for PHM in Robotics

The integration of advanced AI techniques with PHM in robotics has revolutionized maintenance approaches by enabling early fault detection and real-time decision-making [66]. In robotics, where abundant sensor-generated data such as vibration, thermal, and acoustic signals are available, traditional supervised learning models like SVMs and decision trees have been extensively utilized to distinguish between normal operations and developing failures [55,67,68,69,70,71,72,73]. Research by [74,75] has shown that when these models are trained on comprehensive historical data, they can reliably predict potential faults. Complementing these approaches, unsupervised learning methods have been employed to detect anomalies and cluster sensor data into meaningful patterns, crucial in environments where labeled data is scarce or evolving fault types are present. Deep learning architectures have further advanced PHM capabilities by utilizing CNNs to extract spatial features and recurrent neural networks (RNNs) or long short-term memory networks (LSTMs) to model the temporal dependencies inherent in time-series data [76]. Research has demonstrated that these techniques considerably improve the estimation of component degradation and the prediction of RUL. Additionally, generative adversarial networks (GANs) have emerged as a powerful tool within the PHM framework. GANs are particularly effective in augmenting datasets by synthesizing realistic fault scenarios, which are often underrepresented in traditional training sets [77]. Findings indicate that the inclusion of GANs not only enhances fault diagnosis accuracy but also improves feature extraction by providing synthetic, yet highly representative, data that bridges data gaps. Figure 4 illustrates how a basic CNN processes input data through successive convolutional and pooling layers to extract hierarchical features, which are subsequently fed into fully connected layers for final classification or prediction. 
This layered structure allows the model to capture both low-level and high-level representations that are essential for tasks such as fault detection and prognostics.
While basic CNNs and autoencoders successfully extract hierarchical spatial features, tracking complex temporal sequences in dynamic, occlusion-prone environments requires carefully optimized recurrent structures. Recent innovations, such as hybrid RNN-based analysis methods that employ multiple hypothesis tracking, have demonstrated significant improvements in handling visual occlusion and detecting human motion. The algorithmic principles behind these advanced temporal tracking models transfer directly to tracking complex, multi-joint robotic degradation over time, reducing reliance on basic, computationally heavy autoencoder pipelines.
The incorporation of hybrid methodologies that combine data-driven approaches with physics-based models has become increasingly popular. By incorporating known physical constraints into neural networks, often referred to as physics-informed neural networks, or by developing digital twins that simulate the physical behavior of robotic systems [79], studies like [80] have achieved significant enhancements in prediction robustness. These hybrid models are particularly beneficial in scenarios where sensor data may be noisy or incomplete, as the underlying physical principles provide guidance to the AI model towards more reliable predictions. Reinforcement learning (RL) complements these methods by addressing the challenge of dynamic adaptation in real-time environments. RL algorithms learn optimal maintenance and control strategies through trial and error in simulated settings, enabling robotic systems to adapt their fault mitigation strategies based on continuously evolving operational contexts. Studies by [81] have highlighted the potential of RL in developing adaptive policies that can dynamically adjust maintenance schedules and fault management protocols, ultimately reducing system downtime and enhancing operational efficiency. Together, the integration of classical supervised and unsupervised techniques with deep learning methods, including CNNs, RNNs, LSTMs, and GANs, alongside the incorporation of hybrid models with reinforcement learning, creates a robust and comprehensive framework for PHM in robotics. This multifaceted approach ensures that robotic systems are not only more capable of predicting and diagnosing faults but also of adaptively managing their health in complex and evolving operational contexts. Table 3 summarizes various AI-based techniques and methods used in robotics PHM, categorizing them into supervised, unsupervised, deep learning, hybrid, and reinforcement learning approaches. 
It also details their respective applications in tasks such as fault detection, degradation modeling, anomaly detection, and maintenance optimization, accompanied by relevant references from the literature.
To provide a clearer analytical synthesis of the strengths and limitations of different AI paradigms used in robotic PHM, a quantitative comparison is presented in Figure 5.
Figure 5 provides a quantitative visual comparison of five prominent AI architectures for robotic PHM, evaluated across four critical deployment criteria: RUL prediction accuracy, model complexity, real-time latency (inference speed), and interpretability. As illustrated by the distinct geometric profiles in the radar chart, no single architecture offers a perfect solution; rather, each presents distinct trade-offs. Traditional machine learning models excel in operational transparency and efficiency, scoring the highest in interpretability and demonstrating excellent real-time latency capabilities (fast inference). However, they exhibit the lowest RUL prediction accuracy and model complexity, limiting their ability to capture highly non-linear degradation behaviors in modern robotic systems. In contrast, deep learning and vision transformers represent a paradigm shift toward high performance, achieving the highest RUL accuracy and handling immense model complexity by learning rich spatiotemporal patterns from multimodal sensor data. Nevertheless, the chart clearly illustrates their primary drawbacks: they suffer from severe limitations in real-time latency (making edge deployment challenging) and score the lowest in interpretability due to their “black-box” nature. Similarly, self-supervised learning approaches maintain high RUL accuracy and complexity to reduce reliance on labeled failure data, yet their visual footprint shows they still inherit the latency and interpretability challenges of deep architectures. Finally, physics-informed neural networks (PINNs) offer a crucial bridge between traditional and deep learning models. By embedding physical laws into the network, PINNs successfully recover a high degree of interpretability while maintaining moderate RUL accuracy and manageable model complexity, making them particularly well-suited for high-stakes, safety-critical robotic applications where both predictive power and operational transparency are mandatory.
Attention-based architectures are increasingly being adopted in PHM applications because they enable models to dynamically prioritize the most relevant features and time steps within high-dimensional sensor streams [82]. One of these attention-based architectures is transformers, which have recently revolutionized sequential data processing, vastly outperforming standard LSTMs in capturing long-range dependencies for RUL prediction without sequential bottlenecks [83]. Mechanisms like the adversarial deep dual patch attention mechanism (D2PAM) [84] have shown profound efficacy in predicting high-frequency anomalous biological events, e.g., epileptic seizures. The underlying architecture of D2PAM is highly applicable to high-frequency vibration anomaly detection in robotic actuators, where subtle, high-frequency shifts indicate early-stage bearing faults.
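The core operation behind such models, scaled dot-product self-attention, can be sketched in a few lines; here queries, keys, and values all equal the raw input (no learned projections), which is a deliberate simplification of real transformer layers:

```python
import numpy as np

# Minimal scaled dot-product self-attention over a window of sensor
# embeddings. Using the input as Q, K, and V is a simplification:
# real transformer layers apply learned linear projections first.
rng = np.random.default_rng(0)

def self_attention(X):
    """X: (T, d) sequence of feature vectors; returns attended (T, d)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # query/key similarity
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)             # softmax over time steps
    return A @ X                                  # weighted sum of values

X = rng.normal(size=(10, 4))                      # 10 time steps, 4 features
print(self_attention(X).shape)                    # (10, 4)
```

The attention weights `A` make explicit which time steps the model prioritizes, which is precisely the property that lets attention-based PHM models focus on the sensor segments most indicative of incipient faults.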
Furthermore, vision transformers (ViTs) and mixed spatial transformers, such as the Dual-3DM3AD model [85] applied to the semantic segmentation of complex 3D magnetic resonance imaging (MRI) data [86], show the power of self-attention in handling complex 3D structural datasets. In robotic systems, these architectures enable improved real-time processing and interpretation of high-dimensional sensory inputs, such as 3D point clouds and multi-camera visual data, which are critical for accurate structural health monitoring and anomaly detection. Finally, self-supervised learning (SSL) effectively mitigates the chronic issue of labeled data scarcity [87]. By using proxy tasks such as masking and reconstructing sensor signals, SSL models learn robust, generalized representations of “healthy” robotic states without requiring extensive libraries of failure data, directly addressing traditional data constraints.
While traditional deep learning models automate feature extraction, they often lack the physical consistency required for high-precision robotic tasks. Recent advancements in physics-informed neural networks (PINNs) have addressed this by integrating physical laws into the learning process. For instance, combining PINNs with Fourier neural operators has enabled real-time monitoring of complex thermal-elastic deformations using only a limited number of sensors [88]. This approach significantly enhances the reliability of PHM systems in environments with sparse sensing capabilities.
Table 3. Preprocessing and feature extraction techniques with typical applications in fault detection for Robotics PHM.
| Category | Techniques/Methods | Applications | Ref |
| --- | --- | --- | --- |
| Supervised learning methods | SVM; Decision Trees | Fault detection, diagnostics, classification of normal vs. abnormal operations | [16,66,89,90,91] |
| Unsupervised learning methods | Clustering algorithms; Anomaly detection methods | Detection of unknown anomalies, fault classification, and clustering of unlabeled data | [92,93,94,95,96,97] |
| Deep learning architectures | CNN (spatial feature extraction); RNN and LSTM (temporal dependencies); GANs (data augmentation) | Predictive degradation modeling, RUL estimation, synthetic fault scenario generation | [76,98,99] |
| Hybrid models | PINNs; Digital twins (simulation-based models) | Handling noisy/incomplete data, prediction robustness, simulation-based validation | [100,101,102] |
| Reinforcement learning | Policy optimization; Adaptive strategy learning | Real-time maintenance optimization, adaptive fault management, minimizing downtime | [96,103,104] |
When selecting an AI technique for PHM in robotics, it is critical to consider several factors to ensure its effective implementation. The availability of data significantly influences the choice; deep learning methods such as CNNs and RNNs generally require large datasets to achieve optimal accuracy, posing challenges in robotics due to the rarity of failure events [105,106,107]. On the other hand, traditional machine learning models like SVMs and random forests (RFs) are capable of operating efficiently with smaller, well-engineered datasets. While traditional models provide greater transparency and are easier to interpret, which is crucial for maintenance engineers to identify critical features, deep learning models, though powerful, are often criticized for their lack of transparency. This challenge can be addressed by adopting hybrid models or implementing post hoc interpretability techniques.
Computational complexity and real-time performance requirements are also significant factors in technique selection for robotics. Deep learning techniques generally require greater computational power and longer training times, which hinders real-time applications. Conversely, traditional methods offer greater computational efficiency and are ideal for incorporation into embedded robotic systems that demand real-time monitoring and decision-making capabilities. Moreover, the selection should also consider the flexibility and robustness of the techniques. Deep learning provides a comprehensive solution that autonomously manages high-dimensional and diverse sensor data, delivering substantial benefits in intricate and changing robotic environments [108,109,110]. By contrast, traditional models demand extensive domain-specific knowledge for feature extraction and might require further tuning to handle operational variability effectively.
A balanced approach, integrating traditional and deep learning methodologies, provides the most robust and effective solution. This hybrid strategy utilizes the interpretability and efficiency of traditional models while harnessing the adaptive feature extraction capabilities and robustness of deep learning techniques, contributing to comprehensive fault detection, diagnosis, and prognostics in robotic systems. A comparative summary of traditional machine learning and deep learning techniques for PHM in robotics, focusing on key decision factors such as data requirements, interpretability, and performance, is detailed in Table 4.
To detect anomalies in complex robotic systems, deep learning models are often employed to learn the underlying distribution of normal operational data. Among these approaches, generative models and reconstruction-based methods have gained significant attention due to their effectiveness in unsupervised settings where labeled fault data is scarce. These models learn representations of normal system behavior and identify deviations that may indicate faults or abnormal conditions.
Figure 6 illustrates two neural network-based approaches for anomaly detection: a GAN-based method and an autoencoder (AE)-based method. The GAN architecture comprises a generator (G) that replicates the distribution of normal-condition data, while the discriminator (D) differentiates between real and generated samples. This enables the model to capture the characteristics of normal data distribution. Additionally, an encoder is integrated to form an autoencoder-like structure that computes reconstruction errors, which are used as anomaly scores. Conversely, the AE-based method uses an encoder (E) to process the input data and a decoder or generator (G) to attempt reconstruction. The difference in reconstruction between the input and output serves as the basis for the anomaly score. Large reconstruction errors suggest deviations from normal operating conditions, thus identifying the data as anomalous. Both methods are extensively applied in unsupervised fault detection within complex systems such as robotics. The GAN training objective used to learn the healthy-data distribution is defined as follows:
$$\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right].$$
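A linear stand-in for the AE branch of Figure 6 can illustrate the reconstruction-error anomaly score. In this sketch PCA plays the role of the encoder/decoder pair; it is a deliberate simplification of the deep models discussed above, and all data are synthetic.

```python
import numpy as np

def fit_linear_ae(X_healthy, k=2):
    """Fit a PCA 'linear autoencoder' on healthy windows (rows of X_healthy)."""
    mu = X_healthy.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_healthy - mu, full_matrices=False)
    return mu, Vt[:k]                   # mean and top-k principal directions

def anomaly_score(x, mu, V):
    """Reconstruction error ||x - x_hat||^2 used as the anomaly score."""
    z = (x - mu) @ V.T                  # encode into the low-dim latent space
    x_hat = mu + z @ V                  # decode back to the sensor space
    return float(np.sum((x - x_hat) ** 2))

rng = np.random.default_rng(0)
healthy = rng.normal(size=(500, 16)) @ rng.normal(size=(16, 16)) * 0.1
mu, V = fit_linear_ae(healthy, k=4)
normal_s = anomaly_score(healthy[0], mu, V)
faulty_s = anomaly_score(healthy[0] + 5.0, mu, V)   # shifted 'faulty' window
```

Windows that deviate from the healthy subspace (here, the shifted sample) receive much larger scores, which is exactly the decision rule the AE-based method in Figure 6 applies with a learned nonlinear encoder and decoder.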

4. Real-Time Fault Detection and Prognostics

Real-time fault detection and prognostics are critical for ensuring the reliability and safety of robotic systems operating in complex environments. Continuous monitoring of system behavior enables the early identification of abnormal conditions that may indicate component degradation or impending failure. This capability is particularly important in robotics, where faults in components such as motors, gearboxes, or reducers can rapidly propagate and affect overall system performance.
A critical factor in the effective implementation of AI for PHM is real-time feasibility. Virtual sensing technology has emerged as a key solution for monitoring parameters that are difficult to measure directly. Recent research has demonstrated that PINN-based surrogate models can function as virtual thermal sensors, providing real-time simulations that maintain high accuracy while operating within the strict computational constraints of robotic controllers [111].
To support real-time monitoring, sensor data, including vibration, temperature, and electrical current, are continuously collected from critical subsystems. These streaming signals are analyzed using signal processing techniques such as sliding window analysis, Fourier transforms, and wavelet-based methods to extract informative features that characterize system behavior. The extracted features are then processed by anomaly detection or prognostic models, which compare incoming observations against learned representations of normal operating conditions to identify deviations that may signal emerging faults.
To preserve temporal structure in streaming data, sensor signals are commonly segmented using overlapping windows:
$$Y_k = \{\, y_t \,\}_{t=kS}^{kS+W-1}$$
where W is the window length and S is the stride.
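A direct implementation of this segmentation is straightforward; the following numpy fragment is illustrative, and the variable names are ours.

```python
import numpy as np

def sliding_windows(y, W, S):
    """Segment a 1-D signal into overlapping windows Y_k = y[k*S : k*S + W]."""
    n_windows = (len(y) - W) // S + 1
    return np.stack([y[k * S : k * S + W] for k in range(n_windows)])

y = np.arange(20)                  # stand-in for a streaming sensor signal
Y = sliding_windows(y, W=8, S=4)   # window length 8, stride 4
print(Y.shape)       # (4, 8)
print(Y[1][:3])      # [4 5 6]
```

With `S < W` the windows overlap, so no transient is lost at a window boundary; each window then feeds the feature extraction and anomaly detection stages described above.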
Modern implementations often utilize lightweight deep learning models optimized for edge or embedded deployment, which reduces latency and removes the dependence on cloud-based processing [112,113,114,115,116], facilitating continuous, adaptive fault detection without the delays associated with centralized processing. Furthermore, hybrid approaches that integrate classical filtering techniques, e.g., Kalman filters, with machine learning models enhance robustness: the filter suppresses measurement noise while the data-driven model captures complex fault patterns [117]. This multi-tiered strategy maintains high sensitivity to faults and ensures reliability under variable operating conditions, establishing real-time PHM as essential for safe and efficient robotic systems.
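The filter-then-detect idea can be sketched as follows. The random-walk state model, the noise variances, and the alarm threshold are assumed for illustration, not taken from [117].

```python
import numpy as np

def kalman_smooth(z, q=1e-4, r=0.1):
    """1-D Kalman filter (random-walk state model) to denoise a sensor stream.

    q: assumed process noise variance; r: assumed measurement noise variance.
    """
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                    # predict: state uncertainty grows
        k = p / (p + r)              # Kalman gain
        x = x + k * (zi - x)         # update with the new measurement
        p = (1 - k) * p
        out[i] = x
    return out

rng = np.random.default_rng(2)
t = np.arange(1000)
signal = 0.001 * t                            # slow drift (incipient fault)
z = signal + 0.3 * rng.normal(size=t.size)    # noisy raw measurements
x_hat = kalman_smooth(z)
alarm = x_hat > 0.5                           # illustrative alarm threshold
```

Thresholding the raw stream `z` would fire spuriously on noise spikes; thresholding the filtered estimate `x_hat` detects the underlying drift reliably, which is the robustness benefit the hybrid approaches exploit.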
Upon the detection of a fault, the focus shifts to predicting the RUL of affected components to enable proactive scheduling of maintenance. For estimating RUL, data-driven techniques, particularly those based on recurrent neural networks like LSTM, have demonstrated efficacy by capturing the temporal evolution of degradation patterns. These models are continuously updated with new sensor data, enabling dynamic adjustments to RUL predictions in response to changing operational conditions. When physical degradation models are available, hybrid methods that merge empirical degradation modeling with data-driven predictions are employed to narrow uncertainty intervals and enhance accuracy. Incremental learning techniques also play a crucial role, as they allow the prognostic model to adapt in real time, maintaining reliability as the system’s operating profile evolves.
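As a baseline illustration of degradation-trend extrapolation for RUL, the sketch below fits a linear trend to a health indicator and extrapolates to a failure threshold. This is a deliberately simple stand-in for the LSTM-based predictors discussed above; the threshold and data are synthetic.

```python
import numpy as np

def rul_from_trend(health, threshold, horizon=10_000):
    """Estimate RUL by fitting a linear trend to a health indicator and
    extrapolating to the failure threshold (a simple data-driven baseline)."""
    t = np.arange(len(health))
    slope, intercept = np.polyfit(t, health, 1)
    if slope <= 0:
        return float(horizon)           # no measurable degradation yet
    t_fail = (threshold - intercept) / slope
    return max(0.0, t_fail - t[-1])     # cycles remaining from 'now'

rng = np.random.default_rng(3)
# Synthetic wear index rising at 0.02 per cycle with measurement noise
health = 0.02 * np.arange(200) + 0.1 * rng.normal(size=200)
rul = rul_from_trend(health, threshold=10.0)
print(rul)
```

Refitting on each new window is a crude form of the continuous updating described above; learned models replace the linear fit but keep the same threshold-crossing logic.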
While real-time fault detection focuses on identifying abnormalities during operation, prognostics aims to estimate the RUL of system components before failure occurs. Recent advances in cyber-physical systems have enabled the integration of digital twin technologies for this purpose [118]. A digital twin is a virtual representation of a physical system that continuously synchronizes with real-world sensor data, allowing the system’s behavior to be simulated, monitored, and predicted over time. By combining real-time data with physics-based models and data-driven analytics, digital twins provide a powerful framework for forecasting degradation and evaluating future operational scenarios.
Figure 7 illustrates a digital twin-based framework for predicting RUL. The process begins with capturing the current machine status using a digital twin, followed by estimating future modeling parameters. Based on the production plan and defined time steps, a simulation is performed to forecast the system’s future dynamic behavior using virtual sensor outputs. The estimated signals are then compared against predefined failure thresholds or nominal baselines. If deviations or threshold exceedances are detected, they are utilized to define a failure event. This comparison is executed iteratively across future time horizons, e.g., Day 20, 40, 60, etc., to determine the point at which the system is expected to breach operational limits. The outcome of this process is an estimation of RUL, which supports proactive maintenance and informed decision-making in robotics and industrial operations.
Integrating the outputs of fault detection and RUL prediction into decision support systems is crucial for translating these insights into actionable maintenance strategies. Data from online monitoring are visualized through intuitive dashboards that display trends, real-time alerts, and diagnostic details, enabling operators to rapidly assess the health of robotic systems [119]. Automated decision-making components, which may utilize reinforcement learning or optimization algorithms, analyze the trade-offs between maintenance costs and the risks associated with unexpected failures. These systems not only recommend timely interventions but also incorporate feedback loops where the outcomes of maintenance actions are fed back into the prognostic models to continually refine their predictions and enhance overall system reliability.
For high-speed industrial robotic control loops, anomaly inference must occur within a strict latency budget of under 10 milliseconds to trigger emergency stops before catastrophic failure [120]. Consequently, deploying computationally heavy transformers or deep CNNs locally on embedded systems is severely constrained. To resolve this, a hybrid edge–cloud architecture is now the industry standard [121]. Lightweight models are deployed at the “edge” for instantaneous anomaly detection, while computationally intensive tasks like RUL prediction are offloaded to cloud robotics platforms such as AWS IoT RoboRunner [122], enabling distributed fleets to pool degradation data globally [123]. Table 5 shows the benchmark datasets commonly used in prognostics research, summarizing target domains, key sensing modalities, and representative applications in remaining useful life prediction and fault prognostics literature.
Theoretical PHM models have increasingly translated into major industrial pilot deployments across advanced robotic manufacturing systems [13]. For example, FANUC’s Zero Down Time platform is a cloud-based predictive maintenance system deployed widely across automotive assembly lines [127,128]. By analyzing motor torque and vibration data streams, the system can predict reducer wear several weeks before functional degradation becomes critical, enabling proactive maintenance scheduling. Similarly, KUKA’s edge analytics solution [129] uses edge computing directly within the robot controller, allowing axis friction to be monitored locally and maintenance alerts to be triggered without continuous reliance on cloud connectivity. Bosch Rexroth [130] has adopted a hybrid strategy that integrates digital twins with artificial intelligence techniques to monitor hydraulic and electrical drives. In this framework, real-time empirical sensor data are continuously matched against physics-based simulation models, allowing early identification of wear and performance deviations.
Despite these advancements, the black-box nature of deep learning models remains a recognized limitation in high-stakes robotic applications. Maintenance engineers require transparency regarding why a system predicts an impending failure, particularly when decisions affect safety, downtime, and operational cost. Explainable artificial intelligence techniques address this need by providing interpretable insights into model behavior. Methods such as SHapley Additive exPlanations (SHAP) [131] and Local Interpretable Model-Agnostic Explanation (LIME) [132] rank individual feature importance, enabling operators to identify which specific sensor measurements contribute most strongly to a RUL prediction. For models that process visual or time–frequency representations, Grad-CAM [133] highlights the specific frequency bands or spatial regions that influenced a convolutional neural network’s fault classification. In Transformer-based architectures, attention visualization techniques provide intrinsic interpretability through self-attention maps, revealing which temporal segments of the machine’s operational history the model emphasized during prediction [134]. Together, these approaches strengthen trust, facilitate validation, and support deployment readiness in industrial robotic PHM systems.
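A lightweight, model-agnostic cousin of these attribution methods is permutation importance, sketched below on synthetic data. This is not SHAP or LIME itself, only an illustration of the feature-ranking principle they share; the model and data are our own toy examples.

```python
import numpy as np

def permutation_importance(predict, X, y, rng=None):
    """Model-agnostic feature ranking: the error increase when a feature
    column is shuffled approximates that feature's contribution."""
    if rng is None:
        rng = np.random.default_rng()
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3))               # e.g. vibration, temp, current
y = 3.0 * X[:, 0] + 0.1 * X[:, 2]           # feature 0 dominates the target
model = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 2]
imp = permutation_importance(model, X, y, rng=rng)
print(np.argmax(imp))  # 0
```

The ranking tells a maintenance engineer which sensor channel drives the prediction, which is the same question SHAP and LIME answer with stronger theoretical guarantees.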

5. Discussion, Challenges and Future Prospects

With the increasing deployment of robotic systems in complex, unstructured, and safety-critical environments, the demand for reliable and responsive health monitoring has escalated significantly. AI-driven PHM methods offer considerable promise by facilitating early fault detection and predictive maintenance. However, their practical implementation in robotics continues to encounter significant obstacles. These challenges arise not only from the technical complexity of robotic systems but also from the demands of real-time operation, variable hardware configurations, and inconsistent data availability.

A predominant concern in robotic PHM is the quality and reliability of sensor data. Accurate condition monitoring depends heavily on a continuous stream of measurements from various onboard sensors. In reality, these sensors are often subjected to harsh environments, leading to issues such as noise, drift, or outright failure. Even minor inconsistencies in sensor readings can lead to the misinterpretation of a robot’s health status, thereby diminishing the effectiveness of fault detection and prognostics. To mitigate these risks, robust data preprocessing and sensor fusion strategies must be implemented to filter noise and extract meaningful features that accurately reflect the true operational state of the system.

Further compounding the issue of data quality is the challenge of real-time processing. Many AI algorithms used for PHM, particularly deep learning models, are computationally intensive and challenging to deploy on the resource-constrained edge devices typically utilized in robotics. Consequently, there is a pressing need to balance model complexity with computational efficiency. Techniques such as model pruning, quantization, and the adoption of lightweight architectures or hardware accelerators are being explored to make these models viable for real-time inference without compromising accuracy.
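The pruning and quantization techniques mentioned above can be illustrated with a minimal numpy sketch; the 80% sparsity target and the symmetric int8 scheme are illustrative choices, not prescriptions from the cited literature.

```python
import numpy as np

def prune(W, sparsity=0.8):
    """Magnitude pruning: zero out the smallest |weights| to shrink compute."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def quantize_int8(W):
    """Symmetric int8 quantization: store weights as int8 plus one fp scale."""
    scale = np.max(np.abs(W)) / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(5)
W = rng.normal(size=(64, 64)).astype(np.float32)   # one dense layer's weights
Wp = prune(W, 0.8)
q, s = quantize_int8(Wp)
W_deq = q.astype(np.float32) * s                   # dequantize for inference
print(np.mean(Wp == 0))                            # ~0.8 sparsity achieved
```

Together the two steps cut both memory footprint (int8 vs. float32) and multiply count (zeroed weights), which is precisely the trade-off that makes deep models viable on embedded robot controllers.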
Despite recent advancements in deep learning for predictive maintenance, a systematic analysis of the current landscape reveals several persistent bottlenecks. As shown in Table 6, a critical disconnect exists between high-performing offline models and the rigorous constraints of real-world robotic environments. Notably, the field remains heavily reliant on synthetic data or GAN-based augmentations, highlighting a pressing need for large-scale, multimodal robotic datasets.
Figure 8 illustrates how these data limitations impact various PHM tasks. Fault detection suffers from missing or erroneous measurements, which are often the result of sensor faults [10], thus diminishing the system’s capability to detect deviations from normal operations. Fault diagnostics, which depend on labeled data to link specific measurements with degradation states, are frequently limited by the lack of such annotations. Prognostics are additionally hindered by the typical practice of collecting data only at failure events, rather than continuously, a result of the logistical burdens associated with handling and storing expansive volumes of time-series data. This discontinuity in tracking substantially impedes accurate RUL predictions, which rely on observing gradual degradation trends over time.
The integration of cloud robotics introduces critical challenges regarding the complexity and privacy of proprietary industrial data [135]. High-dimensional data complexity heavily influences model dependence and generalization; studies evaluating data complexity in medical imaging, such as the classification of brain tumors and Alzheimer’s disease using MRI images [136], reveal that highly complex, multi-source datasets often contain inherent biases that models memorize rather than generalize. Similarly, streaming complex, raw operational data to centralized clouds risks leaking proprietary manufacturing behaviors. To resolve this, federated learning (FL) is emerging as a crucial component. Inspired by decentralized architectures, such as privacy-preserved smartphone recommendation systems, FL allows multiple robots to collaboratively train a shared PHM model. Instead of exchanging raw sensor data, only localized model weight updates are transmitted, preserving corporate data privacy.
In federated learning, the global PHM model is learned by minimizing the aggregate risk across all participating robotic nodes. The global objective function is defined as follows:
$$\min_{\omega} F(\omega) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(\omega)$$
where
  • $K$ represents the total number of clients (individual robots or work cells).
  • $n_k$ is the number of data samples available locally at client $k$, and $n = \sum_{k=1}^{K} n_k$ is the total sample count across all clients.
  • $F_k(\omega)$ is the local loss function for client $k$, typically calculated as the empirical risk over its local data.
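The aggregation step implied by this objective is the FedAvg weighted average of client models. A minimal sketch follows, with flattened weight vectors and synthetic client sizes standing in for real robot fleets.

```python
import numpy as np

def fedavg(local_weights, n_samples):
    """FedAvg aggregation: weighted average of client model weights, using
    the n_k / n coefficients from the global federated objective."""
    n = sum(n_samples)
    return sum((nk / n) * w for nk, w in zip(n_samples, local_weights))

# Three robots share model updates, never raw sensor data:
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 300, 600]          # local dataset sizes n_k
global_w = fedavg(clients, sizes)
print(global_w)
```

Each round, the server broadcasts `global_w` back to the robots, which continue local training; only these weight vectors ever leave a site, preserving proprietary operational data.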
In addition to data and computational challenges, interpretability remains a significant concern. Although deep learning models excel in capturing complex patterns in robotic behavior, their opaque nature can be particularly troubling in high-stakes settings where transparency and trust are imperative. Maintenance personnel need to comprehend the reasons behind a model’s indication of component faults or impending failures. Enhancing interpretability through techniques such as attention mechanisms, saliency mapping, or post hoc explanation tools not only increases transparency but also supports operational decision-making and continuous model refinement. Moreover, such explainability is vital for validating model behavior and ensuring sustained reliability under varying conditions. These challenges underscore the necessity for PHM solutions that are precise, efficient, robust to real-world data constraints, and interpretable in their results. Addressing these issues is crucial for harnessing the full capabilities of AI-driven PHM in robotics and for propelling the field towards more robust, adaptive, and autonomous systems.

6. Conclusions

This review explored the current landscape of real-time, AI-driven PHM in robotics, highlighting its potential and the multifaceted challenges that accompany its implementation. As robotic systems are increasingly deployed in dynamic and mission-critical environments, ensuring their operational reliability through effective health monitoring is essential. AI-based PHM constitutes a robust framework for fault detection, diagnostics, and prognostics, utilizing sensor data and machine learning to support predictive maintenance and mitigate unplanned downtime. The discussion illustrated that, while significant advancements have been made, particularly in deep learning, hybrid modeling, and digital twin technologies, several challenges persist. These include inconsistent sensor data, limited failure labeling, high computational demands, and a lack of interpretability in complex models. Real-time deployment heightens these issues, necessitating lightweight, adaptive, and explainable solutions. Moving forward, overcoming these limitations will require not only algorithmic innovation but also enhancements in sensor design, data management, and system integration. PHM systems in robotics must be resilient to real-world uncertainties, capable of learning and adapting online, and transparent enough to support human decision-making. The ongoing evolution of AI techniques, combined with domain knowledge and system-level insights, will be crucial in developing next-generation robotic systems that are safer, smarter, and more autonomous.
Unlike previous surveys that primarily aggregate descriptive applications, the novelty of this work is anchored in its rigorous benchmarking of algorithmic architectures, ranging from traditional machine learning to advanced transformers, against strict real-time deployment constraints and computational complexities. From an industrial perspective, the integration of AI-driven PHM within robotic systems is a key enabler of smart manufacturing and Industry 4.0 initiatives. By transforming raw sensor data into actionable insights regarding equipment health and degradation trends, these systems support predictive maintenance strategies that enhance productivity, reduce maintenance costs, and improve operational reliability. When deployed across interconnected robotic workcells, PHM frameworks can contribute to system-level optimization by coordinating maintenance planning with production schedules and digital twin simulations. Such capabilities are essential for the realization of intelligent manufacturing environments where robotic systems operate autonomously while maintaining high levels of reliability and efficiency.
Based on this critical synthesis, the future directions of robotic PHM must pivot toward a tightly integrated edge–cloud synergy, where ultra-lightweight and quantized neural networks are optimized for real-time deployment on edge controllers while computationally intensive analytics and high-fidelity digital twins are maintained in the cloud. At the same time, there is a pressing need for federated learning standardization, enabling robust distributed training frameworks that can learn generalized failure patterns across geographically and operationally diverse industrial environments without exposing proprietary production data. Equally important is the advancement of human-centric explainable artificial intelligence, where interpretability mechanisms such as SHAP analysis or attention mapping are embedded directly into industrial maintenance dashboards, ensuring operational transparency, improving technician trust, and supporting informed maintenance decision-making.

Author Contributions

M.T. conceptualized the study, conducted the literature review, and wrote the main text of the manuscript. M.H.Y. contributed to the creation and refinement of the figures and assisted with formatting. R.T.A.K. supported the development of the figures and contributed to the organization of referenced materials. H.S.K. provided supervision, guidance on the research direction, and critical revisions of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the “Regional Innovation System and Education (RISE)” through the Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government. (2026-RISE-01-007-04) and also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00523019).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI: Artificial Intelligence
IoT: Internet of Things
SVM: Support Vector Machine
MFCCs: Mel-Frequency Cepstral Coefficients
CWT: Continuous Wavelet Transform
FLOPs: Floating-Point Operations
PCA: Principal Component Analysis
RNNs: Recurrent Neural Networks
GANs: Generative Adversarial Networks
D2PAM: Deep Dual Patch Attention Mechanism
MRI: Magnetic Resonance Imaging
RFs: Random Forests
SHAP: SHapley Additive exPlanations
FL: Federated Learning
PHM: Prognostics and Health Management
RUL: Remaining Useful Life
ANNs: Artificial Neural Networks
CNNs: Convolutional Neural Networks
ICT: Information and Communication Technology
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
STFT: Short-Time Fourier Transform
LSTMs: Long Short-Term Memory networks
RL: Reinforcement Learning
ViTs: Vision Transformers
SSL: Self-Supervised Learning
AE: Autoencoder
LIME: Local Interpretable Model-Agnostic Explanations

References

  1. Husainy, A.; Mangave, S.; Patil, N. A Review on Robotics and Automation in the 21st Century: Shaping the Future of Manufacturing, Healthcare, and Service Sectors. Asian Rev. Mech. Eng. 2023, 12, 41–45. [Google Scholar] [CrossRef]
  2. George, A.S.; George, A.H. Riding the Wave: An Exploration of Emerging Technologies Reshaping Modern Industry. Partn. Univers. Int. Innov. J. 2024, 2, 15–38. [Google Scholar]
  3. Licardo, J.T.; Domjan, M.; Orehovački, T. Intelligent Robotics—A Systematic Review of Emerging Technologies and Trends. Electronics 2024, 13, 542. [Google Scholar] [CrossRef]
  4. Ibraheem, A.R.H. Bridging Vision and Mechanics: Innovations in Intelligent Robotic Control Systems. Glob. Res. Rev. 2025, 1, 68–76. [Google Scholar]
  5. Ibraheem, A.R.H. Smart Robot Control: Integrating Computer Vision with Mechanical Engineering for Precision and Adaptability. Glob. Res. Rev. 2025, 1, 17–24. [Google Scholar]
  6. Xie, D.; Chen, L.; Liu, L.; Chen, L.; Wang, H. Actuators and Sensors for Application in Agricultural Robots: A Review. Machines 2022, 10, 913. [Google Scholar] [CrossRef]
  7. Sarker, A.; Ul Islam, T.; Islam, M.R. A Review on Recent Trends of Bioinspired Soft Robotics: Actuators, Control Methods, Materials Selection, Sensors, Challenges, and Future Prospects. Adv. Intell. Syst. 2025, 7, 2400414. [Google Scholar] [CrossRef]
  8. Dou, W.; Zhong, G.; Cao, J.; Shi, Z.; Peng, B.; Jiang, L. Soft Robotic Manipulators: Designs, Actuation, Stiffness Tuning, and Sensing. Adv Mater. Technol. 2021, 6, 2100018. [Google Scholar] [CrossRef]
  9. Robotics Market Size | Mordor Intelligence. Available online: https://www.mordorintelligence.com/industry-reports/robotics-market (accessed on 7 April 2025).
  10. Lu, Y.; Meng, B.; Jin, X. Fault-Tolerant Formation-Containment Control for UAVs with Sensor Faults and Obstacle Avoidance. Int. J. Aeronaut. Space Sci. 2025, 26, 2575–2589. [Google Scholar] [CrossRef]
  11. Hwang, I.; Bae, J.H. UAV Head-on Situation Maneuver Generation Using Transfer-Learning-Based Deep Reinforcement Learning. Int. J. Aeronaut. Space Sci. 2024, 25, 410–419. [Google Scholar] [CrossRef]
  12. Taheri Hosseinkhani, N. Economic Impacts of Artificial Intelligence Integration in Industry 4.0 Manufacturing Systems; Rutgers Business School at Rutgers University in New Jersey: Piscataway, NJ, USA, 2025. [Google Scholar]
  13. Soori, M.; Dastres, R.; Arezoo, B.; Jough, F.K.G. Intelligent Robotic Systems in Industry 4.0: A Review. J. Adv. Manuf. Sci. Technol. 2024, 4, 2024007. [Google Scholar] [CrossRef]
  14. Niu, G. Data-Driven Technology for Engineering Systems Health Management; Springer: Singapore, 2017; ISBN 978-981-10-2031-5. [Google Scholar]
  15. Lall, P.; Lowe, R.; Goebel, K. Prognostics and Health Monitoring of Electronic Systems. In Proceedings of the 2011 12th International Conference on Thermal, Mechanical & Multi-Physics Simulation and Experiments in Microelectronics and Microsystems; IEEE: Piscataway, NJ, USA, 2011; pp. 1–17. [Google Scholar]
  16. Rohan, A.; Raouf, I.; Kim, H.S. Rotate Vector (RV) Reducer Fault Detection and Diagnosis System: Towards Component Level Prognostics and Health Management (PHM). Sensors 2020, 20, 6845. [Google Scholar] [CrossRef]
  17. Abbate, R.; Franciosi, C.; Voisin, A.; Fera, M. A Conceptual Framework Proposal for the Implementation of Prognostic and Health Management in Production Systems. IET Collab. Intell. Manuf. 2024, 6, e12122. [Google Scholar] [CrossRef]
  18. Zhao, P.; Kurihara, M.; Noda, T.; Kashiwa, H.; Hiyama, M. Generating Mathematical Model of Equipment and Its Applications in PHM. In Proceedings of the 2019 IEEE International Conference on Prognostics and Health Management (ICPHM); IEEE: Piscataway, NJ, USA, 2019; pp. 1–7. [Google Scholar]
  19. Dong, M.; Peng, Y. Equipment PHM Using Non-Stationary Segmental Hidden Semi-Markov Model. Robot. Comput. Integr. Manuf. 2011, 27, 581–590. [Google Scholar] [CrossRef]
  20. Tsui, K.L.; Chen, N.; Zhou, Q.; Hai, Y.; Wang, W. Prognostics and Health Management: A Review on Data Driven Approaches. Math. Probl. Eng. 2015, 2015, 793161. [Google Scholar] [CrossRef]
  21. Jieyang, P.; Kimmig, A.; Dongkun, W.; Niu, Z.; Zhi, F.; Jiahai, W.; Liu, X.; Ovtcharova, J. A Systematic Review of Data-Driven Approaches to Fault Diagnosis and Early Warning. J. Intell. Manuf. 2023, 34, 3277–3304. [Google Scholar] [CrossRef]
  22. Sutharssan, T.; Stoyanov, S.; Bailey, C.; Yin, C. Prognostic and Health Management for Engineering Systems: A Review of the Data-driven Approach and Algorithms. J. Eng. 2015, 2015, 215–222. [Google Scholar] [CrossRef]
  23. Chen, Q.; Cao, J.; Zhu, S. Data-Driven Monitoring and Predictive Maintenance for Engineering Structures: Technologies, Implementation Challenges, and Future Directions. IEEE Internet Things J. 2023, 10, 14527–14551. [Google Scholar] [CrossRef]
  24. Chen, H.; Lin, J.; Yang, H.; Xu, G. Measurement Capability Evaluation of Acoustic Emission Sensors in IIoT System for PHM. IEEE Internet Things J. 2024, 11, 28838–28850. [Google Scholar] [CrossRef]
  25. Brunner, A.J. Structural Health and Condition Monitoring with Acoustic Emission and Guided Ultrasonic Waves: What about Long-Term Durability of Sensors, Sensor Coupling and Measurement Chain? Appl. Sci. 2021, 11, 11648. [Google Scholar] [CrossRef]
  26. Kaphle, M.R. Analysis of Acoustic Emission Data for Accurate Damage Assessment for Structural Health Monitoring Applications. Ph.D. Thesis, Queensland University of Technology, Brisbane, Australia, 2012. [Google Scholar]
  27. Peng, F.; Zheng, L.; Peng, Y.; Fang, C.; Meng, X. Digital Twin for Rolling Bearings: A Review of Current Simulation and PHM Techniques. Measurement 2022, 201, 111728. [Google Scholar] [CrossRef]
  28. Kim, S. Investigation on Fault Information Extraction for Acoustic Emission Based Rolling Element Bearing Diagnostics under Noisy Conditions. Ph.D. Thesis, Seoul National University, Seoul, Republic of Korea, 2022. [Google Scholar]
  29. Singh, S.; Kumar, N. Rotor Faults Diagnosis Using Artificial Neural Networks and Support Vector Machines. Int. J. Acoust. Vib. 2015, 20, 153–159. [Google Scholar] [CrossRef]
  30. Li, J.; Zhan, K. Intelligent Mining Technology for an Underground Metal Mine Based on Unmanned Equipment. Engineering 2018, 4, 381–391. [Google Scholar] [CrossRef]
  31. Abdul, Z.K.; Al-Talabani, A.K. Mel Frequency Cepstral Coefficient and Its Applications: A Review. IEEE Access 2022, 10, 122136–122158. [Google Scholar] [CrossRef]
  32. Kumar, H.; Shafiq, M.; Kauhaniemi, K.; Elmusrati, M. A Review on the Classification of Partial Discharges in Medium-Voltage Cables: Detection, Feature Extraction, Artificial Intelligence-Based Classification, and Optimization Techniques. Energies 2024, 17, 1142. [Google Scholar] [CrossRef]
  33. e Souza, A.C.O.; de Souza, M.B., Jr.; da Silva, F.V. Development of a CNN-Based Fault Detection System for a Real Water Injection Centrifugal Pump. Expert Syst. Appl. 2024, 244, 122947. [Google Scholar] [CrossRef]
  34. Dzaferagic, M.; Marchetti, N.; Macaluso, I. Fault Detection and Classification in Industrial IoT in Case of Missing Sensor Data. IEEE Internet Things J. 2021, 9, 8892–8900. [Google Scholar] [CrossRef]
  35. Rahman, T.; Yang, M.; Sigal, L. TriBERT: Human-Centric Audio-Visual Representation Learning. Adv. Neural Inf. Process. Syst. 2021, 34, 9774–9787. [Google Scholar]
  36. Chen, H.; Xie, W.; Vedaldi, A.; Zisserman, A. Vggsound: A Large-Scale Audio-Visual Dataset. In Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); IEEE: Piscataway, NJ, USA, 2020; pp. 721–725. [Google Scholar]
  37. Hossain, M.S.; Muhammad, G. Emotion Recognition Using Deep Learning Approach from Audio–Visual Emotional Big Data. Inf. Fusion 2019, 49, 69–78. [Google Scholar] [CrossRef]
  38. Feng, G.; Li, B.; Yang, M.; Yan, Z. V-CNN: Data Visualizing Based Convolutional Neural Network. In Proceedings of the 2018 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC); IEEE: Piscataway, NJ, USA, 2018; pp. 1–6. [Google Scholar]
  39. Zhou, Y.; Zhi, G.; Chen, W.; Qian, Q.; He, D.; Sun, B.; Sun, W. A New Tool Wear Condition Monitoring Method Based on Deep Learning under Small Samples. Measurement 2022, 189, 110622. [Google Scholar] [CrossRef]
  40. He, Z.; Shi, T.; Xuan, J.; Li, T. Research on Tool Wear Prediction Based on Temperature Signals and Deep Learning. Wear 2021, 478, 203902. [Google Scholar] [CrossRef]
  41. Nayyar, A.; Puri, V.; Nguyen, N.G.; Le, D.N. Smart Surveillance Robot for Real-Time Monitoring and Control System in Environment and Industrial Applications. In Information Systems Design and Intelligent Applications; Bhateja, V., Nguyen, B.L., Nguyen, N.G., Satapathy, S.C., Le, D.-N., Eds.; Advances in Intelligent Systems and Computing; Springer: Singapore, 2018; Volume 672, pp. 229–243. ISBN 978-981-10-7511-7. [Google Scholar]
  42. Derbas, A.M.; Al-Aubidy, K.M.; Ali, M.M.; Al-Mutairi, A.W. Multi-Robot System for Real-Time Sensing and Monitoring. In Proceedings of the 15th International Workshop on Research and Education in Mechatronics (REM); IEEE: Piscataway, NJ, USA, 2014; pp. 1–6. [Google Scholar]
  43. Pettersson, O. Execution Monitoring in Robotics: A Survey. Robot. Auton. Syst. 2005, 53, 73–88. [Google Scholar] [CrossRef]
  44. Wijaya, T.; Caesarendra, W.; Pappachan, B.K.; Tjahjowidodo, T.; Wee, A.; Roslan, M.I. Robot Control and Decision Making through Real-Time Sensors Monitoring and Analysis for Industry 4.0 Implementation on Aerospace Component Manufacturing. In Proceedings of the 2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM); IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
  45. Kumari, M.; Kumar, A.; Singhal, R. Design and Analysis of IoT-Based Intelligent Robot for Real-Time Monitoring and Control. In Proceedings of the 2020 International Conference on Power Electronics & IoT Applications in Renewable Energy and Its Control (PARC); IEEE: Piscataway, NJ, USA, 2020; pp. 549–552. [Google Scholar]
  46. Kazemian, A.; Yuan, X.; Davtalab, O.; Khoshnevis, B. Computer Vision for Real-Time Extrusion Quality Monitoring and Control in Robotic Construction. Autom. Constr. 2019, 101, 92–98. [Google Scholar] [CrossRef]
  47. Maksymova, S.; Yevsieiev, V.; Nevliudov, I.; Bahlai, O. Balancing System For A Zoomorphic Spot Type Mobile Robot Development Using An Accelerometer MPU 6050 (GY-521). In Proceedings of the 2024 IEEE 19th International Conference on the Perspective Technologies and Methods in MEMS Design (MEMSTECH); IEEE: Piscataway, NJ, USA, 2024; pp. 39–42. [Google Scholar]
  48. Hoang, M.L.; Pietrosanto, A. A New Technique on Vibration Optimization of Industrial Inclinometer for MEMS Accelerometer without Sensor Fusion. IEEE Access 2021, 9, 20295–20304. [Google Scholar] [CrossRef]
  49. Rybarczyk, D. Application of the Mems Accelerometer as the Position Sensor in Linear Electrohydraulic Drive. Sensors 2021, 21, 1479. [Google Scholar] [CrossRef]
  50. Mesmer, P.; Neubauer, M.; Lechler, A.; Verl, A. Robust Design of Independent Joint Control of Industrial Robots with Secondary Encoders. Robot. Comput.-Integr. Manuf. 2022, 73, 102232. [Google Scholar] [CrossRef]
  51. Wallscheid, O. Thermal Monitoring of Electric Motors: State-of-the-Art Review and Future Challenges. IEEE Open J. Ind. Appl. 2021, 2, 204–223. [Google Scholar] [CrossRef]
  52. Zhang, N. Diagnosis and Prevention of Overheating Failures in Mechanical Equipment Based on Numerical Analysis of Temperature and Thermal Stress Fields. Int. J. Heat Technol. 2024, 42, 466. [Google Scholar] [CrossRef]
  53. Cheng, A.; Xin, Y.; Wu, H.; Yang, L.; Deng, B. A Review of Sensor Applications in Electric Vehicle Thermal Management Systems. Energies 2023, 16, 5139. [Google Scholar] [CrossRef]
  54. Shaik, A.K. Assessment of Cyber-Physical Vulnerabilities of Industrial Robotic Sensing Systems. Ph.D. Thesis, University of Michigan, Ann Arbor, MI, USA, 2023. [Google Scholar]
  55. Sabry, A.H.; Amirulddin, U.A.B.U. A Review on Fault Detection and Diagnosis of Industrial Robots and Multi-Axis Machines. Results Eng. 2024, 23, 102397. [Google Scholar] [CrossRef]
  56. Cao, M.Y.; Laws, S.; y Baena, F.R. Six-Axis Force/Torque Sensors for Robotics Applications: A Review. IEEE Sens. J. 2021, 21, 27238–27251. [Google Scholar] [CrossRef]
  57. Li, S.; Xu, J. Multi-Axis Force/Torque Sensor Technologies: Design Principles and Robotic Force Control Applications: A Review. IEEE Sens. J. 2024, 25, 4055–4069. [Google Scholar] [CrossRef]
  58. Wang, S.; Zhang, B.; Yu, Z.; Yan, Y. Differential Soft Sensor-Based Measurement of Interactive Force and Assistive Torque for a Robotic Hip Exoskeleton. Sensors 2021, 21, 6545. [Google Scholar] [CrossRef]
  59. Ibraheem, A.R.H. Advancing Robotic Intelligence: The Role of Computer Vision and Mechanics in Control System Innovation. Glob. Res. Rev. 2025, 1, 34–42. [Google Scholar]
  60. Shahria, M.T.; Sunny, M.S.H.; Zarif, M.I.I.; Ghommam, J.; Ahamed, S.I.; Rahman, M.H. A Comprehensive Review of Vision-Based Robotic Applications: Current State, Components, Approaches, Barriers, and Potential Solutions. Robotics 2022, 11, 139. [Google Scholar] [CrossRef]
  61. Clift, L. An Investigation into a Combined Visual Servoing and Vision-Based Navigation System Robot for the Aerospace Manufacturing Industry. Ph.D. Thesis, University of Sheffield, Sheffield, UK, 2023. [Google Scholar]
  62. Hwang, L.J. RGBD Camera Pose Estimation Techniques, Slip Detection, and Occluded Object Search Strategies for Deformable Linear Object Features in Autonomous Robotic Space Task Execution; University of California: Davis, CA, USA, 2024. [Google Scholar]
  63. Azeta, J.; Omeche, T.T.; Daniyan, I.; Abiola, J.O.; Daniyan, L.; Phuluwa, H.S.; Muvunzi, R. Artificial Intelligence and Robotics in Predictive Maintenance: A Comprehensive Review. Front. Mech. Eng. 2025, 11, 1722114. [Google Scholar] [CrossRef]
  64. Bruno, E. Artificial Intelligence in the Manufacturing Context: Technologies, Agents, and Lifecycle Integration. Master’s Thesis, Politecnico di Torino, Torino, Italy, 2025. [Google Scholar]
  65. Mahmud, D.; Hajmohamed, H.; Almentheri, S.; Alqaydi, S.; Aldhaheri, L.; Khalil, R.A.; Saeed, N. Integrating LLMs with ITS: Recent Advances, Potentials, Challenges, and Future Directions. IEEE Trans. Intell. Transp. Syst. 2025, 26, 5674–5709. [Google Scholar] [CrossRef]
  66. Kumar, P.; Khalid, S.; Kim, H.S. Prognostics and Health Management of Rotating Machinery of Industrial Robot with Deep Learning Applications—A Review. Mathematics 2023, 11, 3008. [Google Scholar] [CrossRef]
  67. Maincer, D.; Benmahamed, Y.; Mansour, M.; Alharthi, M.; Ghonein, S.S. Fault Diagnosis in Robot Manipulators Using SVM and KNN. Intell. Autom. Soft Comput. 2023, 35, 1957–1969. [Google Scholar] [CrossRef]
  68. Rohan, A. Deep Scattering Spectrum Germaneness for Fault Detection and Diagnosis for Component-Level Prognostics and Health Management (PHM). Sensors 2022, 22, 9064. [Google Scholar] [CrossRef]
  69. Zhang, Y.; Wu, J.; Gao, B.; Xia, L.; Lu, C.; Wang, H.; Cao, G. Fault Types and Diagnostic Methods of Manipulator Robots: A Review. Sensors 2025, 25, 1716. [Google Scholar] [CrossRef]
  70. Datta, A.; Patel, S.; Mavroidis, C.; Antoniadis, I.; Krishnasamy, J.; Hosek, M. Fault Diagnostics of Industrial Robots Using Support Vector Machines and Discrete Wavelet Transforms. ASME Int. Mech. Eng. Congr. Expo. 2006, 47748, 245–251. [Google Scholar]
  71. Chen, Z.; Wu, M.; Zhao, R.; Guretno, F.; Yan, R.; Li, X. Machine Remaining Useful Life Prediction via an Attention-Based Deep Learning Approach. IEEE Trans. Ind. Electron. 2020, 68, 2521–2531. [Google Scholar] [CrossRef]
  72. Cheng, H.; Kong, X.; Chen, G.; Wang, Q.; Wang, R. Transferable Convolutional Neural Network Based Remaining Useful Life Prediction of Bearing under Multiple Failure Behaviors. Measurement 2021, 168, 108286. [Google Scholar] [CrossRef]
  73. Carvalho, T.P.; Soares, F.A.; Vita, R.; Francisco, R.P.; Basto, J.P.; Alcalá, S.G. A Systematic Literature Review of Machine Learning Methods Applied to Predictive Maintenance. Comput. Ind. Eng. 2019, 137, 106024. [Google Scholar] [CrossRef]
  74. Susto, G.A.; Schirru, A.; Pampuri, S.; McLoone, S. Supervised Aggregative Feature Extraction for Big Data Time Series Regression. IEEE Trans. Ind. Inform. 2015, 12, 1243–1252. [Google Scholar] [CrossRef]
  75. Lei, L.; Wu, S.; Lu, S.; Liu, M.; Song, Y.; Fu, Z.; Shi, H.; Raley-Susman, K.M.; He, D. Microplastic Particles Cause Intestinal Damage and Other Adverse Effects in Zebrafish Danio Rerio and Nematode Caenorhabditis Elegans. Sci. Total Environ. 2018, 619, 1–8. [Google Scholar] [CrossRef]
  76. Zhao, Z.; Dua, D.; Singh, S. Generating Natural Adversarial Examples. arXiv 2018, arXiv:1710.11342. [Google Scholar] [PubMed]
  77. Li, C.; Yang, Y.; Ren, L. Genetic Evolution Analysis of 2019 Novel Coronavirus and Coronavirus from Other Species. Infect. Genet. Evol. 2020, 82, 104285. [Google Scholar] [CrossRef] [PubMed]
  78. Albelwi, S.; Mahmood, A. A Framework for Designing the Architectures of Deep Convolutional Neural Networks. Entropy 2017, 19, 242. [Google Scholar] [CrossRef]
  79. Li, H.; He, X.; Wu, Y.; Liu, G.; Wang, H.; Wen, X.; Li, L. Digital Twin and AI-Driven Robotic Embodied Control System: A Novel Adaptive Learning and Decision Optimization Method. Robot. Comput.-Integr. Manuf. 2026, 98, 103138. [Google Scholar] [CrossRef]
  80. Kim, J.I.; Kim, D.; Krebs, M.; Park, Y.S.; Park, Y.-L. Force Sensitive Robotic End-Effector Using Embedded Fiber Optics and Deep Learning Characterization for Dexterous Remote Manipulation. IEEE Robot. Autom. Lett. 2019, 4, 3481–3488. [Google Scholar] [CrossRef]
  81. Wang, Z.; Hong, T. Reinforcement Learning for Building Controls: The Opportunities and Challenges. Appl. Energy 2020, 269, 115036. [Google Scholar] [CrossRef]
  82. Sajjadi, P.; Dinmohammadi, F.; Shafiee, M. Machine Learning in Prognostics and System Health Management of Cyber-Physical Systems: A Review. IEEE Access 2025, 13, 162320–162354. [Google Scholar] [CrossRef]
  83. Shen, T. Towards Scalable and Efficient Deep Learning Models. Ph.D. Dissertation, Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Uppsala, Sweden, 2026. [Google Scholar]
  84. Khan, A.A.; Madendran, R.K.; Thirunavukkarasu, U.; Faheem, M. D2PAM: Epileptic Seizures Prediction Using Adversarial Deep Dual Patch Attention Mechanism. CAAI Trans. Intell. Technol. 2023, 8, 755–769. [Google Scholar] [CrossRef]
  85. Guo, H.; Yang, Z.; Zhang, G.; Lv, L.; Zhao, X. Meta Analysis of the Diagnostic Efficacy of Transformer-Based Multimodal Fusion Deep Learning Models in Early Alzheimer’s Disease. Front. Neurol. 2025, 16, 1641548. [Google Scholar] [CrossRef]
  86. Jiang, H.; Guo, Y. Multi-Class Multimodal Semantic Segmentation with an Improved 3D Fully Convolutional Networks. Neurocomputing 2020, 391, 220–226. [Google Scholar] [CrossRef]
  87. Gui, J.; Chen, T.; Zhang, J.; Cao, Q.; Sun, Z.; Luo, H.; Tao, D. A Survey on Self-Supervised Learning: Algorithms, Applications, and Future Trends. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9052–9071. [Google Scholar] [CrossRef]
  88. Noh, H.-K.; Go, M.-S.; Lim, J.H. Real-Time Monitoring of Thermoelastic Deformation of a Silicon Wafer with Sparse Measurements in the Photolithography Process Using a Physics-Informed Neural Network and Fourier Neural Operator. Eng. Appl. Artif. Intell. 2025, 152, 110767. [Google Scholar] [CrossRef]
  89. Galan-Uribe, E.; Amezquita-Sanchez, J.P.; Morales-Velazquez, L. Supervised Machine-Learning Methodology for Industrial Robot Positional Health Using Artificial Neural Networks, Discrete Wavelet Transform, and Nonlinear Indicators. Sensors 2023, 23, 3213. [Google Scholar] [CrossRef] [PubMed]
  90. Cheng, Q.; Cao, Y.; Liu, Z.; Cui, L.; Zhang, T.; Xu, L. A Health Management Technology Based on PHM for Diagnosis, Prediction of Machine Tool Servo System Failures. Appl. Sci. 2024, 14, 2656. [Google Scholar] [CrossRef]
  91. Lee, H.; Raouf, I.; Song, J.; Kim, H.S.; Lee, S. Prognostics and Health Management of the Robotic Servo-Motor under Variable Operating Conditions. Mathematics 2023, 11, 398. [Google Scholar] [CrossRef]
  92. Calabrese, F.; Regattieri, A.; Botti, L.; Mora, C.; Galizia, F.G. Unsupervised Fault Detection and Prediction of Remaining Useful Life for Online Prognostic Health Management of Mechanical Systems. Appl. Sci. 2020, 10, 4120. [Google Scholar] [CrossRef]
  93. Fattah, G.; Newton, D.; Qiao, G.; Leber, D.D. Anomaly Detection for Industrial Robot Prognostics and Health Management. In Proceedings of the International Manufacturing Science and Engineering Conference; American Society of Mechanical Engineers: New York, NY, USA, 2023; Volume 87240, p. V002T09A006. [Google Scholar]
  94. Huang, M. Anomaly Detection for Condition Monitoring in Robot Systems. Master’s Thesis, Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Uppsala, Sweden, 2023. [Google Scholar]
  95. Ayankoso, S.; Gu, F.; Louadah, H.; Fahham, H.; Ball, A. Artificial-Intelligence-Based Condition Monitoring of Industrial Collaborative Robots: Detecting Anomalies and Adapting to Trajectory Changes. Machines 2024, 12, 630. [Google Scholar] [CrossRef]
  96. Khan, S.; Yairi, T.; Nakasuka, S.; Tsutsumi, S. Reinforcement Learning-Based Anomaly Detection for PHM Applications. In Proceedings of the 2022 IEEE Aerospace Conference (AERO); IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
  97. Ayankoso, S.; Gu, F.; Louadah, H.; Fahham, H.; Ball, A. Artificial Intelligence Based Anomaly Detection and Trajectory Drift Adaptive Condition Monitoring for Industrial Collaborative Robots. Available online: https://ssrn.com/abstract=4858660 (accessed on 10 June 2024).
  98. Saeed, A.; Khan, M.A.; Akram, U.; Obidallah, W.J.; Jawed, S.; Ahmad, A. Deep Learning Based Approaches for Intelligent Industrial Machinery Health Management and Fault Diagnosis in Resource-Constrained Environments. Sci. Rep. 2025, 15, 1114. [Google Scholar] [CrossRef]
  99. Nandal, T.; Fulara, V.; Singh, R.K. A Synergistic Framework Leveraging Autoencoders and Generative Adversarial Networks for the Synthesis of Computational Fluid Dynamics Results in Aerofoil Aerodynamics. arXiv 2023, arXiv:2305.18386. [Google Scholar] [CrossRef]
  100. Wang, S.; Tao, J.; Jiang, Q.; Chen, W.; Qin, C.; Liu, C. A Digital Twin Framework for Anomaly Detection in Industrial Robot System Based on Multiple Physics-Informed Hybrid Convolutional Autoencoder. J. Manuf. Syst. 2024, 77, 798–809. [Google Scholar] [CrossRef]
  101. Ayankoso, S.; Kaigom, E.; Louadah, H.; Faham, H.; Gu, F.; Ball, A. A Hybrid Digital Twin Scheme for the Condition Monitoring of Industrial Collaborative Robots. Procedia Comput. Sci. 2024, 232, 1099–1108. [Google Scholar] [CrossRef]
  102. He, X.; Li, K.; Wang, S.; Lai, X.; Yang, L.; Kan, Z.; Song, X. Toward an Online Monitoring of Structural Performance Based on Physics-Informed Hybrid Modeling Method. J. Mech. Des. 2024, 146, 011702. [Google Scholar] [CrossRef]
  103. Sathya, D.; Saravanan, G.; Thangamani, R. Reinforcement Learning for Adaptive Mechatronics Systems. In Computational Intelligent Techniques in Mechatronics; Prakash, K.B., Peddapelli, S.K., Tam, I.C.K., Woo, W.L., Jain, V., Eds.; Wiley: Hoboken, NJ, USA, 2024; pp. 135–184. ISBN 978-1-394-17464-5. [Google Scholar]
  104. Morales, E.F.; Murrieta-Cid, R.; Becerra, I.; Esquivel-Basaldua, M.A. A Survey on Deep Learning and Deep Reinforcement Learning in Robotics with a Tutorial on Deep Reinforcement Learning. Intell. Serv. Robot. 2021, 14, 773–805. [Google Scholar] [CrossRef]
  105. Soori, M.; Arezoo, B.; Dastres, R. Artificial Intelligence, Machine Learning and Deep Learning in Advanced Robotics, a Review. Cogn. Robot. 2023, 3, 54–70. [Google Scholar] [CrossRef]
  106. Eang, C.; Lee, S. Predictive Maintenance and Fault Detection for Motor Drive Control Systems in Industrial Robots Using CNN-RNN-Based Observers. Sensors 2025, 25, 25. [Google Scholar] [CrossRef] [PubMed]
  107. Jahanshahi, H.; Zhu, Z.H. Review of Machine Learning in Robotic Grasping Control in Space Application. Acta Astronaut. 2024, 220, 37–61. [Google Scholar] [CrossRef]
  108. Srivastava, G.; Agarwal, S. Deep Learning–Enabled Optical Sensors. In Intelligent Photonics Systems; CRC Press: Boca Raton, FL, USA, 2025; pp. 109–138. [Google Scholar]
  109. Čakurda, T.; Trojanová, M.; Pomin, P.; Hošovský, A. Deep Learning Methods in Soft Robotics: Architectures and Applications. Adv. Intell. Syst. 2024, 7, 2400576. [Google Scholar] [CrossRef]
  110. Zeng, Y.; Liao, B.; Li, Z.; Hua, C.; Li, S. A Comprehensive Review of Recent Advances on Intelligence Algorithms and Information Engineering Applications. IEEE Access 2024, 12, 135886–135912. [Google Scholar] [CrossRef]
  111. Go, M.-S.; Lim, J.H.; Lee, S. Physics-Informed Neural Network-Based Surrogate Model for a Virtual Thermal Sensor with Real-Time Simulation. Int. J. Heat Mass Transf. 2023, 214, 124392. [Google Scholar] [CrossRef]
  112. Chinchali, S.; Sharma, A.; Harrison, J.; Elhafsi, A.; Kang, D.; Pergament, E.; Cidon, E.; Katti, S.; Pavone, M. Network Offloading Policies for Cloud Robotics: A Learning-Based Approach. Auton. Robot. 2021, 45, 997–1012. [Google Scholar] [CrossRef]
  113. Özkan, C.; Şahin, S. AI Applications in Real-Time Edge Processing: Leveraging Artificial Intelligence for Enhanced Efficiency, Low-Latency Decision Making, and Scalability in Distributed Systems. Int. J. Mach. Intell. Smart Appl. (IJMISA) 2024, 14. [Google Scholar]
  114. Thota, R.C. Optimizing Edge Computing and AI for Low-Latency Cloud Workloads. Int. J. Sci. Res. Arch. 2024, 13, 3484–3500. [Google Scholar] [CrossRef]
  115. Yang, C.; Wang, Y.; Lan, S.; Wang, L.; Shen, W.; Huang, G.Q. Cloud-Edge-Device Collaboration Mechanisms of Deep Learning Models for Smart Robots in Mass Personalization. Robot. Comput.-Integr. Manuf. 2022, 77, 102351. [Google Scholar] [CrossRef]
  116. Ahmad, S.; Shakeel, I.; Mehfuz, S.; Ahmad, J. Deep Learning Models for Cloud, Edge, Fog, and IoT Computing Paradigms: Survey, Recent Advances, and Future Directions. Comput. Sci. Rev. 2023, 49, 100568. [Google Scholar] [CrossRef]
  117. Zhang, C.; Zhang, Y.; Liu, S.; Wang, L. Transfer Learning and Augmented Data-Driven Parameter Prediction for Robotic Welding. Robot. Comput.-Integr. Manuf. 2025, 95, 102992. [Google Scholar] [CrossRef]
  118. Wang, B.; Zhou, H.; Yang, G.; Li, X.; Yang, H. Human Digital Twin (HDT) Driven Human-Cyber-Physical Systems: Key Technologies and Applications. Chin. J. Mech. Eng. 2022, 35, 11. [Google Scholar] [CrossRef]
  119. Wu, M.; Rupenyan, A.; Corves, B. Autogeneration and Optimization of Pick-and-Place Trajectories in Robotic Systems: A Data-Driven Approach. Robot. Comput.-Integr. Manuf. 2026, 97, 103080. [Google Scholar] [CrossRef]
  120. Tarapder, S.A. EDGE Artificial Intelligence Based Automation For Ultra-Low-Latency Control In Industrial Robotic Systems. Rev. Appl. Sci. Technol. 2026, 5, 1–37. [Google Scholar] [CrossRef]
  121. Junaidi, A.; Hashim, S.Z.M.; Bin Othman, M.S.; Mohamad, M.M.; Alhussian, H.; Abdulkadir, S.J.; Nasser, M.; Bena, Y.A. Deep Learning and Edge Computing in Agriculture: A Comprehensive Review of Recent Trends and Innovations. IEEE Access 2025, 13, 137464–137490. [Google Scholar] [CrossRef]
  122. Charles, I.; Maghsoumi, H.; Fallah, Y. Advancing Autonomous Racing: A Comprehensive Survey of the RoboRacer (F1TENTH) Platform. In Proceedings of the 2025 6th International Conference on Artificial Intelligence, Robotics and Control (AIRC); IEEE: Piscataway, NJ, USA, 2025; pp. 207–213. [Google Scholar]
  123. Kabir, R.; Watanobe, Y.; Ding, D.; Islam, M.R.; Naruse, K. A Comprehensive Survey on Advanced Data Science Platforms for Cyber-Physical Systems, Digital Twins, and Robotics. IEEE Access 2025, 13, 177269–177304. [Google Scholar] [CrossRef]
  124. Vollert, S.; Theissler, A. Challenges of Machine Learning-Based RUL Prognosis: A Review on NASA’s C-MAPSS Data Set. In Proceedings of the 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 7 September 2021; pp. 1–8. [Google Scholar]
  125. Das, S.; Kumari, R.; Singh, R.K. Detection of Faults by Optimization Driven Methodology: A Comprehensive Study on the Health of Bearings. In Proceedings of the Third Congress on Control, Robotics, and Mechatronics; Jha, P.K., Jamwal, P., Tripathi, B., Kumar, P., Sharma, H., Eds.; Lecture Notes in Networks and Systems; Springer Nature Singapore: Singapore, 2026; Volume 1850, pp. 241–253. ISBN 978-981-96-9770-0. [Google Scholar]
  126. Wang, B.; Lei, Y.; Li, N.; Li, N. A Hybrid Prognostics Approach for Estimating Remaining Useful Life of Rolling Element Bearings. IEEE Trans. Reliab. 2020, 69, 401–412. [Google Scholar] [CrossRef]
  127. Aromire, S. A Step-by-Step Guide to Industrial Robot Programming for Beginners (Using FANUC); Karelia University of Applied Sciences: Joensuu, Finland, 2025. [Google Scholar]
  128. El Kalach, F.; Farahani, M.; Wuest, T.; Harik, R. Real-Time Defect Detection and Classification in Robotic Assembly Lines: A Machine Learning Framework. Robot. Comput.-Integr. Manuf. 2025, 95, 103011. [Google Scholar] [CrossRef]
  129. Ramasubramanian, A.K.; Mathew, R.; Preet, I.; Papakostas, N. Review and Application of Edge AI Solutions for Mobile Collaborative Robotic Platforms. Procedia CIRP 2022, 107, 1083–1088. [Google Scholar] [CrossRef]
  130. Bosch Rexroth. 2008. Available online: https://www.fluidestransmissions.com/multimedia/6002.pdf (accessed on 26 March 2026).
  131. Mosca, E.; Szigeti, F.; Tragianni, S.; Gallagher, D.; Groh, G. SHAP-Based Explanation Methods: A Review for NLP Interpretability. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; pp. 4593–4603. [Google Scholar]
  132. Garreau, D.; Luxburg, U. Explaining the Explainer: A First Theoretical Analysis of LIME. In Proceedings of the International Conference on Artificial Intelligence and Statistics; PMLR: Cambridge, MA, USA, 2020; pp. 1287–1296. [Google Scholar]
  133. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why Did You Say That? arXiv 2016, arXiv:1611.07450. [Google Scholar]
  134. Chefer, H.; Gur, S.; Wolf, L. Transformer Interpretability beyond Attention Visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 782–791. [Google Scholar]
  135. Wan, J.; Tang, S.; Yan, H.; Li, D.; Wang, S.; Vasilakos, A.V. Cloud Robotics: Current Status and Open Issues. IEEE Access 2016, 4, 2797–2807. [Google Scholar] [CrossRef]
  136. Casanova, R.; Whitlow, C.T.; Wagner, B.; Williamson, J.; Shumaker, S.A.; Maldjian, J.A.; Espeland, M.A. High Dimensional Classification of Structural MRI Alzheimer's Disease Data Based on Large Scale Regularization. Front. Neuroinform. 2011, 5, 22. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PHM framework for informed decision-making.
Figure 2. Sensor data from industrial components enables three primary PHM tasks: anomaly detection, degradation assessment, and prognostics.
Figure 3. The number of publications related to real-time AI-driven PHM in robotics in the last two decades.
Figure 4. A basic convolutional neural network (CNN) architecture, demonstrating the progression from input through convolution, pooling, and fully connected layers to the final output prediction [78].
Figure 5. Quantitative comparison of AI architectures for robotic PHM.
Figure 6. Illustration of the GAN-based method used in fault identification, highlighting the challenges associated with detecting missing fault data.
Figure 7. Digital twin-based RUL estimation algorithm.
Figure 8. PHM challenges stemming from missing or incomplete data. In this visualization, green markers denote healthy states, while red indicates degraded or faulty conditions. Additionally, blue is utilized to highlight prognostic features and the distribution of data points.
Table 1. Common sensor types used in robotics PHM and their associated functions and detected issues.
| Sensor Type | Function | Issues Detected | Ref. |
|---|---|---|---|
| Accelerometers | Measure vibrations and accelerations | Imbalances, misalignment, wear and tear | [47,48,49] |
| Encoders | Monitor position, velocity, and direction of joints and actuators | Deviations from expected movements, mechanical issues | [50] |
| Temperature sensors | Monitor thermal conditions of motors, electronics, and joints | Overheating, imminent failures | [51,52,53] |
| Current and voltage sensors | Monitor electrical parameters in motors and actuators | Electrical anomalies such as overloading or short circuits | [54,55] |
| Force/torque sensors | Measure the forces and torques exerted during operations | Sudden resistance, changes in load, and mechanical issues | [56,57,58] |
| Vision systems | Utilize cameras and image processing to monitor the operational environment | Misalignments and external obstructions | [5,59,60,61,62] |
Table 2. Signal processing and feature extraction techniques commonly used in robotic PHM, with brief descriptions and typical applications.
| Technique | Description | Typical Applications |
|---|---|---|
| Time-domain analysis | Statistical measures (mean, RMS, skewness) from time-series data. | Detecting wear and imbalance in actuators and bearings. |
| Frequency-domain analysis | Fourier transform to identify dominant frequencies. | Identification of resonant frequencies that indicate mechanical degradation. |
| Time–frequency methods | STFT and wavelet transforms for analyzing non-stationary signals. | Capturing transient events and evolving faults in dynamic operations. |
| Principal component analysis (PCA) | Reduces data dimensionality by transforming variables into principal components. | Simplifying datasets while retaining essential variance for fault detection. |
| Motor current signature analysis (MCSA) | Analyzes electrical signals to detect mechanical faults. | Identifying issues like bearing faults in motors through current signal analysis. |
| Deep scattering spectrum (DSS) | Utilizes wavelet transforms to extract hierarchical features. | Enhancing fault detection in complex mechanical components. |
| Wavelet packet decomposition (WPD) | Decomposes signals into frequency sub-bands for detailed analysis. | Detection of subtle faults in servo motors and gear systems. |
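As a concrete illustration of the time-domain and frequency-domain techniques in the table above, the following minimal Python sketch computes mean, RMS, and skewness for a synthetic vibration signal and scans a naive DFT for the dominant spectral line. The 50 Hz rotation tone, the weak 120 Hz harmonic standing in for an emerging bearing fault, and the 1 kHz sampling rate are illustrative assumptions; a deployed monitoring system would use an optimized FFT rather than this O(n²) scan.

```python
import math

def time_domain_features(signal):
    """Time-domain indicators commonly used in PHM: mean, RMS, skewness."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    skew = sum((x - mean) ** 3 for x in signal) / (n * std ** 3) if std > 0 else 0.0
    return {"mean": mean, "rms": rms, "skewness": skew}

def dominant_frequency(signal, fs):
    """Frequency-domain analysis: naive DFT scan for the strongest line (skips DC)."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n  # strongest bin index converted to Hz

# Hypothetical vibration signal: 50 Hz rotation tone plus a weak 120 Hz harmonic,
# sampled at 1 kHz for 256 samples.
fs, n = 1000, 256
sig = [math.sin(2 * math.pi * 50 * i / fs) + 0.3 * math.sin(2 * math.pi * 120 * i / fs)
       for i in range(n)]

print(time_domain_features(sig))
print(dominant_frequency(sig, fs))
```

On this synthetic input, the dominant line falls in the DFT bin nearest the 50 Hz rotation tone, and the RMS lies close to the 0.707 expected for a unit-amplitude sine, slightly raised by the fault harmonic.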
Table 4. Comparison of traditional ML and deep learning techniques for PHM in robotics, considering key decision-making factors.
| Model Category/Technique | Strengths | Weaknesses and Specific Limitations | Data Needs | Real-Time Feasibility | Interpretability | Generalization and Robustness |
|---|---|---|---|---|---|---|
| Traditional ML (SVM, random forest, k-NN) | Computationally lightweight; easy to deploy on edge devices; well-understood mathematical foundations. | Struggles with highly non-linear, high-dimensional sensor data; relies heavily on manual feature extraction. | Low to moderate; performs well on smaller, structured datasets. | High: Excellent for microsecond-level fault detection on industrial controllers. | High: Decision boundaries and feature importance are highly transparent. | Low: Poor generalization to unseen operating conditions or varying robotic payloads. |
| Standard deep learning (CNNs, RNNs, LSTMs) | Automates feature extraction; effectively captures temporal degradation trends (RNN/LSTM) and spatial anomalies (CNN). | "Black-box" nature hinders trust; prone to vanishing gradients; computationally heavier than traditional ML. | High; requires vast amounts of labeled run-to-failure data. | Moderate: Requires specialized hardware (e.g., embedded GPUs) for strict real-time control loops. | Low: Requires secondary XAI tools (e.g., SHAP, LIME) to decipher predictions. | Moderate: Robust to general noise, but vulnerable to domain shifts (e.g., transferring from a KUKA to a FANUC arm). |
| Transformers and vision transformers (ViTs) | State-of-the-art accuracy; excels at capturing long-range dependencies in complex telemetry; parallelizable training. | Massive architectural complexity; massive compute overhead; memory scaling issues with long sensor sequences. | Very high; extremely data-hungry, requiring massive multi-sensor datasets to prevent overfitting. | Low to Moderate: Inference latency is a major bottleneck for high-speed robotic applications without heavy optimization. | Moderate: Attention weights can offer some inherent spatial/temporal insight into model focus. | High: Exceptional generalization capabilities across different robotic platforms and degradation states. |
| Self-supervised learning (SSL) | Drastically reduces reliance on expensive, manually labeled run-to-failure data; extracts rich, universal representations. | High computational cost during the pre-training phase; evaluating the quality of learned representations is difficult. | Low labeled data needs but very high unlabeled data needs. | Moderate: Similar to deep learning; pre-training is offline, but real-time inference depends on the backbone size. | Low: Retains the black-box limitations of deep neural networks. | High: Highly robust to sparse labels and highly adaptable to new, unseen fault modes. |
| Generative models (GANs, VAEs, diffusion) | Excellent for data augmentation; can simulate rare fault conditions and run-to-failure trajectories for training. | Risk of introducing bias or "hallucinating" physically impossible sensor data; mode collapse during training. | Moderate real data needs (synthesizes the rest). | N/A (Offline): Primarily used offline for training data generation rather than real-time inference. | Low: The generation process is highly complex and non-transparent. | Moderate: Improves the robustness of downstream classifiers, but the synthetic data must strictly adhere to physical laws. |
| Hybrid models (e.g., CNN-LSTM, physics-informed NN) | Integrates the spatial extraction of CNNs with the temporal tracking of LSTMs; physics-informed models bound predictions to reality. | Increased architectural complexity; compounding latency; difficult to tune hyperparameters across multiple model components. | High; though physics-informed models reduce overall data needs by leveraging known physical laws. | Low: Stacking models significantly increases inference time, challenging strict real-time constraints. | Moderate: Physics-informed layers add explainability, but deep learning components remain opaque. | High: Physics-informed hybrids are highly robust, preventing physically impossible RUL predictions. |
Hybrid models (e.g., CNN-LSTM, physics-informed NN)Integrates the spatial extraction of CNNs with the temporal tracking of LSTMs; physics-informed models bound predictions to reality.Increased architectural complexity; compounding latency; difficult to tune hyperparameters across multiple model components.High; though physics-informed models reduce overall data needs by leveraging known physical laws.Low: Stacking models significantly increases inference time, challenging strict real-time constraints.Moderate: Physics-informed layers add explainability, but deep learning components remain opaque.High: Physics-informed hybrids are highly robust, preventing physically impossible RUL predictions.
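The "lightweight, interpretable, edge-deployable" profile of the traditional-ML row above can be illustrated with a toy pipeline: hand-crafted vibration features (RMS and kurtosis, classic bearing-fault indicators) fed to a plain k-NN classifier, all in NumPy. The synthetic data and feature choices are illustrative assumptions, not an implementation from any cited work:

```python
import numpy as np

def features(window):
    """Hand-crafted features typical of traditional-ML PHM pipelines."""
    rms = np.sqrt(np.mean(window ** 2))
    # Kurtosis is highly sensitive to the impulsive impacts of bearing faults.
    z = (window - window.mean()) / window.std()
    kurt = np.mean(z ** 4)
    return np.array([rms, kurt])

def knn_predict(x, X_train, y_train, k=3):
    """Plain k-NN: majority label of the k nearest training feature vectors."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(0)

def window(is_faulty):
    """Synthetic vibration window: Gaussian noise, plus sparse
    high-amplitude impacts when a fault is present."""
    w = rng.normal(0.0, 1.0, 512)
    if is_faulty:
        idx = rng.choice(512, size=8, replace=False)
        w[idx] += rng.choice([-6.0, 6.0], size=8)
    return w

X_train = np.array([features(window(i % 2 == 1)) for i in range(40)])
y_train = np.array([i % 2 for i in range(40)])  # 0 = healthy, 1 = faulty

pred_faulty = knn_predict(features(window(True)), X_train, y_train)
pred_healthy = knn_predict(features(window(False)), X_train, y_train)
```

The whole model is a 40-row feature table plus a distance computation, which is why such pipelines fit comfortably on industrial controllers; the trade-off, as the table notes, is that the features must be engineered per fault mode.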
Table 5. Benchmark datasets commonly used in prognostics research, summarizing target domains, key sensing modalities, and representative applications in remaining useful life prediction and fault prognostics literature.
| Dataset | Target Domain | Key Modalities/Features | Application in the Literature |
|---|---|---|---|
| CMAPSS (NASA) [124] | Turbofan engines | Multi-sensor operational trajectories | Standard benchmark for RUL prediction |
| FEMTO (PRONOSTIA) [125] | Bearings | High-frequency vibration, temperature | Accelerated degradation tracking and fault prognostics |
| XJTU-SY [126] | Bearings | Run-to-failure vibration data | Testing model robustness across diverse operating loads |
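Since CMAPSS is the de facto RUL benchmark, one preprocessing convention from that literature is worth sketching: the piecewise-linear RUL target, which caps early-life RUL at a constant (125 cycles is a commonly used value) because degradation is not yet observable, then decreases linearly to zero at failure. The cap and engine lifetime below are illustrative:

```python
import numpy as np

def piecewise_rul(n_cycles, cap=125):
    """Piecewise-linear RUL labels for one run-to-failure trajectory.

    Early-life RUL is clipped at `cap` (degradation is unobservable
    there); afterwards it decreases linearly to 0 at failure.
    """
    rul = np.arange(n_cycles - 1, -1, -1)  # n-1, n-2, ..., 1, 0
    return np.minimum(rul, cap)

# Labels for a hypothetical engine that fails after 200 cycles.
labels = piecewise_rul(200)
```

Training regressors against this capped target, rather than raw linear RUL, is what makes reported CMAPSS scores comparable across papers.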
Table 6. Critical gap matrix in robotic PHM research.
| Research Dimension | Current State of the Art | Identified Research Gap and Future Needs |
|---|---|---|
| Data scarcity and labeling | Heavy reliance on simulated faults or GAN augmentation. | Lack of large-scale robotic multimodal datasets; severe underutilization of self-supervised learning (SSL). |
| Real-time deployment | Complex models (transformers, deep CNNs) evaluated offline. | Insufficient research on edge-optimized hardware and model quantization for sub-10 ms control-loop integration. |
| Trust and interpretability | Black-box predictive metrics (accuracy, RMSE) dominate. | Explainable AI (SHAP, Grad-CAM) is rarely integrated into real-time operational dashboards. |
| Data privacy | Centralized cloud data lakes for model training. | Limited application of federated learning for cross-enterprise robotic fleet training. |
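The model-quantization gap identified for real-time deployment can be illustrated with the simplest variant: symmetric per-tensor post-training quantization of a layer's weights to int8, which shrinks the weights 4x and is the entry point for int8-capable edge NPUs and microcontrollers. This NumPy sketch (random weights, layer shape, and error threshold are illustrative assumptions) demonstrates the idea, not any specific toolchain:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    The scale maps the largest-magnitude weight to +/-127; every
    weight is then rounded to the nearest representable level.
    """
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(64, 32)).astype(np.float32)  # a layer's weights
x = rng.normal(0.0, 1.0, size=32).astype(np.float32)        # one input vector

q, scale = quantize_int8(w)
y_fp = w @ x                                  # full-precision output
y_q = (q.astype(np.float32) * scale) @ x      # output with dequantized weights
rel_err = np.linalg.norm(y_fp - y_q) / np.linalg.norm(y_fp)
# Weights now occupy 1 byte each, with typically sub-percent output error.
```

Closing the sub-10 ms gap in practice additionally requires quantized activations and integer accumulation on the target hardware, but even this weight-only scheme shows why the accuracy cost of quantization is usually small relative to its latency and memory savings.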