1. Introduction
Traditionally, lubricant innovation has relied on extensive experimental trials and empirical formulation strategies (i.e., additive screening and base oil blending) to validate performance. Although these strategies are effective, they are costly, time-intensive, and limited in their ability to explore the vast chemical and operational design space of modern lubricants. In the past three years, advances in artificial intelligence (AI), ranging from machine learning (ML) property predictors to deep learning vision systems and digital twin frameworks, have begun to redefine how lubricants are designed, monitored, and deployed in industrial, automotive, and marine applications [
1,
2].
Researchers have found that AI begins to connect different stages of lubricant research and use, from laboratory testing to in-service monitoring. Studies applying Bayesian optimization and ML approaches have demonstrated faster, data-driven ways to explore additive and base oil performance [
3,
4,
5,
6]. Meanwhile, tools such as convolutional neural networks and digital twin systems help to translate this progress into real operations by improving the analysis of wear debris and predicting lubricant health trends [
7,
8,
9]. These capabilities have already appeared in commercial services such as Shell LubeAnalyst and ExxonMobil Mobil Serv, showing how AI can bridge laboratory discovery and field reliability [
10,
11,
12]. Figure 1 places these developments in a broader context, outlining how AI connects the major branches of tribology into a single framework. It links research, component testing, and applications through shared tools such as status monitoring and system optimization, creating a continuous flow of information that supports the development of practical solutions.
Bottlenecks in Conventional Lubricant Development and the Role of AI
Trial-and-error formulation strategies continue to dominate additive discovery and base oil blending. The identification of formulations that satisfy viscosity targets, oxidation stability, friction reduction, and wear protection often requires hundreds of iterative experiments, especially when additives interact in nonlinear ways across concentration and operating conditions. Bayesian optimization directly addresses this bottleneck by guiding experimental campaigns towards promising regions of formulation space using probabilistic surrogate models, reducing the number of physical tests required while maintaining performance targets [
3,
4,
5,
6].
Limited transferability of empirical structure–property correlations further constrains predictive formulation design. Classical relationships often break down when applied to new base oils, additive chemistries, or emerging sustainable candidates such as ionic liquids and deep eutectic solvents. Structure–property learning frameworks, including QSPR and QSTR, improve generalization by learning nonlinear mappings between molecular descriptors and macroscopic lubricant behavior, enabling prediction of viscosity, friction, and wear trends across diverse chemistries where traditional regression models are insufficient [
13].
The dependence on long-duration endurance and field testing remains another major barrier for rapid screening. Engine and fleet trials are essential for final validation but are impractical for evaluating large candidate sets. ML predictors trained on laboratory measurements, molecular descriptors, or spectroscopic data help narrow candidate pools early by forecasting key properties such as viscosity evolution and oxidation stability, thereby concentrating experimental resources on the most promising formulations [
4,
5].
Finally, conventional diagnostics and maintenance practices remain largely reactive. Standard methods such as ferrography, viscometry, and spectroscopy provide valuable information but typically require offline sampling and expert interpretation, limiting scalability for continuous monitoring. AI enables a shift toward predictive maintenance by automating wear debris classification, integrating multimodal sensor streams, and embedding these insights within digital twin frameworks that forecast lubricant degradation and remaining useful life [
7,
8,
9].
AI is therefore best understood as a set of complementary tools aligned to different stages of the lubricant lifecycle: Bayesian optimization for experimental efficiency, QSPR/QSTR for molecular-level performance prediction, data-driven diagnostics for in-service monitoring, and digital twins for lifecycle optimization and decision support.
Figure 1 provides a lifecycle-level context for these connections by illustrating how AI methods link formulation design, property prediction, and operational monitoring into an emerging intelligent lubrication ecosystem [
1].
Despite these advances, lubricant development remains fundamentally constrained by reliance on trial-and-error formulation, long-duration endurance testing, fragmented laboratory and field datasets, and empirical correlations that often fail to transfer across operating conditions. These limitations restrict systematic exploration of the high-dimensional chemical and operational design space and slow the translation of laboratory discoveries into industrial practice. As lubricant formulations become more complex and sustainability requirements tighten, these limitations increasingly restrict the speed, scalability, and generalizability of conventional development strategies.
Artificial intelligence offers a pathway to overcome these challenges by aligning specific algorithmic approaches with distinct stages of the lubricant lifecycle. Data-efficient optimization methods enable accelerated formulation screening, structure–property learning frameworks connect molecular features to macroscopic performance, and data-driven monitoring architectures integrate in-service measurements for health assessment and predictive maintenance. Framing recent advances around these problem–algorithm relationships provides the organizing logic of this review, which examines AI-enabled formulation, property prediction, condition monitoring, and lifecycle optimization as interconnected components of an emerging intelligent lubrication ecosystem.
2. AI for Formulation and Additive Discovery
The formulation of lubricants requires managing multiple properties such as viscosity, volatility, thermal stability, and tribological performance. Conventional approaches to additive discovery heavily rely on empirical testing and trial-and-error processes. While effective, this strategy is slow and costly, requiring hundreds of experiments to identify an acceptable formulation window. AI helps to reshape this process by enabling predictive design and optimization. Over the last three years, researchers have increasingly applied ML and Bayesian optimization (BO) frameworks to streamline experimental design, accelerate discoveries, and improve overall sustainability in lubricant development [
13]. BO frameworks use probabilistic surrogate models to search high-dimensional design spaces, rapidly identifying promising lubricant formulations and operating conditions while minimizing the number of physical experiments required. As a result, researchers are moving towards data-driven formulation strategies that reduce laboratory workload and open previously inaccessible regions of chemical space. This data-driven strategy is visualized in
Figure 2, which outlines how AI integrates data collection, model training, and optimization into a continuous loop.
2.1. Bayesian Optimization and Design of Experiments
BO, often combined with statistical design of experiments, has proven to be a powerful approach for lubricant design since it can efficiently search high-dimensional design spaces involving nonlinear interactions among variables such as additive concentration and particle size.
Elsoudy et al. showed that BO can make lubricant formulation much more efficient when combined with a structured design of experiments approach [
3]. Their study focused on developing nano-lubricants, where each tribometer test requires time, materials, and careful setup. By using BO to guide the testing sequence, they were able to cut the number of experimental runs by more than 30% compared to conventional methods. Even with fewer tests, the results still reached the same low friction and wear values as manual optimization. The method helped the researchers to identify promising formulation regions early and understand how nanoparticle concentration, dispersion stability, and base oil viscosity interact to affect overall performance. This approach offers a practical way to reduce the cost and workload of tribological testing while improving how efficiently new formulations are developed.
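To make this workflow concrete, the following sketch pairs a Gaussian process surrogate with an expected improvement acquisition function to choose the next formulation to test. The variables (nanoparticle concentration and a normalized base oil viscosity term), the synthetic friction response, and the candidate pool are illustrative assumptions rather than the experimental setup of the cited study.

```python
# Minimal Bayesian optimization sketch for formulation screening.
# Hypothetical inputs: nanoparticle concentration (wt%) and a normalized base oil
# viscosity term; target: coefficient of friction (lower is better).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_tribometer_test(x):
    """Placeholder for a real tribometer measurement (synthetic here)."""
    conc, visc = x
    return 0.12 - 0.05 * np.exp(-((conc - 0.8) ** 2)) + 0.01 * visc + 0.005 * rng.standard_normal()

# Initial design: a few measured formulations (e.g., from a DoE screening).
X = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(6, 2))
y = np.array([run_tribometer_test(x) for x in X])

candidates = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(500, 2))  # candidate pool

for iteration in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement for minimization.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    y_next = run_tribometer_test(x_next)          # run the suggested experiment
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("Best formulation found:", X[np.argmin(y)], "COF:", y.min())
```

In practice, the `run_tribometer_test` placeholder would be replaced by an actual measurement, and the loop would stop once performance targets or the experimental budget are reached.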
Guo et al. applied a similar strategy to study metallic nanoparticle additives, focusing on nano-copper as a lubricant additive [
14]. They found that BO efficiently determined the concentration range that balanced wear protection with good flow and viscosity control. Too little copper offered little improvement, while too much caused agglomeration and made the lubricant harder to handle. The optimization process helped to pinpoint the best loading level that maintained a stable dispersion and promoted effective tribofilm formation during sliding contact. They also demonstrated that these methods can be extended to hybrid systems that mix copper nanoparticles with polymeric dispersants or surfactants, which often interact in complex ways. Together, these studies show that AI-guided optimization helps researchers to find better-performing formulations faster and more reliably than traditional trial-and-error testing.
2.2. QSPR and Neural Network Models
Quantitative Structure–Property Relationship (QSPR) models use mathematical correlations between molecular structure and measurable properties to predict performance indicators such as viscosity and the coefficient of friction (COF). In lubricant research, these models rely on molecular descriptors that represent features like polarity, molecular weight, functional groups, and branching. By analyzing these features across known datasets, QSPR models can estimate macroscopic behaviors like how certain polar groups improve film formation or how molecular branching affects the resulting flow behavior. This allows researchers to predict performance trends without immediate physical testing, saving time and resources during the early formulation stages.
When combined with neural networks, QSPR frameworks gain the ability to capture more complex relationships between chemical structure and lubricant performance [
15]. Neural networks can detect patterns that simple regression models miss, such as how additive concentration, base oil chemistry, and temperature interact. As a result, QSPR-neural network hybrids can predict key lubricant properties like viscosity index, oxidation stability, and film-forming ability directly from molecular descriptors, reducing dependence on bench tests. Recent studies have shown that these models can achieve high accuracy across different lubricant types, including synthetic esters and nanoparticle-enhanced oils [
16], consistent with broader advances in molecular ML such as generative inverse-design models [
17], graph-based neural message-passing frameworks [
18], and large-scale materials informatics platforms [
19]. These trends are consistent with more general developments in machine learning for molecular and materials science, where supervised and generative models now routinely support property prediction and inverse design across diverse chemistries [
20]. Across recent lubricant studies, different machine learning model classes have been adopted depending on data availability, target properties, and deployment constraints. Neural-network-based models are frequently selected when complex nonlinear relationships between molecular structure, operating conditions, and tribological performance must be captured, particularly in formulation and property prediction tasks. Tree-based methods, including decision trees and ensemble approaches, are often favored when robustness to noisy or limited datasets is required, or when partial interpretability is desirable during early screening stages. Probabilistic models offer additional advantages by providing uncertainty awareness, which is valuable for risk-sensitive decisions, although their scalability can be limited for large design spaces. Comparative performance should therefore be interpreted in this context: model selection in lubricant research is inherently application-specific, balancing accuracy, interpretability, and data requirements rather than pointing to a universally optimal approach.
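A minimal sketch of the descriptor-to-property workflow is shown below: hypothetical molecular descriptors (molecular weight, a polarity index, and a branching index) are mapped to viscosity index with a small multilayer perceptron. The descriptors, dataset, and target relationship are placeholders chosen for illustration, not values from the cited studies.

```python
# Sketch of a QSPR-style neural network: molecular descriptors -> viscosity index.
# Descriptor columns and data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200
descriptors = np.column_stack([
    rng.uniform(200, 800, n),   # molecular weight
    rng.uniform(0.0, 1.0, n),   # polarity index
    rng.uniform(0.0, 5.0, n),   # branching index
])
# Synthetic target standing in for measured viscosity index.
viscosity_index = (90 + 0.05 * descriptors[:, 0] - 20 * descriptors[:, 1]
                   - 3 * descriptors[:, 2] + rng.normal(0, 2, n))

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, viscosity_index, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), alpha=1e-3,
                 max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("Held-out R^2:", round(model.score(X_test, y_test), 3))
```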
This approach has also been applied to the discovery of sustainable additives such as ionic liquids (ILs) and deep eutectic solvents (DES), which are highly tunable systems offering a large design space. ML models, including gradient-boosted trees, random forests, and neural networks, have been used to predict properties such as viscosity, miscibility, and biodegradability [
6]. Santos et al. used ML to identify DES formulations that balanced low viscosity with strong tribofilm-forming ability, reducing the number of required laboratory trials by nearly half [
16]. The screened DESs achieved lower COFs than reference oils (
Figure 3), highlighting how AI-driven screening can speed up the development of eco-friendly, high-performance lubricants.
Beyond these case-specific studies, ensemble deep learning frameworks have been used to perform inverse design of hundreds of thousands of ionic liquid variants with multiple targeted properties, demonstrating how scalable ML architectures can explore vast IL design spaces relevant to lubricant applications [
21].
Zhou et al. later introduced the concept of a “Lubrication Brain,” an AI platform that combines QSPR descriptors with deep neural networks to virtually screen thousands of potential lubricant molecules before synthesis [
13]. By training on large experimental datasets, the system can identify chemical candidates that meet multiple design targets, such as high thermal stability, low friction, and strong oxidative resistance. Instead of starting with broad experimental screening, researchers can use computational predictions to focus only on the most promising formulations. This approach not only reduces costs and time but also opens the door to designing multifunctional additives that combine anti-wear, antioxidant, and dispersant properties.
Building on these advances, hybrid AI models such as Bayesian regularization neural network-based Quantitative Structure–Tribological Relationship (QSTR) frameworks have emerged [
15]. These methods extend QSPR modeling to link molecular descriptors such as connectivity indices, heteroatom content, and electron transfer features directly with tribological performance metrics like COF and wear scar diameter. The Bayesian regularization process provides a measure of confidence in each prediction, which is especially useful when experimental data are limited. These methods also work well for more complex additives such as ILs and DES, where molecular interactions are difficult to describe with simple parameters. By combining chemical structure with performance data, QSTR models make it easier to see how molecular design affects friction and wear in real operating conditions.
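As a simple illustration of this uncertainty-aware style of modeling, the sketch below uses scikit-learn's BayesianRidge, a linear Bayesian regressor, as a lightweight stand-in for a Bayesian-regularized network; the descriptors and wear data are hypothetical.

```python
# Uncertainty-aware structure-tribology regression sketch (hypothetical data).
# BayesianRidge is used as a lightweight stand-in for a Bayesian-regularized NN.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
# Hypothetical descriptors: connectivity index, heteroatom fraction, e-transfer score.
X = rng.uniform(0, 1, size=(40, 3))              # small dataset, as in many QSTR studies
wear_scar_mm = 0.6 - 0.2 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 0.02, 40)

model = BayesianRidge().fit(X, wear_scar_mm)
X_new = rng.uniform(0, 1, size=(3, 3))           # candidate additives to screen
mean, std = model.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted wear scar: {m:.3f} mm +/- {s:.3f}")
```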
2.3. Bridging Predictive Modeling and Sustainable Design
AI-driven formulation tools now allow researchers to treat lubricant design as a guided search problem rather than a sequence of isolated experiments. BO is most effective at deciding which experiments to run, while QSPR-neural network models focus on interpreting structure–property patterns that emerge across different chemistries. Used together, they turn laboratory measurements into a coherent design space that can be navigated deliberately, creating room to incorporate additional targets such as sustainability and regulatory constraints.
Despite these strengths, each method faces distinct limitations. BO requires well-curated experimental data and can struggle with sparse or noisy measurements, while neural-network-based QSPR models risk overfitting when datasets are small or chemically narrow. AI can quickly flag eco-friendly candidates, but the lack of standardized biodegradability and toxicity data limits model transfer across formulations, paralleling issues documented in lifecycle assessment and lubricant development [
22].
Each of these AI methods offers distinct strengths but also faces practical limitations in data quality, scalability, or transferability.
Table 1 provides a concise overview of these trade-offs.
To overcome these drawbacks and shortcomings, research is moving towards combining different AI approaches into hybrid systems. Physics-informed neural networks and transfer-learning models can help carry knowledge from well-studied additives to newer, more sustainable lubricant materials. Building large, shared databases that connect molecular features with performance results will also require close collaboration between universities and industry, shifting lubricant formulation from a trial-and-error process to a more proactive, predictive design approach and making it easier to create materials that are efficient, durable, and environmentally responsible on an industrial scale.
2.4. Virtual Formulation Platforms and Digital Twin Integration
As AI tools become more advanced, they are increasingly integrated into virtual platforms that connect laboratory formulation with industrial production. In this context, formulation design is the most data-intensive stage in lubricant development, requiring evaluation of multiple base stock and additive combinations across different properties such as viscosity index, volatility, and oxidation stability.
Zhmud et al. developed a digital AI platform trained on historical crankcase oil datasets that predict viscosity and thermal properties before blending, reducing the number of experimental trials needed to find the most suitable formulations [
23]. Kim et al. expanded this idea with an inverse-prediction system that starts from desired performance targets and recommends potential blends that meet them [
24]. Together, these models demonstrate how virtual “formulator assistants” can greatly shorten the time needed to reach a target specification, lowering material waste and accelerating commercialization by narrowing the formulation space early in development.
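A minimal version of such an inverse-prediction step is sketched below: a surrogate model trained on blend data is inverted by numerically searching for additive fractions whose predicted kinematic viscosity matches a target. The blend variables, property, and data are hypothetical and do not reproduce the cited systems.

```python
# Inverse-formulation sketch: search additive fractions that hit a target viscosity
# using a trained surrogate model. All variables and data are hypothetical.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# Training data: fractions of two additives in a base oil -> measured KV100 (cSt).
X = rng.uniform(0, 0.1, size=(150, 2))
kv100 = 10 + 60 * X[:, 0] + 25 * X[:, 1] + rng.normal(0, 0.2, 150)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, kv100)

target_kv100 = 14.0

def objective(x):
    # Squared deviation between predicted and target viscosity.
    return (surrogate.predict(x.reshape(1, -1))[0] - target_kv100) ** 2

result = differential_evolution(objective, bounds=[(0.0, 0.1), (0.0, 0.1)], seed=0)
print("Suggested additive fractions:", result.x.round(4),
      "predicted KV100:", round(surrogate.predict(result.x.reshape(1, -1))[0], 2))
```

A gradient-free optimizer is used here because tree-based surrogates are piecewise constant; with smooth surrogates, gradient-based search over the blend fractions is equally viable.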
These virtual design systems are now linked with plant-level operations through digital twins, creating continuous feedback between formulation and manufacturing. A digital twin is a real-time virtual model of a system, such as an engine, that uses sensor data and simulations to mirror how that system behaves and evolves. In research, it allows engineers to monitor processes, predict performance, and optimize blending or maintenance decisions without interrupting actual operations. Rebello and Nogueira [
25] proposed a digital twin framework that integrates sensor data from blending facilities to enable real-time learning and uncertainty tracking. Similarly, Peterson et al. [
26] showed that computational twins can simulate dynamic mixing behavior, evaluate process control strategies, and improve production stability. In lubricant manufacturing, these systems can monitor additive dispersion or viscosity-modifier ratios in real time, allowing operators to adjust conditions before off-spec batches occur. These developments show how AI accelerates the transition from formulation research to real-world production, reflecting broader digital twin adoption trends and implementation challenges observed across the process industries [
27]. Virtual assistants support the design stage, while digital twins ensure those formulations are applied consistently—another step toward self-correcting manufacturing in the lubricant industry.
2.5. Comparison of AI Models for Lubricant Property Prediction
Different model families dominate lubricant property prediction depending on the data regime and the target outcome. Neural-network-based approaches, including multilayer perceptrons and Bayesian-regularized neural networks, are often used when performance depends on nonlinear interactions among molecular structure, additive concentration, and operating conditions, particularly in QSPR/QSTR and degradation forecasting tasks [
15,
28,
29]. Their representational capacity supports high accuracy, but the same flexibility increases overfitting risk when datasets are small or chemically narrow (
Table 2).
Tree-based approaches, including decision trees and random forests, are frequently selected when robustness and interpretability matter more than peak predictive accuracy. In lubricant research, these methods are common in early-stage screening, classification of lubricant condition, and sustainability-oriented decision support, where data may be noisy or incomplete [
4,
6]. Random forests reduce the variance of single trees through ensemble averaging, which can improve generalization while still allowing partial interpretability through feature importance rankings.
Probabilistic models such as Gaussian process regression provide uncertainty-aware prediction, which becomes important when experimental datasets are limited and reliability matters as much as the point estimate. This model class has been used for viscosity prediction tasks and for surrogate modeling within Bayesian optimization loops, where uncertainty bounds are essential for selecting the next experiment [
3,
4]. However, scalability limitations often restrict probabilistic methods to smaller datasets or reduced feature spaces.
Across the literature summarized in
Section 2, no single model family is universally suited. Neural networks tend to be most effective for high-fidelity structure–property learning when sufficient data are available, while tree-based and probabilistic approaches provide robustness, interpretability, or uncertainty awareness under constrained conditions. In practice, model choice is driven by the target property, the data volume and diversity, and the intended deployment context rather than predictive accuracy alone.
Table 2. Comparison of AI models commonly used for lubricant property prediction.
| Model | Typical Inputs | Strengths | Limitations |
|---|---|---|---|
| Neural networks (MLP/BRNN) | QSPR/QSTR descriptors, spectral features, operating conditions | Captures nonlinear relationships, strong predictive accuracy | Overfitting risk on small datasets; limited interpretability [15,28] |
| Random forests | Formulation variables, limited descriptors, mixed lab datasets | Robust to noise, partial interpretability, good screening performance | Limited extrapolation beyond training domain [4,6] |
| Decision trees | Rule-based screening, diagnostic classification | Highly interpretable, easy to deploy | Lower accuracy; unstable with dataset shifts |
| Gaussian process regression | Small datasets, surrogate modeling for BO, viscosity/property prediction | Uncertainty quantification, strong interpolation | Computationally expensive; scaling limits [3,4] |
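The trade-offs summarized in Table 2 can be probed empirically with a simple cross-validation comparison, sketched below on a synthetic formulation dataset; the intent is to illustrate the workflow rather than to rank the model families.

```python
# Cross-validated comparison of model families on a synthetic formulation dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(120, 4))     # hypothetical formulation variables
y = 0.1 + 0.05 * X[:, 0] - 0.04 * X[:, 1] * X[:, 2] + rng.normal(0, 0.005, 120)  # e.g., COF

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "Gaussian process": make_pipeline(StandardScaler(),
                                      GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                                               normalize_y=True)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:16s} R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```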
3. AI for Property and Degradation Prediction
Lubricants are exposed to thermal, chemical, and mechanical stresses that change their molecular structure and gradually reduce their effectiveness. These changes affect viscosity, oxidation stability, total base number, and film strength, which are key properties for reliable operation in engines and machinery. Traditional diagnostic tools such as Fourier transform infrared spectroscopy, viscometry, and elemental analysis provide accurate results but are slow, expensive, and impractical for continuous monitoring. Recent studies show that AI can be used to process data more efficiently, giving faster and more consistent insights into lubricant conditions in both research and in-service environments [
28].
3.1. Spectroscopic and Descriptor-Based Predictions
A Bayesian regularized neural network (BRNN) is a ML model that applies Bayesian probability to control network complexity, reducing overfitting and improving prediction reliability, particularly when working with small or noisy experimental datasets. BRNNs have been applied to tribological problems, where the performance is influenced by multiple interacting variables. Truong et al. demonstrated that BRNN models could predict tool wear from structural and operational descriptors, showing how this approach can handle degradation-related data [
28]. The same methods can be extended to lubricant additives by using descriptors such as molecular connectivity, heteroatom content, and electron transfer capacity. These inputs allow the models to estimate anti-wear behavior before physical testing.
Spectroscopic data is another important input for AI. Chen et al. combined FTIR and other sensor measurements in a fusion model that used temporal convolutional networks and variational autoencoders. The model detected lubricant degradation with 96.7% accuracy and predicted failures several hours in advance [
29]. Their work demonstrated how integrating chemical and temporal data can capture subtle patterns in lubricant aging. Building upon this, similar frameworks can employ BRNNs to map nonlinear relationships between spectroscopic features such as absorbance peaks and bond vibrations and key lubricant properties like viscosity or oxidation stability [
30].
Figure 4 illustrates this generalized workflow, in which spectral and sensor data are pre-processed, transformed into descriptors, and analyzed by a BRNN to produce health metrics such as viscosity trends, oxidation index, and overall lubricant condition.
By combining spectroscopic data with descriptor-based modeling, BRNN frameworks provide a powerful way to connect chemical information to real-world lubricant performance. These models can recognize early degradation signals, adapt as new data is collected, and offer faster and more continuous condition assessment. As research moves toward multimodal systems, BRNNs form an essential foundation for data-driven lubricant monitoring and predictive maintenance.
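The workflow of Figure 4 can be approximated with a simple pipeline, sketched below, that scales spectral features, compresses them into a few latent descriptors with PCA, and regresses a health metric with a regularized feed-forward network. The synthetic spectra and oxidation index are placeholders, and the plain MLP stands in for the BRNN used in the cited work.

```python
# Sketch of a spectra -> descriptors -> neural network health-metric workflow.
# Synthetic stand-ins for FTIR absorbance spectra and an oxidation index target.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_samples, n_wavenumbers = 150, 400
spectra = rng.normal(0, 0.01, size=(n_samples, n_wavenumbers))
aging = rng.uniform(0, 1, n_samples)                 # latent degradation level
spectra[:, 120:140] += aging[:, None] * 0.5          # synthetic carbonyl-band growth
oxidation_index = 10 * aging + rng.normal(0, 0.3, n_samples)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                            # compact spectral descriptors
    MLPRegressor(hidden_layer_sizes=(16,), alpha=1e-2, max_iter=5000, random_state=0),
)
scores = cross_val_score(model, spectra, oxidation_index, cv=5, scoring="r2")
print("Cross-validated R^2:", scores.mean().round(3))
```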
3.2. Lubricant Properties and Degradation
Lubricant degradation occurs through several overlapping processes, including oxidation, additive depletion, contamination, and generation of wear particles. To better monitor these complex phenomena, researchers have begun developing multimodal sensing systems that collect data from various sources instead of relying on a single measurement. For instance, Chen’s fusion model combines vibration, chemical, and temperature data to produce a more complete assessment of lubricant health [
29]. Pourramezan et al. took a similar approach using dielectric spectroscopy coupled with ML to analyze the electrical properties of oil [
31]. Their work showed that changes in permittivity and loss tangent can reveal both metallic wear debris and nonmetallic contaminants such as soot or oxidation by-products. Since these electrical parameters are sensitive to small changes in oil composition, they offer an efficient and non-intrusive way to detect degradation and contamination, complementing traditional tribological testing methods. These trends align with broader applications of sensor fusion in machine condition monitoring, where combining acoustic, thermal, and chemical channels consistently improves anomaly detection accuracy [
32].
Viscosity is a direct indicator of lubricant aging and mechanical performance. While laboratory measurements provide accurate results, they are not suitable for real-time monitoring during operation. To overcome this limitation, Pourramezan et al. evaluated several soft computing methods that could estimate viscosity based on easily measurable input parameters [
33]. They tested multilayer perceptrons (MLPs), Gaussian process regression (GPR), and radial basis function networks (RBFNs), each representing a different approach to nonlinear modeling. MLPs use layered neural networks to learn general patterns between inputs and viscosity. GPR provides a probabilistic framework that quantifies prediction uncertainty, while RBFNs rely on localized activation functions that respond more precisely to variations in the input space. Among the three, RBFNs achieved the best accuracy and generalization, as shown in
Figure 5, because they captured complex relationships between oil composition, shear rate, and temperature more effectively than the other models. This demonstrates the potential of soft computing techniques to replace time-consuming viscosity tests with fast, data-driven predictions suitable for continuous condition monitoring.
Flexible, locally tuned architectures such as RBFNs are particularly effective since they can adapt to abrupt variations in molecular interactions and temperature–viscosity behavior. This allows for faster validation of lubricant formulations and real-time viscosity estimation.
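A minimal radial basis function network of the kind compared in that study can be assembled from k-means centers, Gaussian basis features, and a ridge-regression readout, as sketched below on synthetic viscosity data; it is illustrative only and not the cited implementation.

```python
# Minimal RBF network: k-means centers, Gaussian basis features, ridge readout.
# Inputs (temperature, shear rate, additive fraction) and viscosity are synthetic.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
X = np.column_stack([rng.uniform(20, 120, 400),      # temperature (deg C)
                     rng.uniform(1, 1000, 400),      # shear rate (1/s)
                     rng.uniform(0, 0.05, 400)])     # additive fraction
viscosity = 200 * np.exp(-0.02 * X[:, 0]) + 500 * X[:, 2] + rng.normal(0, 0.5, 400)

X_train, X_test, y_train, y_test = train_test_split(X, viscosity, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X_train_s).cluster_centers_
width = np.median(cdist(centers, centers))           # shared Gaussian basis width

def rbf_features(Z):
    return np.exp(-cdist(Z, centers) ** 2 / (2 * width ** 2))

readout = Ridge(alpha=1e-3).fit(rbf_features(X_train_s), y_train)
print("Held-out R^2:", round(readout.score(rbf_features(X_test_s), y_test), 3))
```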
Beyond conventional fluid metrics, researchers explore novel degradation markers that detect chemical and physical changes earlier in the oil’s lifecycle. Daniels et al. introduced an electronic nose (e-nose) system equipped with a six-sensor MOS array to identify volatile organic compounds produced during lubricant breakdown. They used a combined Principal Component Analysis–Support Vector Machine (PCA-SVM) model to interpret the complex gas sensor data [
34]. PCA reduced the sensor signals into a few key components that captured the main patterns in gas composition, while the SVM classified those patterns into different oil aging stages. Using this approach, the system achieved 95.5% classification accuracy and predicted the degradation time within roughly 2.5 h. Gas-phase sensing offers a fast, non-invasive way to track the oil condition and could complement established methods such as spectroscopy and dielectric sensing by detecting volatile degradation products in real time. Together, these tools mark a shift toward comprehensive, data-driven monitoring that enables earlier fault detection and more reliable operation across industrial and automotive systems.
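The PCA–SVM stage of such a pipeline can be expressed compactly, as in the sketch below operating on synthetic six-channel MOS sensor readings; the signals and aging-stage labels are placeholders rather than the cited dataset.

```python
# PCA + SVM sketch for classifying oil aging stages from gas-sensor arrays.
# Six-channel MOS responses and aging-stage labels are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_per_stage, n_sensors = 60, 6
stages = np.repeat([0, 1, 2], n_per_stage)           # fresh, mid-life, degraded
signals = rng.normal(0, 0.05, size=(stages.size, n_sensors)) + stages[:, None] * 0.1

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf", C=1.0))
accuracy = cross_val_score(model, signals, stages, cv=5)
print("Cross-validated accuracy:", accuracy.mean().round(3))
```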
3.3. Translating AI Diagnostics into Practice
AI tools for predicting lubricant properties and monitoring degradation have advanced rapidly in recent years. BRNNs can now link molecular structure directly to tribological performance [
28], while fusion models combine multiple sensor inputs such as vibration, chemical, and thermal data to provide a more complete assessment of oil health [
29]. Electrical property measurements, including permittivity and loss tangent, have also proven valuable for detecting contamination and oxidation in real time [
31]. ML models continue to improve viscosity forecasting [
33], and emerging technologies like electronic nose (e-nose) systems offer a fast, non-invasive way to detect chemical changes as lubricants age [
34].
Even with this progress, several challenges remain. Many models are limited by small, fragmented datasets and struggle to generalize across different lubricant types or operating conditions. Another issue is that most current systems lack uncertainty estimates, which show how confident a model is in its predictions, something that would be especially important in industrial decision-making. Moving forward, researchers will need to integrate physics-based modeling with AI frameworks, build larger and more diverse datasets, and develop lightweight models that can operate directly on embedded sensors. These steps will be critical to making AI diagnostics more reliable, transferable, and ready for real-world deployment.
To better understand these strategies,
Table 3 summarizes the main AI-based approaches discussed along with their benefits and limitations.
4. AI for Condition Monitoring and Predictive Maintenance
Condition monitoring is a critical component of lubricant management since it provides early detection of wear, contamination, and oil breakdown during service. Traditional methods such as ferrography and particle counting are reliable but require expert interpretation and are not practical for continuous monitoring. Recent studies have demonstrated that AI can improve these practices by automating wear debris imaging, analyzing sensor data, and enabling digital twin frameworks that replicate lubricant and machinery behavior in real time. These approaches help to reduce the dependence on manual inspection while increasing accuracy and speed [
35]. In parallel, predictive maintenance extends these capabilities by using the same sensor and imaging streams to forecast equipment degradation, schedule proactive maintenance, and optimize operational performance.
4.1. Wear Particle Imaging and Classification
The analysis of wear particles is one of the oldest techniques for assessing lubricant performance and diagnosing the underlying wear mechanisms. However, manual image classification is time-consuming and prone to subjective interpretation. To address this limitation, convolutional neural networks (CNNs), deep learning models designed to recognize visual patterns and features in images, are increasingly used to automate debris analysis. Wang et al. used CNNs to enable online wear debris imaging, achieving a classification accuracy above 90% [
35]. Liu et al. further advanced this concept by integrating CNNs with direct optical imaging systems, enabling near real-time classification of particles in circulating lubricants for rotating machinery [
36].
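A compact sketch of such a debris classifier is shown below in PyTorch; the architecture, image size, and assumed debris classes (cutting, sliding, fatigue, spherical) are illustrative choices, not a reproduction of the cited models.

```python
# Minimal CNN sketch for wear debris image classification (PyTorch).
# Image size, classes, and data are illustrative placeholders.
import torch
from torch import nn

NUM_CLASSES = 4  # e.g., cutting, sliding, fatigue, spherical debris (assumed labels)

class DebrisCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )

    def forward(self, x):                   # x: (batch, 1, 64, 64) grayscale micrographs
        return self.classifier(self.features(x))

model = DebrisCNN()
images = torch.randn(8, 1, 64, 64)          # placeholder batch of debris images
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                             # one illustrative backward pass (no optimizer shown)
print("Logits shape:", model(images).shape, "loss:", float(loss))
```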
The use of CNN-based image analysis has expanded well beyond industrial applications. Kandel et al. showed that CNN models can automatically classify polymer wear debris in biomedical implants, demonstrating how techniques developed for tribology can also support medical diagnostics [
37]. In industrial lubrication, researchers have built on this foundation to achieve greater precision and automation. Xiao et al. developed an improved mask R-CNN architecture that can separate overlapping particles in ferrography images and measure debris coverage with higher accuracy [
38]. By segmenting each image and calibrating particle dimensions, the model can automatically determine the coverage area of wear debris in lubricating oil samples. The results of this approach are shown in
Figure 6.
To enhance interpretability, Herwig et al. employed explainable AI methods for the wear analysis of gears, mapping which image features influenced the model’s decisions [
39]. Deep learning-driven image analysis parallels developments in other fields where high-throughput virtual screening accelerates materials selection and feature understanding [
40]. Collectively, these studies show that AI can deliver faster and more consistent classifications than manual methods.
4.2. Digital Twins and Multimodal Monitoring
While particle imaging offers detailed microscopic insight, effective condition monitoring requires a broader view that combines temperature, vibration, viscosity, and chemical data into a single predictive framework. AI provides the foundation for this integration, using algorithms to interpret large, continuous data streams and recognize patterns that signal wear or degradation. Digital twins play a key role in this process by linking real-time sensor data with physics-based simulations, allowing AI models to continuously learn and refine their predictions.
Yin et al. demonstrated a digital twin-driven monitoring system that accurately tracked lubricant health in complex machinery [
41]. Inturi et al. reviewed how similar AI-enhanced twin systems improve fault prediction and maintenance planning in rotating equipment [
42]. Ammar proposed a reduced-dimension digital twin that performs continuous diagnosis and remaining useful life estimation [
43], while Feng et al. showed that coupling Kalman filters with twin architectures improves noise stability and model accuracy [
44].
Du et al. integrated CNNs, long short-term memory units, and attention mechanisms to analyze both debris images and time series sensor data, thus creating a hybrid model that outperformed single-modality systems [
45]. Multimodal systems are increasingly enabled by maturing digital twin infrastructure in the process industries, where implementation barriers such as data flow, governance, and maintenance synchronization have been extensively documented [
27]. These frameworks provide a more complete and reliable assessment of lubricant condition, enabling earlier detection of wear progression and supporting predictive maintenance strategies.
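A hedged sketch of such a multimodal architecture is given below: a small CNN encodes a debris micrograph, an LSTM encodes a window of sensor readings, and the two embeddings are concatenated for a health-score head. The dimensions, inputs, and fusion strategy are illustrative assumptions.

```python
# Multimodal fusion sketch (PyTorch): CNN over debris images + LSTM over sensor windows.
# All dimensions, inputs, and the fusion strategy are illustrative assumptions.
import torch
from torch import nn

class FusionHealthModel(nn.Module):
    def __init__(self, n_sensors=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                   # -> 16-dim image embedding
        )
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=32, batch_first=True)
        self.head = nn.Sequential(nn.Linear(16 + 32, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, image, sensor_seq):
        img_feat = self.cnn(image)                          # (batch, 16)
        _, (h_n, _) = self.lstm(sensor_seq)                 # h_n: (1, batch, 32)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)       # concatenate modalities
        return self.head(fused).squeeze(-1)                 # predicted health score

model = FusionHealthModel()
debris_images = torch.randn(8, 1, 64, 64)                   # placeholder micrographs
sensor_windows = torch.randn(8, 120, 4)                     # 120 time steps x 4 channels
print("Predicted health scores:", model(debris_images, sensor_windows).shape)
```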
4.3. Digital Twin-Based Prediction of Equipment and Lubricant Lifespan
Digital twin-assisted predictive maintenance has advanced rapidly in recent years. Zhong et al. reviewed current predictive maintenance frameworks and demonstrated that digital twins can replicate the physical and environmental behavior of equipment, improving both forecasting accuracy and maintenance decision-making [
46]. Wahab et al. expanded on this work, showing that integrating AI with virtual models enables continuous health assessment and more precise scheduling [
47]. Together, these studies illustrate how predictive maintenance has evolved beyond basic fault detection into a proactive system capable of forecasting degradation and estimating remaining useful life.
As shown in
Figure 7, sensor data from operating equipment feed into edge analytics and a digital twin model, where AI algorithms predict lubricant degradation and estimate remaining useful life. The results inform maintenance scheduling and real-time process adjustments through a continuous feedback loop.
Khan et al. developed a similar data-driven digital twin framework, illustrated above, that applies machine learning to interpret real-time sensor streams and estimate component lifecycles [
48]. Applied to lubricant systems, this approach connects temperature, pressure, and viscosity patterns to oil degradation and component wear, allowing maintenance to be scheduled proactively rather than reactively. By aligning predictions with actual operating conditions, the system minimizes downtime, extends equipment life, and improves efficiency.
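A simplified version of this lifecycle-estimation step is sketched below: an exponential degradation trend is fitted to a health index derived from sensor data and extrapolated to a failure threshold to estimate remaining useful life. The health index, trend model, and threshold are assumptions for illustration; deployed twins use far richer state representations.

```python
# Remaining-useful-life sketch: fit a degradation trend to a health index and
# extrapolate to a failure threshold. Data, model form, and threshold are assumed.
import numpy as np
from scipy.optimize import brentq, curve_fit

rng = np.random.default_rng(8)
hours = np.linspace(0, 500, 60)                          # operating hours so far
health = np.exp(-hours / 900) + rng.normal(0, 0.01, hours.size)  # 1.0 = fresh oil

def degradation(t, h0, tau):
    return h0 * np.exp(-t / tau)

(h0, tau), _ = curve_fit(degradation, hours, health, p0=(1.0, 500.0))

FAILURE_THRESHOLD = 0.5                                  # assumed end-of-life health index
end_of_life = brentq(lambda t: degradation(t, h0, tau) - FAILURE_THRESHOLD, 0, 1e5)
rul = end_of_life - hours[-1]
print(f"Estimated end of life at {end_of_life:.0f} h, remaining useful life ~ {rul:.0f} h")
```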
4.4. Industrial Deployment of Predictive Maintenance Systems
Several studies confirm that predictive-maintenance frameworks can be deployed successfully in lubricant-related operations. Szpytko and Salgado-Duarte examined an oil extraction pumping system and showed that digital twin-based maintenance management improved reliability and reduced unexpected downtime [
49]. Embedded ML models analyzed vibration, temperature, and pressure streams to flag deviations from normal behavior and forecast component failures. The integration of AI with the digital twin created a feedback loop in which predictions continuously refined the virtual system, improving accuracy and responsiveness.
Kerkeni et al. implemented a similar architecture in an Industry 4.0 environment combining networked sensors, real-time data processing, and embedded deep learning models to predict mechanical wear trends and recommend corrective actions [
50]. In their framework, sensor data from machining systems were continuously streamed into a virtual replica that used ML algorithms to detect deviations from normal operation. The AI component correlated these trends with wear progression and process parameters, enabling the system to forecast failures before they occurred [
51]. More recently, Ma et al. concluded that adaptive models capable of learning from each operating cycle provide more stable predictions than rule-based methods [
52].
4.5. Infrastructure-Level Integration
Digital twin technologies are also applied to larger-scale energy systems. Wang et al. developed a digital twin-integrated “smart liner” for oil and gas pipelines that tracks flow dynamics, corrosion, and deposit formation in real time [
53]. The system combines distributed sensors with AI-based anomaly detection and predictive modeling, allowing the twin to learn normal operating patterns and recognize early signs of fouling or wear. Although focused on pipeline infrastructure, the same principles can be extended to lubricant-circulation systems in industrial equipment. Real-time sensing and adaptive modeling could reveal oil degradation, contamination, or additive loss as they happen, enabling condition-based service rather than fixed intervals.
4.6. Intelligent Scheduling and Manufacturing Optimization
In addition to condition monitoring and failure forecasting, AI supports higher-level operational decisions within lubricant production. By modeling resource availability, batch timing, and equipment loads, AI-driven scheduling systems help blending plants adjust their workflows in real time and maintain stable production. Wei demonstrated that digital twin-driven scheduling can dynamically reallocate resources in intelligent manufacturing lines, improving responsiveness to equipment or supply changes [
54]. Beach et al. developed an approximate optimization method that reformulates complex long-horizon blending schedules into solvable models [
55]. These developments show how AI-driven scheduling transforms blending operations from fixed, reactive planning into an adaptive process that optimizes efficiency across production.
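As a deliberately tiny illustration of recasting blending decisions as a solvable optimization problem, the sketch below allocates two base stocks across two blends with a linear program; the costs, demands, and capacities are hypothetical, and real scheduling formulations add time periods, sequencing, and tank constraints.

```python
# Toy linear program for allocating base stocks across lubricant blends.
# Costs, demands, and capacities are hypothetical; real scheduling models are far larger.
import numpy as np
from scipy.optimize import linprog

# Decision variables: tonnes of base stocks A and B sent to blends 1 and 2,
# ordered as [A->1, B->1, A->2, B->2].
cost = np.array([820.0, 900.0, 820.0, 900.0])        # cost per tonne of each assignment

# Each blend must meet its demand (equality constraints).
A_eq = np.array([[1, 1, 0, 0],                       # blend 1 total
                 [0, 0, 1, 1]])                      # blend 2 total
b_eq = np.array([40.0, 25.0])                        # demanded tonnes per blend

# Base stock availability (inequality constraints).
A_ub = np.array([[1, 0, 1, 0],                       # total base stock A used
                 [0, 1, 0, 1]])                      # total base stock B used
b_ub = np.array([50.0, 30.0])

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print("Optimal assignment (tonnes):", result.x.round(1), "total cost:", round(result.fun, 1))
```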
Integrating scheduling analytics with predictive-maintenance frameworks creates a closed-loop system in which data, equipment condition, and operational planning communicate in real time. For example, if a predictive model anticipates reduced mixer efficiency, workloads can be automatically shifted or batch timing adjusted before a fault interrupts operation. This evolution aligns with Industry 5.0, which emphasizes collaboration between human expertise and intelligent automation. In this context, digital twin-enabled scheduling supports the golden batch concept [
56], ensuring that each production run meets target specifications through continuous comparison with ideal parameters.
4.7. Challenges and Practical Issues in Intelligent Condition Monitoring
Early models such as CNN and Mask R-CNN frameworks established the foundation for automated debris characterization [
35,
36,
37,
38,
39], while digital twin and multimodal systems [
41,
42,
43,
44,
45] have extended this capability to continuous, in-service environments. Current efforts focus on integrating these modalities into unified, interpretable platforms suitable for industrial deployment (
Table 4).
Table 4. Key advantages and limitations of the major AI-based condition monitoring approaches.
| Approach | Key Inputs | Strengths | Limitations | Industrial Readiness |
|---|---|---|---|---|
| Image-Based (CNN, Mask R-CNN) | Microscopic wear debris images | High diagnostic accuracy, directly visualizes wear mechanisms | Requires sampling and imaging equipment, not continuous | Moderate |
| Sensor/Digital Twin Frameworks | Vibration, temperature, viscosity, chemical, and imaging data | Continuous, real-time monitoring; integrate multiple data types; scalable to complex systems | High system complexity; require precise calibration and data synchronization | High |
| Multimodal (CNN + LSTM + Fusion) | Combined image and time series sensor data | Captures spatial and temporal degradation features, robust predictions | Data-intensive, high computational demand; synchronization required | Emerging |
Despite these gains, data standardization and sensor durability remain major barriers. Harsh environments shorten sensor lifetimes, and differences in data formats and acquisition protocols hinder model transferability between facilities. Moreover, technicians must be able to interpret why a model signals degradation. Future development should therefore emphasize interoperable data formats and cloud–edge communication standards, as well as hybrid physics–AI models that provide uncertainty estimates and human-centered interfaces that clarify model reasoning. Collectively, these studies indicate that monitoring and predictive maintenance represent the most mature and industrially deployable applications of artificial intelligence in lubricant research, offering direct operational value through reduced downtime, optimized drain intervals, and improved asset reliability.
4.8. From Monitoring to Decision-Oriented Predictive Maintenance
Wear debris imaging, multisensor condition tracking, and digital twin architectures are often presented as separate threads, but their real value emerges when they are combined into a decision-oriented predictive maintenance pipeline. Early approaches were mainly diagnostic since CNN-based wear classification can standardize debris interpretation and reduce manual subjectivity, which helps maintenance teams to catch abnormal wear earlier than periodic inspection alone [
35,
36]. Mask R-CNN segmentation then moves beyond classification to quantify debris coverage and morphology, which supports trending and severity scoring rather than “normal vs. abnormal” judgments [
38].
The next step connects these image-based signals to time-dependent sensor features. When vibration, temperature, and lubricant property proxies are integrated with wear imagery, models can distinguish transient anomalies (e.g., short-lived load spikes) from sustained degradation trajectories. This is where hybrid CNN–LSTM or attention-based architectures become useful. They treat debris information as spatial evidence while treating sensor streams as temporal context, allowing the model to link particle signatures to degradation evolution over time [
45].
Digital twins strengthen this pipeline by contextualizing AI outputs within operating history. Instead of interpreting an oil health score in isolation, the twin relates it to duty cycles, thermal excursions, and load history, and then projects forward under plausible operating scenarios. Systematic reviews emphasize that this coupling—virtual representation plus real-time data—turns monitoring into forecasting and enables remaining useful life estimation and proactive scheduling [
8,
46,
47]. Field deployments in maintenance management demonstrate the practical value: when prediction is tied to scheduling, teams reduce unplanned downtime and avoid conservative blanket drain intervals that waste lubricant life [
49,
50].
Importantly, predictive maintenance does more than forecast failure; it also improves decision-making. That includes recommending inspection windows, adjusting drain intervals based on degradation rate, and coordinating maintenance with production constraints, especially in high-penalty settings like marine systems and continuous-process plants. The studies summarized in
Section 4 show that monitoring and predictive maintenance are currently the most deployable AI applications in the lubricant lifecycle because they attach directly to operational outcomes and can be validated against maintenance logs and downtime metrics rather than only laboratory benchmarks [
46,
49,
52].
5. Challenges in AI-Integrated Lubricant Research
As AI continues to transform formulation, monitoring, and predictive maintenance, several challenges still limit its full integration into lubricant research and industry. Inconsistent data, limited model transferability, and fragmented communication hinder progress towards fully connected systems. Meanwhile, new opportunities emerge through physics–AI models, sustainable design frameworks, and secure distributed learning that can link formulation, production, and operation into one adaptive system. This section outlines the main challenges that must be addressed and the innovations that shape the next generation of intelligent and sustainable lubrication technologies.
Table 5 summarizes the major challenges along with potential solutions and emerging research directions.
5.1. Data Quality and Standardization
Reliable AI models depend on large, consistent datasets, yet lubricant research continues to face data inconsistencies and limited accessibility. Experimental parameters such as viscosity, wear rate, and surface roughness are often recorded under different conditions, which reduces model generalizability. Variations in laboratory protocols, instrument calibration, and measurement timing create gaps that make cross-study comparisons difficult. As a result, even well-trained models may perform unpredictably when applied to new data sources.
Oweida et al. highlighted how integrating materials science with data science demands consistent metadata, curated repositories, and formal education in materials informatics, principles that remain underdeveloped in tribology [
57]. These needs mirror the data governance and quality management challenges reported across smart manufacturing ecosystems [
58,
59]. Many lubricant studies still report only summarized results, preventing reuse of raw tribological or rheological data. Without standardized formats and metadata, connecting laboratory findings with operational field data becomes nearly impossible. These inconsistencies remain one of the main obstacles to building reproducible, scalable AI frameworks for lubricant design and performance prediction.
5.2. Model Transferability and Generalization
Most current AI models struggle to generalize beyond the specific lubricants, surface materials, or operating conditions they were trained on. Models optimized for one base oil chemistry or additive formulation often lose accuracy when applied to new systems. This limited transferability stems from small, chemically narrow datasets and a lack of embedded physical constraints.
Raissi et al. noted that traditional data-driven models frequently overlook governing physical laws such as conservation of mass, momentum, and energy, making them less stable when faced with unseen data [
60]. In tribological applications, purely empirical models are particularly vulnerable to overfitting when experimental data is scarce or noisy. Consequently, many AI predictions perform well in controlled laboratory conditions but fail under real-world variations in temperature, contamination, or load. Additionally, materials informatics initiatives such as The Materials Project demonstrate how combining first-principles data with ML can dramatically improve model robustness and transferability [
61].
5.3. Interoperability and Lifecycle Integration
AI applications for formulation, condition monitoring, and predictive maintenance are typically developed as isolated tools, leading to fragmented workflows. Data collected in one stage of the lubricant lifecycle, such as additive screening or bench testing, rarely connect with blending or in-service monitoring platforms. This lack of interoperability prevents laboratory knowledge from informing production adjustments or field maintenance.
Burns et al. reviewed Industry 4.0 interoperability standards and emphasized that open architectures and standardized communication protocols are vital for cross-platform integration [
62,
63]. However, many lubricant systems still rely on proprietary software or incompatible data structures, restricting information flow between laboratories, manufacturing plants, and field sensors. Without common data formats or industrial APIs, achieving a unified digital ecosystem that links formulation to operation remains a major barrier to widespread AI adoption.
5.4. Sustainability Integration
Despite growing attention to environmental responsibility, sustainability considerations remain poorly embedded in most AI-driven lubricant design frameworks [
57]. Current models primarily optimize performance metrics such as viscosity, oxidation stability, or friction reduction, while overlooking biodegradability, recyclability, and carbon footprint [
13].
Guo et al. demonstrated that combining lifecycle analysis with ML models can effectively evaluate environmental impact in other chemical systems, yet similar methods are rarely applied in lubricant development [
64]. Wilińska and Wilkanowicz further noted that sustainable lubricants must balance technical performance with reduced toxicity and renewable feedstocks, criteria that AI models seldom include as optimization targets [
65], which is consistent with broader reviews of bio-based lubricants and their tribological performance [
66]. Furthermore, Guinée et al. focused on sustainability-driven optimization in relation to chemical systems through extensive lifecycle frameworks and ML integration [
22]. The absence of standardized sustainability indicators limits the integration of environmental factors into data-driven formulation strategies, leaving a significant gap between performance optimization and ecological design.
5.5. Human–AI Collaboration
Adoption of AI in lubricant research and manufacturing heavily depends on user trust. Engineers and technicians often remain cautious since AI models function as “black boxes,” providing recommendations without transparent reasoning. This lack of interpretability discourages reliance on AI outputs for critical decisions and limits integration into daily operations.
Nahavandi described Industry 5.0 as a human-centric framework that emphasizes cooperation between humans and intelligent systems [
67]. However, current industrial practice still struggles to achieve this balance. Many teams lack sufficient data literacy training, and existing systems provide minimal explanation for their results. Building transparency through interpretable models and human-in-the-loop feedback remains an ongoing challenge to practical AI deployment.
5.6. Cybersecurity and Data Privacy
As lubricant research becomes more interconnected through digital twin platforms and cloud-based collaboration, data security and privacy risks rise. Sensitive information such as additive formulations, process parameters, or maintenance logs is often required to train effective AI models. Sharing these datasets across institutional or corporate boundaries raises significant concerns about intellectual property and cybersecurity.
Zhan et al. [
68] emphasized that unsecured data pipelines create vulnerabilities for both theft and manipulation, while Rebello and Nogueira [
25] showed that digital twin integration without proper encryption can compromise plant-level systems. Without robust encryption and standardized security frameworks, collaborative AI development across companies remains limited. Ensuring privacy protection while maintaining data availability is one of the final barriers to creating fully connected AI-driven lubricant networks. Addressing these interlinked challenges requires both technological innovation and systemic collaboration across research and industry, directions explored in the following section.
5.7. Methodological Limitations of AI in Lubricant Research
Several methodological limitations still restrict the reliability and industrial adoption of AI in lubricant research. One persistent issue relates to overfitting, especially in high-capacity models used for QSPR/QSTR and degradation prediction. When datasets are small, chemically narrow, or biased toward specific base oils, additive packages, and test conditions, flexible models can appear highly accurate during training yet fail when applied to new formulations or operating environments [
15]. This risk grows when the target variables come from expensive tribology tests (e.g., wear and scuffing), since data scarcity is built into the workflow. Bayesian regularization and cross-validation reduce the problem, but they do not fully compensate for limited chemical diversity in the training set [
28,
33].
A second limitation concerns interpretability. Many models used in lubricant research, including deep networks for structure–property prediction and CNN-based debris classifiers, operate as black boxes. They can detect stable statistical patterns, but they often do not explain why a formulation improves wear resistance or which degradation pathway drives a predicted failure trend [
35,
36]. This lack of transparency matters in industrial settings where maintenance schedules, drain intervals, and lubricant changes carry safety and cost consequences. Explainable AI methods are starting to appear in wear analysis, but adoption in tribology remains in its early stages and often involves trade-offs between interpretability and performance [
39].
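One widely used post hoc technique that can be layered on such black-box predictors is permutation importance, sketched below on purely synthetic data with hypothetical feature names; it is not the specific explainable-AI method of the cited studies, but it illustrates how much each input contributes to held-out accuracy.
```python
# Minimal sketch: post hoc explanation of a black-box wear predictor via
# permutation importance (feature names and data are illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = ["ZDDP_wt%", "detergent_wt%", "base_oil_viscosity", "load_N", "temperature_C"]
X = rng.uniform(0, 1, size=(200, len(features)))
y = 1.5 * X[:, 0] + 0.8 * X[:, 3] - 0.4 * X[:, 0] * X[:, 4] + 0.1 * rng.normal(size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# How much does held-out performance degrade when each input is shuffled?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>20s}: {imp:.3f}")
```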
Data fragmentation further limits transferability. Even when experiments target the same property, differences in protocols, instrumentation, surface materials, and operating conditions make datasets difficult to merge. Monitoring data adds another layer of variability through sensor drift, noise, and calibration differences, which can degrade reliability over long service windows [
29,
41]. Many studies also report aggregated results instead of releasing raw datasets, which prevents benchmarking and cross-validation across research groups. These barriers mirror broader challenges in materials and manufacturing informatics where standardized metadata and interoperable formats are prerequisites for reliable model transfer [
57,
62,
63].
Finally, deployment constraints remain practical obstacles. Digital twin and deep learning frameworks often assume continuous data streams and computational capacity that may not exist in edge environments or legacy equipment [
8,
46]. In addition, many published models do not report uncertainty estimates, which makes risk-sensitive decision-making difficult when predictions drive maintenance actions. Progress therefore depends on lighter-weight architectures, uncertainty-aware models, and hybrid physics–AI strategies that balance accuracy with interpretability and deployability [
60,
67].
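As an illustration of what an uncertainty-aware predictor could look like, the sketch below fits a Gaussian process to synthetic oil-condition data and uses the predictive standard deviation to flag when an upper confidence bound crosses a hypothetical condemnation limit; the data, kernel, and limit are all assumptions for illustration.
```python
# Minimal sketch: uncertainty-aware degradation prediction with a Gaussian process,
# so maintenance decisions can use confidence bounds rather than point estimates.
# All data and thresholds below are synthetic and illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
hours = np.sort(rng.uniform(0, 500, size=25)).reshape(-1, 1)      # service hours sampled so far
oxidation = 0.002 * hours.ravel() + 0.05 * rng.normal(size=25)    # e.g., an FTIR oxidation index

kernel = 1.0 * RBF(length_scale=100.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(hours, oxidation)

# Predict forward in time and flag when the upper confidence bound crosses a limit.
future = np.linspace(0, 800, 200).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)
limit = 1.2                                                        # hypothetical condemnation limit
risky = future[mean + 2 * std > limit]
if risky.size:
    print(f"Earliest hour at which the 2-sigma bound exceeds the limit: {risky.min():.0f} h")
else:
    print("2-sigma bound stays below the limit over the forecast horizon")
```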
6. Future Directions and Emerging Solutions
While the challenges described limit the current reach of AI in lubricant research, they have also guided the direction of innovation. Researchers are now developing frameworks that integrate physical modeling with data science, expand interoperability through digital twins, and embed sustainability and human oversight directly into design. These efforts collectively aim to create an intelligent, adaptive ecosystem that links formulation, production, and field operation.
Progress in AI depends on access to consistent, high-quality data. Following the data inconsistency challenges outlined in
Section 5, Oweida et al. emphasized that integrating materials science with data science requires curated repositories and unified metadata standards [
57]. For lubricants, this translates into developing shared databases that include both chemical and tribological information in machine-readable formats. Digital laboratory management systems can help to minimize measurement variation by enforcing standardized test protocols and enabling automatic data ingestion. Secure industrial data pipelines connecting formulation laboratories, blending plants, and monitoring systems—such as those proposed in emerging digital twin frameworks [
44,
45]—will ensure that data integrity is maintained across the entire lifecycle [
69]. Establishing this foundation will be essential for reproducible, scalable AI applications in tribology.
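A minimal sketch of what such a machine-readable record might look like is given below; the field names and the use of a Python dataclass serialized to JSON are illustrative choices, not an established metadata standard.
```python
# Minimal sketch: one possible machine-readable record for a shared lubricant/tribology
# repository. Field names are hypothetical and do not follow an established standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class TribologyTestRecord:
    sample_id: str
    base_oil: str                      # e.g., "PAO-6", "Group III"
    additives: dict                    # additive name -> wt%
    test_method: str                   # e.g., "ASTM D4172 four-ball"
    temperature_C: float
    load_N: float
    speed_rpm: float
    duration_min: float
    wear_scar_mm: float
    friction_coefficient: float
    instrument_id: str = "unknown"     # retained so instrument bias can be modeled later
    protocol_version: str = "v1"       # supports merging data collected under revised protocols

record = TribologyTestRecord(
    sample_id="LUB-0001",
    base_oil="PAO-6",
    additives={"ZDDP": 0.8, "MoDTC": 0.5},
    test_method="ASTM D4172 four-ball",
    temperature_C=75.0, load_N=392.0, speed_rpm=1200.0, duration_min=60.0,
    wear_scar_mm=0.42, friction_coefficient=0.085,
)
print(json.dumps(asdict(record), indent=2))   # ready for ingestion into a shared database
```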
To overcome the limited transferability, the next generation of lubricant modeling will rely on hybrid approaches that blend physical equations with data-driven inference. Raissi et al. introduced physics-informed neural networks, a class of models that incorporate the governing physical laws such as conservation of mass, momentum, and energy directly into the learning process [
60]. Instead of optimizing solely for statistical accuracy, these networks minimize both prediction error and physical law violations, which stabilizes training and improves generalization, especially when experimental data are limited. Building on this foundation, hybrid frameworks combine ML algorithms with established physical relationships such as viscosity–temperature correlations, rheological flow behavior, and boundary film formation kinetics. Rather than treating physics and data science as competing approaches, hybridization allows both to inform each other: the physical laws constrain the model within realistic boundaries, while machine learning captures complex nonlinear dependencies that traditional equations cannot represent. This structure makes AI more interpretable and reliable for engineering use. It allows researchers to quantify uncertainty and visualize how different factors like temperature, shear rate, or additive concentration interact to influence film formation or wear behavior. As new operational data accumulates, incremental retraining extends model accuracy to new lubricant chemistries and mechanical systems without requiring complete redevelopment.
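The sketch below illustrates the general idea with a deliberately simplified physics constraint: a small network predicts log-viscosity from temperature, and the loss penalizes any region where the prediction increases with temperature. The data, network size, and constraint are assumptions for illustration and do not reproduce the cited physics-informed formulation.
```python
# Minimal sketch of a physics-constrained loss: a small network predicts log-viscosity
# from temperature, and the loss combines a data-fit term with a penalty on violations
# of a simple physical expectation, here d(log eta)/dT <= 0. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Sparse synthetic "measurements": temperature (deg C) vs. log viscosity (illustrative).
T_data = torch.tensor([[40.0], [60.0], [100.0]])
logv_data = torch.tensor([[4.6], [3.9], [2.8]])

# Collocation points where only the physics constraint is enforced (no measurements).
T_phys = torch.linspace(20.0, 140.0, 60).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    data_loss = nn.functional.mse_loss(net(T_data / 100.0), logv_data)

    # Physics penalty: gradient of the prediction w.r.t. temperature should be <= 0.
    pred = net(T_phys / 100.0)
    dlogv_dT = torch.autograd.grad(pred.sum(), T_phys, create_graph=True)[0]
    phys_loss = torch.relu(dlogv_dT).pow(2).mean()

    loss = data_loss + 10.0 * phys_loss
    loss.backward()
    opt.step()

print(f"data loss: {data_loss.item():.4f}, physics penalty: {phys_loss.item():.6f}")
```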
Achieving full lifecycle integration requires interoperable data systems that connect formulation, blending, and in-service monitoring. Burns et al. noted that open architectures and standardized communication protocols are key to seamless data exchange [
62]. Digital twin platforms can serve as central hubs, synchronizing inputs from formulation labs, production lines, and field sensors. Frameworks proposed by Rebello and Nogueira [
25] and Peterson et al. [
26] demonstrate how continuously synchronized plant-level twins can detect deviations in additive dispersion or blending quality in real time. Extending these architectures across the lubricant lifecycle will allow experimental results to refine maintenance algorithms while field data update formulation strategies, forming a closed feedback loop that connects research, production, and performance.
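A minimal sketch of the incremental-update half of such a loop is shown below, assuming simulated field batches and placeholder sensor features rather than the cited plant-level architectures.
```python
# Minimal sketch: incremental model updates from streaming field data, the kind of
# closed-loop refinement a digital twin hub could orchestrate. Feature names and the
# batching scheme are illustrative.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)

def field_batch(n=50):
    """Simulated batch of sensor features -> degradation-related target."""
    X = rng.normal(size=(n, 4))        # e.g., viscosity drift, TAN, Fe ppm, temperature
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.normal(size=n)
    return X, y

# The first batch fits the scaler and initializes the model; later batches refine it.
X0, y0 = field_batch()
model.partial_fit(scaler.fit_transform(X0), y0)

for _ in range(10):                    # subsequent service data arriving over time
    Xb, yb = field_batch()
    model.partial_fit(scaler.transform(Xb), yb)

X_eval, y_eval = field_batch()
print(f"R^2 on a fresh field batch: {model.score(scaler.transform(X_eval), y_eval):.3f}")
```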
Responding to the gap between performance optimization and ecological responsibility, AI-enabled sustainability tools bring environmental metrics directly into the optimization loop. Guo et al. showed that combining lifecycle analysis with ML models can rapidly estimate the environmental impacts of hydrothermal bio-oils, a framework that can be adapted to lubricants to predict carbon footprint and toxicity from base oil chemistry and service lifetime [
64]. Wilińska and Wilkanowicz [
65] emphasized that environmentally friendly lubricants must maintain high performance while minimizing toxicity and waste. Integrating sustainability indicators such as biodegradability or carbon intensity directly into AI optimization objectives will help guide the industry toward circular economy practices and greener formulations.
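The sketch below shows one simple way such indicators could enter an optimization objective: candidate formulations are ranked by a weighted sum of a predicted performance metric and a predicted environmental indicator, with both predictors reduced to placeholder functions standing in for trained models.
```python
# Minimal sketch: ranking candidate formulations by a weighted objective that mixes
# a predicted performance metric with a predicted environmental indicator.
# Both "predictors" below are placeholders standing in for trained ML models.
import numpy as np

rng = np.random.default_rng(4)
candidates = rng.uniform(0.0, 2.0, size=(100, 3))   # hypothetical additive loadings, wt%

def predict_friction(x):          # stand-in for a trained property model (lower is better)
    return 0.10 - 0.02 * x[:, 0] + 0.01 * x[:, 1] ** 2

def predict_carbon_intensity(x):  # stand-in for an LCA-trained model (lower is better)
    return 1.0 + 0.3 * x.sum(axis=1)

w_perf, w_env = 0.7, 0.3          # illustrative trade-off weights
score = w_perf * predict_friction(candidates) + w_env * 0.05 * predict_carbon_intensity(candidates)
best = candidates[np.argsort(score)[:5]]
print("Top-5 candidate loadings (wt%):\n", np.round(best, 2))
```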
To counter the trust and transparency challenges, Industry 5.0 reframes intelligent manufacturing as a partnership between people and machines. Nahavandi described this as human-centric, enhancing creativity and contextual reasoning rather than replacing human input [
67]. In lubricant development and manufacturing, human-in-the-loop frameworks let experts correct and refine AI outputs through iterative feedback, improving reliability and transparency. Interactive dashboards and explainable-AI modules make model reasoning accessible to engineers and technicians. These collaborative systems embody the “golden batch” strategy introduced in lubricant blending [
56], where human oversight ensures that AI-optimized operations maintain consistency, safety, and sustainability across production cycles.
Cross-company collaboration requires secure methods of sharing insight without exposing proprietary data. Federated learning provides one solution by allowing multiple participants to train shared AI models without exchanging raw datasets. Zhan et al. [
68] reviewed recent federated learning architectures designed for secure cloud–edge collaboration, showing that lightweight models can perform distributed training while preserving confidentiality. For lubricant manufacturers, this approach enables real-time analytics on local edge devices while synchronizing with global models for continual improvement. Rebello and Nogueira [
25] previously outlined edge–cloud architectures for lubricant blending that distribute computational loads between local sensors and remote servers, enhancing resilience, scalability, and data protection across connected facilities.
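A toy federated-averaging round, sketched below, conveys the core idea: each site fits a model on data that never leaves the site and shares only its coefficients, which a coordinator averages by site size. The linear models and weighting are illustrative, not the cited cloud–edge designs.
```python
# Minimal sketch of one federated-averaging round: each site fits a local model on its
# own (private) data and shares only model coefficients; the coordinator averages them.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
true_w = np.array([1.5, -0.8, 0.3])

def local_site_data(n):
    X = rng.normal(size=(n, 3))                       # stays on the local site
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

site_sizes = [80, 120, 60]
coefs, intercepts = [], []
for n in site_sizes:
    X, y = local_site_data(n)
    m = Ridge(alpha=1.0).fit(X, y)
    coefs.append(m.coef_)                             # only parameters leave the site
    intercepts.append(m.intercept_)

weights = np.array(site_sizes) / sum(site_sizes)      # FedAvg-style weighting by site size
global_coef = np.average(coefs, axis=0, weights=weights)
global_intercept = np.average(intercepts, weights=weights)
print("global coefficients:", np.round(global_coef, 3))
```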
Emerging computing approaches push lubricant research toward autonomous discovery. Jha and Kasabov discussed how quantum-inspired and neuromorphic systems can accelerate model training and improve energy efficiency [
70]. Tom et al. described “self-driving laboratories” that integrate robotics, AI, and closed-loop optimization to autonomously mix base oils and additives, test under controlled conditions, analyze wear behavior, and refine compositions without direct human intervention [
71]. Coupling these autonomous experimentation systems with digital twin feedback would create a continuous learning loop that links virtual design, automated testing, and real-time manufacturing control. These advances outline a roadmap toward a fully connected, intelligent lubricant ecosystem. As hybrid modeling, federated learning, and Industry 5.0 principles converge, AI will enable data-driven design, adaptive manufacturing, and sustainable operation across the lubricant lifecycle.
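A minimal sketch of such a propose–test–update loop is given below, with a simulated measurement function standing in for robotic mixing and tribometer readings; the Gaussian process surrogate and upper-confidence-bound rule are illustrative choices.
```python
# Minimal sketch of a closed-loop experiment planner: a GP surrogate is refit after each
# "measurement" and an upper-confidence-bound rule proposes the next additive loading to
# test. The measurement function is simulated; a self-driving lab would replace it with
# automated blending and a real tribometer reading.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(6)

def run_experiment(conc):
    """Simulated response, e.g., friction reduction vs. additive concentration (wt%)."""
    return np.sin(3.0 * conc) * np.exp(-conc) + 0.01 * rng.normal()

grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)         # candidate concentrations
X = list(rng.uniform(0.0, 2.0, size=3).reshape(-1, 1))   # a few seed experiments
y = [run_experiment(x[0]) for x in X]

for _ in range(10):                                       # closed loop: propose -> test -> update
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.vstack(X), np.array(y))
    mean, std = gp.predict(grid, return_std=True)
    next_x = grid[np.argmax(mean + 2.0 * std)]            # UCB acquisition (maximize response)
    X.append(next_x.reshape(1, 1))
    y.append(run_experiment(next_x[0]))

print(f"Best observed concentration: {float(np.vstack(X)[np.argmax(y)][0]):.2f} wt%")
```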
7. Conclusions
Artificial intelligence is becoming an essential tool for lubricant research, linking design, testing, and operation through data-driven prediction. In formulation, AI models help spot promising molecular structures and forecast tribological behavior well before bench tests. For degradation and property monitoring, neural networks now interpret complex spectral data to estimate viscosity, oxidation stability, and wear trends in real time. Progress in digital twin modeling, multimodal sensing, and federated learning is already bringing these capabilities closer to everyday maintenance practice. AI-driven sustainability analytics are also helping reduce waste and identify greener additive pathways.
Even so, the path forward is uneven. Data remains scattered and hard to standardize, and many models still fail when applied to new lubricants or operating conditions. Building hybrid physics-guided architectures will be key to making predictions transferable and reliable. Equally important is transparency: engineers need to see how models reach their conclusions. Collaboration between companies, universities, and software developers will have to balance open data with the protection of proprietary knowledge.
As computing power grows and Industry 5.0 places greater emphasis on human–machine cooperation, AI is likely to act less as a replacement for engineering judgment and more as an extension of it. The goal is not to automate tribology entirely, but to make lubricant design and maintenance more efficient and sustainable across the sectors that depend on reliable friction control.