Review

Integrating Artificial Intelligence into Mechatronics: A Comprehensive Study of Its Influence on System Performance, Autonomy, and Manufacturing Efficiency

by
Ganiyat Salawu
* and
Bright Glen
Department of Mechanical Engineering, University of KwaZulu-Natal, Howard Campus, Durban 4041, South Africa
*
Author to whom correspondence should be addressed.
Technologies 2026, 14(3), 143; https://doi.org/10.3390/technologies14030143
Submission received: 17 January 2026 / Revised: 8 February 2026 / Accepted: 12 February 2026 / Published: 27 February 2026
(This article belongs to the Section Information and Communication Technologies)

Abstract

The rapid evolution of Artificial Intelligence (AI) has significantly transformed the capabilities, performance, and autonomy of modern mechatronic systems. As industries transition toward intelligent and interconnected manufacturing environments, AI has emerged as a powerful enabler of real-time decision-making, adaptive control, predictive maintenance, and autonomous operation. This review provides a comprehensive analysis of AI integration within mechatronic systems, examining its influence on system performance, autonomy, and manufacturing efficiency. Key AI techniques including machine learning, deep learning, reinforcement learning, evolutionary optimization, and computer vision are evaluated in terms of their applications in control, sensing, diagnostics, and robotics. The paper also highlights advancements in AI-driven motion control, autonomous navigation, sensor fusion, and smart factory operations. Critical challenges such as data requirements, computational constraints, system interoperability, and safety concerns are discussed to identify research gaps. Finally, emerging trends and future directions, such as edge AI, digital twins, explainable AI, and fully autonomous mechatronic cells, are explored. This review consolidates current knowledge and provides insights to guide researchers and practitioners in developing next-generation intelligent mechatronic systems capable of supporting the demands of Industry 4.0 and beyond.

Graphical Abstract

1. Introduction

The rapid growth of smart manufacturing, self-operating systems, and connected production has exposed the limits of traditional mechatronic system design. Traditional mechatronic systems, which depend on fixed mathematical models and predictable control methods, often struggle to handle changing dynamics, unexpected environmental factors, system variability, and rising performance demands. The increasing complexity and data intensity of industrial processes impose constraints on adaptability, robustness, and long-term operational efficiency.
Artificial intelligence (AI) provides a revolutionary means to surmount these limitations. AI changes mechatronic systems from following fixed rules to learning and improving on their own by using data, understanding their environment, and making decisions. AI-driven mechatronic systems can sense their surroundings, evaluate historical and real-time data, anticipate failures, enhance performance, and undertake control operations with limited human involvement. This paradigm shift has already shown considerable advantages in intelligent robotics, where learning-based control enhances adaptability in dynamic manufacturing settings [1]. In industrial manufacturing, machine learning and computer vision methodologies facilitate real-time process oversight, defect identification, quality control, and predictive maintenance, leading to enhanced productivity and minimized downtime. Likewise, AI-augmented sensors and actuators facilitate adaptive control and decentralized decision-making in mechatronic systems, hence, enhancing autonomy at the system level [2,3]. These improvements align closely with Industry 4.0 concepts, where AI plays a key role in smart production systems that use connected technology and can optimize processes in real time.
In addition to monitoring and observation, AI-based optimization methods, including reinforcement learning and evolutionary algorithms, help control systems adapt in real time using environmental feedback, surpassing traditional controllers in variable environments [4]. Nonetheless, despite these advancements, the incorporation of AI into mechatronic systems poses significant obstacles. These obstacles include the management of high-dimensional and heterogeneous data, adherence to real-time computational limits, the interpretability and trustworthiness of AI models, and the guarantee of system safety and reliability in mission-critical applications [5]. Given the wide variety of mechatronic applications, such as industrial automation, self-driving cars, medical devices, and high-precision manufacturing, there is a clear need for a careful and structured way to combine AI methods that fit the performance needs and physical limits of mechatronic systems. This review meets this need by carefully examining AI methods relevant to mechatronic systems, looking at how they work in smart control, optimization, decision-making, and embedded implementation. It also outlines current limitations, gaps in research, and new directions that will shape the next generation of smart mechatronic systems. The article draws on recent developments in artificial intelligence, control engineering, and manufacturing science, providing guidance for researchers, engineers, and practitioners who want to create robust, flexible, and scalable AI-powered mechatronic systems that fit with Industry 4.0 and the evolving world of smart automation.

2. Conceptual Background

2.1. Mechatronics Systems: Architecture and Components

The field of mechatronics combines mechanical, electrical, and electronic engineering, computer science, and control engineering to create smart systems and products. Synergistic integration of mechanical constructions, electrical control systems, and embedded software yields increasingly advanced automated and intelligent products. By merging electronics and computing with mechanical systems, mechatronics originally sought to increase their performance and functionality; it now spans healthcare, manufacturing, automotive technologies, robotics, consumer electronics, transportation systems, and more. The field is so dynamic that scholars continue to debate its definition, even as the term is widely used. Originally a portmanteau of “mechanics” and “electronics,” the field has evolved to encompass increasingly intelligent, adaptable, and autonomous system capabilities [6]. The literature emphasizes the integration of mechanical, electronic, control engineering, and information technology to solve technological problems and create integrated products or systems [7]. Mechatronics has become a basic paradigm in modern manufacturing systems, bridging the physical and digital domains [8]. Advanced mechatronic systems that combine sensors, actuators, information and communication technology (ICT), and smart control are crucial for Industry 4.0 because they form the basic hardware and software needed for smart and connected manufacturing environments [9].
Mechatronic systems comprise many interconnected subsystems. Mechanical components form the system’s substrate, generating, transmitting, and regulating forces and motion. These parts include gears, levers, linkages, frames, bearings, joints, springs, sensors, and actuators. Motors, hydraulic actuators, and pneumatic actuators convert control signals into mechanical motion. System states such as position, velocity, acceleration, and force are fed back via mechanical or electromechanical sensors, including force sensors, accelerometers, and encoders. Transmission components such as gears, cams, sliders, and linkages modify torque and velocity to transfer power and regulate motion, while bearings reduce friction and joints allow relative movement, especially in robotic systems. Frames and chassis provide mechanical stability, longevity, and protection for the integrated electrical and mechanical subsystems. Mechanical springs additionally provide energy storage, vibration absorption, suspension support, and force modulation. Together with computational and control components, these mechanical elements constitute a complete mechatronic system.
Computational algorithms and software, integrated with the mechanical and electronic subsystems, assess sensor data, produce control signals, and change system behavior in real time. Languages such as C (ISO/IEC 9899 standard), Python (Python 3.x series), and Java (Java SE 8) are used to implement control algorithms, while data structures help manage and analyze data. MATLAB/Simulink (MathWorks, version R2025b), LabVIEW (National Instruments, 2023 version), and SolidWorks (Dassault Systèmes, 2023) help simulate, analyze, and optimize mechatronic systems before physical implementation. Embedded systems built on microcontrollers and processors operate mechatronic devices in real time, while communication protocols such as CAN, I2C, and SPI regulate data transfer between sensors, actuators, and controllers. The graphical user interfaces of SCADA systems enable human interaction, monitoring, and supervision of industrial mechatronic processes. Real-time data collection, analysis, and decision-making improve system flexibility, accuracy, and operational effectiveness as sensor technology, embedded processors, and wireless communication systems advance [10]. ICT has also shown its potential to smooth the integration of mechatronic systems, enabling distributed control and collaboration in intelligent factories and autonomous production systems [11].

2.2. Evolution of AI in Engineering Systems

Multiple paradigm shifts have occurred since the inception of artificial intelligence, each of which has impacted its application in engineering systems [12,13,14]. Early symbolic, rule-based systems proved brittle when confronted with noise, uncertainty, and incomplete knowledge. To overcome these problems, AI research shifted to statistical and probabilistic methods, which let systems reason under noise and uncertainty. This move led to statistical AI, which was more flexible and robust than rule-based systems in signal processing, robotics, and sensor fusion [15]. Classical machine learning (ML) heralded the next major advancement, moving from knowledge-based rules to data-based learning. Classical machine learning algorithms let systems find patterns in data and generalize to new situations without explicit programming. These supervised, unsupervised, and reinforcement learning frameworks were used for classification, regression, clustering, and dimensionality reduction [16,17]. Traditional machine learning approaches have scalability issues, limited ability to express complicated nonlinear relationships, and poor flexibility to changing data distributions, especially in large-scale and dynamic engineering environments. These limitations opened the way for deep learning (DL) in the 2010s, which revolutionized AI in engineering systems. Multilayer neural network-based deep learning models automated hierarchical feature extraction from raw data, eliminating the need for manual feature engineering [15]. Convolutional neural networks (CNNs) for visual perception, recurrent neural networks (RNNs) and long short-term memory (LSTM) networks for sequential data, and Transformer architectures have advanced computer vision, speech recognition, natural language processing, and autonomous systems [18,19,20,21]. These methods integrate learning, reasoning, perception, and decision-making, enabling greater autonomy and generalization across domains [15,22,23].
Reinforcement learning can help agents learn optimal policies in complex contexts, making it promising for control systems and robotics. Hybrid and neuro-symbolic techniques combine neural learning with symbolic logic and domain expertise to improve data-driven model interpretability and reasoning [23]. Powerful computers, clever algorithms, and massive datasets have transformed AI since its inception. This section covers AI’s history from symbolic systems to deep learning, as seen in Figure 1.

2.3. Intersection of AI and Mechatronics

AI and mechatronics have created a new paradigm for intelligent engineering systems [2]. In complex and dynamic contexts, modern mechatronic applications, including industrial automation, robotics, wearable medical devices, and autonomous vehicles, require seamless integration of sensors, actuators, and control algorithms. These systems formerly used fixed control algorithms and behavioral patterns. As mechatronic systems become more interconnected, autonomous, and data-intensive, traditional approaches struggle to manage uncertainties, nonlinear dynamics, and adaptive performance demands. Advanced actuators and sensors underpin this change. Smart sensors can collect data and preprocess it locally using embedded intelligence, enabling observation, interpretation, and context-aware responses to real-world conditions. Intelligent actuators typically function autonomously using real-time data streams or predictive analytics. With AI, these components may adapt to changing operational conditions, learn from past experiences, and intelligently coordinate with other subsystems, boosting system performance and resilience. ML, DL, RL, fuzzy logic, and hybrid intelligent systems are being integrated into sensing and actuation mechanisms. These methods enable predictive maintenance, real-time optimization, adaptive decision-making, and self-calibration in mechatronic systems beyond reactive control. Deep neural networks are used for multi-sensor data fusion in robotic vision systems, and reinforcement learning helps autonomous robots find optimal control strategies in dynamic environments without explicit programming [2]. The feasibility and benefits of AI integration in mechatronics have been investigated extensively in the academic literature.
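Although the cited systems use deep networks for fusion, the underlying idea of combining complementary sensors can be illustrated with a much simpler classical technique: a complementary filter that blends an integrated rate-sensor (gyroscope) signal with a noisy absolute (accelerometer) angle estimate. The sketch below is purely illustrative; the sensor values and filter constant are assumptions, not taken from any cited system.

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a drifting rate sensor with a noisy absolute sensor.

    alpha weights the integrated gyro path (accurate at high frequency);
    (1 - alpha) lets the accelerometer slowly correct long-term drift.
    """
    angle = 0.0
    for rate, accel in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel
    return angle

# A stationary joint held at 0.5 rad: the gyro reports ~zero rate,
# while the accelerometer reports the absolute angle directly.
est = complementary_filter([0.0] * 400, [0.5] * 400)
```

The filter converges geometrically toward the accelerometer's absolute reading while remaining responsive to fast gyro-measured motion, which is why this structure is a common low-cost baseline before learned fusion is introduced.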
Hashemi & Dowlatshahi [24] provide a comprehensive review of AI’s applicability in mechatronics engineering, while Zaitceva & Andrievsky [25] use the interdependence between control theory and AI to develop intelligent control strategies for robotic and mechatronic applications. This research shows that AI methods solve previously intractable control and optimization problems while improving computational efficiency and implementation adaptability.
Industry 4.0 depends on the integration of AI and mechatronics. Combining artificial intelligence and machine learning with the massive operational data generated by modern mechatronic systems enables sophisticated predictive maintenance techniques. Data-driven algorithms can detect small irregularities, predict equipment failures, and initiate proactive maintenance, optimizing system availability and avoiding unplanned downtime. In addition to maintenance, AI-enhanced mechatronic systems enable flexible manufacturing. Industry 4.0 data analytics can help mechatronic systems produce small batches of customized products with mass-production quality and efficiency. Mass personalization, which produces customized items at scale to meet consumer preferences, relies on this capability [3,26]. Recent advances in edge AI, Tiny Machine Learning (TinyML), and digital twins are accelerating intelligent mechatronic system implementation. Edge computing lets AI models run on embedded devices near sensors and actuators, reducing latency, communication overhead, and energy usage. Digital twins enable AI model training, validation, and optimization in simulated environments before real-world application. However, integrating AI models into resource-limited embedded platforms, ensuring reliability and explainability in safety-critical applications, managing diverse data sources, and the lack of standardized frameworks for AI-driven sensor-actuator systems continue to hinder progress, scalability, and interoperability [2,27].
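The data-driven anomaly detection underpinning predictive maintenance can be sketched in its simplest form, a rolling z-score that flags sensor readings deviating sharply from recent history. This is an illustrative baseline only (the signal, window, and threshold are assumptions, not a method from the cited works):

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(readings, window=20, threshold=3.0):
    """Flag indices whose value lies more than `threshold` sample
    standard deviations from the rolling mean of the previous
    `window` readings."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)
    return flagged

# A steady vibration-like signal with a single fault spike at index 30.
signal = [1.0 + 0.01 * (i % 3) for i in range(60)]
signal[30] = 5.0
flags = zscore_anomalies(signal)
```

Production systems replace the rolling statistics with learned models (autoencoders, isolation forests, LSTM predictors), but the structure, compare each new reading against a model of normal behavior and alert on large deviations, is the same.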

2.4. Materials and Mechanism-Level Perspectives in AI-Enabled Mechatronic Systems

Mechatronic systems rely fundamentally on the interaction between materials, mechanical structures, sensors, actuators, and control intelligence. While artificial intelligence enhances perception, decision-making, and optimization, system performance is ultimately constrained by material properties such as stiffness, hardness, thermal stability, electrical conductivity, wear resistance, and surface integrity. Consequently, advancing AI-enabled mechatronic systems beyond algorithmic improvements requires a deep understanding of material behavior and the underlying actuation and control mechanisms governing physical processes [28,29]. Modern mechatronic systems employ a wide range of structural, functional, and intelligent materials. Structural materials such as high-strength steels, aluminum alloys, titanium alloys, ceramics, and advanced composites are selected to satisfy requirements related to strength, weight reduction, fatigue resistance, and manufacturability. Functional and smart materials including piezoelectric ceramics, shape memory alloys, magnetostrictive materials, and electroactive polymers enable precise motion control, vibration suppression, and adaptive system responses. In high-end applications such as aerospace manufacturing, ultra-precision machining, and optical systems, materials such as diamond, cubic boron nitride, and advanced ceramics are widely used due to their exceptional mechanical, thermal, and wear-resistant properties [30,31].
AI-enabled mechatronic systems operate through complex multi-physical field interactions involving mechanical forces, thermal effects, electromagnetic fields, chemical reactions, and surface phenomena at the mechanism level [32]. Artificial intelligence does not replace these physical mechanisms; rather, it enhances and orchestrates them. In precision manufacturing, material removal mechanisms including plastic deformation, brittle fracture, chemical etching, and thermal ablation are governed by intricate interactions among tool kinematics, contact mechanics, heat transfer, and material microstructure. AI-driven control systems can dynamically adjust process parameters in real time based on sensor feedback, maintaining optimal material removal modes while suppressing defect formation [33].
Recent advances in ultra-precision surface engineering demonstrate how multi-physical field coupling techniques integrate mechanical, chemical, plasma, and energy-assisted processes to achieve atomic-scale material removal with minimal subsurface damage. Studies on multi-physical field coupling polishing of diamond reveal how the coordinated application of mechanical action and energy fields directly influences surface roughness and subsurface integrity [34]. Investigations into inductively coupled plasma diamond polishing further show that variations in processing parameters significantly affect material removal rates and surface finish at the sub-nanometer scale [35]. These studies illustrate how AI-based optimization frameworks can adapt tool trajectories, energy input, and environmental conditions using real-time sensor feedback, transforming traditional open-loop or experience-based processes into intelligent closed-loop systems [36].
Beyond surface engineering, AI-driven optimization methodologies have been increasingly applied across materials-intensive mechatronic applications. Existing frameworks include: (1) training machine learning models as objective and constraint functions for optimization problems; (2) using machine learning to enhance the efficiency of optimization algorithms; (3) employing neural networks as surrogate models to approximate complex simulations such as finite element analysis and computational fluid dynamics; and (4) predicting optimal design parameters or initial solutions that are subsequently refined through optimization. These approaches are particularly effective in capturing nonlinear relationships among processing parameters, material microstructure, and functional performance that are difficult to describe analytically [37].
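Pattern (3) above, a cheap surrogate standing in for an expensive simulation, can be sketched in miniature. Here the "expensive" function is an assumed analytic stand-in (purely illustrative, not a real FEA/CFD model), and the surrogate is a quadratic least-squares fit computed from the normal equations in pure Python:

```python
def fit_quadratic_surrogate(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal
    equations, solved with Gaussian elimination (pure Python)."""
    # Build A^T A and A^T y for the design matrix [1, x, x^2].
    A = [[x ** j for j in range(3)] for x in xs]
    M = [[sum(A[k][i] * A[k][j] for k in range(len(xs)))
          for j in range(3)] for i in range(3)]
    b = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (b[i] - sum(M[i][j] * coeffs[j]
                                for j in range(i + 1, 3))) / M[i][i]
    return lambda x: coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2

def expensive(x):
    # Stand-in for a costly simulation, e.g. roughness vs. feed rate.
    return 0.8 - 1.2 * x + 2.0 * x ** 2

xs = [i / 10 for i in range(11)]
surrogate = fit_quadratic_surrogate(xs, [expensive(x) for x in xs])
```

Once fitted from a handful of simulation runs, the surrogate can be queried thousands of times inside an optimization loop at negligible cost; real applications use neural networks or Gaussian processes in place of the polynomial.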
In materials processing and manufacturing, AI-based surrogate models are widely used to predict surface roughness, residual stress, material removal rate, phase transformation, and defect formation, enabling rapid process optimization while significantly reducing experimental and computational costs. Despite these advances, challenges remain in data quality, model interpretability, generalization, and computational expense, particularly in real-time and safety-critical environments. Future progress depends on integrating explainable and physics-informed AI with mechatronic system design and control frameworks, ensuring robustness, scalability, and trustworthiness in industrial deployment [38]. Figure 2 illustrates the mechanism-level closed-loop interaction between material behavior, physical processes, AI-based modeling, and mechatronic actuation in intelligent manufacturing systems.

3. Artificial Intelligence Techniques Relevant to Mechatronics

Researchers have continuously advanced artificial intelligence to support increasingly complex engineering and mechatronic systems. In recent years, AI has significantly influenced everyday technologies and industrial problem-solving. This section briefly introduces the historical development, applications, limitations, and future prospects of artificial intelligence relevant to engineering systems [39]. Modern AI research formally emerged in the 1950s, marking the beginning of algorithmic approaches to machine intelligence. At the 1956 Dartmouth Symposium, researchers discussed intelligent automata and coined the term “artificial intelligence,” laying the conceptual foundation for data-driven reasoning and decision-making in engineering systems [40]. Figure 3 summarizes the evolution of artificial intelligence paradigms and highlights their relevance to contemporary mechatronic and manufacturing applications.

3.1. Machine Learning

General learning involves acquiring or changing behaviors, attitudes, knowledge, abilities, or preferences. Humans learn largely through experience; machines learn from data. Machine learning (ML) is a subfield of AI that lets computers think and learn autonomously. It focuses on allowing computers to adapt their behaviors to improve performance, where accuracy is measured by the frequency of correct actions [41]. Machine learning aims to create systems that can learn from data and perform classification, regression, clustering, anomaly detection, and reinforcement learning [42,43,44]. Machine learning began in the mid-20th century, when researchers started studying how machines could mimic human intelligence and learning. One of the first machine learning achievements was the creation of the artificial neural network, a mathematical model based on biological neurons. Warren McCulloch and Walter Pitts created a simple neural network model in 1943 using electrical circuits for logical operations. They proved that a network of neurons can compute any logical function, laying the framework for neural computation research [45]. Alan Turing, who developed a machine intelligence test in 1950, was another machine learning pioneer. The Turing test required a human interrogator to distinguish between humans and machines based on their responses. Turing also postulated that machines could learn and improve over time. Arthur Samuel invented a checkers-playing self-learning algorithm in 1952. Samuel used alpha–beta pruning to reduce the moves considered and a scoring mechanism to evaluate board positions. The system learned from its mistakes and improved its strategy through self-play using reinforcement learning. Frank Rosenblatt invented the Perceptron neural network in 1957 to classify patterns. The perceptron consisted of input and output layers connected by adjustable weights, and Rosenblatt devised a learning rule that corrected the weights whenever the perceptron made an error.
He showed that the perceptron could recognize simple shapes and characters [46]. The nearest neighbor approach, formalized in 1967, is simple but effective for classification and regression; it is still used in image recognition, recommendation systems, and anomaly detection [47]. The backpropagation algorithm, developed by Paul Werbos in 1974, trains multilayer neural networks effectively: it calculates the output-layer error and propagates it backward through the network to adjust the weights. Backpropagation enabled neural networks to learn complex nonlinear functions and solve problems beyond the capabilities of simpler models. Hans Moravec and his Stanford University team created the Stanford Cart in 1979, a camera- and computer-controlled autonomous vehicle. The Stanford Cart’s machine vision system controlled steering and propulsion by analyzing photographs, making it one of the first robotics applications of machine learning. Despite these successes, machine learning faced several obstacles and restrictions in the following decades. These hurdles led to AI winters, periods of reduced funding and interest, during which machine learning research slowed until later breakthroughs reinvigorated the field [48,49].
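Rosenblatt's error-correction rule is compact enough to reproduce in a few lines. The sketch below trains a perceptron on the logical AND function; the dataset, learning rate, and epoch count are illustrative choices, not details from the original work.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt's rule: on each misclassification, nudge the
    weights toward the correct side of the decision boundary."""
    w = [0.0, 0.0]   # input weights
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

# Logical AND: output 1 only when both inputs are 1 (linearly separable).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
predict = train_perceptron(data)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule finds a separating boundary; XOR, famously, defeats it, which is precisely the limitation that motivated multilayer networks and backpropagation.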

3.2. Deep Learning

Deep learning trains neural networks with several hidden layers to represent data at increasing levels of abstraction. Deep learning does not require hand-crafted features and can learn directly from raw data, making it ideal for complex and high-dimensional problems. CNNs are popular in image and video analysis because they capture spatial hierarchies using convolutional and pooling layers. Architectural innovations like EfficientNet optimize model scaling with fewer parameters, while Vision Transformers (ViTs) adapt transformer architectures to visual data, improving scalability and accuracy in large-scale vision applications [50]. RNNs and their variants, such as LSTM networks and GRUs, are widely used in time-series forecasting and language modeling for sequential and temporal data. However, transformer-based models like BERT, GPT, and T5 have advanced natural language processing by using self-attention mechanisms to capture long-range dependencies better than RNN architectures [50]. Shallow models like SVMs, GMMs, and linear or nonlinear dynamical systems were used for signal processing and pattern identification before deep learning. Effective for simpler problems, these methods struggle with complex data such as speech, language, and visual scenes, which require more sophisticated and expressive architectures [51]. Backpropagation (BP)-trained feed-forward neural networks were the foundation of deep learning [52]. Henry J. Kelley, a control theory researcher, introduced continuous backpropagation [53]. Backpropagation initially struggled with deep architectures due to local optima in non-convex optimization landscapes [44,54]. Unsupervised, layer-wise pretraining alleviated these constraints. Kunihiko Fukushima invented the first convolutional neural network capable of identifying visual patterns, and Yann LeCun made practical use of backpropagation-trained convolutional neural networks for handwritten digit recognition [15,54].
The introduction of Deep Belief Networks (DBNs) built from stacked Restricted Boltzmann Machines (RBMs) enabled greedy layer-wise training and the optimization of deep architectures [55]. Increasing the depth and number of neurons in deep neural networks improves modeling capacity and robustness, and the greater expressiveness reduces vulnerability to poor local optima [56]. The computing requirements of these advances slowed early adoption, but the availability of large datasets and high-performance computing resources revived deep learning. In 2009, Fei-Fei Li created ImageNet, a dataset of over 14 million tagged photos that enabled massive deep convolutional neural network training. Deep learning’s superiority over conventional methods was demonstrated by AlexNet’s 2012 ImageNet victory with GPU-accelerated training [53,57]. Later advances, such as Generative Adversarial Networks (GANs), have given deep learning systems powerful generative capabilities, allowing them to produce realistic data for art, fashion, and scientific modeling [58]. Deep learning is used in robotics, healthcare, cybersecurity, intelligent manufacturing, virtual assistants, image recognition, and natural language processing, advancing modern intelligent systems [59].
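The backpropagation mechanism discussed above can be demonstrated with a deliberately tiny feed-forward network, one hidden layer of two sigmoid units, trained on XOR (the problem a single perceptron cannot solve). This is an illustrative sketch under assumed hyperparameters, not a production recipe:

```python
import math
import random

def train_xor(epochs=5000, lr=0.5, seed=42):
    """2-2-1 sigmoid network trained by backpropagation on XOR.
    Returns the per-epoch total squared-error loss."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    W2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    losses = []
    for _ in range(epochs):
        total = 0.0
        for (x1, x2), t in data:
            # Forward pass.
            h = [sig(W1[j][0] * x1 + W1[j][1] * x2 + b1[j])
                 for j in range(2)]
            y = sig(W2[0] * h[0] + W2[1] * h[1] + b2)
            total += (y - t) ** 2
            # Backward pass: propagate the output error through
            # the sigmoid derivatives to every weight.
            dy = 2 * (y - t) * y * (1 - y)
            for j in range(2):
                dh = dy * W2[j] * h[j] * (1 - h[j])  # uses pre-update W2
                W2[j] -= lr * dy * h[j]
                W1[j][0] -= lr * dh * x1
                W1[j][1] -= lr * dh * x2
                b1[j] -= lr * dh
            b2 -= lr * dy
        losses.append(total)
    return losses

losses = train_xor()
```

The falling loss curve is the whole story of backpropagation in miniature: hierarchical features (here, the two hidden units) are shaped purely by error signals flowing backward from the output.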

3.3. Reinforcement Learning

Object recognition in image analysis and autonomous vehicles are among the applications of reinforcement learning (RL), a rapidly growing field within machine learning. In reinforcement learning, an agent interacts with its surroundings, takes actions, and obtains rewards or penalties. Because no labeled training data are required, the agent learns by optimizing its total reward over time. Through repeated encounters, positive reinforcement promotes desired behaviors and negative reinforcement discourages undesirable ones, helping the agent develop effective decision-making strategies [60]. Reinforcement learning is thus defined by the interaction between an agent and its environment: at time t, the agent in state s_t selects an action, transitions to state s_{t+1}, and receives the reward r_{t+1}, which informs its next decision. During interaction with an uncertain dynamic environment, the agent acquires the “optimal decision policy,” which prescribes the best action in each state [61,62]. Figure 4 shows reinforcement learning schematically. Kurrek et al. developed a control policy formulation method for various robotic systems, settings, and manipulation tasks, implemented in a physical or virtual environment [63].
Kim et al. conducted a virtual case study of an inverted pendulum managed by a proportional-integral-derivative (PID) controller to demonstrate reinforcement learning [64]. Real-time agent-environment interaction is crucial [65]: the agent observes the environment’s responses to random or suboptimal actions in real time and receives rewards for them. Iterative reinforcement learning alternates policy evaluation and policy improvement. In the literature, reinforcement learning is also called approximate dynamic programming, neuro-dynamic programming, and adaptive critic design [25]. Werbos framed reinforcement learning in terms of approximate dynamic programming [66]. In general, approximate dynamic programming includes any computational method that seeks to solve the Bellman equation with the most accurate feasible solution, satisfying dynamic programming’s optimality requirements. Control theory views reinforcement learning as adaptive optimal control, with an adaptive controller that converges to the optimal solution [67]. A comparative assessment of ML, DL, and RL techniques for engineering and mechatronic systems is summarized in Table 1.
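The agent-environment loop of Figure 4 and the Bellman update can be made concrete with tabular Q-learning on a toy corridor task. Everything here, the environment, rewards, and hyperparameters, is an illustrative assumption; the cited works use far richer settings.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
               epsilon=0.1, seed=0):
    """Tabular Q-learning on a corridor: start at state 0, +1 reward
    for reaching the rightmost (terminal) state.
    Actions: 0 = move left, 1 = move right."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bellman update: move Q toward reward + discounted value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# Greedy policy at each non-terminal state (1 = move right).
policy = [0 if q[0] > q[1] else 1 for q in Q[:-1]]
```

After training, the greedy policy moves right everywhere, and the learned values decay geometrically (by the factor gamma) with distance from the reward, exactly the structure the Bellman equation prescribes.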

4. AI for Enhancing Mechatronic System Performance

4.1. Intelligent Control Systems (AI-PID, Adaptive Control)

Systems and control engineering have contributed significantly to the advancement of automation [69]. A common control system architecture contains an input, a controller, and an output. Control engineering creates, analyzes, and implements dynamic system control mechanisms to fulfill performance goals [70,71]. Automation improves output, reduces mistakes, and reduces human involvement; it also lowers manufacturing costs and enhances safety, especially in dangerous tasks. Proper regulation keeps equipment safe and within operating limits: temperature controllers, for example, regulate refrigerator and iron temperatures to prevent damage. These requirements have led to sophisticated control systems, notably the proportional-integral-derivative (PID) controller, the most popular industrial controller [72,73]. As shown in Figure 5, the PID controller combines proportional, integral, and derivative actions to correct control errors adaptively and improve system stability and transient response [74].
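A textbook discrete-time implementation of such a PID loop can be sketched as follows; the gains and the first-order plant are illustrative assumptions, not parameters from any cited system.

```python
class PID:
    """Discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate for Ki
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Close the loop around a first-order plant dy/dt = (u - y) / tau.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
tau, dt = 0.5, 0.01
y = 0.0
for _ in range(2000):
    u = pid.update(1.0, y)      # track a unit setpoint
    y += dt * (u - y) / tau     # Euler step of the plant
```

The proportional term alone would leave a steady-state error (here the plant would settle at 2/3); the integral term accumulates that residual error and drives the output to the setpoint, which is the practical reason PI and PID dominate industrial loops.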
Classic PID controllers, though widely used, perform poorly when system parameters change or external disturbances occur. This limitation motivated adaptive control, which, like optimal control, rests on a solid theoretical foundation with rigorous mathematical validation. Adaptive control is a natural evolution from automatic to intelligent control because it allows controllers to perform well as plant parameters and ambient conditions change [75]. Intelligent control today relies on adaptation, from self-tuning controllers to adaptive learning systems. Discrete-time adaptive control has also influenced AI: V. Yakubovich’s training algorithms for linear classification models drew on both adaptive control theory and machine learning [76,77]. This connection led Lipkovich [78] to revisit Yakubovich’s Stripe Algorithm (SA). His numerical analysis shows that the SA is well suited to online machine learning, outperforming other linear learning methods in some cases. Lipkovich also showed that classification and regression loss minimization can be reformulated as systems of inequalities. Complex nonlinear models generally outperform linear approaches, yet the SA remains attractive for real-time control applications owing to its easy implementation, minimal processing demands, and good interpretability. Intelligent control systems have advanced in recent decades as industrial processes have become more complex, nonlinear, and uncertain. Traditional control approaches work well within well-defined operating envelopes but fail under rapid fluctuations, poor system representations, and multivariable interactions. Fuzzy logic, neural networks, evolutionary computation, and machine learning offer control paradigms that learn from data, adapt to disturbances, and manage complexity beyond standard controllers. Fuzzy inference systems and neural controllers initially supplemented PID and state-space control, whereas hybrid neuro-fuzzy systems, metaheuristic optimization, predictive control structures, and deep reinforcement learning are more recent methodologies.
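The following simplified sketch captures the projection-style update at the heart of stripe-type algorithms: the weight vector is corrected only when a sample violates the stripe inequality |w·x − y| ≤ ε. The synthetic data and the exact update form are illustrative assumptions rather than a faithful reproduction of Yakubovich’s or Lipkovich’s formulations.

```python
import numpy as np

def stripe_update(w, x, y, eps):
    """One online update in the spirit of the Stripe Algorithm (simplified
    sketch): if |w.x - y| > eps, project w onto the stripe
    {w : |w.x - y| <= eps}; otherwise leave w unchanged."""
    margin = w @ x - y
    if abs(margin) <= eps:
        return w
    # project onto the nearest face of the stripe
    shift = (abs(margin) - eps) * np.sign(margin) / (x @ x)
    return w - shift * x

# Fit a noisy linear relation y = a.x online (synthetic data, illustrative only)
rng = np.random.default_rng(0)
a_true = np.array([1.5, -0.7])
w = np.zeros(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = a_true @ x + rng.normal(scale=0.01)
    w = stripe_update(w, x, y, eps=0.05)
```

The dead zone inside the stripe is what gives the method its low computational cost and robustness to bounded noise, which is why it suits real-time control loops.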
Thus, intelligent control based on artificial intelligence is a research priority to increase renewable energy, industrial automation, robotics, power grids, and autonomous system resilience, flexibility, fault tolerance, and performance [79]. AI-based intelligent control has advanced from rule-based and adaptive controllers to data-driven and hybrid systems, according to evaluated studies. Traditional machine learning approaches are used in industrial control systems because they are stable, interpretable, and real-time. Deep learning and reinforcement learning improve flexibility and performance in nonlinear and uncertain settings. Due to data reliance, training instability, and poor explainability, they are unsuitable for safety-critical applications. Thus, hybrid control frameworks, which integrate classical control theory and AI for adaptation and robustness, are the most used technology. Explainable, real-time, and stable industrial deployment control decisions are major challenges.

4.2. Optimization of Motion and Positioning Accuracy

Motion and positional accuracy are critical performance metrics in mechatronic systems, particularly in applications such as robotics, high-precision manufacturing, machine tools, semiconductor fabrication, and automated inspection apparatus. Accurate trajectory tracking and positioning affect product quality, system reliability, and efficiency. A typical motion control system uses sensors, actuators, servo drives, and controllers to regulate position, speed, and acceleration, but factors such as nonlinear dynamics, friction, backlash, vibration, mechanical flexibility, and external disturbances make high accuracy hard to achieve. Due to their simplicity and reliability, PID and model-based motion control techniques are widely used. However, modeling flaws, unmodeled dynamics, and time delays worsen tracking performance and positional precision as motion systems operate at higher velocities and under stricter limits [80]. Inaccuracies in manufacturing can cause dimensional variations, diminished repeatability, and process instability, lowering product quality and yield [81]. Data-driven learning and adaptive optimization in motion control systems have made artificial intelligence a powerful tool for improving motion and positioning precision. AI-enhanced motion control approaches allow controllers to capture complex nonlinear relationships between control inputs and system outputs from operational data, reducing the need for explicit mathematical models. Learning-based control strategies improve trajectory tracking precision, disturbance mitigation, and robustness over conventional control methods, according to Li et al. [80]. One of the most significant AI applications in motion control is trajectory optimization. Learning-enabled controllers optimize motion profiles in real time by considering system limits, dynamic behavior, and performance goals.
AI-assisted systems improve trajectories, response times, and overshoot for high-speed, high-precision motion applications by continuously altering control actions in response to tracking faults [80].
In systems with flexible structures and fast acceleration-deceleration cycles, vibration strongly affects positioning precision. Traditional vibration mitigation approaches require detailed system modeling, which is difficult for complex mechatronic systems. AI-driven motion control methods instead analyze system behavior to find effective vibration abatement strategies. Li et al. show that learning-based controllers reduce residual vibrations, improving motion fluidity and positional precision [80]. Optimizing motion and positioning precision in manufacturing ensures process stability and product quality. Mechanical tolerances, thermal deformation, structural compliance, and control system errors cause positioning problems in manufacturing equipment. Smart, data-based optimization techniques allow immediate correction of errors and enhancement of performance in precision manufacturing systems, leading to better consistency and productivity. Advanced motion control solutions reduce scrap, improve dimensional precision, and promote operational efficiency [81]. Overall, artificial intelligence in motion control provides a solid foundation for mechatronic motion and positioning precision: AI-assisted motion control combines better path planning, reduced vibration, and real-time learning and adaptation to enhance system performance. Research on AI-driven motion and positioning optimization uses data to correct modeling errors, disturbances, and system nonlinearity. Machine learning can predict issues and adjust parameters for accurate motion control. Although deep learning can handle complex dynamics and rich sensor inputs, it has mostly been demonstrated in simulations and laboratory experiments. While reinforcement learning has been applied to adaptive trajectory optimization, safety and convergence problems limit its use. Further study will focus on AI-driven motion optimization in real-time industrial controllers and its robustness under varied operational scenarios.
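A minimal sketch of the data-driven philosophy described above: a PD-controlled mass with unknown friction first runs on feedback alone, a linear model of the unmodeled forces is then identified from the logged data by least squares, and the identified model is replayed as feedforward on the next run. The plant, gains, and trajectory are illustrative assumptions, not from the reviewed studies.

```python
import numpy as np

dt, N = 0.001, 4000
t = np.arange(N) * dt
ref = 0.1 * np.sin(2 * np.pi * t)                          # desired position
ref_v = 0.1 * 2 * np.pi * np.cos(2 * np.pi * t)            # desired velocity
ref_a = -0.1 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)    # desired acceleration

m, c, f = 2.0, 5.0, 1.5   # true mass, viscous and Coulomb friction (unknown to the controller)

def run(feedforward):
    """Simulate PD tracking of ref; return error plus logged velocity and input."""
    x, v = 0.0, 0.0
    err, v_log, u_log = np.zeros(N), np.zeros(N), np.zeros(N)
    for i in range(N):
        u = 400.0 * (ref[i] - x) + 40.0 * (ref_v[i] - v) + feedforward[i]
        acc = (u - c * v - f * np.sign(v)) / m
        x, v = x + v * dt, v + acc * dt
        err[i], v_log[i], u_log[i] = ref[i] - x, v, u
    return err, v_log, u_log

# Trial 1: feedback only, while logging data
err0, v_log, u_log = run(np.zeros(N))

# Identify u ~ m*acc + c*v + f*sign(v) from the logs by least squares
acc_est = np.gradient(v_log, dt)
A = np.column_stack([acc_est, v_log, np.sign(v_log)])
theta, *_ = np.linalg.lstsq(A, u_log, rcond=None)

# Trial 2: replay the identified model along the reference as feedforward
ff = theta[0] * ref_a + theta[1] * ref_v + theta[2] * np.sign(ref_v)
err1, _, _ = run(ff)

steady0 = np.max(np.abs(err0[N // 2:]))   # steady-state error, feedback only
steady1 = np.max(np.abs(err1[N // 2:]))   # steady-state error, with learned feedforward
```

The same two-step pattern, logging closed-loop data and fitting a compensation model, is the simplest instance of the learning-based feedforward strategies surveyed above.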

4.3. Energy Efficiency and Resource Optimization

Electricity is critically scarce worldwide. This gap is usually addressed by expanding energy-generating capacity or by improving energy efficiency through load management and demand forecasting [82]. Recent studies emphasize the latter strategy, since intelligent energy management reduces energy usage, operational costs, and environmental impact. Residential, commercial, and industrial buildings consume substantial amounts of energy [83]. Large facilities such as university campuses are energy-intensive due to HVAC, laboratory equipment, IT infrastructure, and support services [84]. These characteristics make such facilities suitable for advanced energy management systems. Thus, Electrical Energy Management Systems (EEMS) have been widely promoted as effective ways to match power demand with limited energy supplies and lower electricity costs [85]. Recent studies suggest that AI and data-driven methods can improve resource utilization and save energy. These “black-box” models apply machine learning and deep learning to historical and real-time data instead of explicit physical modeling [86,87]. Data-driven methods capture the nonlinear linkages and complex interactions of large-scale energy systems better than engineering (“white-box”) and statistical (“grey-box”) methods [88]. Probabilistic modeling [89], artificial neural networks, random forests [90], and regression-based models [91] have been used to estimate energy demand and efficiency. Beyond forecasting, manufacturing and energy systems use AI for efficiency assessment and optimization. Data Envelopment Analysis (DEA) is a popular nonparametric method for assessing the efficacy of decision-making units using input-output data [92]. System operators can measure current performance and estimate future efficiency trajectories by combining DEA with machine learning [93,94].
Genetic Algorithms (GA), inspired by biological evolution, optimize complex, nonlinear system configurations and control parameters [95]. Hybrid AI frameworks that combine DEA, machine learning, and evolutionary optimization can improve energy and production efficiency in industrial and energy applications. DEA-ML has been used to measure adaptive capacity in petrochemical plants [96], compare efficiency across cement businesses [97], and analyze energy consumption and CO2 emissions for sustainable facility planning [98]. DEA model extensions include time-varying and Malmquist-index-based approaches for evaluating efficiency trends across decision-making units [99]. Machine learning and evolutionary optimization have improved energy system forecasts and renewable energy optimization. Deep learning models improve energy demand prediction and operational planning for electrical load forecasting [100]. Photovoltaic power forecasting using genetic algorithm-optimized machine learning models reduces energy loss and improves grid dispatch efficiency [93,101]. By fine-tuning component sizes and adjusting system settings, genetic algorithms have made hybrid renewable energy systems, such as those combining solar, wind, and diesel, more cost-effective and efficient. Industrial energy efficiency and recovery have also increased through AI-powered optimization [102]. Optimized layouts and fuel choices make combined heat and power (CHP) systems cheaper and more efficient [103]. Further research has focused on waste heat recovery, where AI-driven algorithms convert surplus thermal energy into thermoelectric power, improving resource efficiency [104]. DEA-based methods have also been used to determine the best input factors for chemical and industrial energy efficiency assessments [105]. Industrial systems increasingly adopt AI-driven energy efficiency and resource optimization.
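To illustrate the evolutionary loop referenced above (selection, crossover, mutation), the sketch below runs a minimal genetic algorithm on a hypothetical two-variable cost surface; in an energy application the variables might represent component sizes, but the cost function here is purely illustrative.

```python
import numpy as np

# Minimal genetic algorithm minimizing a toy "energy cost" function.
# The quadratic cost with optimum at (3, 5) is an illustrative assumption.
rng = np.random.default_rng(1)

def cost(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 5.0) ** 2 + 1.0

pop = rng.uniform(0, 10, size=(40, 2))             # initial population
for gen in range(100):
    fitness = np.array([cost(ind) for ind in pop])
    order = np.argsort(fitness)
    parents = pop[order[:20]]                      # selection: keep the best half
    i, j = rng.integers(0, 20, 20), rng.integers(0, 20, 20)
    children = (parents[i] + parents[j]) / 2       # crossover: blend random parent pairs
    children += rng.normal(scale=0.1, size=children.shape)   # mutation
    pop = np.vstack([parents, children])           # elitist replacement

best = pop[np.argmin([cost(ind) for ind in pop])]
```

Real hybrid-system sizing studies replace this toy cost with a techno-economic model (levelized cost of energy, loss-of-power probability, and similar criteria), but the algorithmic skeleton is the same.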
Data-driven forecasting, optimization, and predictive maintenance utilizing machine learning have proven energy-saving and operational benefits. Deep learning and hybrid AI models increase demand forecasting and system optimization but require plenty of data and computing. Industrial use of reinforcement learning for adaptive energy management is minimal. Concerns include data access, heterogeneous system scalability, and energy management infrastructure integration.

4.4. Real-Time Embedded AI in Mechatronics

Traditional cloud-based artificial intelligence systems are becoming increasingly impractical for time-sensitive mechatronic applications because of latency, bandwidth dependence, and stability concerns, according to recent literature. Edge computing and real-time analytics research show that cloud-centric processing incurs significant latency, communication overhead, and security threats, restricting its use in industrial automation, healthcare monitoring, and autonomous systems [106,107]. Implementing AI models directly on embedded and edge devices allows local data processing and rapid decision-making [108]. Real-time analytics are made possible by embedded AI on microcontrollers, system-on-a-chip devices, and embedded processors. Embedding intelligent algorithms locally minimizes latency, reduces dependence on remote processing, and enables autonomous systems [109]. Mechatronic systems demand deterministic timing and rapid environmental responses from their sensor-actuator feedback loops, and they depend on sophisticated sensors and actuators. Researchers have found that AI-enhanced sensors now provide localized intelligence for signal preprocessing, inference, and adaptive response [2]. Sensor-level AI offers real-time environmental monitoring and analysis while lowering communication overhead and energy use [107].
Smart actuators with built-in AI can automatically adjust their behavior based on real-time data, making robots, industrial machines, and smart devices more accurate, reliable, and fault tolerant. Despite these advantages, a significant problem in the literature is the limited computing and energy resources of embedded systems. Research on intelligent devices and edge AI shows that the restricted processing power of embedded hardware makes it difficult to deploy large deep learning models directly without modification. Quantization, pruning, and knowledge distillation are frequently employed to reduce computational complexity while maintaining inference accuracy [109]. These optimization techniques dramatically lower memory and energy consumption, enabling real-time inference on embedded systems, according to evaluations of embedded artificial intelligence [109]. These improvements guarantee the proper operation of battery-powered, mobile, and distributed mechatronic systems. Embedded AI offers real-time monitoring and diagnostics in addition to inference and control. By processing sensor signals such as vibration, temperature, and electrical measurements locally, AI models can detect anomalies and anticipate component failures [2,107]. Because defects are identified early without cloud connectivity, edge-based predictive maintenance solutions have reduced operational outages and maintenance costs [107]. Incidents in industrial mechatronic systems can generate major economic losses and safety hazards, making this capability crucial. Localized intelligence promotes system reliability and autonomous correction. Edge AI and TinyML place intelligent models close to physical processes, advancing the integration of embedded systems and AI. Embedded artificial intelligence allows systems to perceive, decide, and act locally under stringent real-time constraints, enabling autonomous operation [109].
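Of the compression techniques mentioned (quantization, pruning, knowledge distillation), symmetric post-training quantization is the simplest to sketch. The example below maps a float32 weight tensor to int8 with a per-tensor scale, cutting memory fourfold at the cost of a bounded rounding error; the random weights stand in for a trained model’s parameters.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: map float weights to int8."""
    scale = np.max(np.abs(w)) / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Random weights as a stand-in for a trained layer (illustrative assumption)
w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))            # bounded by scale / 2
```

Deployment frameworks add per-channel scales, zero points, and calibration data on top of this idea, but the memory arithmetic (1 byte versus 4 per weight) is already visible in the sketch.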
Current mechatronic systems require autonomy, flexibility, and robustness, and these developments meet those demands. Embedded processors and information and communication technologies provide real-time data collection, analysis, and decision-making, boosting the adaptability, accuracy, and performance of mechatronic applications [3]. The reviewed studies reveal that real-time embedded AI is the most prevalent technique for enabling low-latency, autonomous, and reliable mechatronic systems. Industrial maturity is strongest in edge-based sensing, localized inference, and predictive maintenance, where microcontrollers and embedded processors run lightweight machine learning models and tailored deep learning architectures. These systems outperform cloud-based ones in responsiveness, communication efficiency, and operational resilience. However, the literature also highlights outstanding difficulties that limit large-scale adoption: embedded platforms face computational and energy restrictions, deploying sophisticated deep learning models can degrade real-time performance, and design frameworks for embedded AI are not standardized. Maintaining the reliability, explainability, and safety of embedded AI decisions in mission-critical mechatronic systems remains an open research issue. Transitioning embedded AI from task-specific solutions to scalable, trustworthy components of next-generation intelligent mechatronic systems requires overcoming these difficulties.
Artificial intelligence in mechatronic systems within the Industry 4.0 paradigm enables a closed-loop framework that tightly integrates sensing, data-driven decision-making, actuation, and continuous optimization [3,106]. To satisfy real-time and latency-sensitive control requirements, edge and embedded AI process high-frequency operational data acquired from smart sensors embedded in mechatronic systems [107,108,109]. Digital twins provide a virtual representation of the physical mechatronic system, supporting offline analysis, predictive modeling, and performance optimization [26,27]. This bidirectional interaction between the physical and digital domains facilitates continuous learning and system improvement [26,106].

5. AI-Driven Autonomy in Mechatronic Systems

5.1. Autonomous Decision-Making and Planning

The knowledge-based approach generally distinguishes the decision-making process from the path-planning stage to improve modularity, manageability, and computational efficiency. Within this framework, the decision module is tasked with high-level behavioral strategies such as lane changing, overtaking, or maintaining following distance from other vehicles, while the path planning module converts these decisions into feasible and safe trajectories. According to Hu et al. [110], this segregation enables autonomous systems to better handle complexity while enhancing system transparency and dependability. The comprehensive classification framework for knowledge-driven decision-making and planning methodologies, along with the interaction between the decision module and the planning module, is depicted in Figure 6 and Figure 7 of the reviewed study [110,111]. The knowledge-driven paradigm generally classifies decision-making techniques into rule-based, state-transition-based, and game-theoretic approaches. Rule-based approaches depend on established traffic regulations and expert insights, employing conditional logic to determine suitable driving actions. State-transition models, including Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), represent the driving environment as a collection of states and transitions, facilitating optimal decision-making in the presence of uncertainty.
In contrast, approaches grounded in game theory regard autonomous driving as a multi-agent problem, wherein interactions among vehicles, pedestrians, and other road users are explicitly modeled to develop cooperative or competitive driving strategies [111]. Autonomous decision-making and planning form the fundamental intellect of AI-enabled mechatronic systems, guiding how machines interpret sensory data, reason about their environment, and perform goal-directed actions independently of ongoing human oversight. Attaining effective autonomy necessitates the seamless incorporation of perception, reasoning, learning, and control, especially within dynamic and uncertain contexts such as robotics, unmanned aerial vehicles, and autonomous transportation systems [112].
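The separation between a rule-based decision module and a planning module can be sketched as follows; the thresholds, lane geometry, and trajectory model are illustrative assumptions, not values from the cited frameworks.

```python
# Toy sketch of the decision/planning separation: a rule-based decision module
# picks a high-level maneuver via conditional logic on expert thresholds, and a
# planning module converts it into a feasible trajectory.  All numbers are
# illustrative assumptions.

def decide(gap_ahead, gap_left, speed, desired_speed):
    """Rule-based high-level decision (hypothetical thresholds in meters/mps)."""
    if gap_ahead > 30.0:
        return "keep_lane"
    if gap_left > 25.0 and speed < desired_speed:
        return "change_left"
    return "follow"

def plan(decision, lane_width=3.5, horizon=20):
    """Convert the decision into a simple lateral-offset trajectory."""
    target = lane_width if decision == "change_left" else 0.0
    # linear ramp to the target lateral offset over the planning horizon
    return [target * k / (horizon - 1) for k in range(horizon)]

maneuver = decide(gap_ahead=12.0, gap_left=40.0, speed=20.0, desired_speed=30.0)
trajectory = plan(maneuver)
```

A production planner would of course impose kinematic feasibility and collision constraints on the trajectory; the point of the sketch is only the modular hand-off from decision to plan.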
Recent developments indicate that perception increasingly drives autonomous decision-making, with vision-based technologies taking the lead. Ashraf [113] illustrates that the fusion of computer vision and machine learning enables autonomous systems to perform real-time object recognition, depth perception, environmental mapping, and scene comprehension. These perceptual abilities furnish the essential information necessary for informed decision-making, enabling systems to evaluate risks, foresee obstacles, and determine appropriate actions within complex and swiftly evolving environments [113]. This perception-oriented perspective corresponds with the architectural framework proposed by Veres et al. [112], which considers sensory processing and world modeling as fundamental elements of intelligent autonomous agents. In their ANSI-based reference architecture, unprocessed sensor data are converted into abstract representations that facilitate reasoning and planning processes, emphasizing the critical role of robust perception for dependable autonomous decision-making. Autonomous planning mechanisms establish the systematic arrangement of decision-making processes over time to accomplish mission objectives. Veres et al. [112] offer an extensive taxonomy of decision-making architectures, encompassing reactive, behavioral, logic-based, Belief-Desire-Intention (BDI), and layered hybrid architectures. Among these, layered architectures are recognized as the most appropriate for real-world applications, as they combine high-level deliberative planning with low-level reactive control, thereby ensuring both flexibility and real-time responsiveness [112]. Aligned with this perspective, Ashraf highlights that vision-based systems substantially improve planning precision through the provision of continuous environmental feedback.
Real-time visual perception enables autonomous robots, drones, and self-driving vehicles to dynamically modify planned trajectories, avoid obstacles, and adapt tasks in response to environmental variations. This ongoing perception–planning feedback cycle enables autonomous systems to function reliably in unpredictable and complex environments [102,113]. Both Veres et al. and Ashraf [112,113] underscore the increasing significance of learning-based methodologies in autonomous decision-making and planning. Ashraf [113] asserts that reinforcement learning and deep learning methodologies facilitate the enhancement of decision-making policies in autonomous systems through experiential learning, thereby improving navigation efficacy, motion planning, and task execution. Vision-based feedback enhances this learning process by delivering comprehensive contextual information to support ongoing adaptation. Veres et al. [112] emphasize the importance of MDPs and POMDPs as formal structures for decision-making under uncertainty. These frameworks enable autonomous agents to probabilistically assess action sequences, effectively balancing immediate reactive responses with long-term strategic goals. These methodologies are especially pertinent for autonomous vehicles operating within partially observable and dynamically changing environments [112].

5.2. Robotics Navigation and Path Planning

Path planning constitutes a fundamental area of research within robotics, autonomous systems, and artificial intelligence. It primarily concentrates on formulating algorithms that allow autonomous agents, including robotics, autonomous vehicles, and unmanned aerial vehicles (UAVs), to traverse their environment from a starting point to a specified target position. The primary objective is to identify an optimal route while simultaneously averting obstacles and adhering to all relevant regulations. Path planning is especially critical in practical applications such as autonomous vehicles, unmanned aerial vehicles, and robotic manipulators within industrial environments, where agents are required to traverse dynamic and unstructured settings safely and efficiently [114]. With the growing implementation of autonomous technologies, the significance of path planning has become increasingly prominent across a variety of fields. For instance, autonomous vehicles are required to determine optimal routes for navigating urban streets, whereas UAVs must plan trajectories that circumvent obstacles in the airspace or other aerial hazards. Similarly, robotic manipulators are required to plan movements within manufacturing settings while preventing collisions with adjacent equipment and complying with rigorous safety regulations. As autonomous systems operate within increasingly complex environments, the need for robust, adaptable, and computationally efficient path planning algorithms has risen markedly. Path planning extends beyond merely identifying a practicable route; it also encompasses optimizing the path according to multiple criteria, such as the shortest distance, minimal travel time, optimal energy efficiency, and safety considerations. 
Furthermore, the complexity of the problem increases when accounting for constraints such as non-holonomic motion (which limits the vehicle’s movement to particular directions), dynamic obstacles (moving entities that the agent must evade), and time-varying goals (where the target may change during navigation). For example, in an autonomous vehicle, the objective may vary depending on traffic conditions, or the vehicle may need to evade pedestrians who move unpredictably across the roadway. Furthermore, the attributes of the environment, whether fully understood, partially understood, or completely unknown, significantly influence the selection of algorithms and strategies employed. Path planning within a known environment enables the application of more efficient algorithms, whereas navigation in partially or entirely unfamiliar environments necessitates algorithms capable of dynamically adjusting to evolving conditions, such as real-time obstacle detection and mapping [115,116].
The development of path planning methodologies, as illustrated in Figure 8, has advanced substantially over time, transitioning from traditional deterministic algorithms to more sophisticated metaheuristic and artificial intelligence-based techniques. This transition underscores the growing complexity of real-world challenges and the necessity for more versatile and adaptable solutions [117]. The importance of path planning extends beyond robotics and autonomous vehicles, serving a crucial function across fields including urban mobility, healthcare robotics, search and rescue missions, and logistics and warehousing. For example, in urban mobility, path planning algorithms enhance traffic management and route optimization in smart cities, while in healthcare robotics they enable surgical robots to navigate complex environments with accuracy. During search and rescue missions, unmanned aerial vehicles and robots can navigate hazardous or disaster-affected regions. In logistics, path planning enables automated product movement within dynamic warehouse settings [118,119,120]. Given these applications, the ongoing advancement and enhancement of path planning algorithms remain essential. The incorporation of sophisticated computational methods, sensor technologies, and AI-based approaches holds significant potential to transform the manner in which autonomous systems engage with and traverse their environment. By systematically categorizing and analyzing the extensive range of existing path planning algorithms, this review serves as a comprehensive resource for researchers, practitioners, and students seeking to deepen their understanding of current advancements in this field [121,122].
In the contemporary period, the increasing need to minimize human labor has resulted in the extensive adoption of machines for executing physical tasks traditionally performed by humans. However, in addition to performing physical tasks, there is a growing demand for machines to demonstrate intelligence, particularly the capacity to deliberate, learn, and make decisions in a manner comparable to humans. To attain this degree of intelligence, Artificial Intelligence (AI) has garnered significant attention as a fundamental enabling discipline. One of the most essential challenges in robotics is path planning, which is critical for robot navigation. Path planning encompasses the process of guiding a robot from an initial position to a designated destination while effectively averting obstacles within its environment. This challenge has been thoroughly investigated across diverse robotic platforms, including micro air vehicles [123], mobile robots, wall-climbing robots, and underwater robots, employing a broad spectrum of algorithms [124]. Initial research endeavors chiefly concentrated on two-dimensional (2D) path planning challenges. For example, Choset [125] examined conventional approaches to 2D path planning, predominantly neglecting bio-inspired algorithms. Subsequently, numerous surveys were carried out concerning mobile robot navigation within two-dimensional environments, emphasizing fundamental methodologies and their associated limitations. Mobile robot navigation continues to be a vital area of research in robotics, owing to its significance in facilitating intelligent autonomous operations. Mobile robots are extensively utilized in transportation networks, industrial automation, and search-and-rescue missions. Path planning is a fundamental and widely recognized component of autonomous navigation. 
Over the past twenty years, various approaches have been developed to address the path planning problem, which seeks to identify a collision-free trajectory between two points while optimizing specific cost functions, such as path length, fluidity, or energy efficiency. Based on environmental attributes, path planning challenges may be classified into static and dynamic settings. In static environments, obstacles remain stationary over time, whereas in dynamic environments, they may alter their position or orientation with regard to time. Furthermore, path planning algorithms may be categorized as either offline or online. In offline path planning, the robot possesses complete knowledge of the environment it will navigate. In online path planning, the robot constructs a map of its environment utilizing data obtained from sensors affixed to the robot during its movement. Numerous strategies have been proposed in the literature to enhance the efficiency and reliability of navigation.
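As a concrete example of an offline, deterministic planner of the kind surveyed above, the sketch below runs A* with a Manhattan-distance heuristic on a small occupancy grid; the grid, start, and goal are illustrative.

```python
import heapq

def astar(grid, start, goal):
    """A* over 4-connected grid cells; grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible heuristic
    open_set = [(h(start), 0, start, [start])]                # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None   # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
```

Because the Manhattan heuristic never overestimates the remaining cost, the first path popped at the goal is optimal, which is exactly the guarantee that distinguishes deterministic planners from the stochastic methods discussed next.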
Ibraheem and Ajeil [126] proposed a hybrid methodology that integrates Particle Swarm Optimization (PSO) with a Modified Bat Algorithm for robotic path planning in both static and dynamic environments. Their approach optimized multiple objectives, such as minimizing path length and enhancing path fluidity, while utilizing a gap vector method for obstacle avoidance. Similarly, Al-Nayar et al. [127] introduced a path planning approach utilizing free-segment and turning-point techniques for navigating congested environments. Their methodology pinpoints secure turning points between the initial and target positions and integrates a sliding mode controller to ensure the stability of the robot’s movement along the intended trajectory, thereby optimizing both the safety and efficiency of the path. Hassani et al. [128] conducted an additional investigation into multi-objective path planning by integrating chaotic PSO with the Firefly Algorithm for wheeled mobile robots. Artificial intelligence fundamentally involves the comprehension and creation of intelligent systems capable of autonomous decision-making and executing complex tasks. An intelligent system depends on diverse computational methods to address problems effectively, commonly known as artificial intelligence techniques. These methodologies encompass fuzzy logic systems, artificial neural networks, neuro-fuzzy systems, and evolutionary computing techniques such as Genetic Algorithms (GA), bio-inspired algorithms, Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO). These AI-driven techniques can be broadly classified into deterministic and non-deterministic (stochastic) algorithms. The combination of these techniques yields evolutionary algorithms, which have been widely employed in robot navigation and path planning problems owing to their robustness, adaptability, and capacity to manage complex, multi-objective optimization challenges.
Figure 9 depicts the categorization of deterministic, non-deterministic, and evolutionary approaches frequently utilized in robot navigation [129].
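To illustrate how swarm-based methods such as PSO are applied to path planning, the sketch below optimizes a single intermediate waypoint that minimizes path length while penalizing intrusion into a circular obstacle; the geometry, penalty weight, and PSO coefficients are illustrative assumptions, far simpler than the multi-objective hybrid formulations cited above.

```python
import numpy as np

rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacle, radius = np.array([5.0, 0.0]), 1.5        # circular obstacle on the direct line

def cost(wp):
    """Path length through one waypoint, plus a penalty for entering the obstacle."""
    length = np.linalg.norm(wp - start) + np.linalg.norm(goal - wp)
    clearance = np.linalg.norm(wp - obstacle)
    return length + 100.0 * max(0.0, radius - clearance)

n, iters = 30, 200
pos = rng.uniform(-2, 12, size=(n, 2))              # particle positions (candidate waypoints)
vel = np.zeros((n, 2))
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost                        # update personal bests
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()      # update global best
```

Full planners parameterize entire trajectories (many waypoints or spline coefficients) instead of a single point, but the personal-best/global-best update loop is unchanged.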

5.3. Machine Vision and Perceptual Intelligence

Machine vision (MV) constitutes a core technology within the perception layer of intelligent driving systems, enabling environmental perception, object detection, and semantic segmentation via the analysis of image data procured from cameras. Recently, the convergence of deep learning and computer vision has driven the rapid development of machine vision technology [130]. Computer vision, employing digital image processing techniques, allows machines to map environments, identify obstacles, and precisely locate themselves [131,132]. This interdisciplinary field integrates computer science, artificial intelligence, and image analysis to extract meaningful insights from the physical environment, thereby facilitating informed decision-making by computers [133]. Real-time vision algorithms, employed in domains such as robotics and mobile technology, have achieved significant results, making a substantial and lasting impact on the scientific community [134].
The analysis of computer vision encompasses numerous complex challenges and inherent limitations. Developing algorithms for tasks such as image classification, object detection, and image segmentation requires a thorough understanding of the underlying mathematics. Moreover, each computer vision task requires a customized approach, which further elevates the complexity of the analysis. The integration of theoretical knowledge and practical skills is therefore crucial in this field, as it promotes advances in artificial intelligence and the creation of impactful real-world applications. The field of computer vision has been profoundly shaped by earlier research efforts. The 1980s saw notable progress in digital image processing and in the development of algorithms for image understanding. Before these developments, researchers built mathematical models to emulate human vision and investigated the integration of vision into autonomous robotic systems. Initially, the expression “machine vision” was predominantly linked to electrical engineering and industrial robotics; over time, however, it merged with computer vision into a unified scientific discipline. This integration of machine vision and computer vision has led to substantial progress, with machine learning techniques playing a vital role in enabling rapid advances. Today, real-time vision algorithms are prevalent, seamlessly integrated into common devices such as camera-equipped smartphones, and have transformed how we understand and interact with technology [134]. Machine vision has revolutionized computer systems by integrating advanced artificial intelligence techniques that surpass human capabilities in a wide range of specialized applications. Through computer vision systems, computers have gained the ability to perceive and interpret the visual environment [133].
The primary objectives of computer vision are to enable computers to perceive, identify, and interpret the visual environment in a manner akin to human perception. Researchers in the field of machine vision have concentrated on designing algorithms that deliver these visual perception abilities. These functions include image classification, which evaluates the presence of particular objects within image data; object detection, which locates semantic objects within predefined categories; and image segmentation, which divides images into separate segments for detailed analysis. The complexity of each computer vision task, combined with the diverse mathematical principles involved, poses significant challenges for analysis. Nevertheless, understanding and addressing these challenges holds considerable theoretical and practical importance within the field of computer vision [135]. Machine Vision (MV), also known as Computer Vision (CV), represents a prominent area within the domain of Artificial Intelligence. As depicted in Figure 10, it chiefly supports individuals in decision-making within four areas: recognition, measurement, classification, and detection, thereby alleviating the overall workload. Figure 11 depicts the development of machine vision. Since the notion of machine recognition was introduced in 1950, machine vision has followed a significant development trajectory, resulting in a wide range of applications across image processing, robotics, industrial automation, optics, and related disciplines, and the adaptability and versatility of MV applications have improved significantly [136].
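As a minimal illustration of the low-level operation underlying these perception tasks, the sketch below applies Sobel filtering, the classic gradient-based edge detector and the same sliding-window operation that convolutional networks learn, to a synthetic two-tone image. The image, kernels, and sizes are illustrative assumptions.

```python
import numpy as np

def filter2d(img, kernel):
    """Valid 2-D sliding-window filtering (what deep learning libraries call 'convolution')."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels respond to horizontal / vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = filter2d(img, SOBEL_X)          # strong response at the vertical edge
gy = filter2d(img, SOBEL_Y)          # zero: no horizontal edges in this image
edge_strength = np.hypot(gx, gy)
```

Image classification, detection, and segmentation networks stack thousands of such learned filters; this sketch shows only the primitive they share.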

Algorithms of Machine Vision

Since 2018, MV has seen widespread application across diverse fields, largely owing to advances in algorithm development by various research groups. Wu, Li, and Yao examined the methods, applications, and challenges associated with deep learning in MV tasks and introduced the fundamental concepts of the algorithms in this domain [137]. Katti et al. examined the advantages of MV relative to human contextual expectations and assessed several machine vision algorithms for vehicle and pedestrian classification [138]. In Gorodokin’s study, MV was applied in the transportation domain, alleviating congestion so that drivers no longer spend excessive time waiting [139]. Panagakis et al. [140] provided a comprehensive overview of tensor techniques in deep learning, with particular emphasis on visual data analysis. Zhang et al. [141] examined the applications of adversarial networks in multimedia visualization, observing that such applications have become increasingly widespread while identifying several open challenges. Gustafsson et al. [142] assessed scalable Bayesian deep learning techniques for robust computer vision, finding that ensembling consistently yields more dependable and practically useful uncertainty estimates. Whatmough et al. [143] introduced a novel CNN framework for MV that achieves exceptionally high energy efficiency. Shu et al. [144] devised an interactive design for intelligent MV based on a human–computer interaction mode. Wu et al. [145] developed a machine vision algorithm to identify defects in electrical connectors, achieving a precision of 93.5%. Wu et al. [146] reviewed the advances and challenges of incorporating 2D-material photodetectors into sense–memory computation and biomimetic image sensors for MV.
Machine vision also serves numerous functions in underwater operations to assist operators, a domain examined by Reggiannini and Moroni [147]. Within MV, machine learning is the most extensively employed methodology, and numerous studies aim to enhance the accuracy and efficiency of MV through modifications to machine learning algorithms; Akhtar et al. reviewed various studies on the subject [148]. Deep neural networks perform effectively in numerous MV tasks but require a substantial number of parameters and computational operations; Goel et al. surveyed low-power MV techniques aimed at accomplishing the same tasks with reduced memory consumption [149]. O’Mahony et al. [150] examined the advantages and disadvantages of deep learning compared with traditional MV. Because operational efficiency is a critical factor in practical applications, Yang et al. [151] proposed a GPU scheduler for MV applications. Image recognition is an important part of MV: Baygin et al. [152] noted that MV is extensively employed in manufacturing processes owing to its cost-effectiveness and high accuracy in image recognition. Talebi and Peyman [153] introduced a learned image resizer trained jointly with a baseline vision model, yielding enhanced image quality after fine-tuning. Multiple researchers have concentrated on integrating multimedia and natural language processing. El-Komy et al. [154] developed a central server utilizing a Faster Region-based Convolutional Neural Network for object detection in images to help visually impaired individuals avoid obstacles.
Li et al. [155] developed an innovative framework utilizing machine learning algorithms for the automated recognition of complex SEM images. Hu et al. [156] introduced an image coding framework designed to serve machine vision and human perception tasks concurrently. Roggi et al. [157] introduced an automated UAV-based inspection approach for photovoltaic installations. Paul et al. employed high-dynamic-range imaging algorithms on MV systems operating under direct sunlight [158]. Rebecq et al. [159] applied established machine vision techniques to videos reconstructed from event data to enhance image quality. Mennel et al. [160] demonstrated that an image sensor can itself function as an artificial neural network, capable of both capturing and processing optical images in real time. Gamanayake et al. [161] introduced a novel pruning algorithm called cluster pruning to facilitate edge deployment of MV models.
Ding et al. [162] devised a moving-object detection algorithm that combines robust threshold-based segmentation of image features with optical flow estimation. Fang et al. [163] enhanced Mask R-CNN for object detection and segmentation in complex scenes, attaining increased accuracy. Moru and Borro employed MV for accurate measurement of industrial gears [164]. Li et al. [165] developed a deep learning algorithm for object classification that can be extended to spectral-domain measurement systems. Yin et al. [166] introduced a vehicle counting method based on MV. Dan et al. established a machine vision-based method for detecting moving loads in bridge monitoring [136,167].
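The threshold-based first stage of a moving-object detector of the kind discussed by Ding et al. can be sketched by simple frame differencing. The synthetic frames and threshold value below are illustrative assumptions; a full pipeline would add optical flow estimation and morphological filtering.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=0.2):
    """Threshold the absolute inter-frame difference to get a binary motion mask."""
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return diff > threshold

# Synthetic scene: a 3x3 bright "object" moves two pixels to the right.
prev_frame = np.zeros((10, 10))
prev_frame[4:7, 2:5] = 1.0
curr_frame = np.zeros((10, 10))
curr_frame[4:7, 4:7] = 1.0

mask = detect_motion(prev_frame, curr_frame)
# Pixels the object vacated and pixels it entered both light up;
# the overlapping column (4) is unchanged and stays dark.
n_changed = int(mask.sum())
```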

5.4. Human-Machine Interaction and Collaborative Systems

Man-machine interaction (also referred to by the acronym MMI or as human-machine interaction, HMI) is defined as the engagement and communication between human users and machines within a dynamic environment via various interfaces. Since humans first began developing instruments, there has been an ongoing interaction between humans and machines [168]. This interaction has evolved substantially over time. Initially, before the Second World War, humans were compelled to adapt to machines, which entailed training operators to conform to the specifications of the machinery. During the Second World War, however, technology advanced at such a pace that providing sufficient human training became progressively more challenging. This situation necessitated a systematic analysis and synthesis of human–machine interaction. The historical evolution of HMI can be categorized into four primary periods. Between 1940 and 1955, designers concentrated on delineating the boundaries of human capabilities, producing equipment that operators could only just manage to use. Between 1955 and 1970, researchers endeavored to model humans as machines and developed systems in accordance with this perspective. Around 1970, progress in electronics enabled extensive automation, and from 1970 to 1985, numerous tasks traditionally carried out by humans were automated.
During this stage, humans shifted from being primary operators to supervisory functions. Since 1985, this progression has accelerated, with a growing focus on workload, cognitive functions, and the affective dimensions of human–robot interaction. As a result, human–machine interaction has experienced significant transformation and, within the framework of Industry 4.0, has attained a new echelon of innovation propelled by automation and intelligent systems [168]. The integration of artificial intelligence (AI), robotics, and immersive technologies including augmented reality (AR), virtual reality (VR), and extended reality (XR) continues to significantly reshape the ways in which humans engage with machines. HMI has progressed from basic command-driven systems to sophisticated, adaptive interactions that utilize artificial intelligence to interpret and respond to human behavior in real time. As robots become progressively integrated into industrial and daily settings, the demand for interactions that are intuitive, efficient, and centered on human needs continues to rise [169]. Recent developments in artificial intelligence have markedly improved robotic functionalities, allowing machines to perceive their surroundings, make informed decisions, and acquire knowledge through interaction.
Deep learning models, such as convolutional neural networks (CNNs) and large language models (LLMs), have enhanced capabilities in autonomous perception, language comprehension, and decision-making processes [170]. These capabilities enable robots to identify objects, comprehend human speech, and even perceive emotional signals, thereby facilitating more natural interactions. Nonetheless, these advancements present challenges concerning ethical AI deployment, data privacy, and algorithmic bias [169,171]. Through successive industrial revolutions and shifts in production paradigms, the human-machine relationship has developed into what is commonly referred to as the 5C model: Coexistence, Cooperation, Collaboration, Compassion, and Coevolution (Figure 12) [172]. These stages exemplify a progressive evolution rather than abrupt transitions, with each new stage constituting a continuation of the preceding ones. Ongoing technological advancements have propelled the human–machine relationship toward progressively more incorporated and interactive configurations.
In later stages, intelligent machines work collaboratively with humans within shared work environments, performing duties through coordinated and interactive joint actions as part of a unified team framework [173]. In the fifth industrial revolution, humans and collaborative robots (cobots) work together as teams to carry out production tasks [174]. At the core of this relationship lie considerations of trust, decision-making authority, communication, and team development. In mixed human–cobot environments, successful collaboration depends on workers understanding machine behavior, trusting machine intentions, and explicitly comprehending decision-making processes. As industrial systems progressively embrace human-centered methodologies, human–machine collaboration is characterized as a relationship in which humans and machines work together to achieve common objectives through coordinated cognitive and physical interactions [175]. In these systems, humans and machines frequently function at the same level of decision-making, participating in interactive dialog to accomplish task objectives [176]. Human–robot collaboration (HRC) explicitly emphasizes coordinated interactions between humans and machines across both cognitive and physical domains, with the aim of achieving shared objectives [177].
HRC is integral to contemporary manufacturing, providing essential support for surveillance, prognostics, health management, safety, and sustainability [178,179]. Machines can support human decision-makers by collecting data, evaluating uncertainty, and conveying pertinent insights, thereby alleviating cognitive burden and emotional bias while maintaining human oversight and judgment [180]. As machines advance in intelligence, the conventional master–servant relationship is transitioning into a master–collaborator paradigm, necessitating novel strategies in system design, information sharing, and interface development [181]. This transition has resulted in the development of collaborative robotics, or cobots, engineered to function safely and efficiently in conjunction with humans [182]. Cobots possess the ability to detect human presence, interpret intentions, and modify their behavior correspondingly. Through observation and learning, they are able to acquire task-related knowledge and execute operations in a manner comparable to human performance [183]. In human-centered production systems, progress in cognitive science and personalized artificial intelligence indicates the potential for creating empathetic machines capable of detecting human emotions, requirements, and preferences. Such systems are capable of offering contextual support that extends beyond mere functional collaboration, facilitating more profound interactions marked by empathy and reciprocal adaptation. These advancements facilitate ongoing human–machine co-evolution, whereby both human and machine abilities develop in tandem, promoting collaborative interactions centered on mutual value generation rather than rivalry [173,184].
Human–robot collaboration (HRC) has been instrumental in contemporary industrial systems, especially within the contexts of Industry 4.0 and the emerging framework of Industry 5.0 [185]. Despite ongoing technological advancements throughout successive industrial revolutions, the fundamental element that was predominantly absent was the explicit incorporation of humans into the process. HRC explicitly addresses this deficiency by integrating human factors such as cognition, perception, trust, and decision-making into intelligent systems. Although its development originated during Industry 4.0, the significance of HRC has become increasingly evident with the shift toward Industry 5.0, which emphasizes human-centered design principles (Figure 13) [186,187].
Industrial mechatronic systems are among the most prominent adopters of autonomous decision-making and path planning, which are typically organized into algorithmic categories. In industrial robotics, reinforcement learning and model-based planning are increasingly used to enhance the robustness and flexibility of assembly strategies in actual production environments by adapting them to part geometry, positioning uncertainty, and environmental disturbances [113,114,115]. Autonomous path-planning algorithms enable reconfigurable robots in flexible manufacturing cells to move without collisions, thereby reducing production changeover delays and enhancing system utilization [116,117]. In dynamic factories, autonomous mobile robots (AMRs) and automated guided vehicles (AGVs) employ AI-based decision-making for traffic management, task distribution, and navigation [118,119,120]. These systems must balance safety, efficiency, and real-time responsiveness through trade-offs between learning-driven and model-driven engineering [121]. Autonomous planning and decision-making make equipment and production more reliable, productive, and precise by enabling operations to compensate for disturbances, adjust settings in real time, and generate flexible tool paths. These examples demonstrate that AI-driven autonomy in mechatronic systems transcends algorithmic innovation, facilitating practical, scalable, and intelligent industrial processes.

6. AI-Enabled Manufacturing Efficiency

6.1. Smart Factories and Industry 4.0 Integration

Smart factories constitute the primary operational manifestation of Industry 4.0, wherein physical manufacturing systems are comprehensively incorporated with digital intelligence to facilitate adaptive, interconnected, and data-informed production processes. In this paradigm, artificial intelligence (AI) serves as a fundamental enabling technology that converts traditional automated factories into intelligent manufacturing ecosystems [188]. By integrating artificial intelligence into mechatronic systems, smart factories attain greater levels of autonomy, efficiency, adaptability, and responsiveness to fluctuating market and operational requirements. At the core of smart factories is the integration of cyber-physical systems (CPS), the Industrial Internet of Things (IIoT), and sophisticated communication infrastructures [189]. Mechatronic systems such as robotic manipulators, CNC machines, automated conveyors, and sensor-equipped production units consistently produce substantial volumes of data concerning process conditions, machine health, energy usage, and product quality [190]. AI algorithms analyze this data in real time to derive actionable insights, facilitating informed decision-making at various levels of the manufacturing system.
This integrated interaction between physical processes and digital analytics constitutes the foundation of smart factory operations [191]. AI-powered analytics are essential for improving situational awareness and operational transparency in smart factories. Machine learning and deep learning models are extensively utilized to oversee production processes, recognize patterns, and identify deviations from standard operating conditions [89]. These capabilities empower manufacturers to shift from reactive to proactive decision-making, which reduces disruption, boosts throughput, and guarantees consistent product quality. Unlike conventional rule-based systems, AI-powered smart factories possess the ability to adjust to fluctuations in materials, production schedules, and operational conditions without the need for extensive reprogramming [192]. Another distinguishing feature of smart factories is the comprehensive horizontal and vertical integration of manufacturing systems. Horizontally, AI enhances coordination and information exchange among machines, production lines, and logistics systems, thereby supporting synchronized operations and the optimization of material flow [193].
Vertically, AI integrates shop-floor systems with higher-level enterprise systems, including manufacturing execution systems (MES) and enterprise resource planning (ERP) platforms. This integration facilitates comprehensive visibility and optimization, harmonizing production operations with organizational objectives, supply chain limitations, and customer demands [1]. Digital twins enhance the application of AI in intelligent manufacturing facilities by offering virtual models of physical production systems. These digital replicas enable AI models to simulate, analyze, and optimize system performance across various operational scenarios prior to implementing changes on the shop floor [194]. Through ongoing data exchange between physical assets and their digital counterparts, smart factories are able to validate control strategies, assess process enhancements, and predict system malfunctions with minimal interruption to current operations [195]. AI-powered intelligent factories additionally facilitate mass customization and flexible manufacturing, which are becoming increasingly vital in contemporary production settings. By utilizing advanced planning, scheduling, and reconfiguration capabilities, artificial intelligence enables manufacturing systems to effectively manage small batch sizes, frequent product modifications, and customized production while maintaining high levels of productivity and quality [196].
Mechatronic systems integrated with artificial intelligence are capable of dynamically modifying parameters, tooling, and workflows in response to evolving product specifications and demand fluctuations. Although these benefits are evident, the incorporation of artificial intelligence into smart factories introduces multiple challenges. These encompass data interoperability among diverse systems, cybersecurity threats linked to heightened connectivity, and the necessity for dependable real-time performance in safety-critical applications [197]. Furthermore, the effective deployment of such systems necessitates personnel with expertise in administering AI-driven technologies and interpreting analytical results [198]. Overcoming these challenges is crucial to completely unlocking the potential of AI-powered smart factories. In short, the integration of AI into smart factories signifies a significant progression toward intelligent, robust, and sustainable manufacturing systems. Through the integration of advanced mechatronic systems with AI-driven analytics and decision-making processes, Industry 4.0 smart factories enable continuous optimization, enhanced productivity, and greater flexibility, thereby establishing a solid foundation for future advancements toward human-centered and autonomous manufacturing paradigms [199].

6.2. Intelligent Process Monitoring and Control

Intelligent process monitoring and control constitute a fundamental component of AI-enabled manufacturing systems, facilitating real-time oversight, adaptive management, and ongoing optimization of production processes. Conventional process monitoring techniques are predominantly rule-based and depend on predefined thresholds or mathematical models, which frequently encounter difficulties in addressing nonlinear dynamics, process variability, and the uncertainty intrinsic to contemporary manufacturing settings. Artificial intelligence overcomes these limitations by facilitating data-driven monitoring and control strategies that analyze both historical and real-time data, thereby enhancing robustness, precision, and responsiveness [200]. In intelligent manufacturing settings, mechatronic systems are outfitted with an extensive array of sensors that continuously gather data pertaining to temperature, pressure, vibration, force, speed, energy consumption, and product quality. AI methodologies, particularly machine learning (ML) and deep learning (DL), are employed to analyze this high-dimensional sensor data to identify patterns, correlations, and anomalies that are challenging to detect through traditional methods [201].
Consequently, advanced monitoring systems offer improved situational awareness and facilitate the early identification of aberrant process behaviors [202]. Anomaly detection represents one of the most significant implementations of artificial intelligence in process surveillance. Unsupervised and semi-supervised learning techniques, including autoencoders, clustering algorithms, and probabilistic models, are extensively employed to identify deviations from normal operating conditions without the necessity of large amounts of labeled defect data [203]. These methods facilitate the early detection of process instabilities, equipment failures, and quality issues, thereby decreasing scrap rates and unplanned delays. Compared to conventional statistical process control, AI-driven anomaly detection provides enhanced flexibility in adapting to evolving process conditions and intricate system behaviors [130]. Beyond mere monitoring, AI assumes a vital function in intelligent process control, wherein control actions are adaptively modified in response to real-time data and predictive analytics. Data-driven control methodologies integrate machine learning models with conventional control frameworks to improve monitoring precision and disturbance reduction [204].
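The reconstruction-based scoring idea behind such anomaly detectors can be sketched with a linear (PCA) reconstruction on simulated two-channel sensor data. The signals, their correlation structure, and the 99th-percentile threshold below are assumptions for the demonstration, not values from any cited system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "normal" data: two correlated channels (e.g. vibration and temperature).
t = rng.normal(size=500)
normal = np.column_stack([t, 0.8 * t + 0.1 * rng.normal(size=500)])

# Fit a 1-component PCA on normal data; reconstruction error is the anomaly score.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[0]                                  # dominant direction of normal behavior

def anomaly_score(x):
    centered = x - mean
    recon = np.outer(centered @ pc, pc)     # project onto the "normal" subspace
    return np.linalg.norm(centered - recon, axis=1)

# Threshold tuned on normal data only (99th percentile of training scores).
threshold = np.percentile(anomaly_score(normal), 99)

# A fault breaks the usual correlation between the channels.
faulty = np.array([[2.0, -2.0]])
is_anomaly = anomaly_score(faulty)[0] > threshold
```

A nonlinear autoencoder replaces the projection step with learned encoder and decoder networks, but the scoring principle, that a large reconstruction error signals departure from normal behavior, is identical.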
Neural network-based models, for instance, have been effectively utilized to approximate intricate nonlinear process dynamics, facilitating adaptive and predictive control approaches that surpass traditional proportional-integral-derivative (PID) controllers in complex manufacturing operations [205]. Model predictive control (MPC) combined with artificial intelligence has attracted considerable attention in smart manufacturing. AI-augmented MPC frameworks utilize data-driven models to forecast future process dynamics and optimize control strategies over a finite horizon while ensuring adherence to operational constraints [206]. This methodology is especially effective in multivariable, highly interconnected mechatronic systems, where conventional control design is challenging. AI-driven MPC facilitates superior product quality, decreased energy consumption, and enhanced process stability [206].
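The combination of a learned process model with receding-horizon optimization can be sketched as follows. The scalar plant, noise levels, horizon, and grid search over the first control move are deliberate simplifications standing in for full MPC solvers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unknown true plant (assumed for the demo): x' = 0.9 x + 0.5 u (+ noise).
A_TRUE, B_TRUE = 0.9, 0.5

# 1) Identify a data-driven model from logged operating data (least squares).
xs, us = [0.0], []
for _ in range(200):
    u = rng.uniform(-1, 1)
    us.append(u)
    xs.append(A_TRUE * xs[-1] + B_TRUE * u + 0.01 * rng.normal())
X, U, Xn = np.array(xs[:-1]), np.array(us), np.array(xs[1:])
theta, *_ = np.linalg.lstsq(np.column_stack([X, U]), Xn, rcond=None)
a_hat, b_hat = theta            # learned model: x' ~ a_hat x + b_hat u

# 2) Receding horizon: pick the first input minimizing predicted tracking
#    error over a short horizon, apply only that input, then repeat.
def mpc_step(x, target, horizon=5, candidates=np.linspace(-1, 1, 41)):
    best_u, best_cost = 0.0, np.inf
    for u0 in candidates:
        xp, cost = x, 0.0
        for k in range(horizon):
            # later moves filled in greedily from the learned model (a simplification)
            u = u0 if k == 0 else np.clip((target - a_hat * xp) / b_hat, -1, 1)
            xp = a_hat * xp + b_hat * u
            cost += (xp - target) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u0, cost
    return best_u

# Closed loop: drive the real plant to a setpoint of 2.0.
x, target = 0.0, 2.0
for _ in range(30):
    u = mpc_step(x, target)
    x = A_TRUE * x + B_TRUE * u

final_error = abs(x - target)
```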
Quality assurance and defect forecasting also derive significant advantages from AI-enhanced process surveillance. Supervised learning methods, such as support vector machines, random forests, and deep neural networks, are extensively employed to forecast quality outcomes derived from process variables and sensor data [207]. In manufacturing processes, including machining, additive manufacturing, and semiconductor fabrication, AI models facilitate real-time quality assessment and closed-loop control modifications, thereby decreasing dependence on end-of-line inspections and enhancing overall yield [208]. Although these benefits are evident, the implementation of AI-driven process monitoring and control systems entails various challenges. These encompass challenges related to data quality, the generalization of models across varying operational conditions, real-time computational limitations, and the necessity for interpretability in safety-critical contexts. Furthermore, the integration of AI-driven control systems with existing automation infrastructure necessitates meticulous system design to guarantee stability, reliability, and adherence to industrial standards [191].
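A minimal supervised quality predictor, here a logistic regression trained by plain gradient descent on simulated process variables, illustrates the mapping from process data to pass/fail forecasts. The feature names, the ground-truth failure rule, and all training settings are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated process data: spindle speed and feed rate (standardized); label = part OK?
n = 400
X = rng.normal(size=(n, 2))
# Assumed ground-truth rule for the demo: parts fail when speed + feed is too high.
y = (X[:, 0] + X[:, 1] + 0.2 * rng.normal(size=n) < 1.0).astype(float)

# Logistic regression trained by gradient descent on the cross-entropy loss.
Xb = np.column_stack([np.ones(n), X])        # bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * (Xb.T @ (p - y)) / n

def predict_ok(speed, feed):
    """Predict pass/fail for a new operating point (probability > 0.5)."""
    z = w[0] + w[1] * speed + w[2] * feed
    return z > 0.0

train_acc = np.mean(((Xb @ w) > 0) == (y == 1))
```

In practice such predictions feed closed-loop adjustments, so that out-of-tolerance operating points are corrected before end-of-line inspection.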

6.3. Predictive Maintenance and Fault Diagnosis

Predictive maintenance leverages historical data, sensor inputs, and advanced predictive models to forecast equipment failures before they occur, facilitating prompt and economical interventions while reducing operational downtime. Condition-based maintenance (CBM) reacts to degradation once it crosses thresholds, whereas predictive maintenance anticipates degradation patterns to optimize the scheduling of maintenance activities. Reliability-centered maintenance (RCM), on the other hand, emphasizes categorizing failures according to their safety, operational, and economic implications, drawing on historical data, expert judgment, and risk assessment rather than real-time monitoring [209]. Predictive maintenance (PdM), or condition-based monitoring, is a sophisticated diagnostic method for detecting machinery faults at an early stage, before any failure occurs, so that necessary maintenance can be performed based on analysis of the equipment’s sensor signals. In a production environment, this monitoring can be implemented either directly (offline) or indirectly (online) [210]. The offline (direct) strategy involves machine-assisted periodic on-site inspections that necessitate operational interruptions, whereas online monitoring continuously assesses the equipment via sensors during operation. There are two main types of PdM techniques: model-based and data-driven [211].
Model-based approaches develop a mathematical representation of the system derived from empirical data, considering any deterministic discrepancy between the actual output and the model’s output as indicative of a defect [5]. Nevertheless, the implementation of model-based approaches is problematic in real-world contexts and is only feasible under certain conditions or within a controlled environment [211,212]. A data-driven approach employs sophisticated models to identify fault signals within apparatus lifecycle data. According to a recent report, machine learning (ML) methods are identified as an intelligent approach for predictive maintenance (PdM) [213,214]. In mechatronic systems, machines are outfitted with sensors that record condition-related data, including vibration, temperature, acoustic emissions, electrical signals, and lubricant properties. Artificial intelligence methodologies, especially machine learning and deep learning, are utilized to analyze these extensive and diverse datasets for the purpose of identifying early indicators of degradation and anomalous behavior [215]. Fault diagnosis is an essential element of predictive maintenance, encompassing the detection, isolation, and categorization of faults in complex systems.
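The residual-generation idea behind model-based PdM, flagging a fault when measured outputs deviate from a nominal model's prediction, can be sketched as follows. The first-order motor model, its coefficients, the simulated fault, and the residual threshold are all illustrative assumptions.

```python
import numpy as np

# Nominal model of a motor (assumed first-order): speed' = a*speed + b*voltage.
A_NOM, B_NOM = 0.95, 0.8

def residual(speeds, voltages):
    """Difference between measured output and the nominal model's one-step prediction."""
    predicted = A_NOM * speeds[:-1] + B_NOM * voltages[:-1]
    return speeds[1:] - predicted

def simulate(a, b, n=100, noise=0.01, seed=4):
    """Simulate the real machine with (possibly degraded) parameters a, b."""
    rng = np.random.default_rng(seed)
    speeds, voltages = [0.0], rng.uniform(0, 1, n)
    for k in range(n - 1):
        speeds.append(a * speeds[-1] + b * voltages[k] + noise * rng.normal())
    return np.array(speeds), voltages

THRESHOLD = 0.05   # tolerance on mean absolute residual (would be tuned on healthy data)

healthy = simulate(A_NOM, B_NOM)
degraded = simulate(A_NOM, 0.6 * B_NOM)      # e.g. a worn coupling reduces effective gain

healthy_fault = np.mean(np.abs(residual(*healthy))) > THRESHOLD
degraded_fault = np.mean(np.abs(residual(*degraded))) > THRESHOLD
```

As the surrounding text notes, such model-based schemes work well in controlled settings but are hard to maintain in real plants, which motivates the data-driven alternatives.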
Supervised learning techniques, including support vector machines, artificial neural networks, and ensemble methods, have been extensively employed for diagnosing established defect types when labeled datasets are accessible [216]. In situations where fault labels are limited or absent, unsupervised and semi-supervised methods, such as clustering algorithms and autoencoders, are progressively employed to identify anomalies and previously unrecognized fault patterns [203]. Deep learning has further enhanced predictive maintenance through the facilitation of automatic feature extraction from raw sensor data. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs), including long short-term memory (LSTM) architectures, have exhibited robust capabilities in modeling spatial and temporal dependencies within condition-monitoring signals [217]. Another critical aspect of AI-driven predictive maintenance is the assessment of remaining useful life (RUL). RUL prediction provides quantitative data on how long a part or system will last before it breaks down, which makes it easier to plan maintenance and allocate resources wisely. Data-driven Remaining Useful Life (RUL) estimation techniques utilize historical degradation data alongside real-time sensor measurements to forecast future system performance [218].
Machine learning models, encompassing regression-based methods and deep learning architectures, have demonstrated encouraging effectiveness in precisely estimating RUL across diverse operating conditions. The incorporation of predictive maintenance systems within Industry 4.0 frameworks further augments their efficacy. The Industrial Internet of Things (IIoT) facilitates connectivity, enabling the exchange of maintenance data among machines, production lines, and enterprise systems. Maintenance management systems can integrate AI-powered diagnostic insights to streamline automated work-order creation, spare-parts logistics, and maintenance decision support [219]. This integration makes it easier to move from reactive maintenance to condition-based and predictive maintenance. Although it offers numerous benefits, the implementation of AI-driven predictive maintenance solutions continues to encounter several challenges. These encompass challenges related to data quality and imbalance, model generalization across various devices and operational settings, and the necessity for interpretability in safety-critical applications. Furthermore, guaranteeing dependable real-time performance and seamlessly integrating AI models with existing industrial control systems necessitate meticulous system design and thorough validation [220,221].
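The trend-extrapolation idea behind data-driven RUL estimation can be sketched in a few lines. The snippet below is an illustrative baseline only, assuming a single health indicator (e.g., vibration amplitude) that degrades approximately linearly toward a known failure threshold; production systems would instead use the regression or deep learning models discussed above, and the data here are synthetic.

```python
def estimate_rul(times, health, failure_threshold):
    """Estimate remaining useful life (RUL) by fitting a straight-line
    degradation trend (ordinary least squares) to a monitored health
    indicator and extrapolating to a failure threshold."""
    n = len(times)
    mean_t = sum(times) / n
    mean_h = sum(health) / n
    cov = sum((t - mean_t) * (h - mean_h) for t, h in zip(times, health))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    intercept = mean_h - slope * mean_t
    if slope <= 0:  # indicator is not degrading toward the threshold
        return None
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Synthetic run-to-failure record: vibration RMS drifting upward over time.
times = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
health = [1.0 + 0.05 * t for t in times]
print(round(estimate_rul(times, health, failure_threshold=10.0), 6))  # → 90.0
```

In practice, the linear fit would be replaced by a learned degradation model, and the prediction would be updated continuously as new sensor measurements arrive.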

6.4. Production Flow Optimization and Scheduling

Production flow optimization and scheduling address the coordination of machines, resources, and operations across manufacturing systems with the objective of improving throughput, reducing cycle time, and enhancing overall operational efficiency. Unlike predictive maintenance, which focuses on equipment health and fault anticipation at the component level, production flow optimization operates at the system level and targets the global performance of interconnected mechatronic subsystems.
With the advent of Industry 4.0, production scheduling has transitioned from static, rule-based planning toward adaptive and data-driven decision-making frameworks. Intelligent manufacturing systems use real-time data from cyber-physical systems, smart sensors, and industrial communication networks to adjust schedules and allocate resources as production conditions change. Artificial intelligence supports this transition by enabling production systems to respond autonomously to disturbances such as demand fluctuations, variations in machine availability, and process bottlenecks.
Machine learning techniques have been applied to learn effective scheduling policies from historical production and operational data, enabling improved flexibility in complex manufacturing environments where conventional optimization methods face scalability and uncertainty limitations [106,107]. These AI-based scheduling strategies can simultaneously consider multiple performance objectives, including makespan minimization, energy efficiency, workload balancing, and system robustness, which are critical requirements in modern intelligent factories [3].
Figure 14 illustrates the AI-enabled production flow optimization framework, where real-time shop-floor data from IIoT-enabled sensors are processed by machine learning models to support adaptive scheduling, resource allocation, and control decisions. The closed-loop interaction between sensing, AI-based decision layers, and actuation enables continuous optimization of production flow under dynamic operating conditions. The integration of AI-driven scheduling with the Industrial Internet of Things (IIoT) further enhances production flow optimization by enabling continuous visibility of shop-floor conditions and closed-loop feedback across production lines [26,106]. In this context, digital twin technologies play a complementary role by providing virtual representations of manufacturing systems that support offline simulation, performance evaluation, and optimization of scheduling strategies prior to physical deployment [27,108]. This cyber-physical interaction allows manufacturers to anticipate bottlenecks, evaluate alternative production scenarios, and refine scheduling decisions without interrupting ongoing operations.
Several challenges still constrain the industrial deployment of AI-based production scheduling, despite these advances. Interoperability issues between AI models and legacy Manufacturing Execution Systems (MES), Programmable Logic Controllers (PLCs), and Supervisory Control and Data Acquisition (SCADA) architectures limit scalability and maintainability [3,26]. Moreover, guaranteeing real-time responsiveness, reliability, and robustness of AI-enabled scheduling under strict industrial timing constraints remains an open research challenge [107,109]. Addressing these issues is essential for achieving reliable, scalable, and trustworthy AI-driven production flow optimization within smart mechatronic systems.
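As a concrete point of reference for the makespan-minimization objective discussed above, the classic Longest-Processing-Time (LPT) dispatching rule can be sketched as follows. The job durations and machine count are illustrative; learned scheduling policies are typically benchmarked against simple rules of this kind.

```python
import heapq

def lpt_schedule(job_durations, num_machines):
    """Greedy Longest-Processing-Time (LPT) assignment: sort jobs by
    decreasing duration and always give the next job to the currently
    least-loaded machine (tracked with a min-heap of machine loads)."""
    loads = [(0, m) for m in range(num_machines)]  # (current load, machine id)
    heapq.heapify(loads)
    assignment = {}
    for job, dur in sorted(enumerate(job_durations), key=lambda x: -x[1]):
        load, m = heapq.heappop(loads)
        assignment[job] = m
        heapq.heappush(loads, (load + dur, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

jobs = [7, 5, 4, 3, 3, 2]  # illustrative processing times
_, makespan = lpt_schedule(jobs, num_machines=2)
print(makespan)  # → 12
```

Unlike this static rule, the AI-based strategies surveyed above re-optimize such assignments online as machine availability and demand change, and weigh additional objectives such as energy efficiency and workload balance.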

7. Case Studies and Industrial Applications

7.1. Robotics

There are many prospective applications of artificial intelligence (AI), machine learning (ML), and deep learning (DL) within advanced manufacturing robotics. These technologies enable the intelligent analysis of production data, thereby facilitating improved production planning and more efficient operational processes. In manufacturing environments, artificial intelligence and machine learning algorithms are frequently employed for quality assurance. They are capable of autonomously inspecting products, identifying defects, and initiating real-time corrective measures on the production line [222,223,224]. This feature assists manufacturers in reducing waste, eliminating bottlenecks, and enhancing overall productivity. AI-powered monitoring systems enhance ongoing oversight of manufacturing processes by identifying anomalies and deviations in real time. Such real-time analysis enhances product quality while minimizing the requirement for extensive human involvement in inspection procedures [224]. Furthermore, predictive maintenance has become a vital application of artificial intelligence and machine learning in robotics, wherein learning algorithms process sensor data to anticipate equipment failures prior to malfunctions. This proactive maintenance approach reduces unanticipated outages and markedly enhances system reliability and operational efficiency [225].
Advanced manufacturing robots integrated with AI and machine learning algorithms are progressively capable of autonomous functioning. This autonomy is especially important in hazardous environments or tasks requiring high precision, where human participation may be unsafe or unfeasible [226]. AI, ML, and DL technologies enable robots to learn from previous experiences, adapt to novel circumstances, and improve their performance in tasks over time. This enhances the intelligence and efficiency of robotic systems [227]. In robotic assembly procedures, artificial intelligence is instrumental in managing control and enhancing optimization. Advanced algorithms enable robots to collaborate efficiently with human operators, adapt dynamically to process variations, and improve assembly accuracy [228]. Safety is further enhanced through AI-driven surveillance of robotic movements and environmental conditions, facilitating the early identification of potential hazards and mitigating the likelihood of workplace incidents [229]. Moreover, AI-powered workflow optimization analyzes production data to identify inefficiencies and improve the synchronization of robotic processes [230]. AI, ML, and DL methodologies are likewise utilized to identify optimal manufacturing strategies, enhancing process efficiency and minimizing material waste [231].
In collaborative manufacturing settings, these technologies facilitate the integration of robots working alongside human operators by executing repetitive or hazardous tasks, thereby enhancing productivity and ensuring workplace safety [232]. Beyond stationary robots, AI-powered robotics encompasses automated guided vehicles (AGVs), which are essential components of intelligent manufacturing systems. AI and machine learning algorithms improve the capabilities of robotics and AGVs by advancing navigation, object recognition, and real-time decision-making based on sensor data [233,234,235]. Path optimization algorithms further decrease travel time and enhance efficiency within manufacturing facilities [236]. In the end, the incorporation of AI, ML, and DL into sophisticated manufacturing robotics facilitates intelligent, adaptive, and secure robotic systems that substantially improve productivity and operational efficiency [237].
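The path-optimization layer of an AGV can be illustrated with a minimal shortest-path search. The sketch below runs Dijkstra's algorithm on a toy 4-connected grid representing a factory floor with obstacles; real AGV planners add kinematic constraints, dynamic obstacles, and multi-vehicle traffic coordination on top of such a core.

```python
import heapq

def shortest_route(grid, start, goal):
    """Dijkstra shortest path on a 4-connected grid. grid[r][c] == 1
    marks an obstacle; each move between free cells costs 1. Returns
    the path length in moves, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

floor = [[0, 0, 0, 0],   # toy shop-floor map: 1 = blocked cell
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
print(shortest_route(floor, (0, 0), (3, 3)))  # → 6
```

Since unit-cost grids make Dijkstra equivalent to breadth-first search, industrial implementations often use A* with a distance heuristic to reduce the explored area.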

7.2. Automotive

Advanced transportation systems progressively incorporate artificial intelligence (AI), machine learning (ML), and deep learning (DL) to improve safety, efficiency, and user experience [238]. AI-powered intelligent transportation systems (ITS) play an important role in improving traffic management, reducing congestion, and enhancing road safety. Machine learning algorithms are extensively employed to analyze traffic flow patterns and enhance signal timing at intersections, while deep learning models facilitate real-time identification of potential hazards and assist in prompt driver notifications through vision-based perception systems [1]. AI, ML, and DL methodologies are widely employed to monitor and analyze traffic conditions, enabling transportation authorities to enhance traffic flow and reduce congestion [239]. Smart traffic cameras and AI-operated traffic signals consistently gather and analyze data to enhance the effectiveness of traffic management systems. Figure 15 presents a comprehensive overview of artificial intelligence applications within intelligent traffic management systems. One of the most important applications of AI-enabled mechatronics in the automotive industry is the development of autonomous and semi-autonomous vehicles. AI, ML, and DL technologies enable vehicles to perceive their environment using multi-sensor data, analyze complex driving situations, and make informed decisions autonomously without human intervention [240].
These capabilities constitute the fundamental basis of advanced driver-assistance systems (ADAS) and fully autonomous driving platforms. AI-powered intelligent transportation systems include smart traffic signals, electronic toll collection, and intelligent parking management, all of which improve transportation infrastructure and reduce operational inefficiencies [241]. Machine learning algorithms enhance predictive maintenance by analyzing sensor data from vehicles to forecast component failures and facilitate proactive maintenance scheduling. This methodology is especially advantageous for extensive vehicle fleets, such as public transportation networks, where minimizing interruption is essential [242]. AI-driven parking management systems enhance urban mobility by assisting vehicles in efficiently identifying available parking spaces, thereby alleviating congestion in densely populated areas. Machine learning algorithms enhance the efficiency of parking space utilization, while deep learning-based vision systems facilitate license plate recognition and the automated enforcement of parking regulations [243].
In the field of logistics and freight transportation, machine learning algorithms are utilized to optimize delivery routes, thereby decreasing travel time, fuel consumption, and environmental impact [244]. AI-driven analysis of traffic patterns and high-risk zones further promotes road safety. Learning algorithms are capable of informing drivers of potential hazards, recommending safer routes, and even forecasting possible collision scenarios in real time [245]. Artificial intelligence and machine learning techniques are also used to improve the scheduling and routing of public transportation, enhancing service reliability and passenger comfort. Deep learning models can additionally be employed to observe passenger behavior and identify potential safety issues within transit systems [239,240,246].

7.3. Aerospace Manufacturing and Unmanned Aerial Systems

Artificial intelligence (AI), machine learning (ML), and deep learning (DL) have considerably enhanced the capabilities of advanced unmanned aerial vehicles (UAVs), commonly known as drones, thereby increasing their efficiency and effectiveness across a broad spectrum of aerospace and industrial applications [247]. By incorporating AI algorithms with onboard sensors, cameras, and communication systems, drones are able to analyze extensive data sets in real time and execute complex tasks autonomously. These capabilities allow drones to recognize and monitor objects, detect obstacles, prevent collisions, and enhance flight path optimization to ensure maximum operational efficiency [248]. Machine learning is instrumental in improving drone perception and situational awareness. Machine learning algorithms enable drones to identify patterns within sensor and image data, thereby enhancing the precision of object recognition, object tracking, and obstacle detection. By training on diverse datasets, drones can acquire the capability to recognize a range of objects, including vehicles, infrastructure elements, and humans, and respond suitably in various operational contexts [249]. This learning-driven adaptability is especially advantageous in the dynamic and unpredictable environments encountered in aerospace and industrial missions. Deep learning, a specialized branch of machine learning, allows drones to analyze extensive datasets through multilayer neural networks, facilitating more sophisticated capabilities such as autonomous navigation, mapping, and real-time decision-making. DL-based perception systems enable drones to identify and evade obstacles in real-time during flight, even within complex environments characterized by limited visibility or fluctuating conditions [250].
The integration of mobile edge computing with artificial intelligence further advances drone navigation by enabling low-latency data processing and real-time decision-making directly on the UAV [251]. AI and ML algorithms are also extensively employed for object detection and categorization in drone imagery. These capabilities are especially advantageous in search-and-rescue missions, where drones can swiftly survey extensive geographic regions and detect objects of interest such as stranded persons, compromised infrastructure, or vehicles [252]. Similarly, deep learning-based autonomous navigation facilitates drone operations without the need for constant human oversight, ensuring safe flight around obstacles and structures during industrial inspections and aerospace maintenance activities. The application of machine learning significantly enhances autonomous drone navigation by improving environmental perception and navigation decision-making, enabling robust obstacle detection, adaptive path planning, and real-time trajectory optimization in dynamic and unstructured environments [253]. AI-powered drones play a vital role in precision agriculture by collecting data on crop health, soil properties, and moisture content, alongside performing inspection and navigation tasks. This information facilitates data-driven decision-making, resulting in improved crop yields and reduced resource wastage [254].
In security and surveillance contexts, drones integrated with AI and machine learning algorithms are employed for border monitoring, intrusion detection, and threat assessment, providing improved situational awareness across extensive regions [255]. Disaster response and emergency management have also benefited from the application of AI-enabled drones. By swiftly evaluating damage and pinpointing essential regions, drones support first responders in prioritizing rescue operations and distributing resources more efficiently [20]. In logistics and supply chain management, drones integrated with artificial intelligence and machine learning algorithms facilitate autonomous delivery of shipments and supplies to remote or inaccessible areas, enhancing route efficiency and obstacle avoidance during flight [256]. Overall, the incorporation of AI, ML, and DL into drone-based mechatronic systems has revolutionized unmanned aerial platforms into intelligent aerospace systems capable of autonomous operation, adaptive decision-making, and precise task execution.

7.4. Healthcare Mechatronics

Artificial intelligence (AI), machine learning (ML), and deep learning (DL) have emerged as essential drivers in healthcare mechatronics, notably in medical imaging, diagnostics, and data-driven clinical decision support. AI in medical imaging encompasses a broad spectrum of imaging modalities, including X-ray, magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, pathology, and microscopy, for purposes such as disease detection, diagnosis, and classification [257]. These applications overlap with machine vision and diagnostic analytics, but they also represent a distinct area of intelligent healthcare systems. This section discusses the key aspects of integrating AI models with medical imaging technology, including their advantages, drawbacks, and limitations; a full review of all relevant publications is beyond the scope of this work. AI offers several potentially transformative benefits for medical imaging, including improved accuracy, faster processing, greater scalability, and reduced costs [258,259]. A significant advantage of artificial intelligence, especially deep learning algorithms, resides in their capacity to identify subtle and intricate patterns within medical images that may not be readily perceptible to the human eye. Numerous studies and instruments have demonstrated the superior performance of AI in medical imaging tasks [260,261].
For example, Nam et al. demonstrated that an automated detection algorithm surpassed physicians in the classification of radiographs and the identification of malignant pulmonary nodules on chest radiographs [262]. Importantly, the study also demonstrated that AI algorithms, when employed as supplementary tools, enhanced physician performance as indicated by the area under the receiver operating characteristic curve (AUROC) and the jackknife alternative free-response receiver-operating characteristic (JAFROC) figure of merit. Comparable methods have been employed to identify lymph node metastases in breast cancer within time-sensitive contexts [263]. AI-driven tools, such as the Automated Retinal Disease Assessment (ARDA) based on the foundational research of Gulshan et al., assist clinicians in detecting diabetic retinopathy and are being extended to the detection of other diseases [264,265]. Beyond radiology, artificial intelligence and machine learning have been widely employed in image analysis across diverse microscopy modalities, such as pathology, transmitted-light, lattice light-sheet, and confocal microscopy [266,267]. These AI-driven methods have consistently surpassed conventional non-machine-learning techniques, especially in the domains of object detection, image feature extraction, classification, and semantic and instance segmentation. Their efficacy has been particularly pronounced in the fields of radiology, pathology, cancer biology, and immunology. The swift expansion of open-source software and the advancement of sophisticated neural network architectures have markedly enhanced the precision of cell detection and segmentation in microscopy images [268].
Furthermore, AI methodologies have improved the objectivity, sensitivity, and specificity of cervical cancer screening through the use of whole-slide imaging and have introduced innovative approaches such as AI-powered transmitted-light microscopy (AIM), which facilitates the visualization and analysis of live cell structures without the need for labeling [251,258]. Diagnosis is a crucial component of healthcare systems, and advancements in big data analytics and artificial intelligence have significantly improved the effectiveness and accuracy of diagnostic techniques [269,270]. A comprehensive review of studies conducted between 2012 and 2019 indicated that deep learning–based diagnostic performance is on par with healthcare professionals in disease classification utilizing medical images [264,271]. During the COVID-19 pandemic, artificial intelligence-based diagnostic tools were instrumental and demonstrated high levels of diagnostic accuracy [265,272]. Numerous studies have investigated machine learning–based disease diagnosis, encompassing applications targeting specific conditions such as cancers, mental disorders, neurological and cardiovascular diseases, fractures, and other medical conditions, as well as diagnostics derived from particular tests including electrocardiography, radiology, flow cytometry, and microbiome analysis [273,274,275].
Representative disease diagnostics and their corresponding algorithms are depicted in Figure 15. The integration of comprehensive electronic health records (EHRs) with machine learning methodologies presents significant opportunities for furthering clinical research and diagnostic practices [276,277]. In addition to healthcare system-managed electronic health records, post-marketing surveillance of adverse drug events offers supplementary data sources for retrospective analysis [278]. Given the swift expansion of electronic health record data, data-driven approaches have been extensively implemented to identify concealed patterns and authentic signals within large-scale healthcare datasets [279]. Before the advent of the machine learning era, statistical methods were utilized to identify safety signals in electronic health records by considering covariates and combining multiple data sources [280]. Recently, machine learning and deep learning models have been integrated to analyze both structured and unstructured electronic health record data, enhancing disease screening, risk stratification, and diagnostic precision [281]. Deep learning models convert raw healthcare data into multi-layered representations via nonlinear transformations, facilitating the extraction of high-level features for diagnostic applications [15]. These methodologies facilitate early diagnosis of diseases, personalized treatment strategies, improved analysis of medical imaging, and AI-enabled patient engagement. Furthermore, the incorporation of genomic and biomarker data into machine learning models has facilitated the identification of clinically significant biomarkers despite the high dimensionality of omics datasets [282,283]. 
For instance, machine learning classifiers utilizing metabolic biomarkers have been applied to assess tumor response to radiation [283], while alternative studies have employed live-cell biomarkers and machine learning algorithms to categorize cancer patients based on pathological risk [284]. As machine learning and artificial intelligence applications advance within biomedical fields, there is a growing integration of statistical and explainable methods to enhance the transparency and interpretability of AI-based diagnostic systems [282,285].

7.5. Industrial Automation and Advanced Manufacturing

Artificial intelligence (AI), machine learning (ML), and deep learning (DL) have become key factors in modern industrial automation, especially in smart manufacturing and Industry 4.0. These technologies facilitate the shift from inflexible, rule-driven automation to intelligent, adaptive systems capable of real-time decision-making, learning, and optimization [198,286]. In industrial automation, artificial intelligence-powered systems are extensively employed for process monitoring and control, wherein substantial amounts of sensor data from machinery, production lines, and cyber-physical systems are persistently analyzed to identify anomalies, forecast faults, and enhance operational parameters [287]. Machine learning models facilitate automated pattern recognition within complex, nonlinear industrial processes, enhancing stability, efficiency, and product quality beyond the scope of conventional control strategies [195]. Predictive maintenance represents another significant application of artificial intelligence within industrial automation. Through the integration of machine learning and deep learning models with Industrial Internet of Things (IIoT) infrastructures, automated systems are capable of predicting equipment failures, planning maintenance proactively, and minimizing unplanned outages [219,220].
These capabilities enable manufacturing systems to transition from reactive and preventive maintenance strategies to condition-based and predictive approaches, thereby improving reliability and asset utilization. AI-driven optimization methodologies are also widely utilized in production planning, scheduling, and workflow enhancement. Advanced algorithms are capable of dynamically allocating resources, balancing workloads, and adjusting schedules in response to disruptions such as equipment failures, demand variability, or supply chain interruptions [197,207]. Such adaptive scheduling is especially beneficial in high-mix, low-volume manufacturing settings, where flexibility and responsiveness are essential. Furthermore, industrial automation is progressively focused on human–machine collaboration, wherein artificial intelligence facilitates safe and efficient interactions between operators and automated systems. Intelligent automation systems can support human operators through decision assistance, visual analytics, and adaptive control interfaces, thereby enhancing ergonomics, safety, and productivity [288]. This transition corresponds with the emergent principles of Industry 5.0, which emphasize human-centric, resilient, and sustainable manufacturing systems. Overall, the incorporation of AI, ML, and DL into industrial automation facilitates the development of more intelligent manufacturing distinguished by self-monitoring, self-optimization, and ongoing learning. These capabilities are transforming industrial automation by improving efficiency, adaptability, and sustainability, all while preserving strong human supervision and control [195,286].
Ultra-precision surface engineering represents one of the most challenging and significant applications of AI-enabled mechatronic systems. Recent advancements in precision diamond polishing demonstrate that energy-field coupling can enhance surface finishing and facilitate multi-modal process control [68]. The study “Multi-physical field coupling polishing of diamond for atomic-scale damage-free surface” examines advanced approaches for achieving sub-nanometer surface roughness, focusing on the physical principles and critical technological challenges of attaining atomic-scale diamond surface quality. The synergistic combination of mechanical, chemical, plasma, laser, and ultrasonic energy fields for surface material removal and subsurface damage reduction is analogous to AI-driven mechatronic optimization frameworks that synchronize sensing, actuation, and process parameters for peak performance [34]. Inductively coupled plasma (ICP) polishing of single-crystal diamond has been employed to evaluate the influence of etching parameters on surface roughness, as measured by atomic force microscopy. Such parameter optimization problems are well suited to AI-based surrogate modeling and closed-loop optimization, where process variables are continuously updated based on sensor feedback. Surface roughness decreased by 74.7%, from 2.22 nm to 0.562 nm, through the optimization of gas flow ratio, ICP and RF power, chamber pressure, and etching duration, rendering the process suitable for high-precision optical applications [35].
The potential of multi-energy-field-assisted processes in advanced manufacturing has been evidenced by innovative photocatalytic chemical mechanical polishing (CMP) methods that achieve atomic-level surface quality with minimal damage and elevated material removal rates [289]. These ultra-precision surface processing systems necessitate intelligent control, real-time monitoring, process optimization, and multi-sensor feedback for the integration of AI with mechatronics. Research on diamond polishing highlights challenges and optimization techniques that suggest AI-integrated mechatronic systems could enhance next-generation precision manufacturing [3]. Figure 16 presents a schematic illustration of multi-physical field coupling in ultra-precision diamond polishing, highlighting the interaction between mechanical, plasma, chemical, and energy-assisted mechanisms for atomic-scale material removal.
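The surrogate-based parameter optimization described above can be sketched schematically. In the toy example below, the surrogate roughness model and its optimum are entirely hypothetical stand-ins for a model regressed from AFM measurements; only the parameter names (gas flow ratio, ICP and RF power, chamber pressure, etch duration) follow the cited study, and plain random search stands in for more sophisticated Bayesian or closed-loop optimizers.

```python
import random

def surrogate_roughness(params):
    """HYPOTHETICAL surrogate: a stand-in for a data-driven model that
    maps ICP etching parameters to predicted surface roughness (nm).
    In practice this would be fitted to measured process data."""
    flow, icp_power, rf_power, pressure, duration = params
    # Smooth convex toy response with a known optimum, for illustration only.
    return (0.5
            + (flow - 0.6) ** 2
            + 1e-6 * (icp_power - 800) ** 2
            + 1e-5 * (rf_power - 100) ** 2
            + (pressure - 1.5) ** 2
            + 0.01 * (duration - 20) ** 2)

def random_search(bounds, n_trials=5000, seed=0):
    """Minimize the surrogate by sampling parameter vectors uniformly
    within bounds and keeping the best one found."""
    rng = random.Random(seed)
    best_params, best_val = None, float("inf")
    for _ in range(n_trials):
        cand = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = surrogate_roughness(cand)
        if val < best_val:
            best_params, best_val = cand, val
    return best_params, best_val

bounds = [(0.1, 1.0),    # gas flow ratio (illustrative range)
          (400, 1200),   # ICP power (W)
          (20, 200),     # RF power (W)
          (0.5, 3.0),    # chamber pressure (Pa)
          (5, 60)]       # etch duration (min)
params, predicted = random_search(bounds)
print(round(predicted, 2))  # predicted roughness at the best sampled point
```

In a closed-loop deployment, each candidate recipe would be evaluated on the physical process, the surrogate refitted on the new measurement, and the search repeated, which is the feedback structure the review attributes to AI-integrated mechatronic process control.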

7.6. Cross-Case Synthesis and Conceptual Framework for AI-Enabled Mechatronic Systems

Artificial intelligence has evolved from a supporting tool to a core enabler of modern mechatronic systems in various industrial domains, as evidenced by the case studies reviewed in Section 7.1, Section 7.2, Section 7.3, Section 7.4 and Section 7.5. Architectural patterns, technical constraints, and optimization objectives are consistently observed in various applications, including robotics, transportation systems, healthcare mechatronics, industrial automation, and ultra-precision manufacturing, despite the differences in context [2,3,106].

7.6.1. Common Architectural Patterns Across Application Domains

AI integration is implemented in all reviewed domains through a layered architecture that encompasses sensing, data-driven inference, decision-making, actuation, and feedback. This structure facilitates adaptive motion planning, task execution under uncertainty, and collision avoidance in autonomous systems and robotics [53,63]. AI-driven perception and decision layers facilitate real-time traffic management, predictive maintenance, and vehicle autonomy in automotive and transportation systems [90,91,92,93]. Healthcare mechatronic systems employ analogous architectures for patient monitoring, intelligent diagnostics, and robotic-assisted intervention, albeit with heightened safety and interpretability standards [238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253]. A consistent trend across the case studies is the migration of intelligence closer to the physical layer through embedded and edge AI. This design shift is driven by the need to meet real-time requirements, reduce latency, and ensure predictable performance in safety-critical systems.
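The layered sensing–inference–decision–actuation–feedback pattern can be made concrete with a deliberately minimal numeric sketch. Every layer here is a toy stand-in (a threshold "anomaly detector" and a gain-scheduled proportional correction), not a model of any system reviewed above; the point is only the closed-loop data flow between the layers.

```python
def sense(plant_state):
    """Sensing layer: return a measurement of the plant state."""
    return plant_state

def infer(measurement, threshold=1.0):
    """Inference layer (stub): flag an anomaly when the measurement
    deviates strongly from the setpoint at zero."""
    return abs(measurement) > threshold

def decide(measurement, anomaly, setpoint=0.0, gain=0.5):
    """Decision layer: compute a corrective command, halving the gain
    when an anomaly is flagged (a crude stand-in for adaptive control)."""
    k = gain * (0.5 if anomaly else 1.0)
    return k * (setpoint - measurement)

def actuate(plant_state, command):
    """Actuation layer: apply the command to the (trivial) plant."""
    return plant_state + command

# Closed loop: sensing -> inference -> decision -> actuation -> feedback.
state = 4.0
for _ in range(20):
    m = sense(state)
    state = actuate(state, decide(m, infer(m)))
print(round(state, 3))  # → 0.0 (the loop drives the state to the setpoint)
```

In the systems surveyed above, each stub would be replaced by real components: multi-modal sensor fusion, a learned perception or diagnosis model, a planner or adaptive controller, and physical actuators, with embedded or edge hardware closing the loop under real-time constraints.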

7.6.2. Role of AI Across Control, Optimization, and Material-Level Mechanisms

The reviewed cases show that AI excels in mechatronic systems due to both its algorithmic capabilities and effective integration with physical mechanisms and material properties. In robotic and industrial automation systems, reinforcement learning and adaptive control algorithms improve autonomy by learning effective policies in changing environments [60,61,62,63,64,65]. Machine learning models enhance forecasting, scheduling, and resource optimization in transportation and energy systems across various operational contexts [91,96]. In ultra-precision surface engineering and advanced manufacturing, AI operates at the mechanistic level, not merely at the supervisory level. Case studies in diamond polishing and multi-physical field coupling processes demonstrate that AI-driven optimization frameworks dynamically modify tool trajectories, energy-field inputs, and environmental parameters to maintain optimal material removal modes and reduce defect formation [287,289]. These examples show that AI does not replace physical laws; instead, it coordinates complex multi-physics interactions by synchronizing sensing, actuation, and control in real time [34,35].

7.6.3. Cross-Domain Challenges and Maturity Levels

Despite demonstrated performance advances, recurrent issues appear across all application areas. Data dependency remains a major limitation, particularly for rare events, novel materials, or extreme operating conditions [26,27]. The generalizability of models across machines, environments, and materials remains constrained, limiting scalability and industrial applicability. Real-time deployment constitutes an additional common constraint. Numerous deep learning models are computationally demanding, necessitating model reduction, surrogate modeling, or hybrid physics-informed AI methodologies for embedded deployment [107,108,109]. Moreover, interpretability and trust are critical concerns in healthcare, autonomous mobility, and ultra-precision manufacturing, where opaque decision-making hinders certification and safe operation [34,40]. In terms of maturity, applications such as industrial robots and predictive maintenance exhibit higher technology readiness levels, whereas AI-driven material-process optimization and human–machine collaborative intelligence remain at early stages of industrial adoption [3,24].

7.6.4. Unified Conceptual Framework for AI-Enabled Mechatronics

A cohesive conceptual framework for AI-enabled mechatronic systems can be developed by synthesizing the analyzed case studies. This framework comprises a closed-loop interaction among:
  • Material and physical system properties (e.g., stiffness, thermal behavior, surface integrity);
  • Mechanism-level processes (e.g., deformation, fracture, heat transfer, wear);
  • Multi-modal sensing (force, vibration, temperature, vision);
  • AI-based inference and optimization models (ML, DL, RL, surrogate models, physics-informed networks);
  • Control and actuation strategies (motion control, energy-field coupling, adaptive regulation);
  • Feedback and continuous learning enabled by embedded intelligence and digital twins.
This framework highlights that substantial performance gains arise when AI is tightly integrated with physical understanding and material behavior, rather than applied as an isolated data-processing layer [2,34].
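The closed-loop interaction described above can be sketched as a single control cycle of sense, infer, actuate, and learn. The class and method names below are purely illustrative, not a standard API; the "AI" component is reduced to one adaptable gain so the loop structure stays visible:

```python
# Illustrative sketch of the closed-loop AI-mechatronics framework:
# sense -> infer -> actuate -> learn, repeated every control cycle.
class AIMechatronicCell:
    def __init__(self):
        self.gain = 0.5      # crude stand-in for a learned policy parameter
        self.history = []    # feedback log enabling continuous learning

    def sense(self, plant_state):
        """Multi-modal sensing reduced to one measured error signal."""
        return plant_state - 0.0   # error relative to a zero setpoint

    def infer(self, error):
        """AI-based inference: here a learned proportional correction."""
        return -self.gain * error

    def learn(self, error):
        """Feedback loop: adapt the gain from logged performance."""
        self.history.append(abs(error))
        if len(self.history) >= 2 and self.history[-1] > self.history[-2]:
            self.gain *= 0.9       # error grew -> act more conservatively

def run(cell, state=1.0, steps=20):
    for _ in range(steps):
        error = cell.sense(state)
        state += cell.infer(error)   # actuation updates the physical state
        cell.learn(error)
    return state

print(abs(run(AIMechatronicCell())) < 1e-3)  # state driven near the setpoint
```

In a real system, the `infer` and `learn` steps would be replaced by the ML/DL/RL and digital-twin components listed in the framework; the point of the sketch is the closed feedback topology, not the simple controller.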

7.6.5. Implications for Future Research and Industrial Deployment

The cross-case synthesis suggests that future advancements in AI-enabled mechatronic systems will depend on explainable and physics-informed AI models, standardized integration frameworks, and robust interoperability across the sensing, control, and computation layers [23,26]. Integrating algorithmic innovation with an understanding of material-level processes is essential for advancing AI implementation in ultra-precision and high-end manufacturing sectors [35,287]. The analyzed cases indicate that AI-driven mechatronics is progressing toward distributed intelligence across sensing, computation, and actuation layers, enabling adaptive, resilient, and material-aware systems that meet the requirements of Industry 4.0 and the emerging Industry 5.0 paradigms [3,106].

8. Challenges and Limitations in AI-Mechatronics Integration

The cross-case analysis in Section 7, spanning robotics, transportation systems, healthcare mechatronics, industrial automation, and ultra-precision manufacturing, reveals several important research gaps at the intersection of artificial intelligence and mechatronic systems. Although AI-driven methods offer clear advantages in flexibility, autonomy, and performance, large-scale industrial adoption remains limited by issues of data quality, real-time operation, system reliability, interoperability, and ethics. These issues recur across application areas and indicate research gaps that must be closed to make AI-enabled mechatronic systems safe, scalable, and reliable [2,106,107,120,289].

8.1. Research Gap: Data Availability and Quality

AI-driven mechatronic systems rely extensively on large volumes of representative, high-quality data acquired from sensors, actuators, and operational environments. However, industrial data collected in real-world settings are frequently noisy, incomplete, imbalanced, or non-stationary due to sensor degradation, communication interruptions, environmental variability, and evolving operating conditions. These data deficiencies can bias learning processes, degrade generalization capability, and lead to unreliable decision-making, particularly in safety-critical applications [2,106,107].
Moreover, labeled fault, defect, or failure data are often scarce, limiting the effectiveness of supervised learning approaches. Data collected from a specific machine, process, or environment may also lack transferability to other systems, thereby restricting scalability and cross-platform deployment. Addressing this research gap requires advances in data-efficient learning, domain adaptation, self-supervised learning, and robust data governance frameworks for industrial AI applications [289].
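One simple and widely used mitigation for scarce labeled fault data is cost-sensitive learning with inverse-frequency class weights, so that rare fault examples contribute more to the training loss. The sketch below (with hypothetical condition-monitoring labels) computes such weights using the common `total / (n_classes * count)` heuristic found in mainstream ML toolkits:

```python
from collections import Counter

# Inverse-frequency class weights: a standard remedy when labeled fault or
# defect data are heavily outnumbered by healthy-condition samples.
def class_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    # weight_c = total / (n_classes * count_c)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# Hypothetical condition-monitoring dataset: 95 "healthy", 5 "fault" samples
labels = ["healthy"] * 95 + ["fault"] * 5
w = class_weights(labels)
print(w["fault"] / w["healthy"])  # fault samples weighted 19x more heavily
```

Such weights can be passed to most supervised learners as per-class loss multipliers; data-efficient approaches like domain adaptation and self-supervised pretraining complement this by reducing the number of labels needed in the first place.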

8.2. Research Gap: Real-Time Constraints and Deterministic Operation

Deterministic real-time responses with strict latency and reliability guarantees are required in numerous mechatronic applications, such as industrial automation, autonomous vehicles, and robotic manipulation. Advanced AI models, particularly deep learning architectures, are computationally intensive and frequently exceed the processing, memory, and energy budgets of embedded or edge-based mechatronic platforms. While edge AI, model compression, hardware acceleration, and Tiny Machine Learning (TinyML) techniques have demonstrated potential for reducing computational overhead, ensuring real-time performance, accuracy, and robustness simultaneously remains an open research challenge. This gap is especially critical for closed-loop control systems, where timing violations or inference delays may result in instability or hazardous behavior [106,107,121].
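The core idea behind much TinyML model compression is post-training quantization: storing weights as 8-bit integers plus a scale factor, cutting memory roughly fourfold relative to 32-bit floats at the cost of a bounded rounding error. A minimal sketch of symmetric int8 weight quantization (illustrative values, not a real deployed model):

```python
# Post-training 8-bit symmetric quantization: the essence of many TinyML
# compression pipelines. Weights become int8 values plus one float scale.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.91, -0.55]   # hypothetical layer weights
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Memory drops ~4x (float32 -> int8); reconstruction error stays within
# one quantization step of the original values.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err < scale)
```

Production frameworks add per-channel scales, zero points for asymmetric ranges, and quantization-aware training, but the accuracy-versus-footprint trade-off shown here is the same one that embedded mechatronic platforms must negotiate.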

8.3. Research Gap: Safety, Reliability, and Robustness

Safety-critical mechatronic systems must function consistently in dynamic, uncertain, and partially observable environments. Data-driven AI models, particularly black-box learning approaches, may exhibit unpredictable behavior when exposed to novel scenarios, sensor faults, distributional shifts, or adversarial disturbances. This absence of guaranteed robustness presents substantial risks in applications such as collaborative robotics, autonomous transportation systems, and medical devices [24,25]. In addition, the validation, certification, and regulatory approval procedures that are well established for conventional control methodologies are further complicated by the limited explainability and formal verifiability of many AI models. Addressing this research gap requires safety-aware AI architectures designed specifically for mechatronic systems, formally verifiable learning-based controllers, and explainable AI [121].

8.4. Research Gap: System Interoperability and Integration

AI-enabled mechatronic systems typically comprise heterogeneous sensors, actuators, controllers, communication networks, and software platforms supplied by multiple vendors. The absence of standardized interfaces, reference architectures, and unified integration frameworks significantly complicates deployment, maintenance, and scalability.
Interoperability challenges are particularly pronounced when integrating AI modules with legacy industrial infrastructures such as PLCs, SCADA systems, and fieldbus-based automation architectures, which were not originally designed to support data-driven intelligence. This research gap highlights the need for standardized AI–mechatronics integration frameworks, middleware solutions, and interoperable cyber-physical system architectures [3,26].

8.5. Research Gap: Ethical, Economic, and Workforce Implications

The increasing autonomy of AI-enabled mechatronic systems introduces ethical concerns related to accountability, transparency, data privacy, and decision authority. In safety-critical contexts, unclear responsibility allocation between human operators and autonomous systems remains an unresolved issue.
From an economic perspective, the high upfront costs associated with AI model development, data infrastructure, computational resources, and specialized expertise may limit adoption, particularly for small and medium-sized enterprises. Additionally, workforce displacement, skills transformation, and the demand for interdisciplinary expertise pose socio-technical challenges that must be addressed to ensure sustainable and inclusive industrial transformation [25,26].

9. Future Research Directions

To fully harness the potential of AI-integrated mechatronic systems, future research must address existing limitations while exploring emerging technological opportunities.

9.1. Edge AI and TinyML for Real-Time Control

Future systems are anticipated to increasingly deploy AI models directly on embedded and edge devices. Research into lightweight architectures, energy-efficient learning algorithms, and hardware–software co-design will be essential for enabling real-time intelligent control within resource-constrained environments.

9.2. AI-Powered Digital Twins

Digital twins that integrate physics-based models with data-driven artificial intelligence are anticipated to serve a pivotal function in system design, optimization, and lifecycle management. Future research will concentrate on real-time synchronization between physical systems and their digital counterparts, facilitating predictive maintenance, adaptive control, and virtual commissioning.
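The real-time synchronization at the heart of such digital twins can be sketched as an observer-style loop: a physics-based model predicts the twin's state, and each measurement from the physical asset corrects that prediction. The dynamics, gain, and drift term below are illustrative assumptions, not a real plant model:

```python
# Minimal digital-twin synchronization sketch: predict with a physics model,
# then correct with each measurement (the blend factor K acts as an
# observer/filter gain; all numbers here are illustrative).
def twin_step(twin_state, control, measurement, K=0.3):
    predicted = 0.8 * twin_state + control            # physics-based model
    return predicted + K * (measurement - predicted)  # measurement correction

plant, twin, open_loop = 0.0, 0.0, 0.0
for t in range(50):
    plant = 0.8 * plant + 1.0 + 0.2   # real plant has an unmodeled +0.2 drift
    twin = twin_step(twin, 1.0, plant)
    open_loop = 0.8 * open_loop + 1.0  # same model, no synchronization

print(abs(twin - plant) < abs(open_loop - plant))  # correction shrinks the gap
```

The same predict-then-correct structure, scaled up with data-driven residual models, is what lets a twin track a physical system despite model mismatch and so support predictive maintenance and virtual commissioning.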

9.3. Fully Autonomous Mechatronic Cells

The progression toward Industry 5.0 envisions highly autonomous, self-organizing mechatronic cells capable of decision-making, learning, and self-optimization. Research is required to advance multi-agent AI frameworks, decentralized intelligence, and robust coordination mechanisms for these systems.

9.4. Explainable and Trustworthy AI

Enhancing the transparency and interpretability of AI models is crucial for safety-critical mechatronic applications. Future research will focus on explainable artificial intelligence (XAI), uncertainty quantification, and formal verification techniques to enhance trust, support certification processes, and assure adherence to regulatory standards.
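One widely used ingredient of trustworthy AI is ensemble-based uncertainty quantification: the spread of predictions across models trained on resampled data serves as a confidence signal. The bootstrap sketch below uses a deliberately simple 1-D linear model and synthetic data (all values hypothetical):

```python
import random

# Bootstrap-ensemble uncertainty quantification with a 1-D linear model:
# the disagreement among ensemble members estimates predictive uncertainty.
random.seed(1)
xs = [i / 10 for i in range(1, 11)]
ys = [2.0 * x + random.gauss(0, 0.05) for x in xs]   # synthetic y = 2x + noise
data = list(zip(xs, ys))

def fit_slope(pairs):
    # closed-form least squares for y = w * x (no intercept)
    return sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# Each ensemble member is fitted on a bootstrap resample of the data
slopes = [fit_slope([random.choice(data) for _ in data]) for _ in range(200)]

mean_w = sum(slopes) / len(slopes)
spread = max(slopes) - min(slopes)   # nonzero spread = quantified uncertainty
print(abs(mean_w - 2.0) < 0.15, spread > 0.0)
```

In a safety-critical mechatronic system, a controller could reject or defer decisions whenever this spread exceeds a calibrated threshold, giving a quantitative hook for the certification processes discussed above.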

9.5. Self-Adapting and Lifelong Learning Systems

Mechatronic systems operating in dynamic environments must adapt continuously to changing conditions, component wear, and evolving tasks. Future research will concentrate on online learning, continual and lifelong learning methods that mitigate catastrophic forgetting, and self-adapting control architectures that allow systems to refine their models and policies throughout their operational lifetime.

10. Conclusions

This review provides a comprehensive synthesis of how artificial intelligence is transforming contemporary mechatronic systems across sensing, control, optimization, and decision-making layers. It demonstrates how machine learning, deep learning, reinforcement learning, and embedded AI enable adaptive control, real-time process optimization, predictive maintenance, energy efficiency, and autonomous operation in mechatronic systems. Case studies in robotics, transportation, healthcare mechatronics, industrial automation, and ultra-precision manufacturing further illustrate the transformative role of AI-enhanced mechatronic systems in advanced manufacturing and intelligent production environments.
Despite these advancements, the analysis identifies several critical research gaps that limit large-scale industrial implementation. Many data-driven AI-based mechatronic solutions remain fragile under varying operational conditions and exhibit limited generalization capability. The opaque or “black-box” nature of deep learning models raises concerns regarding interpretability, trust, and safety, particularly in mission-critical applications such as autonomous machinery, healthcare devices, and ultra-precision manufacturing equipment. Additional challenges arise from real-time computational constraints, limited resources of embedded platforms, system interoperability issues, and the absence of standardized frameworks for AI–mechatronics integration.
Future research should prioritize the development of explainable and trustworthy AI for mechatronic control to support transparent decision-making and safety certification. Establishing standardized architectures, integration guidelines, and performance benchmarks for AI-enabled mechatronic systems is essential to ensure interoperability across sensing, actuation, edge computing, and digital twin technologies. Emerging paradigms such as Edge AI, TinyML, and neuromorphic computing are expected to address real-time processing and energy efficiency requirements. Furthermore, tighter integration between digital twins and physical systems will enable continuous learning, virtual commissioning, and lifecycle optimization of intelligent mechatronic systems.
Looking ahead, Industry 5.0 is expected to emphasize resilience, sustainability, and human–machine collaboration through the convergence of artificial intelligence, mechatronics, cyber–physical systems, and human-centered design. By addressing current limitations and pursuing these research directions, AI-enabled mechatronic systems can evolve into reliable, scalable, and deployable solutions for next-generation smart manufacturing and autonomous engineering applications.

Author Contributions

G.S. conceived the study, performed the literature review, analyzed and integrated the findings, and wrote the manuscript. B.G. reviewed and approved the final version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The present study is a review article and does not involve the generation of new datasets. All data supporting the findings of this study are available within the cited literature.

Acknowledgments

The authors thank the Department of Mechanical Engineering at the University of KwaZulu-Natal, Durban, South Africa, for their support. They also recognize the use of AI-assisted tools, specifically Grammarly (v1.152.10) and QuillBot (v40.104.0), used during the manuscript’s preparation for language editing, grammar correction, and enhancing clarity and readability. These tools were not used to generate scientific content, data, interpretation, or conclusions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT  Internet of Things
DTs  Digital Twins
ML   Machine Learning
HRC  Human–Robot Collaboration
MV   Machine Vision
AI   Artificial Intelligence
DL   Deep Learning
RL   Reinforcement Learning

References

  1. Soori, M.; Dastres, R.; Arezoo, B.; Jough, F.K.G. Intelligent robotic systems in Industry 4.0: A review. J. Adv. Manuf. Sci. Technol. 2024, 4, 2024007. [Google Scholar] [CrossRef]
  2. Yakubu Rabiu, A.A.; Messager, N.D.; Musa, M.I.; Bayero, N.S.; Surajo, A. Artificial Intelligence in Smart Sensors and Actuators for Mechatronic Applications. Simulation 2025, 3, 20. [Google Scholar] [CrossRef]
  3. Ryalat, M.; Franco, E.; Elmoaqet, H.; Almtireen, N.; Al-Refai, G. The integration of advanced mechatronic systems into industry 4.0 for smart manufacturing. Sustainability 2024, 16, 8504. [Google Scholar] [CrossRef]
  4. Nüßgen, A.; Lerch, A.; Degen, R.; Irmer, M.; Fries, M.; Richter, F.; Boström, C.; Ruschitzka, M. Reinforcement Learning in Mechatronic Systems: A Case Study on DC Motor Control. Circuits Syst. 2025, 16, 1–24. [Google Scholar] [CrossRef]
  5. Windmann, A.; Wittenberg, P.; Schieseck, M.; Niggemann, O. Artificial intelligence in Industry 4.0: A review of integration challenges for industrial systems. In 2024 IEEE 22nd International Conference on Industrial Informatics (INDIN); IEEE: New York, NY, USA, 2024; pp. 1–8. [Google Scholar]
  6. Chatti, S.; Laperrière, L.; Reinhart, G.; Tolio, T. CIRP Encyclopedia of Production Engineering; Springer: Berlin/Heidelberg, Germany, 2019; pp. 1181–1186. [Google Scholar]
  7. Bishop, R.H. Mechatronics: An Introduction; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  8. Afolalu, S.A.; Ikumapayi, O.M.; Abdulkareem, A.; Soetan, S.B.; Emetere, M.E.; Ongbali, S.O. Enviable roles of manufacturing processes in sustainable fourth industrial revolution—A case study of mechatronics. Mater. Today Proc. 2021, 44, 2895–2901. [Google Scholar] [CrossRef]
  9. Stankovski, S.; Ostojić, G.; Zhang, X.; Baranovski, I.; Tegeltija, S.; Horvat, S. Mechatronics, identification tehnology, industry 4.0 and education. In 2019 18th International Symposium Infoteh-Jahorina (Infoteh); IEEE: New York, NY, USA, 2019; pp. 1–4. [Google Scholar]
  10. Geevarghese, K.P.; Gangadharan, K.V. Design and implementation of remote mechatronics laboratory for e-learning using labview and smartphone and cross-platform communication toolkit (scct). Procedia Technol. 2014, 14, 108–115. [Google Scholar]
  11. Liagkou, V.; Stylios, C.; Pappa, L.; Petunin, A. Challenges and opportunities in industry 4.0 for mechatronics, artificial intelligence and cybernetics. Electronics 2021, 10, 2001. [Google Scholar] [CrossRef]
  12. Liao, S.H. Expert system methodologies and applications a decade review from 1995 to 2004. Expert Syst. Appl. 2005, 28, 93–103. [Google Scholar] [CrossRef]
  13. Clancey, W.J. The epistemology of a rule-based expert system a framework for explanation. Artif. Intell. 1983, 20, 215–251. [Google Scholar] [CrossRef]
  14. Nilsson, N.J. The Quest for Artificial Intelligence; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  15. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  16. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef]
  17. Domingos, P. A few useful things to know about machine learning. Commun. ACM 2012, 55, 78–87. [Google Scholar] [CrossRef]
  18. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Curran Associates, Inc.: Red Hook, NY, USA, 2017. [Google Scholar]
  19. Halevy, A.; Norvig, P.; Pereira, F. The unreasonable effectiveness of data. IEEE Intell. Syst. 2009, 24, 8–12. [Google Scholar] [CrossRef]
  20. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for Large-Scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16); USENIX: Berkeley, CA, USA, 2016; pp. 265–283. [Google Scholar]
  21. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019); Curran Associates, Inc.: Red Hook, NY, USA, 2019. [Google Scholar]
  22. Zhang, E.Y.; Cheok, A.D.; Pan, Z.; Cai, J.; Yan, Y. From turing to transformers: A comprehensive review and tutorial on the evolution and applications of generative transformer models. Sci 2023, 5, 46. [Google Scholar] [CrossRef]
  23. Garcez, A.D.A.; Lamb, L.C. Neurosymbolic ai: The 3rd wave. Artif. Intell. Rev. 2023, 56, 12387–12406. [Google Scholar] [CrossRef]
  24. Hashemi, A.; Dowlatshahi, M.B. A review on the feasibility of artificial intelligence in mechatronics. In Artificial Intelligence in Mechatronics and Civil Engineering: Bridging the Gap; Springer: Singapore, 2023; pp. 79–92. [Google Scholar] [CrossRef]
  25. Zaitceva, I.; Andrievsky, B. Methods of intelligent control in mechatronics and robotic engineering: A survey. Electronics 2022, 11, 2443. [Google Scholar] [CrossRef]
  26. Aheleroff, S.; Mostashiri, N.; Xu, X.; Zhong, R.Y. Mass personalisation as a service in industry 4.0: A resilient response case study. Adv. Eng. Inform. 2021, 50, 101438. [Google Scholar] [CrossRef]
  27. Guo, Y. AI in modular mechatronic system design: A review. Appl. Sci. 2023, 13, 158. [Google Scholar] [CrossRef]
  28. Katrantzis, E.; Moulianitis, V.C.; Miatliuk, K. Conceptual Design Evaluation of Mechatronic Systems. In Emerging Trends in Mechatronics; IntechOpen: London, UK, 2020. [Google Scholar]
  29. Jiménez López, E.; Cuenca Jiménez, F.; Luna Sandoval, G.; Ochoa Estrella, F.J.; Maciel Monteón, M.A.; Muñoz, F.; Limón Leyva, P.A. Technical considerations for the conformation of specific competences in mechatronic engineers in the context of industry 4.0 and 5.0. Processes 2022, 10, 1445. [Google Scholar] [CrossRef]
  30. Maksuti, R. Applications of smart materials in mechatronics technology. JAS-SUT J. Appl. Sci.-SUT 2019, 5, 9–13. [Google Scholar]
  31. Maksuti, R. Properties of Smart Materials for Mechatronic Applications. J. Appl. Sci.-SUT 2021, 7, 118–125. [Google Scholar]
  32. Whig, P.; Madavarapu, J.B.; Modhugu, V.R.; Kasula, B.Y.; Bhatia, A.B. Advancing mechatronics through artificial intelligence. In Computational Intelligent Techniques in Mechatronics; John Wiley & Sons: Hoboken, NJ, USA, 2024; pp. 381–399. [Google Scholar]
  33. Gehlot, V.; Rana, P.S. AI in Mechatronics. In Computational Intelligent Techniques in Mechatronics; John Wiley & Sons: Hoboken, NJ, USA, 2024; pp. 1–39. [Google Scholar]
  34. Yuan, S.; Cheung, C.F.; Fang, F.; Huang, H.; Wang, C. Multi-physical field coupling polishing of diamond for atomic-scale damage-free surface. Int. J. Extrem. Manuf. 2026, 8, 032004. [Google Scholar] [CrossRef]
  35. Zhao, L.; Wang, X.; Jiang, M.; Zhao, C.; Jiang, N.; Nishimura, K.; Yi, J.; Fang, S. Optimization of Diamond polishing process for sub-nanometer roughness using Ar/O2/SF6 plasma. Materials 2025, 18, 2615. [Google Scholar] [CrossRef]
  36. Pan, M.; Zhang, G.; Zhang, W.; Zhang, J.; Xu, Z.; Du, J. A Review of Intelligentization System and Architecture for Ultra-Precision Machining Process. Processes 2024, 12, 2754. [Google Scholar] [CrossRef]
  37. Li, J.P.; Polovina, N.; Konur, S. A Review of AI-Driven Engineering Modelling and Optimization: Methodologies, Applications and Future Directions. Algorithms 2026, 19, 93. [Google Scholar] [CrossRef]
  38. Peivaste, I.; Belouettar, S.; Mercuri, F.; Fantuzzi, N.; Dehghani, H.; Izadi, R.; Ibrahim, H.; Lengiewicz, J.; Belouettar-Mathis, M.; Bendine, K.; et al. Artificial intelligence in materials science and engineering: Current landscape, key challenges, and future trajectories. Compos. Struct. 2025, 372, 119419. [Google Scholar] [CrossRef]
  39. Jan, Z.; Ahamed, F.; Mayer, W.; Patel, N.; Grossmann, G.; Stumptner, M.; Kuusk, A. Artificial intelligence for industry 4.0: Systematic review of applications, challenges, and opportunities. Expert Syst. Appl. 2023, 216, 119456. [Google Scholar] [CrossRef]
  40. Wamba-Taguimdje, S.L.; Fosso Wamba, S.; Kala Kamdjoug, J.R.; Tchatchouang Wanko, C.E. Influence of artificial intelligence (AI) on firm performance: The business value of AI-based transformation projects. Bus. Process Manag. J. 2020, 26, 1893–1924. [Google Scholar] [CrossRef]
  41. Alzubi, J.; Nayyar, A.; Kumar, A. Machine learning from theory to algorithms: An overview. J. Phys. Conf. Ser. 2018, 1142, 012012. [Google Scholar] [CrossRef]
  42. Mukhamediev, R.I.; Symagulov, A.; Kuchin, Y.; Yakunin, K.; Yelis, M. From classical machine learning to deep neural networks: A simplified scientometric review. Appl. Sci. 2021, 11, 5541. [Google Scholar] [CrossRef]
  43. Nassehi, A.; Zhong, R.Y.; Li, X.; Epureanu, B.I. Review of machine learning technologies and artificial intelligence in modern manufacturing systems. In Design and Operation of Production Networks for Mass Personalization in the Era of Cloud Technology; Elsevier: Amsterdam, The Netherlands, 2022; pp. 317–348. [Google Scholar]
  44. Tercan, H.; Meisen, T. Machine learning and deep learning based predictive quality in manufacturing: A systematic review. J. Intell. Manuf. 2022, 33, 1879–1905. [Google Scholar] [CrossRef]
  45. McCulloch, W.S.; Pitts, W. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  46. Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 1958, 65, 386. [Google Scholar] [CrossRef]
  47. Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  48. Toosi, A.; Bottino, A.G.; Saboury, B.; Siegel, E.; Rahmim, A. A brief history of AI: How to prevent another winter (a critical review). PET Clin. 2021, 16, 449–469. [Google Scholar] [CrossRef]
  49. Grigoras, C.C.; Zichil, V.; Ciubotariu, V.A.; Cosa, S.M. Machine learning, mechatronics, and stretch forming: A history of innovation in manufacturing engineering. Machines 2024, 12, 180. [Google Scholar] [CrossRef]
  50. Patil, D.; Rane, N.L.; Desai, P.; Rane, J. Machine learning and deep learning: Methods, techniques, applications, challenges, and future research opportunities. In Trustworthy Artificial Intelligence in Industry and Society; Deep Science Publishing: San Francisco, CA, USA, 2024; pp. 28–81. [Google Scholar]
  51. Taye, M.M. Understanding of machine learning with deep learning: Architectures, workflow, applications and future directions. Computers 2023, 12, 91. [Google Scholar] [CrossRef]
  52. Sarker, I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2021, 2, 160. [Google Scholar] [CrossRef]
  53. Swapna, M.; Sharma, Y.K.; Prasadh, B.M.G. CNN Architectures: Alex Net, Le Net, VGG, Google Net, Res Net. Int. J. Recent Technol. Eng. 2020, 8, 953–960. [Google Scholar] [CrossRef]
  54. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 2002, 86, 2278–2324. [Google Scholar] [CrossRef]
  55. Hinton, G.E. Deep belief networks. Scholarpedia 2009, 4, 5947. [Google Scholar] [CrossRef]
  56. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [PubMed]
  57. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS 2012); Curran Associates, Inc.: Red Hook, NY, USA, 2012. [Google Scholar]
  58. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  59. Ahmad, J.; Farman, H.; Jan, Z. Deep learning methods and applications. In Deep Learning: Convergence to Big Data Analytics; Springer: Singapore, 2018; pp. 31–42. [Google Scholar]
  60. Fayaz, S.A.; Jahangeer Sidiq, S.; Zaman, M.; Butt, M.A. Machine learning: An introduction to reinforcement learning. In Machine Learning and Data Science: Fundamentals and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2022; pp. 1–22. [Google Scholar]
  61. Lee, J.H.; Shin, J.; Realff, M.J. Machine learning: Overview of the recent progresses and implications for the process systems engineering field. Comput. Chem. Eng. 2018, 114, 111–121. [Google Scholar] [CrossRef]
  62. Busoniu, L.; Babuska, R.; De Schutter, B.; Ernst, D. Reinforcement Learning and Dynamic Programming Using Function Approximators; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar] [CrossRef]
  63. Kurrek, P.; Jocas, M.; Zoghlami, F.; Stoelen, M.; Salehi, V. Ai motion control—A generic approach to develop control policies for robotic manipulation tasks. In Proceedings of the Design Society: International Conference on Engineering Design; Cambridge University Press: Cambridge, UK, 2019; Volume 1, pp. 3561–3570. [Google Scholar]
  64. Kim, J.B.; Lim, H.K.; Kim, C.M.; Kim, M.S.; Hong, Y.G.; Han, Y.H. Imitation reinforcement learning-based remote rotary inverted pendulum control in openflow network. IEEE Access 2019, 7, 36682–36690. [Google Scholar] [CrossRef]
  65. Menghal, P.M.; Laxmi, A.J. Real time simulation: A novel approach in engineering education. In 2011 3rd International Conference on Electronics Computer Technology; IEEE: New York, NY, USA, 2011; Volume 1, pp. 215–219. [Google Scholar]
  66. Werbos, P.J. Reinforcement learning and approximate dynamic programming (RLADP) foundations, common misconceptions, and the challenges ahead. In Reinforcement Learning and Approximate Dynamic Programming for Feedback Control; John Wiley & Son: Hoboken, NJ, USA, 2012; pp. 1–30. [Google Scholar]
  67. Lewis, F.L.; Vrabie, D. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag. 2009, 9, 32–50. [Google Scholar] [CrossRef]
  68. Razzaq, K.; Shah, M. Machine learning and deep learning paradigms: From techniques to practical applications and research frontiers. Computers 2025, 14, 93. [Google Scholar] [CrossRef]
  69. Mumuni, Q.A.; Olayiwola-Mumuni, A.I.; Yussouff, A.A. The advent of the proportional integral derivative controller: A review. J. Adv. Eng. Technol. 2023, 2, 5–22. [Google Scholar]
  70. Noshadi, A.; Shi, J.; Lee, W.S.; Shi, P.; Kalam, A. Optimal PID-type fuzzy logic controller for a multi-input multi-output active magnetic bearing system. Neural Comput. Appl. 2016, 27, 2031–2046. [Google Scholar] [CrossRef]
  71. Kiss, A.N.; Marx, B.; Mourot, G.; Schutz, G.; Ragot, J. State estimation of two-time scale multiple models. Application to wastewater treatment plant. Control Eng. Pract. 2011, 19, 1354–1362. [Google Scholar] [CrossRef]
  72. Yang, Y.; Gao, Y.; Wu, J.; Ding, Z.; Zhao, S. Improving PID controller performance in nonlinear oscillatory automatic generation control systems using a multi-objective marine predator algorithm with enhanced diversity. J. Bionic Eng. 2024, 21, 2497–2514. [Google Scholar] [CrossRef]
  73. Charkoutsis, S.; Kara-Mohamed, M. A Particle Swarm Optimization tuned nonlinear PID controller with improved performance and robustness for First Order Plus Time Delay systems. Results Control Optim. 2023, 12, 100289. [Google Scholar] [CrossRef]
  74. Mumuni, Q.A.; Olaniyan, O.M.; Ipinnimo, O.; Akinyemi, L.A.; Olayiwola-Mumuni, A.I. Intelligent PID Gain Selection via Supervised Machine Learning Approach for Decoupled MIMO Control Systems. J. Future Artif. Intell. Technol. 2025, 2, 163–181. [Google Scholar] [CrossRef]
  75. Joseph, S.B.; Dada, E.G.; Abidemi, A.; Oyewola, D.O.; Khammas, B.M. Metaheuristic algorithms for PID controller parameters tuning: Review, approaches and open problems. Heliyon 2022, 8, e09399. [Google Scholar] [CrossRef]
  76. Fradkov, A.L. Scientific School of Vladimir Yakubovich in the 20th century. IFAC-PapersOnLine 2017, 50, 5231–5237. [Google Scholar] [CrossRef]
  77. Gusev, S.V.; Bondarko, V.A. Notes on Yakubovich’s method of recursive objective inequalities and its application in adaptive control and robotics. IFAC-PapersOnLine 2020, 53, 1379–1384. [Google Scholar] [CrossRef]
  78. Lipkovich, M. Yakubovich’s method of recursive objective inequalities in machine learning. IFAC-PapersOnLine 2022, 55, 138–143. [Google Scholar] [CrossRef]
  79. Fernández Mareco, E.R.; Pinto-Roa, D. Application of Artificial Intelligence in Control Systems: Trends, Challenges, and Opportunities. AI 2025, 6, 326. [Google Scholar] [CrossRef]
  80. Li, Y.; Chen, H.; Tsung, F. A review of AI-assisted motion control. In International Conference on Internet of Things and Machine Learning (IoTML 2023); SPIE: Bellingham, WA, USA, 2023; Volume 12937, pp. 212–222. [Google Scholar]
  81. Trinh, M.; Königs, M.; Gründel, L.; Beier, M.; Petrovic, O.; Brecher, C. Accuracy Optimization of Robotic Machining Using Grey-Box Modeling and Simulation Planning Assistance. J. Manuf. Mater. Process. 2025, 9, 126. [Google Scholar] [CrossRef]
  82. Tian, J.; Li, K.; Xue, W. An adaptive ensemble predictive strategy for multiple scale electrical energy usages forecasting. Sustain. Cities Soc. 2021, 66, 102654. [Google Scholar] [CrossRef]
  83. Sepehr, M.; Eghtedaei, R.; Toolabimoghadam, A.; Noorollahi, Y.; Mohammadi, M. Modeling the electrical energy consumption profile for residential buildings in Iran. Sustain. Cities Soc. 2018, 41, 481–489. [Google Scholar] [CrossRef]
  84. Batlle, E.A.O.; Palacio, J.C.E.; Lora, E.E.S.; Reyes, A.M.M.; Moreno, M.M.; Morejón, M.B. A methodology to estimate baseline energy use and quantify savings in electrical energy consumption in higher education institution buildings: Case study, Federal University of Itajubá (UNIFEI). J. Clean. Prod. 2020, 244, 118551. [Google Scholar] [CrossRef]
  85. He, F.; Zhou, J.; Feng, Z.K.; Liu, G.; Yang, Y. A hybrid short-term load forecasting model based on variational mode decomposition and long short-term memory networks considering relevant factors with Bayesian optimization algorithm. Appl. Energy 2019, 237, 103–116. [Google Scholar] [CrossRef]
  86. Ahmad, T.; Chen, H.; Guo, Y.; Wang, J. A comprehensive overview on the data driven and large scale based approaches for forecasting of building energy demand: A review. Energy Build. 2018, 165, 301–320. [Google Scholar] [CrossRef]
  87. Amasyali, K.; El-Gohary, N.M. A review of data-driven building energy consumption prediction studies. Renew. Sustain. Energy Rev. 2018, 81, 1192–1205. [Google Scholar] [CrossRef]
  88. Wei, Y.; Zhang, X.; Shi, Y.; Xia, L.; Pan, S.; Wu, J.; Han, M.; Zhao, X. A review of data-driven approaches for prediction and classification of building energy consumption. Renew. Sustain. Energy Rev. 2018, 82, 1027–1047. [Google Scholar] [CrossRef]
  89. Wang, Y.; Yang, D.; Zhang, X.; Chen, Z. Probability based remaining capacity estimation using data-driven and neural network model. J. Power Sources 2016, 315, 199–208. [Google Scholar] [CrossRef]
  90. Koschwitz, D.; Frisch, J.; Van Treeck, C. Data-driven heating and cooling load predictions for non-residential buildings based on support vector machine regression and NARX Recurrent Neural Network: A comparative study on district scale. Energy 2018, 165, 134–142. [Google Scholar] [CrossRef]
  91. Smarra, F.; Jain, A.; De Rubeis, T.; Ambrosini, D.; D’Innocenzo, A.; Mangharam, R. Data-driven model predictive control using random forests for building energy optimization and climate control. Appl. Energy 2018, 226, 1252–1272. [Google Scholar] [CrossRef]
  92. Ahmed, S.; Hasan, M.Z.; MacLennan, M.; Dorin, F.; Ahmed, M.W.; Hasan, M.M.; Hasan, S.M.; Islam, M.T.; Khan, J.A.M. Measuring the efficiency of health systems in Asia: A data envelopment analysis. BMJ Open 2019, 9, e022155. [Google Scholar] [CrossRef]
  93. Zhu, N.; Zhu, C.; Emrouznejad, A. A combined machine learning algorithms and DEA method for measuring and predicting the efficiency of Chinese manufacturing listed companies. J. Manag. Sci. Eng. 2020, 6, 435–448. [Google Scholar] [CrossRef]
  94. Zhong, K.; Wang, Y.; Pei, J.; Tang, S.; Han, Z. Super-efficiency SBM-DEA and neural network for performance evaluation. Inf. Process. Manag. 2021, 58, 102728. [Google Scholar] [CrossRef]
  95. Bataineh, A.A.; Kaur, D.; Jalali, S.M.J. Multi-layer perceptron training optimization using nature-inspired computing. IEEE Access 2022, 10, 36963–36977. [Google Scholar] [CrossRef]
  96. Salehi, V.; Veitch, B.; Musharraf, M. Measuring and improving adaptive capacity in resilient systems by means of an integrated DEA–machine learning approach. Appl. Ergon. 2020, 82, 102975. [Google Scholar] [CrossRef]
  97. Mirmozaffari, M.; Yazdani, M.; Boskabadi, A.; Ahady Dolatsara, H.; Kabirifar, K.; Amiri Golilarz, N. A novel machine learning approach combined with optimization models for eco-efficiency evaluation. Appl. Sci. 2020, 10, 5210. [Google Scholar] [CrossRef]
  98. Tayal, A.; Solanki, A.; Singh, S.P. Integrated framework for identifying sustainable manufacturing layouts based on big data, machine learning, meta-heuristic and data envelopment analysis. Sustain. Cities Soc. 2020, 62, 102383. [Google Scholar] [CrossRef]
  99. Li, Z.; Crook, J.; Andreeva, G. Dynamic prediction of financial distress using Malmquist DEA. Expert Syst. Appl. 2017, 80, 94–106. [Google Scholar] [CrossRef]
  100. Hafeez, G.; Alimgeer, K.S.; Khan, I. Electric load forecasting based on deep learning and optimized by heuristic algorithm in smart grid. Appl. Energy 2020, 269, 114915. [Google Scholar] [CrossRef]
  101. Wen, L.; Zhou, K.; Yang, S.; Li, X. Optimal load dispatch of community microgrid with deep learning-based solar power and load forecasting. Energy 2019, 171, 1053–1065. [Google Scholar] [CrossRef]
  102. Merei, G.; Berger, C.; Sauer, D.U. Optimization of an off-grid hybrid PV–wind–diesel system with different battery technologies using genetic algorithm. Sol. Energy 2013, 97, 460–473. [Google Scholar] [CrossRef]
  103. Król, J.; Ocłoń, P. Economic analysis of heat and electricity production in combined heat and power plant equipped with steam and water boilers and natural gas engines. Energy Convers. Manag. 2018, 176, 11–29. [Google Scholar] [CrossRef]
  104. Xu, X.; Wang, Y.; Zuo, L.; Chen, S. Multimaterial topology optimization of thermoelectric generators. In ASME 2019 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference; American Society of Mechanical Engineers: New York, NY, USA, 2019; Volume 59186, p. V02AT03A064. [Google Scholar] [CrossRef]
  105. Han, Y.; Liu, S.; Geng, Z.; Guo, H.; Qi, Y. Energy analysis and resources optimization of complex chemical processes: Evidence based on novel DEA cross-model. Energy 2021, 218, 119508. [Google Scholar] [CrossRef]
  106. Goyal, S.B.; Rajawat, A.S.; Solanki, R.K.; Zaaba, M.A.M.; Long, Z.A. Integrating AI with cyber security for smart industry 4.0 application. In 2023 International Conference on Inventive Computation Technologies (ICICT); IEEE: New York, NY, USA, 2023; pp. 1223–1232. [Google Scholar]
  107. Bargavi, S.M.; Muhammed, H.; Harish, P.S.; Dhanush, D. Edge Computing and AI for Real-time Analytics in Smart Devices. Asian J. Basic Sci. Res. 2025, 7, 1–9. [Google Scholar] [CrossRef]
  108. Khalifa, I.A.; Keti, F. The Role of Image Processing and Deep Learning in IoT-Based Systems: A Comprehensive Review. Eur. J. Appl. Sci. Eng. Technol. 2025, 3, 165–179. [Google Scholar] [CrossRef]
  109. Huang, X.; Wang, H.; Qin, S.; Tang, S.-K. Embedded Artificial Intelligence: A Comprehensive Literature Review. Electronics 2025, 14, 3468. [Google Scholar] [CrossRef]
  110. Hu, J.; Wang, Y.; Cheng, S.; Xu, J.; Wang, N.; Fu, B.; Ning, Z.; Li, J.; Chen, H.; Feng, C.; et al. A survey of decision-making and planning methods for self-driving vehicles. Front. Neurorobot. 2025, 19, 1451923. [Google Scholar] [CrossRef]
  111. Zhao, C.; Li, L.; Pei, X.; Li, Z.; Wang, F.-Y.; Wu, X. A comparative study of state-of-the-art driving strategies for autonomous vehicles. Accid. Anal. Prev. 2021, 150, 105937. [Google Scholar] [CrossRef]
  112. Veres, S.M.; Molnar, L.; Lincoln, N.K.; Morice, C.P. Autonomous vehicle control systems: A review of decision making. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2011, 225, 155–195. [Google Scholar] [CrossRef]
  113. Ashraf, M. Enhancing Perception and Decision-Making in Autonomous Systems through Vision-based Technologies: Focus on Robotics, Drones, and Self-Driving Cars. Int. J. 2024, 13, 794–797. [Google Scholar] [CrossRef]
  114. Peralta, F.; Arzamendia, M.; Gregor, D.; Reina, D.G.; Toral, S. A comparison of local path planning techniques of autonomous surface vehicles for monitoring applications: The Ypacarai Lake case study. Sensors 2020, 20, 1488. [Google Scholar] [CrossRef]
  115. Meng, Q.; Song, H.; Li, G.; Zhang, Y.A.; Zhang, X. A block object detection method based on feature fusion networks for autonomous vehicles. Complexity 2019, 2019, 4042624. [Google Scholar] [CrossRef]
  116. Ni, J.; Wu, L.; Shi, P.; Yang, S.X. A dynamic bioinspired neural network based real-time path planning method for autonomous underwater vehicles. Comput. Intell. Neurosci. 2017, 2017, 9269742. [Google Scholar] [CrossRef] [PubMed]
  117. Venu, S.; Gurusamy, M. A Comprehensive Review of Path Planning Algorithms for Autonomous Navigation. Results Eng. 2025, 28, 107750. [Google Scholar] [CrossRef]
  118. Yuan, C.; Wei, Y.; Shen, J.; Chen, L.; He, Y.; Weng, S.; Wang, T. Research on path planning based on new fusion algorithm for autonomous vehicle. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420911235. [Google Scholar] [CrossRef]
  119. Zhang, J.; Shi, Z.; Yang, X.; Zhao, J. Trajectory planning and tracking control for autonomous parallel parking of a non-holonomic vehicle. Meas. Control 2020, 53, 1800–1816. [Google Scholar] [CrossRef]
  120. Min, H.; Xiong, X.; Wang, P.; Yu, Y. Autonomous driving path planning algorithm based on improved A* algorithm in unstructured environment. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2021, 235, 513–526. [Google Scholar] [CrossRef]
  121. Mohammadzadeh, A.; Taghavifar, H. A novel adaptive control approach for path tracking control of autonomous vehicles subject to uncertain dynamics. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2020, 234, 2115–2126. [Google Scholar] [CrossRef]
  122. Huang, C.; Chen, Z.; Liu, Y.; Xu, B.; Ling, Z.; Li, Z.; Yu, W.; Shen, Y.; Wang, H.; Li, J.; et al. Autonomous Driving Comfort Prediction: Improve Comfort through Global Path Planning. Res. Sq. 2024. [Google Scholar] [CrossRef]
  123. Yan, F.; Liu, Y.S.; Xiao, J.Z. Path planning in complex 3D environments using a probabilistic roadmap method. Int. J. Autom. Comput. 2013, 10, 525–533. [Google Scholar] [CrossRef]
  124. Aghababa, M.P. 3D path planning for underwater vehicles using five evolutionary optimization algorithms avoiding static and energetic obstacles. Appl. Ocean Res. 2012, 38, 48–62. [Google Scholar] [CrossRef]
  125. Choset, H.; Lynch, K.M.; Hutchinson, S.; Kantor, G.A.; Burgard, W. Principles of Robot Motion: Theory, Algorithms, and Implementations; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  126. Ibraheem, I.K.; Ajeil, F.H. Multi-objective path planning of an autonomous mobile robot in static and dynamic environments using a hybrid PSO-MFB optimisation algorithm. arXiv 2018, arXiv:1805.00224. [Google Scholar]
  127. AL-Nayar, M.M.; Dagher, K.E.; Hadi, E.A. A comparative study for wheeled mobile robot path planning based on modified intelligent algorithms. Iraqi J. Mech. Mater. Eng. 2019, 19, 60–74. [Google Scholar] [CrossRef]
  128. Hassani, I.; Maalej, I.; Rekik, C. Robot path planning with avoiding obstacles in known environment using free segments and turning points algorithm. Math. Probl. Eng. 2018, 2018, 2163278. [Google Scholar] [CrossRef]
  129. Gul, F.; Rahiman, W.; Nazli Alhady, S.S. A comprehensive study for robot navigation techniques. Cogent Eng. 2019, 6, 1632046. [Google Scholar] [CrossRef]
  130. Yang, S.; Behzadian, K.; Coleman, C.; Holloway, T.G.; Campos, L.C. Application of AI-based techniques for anomaly management in wastewater treatment plants: A review. J. Environ. Manag. 2025, 392, 126886. [Google Scholar] [CrossRef]
  131. Bayoudh, K.; Knani, R.; Hamdaoui, F.; Mtibaa, A. A survey on deep multimodal learning for computer vision: Advances, trends, applications, and datasets. Vis. Comput. 2022, 38, 2939–2970. [Google Scholar] [CrossRef]
  132. Robinson, N.; Tidd, B.; Campbell, D.; Kulić, D.; Corke, P. Robotic vision for human-robot interaction and collaboration: A survey and systematic review. ACM Trans. Hum.-Robot Interact. 2023, 12, 1–66. [Google Scholar] [CrossRef]
  133. Anthony, E.J.; Kusnadi, R.A. Computer vision for supporting visually impaired people: A systematic review. Eng. Math. Comput. Sci. J. (EMACS) 2021, 3, 65–71. [Google Scholar] [CrossRef]
  134. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep learning for computer vision: A brief review. Comput. Intell. Neurosci. 2018, 2018, 7068349. [Google Scholar] [CrossRef] [PubMed]
  135. Manakitsa, N.; Maraslidis, G.S.; Moysis, L.; Fragulis, G.F. A review of machine learning and deep learning for object detection, semantic segmentation, and human action recognition in machine and robotic vision. Technologies 2024, 12, 15. [Google Scholar] [CrossRef]
  136. Zhang, L.; Jia, X.; Chang, Q.; Liu, X.; Zhang, Z.; Cao, Y.; Liu, J.; Yang, Y. The development of machine vision and its applications in different industries: A review. Mech. Eng. Adv. 2024, 2, 1746. [Google Scholar] [CrossRef]
  137. Wu, M.; Li, C.; Yao, Z. Deep active learning for computer vision tasks: Methodologies, applications, and challenges. Appl. Sci. 2022, 12, 8103. [Google Scholar] [CrossRef]
  138. Katti, H.; Peelen, M.V.; Arun, S.P. Machine vision benefits from human contextual expectations. Sci. Rep. 2019, 9, 2112. [Google Scholar] [CrossRef]
  139. Gorodokin, V.; Zhankaziev, S.; Shepeleva, E.; Magdin, K.; Evtyukov, S. Optimization of adaptive traffic light control modes based on machine vision. Transp. Res. Procedia 2021, 57, 241–249. [Google Scholar] [CrossRef]
  140. Panagakis, Y.; Kossaifi, J.; Chrysos, G.G.; Oldfield, J.; Nicolaou, M.A.; Anandkumar, A.; Zafeiriou, S. Tensor methods in computer vision and deep learning. Proc. IEEE 2021, 109, 863–890. [Google Scholar] [CrossRef]
  141. Zhang, D.; Dongru, H.; Kang, L.; Zhang, W. The generative adversarial networks and its application in machine vision. Enterp. Inf. Syst. 2022, 16, 326–346. [Google Scholar] [CrossRef]
  142. Gustafsson, F.K.; Danelljan, M.; Schon, T.B. Evaluating scalable bayesian deep learning methods for robust computer vision. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); IEEE: New York, NY, USA, 2020; pp. 318–319. [Google Scholar]
  143. Whatmough, P.N.; Zhou, C.; Hansen, P.; Venkataramanaiah, S.K.; Seo, J.S.; Mattina, M. Fixynn: Efficient hardware for mobile computer vision via transfer learning. arXiv 2019, arXiv:1902.11128. [Google Scholar] [CrossRef]
  144. Shu, Y.; Xiong, C.; Fan, S. Interactive design of intelligent machine vision based on human–computer interaction mode. Microprocess. Microsyst. 2020, 75, 103059. [Google Scholar] [CrossRef]
  145. Wu, W.; Li, Q. Machine vision inspection of electrical connectors based on improved Yolo v3. IEEE Access 2020, 8, 166184–166196. [Google Scholar] [CrossRef]
  146. Wu, P.; He, T.; Zhu, H.; Wang, Y.; Li, Q.; Wang, Z.; Fu, X.; Wang, F.; Wang, P.; Shan, C.; et al. Next-generation machine vision systems incorporating two-dimensional materials: Progress and perspectives. InfoMat 2022, 4, e12275. [Google Scholar] [CrossRef]
  147. Reggiannini, M.; Moroni, D. The use of saliency in underwater computer vision: A review. Remote Sens. 2020, 13, 22. [Google Scholar] [CrossRef]
  148. Akhtar, N.; Mian, A.; Kardan, N.; Shah, M. Advances in adversarial attacks and defenses in computer vision: A survey. IEEE Access 2021, 9, 155161–155196. [Google Scholar] [CrossRef]
  149. Goel, A.; Tung, C.; Lu, Y.H.; Thiruvathukal, G.K. A survey of methods for low-power deep learning and computer vision. In 2020 IEEE 6th World Forum on Internet of Things (WF-IoT); IEEE: New York, NY, USA, 2020; pp. 1–6. [Google Scholar]
  150. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision; Science and Information Conference; Springer International Publishing: Cham, Switzerland, 2019; pp. 128–144. [Google Scholar]
  151. Yang, Z.; Nahrstedt, K.; Guo, H.; Zhou, Q. Deeprt: A soft real time scheduler for computer vision applications on the edge. In 2021 IEEE/ACM Symposium on Edge Computing (SEC); IEEE: New York, NY, USA, 2021; pp. 271–284. [Google Scholar]
  152. Baygin, M.; Karakose, M.; Sarimaden, A.; Akin, E. An image processing based object counting approach for machine vision application. arXiv 2018, arXiv:1802.05911. [Google Scholar] [CrossRef]
  153. Talebi, H.; Milanfar, P. Learning to resize images for computer vision tasks. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV); IEEE: New York, NY, USA, 2021; pp. 497–506. [Google Scholar]
  154. El-Komy, A.; Shahin, O.R.; Abd El-Aziz, R.M.; Taloba, A.I. Integration of computer vision and natural language processing in multimedia robotics application. Inf. Sci. 2022, 7, 1–12. [Google Scholar]
  155. Li, J.; Mengu, D.; Yardimci, N.T.; Luo, Y.; Li, X.; Veli, M.; Rivenson, Y.; Jarrahi, M.; Ozcan, A. Spectrally encoded single-pixel machine vision using diffractive networks. Sci. Adv. 2021, 7, eabd7690. [Google Scholar] [CrossRef]
  156. Hu, Y.; Yang, S.; Yang, W.; Duan, L.Y.; Liu, J. Towards coding for human and machine vision: A scalable image coding approach. arXiv 2020, arXiv:2001.02915. [Google Scholar] [CrossRef]
  157. Roggi, G.; Niccolai, A.; Grimaccia, F.; Lovera, M. A computer vision line-tracking algorithm for automatic UAV photovoltaic plants monitoring applications. Energies 2020, 13, 838. [Google Scholar] [CrossRef]
  158. Paul, N.; Chung, C. Application of HDR algorithms to solve direct sunlight problems when autonomous vehicles using machine vision systems are driving into sun. Comput. Ind. 2018, 98, 192–196. [Google Scholar] [CrossRef]
  159. Rebecq, H.; Ranftl, R.; Koltun, V.; Scaramuzza, D. Events-to-video: Bringing modern computer vision to event cameras. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); IEEE: New York, NY, USA, 2019; pp. 3857–3866. [Google Scholar]
  160. Mennel, L.; Symonowicz, J.; Wachter, S.; Polyushkin, D.K.; Molina-Mendoza, A.J.; Mueller, T. Ultrafast machine vision with 2D material neural network image sensors. Nature 2020, 579, 62–66. [Google Scholar] [CrossRef]
  161. Gamanayake, C.; Jayasinghe, L.; Ng, B.K.K.; Yuen, C. Cluster pruning: An efficient filter pruning method for edge ai vision applications. IEEE J. Sel. Top. Signal Process. 2020, 14, 802–816. [Google Scholar] [CrossRef]
  162. Ding, J.; Zhang, Z.; Yu, X.; Zhao, X.; Yan, Z. A novel moving object detection algorithm based on robust image feature threshold segmentation with improved optical flow estimation. Appl. Sci. 2023, 13, 4854. [Google Scholar] [CrossRef]
  163. Fang, S.; Zhang, B.; Hu, J. Improved Mask R-CNN multi-target detection and segmentation for autonomous driving in complex scenes. Sensors 2023, 23, 3853. [Google Scholar] [CrossRef] [PubMed]
  164. Moru, D.K.; Borro, D. A machine vision algorithm for quality control inspection of gears. Int. J. Adv. Manuf. Technol. 2020, 106, 105–123. [Google Scholar] [CrossRef]
  165. Li, J.; Telychko, M.; Yin, J.; Zhu, Y.; Li, G.; Song, S.; Yang, H.; Li, J.; Wu, J.; Lu, J.; et al. Machine vision automated chiral molecule detection and classification in molecular imaging. J. Am. Chem. Soc. 2021, 143, 10177–10188. [Google Scholar] [CrossRef]
  166. Yin, K.; Wang, L.; Zhang, J. ST-CSNN: A novel method for vehicle counting. Mach. Vis. Appl. 2021, 32, 108. [Google Scholar] [CrossRef]
  167. Dan, D.; Ge, L.; Yan, X. Identification of moving loads based on the information fusion of weigh-in-motion system and multiple camera machine vision. Measurement 2019, 144, 155–166. [Google Scholar] [CrossRef]
  168. Nardo, M.; Forino, D.; Murino, T. The evolution of man–machine interaction: The role of human in Industry 4.0 paradigm. Prod. Manuf. Res. 2020, 8, 20–34. [Google Scholar] [CrossRef]
  169. Solanes, J.E.; Gracia, L.; Valls Miro, J. Advances in Human–Machine Interaction, Artificial Intelligence, and Robotics. Electronics 2024, 13, 3856. [Google Scholar] [CrossRef]
  170. Sandini, G.; Sciutti, A.; Morasso, P. Artificial cognition vs. artificial intelligence for next-generation autonomous robotic agents. Front. Comput. Neurosci. 2024, 18, 1349408. [Google Scholar] [CrossRef]
  171. Ferreira, J.F.; Portugal, D.; Andrada, M.E.; Machado, P.; Rocha, R.P.; Peixoto, P. Sensing and artificial perception for robots in precision forestry: A survey. Robotics 2023, 12, 139. [Google Scholar] [CrossRef]
  172. Kumar, A. From mass customization to mass personalization: A strategic transformation. Int. J. Flex. Manuf. Syst. 2007, 19, 533–547. [Google Scholar] [CrossRef]
  173. Lu, Y.; Zheng, H.; Chand, S.; Xia, W.; Liu, Z.; Xu, X.; Wang, L.; Qin, Z.; Bao, J. Outlook on human-centric manufacturing towards Industry 5.0. J. Manuf. Syst. 2022, 62, 612–627. [Google Scholar] [CrossRef]
  174. Stączek, P.; Pizoń, J.; Danilczuk, W.; Gola, A. A digital twin approach for the improvement of an autonomous mobile robots (AMR’s) operating environment—A case study. Sensors 2021, 21, 7830. [Google Scholar] [CrossRef]
  175. Habib, L.; Pacaux-Lemoine, M.P.; Millot, P. A method for designing levels of automation based on a human-machine cooperation model. IFAC-PapersOnLine 2017, 50, 1372–1377. [Google Scholar] [CrossRef]
  176. Zieba, S.; Polet, P.; Vanderhaegen, F. Using adjustable autonomy and human–machine cooperation to make a human–machine system resilient—Application to a ground robotic system. Inf. Sci. 2011, 181, 379–397. [Google Scholar] [CrossRef]
  177. Semeraro, F.; Griffiths, A.; Cangelosi, A. Human–robot collaboration and machine learning: A systematic review of recent research. Robot. Comput.-Integr. Manuf. 2023, 79, 102432. [Google Scholar] [CrossRef]
  178. Wang, T.; Li, J.; Kong, Z.; Liu, X.; Snoussi, H.; Lv, H. Digital twin improved via visual question answering for vision-language interactive mode in human–machine collaboration. J. Manuf. Syst. 2021, 58, 261–269. [Google Scholar] [CrossRef]
  179. Simmler, M.; Frischknecht, R. A taxonomy of human–machine collaboration: Capturing automation and technical autonomy. AI Soc. 2021, 36, 239–250. [Google Scholar] [CrossRef]
  180. Xiong, W.; Fan, H.; Ma, L.; Wang, C. Challenges of human-machine collaboration in risky decision-making. Front. Eng. Manag. 2022, 9, 89–103. [Google Scholar] [CrossRef]
  181. Trujillo, A.C.; Gregory, I.M.; Ackerman, K.A. Evolving relationship between humans and machines. IFAC-PapersOnLine 2019, 51, 366–371. [Google Scholar] [CrossRef]
  182. Simões, A.C.; Pinto, A.; Santos, J.; Pinheiro, S.; Romero, D. Designing human-robot collaboration (HRC) workspaces in industrial settings: A systematic literature review. J. Manuf. Syst. 2022, 62, 28–43. [Google Scholar] [CrossRef]
  183. Pizoń, J.; Gola, A.; Świć, A. The role and meaning of the digital twin technology in the process of implementing intelligent collaborative robots. In Advances in Manufacturing III; International Scientific-Technical Conference Manufacturing; Springer International Publishing: Cham, Switzerland, 2022; pp. 39–49. [Google Scholar]
  184. Santosuosso, A. About coevolution of humans and intelligent machines: Preliminary notes. BioLaw J.—Riv. BioDiritto 2021, 445–454. [Google Scholar] [CrossRef]
  185. Frazzon, E.M.; Agostino, Í.R.S.; Broda, E.; Freitag, M. Manufacturing networks in the era of digital production and operations: A socio-cyber-physical perspective. Annu. Rev. Control 2020, 49, 288–294. [Google Scholar] [CrossRef]
  186. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and human-robot co-working. Procedia Comput. Sci. 2019, 158, 688–695. [Google Scholar] [CrossRef]
  187. Dhanda, M.; Rogers, B.A.; Hall, S.; Dekoninck, E.; Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of industry 5.0. Robot. Comput.-Integr. Manuf. 2025, 93, 102937. [Google Scholar] [CrossRef]
  188. Lasi, H.; Fettke, P.; Kemper, H.G.; Feld, T.; Hoffmann, M. Industry 4.0. Bus. Inf. Syst. Eng. 2014, 6, 239–242. [Google Scholar] [CrossRef]
  189. Lee, J.; Bagheri, B.; Kao, H.A. A cyber-physical systems architecture for industry 4.0-based manufacturing systems. Manuf. Lett. 2015, 3, 18–23. [Google Scholar] [CrossRef]
  190. Zheng, P.; Wang, H.; Sang, Z.; Zhong, R.Y.; Liu, Y.; Liu, C.; Mubarok, K.; Yu, S.; Xu, X. Smart manufacturing systems for Industry 4.0: Conceptual framework, scenarios, and future perspectives. Front. Mech. Eng. 2018, 13, 137–150. [Google Scholar] [CrossRef]
  191. Monostori, L.; Kádár, B.; Bauernhansl, T.; Kondoh, S.; Kumara, S.; Reinhart, G.; Sauer, O.; Schuh, G.; Sihn, W.; Ueda, K. Cyber-physical systems in manufacturing. Cirp Ann. 2016, 65, 621–641. [Google Scholar] [CrossRef]
  192. Zhong, R.Y.; Xu, X.; Klotz, E.; Newman, S.T. Intelligent manufacturing in the context of industry 4.0: A review. Engineering 2017, 3, 616–630. [Google Scholar] [CrossRef]
  193. Lu, Y. Industry 4.0: A survey on technologies, applications and open research issues. J. Ind. Inf. Integr. 2017, 6, 1–10. [Google Scholar] [CrossRef]
  194. Grieves, M.; Vickers, J. Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In Transdisciplinary Perspectives on Complex Systems: New Findings and Approaches; Springer International Publishing: Cham, Switzerland, 2016; pp. 85–113. [Google Scholar]
  195. Tao, F.; Qi, Q.; Liu, A.; Kusiak, A. Data-driven smart manufacturing. J. Manuf. Syst. 2018, 48, 157–169. [Google Scholar] [CrossRef]
  196. Koren, Y.; Gu, X.; Guo, W. Reconfigurable manufacturing systems: Principles, design, and future trends. Front. Mech. Eng. 2018, 13, 121–136. [Google Scholar] [CrossRef]
  197. Lu, B.; Zhou, X. Quality and reliability oriented maintenance for multistage manufacturing systems subject to condition monitoring. J. Manuf. Syst. 2019, 52, 76–85. [Google Scholar] [CrossRef]
  198. Frank, A.G.; Dalenogare, L.S.; Ayala, N.F. Industry 4.0 technologies: Implementation patterns in manufacturing companies. Int. J. Prod. Econ. 2019, 210, 15–26. [Google Scholar] [CrossRef]
  199. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0: Inception, conception and perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
  200. Adeleke, A.K. Intelligent monitoring system for real-time optimization of ultra-precision manufacturing processes. Eng. Sci. Technol. J. 2024, 5, 803–810. [Google Scholar] [CrossRef]
  201. Jiang, Y.; Yin, S.; Dong, J.; Kaynak, O. A review on soft sensors for monitoring, control, and optimization of industrial processes. IEEE Sens. J. 2020, 21, 12868–12881. [Google Scholar] [CrossRef]
  202. Ucar, A.; Karakose, M.; Kırımça, N. Artificial intelligence for predictive maintenance applications: Key components, trustworthiness, and future trends. Appl. Sci. 2024, 14, 898. [Google Scholar] [CrossRef]
  203. Zonta, T.; Da Costa, C.A.; da Rosa Righi, R.; de Lima, M.J.; Da Trindade, E.S.; Li, G.P. Predictive maintenance in the Industry 4.0: A systematic literature review. Comput. Ind. Eng. 2020, 150, 106889. [Google Scholar] [CrossRef]
  204. Camacho, E.F.; Bordons, C. Constrained model predictive control. In Model Predictive Control; Springer: London, UK, 2007; pp. 177–216. [Google Scholar]
  205. Qin, S.J.; Badgwell, T.A. A survey of industrial model predictive control technology. Control Eng. Pract. 2003, 11, 733–764. [Google Scholar] [CrossRef]
  206. Hewing, L.; Wabersich, K.P.; Menner, M.; Zeilinger, M.N. Learning-based model predictive control: Toward safe learning in control. Annu. Rev. Control Robot. Auton. Syst. 2020, 3, 269–296. [Google Scholar] [CrossRef]
  207. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep learning for smart manufacturing: Methods and applications. J. Manuf. Syst. 2018, 48, 144–156. [Google Scholar] [CrossRef]
  208. Parmar, T. Artificial Intelligence in High-tech Manufacturing: A Review of Applications in Quality Control and Process Optimization. Int. J. Innov. Res. Eng. Multidiscip. Phys. Sci. 2022, 10, 1–8. [Google Scholar] [CrossRef]
  209. Cheng, Y. Predictive Analysis on Maintenance of Mining Dump Truck. Appl. Mech. Mater. 2013, 340, 848–851. [Google Scholar] [CrossRef]
  210. Serin, G.; Sener, B.; Ozbayoglu, A.M.; Unver, H.O. Review of tool condition monitoring in machining and opportunities for deep learning. Int. J. Adv. Manuf. Technol. 2020, 109, 953–974. [Google Scholar] [CrossRef]
  211. Jaber, A.A. Design of an Intelligent Embedded System for Condition Monitoring of an Industrial Robot; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  212. Wong, S.Y.; Chuah, J.H.; Yap, H.J. Technical data-driven tool condition monitoring challenges for CNC milling: A review. Int. J. Adv. Manuf. Technol. 2020, 107, 4837–4857. [Google Scholar] [CrossRef]
  213. Lin, R.H.; Xi, X.N.; Wang, P.N.; Wu, B.D.; Tian, S.M. Review on hydrogen fuel cell condition monitoring and prediction methods. Int. J. Hydrogen Energy 2019, 44, 5488–5498. [Google Scholar] [CrossRef]
  214. Surucu, O.; Gadsden, S.A.; Yawney, J. Condition monitoring using machine learning: A review of theory, applications, and recent advances. Expert Syst. Appl. 2023, 221, 119738. [Google Scholar] [CrossRef]
  215. Jardine, A.K.; Lin, D.; Banjevic, D. A review on machinery diagnostics and prognostics implementing condition-based maintenance. Mech. Syst. Signal Process. 2006, 20, 1483–1510. [Google Scholar] [CrossRef]
  216. Lei, Y.; Li, N.; Guo, L.; Li, N.; Yan, T.; Lin, J. Machinery health prognostics: A systematic review from data acquisition to RUL prediction. Mech. Syst. Signal Process. 2018, 104, 799–834. [Google Scholar] [CrossRef]
  217. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237. [Google Scholar] [CrossRef]
  218. Si, X.S.; Wang, W.; Hu, C.H.; Zhou, D.H. Remaining useful life estimation—A review on the statistical data driven approaches. Eur. J. Oper. Res. 2011, 213, 1–14. [Google Scholar] [CrossRef]
  219. Lee, J.; Kao, H.A.; Yang, S. Service innovation and smart analytics for industry 4.0 and big data environment. Procedia CIRP 2014, 16, 3–8. [Google Scholar] [CrossRef]
  220. Carvalho, T.P.; Soares, F.A.; Vita, R.; Francisco, R.D.P.; Basto, J.P.; Alcalá, S.G. A systematic literature review of machine learning methods applied to predictive maintenance. Comput. Ind. Eng. 2019, 137, 106024. [Google Scholar] [CrossRef]
  221. Zhang, W.; Yang, D.; Wang, H. Data-driven methods for predictive maintenance of industrial equipment: A survey. IEEE Syst. J. 2019, 13, 2213–2227. [Google Scholar] [CrossRef]
  222. Milazzo, M.; Libonati, F. The synergistic role of additive manufacturing and artificial intelligence for the design of new advanced intelligent systems. Adv. Intell. Syst. 2022, 4, 2100278. [Google Scholar] [CrossRef]
  223. Huang, Z.; Shen, Y.; Li, J.; Fey, M.; Brecher, C. A survey on AI-driven digital twins in industry 4.0: Smart manufacturing and advanced robotics. Sensors 2021, 21, 6340. [Google Scholar] [CrossRef] [PubMed]
  224. Sundaram, S.; Zeid, A. Artificial intelligence-based smart quality inspection for manufacturing. Micromachines 2023, 14, 570. [Google Scholar] [CrossRef] [PubMed]
  225. Pookkuttath, S.; Rajesh Elara, M.; Sivanantham, V.; Ramalingam, B. AI-enabled predictive maintenance framework for autonomous mobile cleaning robots. Sensors 2022, 22, 13. [Google Scholar] [CrossRef]
  226. Gumbs, A.A.; Frigerio, I.; Spolverato, G.; Croner, R.; Illanes, A.; Chouillard, E.; Elyan, E. Artificial intelligence surgery: How do we get to autonomous actions in surgery. Sensors 2021, 21, 5526. [Google Scholar] [CrossRef] [PubMed]
  227. Murugamani, C.; Sahoo, S.K.; Kshirsagar, P.R.; Prathap, B.R.; Islam, S.; Naveed, Q.N.; Hussain, M.R.; Hung, B.T.; Teressa, D.M. Wireless communication for robotic process automation using machine learning technique. Wirel. Commun. Mob. Comput. 2022, 2022, 4723138. [Google Scholar] [CrossRef]
  228. Dimitropoulos, N.; Togias, T.; Zacharaki, N.; Michalos, G.; Makris, S. Seamless human–robot collaborative assembly using artificial intelligence and wearable devices. Appl. Sci. 2021, 11, 5699. [Google Scholar] [CrossRef]
  229. Hashmi, A.W.; Mali, H.S.; Meena, A.; Saxena, K.K.; Puerta, A.P.V.; Prakash, C.; Buddhi, D.; Davim, J.P.; Abdul-Zahra, D.S. Understanding the mechanism of abrasive-based finishing processes using mathematical modeling and numerical simulation. Metals 2022, 12, 1328. [Google Scholar] [CrossRef]
  230. Simeth, A.; Plapper, P. Artificial intelligence based robotic automation of manual assembly tasks for intelligent manufacturing. In Smart, Sustainable Manufacturing in an Ever-Changing World: Proceedings of International Conference on Competitive Manufacturing (COMA’22); Springer International Publishing: Cham, Switzerland, 2023; pp. 137–148. [Google Scholar]
  231. Abioye, S.O.; Oyedele, L.O.; Akanbi, L.; Ajayi, A.; Delgado, J.M.D.; Bilal, M.; Akinade, O.O.; Ahmed, A. Artificial intelligence in the construction industry: A review of present status, opportunities and future challenges. J. Build. Eng. 2021, 44, 103299. [Google Scholar] [CrossRef]
  232. Benotsmane, R.; Kovács, G.; Dudás, L. Economic, social impacts and operation of smart factories in Industry 4.0 focusing on simulation and artificial intelligence of collaborating robots. Soc. Sci. 2019, 8, 143. [Google Scholar] [CrossRef]
  233. Bałazy, P.; Gut, P.; Knap, P. Positioning algorithm for AGV autonomous driving platform based on artificial neural networks. Robot. Syst. Appl. 2021, 1, 41–45. [Google Scholar] [CrossRef]
  234. Chryssolouris, G.; Alexopoulos, K.; Arkouli, Z. Artificial intelligence in manufacturing equipment, automation, and robots. In A Perspective on Artificial Intelligence in Manufacturing; Springer International Publishing: Cham, Switzerland, 2023; pp. 41–78. [Google Scholar]
  235. Cho, D.Y.; Kang, M.K. Human gaze-aware attentive object detection for ambient intelligence. Eng. Appl. Artif. Intell. 2021, 106, 104471. [Google Scholar] [CrossRef]
  236. Lin, S.; Liu, A.; Wang, J.; Kong, X. An intelligence-based hybrid PSO-SA for mobile robot path planning in warehouse. J. Comput. Sci. 2023, 67, 101938. [Google Scholar] [CrossRef]
  237. Arinez, J.F.; Chang, Q.; Gao, R.X.; Xu, C.; Zhang, J. Artificial intelligence in advanced manufacturing: Current status and future outlook. J. Manuf. Sci. Eng. 2020, 142, 110804. [Google Scholar] [CrossRef]
  238. Khawar, H.; Soomro, T.R.; Kamal, M.A. Machine learning for internet of things-based smart transportation networks. In Machine Learning for Societal Improvement, Modernization, and Progress; IGI Global Scientific Publishing: Hershey, PA, USA, 2022; pp. 112–134. [Google Scholar]
  239. Yuan, T.; da Rocha Neto, W.; Rothenberg, C.E.; Obraczka, K.; Barakat, C.; Turletti, T. Machine learning for next-generation intelligent transportation systems: A survey. Trans. Emerg. Telecommun. Technol. 2022, 33, e4427. [Google Scholar] [CrossRef]
  240. Sharma, A.; Awasthi, Y.; Kumar, S. The role of blockchain, AI and IoT for smart road traffic management system. In 2020 IEEE India Council International Subsections Conference (INDISCON); IEEE: New York, NY, USA, 2020; pp. 289–296. [Google Scholar]
  241. Iyer, L.S. AI enabled applications towards intelligent transportation. Transp. Eng. 2021, 5, 100083. [Google Scholar] [CrossRef]
  242. Theissler, A.; Pérez-Velázquez, J.; Kettelgerdes, M.; Elger, G. Predictive maintenance enabled by machine learning: Use cases and challenges in the automotive industry. Reliab. Eng. Syst. Saf. 2021, 215, 107864. [Google Scholar] [CrossRef]
  243. Zantalis, F.; Koulouras, G.; Karabetsos, S.; Kandris, D. A review of machine learning and IoT in smart transportation. Future Internet 2019, 11, 94. [Google Scholar] [CrossRef]
  244. Olugbade, S.; Ojo, S.; Imoize, A.L.; Isabona, J.; Alaba, M.O. A review of artificial intelligence and machine learning for incident detectors in road transport systems. Math. Comput. Appl. 2022, 27, 77. [Google Scholar] [CrossRef]
  245. Halim, Z.; Kalsoom, R.; Bashir, S.; Abbas, G. Artificial intelligence techniques for driving safety and vehicle crash prediction. Artif. Intell. Rev. 2016, 46, 351–387. [Google Scholar] [CrossRef]
  246. Nikitas, A.; Michalakopoulou, K.; Njoya, E.T.; Karampatzakis, D. Artificial intelligence, transport and the smart city: Definitions and dimensions of a new mobility era. Sustainability 2020, 12, 2789. [Google Scholar] [CrossRef]
  247. Heidari, A.; Jafari Navimipour, N.; Unal, M.; Zhang, G. Machine learning applications in internet-of-drones: Systematic review, recent deployments, and open issues. ACM Comput. Surv. 2023, 55, 1–45. [Google Scholar] [CrossRef]
  248. De Swarte, T.; Boufous, O.; Escalle, P. Artificial intelligence, ethics and human values: The cases of military drones and companion robots. Artif. Life Robot. 2019, 24, 291–296. [Google Scholar] [CrossRef]
  249. Toma, C.; Popa, M.; Iancu, B.; Doinea, M.; Pascu, A.; Ioan-Dutescu, F. Edge machine learning for the automated decision and visual computing of the robots, IoT embedded devices or UAV-drones. Electronics 2022, 11, 3507. [Google Scholar] [CrossRef]
  250. Singh, A.; Raj, K.; Kumar, T.; Verma, S.; Roy, A.M. Deep learning-based cost-effective and responsive robot for autism treatment. Drones 2023, 7, 81. [Google Scholar] [CrossRef]
  251. Yazid, Y.; Ez-Zazi, I.; Guerrero-González, A.; El Oualkadi, A.; Arioua, M. UAV-enabled mobile edge-computing for IoT based on AI: A comprehensive review. Drones 2021, 5, 148. [Google Scholar] [CrossRef]
  252. Wu, X.; Li, W.; Hong, D.; Tao, R.; Du, Q. Deep learning for unmanned aerial vehicle-based object detection and tracking: A survey. IEEE Geosci. Remote Sens. Mag. 2021, 10, 91–124. [Google Scholar] [CrossRef]
  253. Bijjahalli, S.; Sabatini, R.; Gardi, A. Advances in intelligent and autonomous navigation systems for small UAS. Prog. Aerosp. Sci. 2020, 115, 100617. [Google Scholar] [CrossRef]
  254. Saranya, T.; Deisy, C.; Sridevi, S.; Anbananthen, K.S.M. A comparative study of deep learning and Internet of Things for precision agriculture. Eng. Appl. Artif. Intell. 2023, 122, 106034. [Google Scholar] [CrossRef]
  255. Alrayes, F.S.; Alotaibi, S.S.; Alissa, K.A.; Maashi, M.; Alhogail, A.; Alotaibi, N.; Mohsen, H.; Motwakel, A. Artificial intelligence-based secure communication and classification for drone-enabled emergency monitoring systems. Drones 2022, 6, 222. [Google Scholar] [CrossRef]
  256. Sorooshian, S.; Khademi Sharifabad, S.; Parsaee, M.; Afshari, A.R. Toward a modern last-mile delivery: Consequences and obstacles of intelligent technology. Appl. Syst. Innov. 2022, 5, 82. [Google Scholar] [CrossRef]
  257. Potočnik, J.; Foley, S.; Thomas, E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J. Med. Imaging Radiat. Sci. 2023, 54, 376–385. [Google Scholar] [CrossRef]
  258. Gupta, R.; Kumar, N.; Bansal, S.; Singh, S.; Sood, N.; Gupta, S. Artificial intelligence-driven digital cytology-based cervical cancer screening: Is the time ripe to adopt this disruptive technology in resource-constrained settings? A literature review. J. Digit. Imaging 2023, 36, 1643–1652. [Google Scholar] [CrossRef]
  259. Jahn, S.W.; Plass, M.; Moinfar, F. Digital pathology: Advantages, limitations and emerging perspectives. J. Clin. Med. 2020, 9, 3697. [Google Scholar] [CrossRef]
  260. Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. npj Digit. Med. 2021, 4, 65. [Google Scholar] [CrossRef]
  261. Hansun, S.; Argha, A.; Liaw, S.T.; Celler, B.G.; Marks, G.B. Machine and deep learning for tuberculosis detection on chest X-rays: Systematic literature review. J. Med. Internet Res. 2023, 25, e43154. [Google Scholar] [CrossRef]
  262. Nam, J.G.; Park, S.; Hwang, E.J.; Lee, J.H.; Jin, K.-N.; Lim, K.Y.; Vu, T.H.; Sohn, J.H.; Hwang, S.; Goo, J.M.; et al. Development and validation of deep learning–based automatic detection algorithm for malignant pulmonary nodules on chest radiographs. Radiology 2019, 290, 218–228. [Google Scholar] [CrossRef]
  263. Ehteshami Bejnordi, B.; Veta, M.; Johannes van Diest, P.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.; CAMELYON16 consortium; Hermsen, M.; Manson, Q.F.; et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
  264. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  265. Widner, K.; Virmani, S.; Krause, J.; Nayar, J.; Tiwari, R.; Pedersen, E.R.; Jeji, D.; Hammel, N.; Matias, Y.; Corrado, G.S.; et al. Lessons learned from translating AI from development to deployment in healthcare. Nat. Med. 2023, 29, 1304–1306. [Google Scholar] [CrossRef]
  266. Durkee, M.S.; Abraham, R.; Clark, M.R.; Giger, M.L. Artificial intelligence and cellular segmentation in tissue microscopy images. Am. J. Pathol. 2021, 191, 1693–1701. [Google Scholar] [CrossRef]
  267. Hanna, M.G.; Ardon, O.; Reuter, V.E.; Sirintrapun, S.J.; England, C.; Klimstra, D.S.; Hameed, M.R. Integrating digital pathology into clinical practice. Mod. Pathol. 2022, 35, 152–164. [Google Scholar] [CrossRef]
  268. Gómez-De-Mariscal, E.; García-López-De-Haro, C.; Ouyang, W.; Donati, L.; Lundberg, E.; Unser, M.; Muñoz-Barrutia, A.; Sage, D. DeepImageJ: A user-friendly environment to run deep learning models in ImageJ. Nat. Methods 2021, 18, 1192–1195. [Google Scholar] [CrossRef]
  269. Kumar, Y.; Koul, A.; Singla, R.; Ijaz, M.F. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Humaniz. Comput. 2023, 14, 8459–8486. [Google Scholar] [CrossRef]
  270. Serrano, D.R.; Luciano, F.C.; Anaya, B.J.; Ongoren, B.; Kara, A.; Molina, G.; Ramirez, B.I.; Sánchez-Guirales, S.A.; Simon, J.A.; Tomietto, G.; et al. Artificial Intelligence (AI) Applications in Drug Discovery and Drug Delivery: Revolutionizing Personalized Medicine. Pharmaceutics 2024, 16, 1328. [Google Scholar] [CrossRef]
  271. Liu, X.; Faes, L.; Kale, A.U.; Wagner, S.K.; Fu, D.J.; Bruynseels, A.; Mahendiran, T.; Moraes, G.; Shamdas, M.; Kern, C.; et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit. Health 2019, 1, e271–e297. [Google Scholar] [CrossRef]
  272. Wang, L.; Zhang, Y.; Wang, D.; Tong, X.; Liu, T.; Zhang, S.; Huang, J.; Zhang, L.; Chen, L.; Fan, H.; et al. Artificial intelligence for COVID-19: A systematic review. Front. Med. 2021, 8, 704256. [Google Scholar] [CrossRef]
  273. Ahsan, M.M.; Luna, S.A.; Siddique, Z. Machine-learning-based disease diagnosis: A comprehensive review. Healthcare 2022, 10, 541. [Google Scholar] [CrossRef]
  274. Wu, S.; Wang, J.; Guo, Q.; Lan, H.; Zhang, J.; Wang, L.; Janne, E.; Luo, X.; Wang, Q.; Song, Y.; et al. Application of artificial intelligence in clinical diagnosis and treatment: An overview of systematic reviews. Intell. Med. 2022, 2, 88–96. [Google Scholar] [CrossRef]
  275. Kumar, K.; Kumar, P.; Deb, D.; Unguresan, M.L.; Muresan, V. Artificial intelligence and machine learning based intervention in medical infrastructure: A review and future trends. Healthcare 2023, 11, 207. [Google Scholar] [CrossRef]
  276. Blumenthal, D.; Tavenner, M. The “meaningful use” regulation for electronic health records. N. Engl. J. Med. 2010, 363, 501–504. [Google Scholar] [CrossRef]
  277. De Francesco, D.; Reiss, J.D.; Roger, J.; Tang, A.S.; Chang, A.L.; Becker, M.; Aghaeepour, N. Data-driven longitudinal characterization of neonatal health and morbidity. Sci. Transl. Med. 2023, 15, eadc9854. [Google Scholar] [CrossRef]
  278. Tang, H.; Solti, I.; Kirkendall, E.; Zhai, H.; Lingren, T.; Meller, J.; Ni, Y. Leveraging food and drug administration adverse event reports for the automated monitoring of electronic health records in a pediatric hospital. Biomed. Inform. Insights 2017, 9, 1178222617713018. [Google Scholar] [CrossRef]
  279. Jensen, P.B.; Jensen, L.J.; Brunak, S. Mining electronic health records: Towards better research applications and clinical care. Nat. Rev. Genet. 2012, 13, 395–405. [Google Scholar] [CrossRef]
  280. Tatonetti, N.P.; Ye, P.P.; Daneshjou, R.; Altman, R.B. Data-driven prediction of drug effects and interactions. Sci. Transl. Med. 2012, 4, 125ra31. [Google Scholar] [CrossRef]
  281. Xiao, C.; Choi, E.; Sun, J. Opportunities and challenges in developing deep learning models using electronic health records data: A systematic review. J. Am. Med. Inform. Assoc. 2018, 25, 1419–1428. [Google Scholar] [CrossRef] [PubMed]
  282. Mi, X.; Zou, B.; Zou, F.; Hu, J. Permutation-based identification of important biomarkers for complex diseases via machine learning models. Nat. Commun. 2021, 12, 3008. [Google Scholar] [CrossRef] [PubMed]
  283. Lewis, J.E.; Kemp, M.L. Integration of machine learning and genome-scale metabolic modeling identifies multi-omics biomarkers for radiation resistance. Nat. Commun. 2021, 12, 2700. [Google Scholar] [CrossRef] [PubMed]
  284. Manak, M.S.; Varsanik, J.S.; Hogan, B.J.; Whitfield, M.J.; Su, W.R.; Joshi, N.; Steinke, N.; Min, A.; Berger, D.; Saphirstein, R.J.; et al. Live-cell phenotypic-biomarker microfluidic assay for the risk stratification of cancer patients via machine learning. Nat. Biomed. Eng. 2018, 2, 761–772. [Google Scholar] [CrossRef]
  285. Chen, J.; Yuan, Y.; Ziabari, A.K.; Xu, X.; Zhang, H.; Christakopoulos, P.; Advincula, R. AI for Manufacturing and Healthcare: A chemistry and engineering perspective. arXiv 2024, arXiv:2405.01520. [Google Scholar] [CrossRef]
  286. Kusiak, A. Smart manufacturing. Int. J. Prod. Res. 2018, 56, 508–517. [Google Scholar] [CrossRef]
  287. Qi, M. Enhancing Industrial Automation Through AI-driven Sensors: A Comprehensive Study on Efficiency, Safety, and Predictive Maintenance. Appl. Comput. Eng. 2024, 80, 188–195. [Google Scholar] [CrossRef]
  288. Johansen, K.; Rao, S.; Ashourpour, M. The role of automation in complexities of high-mix in low-volume production—A literature review. Procedia CIRP 2021, 104, 1452–1457. [Google Scholar] [CrossRef]
  289. Li, C. Editorial for special issue on ultra-precision machining of difficult-to-machine materials. Micromachines 2025, 16, 1004. [Google Scholar] [CrossRef]
Figure 1. Evolution of Artificial Intelligence.
Figure 2. AI–Materials–Mechanism–Mechatronics Closed-Loop Framework.
Figure 3. Overview of Artificial Intelligence.
Figure 4. Reinforcement Learning Procedure.
Figure 5. The PID controller.
Figure 6. Classification structure of knowledge-driven decision-making planning methods.
Figure 7. Schematic diagram of knowledge-driven decision-making.
Figure 8. Evolution of path planning techniques.
Figure 9. Classification of evolutionary, deterministic, and non-deterministic methods.
Figure 10. The main function of machine vision.
Figure 11. The main development process of machine vision.
Figure 12. Human–machine 5C relationship model.
Figure 13. Illustration of industrial evolution towards I5.0.
Figure 14. Framework for optimizing production flow and scheduling through AI integration in mechatronic manufacturing systems.
Figure 15. Diseases diagnosed by Machine Learning Techniques (MLT).
Figure 16. Multi-Physical Field Coupling in Ultra-Precision Diamond Polishing.
Table 1. Comparison of AI Techniques for Engineering and Mechatronic Applications [49,68].

| Criterion | Machine Learning (ML) | Deep Learning (DL) | Reinforcement Learning (RL) |
|---|---|---|---|
| Learning paradigm | Supervised, unsupervised, semi-supervised, reinforcement learning | Multilayer neural networks with hierarchical feature learning | Trial-and-error learning based on reward maximization |
| Data requirements | Low to moderate; performance depends on data quality and quantity | High; typically requires large annotated datasets | High; requires extensive interaction data |
| Feature engineering | Manual or domain-informed feature extraction | Automatic feature extraction from raw data | State and reward design required |
| Computational cost | Low to moderate | High (training and inference) | High (training and exploration) |
| Interpretability | Relatively high | Low (black-box nature) | Low to moderate |
| Real-time feasibility | High; suitable for real-time systems | Moderate; constrained by computation | Limited; safety and latency concerns |
| Robustness | Moderate; sensitive to data quality | High in perception tasks, data-dependent | Variable; sensitive to environment dynamics |
| Safety suitability | High; widely adopted in safety-critical domains | Moderate; requires careful validation | Low; exploration poses safety risks |
| Typical application domains | Crop yield prediction, fraud detection, smart city management | Medical imaging and cancer screening, vision and pattern recognition | Robotics and autonomous systems |
| Key limitations | Limited scalability; relies on feature design | Data-hungry, computationally expensive, low transparency | Training instability; difficult deployment in real systems |
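To make the reinforcement learning row of Table 1 concrete ("trial-and-error learning based on reward maximization", high interaction-data needs, exploration-related safety risks), the following is a minimal tabular Q-learning sketch on a toy one-dimensional corridor. It is an illustrative example only, not drawn from the reviewed literature; the environment, function name, and all hyperparameters are assumptions chosen for brevity.

```python
import random

def q_learning_1d(n_states=6, episodes=500, alpha=0.5, gamma=0.9,
                  epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at state 0
    and earns a reward of +1 only on reaching state n_states - 1."""
    rng = random.Random(seed)
    # Q[state][action]; actions: 0 = move left, 1 = move right
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action choice: the trial-and-error element
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: move Q(s, a) toward the observed reward
            # plus the discounted best value of the next state
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    # greedy policy extracted from the learned Q-table (terminal state excluded)
    return [0 if Q[s][0] > Q[s][1] else 1 for s in range(n_states - 1)]

if __name__ == "__main__":
    print(q_learning_1d())
```

After training, the greedy policy selects "move right" (action 1) in every non-terminal state, which illustrates the table's point that RL extracts behavior purely from reward feedback, at the cost of many environment interactions that would be unsafe to collect on physical mechatronic hardware without simulation or safeguards.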
