Review

AI-Driven Robotics: Innovations in Design, Perception, and Decision-Making

College of Mechanical Engineering, Guangxi University, Nanning 530004, China
*
Author to whom correspondence should be addressed.
Machines 2025, 13(7), 615; https://doi.org/10.3390/machines13070615
Submission received: 5 June 2025 / Revised: 10 July 2025 / Accepted: 15 July 2025 / Published: 17 July 2025
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

Robots are increasingly being used across industries, healthcare, and service sectors to perform a wide range of tasks. However, as these tasks become more complex and environments more unpredictable, the need for adaptable robots continues to grow—bringing with it greater technological challenges. Artificial intelligence (AI), driven by large datasets and advanced algorithms, plays a pivotal role in addressing these challenges and advancing robotics. AI enhances robot design by making it more intelligent and flexible, significantly improving robot perception to better understand and respond to surrounding environments and empowering more intelligent control and decision-making. In summary, AI contributes to robotics through design optimization, environmental perception, and intelligent decision-making. This article explores the driving role of AI in robotics and presents detailed examples of its integration with fields such as embodied intelligence, humanoid robots, big data, and large AI models, while also discussing future prospects and challenges in this rapidly evolving field.

1. Introduction

With the rapid development of technology, robotics and AI have become focal points of attention in today’s society. Robotics is a rapidly developing field that encompasses multiple disciplines, such as mechanical engineering, electronics, and computer science, and is aimed at developing machines able to mimic [1,2], assist [3], or replace humans in performing specific tasks. However, the technology also faces a series of challenges. First, robot design needs to consider the working requirements of various complex environments, such as narrow spaces or multiple obstacles [4,5,6], as well as the increase in structural complexity required to adapt to different tasks and unknown environments [7]. Second, robots need to process large amounts of complex data to accurately interpret what they perceive [8], including achieving autonomous navigation in complex environments [9,10] and human–computer interaction [11]. Finally, robot intelligent control systems need to meet real-time control requirements [12] and have adaptive capabilities to cope with uncertainty [13].
AI is an important branch of computer science aimed at equipping computer systems with the ability to mimic human intelligent behavior. It covers techniques such as pattern recognition [14,15], machine learning [16,17], data mining [18,19], intelligent algorithms [20,21], etc. These technologies can analyze and process various forms of information, simulate human behavior, acquire new knowledge and skills, and continuously improve their own performance. Through intelligent algorithms, artificial intelligence can mine useful information, solve practical problems, and achieve intelligent functions.
AI has injected new vitality into robotics, and various fields have benefited from this integration. In terms of robot design, artificial intelligence improves design efficiency and reliability through data analysis and simulation optimization, enhancing the energy efficiency and adaptability of robots [22,23]. For example, some industrial robots have significantly reduced energy consumption by optimizing their motion trajectories through AI [24]. In terms of robot perception, AI enables robots to more accurately understand their environment through multimodal data fusion, thereby responding more effectively to complex situations [25,26]. For example, mobile robots can combine visual, auditory, and tactile sensors to better recognize and respond to various complex environments [27]. In terms of intelligent control, advanced algorithms such as adaptive control and computer vision enable robots to perform tasks accurately in complex surgery; by applying big data and deep learning models, AI has improved the accuracy and performance of robotic surgery [28]. For example, autonomous vehicles can achieve precise control in complex road conditions through AI algorithms, improving driving safety and efficiency [29,30].
In the following chapters, we will delve deeper into the specific contributions of AI to robotics. In Section 2, “Artificial intelligence promotes robot design,” we will explore how AI techniques enhance the design process of robots, making them more efficient and adaptable. Section 3, “Application of artificial intelligence in robot perception technology,” will discuss the role of AI in enabling robots to perceive and understand their environment more accurately. Section 4, “The role of artificial intelligence in robot intelligent control,” will examine how AI algorithms improve the control systems of robots, allowing them to perform complex tasks with greater precision and adaptability. Finally, in Section 5, “Summary and outlook,” we will summarize the key findings of the paper and provide insights into the future development of AI in robotics.
In summary, AI is a key factor driving the development of robotics technology, not only expanding the capabilities of robots, but also providing innovative solutions to challenges in the field of robotics. In the future, with the further integration of AI technology and robotics technology, we can expect the emergence of more intelligent, flexible, and efficient robot systems. Intelligent robots will show broader application prospects in various fields, bringing more convenience and possibilities to human society. This review aims to provide a detailed overview of the current state and future trends in the application of AI in robotics, highlighting the significant advancements and potential for further innovation in this rapidly evolving field. By doing so, we aim to provide researchers and practitioners with a comprehensive, integrative perspective on how AI is transforming robotics and where the field should head next.

2. Artificial Intelligence Promotes Robot Design

Design is a crucial part of the robot development process, as it directly affects the functionality, appearance, and performance of the robot system. Integrating AI technology into various aspects of robot design has revolutionized the way robots are conceived and built. Specifically, AI-driven design processes often leverage advanced neural network models, such as Convolutional Neural Networks (CNNs) for image-based design optimization and Recurrent Neural Networks (RNNs) for sequential design tasks. These models have evolved over time to handle more complex tasks, such as predicting the structural integrity of robot components under various conditions. For example, by collecting large-scale tactile data through tactile gloves and analyzing this data using deep convolutional neural network models, researchers can better understand the operating rules of human hands and design more accurate and efficient robot grasping tools and prosthetics [31]. This approach not only enhances the adaptability of robots [32,33] but also promotes their autonomy in complex environments [34].
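As an illustration of how a deep convolutional network might be applied to tactile-array data of the kind described above, the following minimal PyTorch sketch classifies single tactile frames. The 32x32 input size, class count, and layer sizes are illustrative assumptions, not the architecture used in [31].

```python
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Toy CNN mapping a 1x32x32 tactile pressure frame to object-class logits."""
    def __init__(self, num_classes: int = 10):  # num_classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example forward pass on a random batch of normalized pressure maps.
model = TactileCNN()
frames = torch.rand(4, 1, 32, 32)
logits = model(frames)          # shape: (4, num_classes)
print(logits.shape)
```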

2.1. Automated Intelligent Design

This is a core process of robot design, in which AI can generate a large number of design solutions in a short period of time and find the optimal one through continuous optimization and simulation testing. This not only significantly shortens the design cycle but also improves design quality and reliability. Figure 1 depicts the process of automated intelligent design. First, generative design uses AI algorithms to automatically generate design schemes for mechanical structures and components, then comprehensively optimizes multi-objective performance or reduces material consumption while meeting design constraints [35,36]. Alternatively, biomimetic design can be adopted to propose effective robot structures and functionality adapted to specific application scenarios [37]. Design optimization is then achieved through machine learning or AI optimization tools that adjust parameters and optimize the performance of existing designs in order to find the optimal solution [38]. In addition, simulation and testing allow rapid virtual validation of robot designs, reducing the need for physical prototypes, shortening the development cycle, and improving efficiency and effectiveness. Using AI algorithms and simulation data, automatic optimization can find the best design solution and achieve multi-objective optimization, significantly improving the efficiency and accuracy of the design process [39,40]; this design concept has also been widely used in the field of biology [41]. Any single process, or a combination of the above, can optimize the robot design process, ensuring an efficient and reliable design with broad potential for multi-domain applications.
Inspired by the sensory system of the star-nose mole, a biomimetic tactile olfactory sensing array installed on a robotic arm is proposed (Figure 2A) [42]. It can obtain real-time local terrain, stiffness, and odor of various objects without visual input. The tactile sensing array in the system consists of 70 force sensors located at the ends of the fingers, which can sense the local surface morphology and stiffness of objects. The olfactory sensing array consists of six gas sensors located on the palm, which can detect the odor characteristics of objects (Figure 2B). These sensors have high sensitivity and stability and can provide reliable data in harsh environments. By integrating tactile and olfactory information, the algorithm based on bioinspired olfactory-tactile (BOT) association learning can accurately recognize 11 typical objects with 96.9% accuracy in simulated rescue scenarios (Figure 2C). The compact, low-power, and robust design of the tactile olfactory sensing system highlights its enormous potential for object recognition in harsh environments where visual methods may fail.
In addition, a new optimization method based on metaheuristic optimization and a deep neural network (DNN) surrogate model was proposed to optimize the pattern parameters of concentric-tube robots for minimally invasive surgery [43]. First, topology optimization was used to determine the general shape of the pattern (Figure 2D). Then, training data were generated through finite element analysis (FEA), and a DNN surrogate model was constructed (Figure 2E). Finally, a metaheuristic-based optimization method was used to optimize the design variables, and the optimization results were verified through experiments (Figure 2F). This method can generate patterns that are superior to previous designs in a reasonable amount of time (less than 900 s) without the need for manual parameter studies or sensitivity analysis. The DNN surrogate model allows the design space to be explored effectively, which is more efficient than previous comprehensive search methods.
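The surrogate-plus-metaheuristic workflow can be sketched in a few lines: a regression network is fitted to (design parameters, performance) samples standing in for FEA results, and a metaheuristic then searches the design space over the cheap surrogate. The objective function, parameter bounds, and training data below are placeholders, not those of [43].

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Placeholder "FEA" evaluation: 3 design variables -> 1 performance metric to minimize.
def expensive_simulation(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.1) ** 2 + np.sin(3.0 * x[2])

X_train = rng.uniform(-1.0, 1.0, size=(500, 3))
y_train = np.array([expensive_simulation(x) for x in X_train])

# DNN surrogate model fitted to the simulation samples.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# Metaheuristic (differential evolution) searches the surrogate instead of the simulator.
result = differential_evolution(
    lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
    bounds=[(-1.0, 1.0)] * 3,
    seed=0,
)
print("surrogate-optimal design:", result.x)
print("verified by simulation:", expensive_simulation(result.x))
```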
On the other hand, a mechanical adhesive gripper for climbing robots has been developed that is inspired by beetle claws [44]. First, an analysis of the structure, morphology, and attachment process of beetle claws provided a reference for the preliminary design of the biomimetic gripper (Figure 2G). Then, by simulating the attachment probability of different microspike layouts, the optimal microspike design parameters were determined (Figure 2H). Next, a novel driving system was adopted to achieve lightweight construction, load sharing, adaptability, and strong adhesion on irregular rough surfaces. Finally, the gripper was integrated into the rock-climbing robot RockClimbo, enabling it to move on extreme terrains, and its performance was verified through experiments (Figure 2I).

2.2. Intelligent Material Selection

In terms of material selection, materials can be chosen for robots based on design requirements and performance indicators. AI can recommend the most suitable materials for different application scenarios and predict their performance under various conditions, ensuring the durability and adaptability of robots. For example, a new sensor system based on liquid metal (LM) improves the rheological properties of the LM by introducing SiO2 particles; through appropriate selection and combination of materials, the LM maintains high conductivity while gaining good processability and mechanical properties, enabling high-performance sensors [45]. The application of LM in sensors demonstrates the importance and potential of intelligent material selection, emphasizing the integration of multifunctional sensing systems through rational material selection and optimization with the assistance of deep learning algorithms. These systems are widely used in intelligent motion training, intelligent soft robots, and human–machine interfaces.

2.3. Modular Design

Origami robots are based on the principles of origami, using intelligent materials and biologically inspired design to achieve simple and efficient modular design, manufacturing, and control. They are not only suitable for various practical applications, but also enhance the flexibility, intelligence, and adaptability of robots through AI and embodied intelligence technologies. This design concept provides new directions and ideas for the future development of robot technology [46]. Modular design further demonstrates the advantages of AI in robot design. After identifying task requirements, it can provide multiple alternative design solutions to enhance system reliability, making robot design more flexible and scalable [47]. AI systems can recognize environmental information and optimize module combinations to meet different task requirements, significantly improving the adaptability and autonomy of robots [48]. In this regard, biomedical robots based on the principle of origami achieve compactness and reconfigurability by folding flat materials into three-dimensional structures [49]. This technology combines biocompatibility and high load capacity, making it suitable for building compact actuators, deployable stents, and minimally invasive surgical devices. By using computational models, the mechanical properties of these physical origami robots can be analyzed and simulated, providing a scientific basis for their design and application in the biomedical field. AI and modular design further enhance the intelligence and flexibility of these robots, enabling them to better meet practical application needs, and the proposed computational model helps accelerate the design iteration of new biomedical origami robots.
In summary, the role of AI in robot design is pivotal in driving innovation and enhancing adaptability. By leveraging advanced techniques such as generative design algorithms, machine learning, and big data analytics, AI has revolutionized the way robots are conceived and built. For instance, generative design algorithms can rapidly generate and optimize complex mechanical structures, while machine learning models can predict and mitigate potential design flaws. The integration of big data and big models has further enhanced the design process by providing detailed insights into performance optimization and material selection, leading to more sustainable and cost-effective designs. This holistic approach has not only improved the efficiency and reliability of robot design but also paved the way for more innovative and versatile robotic systems.

3. Application of Artificial Intelligence in Robot Perception Technology

In modern robotics, perception is the foundation for achieving autonomous action and intelligent decision-making. The introduction of AI has greatly improved the intelligence of robot perception, enhanced environmental adaptability, promoted innovation and development in robot perception technology, and enabled new application scenarios to emerge continuously. AI-driven perception systems often rely on a variety of neural network models, such as CNNs for image recognition and Generative Adversarial Networks (GANs) for generating synthetic training data to improve perception accuracy. These models have shown significant performance improvements over time, with advancements in areas such as real-time object detection and scene understanding. For example, by combining big data and big models, robots can process data from various sensors more efficiently. This ability enables robots to perceive the environment more comprehensively, understand it accurately, and respond quickly, thereby performing more complex tasks. The following subsections explore the specific applications and importance of AI in robot perception technology from three aspects: sensor data processing, computer vision, and natural language processing.

3.1. Sensor Data Processing

This is the core of robot perception: acquiring environmental information through multiple sensing modalities, such as vision, hearing, and touch. AI can effectively process and understand data from different sensors. Through multi-sensor fusion and real-time data analysis, robots can perceive the surrounding environment much as humans do, improving their perception accuracy and response speed (Table 1).
For example, visual perception can be achieved through image recognition technology, combined with big data and models, to enable robots to recognize and classify objects. In the manufacturing industry, visual perception is used to automatically inspect product quality, discover defects and deficiencies, and improve production efficiency and product quality [50,51]; in medical image analysis, it is used to segment and annotate organs or lesion areas to assist doctors in diagnosis [52,53]. Auditory perception can be achieved through speech recognition technology, enabling robots to understand and respond to human language. Voice assistants can be used in smart homes to recognize commands and control devices [54,55]. Auditory perception can also be used in monitoring to detect abnormal sounds and trigger alarms, ensuring environmental safety, or to identify user emotions and promote human–computer interaction [56,57]. Tactile perception enables robots to perceive and measure the shape, hardness, or surface features of objects, allowing them to manipulate objects more dexterously and perform complex tasks. In medicine, tactile feedback is used in biomimetic surgical robots to provide precise surgical assistance [58,59]; in collaborative tasks, it is used to precisely control contact and operating forces, achieving safe and efficient human–machine interaction [60,61].
Table 1. Examples of sensor data processing.
Category | Sensor Description | Data Type and Format | Data Processing Method | Typical Applications | Ref.
--- | --- | --- | --- | --- | ---
Visual perception | Camera | LiDAR point cloud data / Spatial | Adaptive Moment Estimation (Adam) | 3D object detection, automatic driving | [62]
Visual perception | Ordinary camera system | Sports stimulation and sine wave grating stimulation / Time series | Lobula giant movement detector (LGMD2) | Autonomous navigation, collision detection, obstacle avoidance of mobile robots | [63]
Visual perception | 2D retinomorphic devices | Frame difference time / Time series | Convolutional neural network (CNN) | Intelligent Internet of Things, human-eye biomimetic design | [64]
Visual perception | ZnO photo-synapse sensor | Photocurrent matrix / Matrix; light stimulation / Time series | Artificial neural network (ANN) | Neural morphology, artificial visual systems | [65]
Auditory perception | Skin-attachable acoustic sensor | Capacitance–voltage / Scalar | Analog-to-digital converter | Auditory electronic skin | [66]
Auditory perception | Thin-film flexible acoustic sensor | Sound signal / Time series | Short-time Fourier transform (STFT) | Physiological acoustic signal monitoring | [67]
Auditory perception | Spiral artificial basilar membrane sensor | Electrical signal / Scalar; noise data / Time series | Frequency response analysis | Speech recognition, dangerous situation recognition, hearing aids | [68]
Tactile perception | BB-Skin | Temperature change / Time series | Support vector machine (SVM) | Real-time temperature monitoring, object recognition | [69]
Tactile perception | Triboelectric sensor | Open-circuit voltage, short-circuit current, transferred charge / Mapping | Linear discriminant analysis (LDA) | Identification of the type and roughness of common materials | [70]
Tactile perception | Tattoo-like electronics | Surface electromyography / Vector | LDA | Natural communication in daily life | [71]

3.1.1. Visual Perception

Computer or robot systems mimic the human visual system, obtaining image or video data through sensors such as cameras, and using image processing, machine learning, and deep learning algorithms to interpret and understand visual information, thereby achieving functions such as object recognition, motion detection, distance measurement, and scene understanding. For example, Time-of-Flight (ToF) cameras measure the time it takes for a light signal to travel from a camera to an object and back. This technology is particularly useful for real-time 3D mapping and distance measurement, making it ideal for applications such as autonomous navigation and obstacle avoidance in robotics. Light Detection and Ranging (LiDAR) sensors use laser light to measure distances to objects. They are highly effective for creating detailed 3D maps of environments and are widely used in autonomous vehicles and robots for navigation and obstacle detection. Stereo vision systems use two or more cameras to capture images from slightly different viewpoints, similar to human eyes. By comparing the images, stereo vision can compute depth information, enabling robots to perceive 3D environments. This technology is essential for tasks such as object recognition, motion detection, and scene understanding. For example, an efficient stereo geometry network has been developed to quickly and accurately detect 3D objects in complex environments [62], providing strong technical support for applications such as autonomous driving and robot navigation.
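To make the stereo-vision principle concrete, the following OpenCV sketch computes a disparity map from a rectified left/right image pair and converts it to depth. The file names, focal length, and baseline are illustrative assumptions.

```python
import cv2
import numpy as np

# Rectified stereo pair (file names are placeholders and assumed to exist).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is returned in fixed point (scaled by 16).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# depth = f * B / disparity (assumed focal length in pixels and baseline in meters).
focal_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
print("median scene depth [m]:", np.median(depth[valid]))
```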
In addition to traditional sensors, biomimetic and advanced sensors are also being explored to enhance robotic vision. For example, drawing inspiration from the visual neural network of the locust, an LGMD2 visual neural network model was proposed for detecting potential collision objects in images [63]. When running in complex and ever-changing environments, this model ensures safe and efficient task execution for autonomous vehicles and robots. A hardware device inspired by the biomimetic retina has also been proposed [64], which realizes a compact and efficient motion detection and recognition (MDR) hardware system integrating optical sensing, storage, and computing functions, improving the efficiency and accuracy of motion detection and recognition. Furthermore, a sensor based on ZnO optical synapses has been proposed (Figure 3A) [65], which uses image binarization, denoising, and artificial neural network algorithms to achieve the signal processing and pattern recognition functions of optoelectronic synapses (Figure 3E). This verifies the rapid response capability and reliable recognition accuracy of its hardware-level image recognition system (Figure 3I).

3.1.2. Auditory Perception

This is the process of obtaining sound data through sensors such as microphones, which can perceive, analyze, and interpret sound signals, thereby achieving functions such as recognizing sounds, detecting noise, understanding voice commands, and recognizing emotions. For example, to address the limitations of existing wearable acoustic sensors and provide a more accurate and sensitive acoustic perception tool, a high-fidelity skin-attachable acoustic sensor has been proposed [66]. This sensor can accurately detect a wide range of sounds and communicate with AI virtual assistant programs through speech recognition. In addition, a thin-film flexible acoustic sensor capable of detecting ultra-wideband acoustic signals in various applications has been proposed (Figure 3B) [67]. The sensor can perform real-time laryngeal monitoring (Figure 3J) through short-time Fourier transform (STFT) spectral analysis (Figure 3F). Finally, a spiral artificial basilar membrane sensor has been proposed that mimics the basilar membrane of the human cochlea [68]. This sensor can respond to sounds of different frequencies and can be applied in fields such as speech recognition, dangerous situation recognition, hearing aids, and cochlear implants.
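The STFT-based spectral analysis used with such acoustic sensors can be reproduced with a few lines of SciPy; the synthetic chirp below simply stands in for a recorded acoustic signal, and the sampling rate and window sizes are assumptions.

```python
import numpy as np
from scipy.signal import stft, chirp

fs = 8000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
signal = chirp(t, f0=100, f1=1500, t1=2.0)  # stand-in for a sensor recording

# Short-time Fourier transform: time-frequency representation of the signal.
freqs, times, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Zxx)

# The loudest frequency bin in each frame could feed a downstream classifier.
dominant_freq = freqs[np.argmax(spectrogram, axis=0)]
print(spectrogram.shape, dominant_freq[:5])
```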

3.1.3. Tactile Perception

In addition, tactile perception refers to the ability to mimic the human skin and nervous system’s sense of touch. Tactile data are collected through devices such as pressure sensors [72] and temperature sensors [73], and algorithms such as machine learning [74] and deep learning [75,76] are used to perceive and interpret the shape, texture, temperature, and force characteristics of objects, thereby achieving precise tactile feedback and control. It has a wide range of application scenarios, including assisted perception in medical rehabilitation equipment [77], robot grasping and manipulation [78], and human–computer interaction interfaces [79,80]. For example, Force-Sensing Resistors (FSRs) are pressure-sensitive devices that change their resistance in response to applied force. They are widely used in robotic hands and prosthetics to measure the force exerted during grasping and manipulation tasks. Electromyography (EMG) sensors measure the electrical activity of muscles, providing valuable information about muscle contractions and movements. EMG sensors are commonly used in wearable devices and prosthetics to control robotic limbs and assistive devices. These sensors are essential for tasks such as robot grasping, manipulation, and human–robot interaction. For example, a highly realistic bionic bimodal electronic skin has been developed [69] that consists of temperature-sensing, heating, and friction-optimized electrode modules (Figure 3C). By using machine learning algorithms to analyze and process the data (Figure 3G), accurate object recognition can be achieved, and remote temperature-sensing feedback can be provided (Figure 3K). Meanwhile, by integrating triboelectric sensing and machine learning (Figure 3H) [70], a smart finger with tactile perception surpassing that of humans was proposed (Figure 3D), which can accurately identify material types and roughness (Figure 3L). In addition, a silent speech recognition system was proposed [71] that uses tattoo-like electronic devices to record biological data and implements speech recognition through machine learning algorithms, enabling communication for aphasia patients, communication while remaining silent, and human–computer interaction free of peripheral interference.
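As a minimal illustration of the machine-learning step in such tactile systems, the sketch below trains an SVM to classify objects from hand-crafted features of pressure-sensor readings. The feature set, class names, and synthetic data are placeholders rather than data from the cited works.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic tactile features: [mean pressure, peak pressure, contact area, stiffness proxy]
n_per_class, classes = 200, ["soft ball", "metal cube", "plastic bottle"]
X = np.vstack([rng.normal(loc=(i + 1.0), scale=0.4, size=(n_per_class, 4))
               for i in range(len(classes))])
y = np.repeat(np.arange(len(classes)), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```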

3.1.4. Multi-Sensor Fusion

Multiple sensors work together, fusing data from different types of sensors to provide more comprehensive and accurate environmental perception. For example, tactile gloves integrating temperature and pressure sensor arrays can sense the contact pressure and thermal conductivity of objects (Figure 4A) [81]; combined with deep learning algorithms, they can distinguish 20 types of objects, improving the accuracy of object recognition. In addition, multifunctional soft robotic fingers integrating nanoscale temperature–pressure tactile sensors can simultaneously measure the temperature change rate and contact pressure (Figure 4B) [82] and, combined with artificial neural networks, can accurately identify materials. Even a hierarchical pressure–temperature bimodal sensing e-skin based on all-resistive output signals can classify various objects of different shapes, sizes, and surface temperatures (Figure 4C) [83]. Meanwhile, a new ultralight multifunctional tactile nano-layered carbon aerogel sensor (Figure 4D) [84] can provide a variety of tactile perceptions and, combined with a multimodal supervised learning algorithm for object recognition, further enhances the universality and accuracy of tactile perception, giving robots multifunctional tactile sensing ability.
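A common way to implement the multi-sensor fusion described above is a two-branch network that encodes each modality separately and concatenates the embeddings before classification. The sketch below uses assumed array sizes (and reuses the 20-object class count mentioned for [81]); it is not the architecture of [81,82,83,84].

```python
import torch
import torch.nn as nn

class PressureTempFusion(nn.Module):
    """Fuses a pressure map and a temperature time series for object recognition."""
    def __init__(self, num_classes: int = 20):            # branch sizes below are assumptions
        super().__init__()
        self.pressure_branch = nn.Sequential(              # encodes an 8x8 pressure array
            nn.Flatten(), nn.Linear(8 * 8, 32), nn.ReLU())
        self.temp_branch = nn.Sequential(                  # encodes a 50-sample temperature trace
            nn.Linear(50, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, pressure, temperature):
        fused = torch.cat([self.pressure_branch(pressure),
                           self.temp_branch(temperature)], dim=1)
        return self.head(fused)

model = PressureTempFusion()
logits = model(torch.rand(4, 1, 8, 8), torch.rand(4, 50))
print(logits.shape)   # (4, 20)
```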

3.2. Computer Vision

In the field of computer vision, AI technologies such as image recognition and classification and simultaneous localization and mapping (SLAM) greatly enhance robots’ understanding of and adaptability to the environment and improve the flexibility and reliability with which they handle diverse tasks (Table 2).
Image recognition and classification enable robots to accurately recognize and classify surrounding objects and scenes. However, image-based data present multiple challenges. To cope with low-quality image data, a generative adversarial network (GAN)-based deep learning method for low-quality defect image recognition was proposed [85], which effectively improves the visual quality of low-quality images and uses a Visual Geometry Group 16-layer network (VGG16) to recognize the reconstructed images, achieving high recognition accuracy on low-quality defect images. However, GANs require substantial computing resources and training time, and the proposed method generalizes poorly. A contrastive generative adversarial network (CGAN) was then proposed for limited data [86], with data augmentation techniques used for surface defect recognition. The experiments show that the proposed method generates defect images of higher quality and greater diversity, improving the accuracy of surface defect recognition in limited-data situations. However, the generated images were not filtered before being input into the model, and the training time of the GAN was long, resulting in poor real-time performance. A multiscale multi-attention convolutional neural network (MSMA-SDD) was proposed for fine-grained surface defect detection to address images containing defects of different sizes [87]. By combining multiscale and multi-attention modules, defects of various sizes and shapes can be detected more accurately. The experimental results show that this method performs well on different datasets; however, to ensure optimal performance, a significant amount of time is required to manually tune the hyperparameters of MSMA-SDD. In addition, an Adaptive Classifier with Attention-wise Transformation was proposed for small-sample data [88], specifically designed to solve the problem of surface defect recognition with few samples. Evaluation on publicly available datasets and actual printed datasets showed that the proposed method not only improves the accuracy of few-sample surface defect recognition but also enhances generalization ability. However, detecting minor defects remains difficult, and the training time is relatively long.
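To clarify how GAN-based augmentation of defect images works in principle, the following compact PyTorch sketch alternates discriminator and generator updates on flattened grayscale patches. The patch size, latent dimension, and random "defect" data are placeholders; this is not the architecture of [85] or [86].

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28           # assumed latent size and 28x28 patches

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())      # fake defect patch in [-1, 1]
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))                       # real/fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, img_dim) * 2 - 1  # stand-in for real defect patches

for step in range(100):
    # Discriminator update: real patches -> 1, generated patches -> 0.
    z = torch.randn(32, latent_dim)
    fake = generator(z).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator output 1 on fakes.
    fake = generator(torch.randn(32, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Generated patches can now augment a limited defect dataset before training a classifier.
augmented = generator(torch.randn(16, latent_dim)).detach()
print(augmented.shape)
```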
SLAM technology enables robots to build maps in real time and determine their own position in unknown environments, thereby achieving autonomous navigation, path planning, and obstacle avoidance. For example, the recently launched driverless car “Apollo Go” from Baidu (Beijing, China) uses SLAM. To solve the problem of sensitivity to observation noise commonly encountered in outdoor scenarios, a quadric initialization method based on separating the quadric parameters was proposed [89], which improved robustness to observation noise and combined semantic inlier distribution, Kalman motion prediction, and an object data association (ODA) algorithm with elliptical projection to achieve real-time stereo vision SLAM. To achieve collision-free trajectories while balancing SLAM accuracy and area-coverage performance, an active simultaneous localization and mapping (SLAM) framework was proposed [90]. Simulation and experimental verification show that this method can generate more accurate SLAM results while covering the area and satisfying all constraints, which helps robots serve more intelligently and efficiently in daily life. To ensure that autonomous off-road vehicles can navigate in the wilderness without GPS signals, a novel view planning method has been developed for visual simultaneous localization and mapping [91]. By actively and smoothly controlling a gimbal camera mounted on the robot, robust and accurate SLAM can be achieved in unknown outdoor environments.
In addition, computer vision technology can be used for object detection and tracking. For example, to reduce the impact of adverse weather conditions, different types of AI technology have been combined [92], significantly improving the performance of autonomous driving and intelligent transportation systems in adverse weather and ensuring driving safety and efficiency. By combining dynamic attention fusion units and temporal–spatial fusion networks with AI technology [93], intelligent monitoring systems can automatically analyze and recognize customer behavior, detect anomalies in a timely manner, and improve the efficiency and effectiveness of security monitoring. In medical image analysis, computer vision is used to segment and annotate organs or lesion areas to assist doctors in diagnosis [94]. In daily life, computer vision is used for security verification, social media, smart homes, and related applications, enabling functions such as device unlocking and payment verification [95].

3.3. Natural Language Processing

Natural Language Processing (NLP) is another important perceptual technology. Through speech recognition, understanding, and semantic analysis, AI enables robots to interact more naturally with humans. Speech recognition and understanding technology enables robots to accurately understand human language instructions, while semantic analysis enables robots to understand the meaning behind language and make more intelligent responses. This ability allows robots to be applied more widely and deeply in fields such as services, education, and medicine. For example, for stroke patients who use multi-degree-of-freedom hand exoskeletons in daily activities, large language models and speech recognition technology are used to accurately recognize the patient’s movement intention, enabling the hand exoskeleton not only to perform predefined operations but also to respond to non-predefined tasks and commands [96]. As the problem of population aging worsens, rehabilitation nursing robots that assist in elderly care have attracted much attention. A multimodal rehabilitation system created using VR technology and speech recognition provides an innovative rehabilitation pathway for patients with learning and developmental disabilities [97]. This system is expected to be widely applied in the medical field in the future metaverse. In addition, chatbots such as OpenAI’s ChatGPT can engage in natural conversations with users, answer questions, provide information, or complete tasks [98].
In summary, the integration of AI into robot perception technology has profoundly enhanced the autonomy and decision-making capabilities of robots. Through advanced AI techniques such as convolutional neural networks (CNNs) for image recognition, recurrent neural networks (RNNs) for sequential data processing, and multimodal data fusion, robots are now capable of perceiving their environment with greater accuracy and comprehensiveness. These advancements enable robots to not only understand and respond to complex environmental conditions but also to accurately interpret human instructions, thereby making intelligent decisions in real-time. The result is a significant improvement in the efficiency and adaptability of robots, making them more capable of operating in dynamic and challenging environments. This progress has far-reaching implications for various industries, including industrial manufacturing, healthcare, and home services, where robots can now provide more effective and reliable support. By enhancing the perception capabilities of robots, AI has opened new possibilities for their application in diverse fields, ultimately bringing greater convenience and efficiency to human life and work. This progress underscores the transformative potential of AI in shaping the future of robotics and its integration into everyday life.

4. The Role of Artificial Intelligence in Robot Intelligent Control

In the development of robotics technology, the intelligence of control systems is key to achieving efficient, safe, and autonomous robots. The introduction of AI has injected strong impetus into the intelligent control of robots, playing an especially important role in autonomous navigation and path planning [99,100,101], motion control and coordination [102,103,104], and human–machine collaboration [105,106,107]. AI-driven control systems often utilize advanced neural network models, such as Reinforcement Learning (RL) algorithms for adaptive control and Deep Q-Networks (DQN) for decision-making in dynamic environments. These models have evolved to handle more complex tasks, such as real-time path optimization and adaptive control in uncertain environments. By combining big data and big model technologies, the design and development of humanoid robots can be improved [108,109,110], enhancing their abilities in body control, rapid movement, and precise perception in complex and dangerous environments. This will help create highly reliable humanoid robot solutions suitable for special application scenarios, such as performing tasks that require specific human skills and flexibility, as well as conducting rescue operations in hazardous environments [111,112,113]. AI enables robots to perform complex tasks more accurately and flexibly and to adapt to changing environments; Table 3 analyzes the advantages and disadvantages of several artificial intelligence methods in the field of robotics.

4.1. Autonomous Navigation and Path Planning

Autonomous navigation and path planning are critical components of robot intelligent control, enabling robots to operate efficiently and safely in various environments. The development of advanced algorithms and the integration of various perception sensors have significantly enhanced the capabilities of robots in these domains. In the following subsections, we will delve into the specific algorithms and techniques used in path planning, reinforcement learning, and adaptive control, highlighting their applications and benefits.

4.1.1. Path Planning Algorithm

This is one of the important areas of robot intelligent control. It involves the use of advanced algorithms to design efficient and safe paths for robots while considering obstacle avoidance and environmental awareness [122,123,124]. The application of this technology has become a particular research focus for rescue in natural disasters or urban accidents. Owing to the complexity and diverse changes of real environments, quickly determining a reliable rescue route is crucial. Robust perception in diverse environments, such as aerial, underwater, and underground settings, also requires specialized sensor suites tailored to environmental constraints. Aerial platforms typically deploy lightweight radar, LiDAR, or thermal cameras, whereas underwater robots rely on acoustic sensors; for underground robots, perception is commonly enabled by LiDAR, ToF laser or IR sensors, and thermal cameras. By selecting sensor types suited to each environment and integrating them with advanced path-planning algorithms, researchers can ensure robust, real-time path planning. For example, researchers have developed a feature-learning-based bio-inspired neural network for rapidly generating heuristic rescue paths in complex and dynamic environments [114]. The system learns environmental information and optimizes path-planning strategies, enabling real-time response to environmental changes and significantly improving the speed, efficiency, and optimality of rescue operations. Similarly, in complex underwater environments, efficient Reward acting on Reinforcement Learning and Particle Swarm Optimization strategies have been proposed for real-time rescue task allocation [117]. This strategy demonstrates the advantages of cost-effectiveness and rapid response in path planning and task allocation. In addition, drones are gradually demonstrating their advantages in search and rescue missions, especially in unknown environments that are difficult to access. To this end, an adaptive conversion speed Q-Learning algorithm has been proposed [125], which is used to autonomously perform search and rescue tasks in unknown environments, thereby optimizing path selection. To further enhance the navigation ability of humanoid robots in complex environments and multirobot coordination, a hybrid neural-fuzzy-petri-net controller has been proposed [119], which combines the Mamdani fuzzy system (Figure 5A). It takes the target angle and obstacle distance output by a CNN as inputs (Figure 5B) and generates optimized effective target angles, thereby achieving dynamic path analysis and collision avoidance between multiple robots (Figure 5C).
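For readers unfamiliar with the Q-learning family of methods referred to above (e.g., the adaptive conversion speed Q-Learning of [125]), the following tabular sketch learns a path across a small grid containing one obstacle. The grid layout, rewards, and hyperparameters are illustrative only and are not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
size, goal, obstacle = 5, (4, 4), (2, 2)             # 5x5 grid world (assumed layout)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # up, down, left, right
Q = np.zeros((size, size, len(actions)))
alpha, gamma, epsilon = 0.1, 0.95, 0.2

def step(state, a):
    r, c = state[0] + actions[a][0], state[1] + actions[a][1]
    r, c = min(max(r, 0), size - 1), min(max(c, 0), size - 1)
    if (r, c) == obstacle:
        return state, -5.0, False                    # bump penalty, stay in place
    if (r, c) == goal:
        return (r, c), 10.0, True
    return (r, c), -1.0, False                       # step cost encourages short paths

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        a = rng.integers(4) if rng.random() < epsilon else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt

print("greedy action at the start cell:", int(np.argmax(Q[(0, 0)])))
```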

4.1.2. Reinforcement Learning

Reinforcement learning assists robots in autonomous learning and path optimization in unknown, complex, and dynamic environments [127,128,129]. For example, in harsh, polluted, and even dangerous areas, it is very important to use cleaning robots for full-coverage cleaning work. To this end, a Complete Coverage Planning (CCP) method based on deep reinforcement learning (DRL) was proposed [115] to optimize shape change and path planning, covering all areas that need to be cleaned with minimal energy consumption and time. Path planning for mobile robots in unknown environments is a common problem, and traditional path planning algorithms often suffer from serious dependence on environmental information, long inference times, and weak anti-interference ability. To overcome these challenges, an improved deep reinforcement learning algorithm has been proposed [116] that utilizes a double deep Q-network (DDQN), a comprehensive reward function, and Bezier curves to generate safer, shorter paths with strong adaptability. In addition, underwater environments are large-scale, complex, real-time, and dynamic, with many unknown obstacles. In such harsh environments, conventional path planning methods are often ineffective. Therefore, a new path planning algorithm based on N-step priority Double DQN has been designed for underwater robots navigating dynamic and complex 3D underwater environments, effectively achieving obstacle avoidance in complex environments [130]. To further solve the path planning problem of multirobot systems in complex environments, a dynamic proximal meta policy optimization with covariance matrix adaptation evolutionary strategies (dynamic-PMPO-CMA) was proposed to improve the adaptability of robots in different environments (Figure 5D) [126]. During the training phase, transfer learning was introduced to improve learning efficiency (Figure 5E). The simulation results show that this method has faster convergence and better obstacle-avoidance navigation performance (Figure 5F).
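The key idea of the double deep Q-network (DDQN) mentioned for [116] and [130], decoupling action selection from action evaluation when forming the learning target, can be written compactly as below. The network sizes and state/action dimensions are placeholders, not those of the cited planners.

```python
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4                       # assumed dimensions
online_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(online_net.state_dict())

def ddqn_targets(rewards, next_states, dones, gamma=0.99):
    """Double DQN: the online net picks the next action, the target net evaluates it."""
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q

# Example batch drawn from a replay buffer (random stand-in values).
batch = 32
targets = ddqn_targets(torch.rand(batch), torch.rand(batch, state_dim), torch.zeros(batch))
print(targets.shape)   # (32,)
```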

4.1.3. Adaptive Control

Control strategies can be adjusted based on real-time feedback, allowing the robot’s behavior and parameters to be automatically adjusted and optimized to suit different tasks and conditions, thereby improving navigation efficiency and task completion rates [131,132,133]. For example, to ensure safety in an emergency, a real-time planning horizontal control algorithm suitable for following autonomous connected vehicles (ACVs) in a two-vehicle platoon has been proposed [134]: the driving curve of the vehicle in front is first estimated, and the driving direction and position of the following vehicle are then adjusted to ensure driving safety. A PTDRL (Parameter Tuning using Deep Reinforcement Learning) strategy has been proposed to adapt to different environments [135]. Through an adaptive control mechanism, the parameters of the navigation system are adjusted based on real-time feedback, making it suitable for navigation scenarios that require flexible responses to different tasks and conditions. A trajectory planning method for autonomous ground vehicles (AGVs) based on a novel Reinforcement Learning Particle Swarm Optimization (RLPSO) algorithm has been proposed for dynamic environments with obstacles [118]. This method uses the Hector SLAM algorithm to achieve real-time mobile robot positioning and map generation on the ROS platform, generating binary occupancy grids (Figure 5G). The experimental environment includes static and dynamic obstacles to evaluate the performance of the RLPSO algorithm against five classic PSO algorithms (Figure 5H). Meanwhile, the AGV utilizes the RLPSO algorithm to dynamically adjust its strategy based on feedback, achieving adaptable trajectory planning in diverse environments while effectively avoiding obstacles (Figure 5I).
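As a small, generic illustration of adaptive control (not the PTDRL or RLPSO methods cited above), the following simulation adapts a feedforward gain with the classical MIT rule so that a first-order plant tracks a reference model. The plant parameters, adaptation gain, and reference signal are arbitrary assumptions.

```python
import numpy as np

dt, T = 0.001, 20.0
gamma_adapt = 0.5                      # adaptation gain (tuning choice)
a, k = 1.0, 2.0                        # plant y' = -a*y + k*u, k unknown to the controller
k0 = 1.0                               # reference model y_m' = -a*y_m + k0*r

y = ym = theta = 0.0
for i in range(int(T / dt)):
    t = i * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0      # square-wave reference command
    u = theta * r                              # adjustable feedforward controller
    e = y - ym                                 # tracking error w.r.t. reference model
    # MIT rule: push the gain in the direction that reduces the tracking error.
    theta += dt * (-gamma_adapt * e * ym)
    y += dt * (-a * y + k * u)                 # plant update (Euler integration)
    ym += dt * (-a * ym + k0 * r)              # reference model update

print(f"adapted gain {theta:.3f} (ideal value k0/k = {k0 / k:.3f})")
```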

4.2. Motion Control and Coordination

Motion control and coordination are essential for enabling robots to perform precise and efficient operations in various environments. In this section, we will explore the key aspects of motion control and coordination, including motion control, feedback control, and multi-robot coordination. We will discuss the latest advancements in these areas and their applications in different robotic systems.

4.2.1. Motion Control

This is the foundation for achieving precise operation of robots. With the support of big data and big models, robots’ motion planning and execution capabilities have been significantly improved, enabling them to accurately control their joints and achieve stable and accurate grasping actions [136,137,138]. For example, an artificial intelligence system called the Generative Grasping Convolutional Neural Network (GG-CNN) was proposed [139], which learns from large amounts of data how to accurately grasp items in various environments. Compared with traditional methods, GG-CNN has faster computation speed, a smaller size, and better performance in complex environments. When grasping objects in cluttered scenes, the maximum grasping metric method is used to analyze point cloud data from a single perspective to determine the optimal grasping point, which improves the robot’s grasping ability and success rate in various scenarios [140]. In addition, flexible grasping of unknown objects is particularly important; therefore, a two-stage grasping strategy has been proposed [141]. First, a flexible hand consisting of a variable palm and four soft fingers was designed (Figure 6A) that provides flexible grasping space and deformation ability to adapt to unknown objects. Next, a modular convolutional neural network (M-CNN) was used to classify the shape, material, and size of the target object and determine the appropriate input pressure level (Figure 6B). Then, a vision-based adaptive detection (VAD) method was proposed, which utilizes the classification results to obtain the object pose and optimal grasping configuration. Finally, the combination of classification and detection methods demonstrated the reliability, adaptability, and real-time performance of the approach (Figure 6C).

4.2.2. Feedback Control

The system adjusts control strategies through real-time data to ensure the stability of the robot’s operation in various environments [144,145,146]. To address the instability of control during the forward movement of a bionic robotic fish, a new bionic robotic fish control framework was introduced [147]. The stability of the robotic fish was enhanced using reaction wheels, and the control of the tail swing and reaction wheels was coordinated through a multiagent reinforcement learning (MARL) method. The experimental results show that this approach achieves precise motion control and stable swimming behavior, significantly improving the accuracy and stability of path tracking and reducing shaking. In addition, to reduce physical labor in repetitive industrial tasks, assistive mechanical exoskeletons can be used, but precise control is difficult owing to the uncertainty of the dynamic environment. Therefore, an exoskeleton control method combining fuzzy adaptive admittance control, a barrier Lyapunov function (BLF), and a neural network compensator has been proposed [148]. The system utilizes real-time interaction force and end-effector speed data to adjust the control strategy of the exoskeleton, ensuring stability and accuracy across different environments and tasks. On the other hand, to assist robots in walking on different surfaces, a Soft Terrain Adaptation and Compliance Estimation (STANCE) algorithm has been proposed [120]. This allows the robot to adjust its control strategy using real-time ground feedback without prior tuning, helping it adapt to changing ground conditions and thereby ensuring stable and adaptable operation. At the same time, to enable quadruped robots to maintain dynamic stability and efficiency on complex terrains while carrying heavy loads, a new method has been proposed that integrates adaptive control into a force-based control system [149]. To improve the generalization ability of existing reinforcement learning on variable terrain, a versatile and computationally efficient granular media model for reinforcement learning has been proposed [142]. This method includes a parameterized terrain model (Figure 6D) that can simulate various terrain types, from soft sandy beaches to hard asphalt. In addition, an adaptive control architecture was introduced (Figure 6E) that automatically adjusts the robot’s movements according to changes in terrain, thereby improving its motion performance. Through this method, the Raibo robot has demonstrated high-speed mobility on different terrains, such as soft sandy beaches, vinyl flooring, sports tracks, grass, and soft air cushions (Figure 6F).
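A generic admittance-style feedback loop of the kind underlying the exoskeleton controller in [148] can be sketched as follows: the measured interaction force drives a virtual mass-damper-spring system whose output becomes the commanded end-effector displacement. The virtual parameters, loop rate, and simulated force profile are illustrative assumptions, not values from the cited work.

```python
import numpy as np

dt = 0.002                                 # control period (assumed 500 Hz loop)
M, D, K = 2.0, 20.0, 50.0                  # virtual mass, damping, stiffness (tuning choices)
x = x_dot = 0.0                            # commanded displacement and velocity

trajectory = []
for i in range(2500):                      # 5 s simulation
    t = i * dt
    f_ext = 10.0 if 1.0 <= t <= 3.0 else 0.0   # stand-in for the measured interaction force
    # Admittance law: M*x_dd + D*x_d + K*x = f_ext, integrated forward in time.
    x_ddot = (f_ext - D * x_dot - K * x) / M
    x_dot += x_ddot * dt
    x += x_dot * dt
    trajectory.append(x)                   # x would be sent to the low-level position controller

print(f"peak compliant displacement: {max(trajectory):.3f} m")
```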

4.2.3. Multi-Robot Coordination

By optimizing collaboration between multiple robots through AI, the efficiency and coordination of task execution can be improved, enhancing the overall functionality of the system [150,151]. For example, to effectively achieve collaborative operation tasks, a hierarchical control system has been proposed [152]. The system includes three levels: adaptive control, optimization control, and decentralized control. Each level has specific functions to deal with the unknown characteristics of objects, terrain complexity, and coordination between robots, achieving collaborative control of objects by quadruped robot teams in uncertain environments. In addition, a new genetic programming hyperheuristic (GPHH) method has been proposed for the multipoint dynamic aggregation problem (MPDA) in multi-robot systems [153]. This method can automatically generate real-time reactive coordination strategies to guide robots in executing dynamically occurring tasks. On the other hand, practical problems are often treated as multimodal optimization problems (MMOPs), the two main issues being the rational allocation of fitness evaluations (FEs) and preventing individuals from falling into local optima. Therefore, a strengthening evolution-based differential evolution with prediction strategy (SEDE-PS) has been proposed to solve MMOPs (Figure 6G) [143]. First, a neighbor-based evolution prediction (NEP) strategy was proposed to accelerate convergence. Second, a prediction-based mutation (PM) strategy was introduced to enhance exploration and exploitation capabilities. Finally, a strengthening evolution (SE) strategy was proposed (Figure 6H). The SEDE-PS algorithm was then applied to an actual multi-robot task assignment (MRTA) problem, demonstrating its effectiveness in solving practical optimization problems (Figure 6I).
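While SEDE-PS [143] is an evolutionary method, the core MRTA subproblem it targets, assigning robots to tasks at minimum total cost, can be illustrated with a standard optimal-assignment baseline. The positions and cost model below are random placeholders, not data from the cited study.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
robots = rng.uniform(0, 10, size=(4, 2))       # 4 robot positions (x, y)
tasks = rng.uniform(0, 10, size=(4, 2))        # 4 task locations

# Cost matrix: Euclidean travel distance from every robot to every task.
cost = np.linalg.norm(robots[:, None, :] - tasks[None, :, :], axis=2)

# Hungarian algorithm gives the assignment minimizing total travel distance.
robot_idx, task_idx = linear_sum_assignment(cost)
for r, t in zip(robot_idx, task_idx):
    print(f"robot {r} -> task {t} (distance {cost[r, t]:.2f})")
print("total cost:", cost[robot_idx, task_idx].sum())
```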

4.3. Human Machine Collaboration

Human–machine collaboration is a critical area in the development of robotics technology, focusing on enabling robots to work safely and effectively alongside humans. This collaboration involves understanding human intentions, adapting to human actions, and ensuring safety during interactions. In this section, we will explore cooperative control and security monitoring and protection.

4.3.1. Cooperative Control

Cooperative control is one of the important directions for the development of robotics technology. Through AI technology, robots can understand human intentions and respond appropriately, working collaboratively with humans to complete complex tasks. Given the large scale, fast pace, and obvious aging trend of China’s older adult population, medical assistive devices for older adults that include intelligent functions, such as assisted and optimized rehabilitation [154], nursing and daily-life support [155,156], management of health data [157], and emotional voice communication [158], are a feasible way to address problems related to care for older adults. In the industrial field, human–machine collaboration is also a very important research hotspot. Robots should be able to recognize human intentions and ensure safe, adaptive behavior along the expected direction of motion. To this end, a Q-Learning-based Model Predictive Variable Impedance Control (Q-LMPVIC) method was proposed [159], which aims to enhance the performance of physical human–robot collaboration (pHRC) tasks, especially in industrial applications.

4.3.2. Security Monitoring and Protection

Real-time monitoring of the interactions between robots and humans through AI technology ensures the safety of the collaborative process. The traditional momentum observer (MOB) method is used to estimate externally applied forces, but its estimates are not accurate enough in the presence of model uncertainties such as friction and modeling errors. To address this issue, a method based on a Long Short-Term Memory (LSTM) network has been proposed [121] that can effectively estimate external forces and detect collisions, improving the safety and response accuracy of robots when cooperating with humans and enabling robots to respond promptly and appropriately to external impacts. In the absence of force/torque sensors, using a nonlinear disturbance observer and a Gaussian-process-estimated compensation zone to distinguish between external disturbances and interaction forces can achieve accurate motion tracking and compliance during interaction in complex tasks, ensuring the stability and safety of the robot [160]. This intelligent collaboration capability has wide applications in fields such as service robots and medical assistance, significantly improving the efficiency and safety of coexistence and collaboration between robots and humans.
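The LSTM-based external-force estimator described in [121] can be sketched schematically as a sequence-to-value regressor over windows of joint signals. The input features, window length, network size, and collision threshold here are assumptions, not the published configuration.

```python
import torch
import torch.nn as nn

class ForceEstimator(nn.Module):
    """Maps a window of joint signals (torque, velocity, ...) to an external-force estimate."""
    def __init__(self, n_features: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, window):                 # window: (batch, time, n_features)
        out, _ = self.lstm(window)
        return self.readout(out[:, -1, :])     # regress from the last time step

model = ForceEstimator()
window = torch.rand(8, 100, 6)                 # 8 windows of 100 samples, 6 joint signals
force_hat = model(window)                      # (8, 1) estimated external force
# A collision could then be flagged when the estimate exceeds a safety threshold.
collision = force_hat.squeeze(1) > 0.8
print(force_hat.shape, collision.shape)
```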
In summary, the application of artificial intelligence in robot intelligent control has significantly elevated the autonomy and intelligence of robotic systems. By incorporating advanced AI techniques such as reinforcement learning, adaptive control algorithms, and deep neural networks, robots are now capable of more accurately perceiving their environment, optimizing navigation paths, coordinating complex movements, and collaborating efficiently with humans. These advancements have not only improved the overall efficiency and adaptability of robots but also provided robust technical support for the intelligent development of various industries. For example, reinforcement learning algorithms enable robots to learn optimal control policies through trial and error, adapting to changing conditions in real-time. Deep neural networks, particularly convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have enhanced robots’ ability to process and interpret sensory data, leading to improved perception and decision-making. Adaptive control algorithms further ensure that robots can adjust their behavior dynamically to maintain stability and performance in various environments. The integration of these AI-driven control mechanisms has opened new possibilities for the application of robots in fields such as industrial automation, healthcare, and emergency response.

5. Summary and Outlook

To date, AI and robotics have advanced in tandem, and their integration has brought new perspectives to applications across many fields. In terms of design, AI-driven optimization algorithms have facilitated the creation of more efficient and adaptable robot structures. AI technologies such as machine learning and deep learning have played an important role in strengthening robots’ environmental perception and interpretation, improving their abilities in object recognition, navigation, and more. Control systems also benefit from AI, especially in dynamic and unstructured environments, where advanced algorithms make control strategies more precise and adaptive. Current trends indicate a growing emphasis on integrating diverse AI methods, adaptive control strategies, and big data technologies to enhance robot capabilities. Future research should focus on addressing the challenges of real-time performance, computational efficiency, and safety, while also exploring new applications and hybrid approaches to further advance the field. Accordingly, this article offers several prospects for future development directions.
First, through AI-driven generative design and optimization algorithms, the efficiency of robot design will be significantly improved, the design cycle will be shortened, and the design quality will be enhanced. For instance, generative design algorithms can rapidly generate and optimize complex mechanical structures, while machine learning models can predict and mitigate potential design flaws. However, the design process often requires a large amount of computing resources, especially for deep learning and complex simulations, which poses challenges to real-time performance and cost. The integration of quantum computing [161], mobile edge computing [162], and other emerging technologies is expected to significantly improve computing efficiency and response speed.
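As a concrete, hedged illustration of design-space search (not any specific tool cited above), the sketch below runs a small genetic-style optimization over the two link lengths of a planar arm, rewarding designs whose workspace covers a set of task points while penalizing material use. The task points, bounds, and fitness weights are invented for the example.

```python
import numpy as np

# Illustrative evolutionary design optimization: search over two link lengths of
# a planar 2-link arm so that a set of task points becomes reachable with minimal
# total link length. All values below are illustrative assumptions.

rng = np.random.default_rng(0)
task_points = np.array([[0.5, 0.2], [0.3, 0.6], [0.7, 0.4]])   # targets to reach [m]

def fitness(lengths):
    l1, l2 = lengths
    dists = np.linalg.norm(task_points, axis=1)
    # A point is reachable if |l1 - l2| <= distance <= l1 + l2.
    reachable = np.logical_and(dists <= l1 + l2, dists >= abs(l1 - l2))
    return reachable.sum() - 0.1 * (l1 + l2)            # coverage minus material cost

population = rng.uniform(0.1, 1.0, size=(30, 2))         # 30 candidate designs
for generation in range(50):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-10:]]        # keep the 10 best designs
    children = (parents[rng.integers(0, 10, size=20)]
                + rng.normal(0.0, 0.05, size=(20, 2)))    # mutate copies of parents
    population = np.clip(np.vstack([parents, children]), 0.05, 1.0)

best = population[np.argmax([fitness(ind) for ind in population])]
print("Best link lengths [m]:", np.round(best, 3))
```

Generative design tools and deep-learning surrogates replace the hand-written fitness function with learned or simulated objectives, which is where the computational cost discussed above arises.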
Second, convolutional neural networks (CNNs) have enabled robots to accurately recognize objects in diverse environments, while recurrent neural networks (RNNs) have improved their ability to process sequential data for real-time navigation. However, sensor data often contain noise and errors, which can degrade perception accuracy. To address this, sensor data can be preprocessed and, with the help of machine learning and deep learning, automatically denoised or corrected. Fusing data from multiple sensors further improves the robustness and accuracy of the perception system.
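A minimal sketch of the preprocessing-plus-fusion idea is shown below: two noisy range sensors observing the same distance are smoothed with a moving-average filter and then combined by inverse-variance (Kalman-style) weighting. The sensor noise levels and the true distance are synthetic values chosen only for illustration.

```python
import numpy as np

# Illustrative noise suppression and multi-sensor fusion: two noisy range sensors
# are pre-filtered and then fused by inverse-variance weighting. Values are synthetic.

rng = np.random.default_rng(1)
true_distance = 2.0                                        # [m]
lidar = true_distance + rng.normal(0.0, 0.02, size=200)    # low-noise sensor
sonar = true_distance + rng.normal(0.0, 0.10, size=200)    # high-noise sensor

def moving_average(x, window=5):
    return np.convolve(x, np.ones(window) / window, mode="valid")

lidar_f, sonar_f = moving_average(lidar), moving_average(sonar)

# Inverse-variance (Kalman-style) fusion of the two filtered streams.
var_l, var_s = lidar_f.var(), sonar_f.var()
w_l = (1.0 / var_l) / (1.0 / var_l + 1.0 / var_s)
fused = w_l * lidar_f + (1.0 - w_l) * sonar_f

print("lidar RMSE :", np.sqrt(np.mean((lidar_f - true_distance) ** 2)))
print("fused RMSE :", np.sqrt(np.mean((fused - true_distance) ** 2)))
```

Learning-based pipelines replace the fixed filter and weights with trained models, but the underlying principle of weighting sources by their reliability is the same.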
Third, ensuring the safety of robots working in collaboration with humans remains a challenge for human–machine cooperation. Establishing comprehensive safety and reliability assessment frameworks, coupled with adaptive design principles, is essential. For example, reinforcement learning can allow a robot to learn safe interaction policies through trial and error, while robots equipped with adaptive control systems can detect human intentions and actions, adjusting their movements accordingly to ensure safe and efficient collaboration.
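One simple building block of such adaptive safety behavior is speed scaling based on the perceived human–robot distance, sketched below. The distance thresholds and the linear scaling law are illustrative assumptions and not a certified safety function.

```python
# Illustrative adaptive safety monitor: the commanded robot speed is scaled down
# as the estimated human-robot distance shrinks and motion stops inside a
# protective zone. Thresholds and the scaling law are assumptions for illustration.

STOP_DISTANCE = 0.3      # [m] no motion allowed closer than this
SLOW_DISTANCE = 1.0      # [m] start scaling speed below this distance

def speed_scale(human_distance_m: float) -> float:
    """Return a factor in [0, 1] to multiply the nominal velocity command."""
    if human_distance_m <= STOP_DISTANCE:
        return 0.0
    if human_distance_m >= SLOW_DISTANCE:
        return 1.0
    # Linear ramp between the protective stop zone and the free-motion zone.
    return (human_distance_m - STOP_DISTANCE) / (SLOW_DISTANCE - STOP_DISTANCE)

if __name__ == "__main__":
    for d in (0.2, 0.5, 0.8, 1.5):                      # sample distances from perception
        print(f"human at {d:.1f} m -> velocity scale {speed_scale(d):.2f}")
```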
In short, the integration of artificial intelligence into robot design, perception, and control has transformed the field of robotics, enabling the creation of advanced intelligent systems that can perceive their environment, make informed decisions, and perform tasks accurately and efficiently. The synergy between AI and robotics continues to drive innovation and opens up new possibilities for developing powerful, versatile robot systems across various industries.

Author Contributions

Conceptualization, L.L. (Li Li) and M.L.; methodology, L.L. (Li Li) and K.L.; validation, M.L. and K.L.; writing—original draft preparation, L.L. (Lei Li); writing—review and editing, K.L.; visualization, L.L. (Lei Li); supervision, L.L. (Li Li); funding acquisition, K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62461002) and the Guangxi Major Special Project (Grant No. Guike AA23062042).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zheng, Z.; Han, J.; Shi, Q.; Demir, S.O.; Jiang, W.; Sitti, M. Single-step precision programming of decoupled multiresponsive soft millirobots. Proc. Natl. Acad. Sci. USA 2024, 121, e2320386121. [Google Scholar] [CrossRef]
  2. Gong, S.; Ding, Q.; Wu, J.; Li, W.-B.; Guo, X.-Y.; Zhang, W.-M.; Shao, L. Bioinspired Multifunctional Mechanoreception of Soft–Rigid Hybrid Actuator Fingers. Adv. Intell. Syst. 2022, 4, 2100242. [Google Scholar] [CrossRef]
  3. Xia, G.; Zhang, L.; Dai, Y.; Xue, Y.; Zhang, J. Sound Feedback Fuzzy Control for Optimizing Bone Milling Operation During Robot-Assisted Laminectomy. IEEE Trans. Fuzzy Syst. 2024, 32, 2341–2351. [Google Scholar] [CrossRef]
  4. Kim, J.; de Mathelin, M.; Ikuta, K.; Kwon, D.-S. Advancement of Flexible Robot Technologies for Endoluminal Surgeries. Proc. IEEE 2022, 110, 909–931. [Google Scholar] [CrossRef]
  5. Zhang, H.; Yi, H.; Wang, C.; Yang, J.; Jin, T.; Zhao, J. A Robotic Microforceps for Retinal Microsurgery with Adaptive Clamping Method. IEEE/ASME Trans. Mechatron. 2024, 29, 4492–4503. [Google Scholar] [CrossRef]
  6. Inazawa, M.; Takemori, T.; Tanaka, M.; Matsuno, F. Motion Design for a Snake Robot Negotiating Complicated Pipe Structures of a Constant Diameter. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 8073–8079. [Google Scholar] [CrossRef]
  7. Liu, Z.; Liu, J.; Wang, H.; Yu, X.; Yang, K.; Liu, W.; Nie, S.; Sun, W.; Xie, Z.; Chen, B.; et al. A 1 mm-Thick Miniatured Mobile Soft Robot with Mechanosensation and Multimodal Locomotion. IEEE Robot. Autom. Lett. 2020, 5, 3290–3297. [Google Scholar] [CrossRef]
  8. Kang, X.; Song, B.; Guo, J.; Qin, Z.; Yu, F.R. Task-oriented image transmission for scene classification in unmanned aerial systems. IEEE Trans. Commun. 2022, 70, 5181–5192. [Google Scholar] [CrossRef]
  9. Yue, Y.; Wen, M.; Putra, Y.; Wang, M.; Wang, D. Tightly-Coupled Perception and Navigation of Heterogeneous Land-Air Robots in Complex Scenarios. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 10052–10058. [Google Scholar] [CrossRef]
  10. Singla, A.; Padakandla, S.; Bhatnagar, S. Memory-based deep reinforcement learning for obstacle avoidance in UAV with limited environment knowledge. IEEE Trans. Intell. Transp. Syst. 2019, 22, 107–118. [Google Scholar] [CrossRef]
  11. Su, H.; Di Lallo, A.; Murphy, R.R.; Taylor, R.H.; Garibaldi, B.T.; Krieger, A. Physical human–robot interaction for clinical care in infectious environments. Nat. Mach. Intell. 2021, 3, 184–186. [Google Scholar] [CrossRef]
  12. Zhang, L.; Cai, Z.; Yan, Y.; Yang, C.; Hu, Y. Multi-agent policy learning-based path planning for autonomous mobile robots. Eng. Appl. Artif. Intell. 2024, 129, 107631. [Google Scholar] [CrossRef]
  13. Li, Z.; Li, X.; Li, Q.; Su, H.; Kan, Z.; He, W. Human-in-the-Loop Control of Soft Exosuits Using Impedance Learning on Different Terrains. IEEE Trans. Robot. 2022, 38, 2979–2993. [Google Scholar] [CrossRef]
  14. Anh, P.T.Q.; Thuyet, D.Q.; Kobayashi, Y. Image classification of root-trimmed garlic using multi-label and multi-class classification with deep convolutional neural network. Postharvest Biol. Technol. 2022, 190, 111956. [Google Scholar] [CrossRef]
  15. Wang, M.; Yan, Z.; Wang, T.; Cai, P.; Gao, S.; Zeng, Y.; Wan, C.; Wang, H.; Pan, L.; Yu, J.; et al. Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors. Nat. Electron. 2020, 3, 563–570. [Google Scholar] [CrossRef]
  16. Sun, X.; Li, J.; Kovalenko, A.V.; Feng, W.; Ou, Y. Integrating Reinforcement Learning and Learning from Demonstrations to Learn Nonprehensile Manipulation. IEEE Trans. Autom. Sci. Eng. 2023, 20, 1–10. [Google Scholar] [CrossRef]
  17. Huang, X.; Wang, X.; Zhao, Y.; Hu, J.; Li, H.; Jiang, Z. Guided Model-Based Policy Search Method for Fast Motor Learning of Robots with Learned Dynamics. IEEE Trans. Autom. Sci. Eng. 2024, 22, 453–465. [Google Scholar] [CrossRef]
  18. Li, T.; Zhao, Z.; Sun, C.; Yan, R.; Chen, X. Multireceptive Field Graph Convolutional Networks for Machine Fault Diagnosis. IEEE Trans. Ind. Electron. 2021, 68, 12739–12749. [Google Scholar] [CrossRef]
  19. Nakhli, R.; Zhang, A.; Mirabadi, A.; Rich, K.; Asadi, M.; Gilks, B.; Farahani, H.; Bashashati, A. Co-pilot: Dynamic top-down point cloud with conditional neighborhood aggregation for multi-gigapixel histopathology image representation. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 21063–21073. [Google Scholar] [CrossRef]
  20. Ajeil, F.H.; Ibraheem, I.K.; Sahib, M.A.; Humaidi, A.J. Multi-objective path planning of an autonomous mobile robot using hybrid PSO-MFB optimization algorithm. Appl. Soft Comput. 2020, 89, 106076. [Google Scholar] [CrossRef]
  21. Han, M.; Lyu, Z.; Qiu, T.; Xu, M. A Review on Intelligence Dehazing and Color Restoration for Underwater Images. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1820–1832. [Google Scholar] [CrossRef]
  22. Zhou, X.; Zhang, L.; Chen, W.; Liang, Q.; Liu, B.; Zheng, H.; Li, K. Study of an Untethered Bioinspired Piezoelectric Father-Son Robot for Deep-Sea Narrow Space. IEEE Trans. Ind. Electron. 2024, 71, 16108–16119. [Google Scholar] [CrossRef]
  23. Kong, D.; Yang, G.; Pang, G.; Ye, Z.; Lv, H.; Yu, Z.; Wang, F.; Wang, X.V.; Xu, K.; Yang, H. Bioinspired Co-Design of Tactile Sensor and Deep Learning Algorithm for Human–Robot Interaction. Adv. Intell. Syst. 2022, 4, 2200050. [Google Scholar] [CrossRef]
  24. Jiang, P.; Wang, Z.; Li, X.; Wang, X.V.; Yang, B.; Zheng, J. Energy consumption prediction and optimization of industrial robots based on LSTM. J. Manuf. Syst. 2023, 70, 137–148. [Google Scholar] [CrossRef]
  25. Sun, Z.; Zhu, M.; Shan, X.; Lee, C. Augmented tactile-perception and haptic-feedback rings as human-machine interfaces aiming for immersive interactions. Nat. Commun. 2022, 13, 5224. [Google Scholar] [CrossRef]
  26. Yu, Y.; Li, J.; Solomon, S.A.; Min, J.; Tu, J.; Guo, W.; Xu, C.; Song, Y.; Gao, W. All-printed soft human-machine interface for robotic physicochemical sensing. Sci. Robot. 2022, 7, eabn0495. [Google Scholar] [CrossRef]
  27. Che, Y.; Okamura, A.M.; Sadigh, D. Efficient and Trustworthy Social Navigation via Explicit and Implicit Robot-Human Communication. IEEE Trans. Robot. 2020, 36, 692–707. [Google Scholar] [CrossRef]
  28. Saeidi, H.; Opfermann, J.D.; Kam, M.; Wei, S.; Leonard, S.; Hsieh, M.H.; Kang, J.U.; Krieger, A. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci. Robot. 2022, 7, eabj2908. [Google Scholar] [CrossRef]
  29. Ji, Y.; Ni, L.; Zhao, C.; Lei, C.; Du, Y.; Wang, W. TriPField: A 3D Potential Field Model and Its Applications to Local Path Planning of Autonomous Vehicles. IEEE Trans. Intell. Transp. Syst. 2023, 24, 1–14. [Google Scholar] [CrossRef]
  30. Li, H.; Liu, W.; Yang, C.; Wang, W.; Qie, T.; Xiang, C. An Optimization-Based Path Planning Approach for Autonomous Vehicles Using the DynEFWA-Artificial Potential Field. IEEE Trans. Intell. Veh. 2022, 7, 263–272. [Google Scholar] [CrossRef]
  31. Sundaram, S.; Kellnhofer, P.; Li, Y.; Zhu, J.-Y.; Torralba, A.; Matusik, W. Learning the signatures of the human grasp using a scalable tactile glove. Nature 2019, 569, 698–702. [Google Scholar] [CrossRef]
  32. Shu, S.; Wang, Z.; Chen, P.; Zhong, J.; Tang, W.; Wang, Z.L. Machine-Learning Assisted Electronic Skins Capable of Proprioception and Exteroception in Soft Robotics. Adv. Mater. 2023, 35, e2211385. [Google Scholar] [CrossRef] [PubMed]
  33. Qiu, Y.; Sun, S.; Wang, X.; Shi, K.; Wang, Z.; Ma, X.; Zhang, W.; Bao, G.; Tian, Y.; Zhang, Z.; et al. Nondestructive identification of softness via bioinspired multisensory electronic skins integrated on a robotic hand. Npj Flex. Electron. 2022, 6, 1–10. [Google Scholar] [CrossRef]
  34. Gan, Y.; Zhang, B.; Shao, J.; Han, Z.; Li, A.; Dai, X. Embodied Intelligence: Bionic Robot Controller Integrating Environment Perception, Autonomous Planning, and Motion Control. IEEE Robot. Autom. Lett. 2024, 9, 4559–4566. [Google Scholar] [CrossRef]
  35. Kaveh, A.; Abedi, M. Analysis and optimal design of scissor-link foldable structures. Eng. Comput. 2019, 35, 593–604. [Google Scholar] [CrossRef]
  36. Stroppa, F. Design optimizer for planar soft-growing robot manipulators. Eng. Appl. Artif. Intell. 2024, 130, 107693. [Google Scholar] [CrossRef]
  37. Zhang, L.; Xing, S.; Yin, H.; Weisbecker, H.; Tran, H.T.; Guo, Z.; Han, T.; Wang, Y.; Liu, Y.; Wu, Y.; et al. Skin-inspired, sensory robots for electronic implants. Nat. Commun. 2024, 15, 4777. [Google Scholar] [CrossRef]
  38. Qiu, C.; Wu, Z.; Wang, J.; Tan, M.; Yu, J. Locomotion Optimization of a Tendon-Driven Robotic Fish with Variable Passive Tail Fin. IEEE Trans. Ind. Electron. 2023, 70, 4983–4992. [Google Scholar] [CrossRef]
  39. Liu, J.; Low, J.H.; Han, Q.Q.; Lim, M.; Lu, D.; Yeow, C.-H.; Liu, Z. Simulation Data Driven Design Optimization for Reconfigurable Soft Gripper System. IEEE Robot. Autom. Lett. 2022, 7, 5803–5810. [Google Scholar] [CrossRef]
  40. Kouritem, S.A.; Abouheaf, M.I.; Nahas, N.; Hassan, M. A multi-objective optimization design of industrial robot arms. Alex. Eng. J. 2022, 61, 12847–12867. [Google Scholar] [CrossRef]
  41. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A.; et al. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef]
  42. Liu, M.; Zhang, Y.; Wang, J.; Qin, N.; Yang, H.; Sun, K.; Hao, J.; Shu, L.; Liu, J.; Chen, Q.; et al. A star-nose-like tactile-olfactory bionic sensing array for robust object recognition in non-visual environments. Nat. Commun. 2022, 13, 79. [Google Scholar] [CrossRef]
  43. Park, S.; Kim, J.; Park, J.; Burgner-Kahrs, J.; Noh, G. Design of patterns in tubular robots using DNN-metaheuristics optimization. Int. J. Mech. Sci. 2023, 251, 108352. [Google Scholar] [CrossRef]
  44. Zi, P.; Xu, K.; Tian, Y.; Ding, X. A mechanical adhesive gripper inspired by beetle claw for a rock climbing robot. Mech. Mach. Theory 2023, 181, 105168. [Google Scholar] [CrossRef]
  45. Qiu, Y.; Zou, Z.; Zou, Z.; Setiawan, N.K.; Dikshit, K.V.; Whiting, G.; Yang, F.; Zhang, W.; Lu, J.; Zhong, B.; et al. Deep-learning-assisted printed liquid metal sensory system for wearable applications and boxing training. Npj Flex. Electron. 2023, 7, 37. [Google Scholar] [CrossRef]
  46. Rus, D.; Tolley, M.T. Design, fabrication and control of origami robots. Nat. Reviews. Mater. 2018, 3, 101–112. [Google Scholar] [CrossRef]
  47. Hu, J.; Whitman, J.; Travers, M.; Choset, H. Modular Robot Design Optimization with Generative Adversarial Networks. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4282–4288. [Google Scholar] [CrossRef]
  48. Huang, J.; Zhou, J.; Wang, Z.; Law, J.; Cao, H.; Li, Y.; Wang, H.; Liu, Y. Modular origami soft robot with the perception of interaction force and body configuration. Adv. Intell. Syst. 2022, 4, 2200081. [Google Scholar] [CrossRef]
  49. Zhang, H. Mechanics Analysis of Functional Origamis Applicable in Biomedical Robots. IEEE/ASME Trans. Mechatron. 2022, 27, 2589–2599. [Google Scholar] [CrossRef]
  50. Gao, Y.; Li, X.; Wang, X.V.; Wang, L.; Gao, L. A Review on Recent Advances in Vision-based Defect Recognition towards Industrial Intelligence. J. Manuf. Syst. 2022, 62, 753–766. [Google Scholar] [CrossRef]
  51. Shi, Y.; Li, L.; Yang, J.; Wang, Y.; Hao, S. Center-based Transfer Feature Learning with Classifier Adaptation for surface defect recognition. Mech. Syst. Signal Process. 2023, 188, 110001. [Google Scholar] [CrossRef]
  52. Ma, J.; He, Y.; Li, F.; Han, L.; You, C.; Wang, B. Segment anything in medical images. Nat. Commun. 2024, 15, 654. [Google Scholar] [CrossRef]
  53. Soomro, T.A.; Zheng, L.; Afifi, A.J.; Ali, A.; Soomro, S.; Yin, M.; Gao, J. Image Segmentation for MR Brain Tumor Detection Using Machine Learning: A Review. IEEE Rev. Biomed. Eng. 2023, 16, 70–90. [Google Scholar] [CrossRef]
  54. Dong, Y.; Yao, Y.-D. Secure mmWave-Radar-Based Speaker Verification for IoT Smart Home. IEEE Internet Things J. 2021, 8, 3500–3511. [Google Scholar] [CrossRef]
  55. Shang, J.; Wu, J. Voice Liveness Detection for Voice Assistants Through Ear Canal Pressure Monitoring. IEEE Trans. Netw. Sci. Eng. 2022, 9, 1225–1234. [Google Scholar] [CrossRef]
  56. Zhang, H.; Ruan, H.; Zhao, H.; Wang, Z.; Hu, S.; Cui, T.J.; del Hougne, P.; Li, L. Microwave Speech Recognizer Empowered by a Programmable Metasurface. Adv. Sci. 2024, 11, e2309826. [Google Scholar] [CrossRef]
  57. Khan, M.; Gueaieb, W.; El Saddik, A.; Kwon, S. MSER: Multimodal speech emotion recognition using cross-attention with deep fusion. Expert Syst. Appl. 2024, 245, 122946. [Google Scholar] [CrossRef]
  58. Li, Q.; Du, Z.; Liu, F.; Yu, H. Tactile Perception for Surgical Status Recognition in Robot-Assisted Laminectomy. IEEE Trans. Ind. Electron. 2022, 69, 11425–11435. [Google Scholar] [CrossRef]
  59. Wei, D.; Guo, J.; Qiu, Y.; Liu, S.; Mao, J.; Liu, Y.; Chen, Z.; Wu, H.; Yin, Z. Monitoring the delicate operations of surgical robots via ultra-sensitive ionic electronic skin. Natl. Sci. Rev. 2022, 9, nwac227. [Google Scholar] [CrossRef] [PubMed]
  60. Luo, Y.; Liu, C.; Lee, Y.J.; DelPreto, J.; Wu, K.; Foshey, M.; Rus, D.; Palacios, T.; Li, Y.; Torralba, A.; et al. Adaptive tactile interaction transfer via digitally embroidered smart gloves. Nat. Commun. 2024, 15, 868. [Google Scholar] [CrossRef]
  61. Zhang, T.; Sun, H.; Zou, Y.; Chu, H. An electromyography signals-based human-robot collaboration method for human skill learning and imitation. J. Manuf. Syst. 2022, 64, 330–343. [Google Scholar] [CrossRef]
  62. Gao, A.; Pang, Y.; Nie, J.; Shao, Z.; Cao, J.; Guo, Y.; Li, X. ESGN: Efficient Stereo Geometry Network for Fast 3D Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 2000–2009. [Google Scholar] [CrossRef]
  63. Fu, Q.; Hu, C.; Peng, J.; Rind, F.C.; Yue, S. A Robust Collision Perception Visual Neural Network with Specific Selectivity to Darker Objects. IEEE Trans. Cybern. 2020, 50, 5074–5088. [Google Scholar] [CrossRef]
  64. Zhang, Z.; Wang, S.; Liu, C.; Xie, R.; Hu, W.; Zhou, P. All-in-one two-dimensional retinomorphic hardware device for motion detection and recognition. Nat. Nanotechnol. 2022, 17, 27–32. [Google Scholar] [CrossRef]
  65. Jiang, J.; Xiao, W.; Li, X.; Zhao, Y.; Qin, Z.; Xie, Z.; Shen, G.; Zhou, J.; Fu, Y.; Wang, Y.; et al. Hardware-Level Image Recognition System Based on ZnO Photo-Synapse Array with the Self-Denoising Function. Adv. Funct. Mater. 2024, 34, 2313507. [Google Scholar] [CrossRef]
  66. Lee, S.; Kim, J.; Roh, H.; Kim, W.; Chung, S.; Moon, W.; Cho, K. A High-Fidelity Skin-Attachable Acoustic Sensor for Realizing Auditory Electronic Skin. Adv. Mater. 2022, 34, e2109545. [Google Scholar] [CrossRef] [PubMed]
  67. Xia, Y.; Sun, C.; Liu, W.; Wang, X.; Wen, K.; Feng, Z.; Zhang, G.; Fan, E.; He, Q.; Lin, Z.; et al. Ultra-Broadband Flexible Thin-Film Sensor for Sound Monitoring and Ultrasonic Diagnosis. Small 2024, 20, 2305678. [Google Scholar] [CrossRef] [PubMed]
  68. Jeon, E.; Lee, U.; Yoon, S.; Hur, S.; Choi, H.; Han, C. Frequency-Selective, Multi-Channel, Self-Powered Artificial Basilar Membrane Sensor with a Spiral Shape and 24 Critical Bands Inspired by the Human Cochlea. Adv. Sci. 2024, 11, e2400955. [Google Scholar] [CrossRef] [PubMed]
  69. Chen, J.; Liu, A.; Shi, Y.; Luo, Y.; Li, J.; Ye, M.; Guo, W. Skin-Inspired Bimodal Receptors for Object Recognition and Temperature Sensing Simulation. Adv. Funct. Mater. 2024, 34, 2403528. [Google Scholar] [CrossRef]
  70. Qu, X.; Liu, Z.; Tan, P.; Wang, C.; Liu, Y.; Feng, H.; Luo, D.; Li, Z.; Wang, Z.L. Artificial tactile perception smart finger for material identification based on triboelectric sensing. Sci. Adv. 2022, 8, eabq2521. [Google Scholar] [CrossRef]
  71. Wang, Y.; Tang, T.; Xu, Y.; Bai, Y.; Yin, L.; Li, G.; Zhang, H.; Liu, H.; Huang, Y. All-weather, natural silent speech recognition via machine-learning-assisted tattoo-like electronics. Npj Flex. Electron. 2021, 5, 1–9. [Google Scholar] [CrossRef]
  72. Wei, X.; Li, H.; Yue, W.; Gao, S.; Chen, Z.; Li, Y.; Shen, G. A high-accuracy, real-time, intelligent material perception system with a machine-learning-motivated pressure-sensitive electronic skin. Matter 2022, 5, 1481–1501. [Google Scholar] [CrossRef]
  73. Ji, Y.; Yin, S.; Liu, Y.; Bowen, C.R.; Yang, Y. Dual-mode temperature sensor based on ferroelectric Bi0.5Na0.5TiO3 materials for robotic tactile perception. Nano Energy 2024, 128, 109982. [Google Scholar] [CrossRef]
  74. Li, G.; Liu, S.; Wang, L.; Zhu, R. Skin-inspired quadruple tactile sensors integrated on a robot hand enable object recognition. Sci. Robot. 2020, 5, eabc8134. [Google Scholar] [CrossRef]
  75. Xiong, Y.; Huo, Z.; Zhang, J.; Liu, Y.; Yue, D.; Xu, N.; Gu, R.; Wei, L.; Luo, L.; Chen, M.; et al. Triboelectric in-sensor deep learning for self-powered gesture recognition toward multifunctional rescue tasks. Nano Energy 2024, 124, 109465. [Google Scholar] [CrossRef]
  76. Zhou, H.; Huang, W.; Xiao, Z.; Zhang, S.; Li, W.; Hu, J.; Feng, T.; Wu, J.; Zhu, P.; Mao, Y. Deep-Learning-Assisted Noncontact Gesture-Recognition System for Touchless Human-Machine Interfaces. Adv. Funct. Mater. 2022, 32, 2208271. [Google Scholar] [CrossRef]
  77. Lee, J.; Kwon, K.; Soltis, I.; Matthews, J.; Lee, Y.J.; Kim, H.; Romero, L.; Zavanelli, N.; Kwon, Y.; Kwon, S.; et al. Intelligent upper-limb exoskeleton integrated with soft bioelectronics and deep learning for intention-driven augmentation. Npj Flex. Electron. 2024, 8, 11–13. [Google Scholar] [CrossRef]
  78. Li, S.; Yu, H.; Ding, W.; Liu, H.; Ye, L.; Xia, C.; Wang, X.; Zhang, X.-P. Visual-Tactile Fusion for Transparent Object Grasping in Complex Backgrounds. IEEE Trans. Robot. 2023, 39, 1–19. [Google Scholar] [CrossRef]
  79. Zhu, M.; Sun, Z.; Zhang, Z.; Shi, Q.; He, T.; Liu, H.; Chen, T.; Lee, C. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications. Sci. Adv. 2020, 6, eaaz8693. [Google Scholar] [CrossRef]
  80. Zhuang, M.; Yin, L.; Wang, Y.; Bai, Y.; Zhan, J.; Hou, C.; Yin, L.; Xu, Z.; Tan, X.; Huang, Y. Highly Robust and Wearable Facial Expression Recognition via Deep-Learning-Assisted, Soft Epidermal Electronics. Research 2021, 2021, 9759601. [Google Scholar] [CrossRef]
  81. Qiu, Y.; Wang, Z.; Zhu, P.; Su, B.; Wei, C.; Tian, Y.; Zhang, Z.; Chai, H.; Liu, A.; Liang, L.; et al. A multisensory-feedback tactile glove with dense coverage of sensing arrays for object recognition. Chem. Eng. J. 2023, 455, 140890. [Google Scholar] [CrossRef]
  82. Yang, W.; Xie, M.; Zhang, X.; Sun, X.; Zhou, C.; Chang, Y.; Zhang, H.; Duan, X. Multifunctional Soft Robotic Finger Based on a Nanoscale Flexible Temperature–Pressure Tactile Sensor for Material Recognition. ACS Appl. Mater. Interfaces 2021, 13, 55756–55765. [Google Scholar] [CrossRef]
  83. Han, S.; Zhi, X.; Xia, Y.; Guo, W.; Li, Q.; Chen, D.; Liu, K.; Wang, X. All Resistive Pressure-Temperature Bimodal Sensing E-Skin for Object Classification. Small 2023, 19, e2301593. [Google Scholar] [CrossRef]
  84. Zhao, H.; Zhang, Y.; Han, L.; Qian, W.; Wang, J.; Wu, H.; Li, J.; Dai, Y.; Zhang, Z.; Bowen, C.R.; et al. Intelligent Recognition Using Ultralight Multifunctional Nano-Layered Carbon Aerogel Sensors with Human-Like Tactile Perception. Nano-Micro Lett. 2024, 16, 11. [Google Scholar] [CrossRef]
  85. Gao, Y.; Gao, L.; Li, X. A Generative Adversarial Network Based Deep Learning Method for Low-Quality Defect Image Reconstruction and Recognition. IEEE Trans. Ind. Inform. 2021, 17, 3231–3240. [Google Scholar] [CrossRef]
  86. Du, Z.; Gao, L.; Li, X. A New Contrastive GAN with Data Augmentation for Surface Defect Recognition Under Limited Data. IEEE Trans. Instrum. Meas. 2023, 72, 1–13. [Google Scholar] [CrossRef]
  87. Wen, L.; Zhang, Y.; Gao, L.; Li, X.; Li, M. A New Multiscale Multiattention Convolutional Neural Network for Fine-Grained Surface Defect Detection. IEEE Trans. Instrum. Meas. 2023, 72, 1–11. [Google Scholar] [CrossRef]
  88. Li, Z.; Gao, L.; Li, X.; Gao, Y. ACAT-transformer: Adaptive classifier with attention-wise transformation for few-sample surface defect recognition. Adv. Eng. Inform. 2024, 61, 102527. [Google Scholar] [CrossRef]
  89. Tian, R.; Zhang, Y.; Feng, Y.; Yang, L.; Cao, Z.; Coleman, S.; Kerr, D. Accurate and Robust Object SLAM With 3D Quadric Landmark Reconstruction in Outdoors. IEEE Robot. Autom. Lett. 2022, 7, 1534–1541. [Google Scholar] [CrossRef]
  90. Chen, Y.; Huang, S.; Fitch, R. Active SLAM for Mobile Robots with Area Coverage and Obstacle Avoidance. IEEE/ASME Trans. Mechatron. 2020, 25, 1182–1192. [Google Scholar] [CrossRef]
  91. Wang, Z.; Chen, H.; Zhang, S.; Lou, Y. Active View Planning for Visual SLAM in Outdoor Environments Based on Continuous Information Modeling. IEEE/ASME Trans. Mechatron. 2024, 29, 1–12. [Google Scholar] [CrossRef]
  92. Hassaballah, M.; Kenk, M.A.; Muhammad, K.; Minaee, S. Vehicle Detection and Tracking in Adverse Weather Using a Deep Learning Framework. IEEE Trans. Intell. Transp. Syst. 2021, 22, 4230–4242. [Google Scholar] [CrossRef]
  93. Hussain, A.; Khan, S.U.; Khan, N.; Shabaz, M.; Baik, S.W. AI-driven behavior biometrics framework for robust human activity recognition in surveillance systems. Eng. Appl. Artif. Intell. 2024, 127, 107218. [Google Scholar] [CrossRef]
  94. Tschandl, P.; Rinner, C.; Apalla, Z.; Argenziano, G.; Codella, N.; Halpern, A.; Janda, M.; Lallas, A.; Longo, C.; Malvehy, J.; et al. Human–computer collaboration for skin cancer recognition. Nat. Med. 2020, 26, 1229–1234. [Google Scholar] [CrossRef]
  95. Du, H.; Shi, H.; Zeng, D.; Zhang, X.-P.; Mei, T. The Elements of End-to-end Deep Face Recognition: A Survey of Recent Advances. ACM Comput. Surv. 2022, 54, 1–42. [Google Scholar] [CrossRef]
  96. Chen, W.; Li, G.; Li, M.; Wang, W.; Li, P.; Xue, X.; Zhao, X.; Liu, L. LLM-Enabled Incremental Learning Framework for Hand Exoskeleton Control. IEEE Trans. Autom. Sci. Eng. 2024, 22, 2617–2626. [Google Scholar] [CrossRef]
  97. Abdalla, M.; Hassan, H.; Mostafa, N.; Abdelghafar, S.; Al-Kabbany, A.; Hadhoud, M. An NLP-based system for modulating virtual experiences using speech instructions. Expert Syst. Appl. 2024, 249, 123484. [Google Scholar] [CrossRef]
  98. Scotti, V.; Sbattella, L.; Tedesco, R. A Primer on Seq2Seq Models for Generative Chatbots. ACM Comput. Surv. 2024, 56, 1–58. [Google Scholar] [CrossRef]
  99. Mac, T.T.; Copot, C.; Tran, D.T.; De Keyser, R. Heuristic approaches in robot path planning: A survey. Robot. Auton. Syst. 2016, 86, 13–28. [Google Scholar] [CrossRef]
  100. Pei, M.; An, H.; Liu, B.; Wang, C. An Improved Dyna-Q Algorithm for Mobile Robot Path Planning in Unknown Dynamic Environment. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 1–11. [Google Scholar] [CrossRef]
  101. Zhang, S.; Li, Y.; Dong, Q. Autonomous navigation of UAV in multi-obstacle environments based on a Deep Reinforcement Learning approach. Appl. Soft Comput. 2022, 115, 108194. [Google Scholar] [CrossRef]
  102. Yang, C.; Pu, C.; Xin, G.; Zhang, J.; Li, Z. Learning Complex Motor Skills for Legged Robot Fall Recovery. IEEE Robot. Autom. Lett. 2023, 8, 4307–4314. [Google Scholar] [CrossRef]
  103. Maroger, I.; Stasse, O.; Watier, B. Walking Human Trajectory Models and Their Application to Humanoid Robot Locomotion. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 3465–3472. [Google Scholar] [CrossRef]
  104. Wang, R.; Lu, Z.; Xiao, Y.; Zhao, Y.; Jiang, Q.; Shi, X. Design and Control of a Multi-Locomotion Parallel-Legged Bipedal Robot. IEEE Robot. Autom. Lett. 2024, 9, 1993–2000. [Google Scholar] [CrossRef]
  105. Yuan, F.; Boltz, M.; Bilal, D.; Jao, Y.-L.; Crane, M.; Duzan, J.; Bahour, A.; Zhao, X. Cognitive Exercise for Persons with Alzheimer’s Disease and Related Dementia Using a Social Robot. IEEE Trans. Robot. 2023, 39, 1–15. [Google Scholar] [CrossRef]
  106. Agravante, D.J.; Cherubini, A.; Sherikov, A.; Wieber, P.-B.; Kheddar, A. Human-Humanoid Collaborative Carrying. IEEE Trans. Robot. 2019, 35, 833–846. [Google Scholar] [CrossRef]
  107. Yu, X.; Li, B.; He, W.; Feng, Y.; Cheng, L.; Silvestre, C. Adaptive-Constrained Impedance Control for Human-Robot Co-Transportation. IEEE Trans. Cybern. 2022, 52, 13237–13249. [Google Scholar] [CrossRef]
  108. Tong, Y.; Liu, H.; Zhang, Z. Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects. IEEE/CAA J. Autom. Sin. 2024, 11, 301–328. [Google Scholar] [CrossRef]
  109. Gama, F.; Shcherban, M.; Rolf, M.; Hoffmann, M. Goal-Directed Tactile Exploration for Body Model Learning Through Self-Touch on a Humanoid Robot. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 419–433. [Google Scholar] [CrossRef]
  110. Qi, H.; Chen, X.; Yu, Z.; Huang, G.; Liu, Y.; Meng, L.; Huang, Q. Vertical Jump of a Humanoid Robot with CoP-Guided Angular Momentum Control and Impact Absorption. IEEE Trans. Robot. 2023, 39, 1–13. [Google Scholar] [CrossRef]
  111. Qin, Y.; Escande, A.; Kanehiro, F.; Yoshida, E. Dual-Arm Mobile Manipulation Planning of a Long Deformable Object in Industrial Installation. IEEE Robot. Autom. Lett. 2023, 8, 3039–3046. [Google Scholar] [CrossRef]
  112. Murooka, M.; Chappellet, K.; Tanguy, A.; Benallegue, M.; Kumagai, I.; Morisawa, M.; Kanehiro, F.; Kheddar, A. Humanoid Loco-Manipulations Pattern Generation and Stabilization Control. IEEE Robot. Autom. Lett. 2021, 6, 5597–5604. [Google Scholar] [CrossRef]
  113. Sun, Z.; Yang, H.; Ma, Y.; Wang, X.; Mo, Y.; Li, H.; Jiang, Z. BIT-DMR: A Humanoid Dual-Arm Mobile Robot for Complex Rescue Operations. IEEE Robot. Autom. Lett. 2022, 7, 802–809. [Google Scholar] [CrossRef]
  114. Li, J.; Yang, S.X. A Novel Feature Learning-Based Bio-Inspired Neural Network for Real-Time Collision-Free Rescue of Multirobot Systems. IEEE Trans. Ind. Electron. 2024, 71, 14420–14429. [Google Scholar] [CrossRef]
  115. Vo, D.T.; Le, A.V.; Ta, T.D.; Tran, M.; Van Duc, P.; Vu, M.B.; Nhan, N.H.K. Toward complete coverage planning using deep reinforcement learning by trapezoid-based transformable robot. Eng. Appl. Artif. Intell. 2023, 122, 105999. [Google Scholar] [CrossRef]
  116. Bai, Z.; Pang, H.; He, Z.; Zhao, B.; Wang, T. Path Planning of Autonomous Mobile Robot in Comprehensive Unknown Environment Using Deep Reinforcement Learning. IEEE Internet Things J. 2024, 11, 22153–22166. [Google Scholar] [CrossRef]
  117. Wu, J.; Song, C.; Ma, J.; Wu, J.; Han, G. Reinforcement Learning and Particle Swarm Optimization Supporting Real-Time Rescue Assignments for Multiple Autonomous Underwater Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6807–6820. [Google Scholar] [CrossRef]
  118. Ambuj; Nagar, H.; Paul, A.; Machavaram, R.; Soni, P. Reinforcement learning particle swarm optimization based trajectory planning of autonomous ground vehicle using 2D LiDAR point cloud. Robot. Auton. Syst. 2024, 178, 104723. [Google Scholar] [CrossRef]
  119. Muni, M.K.; Parhi, D.R.; Kumar, P.B.; Sahu, C.; Kumar, S. Towards motion planning of humanoids using a fuzzy embedded neural network approach. Appl. Soft Comput. 2022, 119, 108588. [Google Scholar] [CrossRef]
  120. Fahmi, S.; Focchi, M.; Radulescu, A.; Fink, G.; Barasuol, V.; Semini, C. STANCE: Locomotion Adaptation Over Soft Terrain. IEEE Trans. Robot. 2020, 36, 443–457. [Google Scholar] [CrossRef]
  121. Lim, D.; Kim, D.; Park, J. Momentum Observer-Based Collision Detection Using LSTM for Model Uncertainty Learning. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 4516–4522. [Google Scholar] [CrossRef]
  122. Xu, F.; Kang, X.; Wang, H. Hybrid Visual Servoing Control of a Soft Robot with Compliant Obstacle Avoidance. IEEE/ASME Trans. Mechatron. 2024, 29, 4446–4455. [Google Scholar] [CrossRef]
  123. Wu, X.; Li, J.; Liu, L.; Tao, D. The Visual Footsteps Planning System for Exoskeleton Robots Under Complex Terrain. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 5149–5160. [Google Scholar] [CrossRef]
  124. Tulbure, A.; Khatib, O. Closing the Loop: Real-Time Perception and Control for Robust Collision Avoidance with Occluded Obstacles. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5700–5707. [Google Scholar] [CrossRef]
  125. Wu, J.; Sun, Y.; Li, D.; Shi, J.; Li, X.; Gao, L.; Yu, L.; Han, G.; Wu, J. An Adaptive Conversion Speed Q-Learning Algorithm for Search and Rescue UAV Path Planning in Unknown Environments. IEEE Trans. Veh. Technol. 2023, 72, 15391–15404. [Google Scholar] [CrossRef]
  126. Wen, S.; Wen, Z.; Zhang, D.; Zhang, H.; Wang, T. A multi-robot path-planning algorithm for autonomous navigation using meta-reinforcement learning based on transfer learning. Appl. Soft Comput. 2021, 110, 107605. [Google Scholar] [CrossRef]
  127. Li, X.; Liang, X.; Wang, X.; Wang, R.; Shu, L.; Xu, W. Deep reinforcement learning for optimal rescue path planning in uncertain and complex urban pluvial flood scenarios. Appl. Soft Comput. 2023, 144, 110543. [Google Scholar] [CrossRef]
  128. Puente-Castro, A.; Rivero, D.; Pedrosa, E.; Pereira, A.; Lau, N.; Fernandez-Blanco, E. Q-Learning based system for Path Planning with Unmanned Aerial Vehicles swarms in obstacle environments. Expert Syst. Appl. 2024, 235, 121240. [Google Scholar] [CrossRef]
  129. Wang, C.; Wang, J.; Shen, Y.; Zhang, X. Autonomous Navigation of UAVs in Large-Scale Complex Environments: A Deep Reinforcement Learning Approach. IEEE Trans. Veh. Technol. 2019, 68, 2124–2136. [Google Scholar] [CrossRef]
  130. Yang, J.; Ni, J.; Xi, M.; Wen, J.; Li, Y. Intelligent Path Planning of Underwater Robot Based on Reinforcement Learning. IEEE Trans. Autom. Sci. Eng. 2023, 20, 1983–1996. [Google Scholar] [CrossRef]
  131. Wang, Z.; Xiao, X.; Warnell, G.; Stone, P. APPLE: Adaptive Planner Parameter Learning from Evaluative Feedback. IEEE Robot. Autom. Lett. 2021, 6, 7744–7749. [Google Scholar] [CrossRef]
  132. Zhou, X.; Xu, C.; Gao, F. Automatic Parameter Adaptation for Quadrotor Trajectory Planning. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 3348–3355. [Google Scholar] [CrossRef]
  133. Shan, Y.; Zheng, B.; Chen, L.; Chen, L.; Chen, D. A Reinforcement Learning-Based Adaptive Path Tracking Approach for Autonomous Driving. IEEE Trans. Veh. Technol. 2020, 69, 10581–10595. [Google Scholar] [CrossRef]
  134. Liu, M.; Chour, K.; Rathinam, S.; Darbha, S. Lateral Control of an Autonomous and Connected Following Vehicle with Limited Preview Information. IEEE Trans. Intell. Veh. 2021, 6, 406–418. [Google Scholar] [CrossRef]
  135. Goldsztejn, E.; Feiner, T.; Brafman, R. PTDRL: Parameter Tuning Using Deep Reinforcement Learning. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 11356–11362. [Google Scholar] [CrossRef]
  136. Hou, Y.; Li, J.; Chen, I.-M. Self-Supervised Antipodal Grasp Learning with Fine-Grained Grasp Quality Feedback in Clutter. IEEE Trans. Ind. Electron. 2024, 71, 1–8. [Google Scholar] [CrossRef]
  137. Fu, K.; Dang, X. Light-Weight Convolutional Neural Networks for Generative Robotic Grasping. IEEE Trans. Ind. Inform. 2024, 20, 6696–6707. [Google Scholar] [CrossRef]
  138. Zhang, H.; Wu, Y.; Demeester, E.; Kellens, K. BIG-Net: Deep Learning for Grasping with a Bio-Inspired Soft Gripper. IEEE Robot. Autom. Lett. 2023, 8, 584–591. [Google Scholar] [CrossRef]
  139. Morrison, D.; Corke, P.; Leitner, J. Learning robust, real-time, reactive robotic grasping. Int. J. Robot. Res. 2020, 39, 183–201. [Google Scholar] [CrossRef]
  140. Wei, D.; Cao, J.; Gu, Y. Robot Grasp in Cluttered Scene Using a Multi-Stage Deep Learning Model. IEEE Robot. Autom. Lett. 2024, 9, 6512–6519. [Google Scholar] [CrossRef]
  141. Chen, X.; Sun, Y.; Zhang, Q.; Liu, F. Two-stage grasp strategy combining CNN-based classification and adaptive detection on a flexible hand. Appl. Soft Comput. 2020, 97, 106729. [Google Scholar] [CrossRef]
  142. Choi, S.; Ji, G.; Park, J.; Kim, H.; Mun, J.; Lee, J.H.; Hwangbo, J. Learning quadrupedal locomotion on deformable terrain. Sci. Robot. 2023, 8, eade2256. [Google Scholar] [CrossRef] [PubMed]
  143. Zhao, H.; Tang, L.; Li, J.R.; Liu, J. Strengthening evolution-based differential evolution with prediction strategy for multimodal optimization and its application in multi-robot task allocation. Appl. Soft Comput. 2023, 139, 110218. [Google Scholar] [CrossRef]
  144. Wensing, P.M.; Posa, M.; Hu, Y.; Escande, A.; Mansard, N.; Del Prete, A. Optimization-Based Control for Dynamic Legged Robots. IEEE Trans. Robot. 2024, 40, 43–63. [Google Scholar] [CrossRef]
  145. Sombolestan, M.; Nguyen, Q. Hierarchical Adaptive Loco-manipulation Control for Quadruped Robots. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 12156–12162. [Google Scholar] [CrossRef]
  146. Zeng, C.; Li, S.; Jiang, Y.; Li, Q.; Chen, Z.; Yang, C.; Zhang, J. Learning compliant grasping and manipulation by teleoperation with adaptive force control. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 717–724. [Google Scholar] [CrossRef]
  147. Qiu, C.; Wu, Z.; Wang, J.; Tan, M.; Yu, J. Multiagent-Reinforcement-Learning-Based Stable Path Tracking Control for a Bionic Robotic Fish with Reaction Wheel. IEEE Trans. Ind. Electron. 2023, 70, 12670–12679. [Google Scholar] [CrossRef]
  148. Wu, Q.; Wang, Z.; Chen, Y.; Wu, H. Barrier Lyapunov Function-Based Fuzzy Adaptive Admittance Control of an Upper Limb Exoskeleton Using RBFNN Compensation. IEEE/ASME Trans. Mechatron. 2024, 30, 3–14. [Google Scholar] [CrossRef]
  149. Sombolestan, M.; Nguyen, Q. Adaptive-Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain. IEEE Trans. Robot. 2024, 40, 2462–2477. [Google Scholar] [CrossRef]
  150. Chen, B.; Zhang, H.; Zhang, F.; Jiang, Y.; Miao, Z.; Yu, H.; Wang, Y. DIBNN: A Dual-Improved-BNN Based Algorithm for Multi-Robot Cooperative Area Search in Complex Obstacle Environments. IEEE Trans. Autom. Sci. Eng. 2024, 22, 2361–2374. [Google Scholar] [CrossRef]
  151. Wen, C.; Ma, H. An efficient two-stage evolutionary algorithm for multi-robot task allocation in nuclear accident rescue scenario. Appl. Soft Comput. 2024, 152, 111223. [Google Scholar] [CrossRef]
  152. Sombolestan, M.; Nguyen, Q. Hierarchical Adaptive Control for Collaborative Manipulation of a Rigid Object by Quadrupedal Robots. In Proceedings of the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 2752–2759. [Google Scholar] [CrossRef]
  153. Gao, G.; Mei, Y.; Xin, B.; Jia, Y.-H.; Browne, W.N. Automated Coordination Strategy Design Using Genetic Programming for Dynamic Multipoint Dynamic Aggregation. IEEE Trans. Cybern. 2022, 52, 13521–13535. [Google Scholar] [CrossRef]
  154. Li, J.; Gu, X.; Qiu, S.; Zhou, X.; Cangelosi, A.; Loo, C.K.; Liu, X. A Survey of Wearable Lower Extremity Neurorehabilitation Exoskeleton: Sensing, Gait Dynamics, and Human-Robot Collaboration. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 3675–3693. [Google Scholar] [CrossRef]
  155. Madan, R.; Jenamani, R.K.; Nguyen, V.T.; Moustafa, A.; Hu, X.; Dimitropoulou, K.; Bhattacharjee, T. SPARCS: Structuring Physically Assistive Robotics for Caregiving with Stakeholders-in-the-loop. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 641–648. [Google Scholar] [CrossRef]
  156. Yang, G.; Wang, S.; Yang, J.; Shi, P. Desire-Driven Reasoning Considering Personalized Care Preferences. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 5758–5769. [Google Scholar] [CrossRef]
  157. Do, H.M.; Sheng, W.; Harrington, E.E.; Bishop, A.J. Clinical Screening Interview Using a Social Robot for Geriatric Care. IEEE Trans. Autom. Sci. Eng. 2021, 18, 1229–1242. [Google Scholar] [CrossRef]
  158. Lima, M.R.; Wairagkar, M.; Gupta, M.; Baena, F.R.Y.; Barnaghi, P.; Sharp, D.J.; Vaidyanathan, R. Conversational Affective Social Robots for Ageing and Dementia Support. IEEE Trans. Cogn. Dev. Syst. 2022, 14, 1378–1397. [Google Scholar] [CrossRef]
  159. Roveda, L.; Testa, A.; Shahid, A.A.; Braghin, F.; Piga, D. Q-Learning-based model predictive variable impedance control for physical human-robot collaboration. Artif. Intell. 2022, 312, 103771. [Google Scholar] [CrossRef]
  160. Ko, D.; Lee, D.; Lee, W.; Kim, K.; Chung, W.K. Compensation Modulation for Tracking Accuracy in Free Motion and Compliance During Interaction. IEEE/ASME Trans. Mechatron. 2024, 30, 180–190. [Google Scholar] [CrossRef]
  161. Gill, S.S.; Xu, M.; Ottaviani, C.; Patros, P.; Bahsoon, R.; Shaghaghi, A.; Golec, M.; Stankovski, V.; Wu, H.; Abraham, A.; et al. AI for next generation computing: Emerging trends and future directions. Internet Things 2022, 19, 100514. [Google Scholar] [CrossRef]
  162. Zabihi, Z.; Moghadam, A.M.E.; Rezvani, M.H. Reinforcement Learning Methods for Computation Offloading: A Systematic Review. ACM Comput. Surv. 2024, 56, 1–41. [Google Scholar] [CrossRef]
Figure 1. Process of automated intelligent design.
Figure 2. Process of automated intelligent design: (A) Biomimetic tactile olfactory sensing array [42]. (B) The array-architecture of tactile and olfactory sensing [42]. (C) BOT-M recognition rate [42]. (D) General pattern shape [43]. (E) FEA for data generation [43]. (F) Flowchart of DNN-metaheuristic algorithm [43]. (G) Sectional model of the mechanical interlock [44]. (H) The recommended layout of claws/spines [44]. (I) Outdoor grasping experiments [44].
Figure 3. Overview of sensor data processing: (A) Optical synapses sensor [65]. (B) Acoustic sensor [67]. (C) Skin sensor [69]. (D) Tactile sensor [70]. (E) Signal processing and pattern recognition [65]. (F) Spectrum analysis [67]. (G) Optimized prediction results [69]. (H) Machine learning for material identification [70]. (I) The process of image recognition [65]. (J) Laryngeal monitoring [67]. (K) Object recognition [69]. (L) Roughness recognition [70].
Figure 4. Structural design of multi-sensor fusion: (A) Multisensory tactile glove [81]. (B) Multifunctional soft robotic finger [82]. (C) E-skin [83]. (D) Multifunctional tactile system for intelligent identification [84].
Figure 5. Autonomous navigation and path planning scheme: (A) Mamdani fuzzy controller [119]. (B) Cascade neural network structure [119]. (C) Experimental result of multiple humanoids [119]. (D) Dynamic-PMPO-CMA algorithm [126]. (E) Dynamic PMPO-CMA obstacle avoidance algorithm with transfer learning [126]. (F) Simulation experiment results [126]. (G) Binary occupancy grid [118]. (H) Four distinct environment scenarios [118]. (I) Optimized travel path obtained from RLPSO [118].
Figure 6. Motion control and coordination system: (A) Flexible hand [141]. (B) Architectures of M-CNN [141]. (C) Grasp experiments [141]. (D) The terrain model [142]. (E) Adaptive control architecture [142]. (F) Test environments [142]. (G) Flowchart of the SEDE-PS [143]. (H) Flowchart of the SE strategy [143]. (I) Schemes of the multi-robot task assignment problem [143].
Table 2. Summary of the application of computer vision [85,86,87,88,89,90,91,92,93].

| Application Area | Algorithms | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Image recognition and classification | GAN + VGG16 | Improves image quality and recognition accuracy for low-quality images | Long training time; poor generalization ability |
| | Contrastive GAN | Generates high-quality, diverse defect images from limited data to improve surface defect recognition accuracy | Generated images are fed into the model without selection; GAN training is slow and real-time performance is poor |
| | MSMA-SDD | Accurately detects defects of various sizes and shapes | Considerable time spent tuning hyperparameters |
| | Adaptive Classifier with Attention-wise Transformation (ACAT) | Improves surface defect recognition accuracy with few samples and enhances generalization | Long training time; not suitable for small, fine defects |
| SLAM | Separating quadric parameters (SQP) + ODA | Improves robustness and accuracy of ellipsoid reconstruction; ensures highly accurate object pose estimation and ellipsoid landmark representation | Does not consider semantic relationships between object ellipsoids |
| | Model predictive control (MPC) + SQP | Improves runtime performance and generates a collision-free trajectory to complete the coverage task | Only suitable for the 2-D case |
| | Fisher + receding horizon optimization (RHO) + visual simultaneous localization and mapping (vSLAM) | Improves localization robustness and accuracy | Strong dependence on the sensor; data quality degrades under poor visual conditions |
| Object detection and tracking | Automatic white balance fused by Laplacian pyramid (AWBLP) + You Only Look Once version 3 (YOLOv3) | Improves image quality and detection accuracy | Cannot fully resolve image distortion under various adverse weather conditions |
| | Dynamic attention fusion unit (DAFU) + temporal–spatial fusion (TSF) | Accurately identifies human activities; improves reliability, accuracy, and generalization of activity recognition in complex environments | Further improvement and optimization still needed for certain complex scenarios and real-time operation |
Table 3. Comparison of artificial intelligence methods in the field of robotics.

| Method Category | Representative Algorithm | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Biologically inspired neural network | Bio-inspired NN [114] | Real-time adaptation to dynamic environments; fast path generation | Computationally expensive |
| Reinforcement learning | DRL [115], DDQN [116] | Autonomous strategy optimization; handles dynamic, complex environments | Difficult sim-to-real transfer; high consumption of computing resources |
| Particle swarm optimization | PSO [117], RLPSO [118] | Fast convergence | Prone to local optima |
| Fuzzy control | Mamdani fuzzy system [119], STANCE [120] | Interpretable rules; real-time adjustment of control strategy | Limited applicability to complex tasks |
| Memory network | LSTM [121] | Handles time-series data | High training data requirements; high computational complexity |