A Perspective on Prosthetic Hands Control: From the Brain to the Hand

The human hand is a complex and versatile organ that enables humans to interact with the environment, communicate, create, and use tools. The control of the hand by the brain is a crucial aspect of human cognition and behaviour, but also a challenging problem for both neuroscience and engineering. The aim of this study is to review the current state of the art in hand and grasp control from a neuroscientific perspective, focusing on the brain mechanisms that underlie sensory integration for hand control and the engineering implications for developing artificial hands that can mimic and interface with the human brain. The brain controls the hand by processing and integrating sensory information from vision, proprioception, and touch, using different neural pathways. The user's intention to control an artificial hand can be captured through different interfaces, such as electromyography, electroneurography, and electroencephalography. This and other sensory information can be exploited by different learning mechanisms, such as reinforcement learning, motor adaptation, and internal models, that help the user adapt to changes in sensory inputs or outputs. This work summarizes the main findings and challenges of each aspect of hand and grasp control research and highlights the gaps and limitations of the current approaches. In the last part, open questions and future directions for hand and grasp control research are suggested, emphasizing the need for a neuroscientific approach that can bridge the gap between the brain and the hand.


Introduction
The human hand is a complex and versatile organ, capable of performing a wide range of movements and manipulations with precision and dexterity [1][2][3]. The hand enables humans to interact with the environment, communicate, create, and use tools [4]. However, how the human brain controls the hand and its grasp is not fully understood and poses several challenges for both neuroscience and engineering.
From a neuroscience perspective, understanding the brain mechanisms of hand control can shed light on the neural basis of motor skills, sensorimotor integration, action perception, and tool use [5,6]. These are fundamental processes for human evolution and culture, as well as for the development and rehabilitation of motor functions. From an engineering perspective, understanding the brain mechanisms of hand control can inspire the design and control of robotic and prosthetic hands [7,8]. These devices aim to restore or enhance the functionality and appearance of the natural hand, enabling users to perform daily activities and interact with objects. For people who use a prosthesis, the device is a way to recover what they have lost, i.e., the hand.
Current prosthetic hands and their control strategies often fall short of the natural hand [2]. They are usually less complex and dexterous than natural hands [9], and their control strategies are often unnatural and unintuitive [10], requiring high cognitive effort from the user and resulting in poor naturalness and fluidity of movements. Furthermore, artificial hands lack effective sensory feedback [11], which is crucial for hand perception and control.

Prosthetic Hands Control
The control of prosthetic hands is a challenging problem that involves the integration of different components, such as sensors, actuators, signal processing algorithms, and feedback mechanisms [8,12]. The control strategies can be classified into two main categories: feedforward and feedback control. Feedforward control relies on the user's intention to generate commands for the prosthetic hand, while feedback control provides information to the user about the state and outcome of the hand's actions [13].
EMG signals are the most used input signals for prosthetic hands and can be recorded from surface electrodes attached to the skin or from implanted electrodes inserted into the muscles or nerves [12,14]. EMG signals can be processed by using various methods, such as pattern recognition, regression, or proportional control, to decode the user's intention [12]. Pattern recognition methods use machine learning algorithms to classify different hand gestures based on EMG features, while regression methods use mathematical models to estimate continuous variables, such as joint angles or forces, based on EMG signals [15]. Proportional control methods use the amplitude or frequency of EMG signals to modulate the speed or force of the prosthetic hand [16,17].
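As a minimal illustration of amplitude-based proportional control, the sketch below maps the root-mean-square (RMS) amplitude of a windowed EMG signal onto a normalized closing speed for the hand; the threshold and gain values are hypothetical, not taken from any specific device.

```python
import math

def emg_rms(window):
    """Root-mean-square amplitude of one EMG window (values in millivolts)."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def proportional_speed(window, threshold_mv=0.05, gain=10.0, max_speed=1.0):
    """Map EMG amplitude to a normalized closing speed for the hand.

    Below the activation threshold the hand does not move; above it,
    speed grows linearly with amplitude and saturates at max_speed.
    The threshold and gain are illustrative, not device constants.
    """
    rms = emg_rms(window)
    if rms < threshold_mv:
        return 0.0
    return min(max_speed, gain * (rms - threshold_mv))

# A weak contraction stays below the threshold; a strong one saturates.
weak = [0.01, -0.02, 0.015, -0.01]
strong = [0.3, -0.28, 0.31, -0.29]
```

Real controllers add filtering, dead-band hysteresis, and per-user calibration on top of this basic mapping.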
EEG signals can be acquired from scalp electrodes or from implanted electrodes in the brain and processed by using methods such as event-related potentials (ERPs), steady-state visual evoked potentials (SSVEPs), or motor imagery (MI) to decode the user's intention [18]. ERPs are transient changes in EEG signals that respond to specific stimuli, like visual cues or tactile feedback. SSVEPs are oscillatory changes in EEG signals that occur when the user is exposed to flickering lights at different frequencies [19]. MI is a mental process that involves imagining performing a specific movement without executing it. These methods can be used to select different hand gestures or activate different degrees of freedom (DoFs) of the prosthetic hand [18].
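To make the SSVEP idea concrete, the sketch below scores a set of candidate flicker frequencies by projecting an EEG segment onto sine and cosine references at each frequency, a bare-bones stand-in for the canonical correlation analysis used in practical decoders; the sampling rate and frequency set are illustrative.

```python
import math

def ssvep_target(eeg, fs, candidate_freqs):
    """Return the candidate flicker frequency that dominates an EEG segment.

    Each candidate is scored by projecting the signal onto sine and
    cosine references at that frequency and summing the power.
    """
    best_f, best_power = None, -1.0
    for f in candidate_freqs:
        s = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(eeg))
        c = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(eeg))
        power = s * s + c * c
        if power > best_power:
            best_f, best_power = f, power
    return best_f

# Synthetic one-second segment: a pure 12 Hz oscillation sampled at 250 Hz.
fs = 250
eeg = [math.sin(2 * math.pi * 12 * i / fs) for i in range(fs)]
```

With each candidate frequency mapped to a different gesture, the decoded frequency selects the gesture the user is attending to.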
Eye tracking reflects the user's visual attention and gaze direction [20], recorded from cameras or glasses that capture the movement of the eyes and pupils and processed by using methods such as gaze fixation, gaze switching, or gaze estimation to decode the user's intention [12,21]. With gaze fixation, the user looks at a specific target for a certain amount of time to activate a corresponding hand gesture [22,23]. With gaze switching, the user shifts their gaze from one target to another to trigger a corresponding transition between hand gestures [24]. Gaze estimation involves estimating the 3D position and orientation of the user's gaze and mapping it to the 3D position and orientation of the prosthetic hand [21].
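A minimal dwell-time selector for gaze fixation might look like the following; the eight-sample dwell threshold (0.8 s at a hypothetical 10 Hz eye tracker) is an illustrative choice.

```python
def fixated_target(gaze_samples, dwell_samples=8):
    """Return a target once gaze has dwelt on it for dwell_samples
    consecutive samples, else None.

    gaze_samples is a time-ordered list of target labels (or None when
    the gaze is elsewhere). At a 10 Hz eye tracker, dwell_samples=8
    corresponds to an illustrative 0.8 s dwell threshold.
    """
    count, current = 0, None
    for label in gaze_samples:
        if label is not None and label == current:
            count += 1  # gaze is still on the same target
        else:
            # gaze moved: restart the dwell counter on the new target
            current, count = label, (1 if label is not None else 0)
        if current is not None and count >= dwell_samples:
            return current  # dwell threshold reached: trigger this target
    return None
```

Glances that hop between targets never accumulate enough dwell to trigger a selection, which is the mechanism that separates deliberate fixations from casual scanning.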
Residual limb motion is another type of input signal that exploits the user's natural movement patterns and preferences [25]. Residual limb motion can be obtained from sensors attached to the stump or from wearable devices [26]. In particular, gloves or bracelets capture the movement of other body parts, and these signals are processed by using methods like direct mapping, inverse kinematics, or synergy-based control to decode the user's intention and generate commands for the prosthetic hand [27]. Direct mapping maps each DoF of the residual limb to a corresponding DoF of the prosthetic hand. Inverse kinematics calculates the joint angles of the prosthetic hand that are required to achieve a desired end-effector position and orientation based on residual limb motion. Synergy-based control reduces the dimensionality of the control space by using predefined combinations of joint angles that correspond to natural hand postures [28].
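The synergy-based idea can be sketched as a linear expansion from a few activation weights to many joint angles; the two example synergies (whole-hand closing and thumb opposition) and the four-joint hand are illustrative, not measured from human data.

```python
def synergy_to_joints(activations, synergies, rest_posture):
    """Expand a low-dimensional synergy command into per-joint angles.

    Each synergy is a fixed pattern of joint rotations; the commanded
    posture is the rest posture plus a weighted sum of the patterns.
    """
    n_joints = len(rest_posture)
    angles = list(rest_posture)
    for w, pattern in zip(activations, synergies):
        for j in range(n_joints):
            angles[j] += w * pattern[j]
    return angles

# 4 joints: [thumb, index, middle, ring] flexion in degrees (illustrative).
rest = [0.0, 0.0, 0.0, 0.0]
close_all = [40.0, 80.0, 80.0, 80.0]   # synergy 1: power-grasp closing
thumb_opp = [30.0, 0.0, 0.0, 0.0]      # synergy 2: thumb opposition
```

Two activation weights thus command four joints, which is the dimensionality reduction that makes synergy-based control tractable for the user.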
Feedback control can be achieved by using different output signals: tactile feedback, proprioceptive feedback, auditory feedback, or visual feedback.
Tactile feedback provides information about the contact between the prosthetic hand and objects [8,29,30], such as location, pressure, texture, shape, or temperature, and can be delivered by using different actuators, e.g., vibrotactile motors [31], electrotactile stimulators, pneumatic devices, shape memory alloys (SMAs) [32], or soft robotics. Vibrotactile motors produce vibrations perceived by the skin. Electrotactile stimulators generate electrical pulses that can be sensed by the nerves [33]. Pneumatic devices use air pressure to create deformations or displacements that can be felt by the skin. SMAs are materials that change their shape or length when heated by an electric current [32]. Soft robotic devices use compliant materials to create deformations or displacements that can be perceived by the skin [34].
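As one hedged example of how contact pressure might be encoded for a vibrotactile motor, the sketch below compresses a wide pressure range into a normalized drive level with a logarithmic mapping; the pressure bounds are illustrative, not calibrated to any actuator.

```python
import math

def vibro_amplitude(pressure_n, p_min=0.2, p_max=10.0):
    """Map fingertip contact pressure (newtons) to a vibrotactile drive in [0, 1].

    Logarithmic scaling compresses the wide pressure range into the
    narrow range of comfortable vibration intensities; the bounds are
    illustrative, not calibrated values.
    """
    if pressure_n <= p_min:
        return 0.0  # below the detection threshold: no vibration
    p = min(pressure_n, p_max)  # saturate at the strongest comfortable level
    return math.log(p / p_min) / math.log(p_max / p_min)
```

A logarithmic rather than linear mapping is a common choice because perceived stimulus intensity tends to grow with the logarithm of the physical magnitude.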
Proprioceptive feedback provides information about the position and movement of the prosthetic hand: joint angles, velocities, or accelerations [35]. Proprioceptive feedback can be delivered by using different methods, such as sensory substitution, sensory augmentation, or sensory restoration [35][36][37]. Sensory substitution converts proprioceptive information into another sensory modality, e.g., tactile, auditory, or visual [35]. Sensory augmentation enhances proprioceptive information with additional cues: force, torque, or stiffness [36]. Sensory restoration restores proprioceptive information to the original sensory modality by using neural interfaces, such as peripheral nerve stimulation or cortical stimulation [35].
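Sensory substitution can be as simple as a linear recoding of one modality into another; the sketch below encodes a joint angle as an audible tone frequency, with an illustrative frequency band and joint range.

```python
def angle_to_tone(joint_angle_deg, f_min=200.0, f_max=800.0, angle_max=90.0):
    """Sensory substitution: encode a joint angle as a tone frequency (Hz).

    Linear mapping from the joint's range onto a tone band; the band
    and the 90-degree joint range are illustrative choices.
    """
    a = max(0.0, min(joint_angle_deg, angle_max))  # clamp to the joint range
    return f_min + (f_max - f_min) * a / angle_max
```

The same clamped linear recoding works for tactile substitution by replacing tone frequency with, for example, vibration intensity or electrode position on the skin.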
Auditory feedback is a type of output signal that uses different sounds, like speech, music, tones, clicks, or noises, to provide information about the state and outcome of the prosthetic hand's actions, such as success, failure, error, or warning [38].
Visual feedback is a type of output signal that provides information about the appearance and performance of the prosthetic hand through different types of displays, like monitors, projectors, glasses, or virtual reality. Visual and auditory feedback can be used to complement or replace other types of feedback [12].
The advantages and disadvantages of the different control strategies for prosthetic hands depend on various factors: the type and quality of the input and output signals, the complexity and robustness of the signal processing algorithms, the usability and acceptability of the user interface devices, and the personal preferences and needs of the users. Some general advantages and disadvantages are summarized in Table 1. Prosthetic hand control is a complex and multifaceted problem that depends on various factors, such as the following:

• The type and quality of the input and output signals, which determine the information content and fidelity of the control system;
• The complexity and robustness of the signal processing algorithms, which affect the accuracy and reliability of the control commands;
• The usability and acceptability of the user interface devices, which influence the comfort and satisfaction of the users.
The most common and widely used control method for prosthetic hands is based on surface EMG (sEMG) signals, which can provide a natural and intuitive way of controlling the prosthetic hand but are also noisy, variable, and limited in information content.
Different approaches for improving the control of prosthetic hands and overcoming the limitations of sEMG-based control have been investigated: using hybrid sensor modalities, applying motor learning principles, employing hierarchical or biomimetic control methods, implementing invasive or noninvasive sensory feedback mechanisms, developing shape memory alloy actuators, and studying eye tracking metrics. However, these approaches are still in their early stages of development and require further research and evaluation to ensure their safety, reliability, durability, usability, and functionality.
Moreover, these approaches have limitations due to the lack of a neurophysiological perspective on the human hand.
More research and development are needed to design and control prosthetic hands that can better mimic human hand functions and provide a satisfactory user experience for different users and tasks. A potential solution is to design and implement novel and effective control strategies for artificial hands that are based on the neural mechanisms of hand function in the human brain [3]. The human brain uses a series of hierarchical and parallel processes to control the hand, integrating information from different sensory, cognitive, and emotional levels [39,40], and can adapt to changes in the environment and the body, learning new skills and optimizing performance [41,42]. These features make the human brain a source of inspiration for developing natural, intuitive, and efficient control strategies for artificial hands.
This article aims to provide a comprehensive and insightful overview of the current state of the art and future directions of the human brain mechanisms involved in hand and grasp control and their engineering implications for the design and control of artificial hands. The main brain regions and pathways that are responsible for the generation, execution, and regulation of hand movements, as well as for the integration of sensory, cognitive, and emotional information, will be described. How these brain mechanisms can inspire the development of novel and effective control architectures and interfaces for artificial hands, which can mimic the natural and efficient coordination and regulation of hand movements by the brain, will also be discussed. The article will conclude with suggestions and recommendations for future research and development on brain-inspired hand and grasp control. The first step will be to investigate the functioning and user satisfaction of commercial prosthetic hands.

Current Commercial Prosthetic Hands and Their Limitations
Commercial prosthetic hands are classified into three main types: passive cosmetic hands, body-powered hands (also known as kinematic hands), and active myoelectric hands [43]. Passive cosmetic hands resemble the natural hand in shape and colour but have no movement capabilities [44]. Kinematic hands enable users to grasp objects by controlling opening and closing through cables and harnesses linked to the contralateral side of the body [44]. Due to their high robustness and durability, kinematic hands are usually adopted for heavy work that requires lifting and manipulating heavy objects [10]. Active myoelectric hands are powered by electric motors and can perform different movements and grasps, depending on the number and configuration of the fingers and joints [45]. The user controls active myoelectric hands by using EMG signals from the residual muscles of the limb, which are detected by electrodes attached to the skin [46] or implanted in the muscles [47]. Although current myoelectric hands allow users to partially recover the hand's simpler functions [48], several limitations that affect their performance and acceptance by users persist, related to the control methods and to sensory restitution. The control methods are usually very simple and based on proportional or on/off control [49], which is unnatural and unintuitive, as it requires the user to generate arbitrary muscle contractions that are not related to the desired movement or grasp. Moreover, these control methods can only control one DoF at a time, which limits the versatility and dexterity of the prosthetic hand [50]. Sensory restitution is essential for grasping and manipulating objects [51] and provides information about the state of the hand and the object, such as contact, pressure [52], slippage [53], and temperature [54]. Moreover, sensory feedback also enables error correction and adaptation, as well as emotional and aesthetic satisfaction. Commercial prosthetic hands do not provide any
sensory feedback to the user, except for some visual and auditory cues from the device itself. This makes it difficult for the user to coordinate the grip force and load force that are required in order to grasp and manipulate objects without slipping [55,56] or dropping them [57]. Due to these limitations, commercial prosthetic hands have low functionality and usability and high user abandonment rates. Indeed, only 56% of upper limb prosthesis users reported wearing their prosthesis every day, while 23% reported never wearing their prosthesis [43]. The main reasons for dissatisfaction and abandonment were poor fit, comfort, appearance, weight, functionality, reliability, noise, maintenance, cost, and social stigma [43]. Understanding the needs and preferences of amputees is crucial for designing and controlling prosthetic hands that can meet their expectations and improve their quality of life. Therefore, amputees should be involved in the development process from the beginning and not only at the end.
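The grip-load coordination problem can be illustrated with a reflex-like update rule that keeps grip force a fixed margin above the measured load force and bumps it when slip is sensed; the safety margin and step size below are hypothetical values, not measurements.

```python
def update_grip(grip_force_n, slip_detected, load_force_n,
                safety_margin=1.4, step_n=0.5):
    """One update of a reflex-like grip controller (forces in newtons).

    Keeps grip force a fixed margin above the measured load force and
    bumps it by a small step whenever slip is sensed, mimicking healthy
    grip-load coordination. The margin and step size are illustrative.
    """
    target = safety_margin * load_force_n
    if slip_detected:
        return max(grip_force_n + step_n, target)  # reflexive grip increase
    return max(grip_force_n, target)  # never fall below the safety margin
```

Without slip and load sensing, a prosthesis cannot close such a loop, which is why the user must instead monitor the grasp visually and over-grip as a precaution.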

Needs of Upper Limb Prosthesis Users
Upper limb amputees face many challenges and difficulties in their daily lives, as they lose the ability to perform various tasks that require the use of the hand. These tasks include grasping, manipulating, writing, or gesturing, among others [58,59]. Moreover, they may experience psychological and social problems, such as low self-esteem, depression, stigma, or isolation [58,60]. Therefore, prosthetic devices should address their different needs and expectations in terms of functionality, comfort, appearance, and social acceptance [58,61].
The main needs of upper limb prosthesis users are comfort, reliability, functionality, sensory feedback, and affordability [58]. Comfort is related to the physical and psychological aspects of wearing and using a prosthesis, including weight, fit, ease of donning and doffing, and cosmetic appearance [58,62]. Reliability is associated with the technical and mechanical aspects of the prosthesis, covering robustness, durability, maintenance, and repair [58,63]. Functionality is concerned with the performance and versatility of the prosthesis, involving adaptability, intuitiveness, and control [48,58]. Sensory feedback concerns the information provided by the prosthesis to the user, encompassing touch, force, or position [53,58]. Affordability is linked to the economic and social aspects of acquiring and using a prosthesis, comprising cost, insurance, or reimbursement [58,64].
More user-centred design and evaluation of prosthetic devices is needed to address the diverse and dynamic needs of upper limb prosthesis users [58,61]. A possible way to achieve this is a new design paradigm in the field of prosthetic hand control that does not merely aim to replicate the natural hand and its brain control, but rather leverages the synergies and simplifications that the brain employs to control the hand [59,65]. The first step is to understand the physical and physiological principles underlying the brain's control of movement.

Brain Regions for Hand Control
The human brain consists of several regions that are involved in hand control, each with specific roles and functions. These regions can be broadly divided into two categories: cortical (Figure 1a) and subcortical (Figure 1b) regions. Cortical regions are the outermost layers of the brain, composed of grey matter. They include the primary motor cortex (M1), the premotor cortex (PMC), the supplementary motor area (SMA), the parietal cortex (PC), and the temporal cortex (TC). These regions are responsible for the generation, execution, and regulation of hand movements, as well as for the integration of sensory, cognitive, and emotional information [5,6,66]. The M1, located in the precentral gyrus of the frontal lobe, is the primary source of motor commands to the spinal cord and muscles and controls movement through two groups of projections: a lateral group that controls the limbs, hands, and fingers, and a medial group that controls the trunk and proximal muscles [67].
The PMC, situated in the frontal lobe anterior to M1, plays a role in planning and preparing hand movements. It is especially involved in movements that are guided by sensory cues or learned by imitation and contributes to motor learning and adaptation [40,41].
The SMA, located on the medial surface of the frontal lobe, participates in planning and initiating hand movements. It is particularly involved in movements that are self-generated or memory-based and coordinates bimanual movements and action sequences [42,68].
The PC, situated behind the central sulcus in the posterior part of the brain, processes somatosensory information from the hand, e.g., touch, pressure, temperature [54], pain, and proprioception. It also combines multisensory information from vision, audition, and cognition and maps it onto motor representations of the hand [4,39].
The TC, located below the lateral fissure in the lateral part of the brain, processes visual information about the hand. It handles shape, size, orientation, and motion, identifying objects and tools by their appearance and function and linking them with hand actions [69].
Subcortical regions lie below the cortex and consist of grey matter nuclei embedded in white matter. They include the cerebellum and the basal ganglia. These regions are responsible for modulating and refining hand movements, as well as for learning and habit formation [70,71].
The cerebellum, situated below the occipital lobe at the back of the brain, coordinates and fine-tunes hand movements. It ensures accuracy and smoothness, detects and corrects errors in movement execution, and stores motor memories [24,72].
The basal ganglia, a group of nuclei deep within the brain near the thalamus, select and initiate hand movements. They suppress unwanted movements and switch between different movement patterns. They also encode reward and reinforcement signals and contribute to motor learning and habit formation [71,73].

Brain Pathways for Hand Control
The human brain uses several pathways for transmitting and receiving information between the brain regions and the spinal cord and muscles involved in hand control.These pathways can be broadly divided into two categories: descending and ascending pathways [69].
Descending pathways are the ones that carry motor commands from the brain to the spinal cord and muscles, initiating and modulating hand movements.They include the corticospinal tract, the corticobulbar tract, and the cortico-cortical connections [66].
The corticospinal tract, the main pathway for voluntary hand control, originates from the M1, the PMC, and the SMA and descends through the brainstem. It then branches into two: the lateral corticospinal tract, which crosses to the opposite side of the spinal cord and controls the hand and finger muscles, and the anterior corticospinal tract, which stays on the same side of the spinal cord and controls the trunk and proximal muscles [67,74].
The corticobulbar tract, a pathway for cranial nerve control, originates from the M1, PMC, and SMA and descends through the brainstem, ending in various brainstem nuclei that control the cranial nerves. These include the trigeminal nerve (for jaw movements), the facial nerve (for facial expressions), and the hypoglossal nerve (for tongue movements) [75,76].
The cortico-cortical connections, pathways for interhemispheric and intrahemispheric communication, link the cortical regions related to hand control, including the M1, PMC, SMA, PC, and TC, with one another and with other cortical regions through white matter fibers. They comprise the corpus callosum, which links homologous regions of both hemispheres, and the superior longitudinal fasciculus, which links frontal and parietal regions within each hemisphere [77,78].
Ascending pathways are the ones that carry sensory feedback from the spinal cord and muscles to the brain, informing and adjusting hand movements.They include the dorsal column-medial lemniscus pathway, the spinothalamic pathway, and the spinocerebellar pathway [70,71].
Carrying touch, pressure, vibration, and proprioception information from the hand and fingers to the brain, the dorsal column-medial lemniscus pathway is the main pathway for somatosensory feedback. This pathway begins at the dorsal root ganglia of the spinal cord and ascends through the dorsal columns of the spinal cord. In the medulla, it synapses and crosses to the opposite side, forming the medial lemniscus, and, from there, reaches the thalamus, where it synapses again and sends signals to the primary somatosensory cortex and the parietal cortex [5,79].
The spinothalamic pathway carries pain, temperature [54], itch, and crude touch information from the hand and fingers to the brain. This pathway begins at the dorsal root ganglia of the spinal cord and enters the dorsal horn of the spinal cord. In the dorsal horn, it synapses and crosses to the opposite side of the spinal cord, forming the spinothalamic tract. From there, it ascends through the brainstem and reaches the thalamus, where it synapses again and sends signals to the primary somatosensory cortex and the parietal cortex [80,81].
Carrying muscle length, tension, and joint position information from the hand and fingers to the cerebellum, the spinocerebellar pathway is a pathway for proprioceptive feedback. This pathway begins at the dorsal root ganglia of the spinal cord and enters the dorsal horn of the spinal cord. It then branches into two: the posterior spinocerebellar tract, which ascends on the same side of the spinal cord and reaches the cerebellum through the inferior cerebellar peduncle, and the anterior spinocerebellar tract, which crosses to the opposite side of the spinal cord, ascends through the brainstem, and crosses again to reach the cerebellum through the superior cerebellar peduncle [80,82].
Besides the descending and ascending pathways, the brain also uses other mechanisms to integrate the sensory information from the hand and fingers with other modalities, such as vision, audition, and the vestibular system. These mechanisms involve the interaction of multiple brain regions, such as the parietal cortex, the temporal cortex, the frontal cortex, and the cerebellum, as well as subcortical structures, such as the thalamus, the basal ganglia, and the brainstem. Sensory integration enables the brain to form a coherent representation of the hand and its environment and to coordinate hand movements with other sensory cues [83].

Brain Mechanisms for Sensory Integration
The human brain integrates sensory information from different modalities, such as vision, audition, and touch, to form coherent and accurate representations of the external world and the body [84]. Sensory integration is essential for hand control, as it allows the brain to perceive the properties and location of objects, to guide and adjust hand movements, and to monitor the outcomes of actions [69]. However, sensory integration is not a fixed or automatic process. The brain can flexibly modulate sensory integration depending on the context, the task, and the reliability of sensory inputs [85,86].
Cortical regions for sensory integration are mainly located in the PC, which receives and processes somatosensory, visual, and auditory information from the hand and fingers [5,6]. The PC consists of several subregions that perform different functions for sensory integration. These include the primary somatosensory cortex (S1), which maps the tactile sensations from the hand; the secondary somatosensory cortex (S2), which integrates tactile information with other modalities; the posterior parietal cortex (PPC), which combines sensory information with motor commands and spatial representations; and the intraparietal sulcus (IPS), which encodes object features and hand postures [79,87]. These parietal regions interact with other cortical regions involved in hand control, such as the M1, PMC, SMA, and TC [66,69].
Subcortical regions for sensory integration are mainly located in the thalamus, which acts as a relay station between the sensory receptors and the cortex, and in the cerebellum, which coordinates and fine-tunes hand movements based on sensory feedback [67,70]. The thalamus consists of several nuclei that receive and transmit sensory information from different modalities to specific cortical regions. These include the ventral posterior nucleus (VPN), which relays somatosensory information to the S1; the lateral geniculate nucleus (LGN), which relays visual information to the V1; the medial geniculate nucleus (MGN), which relays auditory information to the A1; and the pulvinar nucleus (PUL), which integrates multisensory information and modulates attention [80,81]. The cerebellum consists of several lobules that receive and send sensory information from different sources to various cortical regions. These include the anterior lobe (lobules I-V), which receives proprioceptive information from the spinal cord and projects to the M1; the posterior lobe (lobules VI-X), which receives visual, auditory, and vestibular information from the brainstem and projects to the PPC and the SMA; and the flocculonodular lobe (lobule X), which receives vestibular information from the inner ear and projects to the vestibular nuclei [82,88].
The human brain controls the hand by performing a series of processes that are organized in levels and run in parallel, combining information from different sources of sensation, cognition, and emotion. However, for people who have lost their hands due to injury or disease, these mechanisms are disrupted or impaired, leading to difficulties in performing daily activities and reducing their quality of life. To restore hand function and sensation, prosthetic hands have been developed that can mimic the appearance and movements of the natural hand. However, current prosthetic hands are still limited in terms of sensory feedback, control complexity, and user acceptance. Therefore, new approaches are needed to improve the performance and usability of prosthetic hands, by taking into account the hierarchical control strategies that the brain uses for natural hand control [10]. Which prosthetic hand control strategies are most similar to the model of the human brain?

Hierarchical Control Strategies for Prosthetic Hands
Hierarchical control strategies are based on the idea that the brain controls the hand by using different levels of abstraction and representation, from high-level goals and intentions to low-level motor commands and signals [48,89,90,91,92]. They can provide a framework for mapping between different levels of control and representation [93], as well as for integrating multiple sources of information, such as visual, proprioceptive, tactile, and emotional cues [94,95].
Hierarchical control strategies for prosthetic hands can be divided into three main levels: task level, gesture level, and action level (Figure 2) [48,49,96]. The task level refers to the desired outcome or goal of a movement, such as reaching for an object or typing a word [97,98]. This can be influenced by social and contextual cues that affect the user's motivation and emotions [99,100]. To control the task level, the user employs high-level inputs (e.g., EMG, neural signals [101], or EEG signals) that reflect their intention or volition [47,102,103].
The gesture level selects a suitable motor program or strategy for achieving the goal by choosing a grasp type and/or a finger sequence [90,104,105] and can be influenced by visual and proprioceptive cues that provide information about the object and the hand, such as shape, size, weight, texture, and orientation [106,107]. Machine learning algorithms, such as reinforcement learning, deep learning, or neural networks, control the gesture level by learning from the user's behavior, preferences, and feedback [108,109].
The action level generates motor signals to move the prosthetic hand's motors and joints [110,111] and can be influenced by tactile cues that provide feedback about the state of the hand and objects, such as contact, pressure, slippage [112], and temperature [51,57,113]. The action level is driven by low-level motor commands or signals, such as force, position, velocity, and acceleration, measured by sensors on the device or in the environment [114,115].
Using hierarchical control strategies for prosthetic hands can provide greater flexibility and adaptability in controlling the hand's movements [90]. This is achieved by adjusting to different task contexts and sensory feedback. For instance, the prosthetic hand can choose from various grasp types depending on the shape, size, and weight of the object being manipulated [116,117]. Additionally, the prosthetic hand can adjust its motor commands by using its own sensors or feedback from the user's residual limb to correct issues like slipping or misalignment when grasping objects [118,119].
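To illustrate how the three levels can fit together, the sketch below uses hand-written rules for the gesture level and a proportional servo step for the action level; the thresholds, grasp names, and gain are illustrative stand-ins for the learned models and device firmware discussed above.

```python
def choose_grasp(object_width_cm, object_weight_kg):
    """Gesture level: pick a grasp type from coarse object properties.

    The decision rules are illustrative placeholders for the machine
    learning models the gesture level would use in practice.
    """
    if object_width_cm < 2.0:
        return "pinch"    # small objects: precision pinch
    if object_weight_kg > 0.5:
        return "power"    # heavy objects: whole-hand power grasp
    return "tripod"       # default three-finger grasp

def action_step(target_aperture_cm, current_aperture_cm, gain=0.5):
    """Action level: one proportional step of the aperture servo loop."""
    return current_aperture_cm + gain * (target_aperture_cm - current_aperture_cm)

# Task level (user intent): grasp a heavy 6 cm bottle.
grasp = choose_grasp(6.0, 1.0)
aperture = action_step(8.0, 0.0)  # open toward an 8 cm pre-shape aperture
```

The separation of levels means the user only supplies the task-level intent, while grasp selection and joint servoing run below without explicit commands for each DoF.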
However, hierarchical control strategies for prosthetic hands also have some limitations and challenges.One of them is the high dimensionality and complexity of the hand's movements, which require a large number of inputs and outputs to control each degree of freedom of the prosthetic hand.Another one is the variability and uncertainty of sensory feedback, which can affect the accuracy and reliability of prosthetic hand control [85,86].To overcome these limitations and challenges, the use of control synergies for prosthetic hands has been proposed.
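The three levels described above can be pictured as a toy pipeline. The sketch below is a minimal illustration only; all function names, thresholds, and force values are assumptions made for this example and are not taken from any real prosthetic controller.

```python
# Hypothetical minimal sketch of a three-level hierarchical controller.
# All names, thresholds, and force values are illustrative assumptions.

def task_level(emg_envelope: float) -> str:
    """Infer the user's high-level intent from a normalized EMG envelope."""
    return "grasp" if emg_envelope > 0.5 else "rest"

def gesture_level(intent: str, object_width_cm: float) -> str:
    """Select a grasp type from the intent plus visual object information."""
    if intent != "grasp":
        return "open"
    return "pinch" if object_width_cm < 3.0 else "power"

def action_level(grasp: str, slip_detected: bool) -> dict:
    """Generate a low-level command, tightening the grip on detected slip."""
    base_force = {"open": 0.0, "pinch": 0.3, "power": 0.6}[grasp]
    if slip_detected:
        base_force = min(1.0, base_force + 0.2)  # reflex-like correction
    return {"grasp": grasp, "grip_force": base_force}

def control_step(emg_envelope, object_width_cm, slip_detected):
    """One pass through the task -> gesture -> action hierarchy."""
    intent = task_level(emg_envelope)
    grasp = gesture_level(intent, object_width_cm)
    return action_level(grasp, slip_detected)
```

For example, `control_step(0.8, 2.0, True)` infers a grasp intent, selects a pinch grasp for the small object, and raises the grip force to counter the detected slip, mirroring how the three levels divide responsibilities.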

Control Synergies
The brain controls the hand by coordinating many DoFs and integrating sensory feedback [120]. Instead of controlling each DoF separately, the brain exploits synergies, which are patterns of correlated movements or postures that simplify the control space [121]. Synergies enhance the robustness and adaptability of the hand [65] and enable the design of more functional, usable, and acceptable devices that satisfy the user's needs and expectations [59,122].
The development of artificial hands that mimic natural and intuitive control has been inspired by the concept of human hand synergies [59]. Proposed strategies are typically divided into two categories: software-based and hardware-based.
Software strategies use algorithms to map the signals from the user to the movements or configurations of the artificial hand [58]. These algorithms can be based on statistical methods, machine learning, optimization, or bio-inspired models [26]. Software strategies can be adapted to any artificial hand with any number of DoFs and can be personalized to meet the user's preferences and requirements [123,124]. However, they require considerable computational power and a dependable signal acquisition system [62].
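As a toy illustration of a software synergy mapping, the sketch below extracts two "postural synergies" from synthetic joint-angle recordings via PCA and maps a low-dimensional command back to a full hand posture. The data, dimensions, and variable names are assumptions made for illustration, not a real hand model.

```python
import numpy as np

# Illustrative software synergy mapping: joint-angle postures recorded
# during grasping are reduced to a few principal components ("postural
# synergies"), and a low-dimensional command is mapped back to a full
# posture. The recordings here are random stand-ins for real data.

rng = np.random.default_rng(0)
n_samples, n_dofs, n_synergies = 200, 15, 2

# Fake recordings: 15 joint angles driven mostly by 2 latent factors.
latent = rng.normal(size=(n_samples, n_synergies))
mixing = rng.normal(size=(n_synergies, n_dofs))
postures = latent @ mixing + 0.05 * rng.normal(size=(n_samples, n_dofs))

# PCA via SVD on centred data; the leading rows of vt are the synergies.
mean = postures.mean(axis=0)
_, _, vt = np.linalg.svd(postures - mean, full_matrices=False)
synergies = vt[:n_synergies]          # shape (2, 15): posture basis

# Control: a 2-D command reconstructs a full 15-DoF posture.
command = np.array([1.0, -0.5])
posture = mean + command @ synergies  # shape (15,)
```

The design point is the dimensionality reduction itself: the user (or interface) supplies only two numbers, yet all fifteen joints move in a coordinated, hand-like way.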
Hardware strategies use physical mechanisms, such as tendons, springs, gears, or other mechanical components, to rigidly or compliantly couple some of the DoFs of an artificial hand [63,125]. They reduce the number of actuators and sensors required and enable passive adaptation to various object shapes and sizes [126]. However, this approach may restrict the hand's flexibility and fine motor skills, which may not match the user's desired movements or expectations [53].
Achieving optimal performance requires finding a balance between complexity and performance, which can be challenging, as software and hardware strategies each have their own advantages and disadvantages [48]. Current solutions have found success through hybrid approaches that leverage the strengths of both [127].
The way humans use their hands is not fixed but adaptable, shaped by the situation at hand [128]. Artificial hands differ from natural ones in movement, force, and sensory responses [129]. As a result, artificial hands need to be able to learn and adjust their movements based on experience and the specific task [130,131].

Artificial Intelligence (AI): Mimicking the Human Brain to Support Prosthetic Hand Control
The control of prosthetic hands based on sEMG signals is a challenging task that requires substantial computational power and complex processing. Different methods based on deep, continuous, and incremental learning have been proposed to improve the accuracy and robustness of sEMG-based gesture recognition and control for prosthetic hands [132]. These methods aim to mimic the human brain's ability to control hand movements in a natural and intuitive way by learning from sEMG signals and adapting to the user's preferences and needs [133]. Some examples of how these methods have been applied to the control of prosthetic hands are the following:

•
Deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, generative adversarial networks (GANs), vision transformers, and temporal convolutional networks (TCNs), can automatically extract features and accurately classify sEMG signals for various hand movements and grasp types, eliminating the need for manual feature engineering [134,135]. Furthermore, these models can leverage multimodal signals, such as sEMG, EEG, and nerve interface signals, to achieve more natural and intuitive control of prosthetic limbs [136].
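To make the data flow concrete, the sketch below applies an untrained 1-D convolutional layer, the kind of front end a CNN or TCN would use, to a synthetic multichannel sEMG window. The shapes, filter counts, and random weights are illustrative only; no training is performed.

```python
import numpy as np

# Toy forward pass of a 1-D convolutional front end over windowed sEMG.
# Weights are random, so this shows only the data flow of feature
# extraction -> pooling -> classification, not a trained model.

rng = np.random.default_rng(1)
n_channels, window = 8, 200                     # 8 electrodes, 200 samples
x = rng.normal(size=(n_channels, window))       # synthetic sEMG window

kernel = rng.normal(size=(16, n_channels, 5))   # 16 filters, width 5

def conv1d(x, kernel):
    """Valid 1-D convolution over time, summed across channels, plus ReLU."""
    n_filters, _, k = kernel.shape
    out_len = x.shape[1] - k + 1
    out = np.empty((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            out[f, t] = np.sum(kernel[f] * x[:, t:t + k])
    return np.maximum(out, 0.0)                 # ReLU nonlinearity

features = conv1d(x, kernel)                    # shape (16, 196)
pooled = features.mean(axis=1)                  # global average pooling
logits = rng.normal(size=(4, 16)) @ pooled      # 4 hypothetical grasp classes
predicted_class = int(np.argmax(logits))
```

In a real system the kernels and output weights would be learned from labelled sEMG windows, replacing the hand-crafted features described later in this section.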

•
Continuous learning techniques, such as adversarial and sparsity prior learning, update the sEMG-based control system while preserving previous knowledge [137]. Such techniques help the control system cope with the variability and nonstationarity of sEMG signals caused by fatigue, sweat, electrode displacement, and arm position, and can further adapt to user preferences and feedback, refining the control model [138].
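One simple way to picture the stability-plasticity trade-off behind these techniques is a regularized refit that penalizes drift from previously learned weights, a strong simplification of methods such as elastic weight consolidation. The data, dimensions, and regularization values below are assumptions for illustration.

```python
import numpy as np

# Sketch of a continual-learning idea: refit a linear decoder to new
# sEMG feature data while penalizing drift from the old weights.
# Larger lam = more stability (less forgetting), less plasticity.

rng = np.random.default_rng(4)
n_features = 16
w_old = rng.normal(size=n_features)             # previously learned weights
X = rng.normal(size=(50, n_features))           # new feature windows
y = X @ w_old + rng.normal(scale=0.5, size=50)  # new (noisy) targets

def adapt(lam: float):
    """Closed-form solution of ||Xw - y||^2 + lam * ||w - w_old||^2."""
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y + lam * w_old)

drift_plastic = np.linalg.norm(adapt(0.1) - w_old)    # adapts freely
drift_stable = np.linalg.norm(adapt(100.0) - w_old)   # stays near old model
```

The anchor term plays the role that rehearsal and consolidation play in the brain: new data refine the model without overwriting what was already learned.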

•
Incremental learning techniques such as online, transfer, and lifelong learning have been used to extend the sEMG-based control system with new knowledge and skills without retraining the entire model from scratch [139]. By adapting successful models from previous subjects, the training time and effort required to control an upper limb prosthesis can be reduced. Additionally, these techniques can allow the user to learn new gestures or movements with the prosthetic hand without interfering with pre-existing ones [138,139].
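The idea of adding a new gesture without retraining from scratch can be sketched as appending one output unit to a frozen feature extractor, leaving the existing classes untouched. All weights below are random placeholders standing in for a trained model.

```python
import numpy as np

# Sketch of class-incremental learning for gesture recognition: a new
# gesture is added by appending one row to the output weight matrix,
# while the rows for existing gestures are left unchanged.

rng = np.random.default_rng(2)
n_features, n_classes = 32, 3

W_out = rng.normal(size=(n_classes, n_features))   # 3 trained gestures

def add_gesture(W_out, new_row):
    """Append a new class head without modifying existing weights."""
    return np.vstack([W_out, new_row])

new_row = rng.normal(size=(1, n_features))         # would be fit on new data
W_new = add_gesture(W_out, new_row)                # now 4 gestures
```

Because the old rows are untouched, previously learned gestures cannot be degraded by the addition, which is the core promise of the incremental approach (in practice, shared feature layers complicate this guarantee).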
The use of AI for improving prosthetic hand control is an innovative and promising approach that can enhance the functionality and acceptance of sEMG-based control systems. With the aid of deep, continual, and incremental learning methods, these control systems can learn from sEMG signals and adapt to the user's preferences and needs, much like the human brain. This approach can also overcome the challenges and limitations of conventional sEMG-based control systems, such as sEMG signal variability and nonstationarity, data interference and loss, the scalability and generalization of learning models, and the need for user training and adaptation.

Discussion, Opportunities, and Open Issues
This work has summarized the brain mechanisms for hand control and the engineering implications for developing artificial hands that can interface with the human brain. It has emphasized the importance of creating prosthetic hands that can offer sensory feedback, designing control interfaces that can translate sensory information into neural signals, using control algorithms that can integrate sensory information from various sources, and including learning mechanisms that can adjust to changes in sensory inputs or outputs.
However, current control strategies for prosthetic hands still face several limitations and challenges, compared to the natural hand and its control by the brain.
Natural hands receive a diverse range of somatosensory information, such as texture, shape, weight, pain, or proprioception, that helps them perceive and manipulate objects [8,84]. Prosthetic hands, however, have limited and unrealistic sensory capabilities: they rely on a few sensors that measure contact force, position, or temperature, which are insufficient to capture the richness and complexity of natural touch [113].
The delivery of sensory information from prosthetic hands to users through control interfaces is usually noninvasive but less reliable than natural pathways. Invasive techniques, which require implanting electrodes or wires into the peripheral nerves or brain, can cause tissue damage, inflammation, infection, or rejection [9,140], and the quality and stability of neural signals can degrade over time due to scar formation, electrode displacement, or noise [52,141]. This may affect the safety, comfort, functionality, and durability of a prosthetic hand [122].
The brain processes sensory information from different modalities and sources in a complex and flexible way, depending on the context, the task, and the reliability of sensory inputs [58,59]. The brain also uses hierarchical and parallel networks that process sensory information at different levels, from low-level signals to high-level representations [142,143]. In contrast, control algorithms for prosthetic hands are often simplistic and rigid: they use predefined rules or models that combine sensory inputs with motor outputs or spatial representations but do not account for the variability and uncertainty of natural touch [144,145]. This may affect the accuracy, adaptability, naturalness, and intuitiveness of a prosthetic hand.
Controlling prosthetic hands using sEMG signals poses several challenges due to the signals' noisy, variable, and complex nature. Sophisticated techniques are required to translate these signals into reliable and meaningful commands. The main obstacles associated with using sEMG signals for prosthetic hand control include the following:

•
sEMG signals are nonstationary and can vary due to physiological and environmental factors such as fatigue, sweat, electrode displacement, and arm position. Such changes can impact the signals' amplitude, frequency, and morphology, thereby affecting the control system's accuracy and stability [146,147]. Adaptive and robust methods are therefore needed to cope with these changes [45,134]. Additionally, sEMG signals may have low spatial and temporal resolution, especially for fine and dexterous movements such as grasping and manipulating objects [10,148]. These limitations may affect the functionality and performance of the control system and call for feature extraction and enhancement techniques.

•
Feature extraction and enhancement techniques can improve the control system's performance, but the number and quality of sEMG sensors and electrodes may be limited, reducing their ability to adequately cover and sample the signals. The sensors and electrodes may also be constrained by size, shape, placement, and contact, which can further impact spatial and temporal coverage and sampling [96,138]. The quality of the sensors and electrodes can also affect signal acquisition and transmission [45]. These limitations may affect the overall usability and efficiency of the system.
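As context for the feature extraction techniques mentioned above, the sketch below computes a few classic time-domain sEMG features on a synthetic window; the signal and window length are stand-ins for a real recording.

```python
import numpy as np

# Classic time-domain features used to summarize a noisy sEMG window
# before classification. The window here is synthetic noise standing
# in for a real recording.

rng = np.random.default_rng(3)
window = rng.normal(scale=0.2, size=500)            # one 500-sample window

mav = float(np.mean(np.abs(window)))                # mean absolute value
rms = float(np.sqrt(np.mean(window ** 2)))          # root mean square
zc = int(np.sum(window[1:] * window[:-1] < 0))      # zero crossings
wl = float(np.sum(np.abs(np.diff(window))))         # waveform length

feature_vector = np.array([mav, rms, zc, wl])       # input to a classifier
```

These features compress hundreds of noisy samples into a handful of numbers that track muscle activation level and signal complexity, which is what makes the subsequent classification tractable.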

•
Standardization and validation of sEMG datasets and protocols are lacking, resulting in potential incomparability and irreproducibility between different studies and systems. Variations in data collection, preprocessing, segmentation, labelling, and evaluation, as well as user characteristics such as age, gender, health condition, and amputation level, can all impact the consistency and generalization of sEMG datasets and protocols [148].

•
Implementing sEMG-based control systems in real time and online may be limited by the computational complexity and power consumption of machine learning and deep learning algorithms. Achieving good performance and accuracy with these algorithms may require substantial computational resources and training data [138]. The algorithms' complexity and power consumption can affect the control system's speed and efficiency, necessitating optimization and compression techniques to reduce them [148].

•
Using sEMG-based control systems for prosthetic hands requires user training and adaptation, which can be challenging and time-consuming. The user needs to learn the gestures and motions that the control system can recognize and adjust the control parameters and feedback mechanisms to suit their preferences and needs [10,138]. These factors can affect the usability and comfort of the control system and demand user-friendly and personalized techniques to facilitate them [48,93].

•
The ethical and social implications of using AI for prosthetic hand control raise concerns and challenges around privacy, security, accountability, and responsibility [149]. To control the prosthetic hand with AI, the system needs to collect and process sensitive and personal data from the users, such as their sEMG signals, hand movements, and preferences [150,151]. These data help the system learn the users' intentions and behaviors and provide them with suitable feedback and control options [152]. However, this also means that the system may take decisions and actions that affect the users' health, safety, and well-being, such as moving the prosthetic hand in unexpected or harmful ways [153]. These implications may affect the trust in and acceptance of the control system and require ethical and social guidelines and regulations to address them [154].
Different methods based on deep, continuous, and incremental learning have been proposed to improve the accuracy and robustness of the control of prosthetic hands. These methods aim to mimic the human brain's ability to control hand movements in a natural and intuitive way by learning from sEMG signals and adapting to the user's preferences and needs. Both the human brain and deep, continuous, and incremental learning systems can process information, learn from data, and adapt to changing environments and tasks.
The human brain and deep learning models share some similarities in structure, function, and mechanisms, as both are composed of networks of neurons that transmit and process signals [109,155]. However, there are several differences between the neurons in the human brain and those in deep learning models. The neurons in the human brain are biological cells that communicate through electrical and chemical synapses, complex and dynamic structures that can adjust signal transmission and plasticity [156,157]. The neurons in deep learning models, by contrast, are artificial units that perform mathematical operations and activations, simpler and more deterministic functions that approximate signal processing and learning. Although both systems have multiple layers and connections of neurons, the human brain has many more neurons and synapses than deep learning models, allowing it to handle more complex and diverse tasks [10]. Additionally, the human brain can use more sophisticated and flexible learning rules than deep learning models and can balance stability and plasticity. The human brain controls hand movements by sending signals from the motor cortex to the spinal cord and the muscles, coordinated by sensory feedback and the cerebellum [147,158]. Deep learning models, conversely, control prosthetic hand movements by sending commands from the classifier to the actuator, based on sEMG signals and the control algorithm [19,159,160].
The human brain and continuous learning techniques share the ability to update knowledge and skills based on new data while retaining previous ones [158,161]. However, they differ in their methods and mechanisms. The human brain can use neuroplasticity, which allows it to modify its synaptic connections and neural pathways in response to experience and learning [10,162]. This helps the brain adapt to the variability and nonstationarity of sEMG signals caused by factors such as fatigue, sweat, electrode displacement, and arm position [138,162]. Continuous learning techniques instead use methods that reduce interference with, and forgetting of, old data, updating the control model based on new data and feedback [137,163]. Both face the trade-off between stability and plasticity, which can lead to catastrophic forgetting or interference [164,165]. The human brain uses mechanisms such as sleep, rehearsal, and consolidation to prevent these issues [159,162], while continuous learning techniques use methods such as regularization, replay, and distillation to mitigate them [160,161,166].
The human brain and incremental learning techniques share the ability to add new information or abilities without needing to start from scratch [162]. However, they too differ in their methods. The human brain can use memory consolidation to store and retrieve new information by transferring it from short-term to long-term memory [10]. This allows it to learn new gestures or motions without affecting existing ones by forming new neural associations and representations [137,139]. Incremental learning techniques can use online learning, transfer learning, and lifelong learning methods to update and expand learning models with new data or tasks [10], enabling users to learn new gestures or motions with a prosthetic hand by adding new classes or features to the learning models [162]. However, both the human brain and incremental learning techniques face challenges in scalability and generalization, meaning they may struggle to handle a large number of classes or tasks or to transfer knowledge and skills to new scenarios [10]. The human brain uses chunking, abstraction, and analogy strategies to overcome its limitations, while incremental learning techniques rely on curriculum learning, metalearning, and multitask learning methods to address scalability and generalization challenges [160,166].
These methods can enhance the functionality and acceptability of control systems for prosthetic devices and ultimately improve the quality of life of amputees. Before they can be widely adopted and implemented in real-world scenarios, however, their challenges and limitations must be addressed.
The human brain and deep learning models have different levels of complexity and scalability. The human brain has about 86 billion neurons and 100 trillion synapses [167], while deep learning models have far fewer artificial neurons and connections [168]. The human brain can perform more complex and diverse tasks than deep learning models but is limited by energy consumption, speed, and storage capacity [169]. The human brain cannot process large amounts of data as fast and efficiently as deep learning models, but these models require more computational resources and training data to achieve good performance [109].
The human brain and continuous learning techniques have different degrees of stability and plasticity. The human brain can maintain existing knowledge and skills while also adapting to new situations and challenges, balancing stability and plasticity [170]. Continuous learning techniques must often strike this balance themselves, facing two potential problems: catastrophic forgetting (losing previous knowledge and skills) and interference (degrading new knowledge and skills) [162]. The brain can prevent forgetting and interference through sleep, rehearsal, and consolidation [159], while continuous learning techniques use methods such as regularization, replay, and distillation to mitigate them [171].
The human brain and incremental learning techniques face different challenges in terms of scalability and generalization. The human brain has a limited capacity and lifespan but can learn new things incrementally [161]. Incremental learning techniques allow the gradual acquisition of new information and skills without relearning everything from the beginning [172]. However, they may struggle to handle a large number of tasks or classes and may not transfer learned knowledge and skills to new situations or domains [162]. The human brain can overcome its limitations in capacity and lifespan by using strategies such as chunking, abstraction, and analogy [173], whereas incremental learning techniques must use methods such as curriculum learning, metalearning, and multitask learning to tackle scalability and generalization challenges [139].
Despite the solutions proposed so far, there are still unsolved issues with prosthetic hands that need attention:
1. To design artificial hands that can emulate the sensory capabilities of the natural hand, using novel materials and technologies that can sense and generate different types of somatosensory stimuli [128,130].
2. To develop control interfaces that can deliver sensory information from artificial hands to the user's nervous system, using biocompatible and biodegradable materials and devices that can interface with the nervous tissue without causing damage or rejection [128,130].
3. To implement control algorithms that can integrate sensory information from different modalities and sources, using machine learning and artificial intelligence methods that can learn from data and feedback and adapt to context and task [174,175].
4. To incorporate learning mechanisms that can adapt to changes in sensory inputs or outputs, using computational neuroscience models and theories that can capture the plasticity and adaptability of the brain, and incorporate reward and reinforcement signals [70,71].
In conclusion, developing a control strategy for artificial hands inspired by the human brain is a promising but challenging research direction. Since January 2022, various control strategies have been developed to address the issue of achieving human-like control of prosthetic hands (Figure 3). However, the problem remains unsolved. By replicating human hand synergies with prosthetic hands, engineers could create more natural, intuitive, and adaptable devices that meet the needs and expectations of people who need or use artificial hands. Although progress has been made, there is still a need for better prosthetic hands that can closely interact with the human nervous system, more sophisticated control algorithms that can accommodate diverse sensory inputs and outputs, and more efficient learning mechanisms that can promote motor learning and habit formation. By achieving these goals, researchers could develop artificial hands that truly mimic and interface with the human brain and improve the quality of life of people who need or use artificial hands.

Figure 2 .
Figure 2. Workflow of a hierarchical control strategy.

Figure 3 .
Figure 3. Timeline of the control strategies developed from January 2022 to September 2023.

Table 1 .
Advantages and disadvantages of different control strategies for prosthetic hands.
sEMG signals are susceptible to external interference, such as electromagnetic fields, power lines, and other biological signals. These interference signals may introduce noise and artefacts and degrade the signals' quality and signal-to-noise ratio [96,138], and may require filtering and denoising techniques to remove them.