A Deep Reinforcement Learning Algorithm Based on Tetanic Stimulation and Amnesic Mechanisms for Continuous Control of Multi-DOF Manipulator

Abstract: Deep Reinforcement Learning (DRL) has been an active research area in view of its capability to solve large-scale control problems. To date, many algorithms have been developed, such as Deep Deterministic Policy Gradient (DDPG), Twin-Delayed Deep Deterministic Policy Gradient (TD3), and so on. However, the convergence of DRL often requires extensive collected data and many training episodes, which is data-inefficient and consumes considerable computing resources. Motivated by this problem, we propose a Twin-Delayed Deep Deterministic Policy Gradient algorithm with a Rebirth Mechanism, Tetanic Stimulation, and Amnesic Mechanisms (ATRTD3) for continuous control of a multi-DOF manipulator. In the training process of the proposed algorithm, the weighting parameters of the neural network are learned using Tetanic stimulation and the Amnesia mechanism. The main contribution of this paper is a biomimetic view of how to speed up convergence via the biochemical reactions generated by neurons in the biological brain during memory and forgetting. The effectiveness of the proposed algorithm is validated by a simulation example, including comparisons with previously developed DRL algorithms. The results indicate that our approach improves convergence speed and precision.


Introduction
Deep Reinforcement Learning (DRL) is an advanced intelligent control method that uses a neural network to parameterize a Markov decision process (MDP). DRL has been successfully applied to robotics [1][2][3][4], machine translation [5], autonomous driving [6], and target positioning [7], and shows strong adaptability. There are two kinds of DRL algorithms. One is based on value functions, such as Deep Q Network (DQN) [8] and Nature DQN [9]; the output of a value-based DRL algorithm is a discrete state-action value. The other is policy-based, such as Deep Deterministic Policy Gradient (DDPG) [10], Trust Region Policy Optimization (TRPO) [11], Asynchronous Advantage Actor-Critic (A3C) [12], Distributed Proximal Policy Optimization (DPPO) [13,14], and Twin-Delayed Deep Deterministic Policy Gradient (TD3) [15]. For continuous action spaces, an advanced search policy can improve the sampling efficiency of the underlying algorithms [10], and many research results focus on improving the exploration policy. Among them, Fortunato et al. [16] and Plappert et al. [17] put forward noise-based exploration policies that add noise to the action space and observation space. Bellemare et al. propose an exploration algorithm based on pseudo-counts for efficient exploration: it evaluates visit frequency with a density model satisfying certain properties and computes pseudo-counts that generalize to continuous spaces to encourage exploration [18]. In [19], Fox et al. innovatively propose a framework, DORA, that uses two parallel MDP processes to inject exploration signals into random tasks. Reward-based exploration leads to slower function approximation and fails to provide an intrinsic reward signal in time, so Badia et al. [20] propose the "never give up" (NGU) exploration strategy, designed to quickly prevent repeated visits to the same state within an episode.
Due to its coupling and complexity, the multi-DOF manipulator is a popular application target for DRL. The direct applications of DRL algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparently high sample complexity [21]. Therefore, applying DRL to multi-DOF robots requires a more effective DRL algorithm. Neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods and ideas that have largely dominated traditional approaches to AI [22].
In this paper, we design a new DRL algorithm named ATRTD3 based on research results from neuroscience and an analysis of the human brain's memory and learning process. We present a biomimetic view of speeding up convergence via the biochemical reactions generated by neurons in the biological brain during memory and forgetting, and we apply the neural-network parameter-updating mechanism with Tetanic stimulation and the Amnesia mechanism to DRL to further improve efficiency in manipulator applications.

Related Work
The advancement of DRL has driven the development of intelligent control of multi-DOF manipulators in recent years. Kim et al. [23] propose a motion planning algorithm for robot manipulators using TD3 whose planned paths, after 140,000 training episodes, are smoother and shorter than those designed by Probabilistic Roadmap. Based on the classic DDPG algorithm, Zhang et al. add Gaussian noise smoothly to improve the exploration of the algorithm, dynamically set the robot's grasping-space parameters to adapt to workspaces of multiple scales, and realize accurate robot grasping [24]. Robert Kwiatkowski et al. [25] used deep learning methods to make a manipulator build a self-model after 35 h of training. By comparing the application of DDPG and Proximal Policy Optimization (PPO) on a manipulator, Iriondo et al. [26] concluded that current DRL algorithms cannot yet obtain robust motion ability with acceptable training efficiency. The difficulty of applying DRL to the motion control of a multi-DOF manipulator lies in improving both exploration efficiency and the robustness of the manipulator's output actions. It is therefore worthwhile to draw inspiration from the research results of neuroscience. For flexible manipulators, some studies [27,28] are very interesting and helpful for the kinematics modeling and control design in this paper.
Long-term potentiation (LTP) is a form of activity-dependent plasticity which results in a persistent enhancement of synaptic transmission. LTP has been a source of great fascination to neuroscientists since its discovery in the early 1970s [29] because it satisfies the criteria proposed by Donald Hebb for a synaptic memory mechanism in his influential book 'The Organization of Behavior' [30].
LTP is a persistent enhancement of excitatory synaptic transmission induced by certain preceding operations of high-frequency stimulation (HFS) [31]. In LTP, stimulation changes the synaptic proteins, that is, changes the sensitivity of postsynaptic neurons to presynaptic neurons, thereby changing the strength and efficiency of synaptic signal transmission. Memory formation is considered the result of long-term synaptic plasticity, such as long-term depression (LTD) and LTP [32].
LTP and LTD have another potentially important role in modern neuroscience: the possibility that they may be exploited to treat disorders and diseases of the human central nervous system (CNS). A variety of neurological conditions arise from lost or excessive synaptic drive due to sensory deprivation during childhood, brain damage, or disease [33]. Memory and forgetting are stages the human brain must pass through while accepting knowledge and accumulating experience. This paper modifies the Actor-network module in DRL, turning the module optimized purely by gradient descent into a network module with biological characteristics.

Methods
ATRTD3 is a DRL algorithm proposed in this paper to improve the motion ability of a multi-DOF manipulator. Its innovation is to carry research results from neuroscience into DRL. The algorithm builds on the Twin-Delayed Deep Deterministic Policy Gradient algorithm with Rebirth Mechanism (RTD3) [34] and improves the update of the weighting parameters of the Actor-network module. Its highlight is the use of Tetanic stimulation and the Amnesia mechanism to randomly strengthen and weaken the weighting parameters of the neural network, realizing a bionic update of the network. The Actor network obtained through the deterministic policy gradient is further updated through these two mechanisms. Compared with other DRL algorithms, ATRTD3 adds a biologically characterized network-update step and further expands the scope of exploration. Figure 1 shows the framework of the overall algorithm. The pseudo-code of ATRTD3 is shown in Appendix A at the end of this paper. The following is a detailed description of the Tetanic stimulation and the Amnesia mechanism.

Tetanic Stimulation
Tetanic stimulation is the memory part of the Actor-network module. During back-propagation, the neural network obtains its parameter updates by gradient descent, realizing the iterative update of the network. By inspecting these updates, we can determine which neural nodes' weights are strengthened and which are weakened. Among the strengthened neurons, the degree of strengthening also differs. We therefore evaluate and sort the parameters of the strengthened neuron nodes, select the top-ranked ones, and apply Tetanic stimulation to those nodes' parameters to achieve LTP, as shown in Figure 2; the specific pseudo-code is given in Algorithm 1. The fragment of Algorithm 1 retained here shows the positive-weight branch: if random(0, 1) < κ and A.w(row_t, col_t) > 0, then A.w(row_t, col_t) ← (1 + random(0, 0.01)) × A.w(row_t, col_t); negative weights are handled by the corresponding else branch.

Directly modifying the parameters of neural nodes directly affects the nonlinear expression of the network. This influence is immediate, showing up at once in the continuous MDP, so it must be kept within a reasonable range while the network parameters are updated iteratively. The Tetanic stimulation mechanism is therefore nested inside the fixed delayed-update step, which lets it take effect to a certain extent without disturbing the overall update iteration, ensuring that the network converges toward improved performance during training without weakening exploration.
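The selection-and-strengthening step above can be sketched as follows. This is a minimal sketch, not the paper's implementation: `top_fraction`, `kappa`, and the symmetric treatment of negative weights are assumptions (the else branch of Algorithm 1 is elided in the text); only the (1 + random(0, 0.01)) scaling of the positive branch is taken from Algorithm 1.

```python
import numpy as np

def tetanic_stimulation(weights, weight_updates, top_fraction=0.1, kappa=0.5):
    """Sketch of the Tetanic stimulation step (assumed parameters)."""
    # Rank nodes by how strongly the gradient step reinforced them:
    # a weight counts as "strengthened" when its update pushes it away from zero.
    reinforcement = weight_updates * np.sign(weights)
    n_top = max(1, int(top_fraction * weights.size))
    top_idx = np.argsort(reinforcement, axis=None)[-n_top:]
    flat = weights.ravel()  # view into `weights`; edits apply in place
    for idx in top_idx:
        if np.random.rand() < kappa:  # stimulate with probability kappa
            # Algorithm 1 (positive branch): w <- (1 + random(0, 0.01)) * w;
            # applied to either sign here, this amplifies the weight's magnitude.
            flat[idx] *= 1.0 + np.random.uniform(0.0, 0.01)
    return weights
```

Because the stimulated weights are only ever scaled by at most 1%, the perturbation stays within the "reasonable range" the text requires.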

Amnesia Mechanism
The Amnesia mechanism is the forgetting part of the Actor-network module. When information transmission between neurons goes wrong, synapses can no longer perform the function of neurotransmitter transmission; the brain begins to fail at information transfer, forgetting sets in, and parts of the hugely redundant brain neural network, including some memory and logic units, begin to malfunction. Neuron function is not always in a stable, good working state; all kinds of accidents happen, just as the world's best snooker players cannot guarantee that every shot will be accurate. Forgetting is therefore present throughout the life and activity of neurons. For this reason, the Amnesia mechanism is added to the neural network by randomly selecting neurons with small probability and weakening those nodes' parameters. When the Amnesia mechanism weakens the representation ability of the network, its influence must remain controllable, that is, it must not affect the convergence trend of the network. To guarantee this, the weights of the selected neurons are weakened by a random force (a random percentage) in this paper, as shown in Figure 3; the specific pseudo-code is given in Algorithm 2.
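The weakening step can be sketched as below, following the shape of Algorithm 2 (a random draw ξ compared against an Amnesia threshold τ). The default values of `tau` and `max_weaken` are illustrative assumptions, not the paper's constants.

```python
import numpy as np

def amnesia(weights, tau=0.01, max_weaken=0.05):
    """Sketch of the Amnesia mechanism: with small probability tau,
    shrink a weight toward zero by a random percentage (assumed range)."""
    flat = weights.ravel()  # view into `weights`; edits apply in place
    for i in range(flat.size):
        xi = np.random.rand()          # random(0, 1) draw, as in Algorithm 2
        if xi < tau:                   # forget this weight with probability tau
            force = np.random.uniform(0.0, max_weaken)  # random "force"
            flat[i] *= 1.0 - force     # weaken: magnitude can only decrease
    return weights
```

Since the shrink factor is bounded by `max_weaken`, the weakening stays controllable and cannot flip a weight's sign, matching the requirement that forgetting not derail convergence.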

Experiment Setup
For the control problem of the multi-DOF manipulator, if we consider only the kinematic model and regard the motion of the manipulator as a discrete process moving the end effector from one position to another, deterministic-policy-gradient DRL methods such as RTD3 can achieve good results. However, if the motion of the manipulator is regarded as a continuous process, a new set of DRL inputs and outputs must be found. The idea adopted in this paper is to discretize the motion of the manipulator in time: the position deviation of the end effector from the target, the angular velocities of the joints, and the joint angles serve as the input of the DRL model, and the angular-acceleration command for the controlled joints over the next interval serves as the output, as shown in Figure 4. In this way, by controlling the joint angular accelerations, the discreteness of the earlier position-control process is resolved. However, this change inevitably increases the model dimensions, which places new demands both on the capability of the DRL algorithm and on the redesign of the reward function.
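The input/output interface described above can be sketched as follows. The names and the sign convention of the position deviation are illustrative assumptions; the dimensions (a 9-dimensional state, a 3-dimensional action) follow the description in the text.

```python
import numpy as np

def make_state(target, ee_pos, joint_vel, joint_ang):
    """Assemble the DRL input: end-effector deviation from the target
    (dx, dy, dz), three joint angular velocities, three joint angles."""
    dx, dy, dz = np.asarray(ee_pos, float) - np.asarray(target, float)
    return np.concatenate([[dx, dy, dz], joint_vel, joint_ang])

def make_action(net_output):
    """The DRL output: angular-acceleration commands for the base,
    shoulder, and elbow over the next time interval."""
    return np.asarray(net_output, float).reshape(3)
```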

Task Introduction
In this paper, the DRL algorithm is used to train a model controller for the multi-DOF manipulator. The model controls the angular accelerations of the joints so that the manipulator starts from rest at an initial position in the workspace, moves to the target position, and stops. Throughout training, the target position of the task is a fixed position in the manipulator's workspace. The core of the task is that the manipulator reaches the target position and, at the moment of arrival, every joint is at rest; that is, the manipulator reaches the target smoothly purely by controlling the joint angular accelerations. To bound the task, the whole training process is restricted: each episode is divided into twenty steps. This setting mainly accounts for the long convergence time of training, so the duration of a single episode must be kept short. This task is a simulation experiment testing the convergence and learning ability of the improved algorithm ATRTD3.

Simulation Environment Construction
The DRL algorithm establishes the manipulator model through the standard DH method [35] and uses the forward kinematics solution to obtain the spatial pose of the end effector from the joint angles. The DH method is a general modeling method for multi-link mechanisms, and standard DH models are used for serial-structure robots. The UR manipulator is representative in industrial production and scientific research, so this paper uses its structural dimensions and joint layout to build the simulated manipulator. Table 1 shows the DH parameters of this manipulator, where a is the link length, d is the link offset, α is the link twist angle, and θ is the joint angle; a and d are in meters, α and θ in radians. During the experiment, only the base, shoulder, and elbow are controlled, while wrist1, wrist2, and wrist3 are locked. Because the problem studied here focuses on the position-reaching ability of the manipulator, three joints suffice, so the three wrist joints are locked. The homogeneous transformation matrix is established from the DH parameters, as shown in Equation (1).
The forward kinematics of the manipulator is obtained by multiplying the homogeneous transformation matrices, as shown in Equation (2); this yields the position of the end of the manipulator in the base coordinate system {B}.
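The chain of Equations (1) and (2) can be sketched as below using the textbook standard-DH transform; the actual link values come from Table 1 and are not reproduced here, so `dh_rows` is a placeholder argument.

```python
import numpy as np

def dh_transform(a, d, alpha, theta):
    """Standard DH homogeneous transform for one link (Equation (1))."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows, thetas):
    """Multiply the per-link transforms (Equation (2)).
    dh_rows holds one (a, d, alpha) tuple per joint."""
    T = np.eye(4)
    for (a, d, alpha), theta in zip(dh_rows, thetas):
        T = T @ dh_transform(a, d, alpha, theta)
    return T[:3, 3]  # end-effector position in the base frame {B}
```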
In each episode, the target position is randomly generated in the workspace of the manipulator. In the experiment, the distance differences between the center of the end effector and the target in three directions (dx, dy, and dz), the angular velocities of the first three joints (ω_Joint_Base, ω_Joint_Shoulder, and ω_Joint_Elbow), and their absolute angles (θ_Joint_Base, θ_Joint_Shoulder, and θ_Joint_Elbow) are used as the input of the DRL model, and the angular-acceleration control commands of the base, shoulder, and elbow (ω̇_Joint_Base, ω̇_Joint_Shoulder, and ω̇_Joint_Elbow) are its output. To ensure the safe operation of the virtual manipulator, the angular acceleration (rad/s²) is limited, as shown in Equation (3).
ω̇_i ∈ (−0.5, 0.5), i ∈ {Base, Shoulder, Elbow}. When the DRL model outputs the angular-acceleration command ω̇_i, the joint-angle increment Δθ_i for this step is calculated by Equation (4) from the interval time t = 0.1 s and the previous step's angular velocity ω_i. The current joint angle θ_i is updated from the increment Δθ_i and the previous step's joint angle in Equation (5). The position of the manipulator end effector in the {B} coordinate system is then obtained from the homogeneous transformation matrix ⁰₆T. The joint angular velocity is updated as shown in Equation (6).
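The per-step update of Equations (4)-(6) can be sketched as follows. The extracted text does not show the equations themselves, so the standard constant-acceleration kinematics below is an assumption consistent with the quantities named above (t = 0.1 s, previous-step velocity and angle).

```python
import numpy as np

DT = 0.1  # interval time t = 0.1 s

def step_joints(theta_prev, omega_prev, omega_dot):
    """One simulation step for the three controlled joints.
    omega_dot is the commanded angular acceleration, limited per Equation (3)."""
    omega_dot = np.clip(omega_dot, -0.5, 0.5)
    # Assumed Equation (4): angle increment over the interval
    dtheta = omega_prev * DT + 0.5 * omega_dot * DT ** 2
    theta = theta_prev + dtheta          # assumed Equation (5)
    omega = omega_prev + omega_dot * DT  # assumed Equation (6)
    return theta, omega
```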
In this motion process, the DRL model sends the joint angular-acceleration command (the output of the DRL algorithm) to the manipulator according to the perceived environment and the manipulator's state (the input of the DRL algorithm), and issues the termination command when it judges that the motion has ended.

Rewriting Experience Playback Mechanism and Reward Function Design
The motion process of a multi-DOF manipulator is no longer a sequence of discrete spatial position points. As Figure 5 shows, this experiment completes the specified task by controlling the angular accelerations of the three joints. In the joint angular-velocity field drawn from the three joint angular velocities, in Case 1 the manipulator reaches the target position and stops, so the task is completed successfully; in Case 2 the joint angular velocities never reach zero, or the end effector passes quickly through the target position, and the task fails; in Case 3 the manipulator stops within the scheduled time but its end does not reach the target position, and the task again fails. Figure 5 also shows that the task in this experiment is harder than merely reaching the goal through discrete spatial positions. Therefore, the design of the reward function must be reconsidered; that is, the angular-velocity information of the joints must be introduced. Throughout the manipulator's movement, each instance of acceleration and deceleration affects the angular velocity of each joint at the moment the termination condition is reached and the episode stops. It is therefore necessary to improve the experience-replay mechanism and change how the experience pool stores transitions; in other words, the final angular velocity of each joint should be shared by all the preceding continuous movements. As shown in Equation (7), the absolute value of each joint's angular velocity is multiplied by the constant λ_i and divided by the number of steps T_Stop to obtain the corresponding reward value.
This part of the reward is added, as a negative reward, to the corresponding rewards in the experience pool, so that the joint angular-velocity state feeds back into the update of the neural network parameters, as shown in Figure 6.
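The rewriting of stored experience described above, spreading the terminal joint-velocity penalty of Equation (7) over every transition of the episode, can be sketched as follows. The transition-tuple layout and the λ values are illustrative assumptions.

```python
def add_velocity_penalty(episode, final_omega, lam=(1.0, 1.0, 1.0)):
    """Retrofit Equation (7)'s penalty onto an episode before it enters
    the replay buffer. `episode` is a list of (s, a, r, s') tuples;
    `lam` plays the role of the constants lambda_i (assumed values)."""
    t_stop = len(episode)  # number of steps T_Stop
    # sum_i lambda_i * |omega_i| / T_Stop, over the three joints
    penalty = sum(l * abs(w) for l, w in zip(lam, final_omega)) / t_stop
    # Subtract the shared negative reward from every stored transition
    return [(s, a, r - penalty, s2) for (s, a, r, s2) in episode]
```

Because the penalty is divided by the episode length, every step carries an equal share of the blame for the joints still moving at termination.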
The design of the reward function in this paper adds the angular-velocity reward to the Step-by-Step reward function (r_Step−by−Step) [35]. The Step-by-Step reward function mainly consists of two parts: the first is the negative of the Euclidean distance between the end of the manipulator and the target; the second is the reward obtained by comparing how much closer to the target the current position of the manipulator end is than its previous position during the movement. The resulting reward function is shown in Equation (8), where λ_4 and λ_5 are two constants.

Simulation Experimental Components
In the application of the DRL algorithm, one unavoidable problem is the random generation of a large number of neural network parameters. Because of this randomness, training cannot proceed effectively on a specific task from the start, so a more efficient, faster-converging, and more stable algorithm framework is needed to compensate for this disadvantage. Since ATRTD3, RTD3, and TD3 are all improvements built on DDPG, the comparison group of this experiment consists of these four algorithms. In the comparison experiment, all algorithms are trained on the same task and acquire the ability to solve it through learning. We specify two evaluation indexes. The first is the average score: the sum of all reward scores in an episode divided by its total number of steps. The second is the absolute value of the angular velocity of the base, shoulder, and elbow at the end of an episode. Additionally, within the same group of experiments, to ensure a fair comparison of the practical ability and convergence of the four algorithms, we use the same initialization model for each algorithm.

Discussion
Through the comparison of DDPG, TD3, RTD3, and ATRTD3 in Figure 7a, we can clearly see the improvement in the learning ability of ATRTD3. Therefore, we further analyze the average score when the final distance error is roughly the same. From Figure 7a, we can see that the average score of ATRTD3 is higher than that of the other algorithms when the final distance error is the same in purple areas A, B, and C, which indicates that the learning effect of ATRTD3 is better. The reason is that part of the reward function is the negative reward introduced by the absolute value of the angular velocity of each joint at the termination of each episode. Through purple areas A, B, and C of Figure 7b, ATRTD3 can better guide the multi-DOF manipulator to move to the target position and stop there, so ATRTD3 is more efficient and stable than the other algorithms. Secondly, through Figure 7, we can draw the conclusion that ATRTD3 shows stronger stability than the other algorithms in the late training period.
Figure 7. Four randomized experiments (Group1, Group2, Group3, and Group4) were conducted to evaluate the performance of four algorithms (RTD3, ATRTD3, TD3, and DDPG): (a) the average score (Avg_score); (b) the final error distance (Final_Dis). Purple areas A, B, and C need special attention.
As can be seen in Figure 8, although DDPG reaches a score level close to that of ATRTD3 through the later training, we can clearly see from the average score curve and the final error distance curve that ATRTD3 has better stability in the later training stage compared with DDPG: the two curves of ATRTD3 are straighter, while there are many spikes in the two curves of DDPG.
From Figure 9, we can see that ATRTD3 improves the average score by at least 49.89% compared with the other three algorithms. In terms of stability, ATRTD3 performs better, with an improvement of at least 89.27% compared with the other three algorithms.
To further demonstrate the advantages of ATRTD3 from the underlying control variables, we collect the final angular velocities of the three joints during model training, as shown in Figure 10a,b. Figure 10a shows the final angular velocity of the three joints under the high score model (Group1), and Figure 10b shows the final angular velocity of the three joints of the arm controlled by the low score model (Group2). Through the local magnification in each image, we can clearly see the final angular velocity of the three joints at the end of the training process, where the red curve represents the speed of each joint under the guidance of ATRTD3 throughout the training process. ATRTD3 has obvious advantages over the other three algorithms. The final joint angular velocity of the manipulator based on ATRTD3 is always around 0 rad/s, with the minimum fluctuation and the best stability.
In the high score model (Group1), only DDPG achieves a training score similar to that of ATRTD3, so only the joint angular velocities of DDPG and ATRTD3 are compared. It can be seen from Table 2 that, except for the base joint, whose final angular velocity is roughly the same as that of ATRTD3, DDPG is at an order-of-magnitude disadvantage in the angular velocities of the other two joints. By comparing the variances in Table 2, it is not difficult to find that ATRTD3 has an absolute advantage in stability, generally by one order of magnitude, since the smaller the variance, the more stable the training.
In the low score model (Group2), the advantages of ATRTD3 cannot be seen directly in Table 3. However, after accumulating and comparing the angular velocities and angular velocity variances of the three joints, it can be seen from Figure 11 that ATRTD3 is still better than the other three algorithms.
Figure 11. In the high score model (Group1) and the low score model (Group2), the improvement of the ATRTD3 algorithm over the other three algorithms in two aspects: skill level in the control of joint angular velocity and late-training stability.
Through Figure 11, we can see that ATRTD3 significantly improves the skill level and stability compared with the other three algorithms. ATRTD3 generally improves the skill level by more than 25.27% and improves stability by more than 15.90%. Compared with TD3, the stability improvement of ATRTD3 in Group2 is −5.54%. The main reason is that TD3 falls into a more stable local optimum, so this index does not call the performance of ATRTD3 into question.
In the Actor neural network, the Tetanic stimulation mechanism forcibly changes the parameters of some neurons and enlarges the exploration space of actions within a certain range. Through Tetanic stimulation, the weights of the selected neurons can be increased again, which helps to improve the convergence speed of the neural network and speed up training. The Amnesia mechanism forcibly resets the parameters of some neurons, which is in line with the decay of biological cells.
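A minimal sketch of how the two mechanisms could act on a weight matrix, under our own assumptions: the paper does not specify the selection fraction, amplification gain, or re-initialization scale, so `frac`, `gain`, and the reset scale below are illustrative placeholders.

```python
import numpy as np

def tetanic_stimulation(weights, frac=0.01, gain=1.5, rng=None):
    """Illustrative sketch: amplify a random fraction of weights, mimicking
    synapses strengthened by tetanic stimulation. frac and gain are assumed
    hyperparameters, not values from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    w = weights.copy()
    mask = rng.random(w.shape) < frac   # randomly select neurons' weights
    w[mask] *= gain                     # forcibly increase them
    return w

def amnesia(weights, frac=0.01, rng=None):
    """Illustrative sketch: forcibly re-initialize a random fraction of
    weights, mimicking the decay (forgetting) of biological neurons."""
    rng = np.random.default_rng() if rng is None else rng
    w = weights.copy()
    mask = rng.random(w.shape) < frac
    w[mask] = rng.standard_normal(mask.sum()) * 0.01  # small fresh values
    return w
```

Applied occasionally to the Actor network during training, the first operation widens action exploration around already-useful weights, while the second discards a few weights so the network can escape local optima.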

Conclusions
In this paper, we propose an algorithm named ATRTD3 for continuous control of a multi-DOF manipulator. In the training process of the proposed algorithm, the weighting parameters of the neural network are learned using the Tetanic stimulation and Amnesia mechanisms. The main contribution of this paper is a biomimetic view that speeds up the converging process by imitating the biochemical reactions generated by neurons in the biological brain during memory and forgetting. The integration of the two mechanisms is of great significance for expanding the scope of exploration, jumping out of local optimal solutions, speeding up the learning process, and enhancing the stability of the algorithm. The effectiveness of the proposed algorithm is validated by a simulation example including comparisons with previously developed DRL algorithms. The results indicate that our approach improves convergence speed and precision in the control of a multi-DOF manipulator.
The proposed ATRTD3 successfully applies research results from neuroscience to the DRL algorithm to enhance its performance, using a biologically inspired design in code to approximately realize some functions of the biological brain. In the future, we will further draw on neuroscience achievements, improve the learning efficiency of DRL, and use DRL to achieve reliable and accurate manipulator control. Furthermore, we will continue to tackle the scheme of controlling six joints to realize the cooperative control of position and orientation.

Conflicts of Interest:
The authors declare no conflict of interest.