Artificial Intelligence-Based Optimal Grasping Control
Sensors 2020, 20, 6390

A new tactile sensing module was proposed to sense the contact force and location of an object on a robot hand; the module was attached to the robot finger. Three air pressure sensors are installed at the tip of the finger to detect the contact force at those points. To obtain a nominal contact force at the finger from the data of the three air pressure sensors, a force estimation method was developed based on the learning of a deep neural network. The data from the three air pressure sensors were utilized as inputs to estimate the contact force at the finger. In the tactile module, the arrival time of the air pressure sensor data was utilized to recognize the contact point of the robot finger against an object. Using the three air pressure sensors and the arrival time, the finger location can be divided into 3 × 3 block locations. The resolution of the contact point recognition was improved to 6 × 4 block locations on the finger using an artificial neural network. The accuracy and effectiveness of the tactile module were verified through real grasping experiments. With this stable grasping, an optimal grasping force was estimated empirically with fuzzy rules for a given object.


Introduction
Among the pieces of information that are received as feedback during robot movements, vision and tactile sensations are the most useful. When there is information on direct contact with a target object, both the target object and robot can be protected, and the operation can be performed safely. Tactile sensing is mainly used to judge the hardness of objects that are touched or to sense the temperature to carry out additional avoidance actions. Studies have been investigating tactile sensors for humanoid robot hands for mimicking human tactile sensing [1][2][3][4][5]. In particular, methods have been proposed for collecting tactile sensing information from various sensors fused together and for developing a tactile sensing module for diverse body parts [6,7]. However, as tactile sensors are still in the development stage and the manufacturing costs of tactile sensors are high in many cases, it is difficult to find commercialized products using tactile sensors [8,9]. Moreover, there are tactile sensors for robots that function based on pressure sensors actuated by fluids [10,11].
There is also a method of configuring a tactile sensor as a force sensing resistor (FSR) [12,13]. A method using a super-resolution algorithm has also been proposed, although it measures a lower spatial resolution than other types of force or pressure sensors [14]. However, neither method is sufficiently portable to be applied to other systems.
Unlike existing tactile sensors, a tactile sensing module constructed with air pressure sensors (as developed by us) has a response speed similar to that of an FSR and a wider sensing range [15]. In addition, it can be manufactured at a cost several dozen times lower than that of existing tactile sensors. It can also perform diverse applied operations according to the temperature of an object through its temperature sensing function and can measure tactile sensations more precisely.
In robot gripping operations, the use of neural networks is becoming widespread. In most research, vision is used as the input to the system [16,17]. In particular, the research that used an optical tactile sensor called GelSight is impressive [18]. Our system is robust against the effects of object characteristics and the environment: it does not require light- or sound-related characteristics when measuring the pressure value in contact with the robot. This paper is organized as follows. Section 2 discusses the sensing of contact forces using pressure sensors. In Section 3, an arrival of time (AoT) algorithm is proposed for sensing the degree of contact of objects touched by the robot finger through the developed tactile sensor. In Section 4, the resolution is improved by configuring a neural network, and the improvement is indicated through a contact map. In Section 5, the object is grasped with the optimum grasping force based on the fuzzy controller, after detecting the contact force. Section 7 discusses performing adaptive grasping for different objects using the developed tactile sensor.

Configuration of Tactile Sensing Module through Air Pressure Sensors
In this study, to sense touches with a gripper-type robot hand composed of three fingers, a tactile sensing module is developed for implantation at the tip of a finger, as shown in Figure 1. The developed tactile sensor can be implanted in parts corresponding to the thumb and the middle finger. Considering the structure in which the robot finger skin is implanted, a structure made of silicone was designed; three MS5611 sensors can be inserted therein. Using the aforementioned method, the degree to which the robot hand touches an object can be sensed [19].
As shown in Figure 2, the skin of the robot finger was configured with a silicone-based material, using a material capable of (and suitable for) transmitting external shocks while withstanding them. In addition, to transmit shock waves well with a softer material, the inside was made using liquid silicone and solid silicone in a 1:1 ratio; the resulting material was similar to a solid jelly, thereby facilitating the sensing of touches by the air pressure sensors.

When grasping a target object, the weight of the object should be sensed and an appropriate force should be applied to perform stable grasping. To this end, the weight of the touched object is first sensed through the air pressure sensors. Thereafter, the output values of the air pressure sensors according to the weights are learned through a deep neural network (DNN). For the experimental conditions, the mass is increased steadily, while the temperature and surface area of the object remain the same.
Based on the aforementioned information, the linearized weights can be predicted, according to the outputs of the sensor. The experiment was conducted by continuously adding and replacing 10 g weights in a cylindrical storage box, as shown in Figure 3.

Neural Network Configuration for Predicting Contact Force
When the weight of an object is linearly increased and measured using a module composed of air pressure sensors, sections may exist where the values deviate non-linearly. For the module to function as a sensor, such nonlinearity should be linearized. This is achieved by configuring a DNN, as shown in Figure 4, and the values for the weights of the object are predicted through learning.

The object function for training the DNN is defined as the mean squared error between the neural network output for the training data and the actual object weight:

J(θ, b) = (1/m) Σ_{i=1}^{m} (ŷ_i − y_i)², with ŷ_i = f(x_i; θ, b) (1)

where m represents the number of training data sets, ŷ_i represents the ith predicted value, y_i represents the ith label, x_i represents the ith input data vector, θ represents the weight vector, and b represents the bias.
The DNN consists of an input layer, hidden layers, and an output layer. The variables of the three air pressure sensors are the input elements; the linear weight gain values are input as y_i for supervised learning. For the output from one layer to the next, the input values from the preceding layer are multiplied by the weights (i.e., the connection strengths), the bias b is added to obtain the element values, and the activation function is applied to obtain the node values of the next layer. Here, sigmoid is used as the activation function for the input and output layers, and a rectified linear unit (ReLU) and sigmoid are used for the hidden layers. Adam is used as the optimizer [20]. The weights are updated through the gradient descent method, as shown in Equation (2). The gradient descent method updates the weights through the product of the differential value of the objective function and the learning rate and modifies the weights in all hidden layers to reduce the error output from the final layer:

ω ← ω − η ∂J/∂ω (2)
where ω is a weight vector and η is a learning rate (used when applying the current variation of ω to the update of ω). The proposed DNN performs repetitive learning using a backpropagation algorithm. Table 1 presents the weights of the object as predicted when the average measured values of the air pressure sensors, which change according to the linear weight increase, are linearly distributed and the relevant values are input as described above. Accordingly, linear output values are obtained; thus, the module can function as a sensor.
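As an illustration of the objective in Equation (1) and the update rule in Equation (2), the sketch below fits a pressure-to-weight model by plain gradient descent. This is not the authors' code: a linear model stands in for the full DNN, and the synthetic sensor readings (a 1010 hPa bias plus an assumed 0.02 hPa-per-gram slope) are invented for the example.

```python
def mse(theta, b, xs, ys):
    # Equation (1): mean squared error between predictions and labels.
    m = len(xs)
    return sum((theta * x + b - y) ** 2 for x, y in zip(xs, ys)) / m

def train(xs, ys, eta=0.1, epochs=3000):
    # Equation (2): w <- w - eta * dJ/dw, applied to both parameters.
    theta, b = 0.0, 0.0
    m = len(xs)
    for _ in range(epochs):
        d_theta = sum(2 * (theta * x + b - y) * x for x, y in zip(xs, ys)) / m
        d_b = sum(2 * (theta * x + b - y) for x, y in zip(xs, ys)) / m
        theta -= eta * d_theta
        b -= eta * d_b
    return theta, b

# Synthetic data: weights added in 10 g steps (as in Figure 3); the assumed
# sensor model is a 1010 hPa bias plus 0.02 hPa per gram of load.
weights_g = [10 * k for k in range(1, 11)]
readings = [1010.0 + 0.02 * w for w in weights_g]
xs = [r - 1010.0 for r in readings]   # subtract the bias before fitting
theta, b = train(xs, weights_g)
```

On this noise-free data the fit recovers the underlying 50 g-per-hPa slope, so a reading of 1011.0 hPa maps back to roughly 50 g.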

Touch Sensing Using Arrival of Time (AoT) Algorithm
When robot fingers grasp an object, the contact position may vary depending on the shape of the object, the position where the object is placed, and the posture of the fingers. The AoT algorithm is based on the positions and characteristics of the air pressure sensors constituting the touch sensing unit. Using the algorithm, the touch sensing unit can sense the position where the object is touched, in addition to predicting the weight of the object [21]. The AoT algorithm senses a touch position on the basis of the sensing time of each sensor and the distance value (r) between each sensor and the contact area, as shown in Figure 5. In this study, the touch sensing unit was divided into nine equal-sized rectangles, and the positions of the air pressure sensors were arranged (as shown by the P points) in an inverted triangular shape, in consideration of the binding structure with the robot hand frame. The divided areas are denoted as Arr_n, and the distances between the center point of each area and the sensors are denoted by r_{n,k}.

The AoT algorithm for detecting an object when the object comes into contact with the touch sensing module is expressed as shown in Equation (3). It is calculated by multiplying the value obtained by dividing the measured air pressure value by r_{n,k} with the total time during which the air pressure sensors sensed, divided per sensor:

AoT_n = Σ_{k=1}^{3} (P_k / r_{n,k}) (t_x / t_k) (3)
where P_k refers to the air pressure value as measured by the kth air pressure sensor, t_x refers to the sensing time of the air pressure sensor, and t_k refers to the time sensed by the kth air pressure sensor after contact.
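The body of Equation (3) is not fully legible in this copy, so the sketch below implements one literal reading of the prose: each candidate area Arr_n sums, over the three sensors, the measured pressure divided by the distance r_{n,k}, weighted by the time ratio t_x / t_k, and the touched area is the highest-scoring block. The distances, pressures, and timings are invented illustration values, not measurements from the paper.

```python
def aot_scores(pressures, arrival_times, distances, t_x):
    # One reading of Equation (3): a sensor that is closer to an area
    # (small r_{n,k}) and responds sooner (small t_k) contributes more.
    scores = {}
    for n, r_nk in distances.items():
        scores[n] = sum(p / r * (t_x / t)
                        for p, r, t in zip(pressures, r_nk, arrival_times))
    return scores

def contact_area(pressures, arrival_times, distances, t_x):
    scores = aot_scores(pressures, arrival_times, distances, t_x)
    return max(scores, key=scores.get)

# Illustration with three of the nine areas; distances (mm) from each
# area's center to sensors P1..P3 are made up for the example.
distances = {1: (5.0, 12.0, 12.0), 2: (9.0, 9.0, 9.0), 3: (12.0, 12.0, 5.0)}
pressures = (3.0, 1.0, 1.0)        # sensor P1 sees the largest pressure rise
arrival_times = (1.0, 2.0, 2.0)    # and responds first
area = contact_area(pressures, arrival_times, distances, t_x=2.0)
```

With sensor P1 both strongest and fastest, the area nearest P1 (area 1 in this made-up layout) scores highest.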

Enhancement of Sensing Resolution through Learning
To enhance the sensing precision of the contact area (as sensed according to the contact position and weight of the object), an artificial neural network is configured to learn the contact area. Through the foregoing, the resolution of the touch sensing can be increased from Arr_9 to Arr_24. The two hidden layers of the artificial neural network are composed of 64 and 32 nodes, respectively. ReLU is used as the activation function, and Adam is used as the optimizer. For the contact area input into the artificial neural network, as shown in Figure 6, the contact area Arr_n (from the AoT algorithm parameters) is used. The existing Arr_9 with 3 × 3 resolution is learned sequentially in a clockwise direction according to the center of gravity, as shown in Figure 7. The learning is conducted again after increasing the resolution to 4 × 6.
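The forward pass of such a network, with two hidden layers of 64 and 32 nodes under ReLU and 24 output scores for the 4 × 6 blocks, can be sketched as follows. The weights here are random placeholders rather than the trained values, so the prediction is only shape-correct; the 9-element input and the initialization range are assumptions for the example.

```python
import random

def relu(v):
    # Rectified linear unit applied elementwise.
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    # One fully connected layer: out[j] = sum_i v[i] * weights[j][i] + biases[j]
    return [sum(x * w for x, w in zip(v, row)) + b
            for row, b in zip(weights, biases)]

def init_layer(n_in, n_out, rng):
    # Placeholder random weights; the paper trains these with Adam.
    return ([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(features, layers):
    h = features
    for idx, (w, b) in enumerate(layers):
        h = dense(h, w, b)
        if idx < len(layers) - 1:
            h = relu(h)   # ReLU on the two hidden layers only
    return h

rng = random.Random(0)
# 9 AoT-derived inputs (3 x 3 areas) -> 64 -> 32 -> 24 outputs (4 x 6 blocks).
layers = [init_layer(9, 64, rng), init_layer(64, 32, rng), init_layer(32, 24, rng)]
scores = forward([0.5] * 9, layers)
predicted_block = max(range(len(scores)), key=scores.__getitem__)
```

Training these layers would replace init_layer's random values with learned ones; only then would predicted_block correspond to a real contact location.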


Cases are further divided to build the training data by considering the contact area of the tactile sensing module with the contact object. The divided contact areas are first moved to the right by one block and are then moved to the bottom by one block when completed, i.e., they are learned sequentially, as shown in Figure 7.
To change the contact weight, the contact part on the gripper is accurately sized to modify the contact area, as shown in Figure 8. To change the force exerted on each area, a precise-control gripper is used. Figure 9 shows that the experiment was conducted by varying the contact force to 5, 7, and 9 kg, among others, by changing the torque of the gripper. The size measured in Figure 9 corresponds to that in Figure 7e. The gripper used is the RH-P12-RN model from ROBOTIS. The contact areas in Figure 7b and Figure 7f are of the same size, but they are different cases. Hence, a unique number is assigned to each area, such as Arr_1 = 1.1, Arr_6 = 1.6, and Arr_24 = 4.6.
A numerator is set by sequentially adding Arr_i, which is constructed depending on each contact shape, and the contact size is set as the denominator. The average contact area is calculated as shown in Equation (4):

Arr_avg = (Σ_i Arr_i) / n (4)
where n refers to the size, and i refers to the unique number of Arr_i, which is the area unit depending on each contact shape. Table 2 shows some of the training data for the average of the contact area defined in Equation (4). In total, 10,000 training data are measured for each size, and the values are averaged over several experiments. As there are many similar values, only typical values are described as examples. Each pressure sensor has an initial value of 1010.xx. The decimals change randomly, but the integers have meaningful values depending on the contact force. The value may continue to increase during the experiment; when the experiment is paused for approximately 2 min to correct this, the bias value is adjusted again to set the initial value to 1010.xx. In area learning, the area and position measured by each sensor can be inferred through the average value of the area, which is the last column of the table, corresponding to the label value. To prevent overfitting of the model, the L2 regularization was set to 0.001 [22], and the dropout ratio was set to 30%. Figure 10 shows the predicted result of the contact location through training. Since the model takes various parameters as input values and predicts the contact location, an MLP (multi-layer perceptron) is suitable.
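The unique numbering (Arr_1 = 1.1, Arr_6 = 1.6, Arr_24 = 4.6) and the Equation (4) label can be computed directly. Reading the codes row by row across a 4 × 6 grid is an assumption here, chosen because it reproduces the three examples given in the text.

```python
def area_code(i, cols=6):
    # Unique row.column code for Arr_i on the assumed 4 x 6 grid:
    # Arr_1 -> 1.1, Arr_6 -> 1.6, Arr_24 -> 4.6.
    row, col = divmod(i - 1, cols)
    return round((row + 1) + (col + 1) / 10.0, 1)

def average_contact_area(indices):
    # Equation (4): sum of the codes of the touched blocks over the contact size n.
    codes = [area_code(i) for i in indices]
    return sum(codes) / len(codes)
```

For a two-block contact covering Arr_1 and Arr_2, the label is (1.1 + 1.2) / 2 = 1.15.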

As shown in Figure 11, the touch sensing module is divided into 24 areas, and touches are expressed differently according to the contact position with the object. When a touch is detected at a contact position, it is expressed with a high concentration of red.

Fuzzy Controller for Optimal Grasping
The control system for controlling grasping is shown in Figure 12. This controller controls the robot hand with the optimum torque operating value, obtained by passing the pressure value output from the tactile sensing module located at the fingertip of the robot hand through the fuzzy controller [23]. In the developed touch sensing module, the difference between the average of the contact pressure values and the output value at the time of grasping, together with its differential value, is entered into the fuzzy controller. In Figure 13, the optimum grasping position and torque force are derived by using the outputs of the grasping torques. That is, to control the grasping force with the given output torque values, position control is implemented.

Figure 14 presents the fuzzy control system. The knowledge base refers to knowledge on the average of the pressure values and the amount of their change arising from contact, which are the input values, and knowledge on the membership function (MF) of the torque value during grasping, which is the output value. The rule base is information on the control rules of the robot hand. Fuzzification first makes a control decision by normalizing a crisp input value to a value between 0 and 1. The fuzzy inference engine infers the fuzzy output through the Mamdani method for an input value, using the rules from the rule base and the membership functions of the database. Defuzzification generates a numerical value for the extent of torque during grasping by converting the fuzzy output from the fuzzy inference engine into a crisp value through the center average method. As a result, grasping position control is performed.

Table 3 lists the MFs. Two types of MF are used for the average value of the pressure sensor, which is an input value, and for the derivative of the average value. For NH and PH in Table 3, the Gaussian function is used, as shown in Equation (5):

f(x; σ_i, µ_i) = exp(−(x − µ_i)² / (2σ_i²)) (5)
Hence, among the seven MFs, the cases in which i is 1 and 9 correspond to Equation (5):

f(x; σ_i, µ_i) = exp(−(x − µ_i)² / (2σ_i²))   (5)

Here, σ_i and µ_i refer to the standard deviation and the mean, respectively. The remaining MFs are defined as triangular functions, as shown in Equation (6):

f(x; a_i, b_i, c_i) = max(min((x − a_i)/(b_i − a_i), (c_i − x)/(c_i − b_i)), 0)   (6)

where i = 2, 3, . . . , 8; b_i denotes the peak point of an MF; and a_i and c_i are the boundary values of the MF. The Gaussian function is also used for the MF of the output value, as shown in Equation (5). The complete set of fuzzy MFs for the contact pressure value during robot hand grasping, the rate of change of the contact pressure value, and the output force is shown in the corresponding figure.
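The inference pipeline above (fuzzification, Mamdani min-inference, center-average defuzzification) can be sketched as follows. The MF parameters and rule centers here are illustrative placeholders, not the values from Table 3 or Figure 14:

```python
import math

def gaussian_mf(x, sigma, mu):
    # Gaussian MF, used for the outermost sets such as NH and PH, Eq. (5)
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def triangular_mf(x, a, b, c):
    # Triangular MF with peak b and boundaries a, c, Eq. (6)
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def mamdani_center_average(pressure, d_pressure, rules):
    """Mamdani inference with center-average defuzzification.
    Each rule is (mf_pressure, mf_dpressure, torque_center)."""
    num, den = 0.0, 0.0
    for mf_p, mf_dp, torque_center in rules:
        w = min(mf_p(pressure), mf_dp(d_pressure))  # Mamdani AND via min
        num += w * torque_center
        den += w
    return num / den if den > 0 else 0.0

# Illustrative rule table (centers and MF parameters are assumptions,
# not the entries of the paper's rule base):
example_rules = [
    (lambda p: triangular_mf(p, 0.0, 0.25, 0.5),
     lambda d: triangular_mf(d, -0.5, 0.0, 0.5), 0.3),   # low pressure -> low torque
    (lambda p: triangular_mf(p, 0.25, 0.5, 0.75),
     lambda d: triangular_mf(d, -0.5, 0.0, 0.5), 0.6),   # medium pressure -> more torque
]
```

The center-average step weights each rule's output center by its firing strength, which is what reduces the fuzzy output to the single crisp torque value used for position control.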

Robot Hand Control System
The system is controlled by a PC and three microcontroller units (MCUs), as shown in Figure 19. The PC performs the kinematics computations through a MATLAB graphical user interface and forwards the target position derived through inverse kinematics to an MCU. The MCU carries out control based on the transmitted information and feeds the current state back to the PC. The robot used herein has a gripper-type robot hand attached to a five-degree-of-freedom manipulator comprising five DC motors.
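The PC-to-MCU command/feedback cycle described above can be sketched as follows. The message layout and the `LoopbackMcu` stand-in are assumptions for illustration; the actual MATLAB GUI and MCU firmware interfaces are not specified at this level of detail:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JointCommand:
    angles_deg: List[float]   # target angles for the 5-DOF manipulator

@dataclass
class JointFeedback:
    angles_deg: List[float]   # current angles reported back by the MCU

class LoopbackMcu:
    """Stand-in for the real MCU link: echoes the commanded position back,
    modeling a joint that has already reached its target."""
    def send(self, cmd: JointCommand) -> None:
        self._last = cmd
    def receive(self) -> JointFeedback:
        return JointFeedback(angles_deg=list(self._last.angles_deg))

def control_step(mcu, target_angles):
    """One PC-side cycle: forward the IK target, read feedback,
    and return the remaining position error per joint."""
    mcu.send(JointCommand(angles_deg=target_angles))
    fb = mcu.receive()
    return [t - c for t, c in zip(target_angles, fb.angles_deg)]
```

On the real system the transport would be the CAN link described below rather than an in-process stub, but the command/feedback structure is the same.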

Figure 19. Control system configuration.
The specifications of the DC motors and of the robot hand, which constitute joints 1-5 of the robotic arm, are shown in Table 4. To control the robot hand equipped with the tactile sensing module shown in Figure 1, a control module was constructed, as shown in Figure 20. It is made up of two layers: the upper layer has an operation unit and a sensing unit, and the lower layer has a communication unit and a power supply unit.
The control commands and encoder data exchanged between the MCU and the robot hand are converted through a communication converter between the serial communication interface (SCI) and RS-485. The pressure sensor at the fingertip transmits the sensed data to the MCU through inter-integrated circuit (I2C) communication, and the MCU transmits the data to the PC through controller area network (CAN) communication. The robot hand is controlled through this process. The robot hand uses RS-485 communication, and the MCU uses the SCI, CAN, and I2C communications. Because CAN communication is used when the manipulator is controlled through the PC, the communication unit includes a transceiver for level conversion when transmitting the data to control the robot hand. In addition, a DC-DC converter is placed on the bottom layer of the module, as shown in Figure 21, for a stable voltage supply to the pressure sensor, the MCU, and the communication terminal.
On the upper layer, one MCU and a pressure sensor module for sensing contact can be installed. One tactile sensing module requires a total of three pressure sensor modules, and a total of six pressure sensor modules can be mounted on one robot hand control module. This structure enables the modules mounted on the thumb and middle finger to be controlled simultaneously.
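As a rough illustration of the sensing-side data path, the sketch below packs the three fingertip pressure readings into a single 8-byte CAN payload. The arbitration ID and payload layout are hypothetical; the text does not specify these bus-level details:

```python
import struct

# Assumed framing: one CAN frame carries three 16-bit raw pressure readings
# (one per fingertip air pressure sensor) plus two reserved bytes.
CAN_ID_PRESSURE = 0x101   # hypothetical arbitration ID

def pack_pressure_frame(p1: int, p2: int, p3: int) -> bytes:
    """Pack three raw pressure readings into an 8-byte CAN data field,
    little-endian, clamping each value to the 16-bit sensor range."""
    clamp = lambda v: max(0, min(0xFFFF, v))
    return struct.pack("<3HH", clamp(p1), clamp(p2), clamp(p3), 0)

def unpack_pressure_frame(payload: bytes):
    """PC-side decoding of the frame produced above."""
    p1, p2, p3, _reserved = struct.unpack("<3HH", payload)
    return p1, p2, p3
```

A classic CAN data field is at most 8 bytes, which is why all three readings are packed into one frame here rather than sent separately.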

Adaptive Grasping Experiment
A grasping experiment was conducted based on a robot hand whose motors are driven by a Cortex-M3 microcontroller. To create diverse contact forces and areas, objects with diverse shapes and hardness levels were selected as grasping targets, as shown in Figure 22. When tissue paper (with a low level of hardness) or paper cups (with low strength) were grasped, the touch sensing values and the motor current measured by the touch sensing module were used to perform adaptive grasping through torque control. The maximum torque output limit of the motors moving the robot fingers was set on a scale from 0 (0%) to 1023 (100%). When grasping was conducted with the torque set to 100%, the current continued to increase until the grasping action was released, leading to a grasping failure.
Figure 23 shows that the current value increases when a roll of tissue is grasped. The current generated during grasping depends on the object being grasped and the degree of touch sensing; these are fed back to conduct grasping in compliance with the set torque value.
Figure 24 shows the output values of the grasping torque and grasping position, as adjusted according to the contact force fed back from the touch sensing module when grasping the paper cup. Thus, it can be seen that optimal control of the grasping torque and position is achieved.
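The adaptive behavior described above, tightening until contact is sensed and backing off before the overcurrent-induced release, can be sketched as a single feedback step. The thresholds and step size below are assumptions; the actual limits are set empirically through the touch sensing feedback (Table 5):

```python
# Motor torque limit register on this hand spans 0 (0%) to 1023 (100%).
TORQUE_MAX_RAW = 1023

def adapt_torque(torque, contact_force, current_draw,
                 force_target=0.5, current_limit=0.8, step=16):
    """One feedback step of the adaptive grasp:
    - back off when motor current nears the level at which the grasp
      was observed to release;
    - otherwise tighten while the sensed contact force is below target.
    All thresholds here are illustrative, not measured values."""
    if current_draw > current_limit:
        torque -= step            # avoid the grasp-release failure mode
    elif contact_force < force_target:
        torque += step            # tighten until stable contact is sensed
    return max(0, min(TORQUE_MAX_RAW, torque))
```

Iterating this step per control cycle keeps the commanded torque between the "torque min" and "torque max" bounds the experiment identifies for each object.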
Table 5 shows the torque values set when the objects shown in Figure 22 are grasped. The "torque min" is the value at which the object is grasped without being damaged, and the "torque max" is the value at which the grasp-release phenomenon does not occur, even if the grasping action continues for more than 2 min. These values are set through feedback from the touch sensing module, and thus adaptive grasping is conducted.

Discussion/Conclusions
A method for improving the sensitivity of a contact module equipped with a touch sensing function was proposed and implemented. The output values of the touch sensor (according to the weights of the objects) were linearized through DNN learning, and additional learning based on the contact areas and forces enabled high-resolution touch sensing. The values of the proposed touch sensing module were fed back to conduct adaptive grasping operations, demonstrating the effectiveness of the touch sensor. In the future, optimal grasping torque control will be conducted for a more diverse set of general objects.