Collision Detection of a HEXA Parallel Robot Based on Dynamic Model and a Multi-Dual Depth Camera System

This paper introduces a collision detection method for the Hexa parallel robot based on dynamic modeling and a computer vision system. The collision problem is treated in three stages: collision detection, collision isolation, and collision identification, each applied to the Hexa robot. First, the configuration, kinematics, and dynamics of the Hexa parallel robot along its movement trajectories are analyzed to extract the knowledge required by the method. Next, a virtual force sensor is presented that generates the collision detection signal by combining the solution of the inverse dynamics with a low-pass filter. Then, a vision system consisting of dual depth cameras is designed to isolate obstacles and determine the contact point location on the end-effector, an arm, or a rod of the Hexa robot. Finally, a recursive Newton-Euler algorithm is applied to compute the contact forces arising in collision cases with the real Hexa robot. In the experiments, the identified forces are compared with force-sensor measurements to evaluate the performance of the proposed collision detection method.


Introduction
A six-degree-of-freedom parallel structure called the Hexa parallel robot was introduced by Pierrot et al. in [1,2]. Despite its complicated kinematic structure and limited workspace, the Hexa robot has remarkable advantages over serial robots: high accuracy, high stiffness, high speed, and high carrying capability. However, working with this type of robot requires solving not only the computationally demanding inverse/forward kinematics problems but also the even more complicated dynamics ones. The inverse kinematics problem is tractable by geometrical methods [3,4], but the forward kinematics problem involves a system of nonlinear equations that usually has no closed-form solution. Therefore, an artificial neural network (ANN) method has been proposed to approximate a unique solution to the forward kinematics problem [5]. There have been few studies on the dynamic modeling of the experimental Hexa parallel robot configuration [6,7]. An in-depth study of the Hexa robot dynamics is needed to take full advantage of its strengths for trajectory control and fault detection.
Studying the collisions of a robot manipulator is a critical issue due to the diverse application scenarios. An approach for accurate and real-time monitoring of serial robot manipulator collisions is presented in [8], which uses proprioception and IMU sensing for velocity and acceleration estimation. Another observer method to adapt the erroneous dynamics model of serial manipulators is introduced in [9], where the observer significantly improves the collision detection sensitivity. Sensorless collision detection approaches based on different methods are presented in [10-12]. Accelerometers attached along the surface of the robot arm are used to detect and localize contact events [13]. Tactile sensors mounted on the robot manipulator are applied to provide a tactile perception of the robot.

• The analysis of implementing a dynamic model of the moving Hexa parallel robot for the collision detection problem is proposed.
• The obstacles and the Hexa robot are isolated by the proposed geometrical algorithm, and the set of real-time images taken from the multi-camera system is used to locate the contact point.
• Finally, the Cartesian external force at the contact point is estimated by the recursive Newton-Euler algorithm for the Hexa parallel robot and compared to the force sensor in different collision point cases.
The rest of this paper is organized as follows. We present the configuration, the kinematics and dynamics problems of the Hexa robot in Section 2. In Section 3, the proposed collision detection, isolation, and identification methods applied to the Hexa robot are presented. We show block diagrams to illustrate the entire Hexa robot system and introduce equipment used in this study in Section 4. Section 5 includes the experimental scheme of study cases where the proposed methods are validated based on the given trajectories. Finally, we conclude the paper in Section 6.

Configurations of the Hexa Robot
The model of the Hexa parallel robot with six serial kinematic chains is shown in Figure 1. Each kinematic chain is described as a revolute-universal-spherical chain link between the base platform and the active plate. The arms driven by six motors at the revolute joints connect with the rods at the universal joints. These rods are associated with the active plate at spherical joints. There are two main coordinate frames when the Hexa robot is considered. The first fixed coordinate frame {O} = {OXYZ} is located in the center of the base platform with the Z-axis perpendicular to the base platform upwards. The X-axis direction points from the origin O to the midpoint of the edge A 1 A 2 . The Y-axis is determined by the right-hand rule. The second rotating coordinate frame {P} = {Pxyz} is located in the center of the active plate with the z-axis perpendicular to the surface of the active plate. The x-axis direction points from the end-effector to the midpoint of the edge C 1 C 2 . The y-axis is determined in the same way as the Y-axis. For the ith kinematic chain (i = 1, . . . , 6), the point A i represents the center of the revolute joint located in the ith arm. The point B i is the center of the universal joint that connects the ith rod with the ith arm. The point C i is the center of the spherical joint that links the ith rod and the active plate. The lengths of the ith arm and the ith rod are l a and l d , respectively. The angle ζ i is the rotation angle of the revolute joint of the ith arm.

The point P representing the end-effector and the Euler angles (φ, θ, ψ) of the active plate with respect to the fixed frame {O} are given. The three angles (φ, θ, ψ) are defined in compliance with the intrinsic rotation convention of Euler angles. A coordinate with respect to the active frame {P} can be transformed to a coordinate with respect to the fixed frame {O} by using the rotation matrix R O P :

$$
R_P^O =
\begin{bmatrix}
\cos\varphi\cos\theta\cos\psi - \sin\varphi\sin\psi & -\cos\varphi\cos\theta\sin\psi - \sin\varphi\cos\psi & \cos\varphi\sin\theta \\
\sin\varphi\cos\theta\cos\psi + \cos\varphi\sin\psi & -\sin\varphi\cos\theta\sin\psi + \cos\varphi\cos\psi & \sin\varphi\sin\theta \\
-\sin\theta\cos\psi & \sin\theta\sin\psi & \cos\theta
\end{bmatrix}
\quad (1)
$$

The position and orientation of the active plate are determined by the three Euler angles and the position of point P. The angular velocity ω P of the active plate can be expressed in terms of the Euler angle rates. At the point O on the base platform, three frames {O i } = {O i X i Y i Z i } (i = 1, 3, 5) are built with the Z i -axis coinciding with the Z-axis. The X i -axis points to the center of edge A i A i+1 . The Y i -axis is determined by the right-hand rule. Simultaneously, at the point P on the active plate, three frames {P i } = {P i x i y i z i } (i = 1, 3, 5) are also constructed, along with the rotation matrices of frame {O i } relative to frame {O} and of frame {P i } relative to frame {P}. At the point A i , a frame {A 1i } = {A 1i X 1i Y 1i Z 1i } (i = 1, . . . , 6) is built with the Z 1i -axis along the rotation axis of the revolute joint. The Y 1i -axis is perpendicular to the base platform upwards. The X 1i -axis is determined by the right-hand rule, which gives the rotation matrix of frame {A 1i } relative to frame {O i }. A second frame {A 2i } = {A 2i X 2i Y 2i Z 2i } (i = 1, . . . , 6) attached to the ith arm is established with the Z 2i -axis coinciding with the Z 1i -axis. The X 2i -axis coincides with the axial line of the ith arm. The Y 2i -axis is determined by the right-hand rule. The rotation matrix of frame {A 2i } relative to frame {A 1i } depends on ζ i , the rotation angle of the revolute joint of the arm.
At the point B i , a frame {B 1i } = {B 1i x 1i y 1i z 1i } (i = 1, . . . , 6) attached to the ith arm and a frame {B 2i } = {B 2i x 2i y 2i z 2i } (i = 1, . . . , 6) attached to the ith rod are built with the x 1i -axis pointing from A i to B i . The x 2i -axis points from B i to C i . The z 1i -axis and the z 2i -axis coincide with the two rotating axes of the universal joint, respectively. The y 1i -axis and the y 2i -axis are determined by the right-hand rule, which gives the rotation matrix of frame {B 1i } relative to frame {A 2i }. The rotation matrix of the frame {B 2i } relative to the frame {B 1i } can be calculated by transformations. Initially, the points A i , B i , and C i are aligned on the same line. The frame {B 2i } is obtained by rotating the frame {B 1i } around the two rotating axes of the universal joint by the two rotation angles β i and γ i , respectively, which gives the rotation matrix of frame {B 2i } relative to frame {B 1i }. Finally, the rotation matrices of the frame {A 2i } relative to the frame {O} and of the frame {B 2i } relative to the frame {O} follow by composing the rotation matrices above.
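As a quick numerical check of the intrinsic ZYZ rotation in Equation (1), the matrix can be built directly as the composition R_z(φ) R_y(θ) R_z(ψ); the function names below are illustrative, not taken from the original implementation.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rotation_O_P(phi, theta, psi):
    """Rotation matrix R^O_P for intrinsic ZYZ Euler angles, Equation (1)."""
    return rot_z(phi) @ rot_y(theta) @ rot_z(psi)

R = rotation_O_P(0.3, 0.5, -0.2)
# A proper rotation matrix is orthonormal with determinant +1,
# and its (3,3) entry equals cos(theta), as in Equation (1).
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(R[2, 2], np.cos(0.5))
```

Expanding the product symbolically reproduces Equation (1) entry by entry, e.g. the bottom row (−sin θ cos ψ, sin θ sin ψ, cos θ).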

Inverse Kinematics
In the inverse kinematics, the position and Euler angles of the active plate are given, and the rotation angles of the motors need to be determined. A kinematic chain of the Hexa robot is shown in Figure 2. The coordinates of point A i in frame {O} and of point C i in frame {P} are given, from which the vector r O C i can be expressed. The kinematics equations then follow, and the rotation angle ζ i can be calculated in closed form from them.
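The closing step of such a geometric inverse kinematics derivation is typically an equation of the form e sin ζ + f cos ζ = g, solved in closed form with atan2. The sketch below uses placeholder coefficient names, since the paper's intermediate terms are not reproduced here.

```python
import numpy as np

def solve_arm_angle(e, f, g):
    """Solve e*sin(z) + f*cos(z) = g for z; returns both solution branches."""
    r = np.hypot(e, f)
    if abs(g) > r:
        raise ValueError("no real solution: |g| exceeds sqrt(e^2 + f^2)")
    # Rewrite as r*sin(z + delta) = g, with delta = atan2(f, e).
    delta = np.arctan2(f, e)
    base = np.arcsin(g / r)
    return base - delta, np.pi - base - delta

z1, z2 = solve_arm_angle(1.0, 1.0, 1.0)
for z in (z1, z2):
    assert np.isclose(np.sin(z) + np.cos(z), 1.0)  # both roots satisfy the equation
```

In practice one of the two branches is selected by the mechanical limits of the revolute joint.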

Forward Kinematics
The procedure of neural network training is presented in Figure 3. In the forward kinematics problem, the rotation angles of the arms are given, and the pose of the end-effector is determined by solving the kinematics equations. These equations are nonlinear, so finding their solutions analytically is highly complicated. For this reason, artificial neural networks (ANNs) are employed to calculate the approximate pose of the end-effector. Firstly, a data set consisting of potential poses of the active plate is created randomly. This data set is modified to reject singular points and points outside the workspace; the modified data set is called the target data in the ANN training. From the target data, the inverse kinematics problem is solved and the data set of the angles ζ i of the six arms is created. This data set is called the input data in the ANN training. The training results for the weight and bias factors are then used to calculate the forward kinematics solution.
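The training procedure can be sketched on a one-dimensional analog (a single joint angle mapped to one pose coordinate, with a known invertible map standing in for the inverse kinematics); the layer size, learning rate, and iteration count are illustrative, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target data: random "poses"; input data: angles from the (known) inverse map.
x_target = rng.uniform(-1.0, 1.0, (200, 1))   # stand-in for active-plate poses
zeta_input = np.arcsin(x_target)              # stand-in inverse kinematics

# One hidden layer with tanh activation, trained by plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05
losses = []
for _ in range(2000):
    h = np.tanh(zeta_input @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - x_target
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared pose error.
    g2 = 2 * err / len(err)
    gW2, gb2 = h.T @ g2, g2.sum(0)
    gh = g2 @ W2.T * (1 - h ** 2)
    gW1, gb1 = zeta_input.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

assert losses[-1] < losses[0]   # training reduces the forward-kinematics error
```

The trained weights and biases then play the role of the stored "training results" used to evaluate the forward kinematics online.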

Figure 3. The procedure of neural network training [3].

Dynamics
The dynamics equations can be obtained by several methods, such as the Newton-Euler formulation, Lagrange equations of the first type, and the Virtual Work Principle. In this study, the Lagrange equations of the first type are applied to derive the dynamics equations. Those equations are arranged into two sets and expressed as follows: where q = [ζ 1 ζ 2 ζ 3 ζ 4 ζ 5 ζ 6 x p y p z p φ θ ψ] T represents the generalized coordinates; L denotes the Lagrangian function of the whole system, which is the sum of the Lagrangian functions of the active plate, the arms, and the rods; τ j = K t I j is the actuator torque, where K t is the motor torque constant and I j is the motor armature current; Q j is the generalized force; λ i (i = 1, . . . , 6) is the ith Lagrange multiplier; and the terms c i are given in Equation (17).

Inverse Dynamics
Twelve motion variables q j and six generalized forces τ j contributed by the actuators are given. Solving the inverse dynamics problem consists of finding the six generalized external forces.
The first set of six equations, Equation (19), contains the six Lagrange multipliers λ i (i = 1, . . . , 6) as the unknowns and can be written in matrix form. The six generalized external forces Q j (j = 7, . . . , 12) can then be calculated by solving the second set of six equations, Equation (20).
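Numerically, the first set reduces to a 6×6 linear system A(q, q̇, q̈) λ = b, after which the generalized external forces follow by direct evaluation of the second set. The sketch below uses a random well-conditioned system as a stand-in for the actual coefficient matrices, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for the 6x6 coefficient matrix and right-hand side of Eq. (19);
# adding a scaled identity keeps the random matrix well conditioned.
A = rng.normal(size=(6, 6)) + 6 * np.eye(6)
b = rng.normal(size=6)

lam = np.linalg.solve(A, b)        # Lagrange multipliers lambda_1..lambda_6
assert np.allclose(A @ lam, b)     # multipliers satisfy the first equation set

# Eq. (20) then yields Q_j (j = 7..12) schematically as an evaluation of the
# remaining dynamics terms plus a multiplier contribution (structure assumed):
# Q = remaining_dynamics_terms(q, dq, ddq) + C.T @ lam
```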

Forward Dynamics
Six generalized external forces Q j and six generalized forces τ j contributed by the actuators are given. Solving the forward dynamics problem consists of finding the twelve motion variables q j .
Analyzing the problem, we have eighteen equations, including the twelve dynamics equations in Equations (19) and (20) and the six kinematics equations in Equation (15). There are eighteen unknown variables, including the twelve motion variables q j and the six Lagrange multipliers λ i (i = 1, . . . , 6). This system is a differential-algebraic equation (DAE) system containing both differential and algebraic equations. The strategy for solving it is to reduce the differential order, which increases the number of variables and equations until the system becomes algebraic. MATLAB provides functionality for solving DAEs, but solving the forward dynamics problem remains extremely complicated.
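The order-reduction idea can be illustrated on the simplest constrained system, a planar pendulum written as a DAE in Cartesian coordinates: differentiating the algebraic constraint twice turns the system into a linear solve for accelerations and the multiplier, and Baumgarte stabilization (an assumption here, not a technique named in the paper) keeps the constraint satisfied during integration.

```python
import numpy as np

m, L, g = 1.0, 1.0, 9.81
alpha, beta = 20.0, 20.0        # Baumgarte gains (illustrative values)

def step(state, dt):
    """One explicit Euler step of the stabilized pendulum DAE."""
    x, y, vx, vy = state
    G = np.array([[x, y]])                    # Jacobian of phi = (x^2+y^2-L^2)/2
    phi = 0.5 * (x * x + y * y - L * L)
    phidot = x * vx + y * vy
    # KKT system: [M  G^T; G  0] [a; lam] = [F; -v.v - 2*alpha*phidot - beta^2*phi]
    K = np.block([[m * np.eye(2), G.T], [G, np.zeros((1, 1))]])
    rhs = np.array([0.0, -m * g,
                    -(vx * vx + vy * vy) - 2 * alpha * phidot - beta ** 2 * phi])
    a = np.linalg.solve(K, rhs)[:2]
    return np.array([x + dt * vx, y + dt * vy, vx + dt * a[0], vy + dt * a[1]])

state = np.array([L, 0.0, 0.0, 0.0])          # start horizontal, at rest
for _ in range(5000):
    state = step(state, 1e-3)

# The algebraic constraint x^2 + y^2 = L^2 stays satisfied thanks to stabilization.
assert abs(state[0] ** 2 + state[1] ** 2 - L ** 2) < 1e-2
```

The Hexa forward dynamics follows the same pattern with eighteen unknowns instead of three, which is what makes it so much more expensive.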

Collision Detection
When robot manipulators move freely in their environment, an obstacle may enter the workspace of the manipulator at any time. If that obstacle collides with the robot, a collision has occurred. Collision detection recognizes whether a robot collision has occurred or not. An observer calculated from the dynamics equations can detect the occurrence of a collision. A suitable threshold is selected based on the accuracy requirement for this observer. A collision is reported if the observer signal is greater than the chosen threshold.
The generalized momentum of the Hexa robot is defined as: According to the second set of dynamics equations, Equation (20), the generalized force can be written in the form: The collision identification signal, or simply the residual r, is:
The Lagrange multipliers λ i (i = 1, . . . , 6) can be computed using the first set of dynamics equations, Equation (19). When a collision occurs, at least one element r j of the vector r rises with a time constant 1/K j . The detection is performed as soon as r j > r low , where r low is the chosen threshold.
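The residual behavior can be sketched for a single joint: with generalized momentum p = m q̇ and ṗ = τ + τ_ext, the observer r = K (p − ∫(τ + r) dt − p(0)) obeys ṙ = K (τ_ext − r), so it converges to the external torque with time constant 1/K. The discrete loop below is a minimal one-degree-of-freedom illustration, not the paper's full multi-body implementation; the gain, sample time, and injected torque are assumed values.

```python
import numpy as np

m, K, dt = 1.0, 50.0, 1e-3        # inertia, observer gain, sample time (illustrative)
qd, integral, r = 0.0, 0.0, 0.0
history = []
for k in range(4000):
    tau = 0.0                               # actuator torque (robot at rest here)
    tau_ext = 5.0 if k >= 2000 else 0.0     # external torque injected mid-run
    qd += (tau + tau_ext) / m * dt          # plant: p_dot = tau + tau_ext
    p = m * qd
    integral += (tau + r) * dt              # observer integral of (tau + r)
    r = K * (p - integral)                  # residual: first-order lag toward tau_ext
    history.append(r)

assert abs(history[1999]) < 0.1             # no collision -> residual stays near zero
assert history[-1] > 4.5                    # residual converges toward tau_ext = 5
```

Comparing `history` against a fixed threshold r_low reproduces the detection rule r_j > r_low.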

Collision Isolation
After the residual indicates that a collision has occurred on the Hexa robot, it is necessary to identify the collision location. Collision isolation aims at localizing the contact point. In this paper, a vision system consisting of dual depth cameras is applied to detect the contact point when a collision occurs. The first camera's images at the time of the collision are analyzed to identify whether the contact point is on the active plate or the rods. Otherwise, the contact point is detected by processing the image captured by the second camera. Based on the multi-camera vision system, we propose the flowchart of the collision isolation procedure in Figure 4 and the mathematical processing for the localization of the contact point in Figure 5.
In this study, every external object entering the workspace is assumed to come from the outside and to partially obscure the Hexa robot. Because the whole Hexa parallel manipulator cannot be observed in the images captured by the first dual depth camera alone, this camera is responsible for isolating external objects at the active plate and the rods. In contrast, external obstacles at the arms are identified by the second dual depth camera. Both the arm and rod situations are considered from the viewing direction in which the obstacle obscures them.
• Case 1: Contact points are determined by the first camera.
Firstly, a bounding box around the active plate is recognized using YOLOv3. Because the color of the rods is different from the background, the rods can be isolated from the image, so the RGB color image is converted to a YCbCr color image. Then, the Canny edge detector and the Hough transform are applied to recognize straight-line profiles around the rods. From these detected lines, the rods are identified in the image. The considered image and the background image are converted to grayscale images, and a binary image is created by subtracting the grayscale background image from the grayscale considered image. After combining the image subtraction with the depth conditions of the objects in the workspace, the external object is identified by removing the Hexa robot from the binary image.
To find the contact point on the active plate, all points belonging to the external object are checked to see if any of them touch the bounding box around the active plate. Then, the image depth is considered at these points: the difference between the depth of these points and that of the bounding box must be less than a chosen threshold. A representative point of the intersection between the external object and the bounding box is obtained by averaging these points. A line is drawn through this point and the end-effector. Finally, the contact point is located at the intersection of this line and the active plate's hexagonal border.
To find the contact point on a rod, the possible pairs of points belonging to the external object are checked in turn to see if the midpoint of each pair lies on the rod. Then, the image depth is considered at these points: the difference between the depth of the pair and the depth of the midpoint on the rod must be less than the chosen threshold. Each such midpoint is a candidate contact point. As a result, the contact point location is obtained by averaging all the candidate midpoints.

• Case 2: Contact points are determined by the second camera.
Based on the image depth, only objects in a limited workspace are considered in the image captured by the second camera. Because the color of the arms is blue, the arms can be isolated from the image: an appropriate threshold on the HSV color scale is chosen to extract the probable blue region. Then, the Canny edge detector and the Hough transform are applied to recognize straight-line profiles along the arms, from which the direction of the arms can be identified. Two fixed points are chosen as the motor locations where two arms are connected with two motors. According to the fixed points and the directions of the arms, all points belonging to the blue region in the arm directions are checked to see if any of them intersect the non-blue region. The intersections are probable contact point locations, and the correct contact point location is detected with the depth conditions.
After the RGB color image is converted to a YCbCr color image, the rods can be isolated from the image. The Canny edge detector and the Hough transform are applied to recognize straight-line profiles along the rods, and the direction of the rods is calculated from these profiles. Two points are computed as the connector locations where two arms are connected with two rods. According to the computed points and the directions of the rods, the white-region points in the rod directions are checked for intersections with the non-white region. The intersections are probable contact point locations, and the correct location of the contact point is determined based on the depth conditions and the above junctions.

Collision Identification
After the collision occurs and the contact point location is known, the contact force should be determined. This study applies a Recursive Newton-Euler Algorithm (RNEA) to calculate the magnitude of the contact force instead of using additional devices. The inputs for this algorithm are the current intensities measured by the current sensors and the pose of the end-effector obtained from image processing.
Three cases correspond to three possibilities that a collision occurs at an arm, a rod, or the active plate. RNEA is developed separately for the application with the Hexa robot in each case.
where F ai is the inertia force acting on the ith arm; M ai is the inertia moment acting on the ith arm; m a is the mass of the arm; v ai is the linear velocity of the center of mass (CoM) of the ith arm; I ai is the tensor of mass moment of inertia of the ith arm; ω ai is the angular velocity of the ith arm; F ri is the inertia force acting on the ith rod; M ri is the inertia moment acting on ith rod; m r is the mass of the rod; v ri is the linear velocity of the center of mass (CoM) of the ith rod; I ri is the tensor of mass moment of inertia of the ith rod; ω ri is the angular velocity of the ith rod; F p is the inertia force acting on the active plate; M p is the inertia moment acting on the active plate; m p is the mass of the active plate; v p is the linear velocity of the center of mass (CoM) of the active plate; I p is the tensor of mass moment of inertia of the active plate; ω p is the angular velocity of the active plate.
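Each inertia force/moment pair listed above follows the Newton-Euler equations F = m v̇_c and M = I ω̇ + ω × (I ω). A direct evaluation for one rigid body can be sketched as follows; the numerical inertia values are examples, not the identified Hexa parameters.

```python
import numpy as np

def newton_euler_body(m, I, v_dot, omega, omega_dot):
    """Inertia force and moment of one rigid body (arm, rod, or active plate)."""
    F = m * v_dot                                  # F = m * linear acceleration of CoM
    M = I @ omega_dot + np.cross(omega, I @ omega) # Euler's equation with gyroscopic term
    return F, M

I_plate = np.diag([0.02, 0.02, 0.04])              # example inertia tensor, kg*m^2
F, M = newton_euler_body(0.5, I_plate,
                         v_dot=np.array([0.0, 0.0, 1.0]),
                         omega=np.array([0.0, 0.0, 2.0]),
                         omega_dot=np.zeros(3))

assert np.allclose(F, [0.0, 0.0, 0.5])
assert np.allclose(M, np.zeros(3))   # spin about a principal axis: gyroscopic term vanishes
```

The RNEA applies this evaluation body by body along each kinematic chain and balances the results with the joint reaction forces.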

•
Case 1: A collision occurs on the active plate.
First step: According to six kinematics chains, calculate f B i and f C i (i = 1, . . . , 6).
where f A i and n A i are the reaction force and the reaction moment exerted on the ith arm, and n A i can be computed from the ith motor torque; f B i and n B i = 0 are the force and the moment exerted on the rod by the arm; f C i and n C i = 0 are the force and the moment exerted on the active plate by the ith rod. Second step: Calculate the external force F ep .
• Case 2: A collision occurs at an arm.
First step: Assume the collision occurred on the first kinematic chain. According to the five remaining kinematic chains, calculate f B i and f C i (i = 2, . . . , 6) according to Equations (28)-(30).
Second step: Calculate f C 1 based on the force balance equation at the active plate.
Third step: Calculate the external force F e1 on the first arm.
where D is the contact point, which is determined in the collision isolation.
• Case 3: A collision occurs at a rod.
First step: Assume the collision occurred in the first kinematic chain. According to the five remaining kinematic chains, calculate f B i and f C i (i = 2, . . . , 6) according to Equations (28)-(30).
Second step: Calculate f C 1 based on the force balance equation at the active plate according to Equation (32).
Third step: Calculate the external force F e1 on the first rod.
where D is the contact point determined in the collision isolation. Figure 6 is the block diagram of the Hexa robot system. The computer and control software are responsible for controlling the robot via RS232 and for solving the collision problems. The robot controller is an Arduino Uno R3 microcontroller board. Six servo motors are used to drive the arms. A current sensor module measures and sends current signals to an Arduino Mega 2560. The vision system, including dual stereo cameras, monitors the Hexa robot, while the computer and software perform the image processing. The first camera is mounted under the base platform, and the second camera is fixed away from the workspace of the Hexa robot. A force/torque sensor is attached to the active plate to measure forces and torques in order to verify the accuracy of the algorithm.

Experimental Setup
Figure 7 is the block diagram of the collision problems solving process. In the first step, the end-effector pose x p y p z p φ θ ψ T can be obtained from processed images captured by the first camera. Six rotation angles [ζ 1 ζ 2 ζ 3 ζ 4 ζ 5 ζ 6 ] T and the actuator torque τ can be sequentially computed by solving the inverse kinematics problems based on the current [I 1 I 2 I 3 I 4 I 5 I 6 ] T measured by current sensors. In the second step, with the above variables, Lagrangian multipliers λ i (i = 1, . . . , 6) can be calculated using the first dynamics equations set. In the third step, the residual r = [r 1 r 2 r 3 r 4 r 5 r 6 ] T can be computed with the end effector pose and Lagrangian multipliers. According to the residual r, it denotes whether a robot collision occurred or not. In the fourth step, the contact point location r D is determined based on images captured by two stereo cameras. In the final step, after collision isolation, the Recursive Newton-Euler Algorithm is used to identify the external force F e . This system is like a sensor wherein the inputs are measured current and the pose of the end-effector, and the outputs are the external contact force and the contact point position. Further research will probably cover exploiting those outputs and regenerating the trajectory. Figure 8 shows the Hexa robot system used in this study. Six arms are driven by six Servo motors fixed on the base platform. The first stereo camera is mounted under the base platform to monitor the workspace from above. The second stereo camera is fixed on an immovable bar to observe from the outside. The force/torque sensor is mounted on the active plate to evaluate the external force in the experiment.  Figure 8 shows the Hexa robot system used in this study. Six arms are driven by six Servo motors fixed on the base platform. The first stereo camera is mounted under the base platform to monitor the workspace from above. 

Collision Detection at the Active Plate
In this experiment, the Hexa robot is controlled to move the active plate along a circular trajectory in the workspace. A collision occurred due to an external force acting on the active plate. When the active plate moves along the circular trajectory, the first camera mounted under the base platform captures images. These images are processed to compute the pose of the end-effector, shown in Figure 9. The analog voltage signals from the six current sensors are continuously sent to the computer via RS232. The measured current intensity is shown in Figure 10. The three angles (α, β, γ) are defined in compliance with the Z-Y-X extrinsic rotation convention of Tait-Bryan angles and can be converted to the three angles (φ, θ, ψ). The rotation matrix is expressed as follows:

$$
R =
\begin{bmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma
\end{bmatrix}
$$

According to Equation (26), the residuals are computed to detect a collision, as shown in Figure 11. Because the model error is less than 2 (N), a suitable threshold r_low = 2 (N) is chosen. When a collision occurs, the residual r_j (j = 7, 8, 9) rises with a time constant 1/K. Detection is performed as soon as r_j > r_low, where r_low is the chosen threshold. According to Figure 11, a collision occurred at 3.4 s because the residual in the Z-axis is greater than r_low = 2 (N). The external force reached its maximum value at 3.8 s during the collision. When contact is lost, |r_j| decreases quickly until |r_j| < r_low.
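As a check on the rotation matrix above, the Z-Y-X (Tait-Bryan) rotation R = Rz(α)·Ry(β)·Rx(γ) can be coded directly; this is a minimal verification sketch, not part of the robot controller.

```python
import math

def rotation_zyx(alpha, beta, gamma):
    """Rotation matrix for Z-Y-X Tait-Bryan angles,
    R = Rz(alpha) * Ry(beta) * Rx(gamma), matching the matrix above."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    return [
        [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
        [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
        [-sb,     cb * sg,                cb * cg],
    ]
```

At zero angles the matrix reduces to the identity, and for any angles the rows are orthonormal, which is a quick way to catch a sign error in a transcribed rotation matrix.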

Figure 11. Comparison of the residuals for the force sensor in Case 1.


Collision Isolation at the Active Plate
The image captured when the collision started occurring is considered to find the contact point location. According to the flowchart of the collision isolation procedure shown in Figure 4, a bounding box around the active plate is recognized using YOLOv3, as shown in Figure 12. The RGB color image shown above is converted to a YCbCr color image. Then, the Canny edge detector and the Hough transform are applied to recognize straight-line profiles around the rods. These straight lines are shown in Figure 12b, and the rods can be identified in the image, as shown in Figure 12c. The considered image and the background image are converted to grayscale images, and a binary image is created by subtracting the grayscale background image from the grayscale considered image. After performing image subtraction, objects that are not in the workspace cannot be completely eliminated; combined with the image depth, only objects in the workspace are retained in the image, which is shown in Figure 12d. Because the Hexa robot, including the active plate and the rods, has been determined, the external object can be identified by removing the Hexa robot from the binary image. The image containing the external object is shown in Figure 12e. According to the flowchart of the mathematics processing shown in Figure 5, 180 points are determined to satisfy the conditions. All of them are intersections between the external object and the bounding box around the active plate, as shown in Figure 12f. A single point, obtained by averaging these 180 pixels, represents the contact location.
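The subtraction step described above can be sketched in a few lines, assuming grayscale images stored as nested lists of integers; the threshold value and the toy images are illustrative assumptions, and the real system additionally uses the depth channel and YOLOv3 detections.

```python
def binary_foreground(frame, background, thresh=30):
    """Binary mask of pixels differing from the background by more
    than thresh (background subtraction on grayscale images)."""
    return [[1 if abs(p - b) > thresh else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def remove_robot(mask, robot_mask):
    """Suppress known robot pixels so only the external object remains."""
    return [[int(m and not r) for m, r in zip(mrow, rrow)]
            for mrow, rrow in zip(mask, robot_mask)]
```

On real frames the same two operations are typically done with an image library, but the logic is identical: threshold the absolute difference, then mask out the pixels already attributed to the robot.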

Collision Identification at the Active Plate
According to Equation (27), the inertia forces acting on the arms, the rods, and the active plate are calculated. According to Equations (28)-(31), the external force acting on the active plate can be calculated, which is shown in Figure 13.

Based on Figure 13, the external forces computed by Lagrange's equations and by the RNEA are close to each other and acceptable when compared with the force sensor. The collision started on the active plate at 3.4 s, with an external force of about [−0.67 −0.69 3.13]^T (N). The force then reached its maximum value of [−5.36 −2.27 11.036]^T (N) at 3.8 s.
The root mean square error (RMSE) is applied to evaluate the accuracy of the RNEA, expressed as follows:

$$
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} e_i^2}
$$

where e_i (i = 1, 2, ..., 50) is the error between the force computed by the RNEA and the force sensor, and n = 50 is the number of samples. The RMSE in the X-axis is equal to 0.819 (N), the RMSE in the Y-axis is equal to 0.4762 (N), and the RMSE in the Z-axis is equal to 1.2823 (N).
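The RMSE computation above amounts to a few lines; a minimal sketch:

```python
import math

def rmse(errors):
    """Root mean square error: sqrt((1/n) * sum(e_i^2)) over n samples."""
    n = len(errors)
    return math.sqrt(sum(e * e for e in errors) / n)
```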

Collision Detection at an Arm
In this experiment, when the active plate moved along a helix trajectory, a collision occurred due to an external force acting on an arm. The pose of the end-effector is shown in Figure 14, and the measured current intensity is shown in Figure 15. According to Figure 16, a collision occurred at 5.2 s because the residual is greater than r_low = 2 (N), and the external force reached its maximum value during the collision.

Collision Isolation at an Arm
When the collision started occurring, no contact point was detected in the image captured from the first camera. The image captured from the second camera at 5.2 s is therefore considered to find the contact point location, shown in Figure 17a. According to the flowchart of the collision isolation procedure shown in Figure 4, objects in this limited workspace are considered in the image captured from the second camera, shown in Figure 17b. Because the color of the arms is blue, the arms can be isolated from the image: an appropriate threshold on the HSV color scale is chosen to extract the blue region in the image, which is shown in Figure 17c. Then, the Canny edge detector and the Hough transform are applied to recognize straight-line profiles along the arms. These straight lines are shown in Figure 17d. According to the flowchart of the mathematics processing shown in Figure 5, a fixed pixel is chosen at the motor location where an arm is connected to that motor. Starting from the fixed pixel and following the direction of the arm, all pixels belonging to the blue region in the arm direction are checked to see whether any of them intersect the non-blue regions, as shown in Figure 17.
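The pixel scan along the arm can be sketched as follows. The binary mask, start pixel, and direction are illustrative assumptions; in the real system the mask is the thresholded HSV image, the start pixel is the fixed motor location, and the direction comes from the Hough line of the arm.

```python
def scan_arm(blue_mask, start, direction):
    """Walk from the start pixel along the arm direction in a 0/1 'blue'
    mask; return the first pixel where the blue arm is interrupted by a
    non-blue region (candidate contact location), or None."""
    r, c = start
    dr, dc = direction
    prev_blue = False
    while 0 <= r < len(blue_mask) and 0 <= c < len(blue_mask[0]):
        blue = blue_mask[r][c] == 1
        if prev_blue and not blue:
            return (r, c)        # transition from blue to non-blue
        prev_blue = blue
        r, c = r + dr, c + dc
    return None
```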

Collision Identification at an Arm
According to Equation (27), the inertia forces acting on the arms, the rods, and the active plate are calculated. According to Equations (32) and (33), the external force acting on the arm can be calculated, shown in Figure 18.
According to Equation (39), the root mean square error (RMSE) is applied to evaluate the accuracy of the RNEA. The RMSE in the X-axis is equal to 1.1193 (N), the RMSE in the Y-axis is equal to 1.4974 (N), and the RMSE in the Z-axis is equal to 1.1056 (N).

Collision Detection at a Rod
In this experiment, when the active plate moved along an ellipse trajectory, a collision occurred due to an external force acting on a rod. The pose of the end-effector is shown in Figure 19, and the measured current intensity is shown in Figure 20. According to Figure 21, a collision occurred at 4.4 s because the residual in the X-axis is greater than r_low = 2 (N), and the external force reached its maximum value at 4.6 s during the collision.
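The same detection rule is used in all three collision cases: the residual rises toward the contact force with time constant 1/K and is compared against r_low. A minimal discrete-time sketch of this behavior, where the gain K, the sampling period dt, and the force value are illustrative assumptions rather than the authors' tuned parameters:

```python
def update_residual(r_prev, f_ext, K, dt):
    """Discrete first-order lag: r tracks the external force f_ext with
    time constant 1/K (illustrative model of the observer dynamics)."""
    return r_prev + K * dt * (f_ext - r_prev)

def detect(residuals, r_low=2.0):
    """Index of the first sample whose |r| exceeds the threshold r_low,
    or None if no collision is flagged."""
    for k, r in enumerate(residuals):
        if abs(r) > r_low:
            return k
    return None
```

With this model the residual converges to the applied force and decays back below r_low once contact is lost, matching the qualitative behavior described for Figures 11, 16, and 21.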

Collision Isolation at a Rod
In a similar method to Case 1, the collision isolation results are shown in Figure 22a–e. According to the flowchart of the mathematics processing shown in Figure 5, 46 midpoints are determined to satisfy the conditions, as shown in Figure 22f. A point represents the above 46 pixels by taking an average, and the contact point location is shown in Figure 22g. Following the strategy mentioned above, the contact point location in the considered image is determined as [382.8 698.83]^T (pixels). According to the camera model, the contact point coordinate is also identified as [122.93 −144.08 −411.59]^T (mm) when a collision occurs at a rod.
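Representing the candidate pixels by a single averaged point, as done above for the 46 midpoints, is straightforward; the coordinates below are illustrative values, not the experimental ones.

```python
def average_pixel(pixels):
    """Mean (u, v) of a list of (u, v) pixel coordinates, used to
    represent a set of candidate contact pixels by one point."""
    n = len(pixels)
    u = sum(p[0] for p in pixels) / n
    v = sum(p[1] for p in pixels) / n
    return (u, v)
```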

Collision Identification at a Rod
According to Equation (27), the inertia forces acting on the arms, the rods, and the active plate are calculated. According to Equations (34)-(36), the external force acting on the rod can be calculated, which is shown in Figure 23.
At 4.6 s, a collision occurred on the rod, and the external force reached its maximum value. Similarly, 447 pixels are potential contact points on the rod. The contact point location in the considered image is determined as [399.56 702.97]^T (pixels), and the contact point coordinate is also identified as [125.49 −137.89 −406.5207]^T (mm) when the external force reaches its maximum value.

According to Equation (39), the root mean square error (RMSE) is applied to evaluate the accuracy of the RNEA. The RMSE in the X-axis is equal to 1.2268 (N), the RMSE in the Y-axis is equal to 1.1874 (N), and the RMSE in the Z-axis is equal to 1.2627 (N).


Conclusions
This study has presented a strategy to solve collision problems for the Hexa parallel robot. The proposed virtual force observer is created by combining the solution of the inverse dynamics problem with a low-pass filter. When the Hexa robot moves the active plate along the designed trajectory, the observer monitors and indicates the collision time for the three main parts: the active plate, the rods, and the arms. The multi-dual depth camera system performs effectively in monitoring the workspace and the external obstacle, and the contact point location in each collision case is obtained based on the proposed flowchart for the collision phases. The recursive Newton-Euler algorithm is applied to analyze the dynamics free-body diagram of the Hexa robot; with the determined contact point location, the intensity and direction of the external force are finally identified. The comparison with the experimental force sensor shows the potential of the method for solving the collision problem in the Hexa robot structure.