Development and Control of a Pneumatic-Actuator 3-DOF Translational Parallel Manipulator with Robot Vision

A vision-based three degree-of-freedom translational parallel manipulator (TPM) was developed. The developed TPM has the following characteristics. First, the TPM is driven by three rodless pneumatic actuators and is designed as a horizontal structure to enlarge its horizontal working space to cover a conveyor. Then, a robot-vision system (including a webcam mounted on the TPM) collects images of objects on the conveyor and transfers them through the LabVIEW application programming interface for image processing. Since it is very difficult to achieve precise position control of the TPM due to the nonlinear couplings among the robot axes, feedback linearization is utilized to design an adaptive interval type-2 fuzzy controller with self-tuning fuzzy sliding-mode compensation (AIT2FC-STFSMC) for each rodless pneumatic actuator to attenuate nonlinearities, function approximation errors, and external disturbances. Finally, experiments proved that the vision-based three degree-of-freedom TPM was capable of accurately tracking desired trajectories and precisely executing pick-and-place movement in real time.


Introduction
Robotic manipulators are efficient at picking, placing, and assembling objects and at tracking movements. With respect to their kinematic structures, robotic manipulators are generally divided into two types, namely the serial type and the parallel type. A serial manipulator is designed as a series of links, sequentially connected by actuated joints, from a base to an end-effector. This arm-like structure is highly flexible in large-scale operations. However, the serially linked mechanism provides low positioning accuracy, because the errors of the individual joints and links are superimposed, and its stiffness is insufficient for heavy loads. In a parallel manipulator (PM), several independent kinematic chains connect a moving platform, termed an end-effector, to a fixed base platform, while the actuators are located on or near the fixed base. PMs have many advantages, mainly because the load is shared by the several links connecting the moving platform to the base. Compared to serial manipulators, the advantages of PMs also include higher stiffness, higher load-carrying capacity,

• The presented adaptive interval type-2 fuzzy controller with self-tuning fuzzy sliding-mode compensation (AIT2FC-STFSMC) effectively attenuates the nonlinearities in the TPM, which come from two sources: (1) the pneumatic cylinder, with its low stiffness, air compressibility, and large friction forces; and (2) the valve, with its dead zones and varying rates of air flow through the servo valves.

• The assembly configuration of the RPAs arranges the PM as a horizontal structure, which offers a large horizontal working space, and a soft pneumatic gripper installed on the end-effector of the TPM is able to easily grip an object.

• The developed vision-based 3-DOF TPM has the ability to accurately execute pick-and-place movement in real time.
The remainder of this paper is organized as follows. In Section 2, the mechanical parts of the RPA-driven 3-DOF TPM and its experimental setup are described. Mathematical models of the RPAs, covering the cylinder and pressure dynamics, are then developed in Section 3.
The control system design with the stability analysis is given in Section 4. Section 5 introduces the robot-vision technique with the LabVIEW API, and three experiments are presented in Section 6 to investigate the trajectory tracking performance and pick-and-place operation. Finally, Section 7 concludes this paper.

Test Rig Layout of 3-DOF TPM
In this section, the geometrical configuration of the constructed RPA-driven 3-DOF TPM is first introduced. The layout of the 3-DOF TPM is shown in Figure 1, and its photo is shown in Figure 2. A PC-based control unit labeled 12 in Figure 1, which is installed inside the control box under the fixed platform, controls the system. This RPA-driven 3-DOF TPM communicates through analog-to-digital and digital-to-analog converter (AD/DA) interface cards. Three RPAs with 25-mm-diameter pistons are horizontally installed under the parallel links at 120 degrees to each other to move the platform in three dimensions. Three Festo model DGC-25-500 RPAs produce horizontal motion in this design; each RPA has a 500-mm stroke. For each horizontal axis, a Festo 5/3 MPYE-5-1/8-HF-010-B PDCV regulates the flow of air into the cylinder. This device is crucial to controlling the PM. The voltage supplied to each PDCV is 24 V, and the control voltage is in the range of 0 to 10 V. The maximum recommended nominal flow is 700 L/min. In this system, the pressure source is set at 6 bar. The two pressure levels in the two cylinder chambers are measured by two Festo model SDE1 pressure sensors, which are installed on the two ports of the cylinder. A PC-based controller provides a control signal that regulates the PDCVs to drive the horizontal RPAs through a D/A interface that operates at a sampling frequency of 200 Hz. To accurately measure the position of each horizontal RPA, three linear encoders with a resolution of 1 µm are installed on the RPAs. The position signals are sent to the PC-based controller through a counter card. For robot vision, a Logitech C920 webcam mounted on the support frame is used to capture images of each object.

Analysis of Kinematics
Typically, there are two parts to manipulator kinematics analysis, namely inverse kinematics and forward kinematics. From the analysis of inverse kinematics, a set of actuated joint variables can be identified to achieve a targeted position and orientation of the end-effector of the TPM. In forward kinematics, the position and orientation of the end-effector are determined from the given actuated joint coordinates of the TPM. This study employs the vector-loop closure equation [33,34] to derive both the inverse and forward kinematics in this section. First, the desired 3D path profiles are converted to trajectory profiles for each of the three RPAs by using inverse kinematics. With the connection of the three RPAs in a parallel mechanism through links and RPA-driven joints, the end-effector mounted on the movable platform can thus be driven to perform 3D motions while each RPA is controlled to track its associated trajectory profile.

Inverse Kinematic Analysis
Initially, two coordinate frames are defined for the RPA-driven 3-DOF TPM to analyze the kinematic model, as shown in Figure 3. The first frame is a static Cartesian coordinate frame, O(x_0, y_0, z_0), fixed at the center of the base, whereas a mobile Cartesian coordinate frame, P(x_p, y_p, z_p), is assigned to the center of the mobile platform. In Figure 3, A_i indicates the joints located at the center of the base and B_i represents the passive joints of the movable platform (i = 1, 2, 3). Since three links are installed between the mobile platform and the fixed platform, the lengths of the links R_1 and R_2 in Figure 3 can be expressed accordingly. Let C_Ai be the position of the point A_i and C_Bi be the position of the point B_i, where D_i represents the linear displacement of the ith RPA, which is also the cylinder piston position of the ith limb, and β_i and γ_i are computed as β_i = γ_i = (i − 1) × 120°, as displayed in Figure 4. Accordingly, Equation (4) can be further expressed in terms of U_i and V_i, defined in Equations (6) and (7). Then, substituting Equations (6) and (7) into Equation (5) yields the kinematics of each RPA in Equation (8).
Notably, Equation (8) has two solutions for each actuator. According to the current assembly of the mechanism, only the positive square root solution is feasible while the actuator translates from outside to inside. Consequently, the inverse kinematics equation for each RPA of the 3-DOF TPM can be found accordingly.
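Because Equations (5)-(9) are not reproduced here, the root-selection step can be illustrated with a small sketch under a hypothetical limb geometry: assume each limb imposes a fixed link-length constraint ||p + b_i − a_i|| = R_2, where the actuated joint a_i slides radially along u_i = [cos γ_i, sin γ_i, 0]^T. The dimensions r_a, r_p, and link below are illustrative, not the paper's values.

```python
import math

def inverse_kinematics(p, r_a=0.20, r_p=0.05, link=0.70):
    """Hypothetical 3-RPA translational PM: for each limb i, solve the
    quadratic s^2 - 2(c.u)s + (|c|^2 - link^2) = 0 for the slider offset s,
    keeping the root that matches the assembly (actuator moving inward)."""
    D = []
    for i in range(3):
        g = math.radians(120 * i)            # gamma_i spaced 120 deg apart
        u = (math.cos(g), math.sin(g), 0.0)
        # c_i = p + b_i, with platform joint b_i = r_p * u_i
        c = (p[0] + r_p * u[0], p[1] + r_p * u[1], p[2])
        cu = c[0] * u[0] + c[1] * u[1]       # c_i . u_i (u_i has no z part)
        cc = c[0] ** 2 + c[1] ** 2 + c[2] ** 2
        disc = cu * cu - cc + link * link
        if disc < 0:
            raise ValueError("pose outside workspace")
        s = cu + math.sqrt(disc)             # positive square root = feasible assembly
        D.append(s - r_a)                    # piston displacement of limb i
    return D
```

Choosing the positive square root mirrors the feasibility argument above: the other root corresponds to the mirrored, unreachable assembly mode.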

Forward Kinematic Analysis
The objective of forward kinematics is to acquire the 3D pose information of the end-effector. Given D_i, the positions C_Ai and C_Bi can be calculated from Equations (10) and (11). Substituting Equations (10) and (11) into Equation (4) yields Equation (12). Expressing the vectors in Equation (12) componentwise and letting Q_i = (D_i − R_i), i = 1, 2, 3, Equation (13) can then be rewritten as Equations (14)-(16). Therefore, from Equations (14)-(16), the forward kinematics is obtained.
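Under a hypothetical limb geometry (each limb imposes a fixed link-length constraint ||p − q_i|| = R_2 with the effective joint center q_i lying in the base plane; all dimensions illustrative, not the paper's values), the forward problem reduces to intersecting three spheres, which can be solved in closed form:

```python
import math

def forward_kinematics(D, r_a=0.20, r_p=0.05, link=0.70):
    """Hypothetical geometry: the platform center p satisfies |p - q_i| = link
    for q_i in the z = 0 plane. Differencing pairs of sphere equations gives a
    2x2 linear system in (x, y); z follows from the negative square root
    because the platform hangs below the base."""
    q = []
    for i in range(3):
        g = math.radians(120 * i)
        s = r_a + D[i] - r_p
        q.append((s * math.cos(g), s * math.sin(g)))
    # subtract sphere 0 from spheres 1 and 2: 2 p.(q_j - q_0) = |q_j|^2 - |q_0|^2
    a11, a12 = 2 * (q[1][0] - q[0][0]), 2 * (q[1][1] - q[0][1])
    a21, a22 = 2 * (q[2][0] - q[0][0]), 2 * (q[2][1] - q[0][1])
    b1 = q[1][0] ** 2 + q[1][1] ** 2 - (q[0][0] ** 2 + q[0][1] ** 2)
    b2 = q[2][0] ** 2 + q[2][1] ** 2 - (q[0][0] ** 2 + q[0][1] ** 2)
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    z2 = link ** 2 - (x - q[0][0]) ** 2 - (y - q[0][1]) ** 2
    return (x, y, -math.sqrt(z2))    # negative root: end-effector below base
```

Round-tripping a pose through the inverse and forward maps is a convenient consistency check for any concrete geometry.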

Dynamic Model of the RPA
In the RPA, the air flow can be regulated by controlling the orifice area of the PDCV. With the pressure difference resulting from the two chambers of the cylinder, the RPA can be moved to the desired position. According to the analysis in [35], the dynamic model of the RPA can be derived by considering the dynamics and the mass flow rate of the PDCV, the continuity equation, and the equation of motion. Thus, the nonlinear model of the RPA can be presented in the form of a fourth-order nonlinear system as in [35], where x_1 = x is the piston position; x_2 = ẋ is the piston velocity; x_3 = P_a is the pressure in chamber a; x_4 = P_b is the pressure in chamber b; the control input u indicates the spool displacement of the PDCV; K_s-c(x_1) denotes a combination of the static and dynamic frictions; w denotes the port width of the PDCV; l denotes the stroke of the RPA; ∆ is the general residual chamber volume; R is the universal gas constant; T_s = 293 K is the supply temperature; C_0 is the flow constant, and C_d = 0.8 is the discharge coefficient. For convenience of analysis, the flow function f(·) of Equation (21) is introduced, where P_atm is the atmospheric pressure; p_r = P_d/P_u is the ratio between the downstream and the upstream pressures at the orifice; k = 1.4 is the specific heat constant, and C_r = (2/(k + 1))^(k/(k−1)) = 0.528 is the critical pressure ratio. It can be shown that the function f(·) and its derivative are continuous with respect to p_r. According to Equation (21), the functions f(x_3, P_s, P_e) and f(x_4, P_s, P_e) in Equation (20) are defined such that P_s = 6 × 10^5 N/m^2 is the supply pressure; P_e = 1 × 10^5 N/m^2 is the exhaust pressure, and T_a and T_b are the cylinder air temperatures of chambers A and B, respectively. In Equation (20), K_s-c(x_1) denotes the sum of the effects of the system static and dynamic friction forces, where A denotes the piston area of the cylinder; K_s(x_1) indicates the position-dependent static friction forces, and K_c(x_1) is the variable position-dependent load due to friction.
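Since Equation (21) is not reproduced here, the regime-switching structure of f(·) can be sketched with the standard isentropic orifice-flow form (choked below the critical ratio C_r, subsonic above it), written in normalized units as an assumption about its shape rather than a reproduction of the paper's equation:

```python
import math

K = 1.4                                      # specific heat ratio
C_R = (2.0 / (K + 1.0)) ** (K / (K - 1.0))   # critical pressure ratio, ~0.528

def flow_fn(p_r):
    """Normalized mass-flow function of the downstream/upstream pressure
    ratio p_r = P_d / P_u (gas constant omitted for clarity)."""
    if p_r <= C_R:
        # choked (sonic) flow: independent of p_r
        return math.sqrt(K * (2.0 / (K + 1.0)) ** ((K + 1.0) / (K - 1.0)))
    # subsonic flow
    return math.sqrt(2.0 * K / (K - 1.0)
                     * (p_r ** (2.0 / K) - p_r ** ((K + 1.0) / K)))
```

The two branches join smoothly at p_r = C_r, consistent with the remark that f(·) and its derivative are continuous with respect to p_r.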

Input-Output Feedback Linearization
According to [36], an arbitrary nonlinear single-input single-output system can be linearized by differentiating its output. Applying feedback linearization theory to the RPA and neglecting the static frictional forces in Equation (20), the system can be expressed in the affine form ẋ = f(x) + g(x)u, where the state vector x ≡ [x_1 x_2 x_3 x_4]^T and u are, respectively, the system state and the control input (the spool displacement of the PDCV), and the corresponding vector fields f(x) and g(x) are partially unknown, smooth vector functions. After linearization, Equation (20) becomes

y^(3)(t) = F(x) + G(x)u(t) + d(x), (28)

where [y(t) ẏ(t) ÿ(t)]^T ∈ R^3 is the state vector; u ∈ R and y ∈ R are the control input and the system output, respectively, and d(x) denotes the external disturbance and the unmodeled friction force of the piston. It is assumed that |d(x)| ≤ D for all states x(t), and that F(x) and G(x) are partially unknown functions with uncertain time-varying parameters. Without loss of generality, G(x) can be assumed to be strictly positive. Evidently, an additional disturbance compensator is necessary to account for the lumped disturbances.

Development of Control Strategy AIT2FC-STFSMC
As to the control strategy, the proposed AIT2FC-STFSMC is designed to attenuate disturbances and track trajectories for the RPA-driven 3-DOF TPM, which exhibits high nonlinearity and time variation. Figure 5 shows the relationship between the AIT2FC and the STFSMC. In the AIT2FC-STFSMC, the adaptive interval type-2 fuzzy controller (AIT2FC) serves as the trajectory tracking controller, in which the interval type-2 fuzzy system is used to mimic an ideal controller. However, an approximation error may arise when using the AIT2FC. Hence, the self-tuning fuzzy sliding-mode compensator (STFSMC) is derived to compensate for the difference between the ideal controller and the AIT2FC, as well as for external disturbances. The AIT2FC is able to automatically adjust the fuzzy rules and reduce their number. Nevertheless, the boundary of the approximation error is very difficult to measure in industrial applications. A large pre-set boundary leads to a large chattering phenomenon in the control output, which wears the bearing mechanism and excites unstable dynamics; conversely, a small boundary may make the controlled system unstable. To overcome this problem, a simple estimation algorithm is investigated to observe the boundary of the approximation error in real time. With the on-line adjustment of the boundary, the chattering in the control output can be greatly reduced.
The reference signals are defined as x_d = [y_d(t), ẏ_d(t), ÿ_d(t)]^T, so the tracking error vector is expressed as e = x_d − x. The sliding surface is defined in Equation (32), where c_i is specified such that ∑_{i=1}^{n} c_i λ^(i−1) is a Hurwitz polynomial and λ is a Laplace operator. If the functions F(x) and G(x) in Equation (28) are known and the external disturbance d(x) is measurable, the ideal control law u* can be derived accordingly, where η > 0 is a constant; Λ_s = [0, c_1, c_2, ..., c_(n−2)]^T is a constant vector, and S_∆(t) = S − Φ sat(S/Φ), in which Φ ≥ 0 is the width of the boundary layer of the sliding surface S. The properties of the function S_∆ are as follows: S_∆(t) = 0 whenever |S(t)| ≤ Φ, and the time derivatives of S_∆(t) and S(t) coincide whenever |S(t)| > Φ. These properties of the boundary layer are applied in the design of the controller, such that the adaptation terminates as soon as the boundary layer is reached, to avoid the possibility of unbounded growth. Differentiating Equation (32) then gives the sliding-surface dynamics used in the subsequent analysis.
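The boundary-layer quantity S_∆ = S − Φ sat(S/Φ) can be made concrete with a short sketch (the value of Φ below is illustrative):

```python
def sat(x):
    """Standard saturation function: clips x to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def s_delta(S, phi):
    """Distance of S outside the boundary layer of width phi:
    S_delta = S - phi * sat(S / phi); zero whenever |S| <= phi."""
    return S - phi * sat(S / phi)
```

Because S_∆ vanishes inside the layer, any adaptation law driven by S_∆ freezes once the sliding surface enters the boundary layer, which is the anti-windup property noted above.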
However, some variables in Equations (29) and (30) may be unknown or perturbed, and d(x) may not be measurable. Thus, it is difficult to obtain a precise model for the functions F(x) and G(x), and the implementation of the ideal control law u* is also impossible for the RPA. In this regard, the AIT2FC yields û_fz to approximate the ideal control law, and the STFSMC contributes u_comp(S) to compensate for the disturbance and modeling error. The proposed control law combining the two is given in Equation (36).

Design of the AIT2FC
In this section, a single-input AIT2FC is used to formulate the control law û_fz. For the AIT2FC, the ith fuzzy rule takes the form: IF S is F^i_T2S, THEN û_fz is α̂^i_T2fz, where S is the input variable; F^i_T2S is an interval type-2 fuzzy set; α̂^i_T2fz is an interval type-2 singleton fuzzy set, and M is the number of rules. Using singleton fuzzification, product inference, and center-average defuzzification, the output of the AIT2FC is û_fz = (y_l + y_r)/2, where y_l and y_r, respectively, represent the farthest left and the farthest right points of the interval type-2 set. In Equation (38), the weight vector α̂^T = [α̂_1, α̂_2, ..., α̂_M] is used to estimate the optimal weight vector α*^T = [α*_1, α*_2, ..., α*_M], and α* is reasonably assumed to be bounded. The farthest left point y_l and the farthest right point y_r of the interval type-2 set are defined in Equations (39) and (40), in which the upper and lower degrees of the membership functions appear. The parameters L and R in Equations (39) and (40), respectively, are calculated by using type reduction [37]. The adaptive law for the AIT2FC is given in Equation (41), where η_1 > 0 is the adaptive learning rate.
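The switch points L and R can be found by enumerating candidate switch positions, a brute-force equivalent of the Karnik-Mendel type-reduction procedure cited in [37]. The firing degrees and consequents below are illustrative, and at least one rule is assumed to fire (so the weight sums are nonzero):

```python
def type_reduce(lower, upper, alpha):
    """Compute [y_l, y_r] for an interval type-2 fuzzy system by enumerating
    switch points. 'lower'/'upper' are the lower/upper firing degrees of the
    M rules and 'alpha' the consequent centers, sorted ascending."""
    M = len(alpha)

    def weighted(ws):
        return sum(w * a for w, a in zip(ws, alpha)) / sum(ws)

    # y_l: upper degrees up to the switch point, lower degrees after it
    y_l = min(weighted([upper[i] if i <= k else lower[i] for i in range(M)])
              for k in range(-1, M))
    # y_r: lower degrees up to the switch point, upper degrees after it
    y_r = max(weighted([lower[i] if i <= k else upper[i] for i in range(M)])
              for k in range(-1, M))
    return y_l, y_r
```

The controller output is then the midpoint (y_l + y_r)/2; when the upper and lower firing degrees coincide, the interval collapses to the ordinary type-1 weighted average.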

Design of the STFSMC
The design objective of the STFSMC is to compensate for the disturbances [38]. The sliding surface S is again specified as the input, and the sliding control law u_fs is the output. Figure 6 shows the membership functions and the associated linguistic variables, and the fuzzy rules are simply expressed accordingly. By using singleton fuzzification, max-min inference, and center-average defuzzification, the output of the STFSMC is obtained. To avoid the substantial computational cost of a general fuzzy control algorithm, u_fs can be calculated for two cases, k = S/Φ < 0 and k = S/Φ ≥ 0, where Φ > 0 denotes the width of the boundary layer. It can be verified that u_fs = −sgn(S) for |S| ≥ Φ, and, thus, the STFSMC is designed as in Equation (47), where k_c is a compensation gain given by Equation (48) with specified variables M_0(x) and M_2(x), and ρ̂ is an adaptive compensation gain [38] updated by Equation (49), where η_2 > 0 is a learning rate. Figure 5 illustrates the overall system for the proposed AIT2FC-STFSMC. For the AIT2FC, the parameter g_s is used to ensure that the sliding surface S is within the range of the fuzzy input, and the gain factor g_u is used to regulate the fuzzy output û_fz. It is noted that Assumption 1 is needed for the design of the AIT2FC, Assumption 2 is needed for both the AIT2FC and the STFSMC, and Assumption 3 is needed for the design of the STFSMC.
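The self-tuning behavior can be sketched as a simple Euler loop. The update form rho_dot = eta2 * |S_delta| is an assumption consistent with the earlier statement that adaptation stops once the boundary layer is reached; it is not the exact law of Equation (49), and all gains are illustrative:

```python
def sat(x):
    """Standard saturation function: clips x to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def adapt_rho(S_history, phi=0.1, eta2=2.0, dt=0.005, rho0=0.0):
    """Euler-integrate an assumed adaptive bound estimate
    rho_dot = eta2 * |S_delta|; the estimate is frozen while |S| <= phi."""
    rho = rho0
    for S in S_history:
        s_delta = S - phi * sat(S / phi)   # zero inside the boundary layer
        rho += eta2 * abs(s_delta) * dt
    return rho
```

Because S_∆ = 0 inside the layer, the estimated boundary ρ̂ stops growing on the sliding surface, which keeps the compensation gain bounded and limits chattering.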

Theorem 1.
Suppose the RPA is represented in the form of the affine system (28) and satisfies Assumptions 1, 2, and 3. Then, by using the control law (36), where û_fz(S, α̂) represents the AIT2FC (see Equation (38)) and u_comp(S) represents the STFSMC (see Equation (47)), together with the adaptive laws (41) and (49), (i) the system state x and the control law u are bounded, and (ii) the tracking errors converge to 0 as t → ∞.
Proof. See Appendix A.

Image Capturing
In this study, images of an object were captured by a webcam and then transferred to LabVIEW through an API with image processing toolkits. The API IMAQdx Open Camera is used to open a video source in the RGB color model, in which the Property Node is a function applied to set the image resolution and the frames per second. IMAQ Create is an API applied to create a buffer for temporarily storing images. The images in the buffer are grabbed by the API IMAQdx Grab as requested.

Define a Template Pattern
To define a template pattern, the captured RGB image of an object is first converted to grayscale by a color transformation, typically the luminance weighting gray = 0.299R + 0.587G + 0.114B. A mask for the object is obtained by manually drawing the minimum rectangular region that contains the object; this region is termed the region of interest (ROI). The IMAQ ConstructROI VI extracts image features within the ROI and creates the template pattern. The template pattern and its features for the targeted object are stored in a database.
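The per-pixel conversion can be sketched as follows; the ITU-R BT.601 luminance coefficients below are the common default and are assumed rather than taken from the paper:

```python
def rgb_to_gray(r, g, b):
    """Grayscale via the standard BT.601 luminance weighting (assumed
    coefficients); inputs are channel intensities in [0, 255]."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Since the three weights sum to 1, a neutral gray pixel (r = g = b) maps to itself.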

Pattern Recognition
In this paper, pattern recognition is applied to determine the location of an object that has the same image features as the template pattern. Once activated, the pattern recognition system captures an image and uses the NI IMAQ Vision API to search the input image for patterns similar to the template by using block matching. Figure 7 indicates the process of block matching, in which f(x, y) is the grayscale image of dimension M × N and w(x, y) is the grayscale template of dimension K × L; it is noted that M ≥ K and N ≥ L. Block matching slides the K × L block w(x, y) over the pixel coordinates from (1, 1) to (M − K, N − L) and calculates the correlation C(i, j) between f and w at each offset by Equation (51). After the search is conducted, the pattern with the largest value of C is considered the candidate pattern, and its normalized correlation coefficient R(i, j), defined in Equation (52) and bounded by −1 ≤ R(i, j) ≤ 1, determines the best matching pattern. Image recognition is implemented in LabVIEW with three APIs as follows: (1) the IMAQ Read Image and Vision Info VI calls the built template pattern; (2) the IMAQ Match Pattern API searches for matches with respect to the preset matching parameters; (3) the Unbundle by Name function calculates the centroid of the optimal matching pattern, which represents the optimal matching pattern as a whole with respect to the image.

Spatial Calibration
Spatial calibration converts a pixel coordinate to a real-world coordinate. Two APIs can be applied to compensate for potential perspective errors and nonlinear distortions in the image: (1) the IMAQ Calibration Target to Points-Circular Dots API detects circular dots in a binary image and returns pixel and real-world points for calibration; (2) the IMAQ Get Calibration Info API returns the calibration information associated with the image.
After spatial calibration is performed, one can identify the coordinate of the target object in the real world.
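The dot-grid calibration amounts to fitting a pixel-to-world transform from the detected correspondences. A least-squares affine fit is a minimal sketch of this step (it ignores the lens-distortion terms that the IMAQ APIs additionally model; all point values are illustrative):

```python
def fit_affine(pixels, world):
    """Least-squares affine map [x_w, y_w] = A [x_p, y_p, 1] from >= 3
    non-collinear dot correspondences pixels[i] -> world[i]."""
    def solve3(Ata, Atb):
        # Gauss-Jordan elimination with partial pivoting on a 3x3 system
        m = [row[:] + [b] for row, b in zip(Ata, Atb)]
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            for r in range(3):
                if r != col:
                    fct = m[r][col] / m[col][col]
                    m[r] = [a - fct * b for a, b in zip(m[r], m[col])]
        return [m[r][3] / m[r][r] for r in range(3)]

    rows = [(px, py, 1.0) for px, py in pixels]
    Ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    coeffs = []
    for k in (0, 1):   # one 3-parameter fit per world axis (normal equations)
        Atb = [sum(r[i] * w[k] for r, w in zip(rows, world)) for i in range(3)]
        coeffs.append(solve3(Ata, Atb))
    return coeffs      # [[a, b, c], [d, e, f]]: x_w = a*x_p + b*y_p + c, ...

def pixel_to_world(coeffs, px, py):
    (a, b, c), (d, e, f) = coeffs
    return a * px + b * py + c, d * px + e * py + f
```

With more than three dots, the extra correspondences average out dot-detection noise in the least-squares sense.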

Experiments and Discussion
The RPA is regulated by the PDCV. Instead of using acceleration sensors, in this study the velocity and acceleration are estimated by numerically differentiating the position and the velocity with respect to time. To reduce the signal disturbances introduced by the numerical differentiation, a digital filter is applied, where y_out(t) represents the filtered signal and y_in(t) is the input data from the position measurement of the piston. The input voltage of the PDCV is applied as the control signal. The membership functions for S and u_fs are displayed in Figure 5.
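The differentiate-then-filter step can be sketched as below. The first-order low-pass form and its coefficient are assumptions, since the paper's filter equation is not reproduced here:

```python
def estimate_filtered_velocity(positions, dt, alpha=0.8):
    """Backward-difference velocity estimate passed through an assumed
    first-order low-pass filter y_out[k] = alpha*y_out[k-1] + (1-alpha)*y_in[k]."""
    v_filt, out = 0.0, []
    for k in range(1, len(positions)):
        v_raw = (positions[k] - positions[k - 1]) / dt   # numerical difference
        v_filt = alpha * v_filt + (1.0 - alpha) * v_raw  # smoothing
        out.append(v_filt)
    return out
```

The same cascade applied to the filtered velocity yields the acceleration estimate; the trade-off is the usual one between noise rejection (larger alpha) and estimator lag.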

Experiment 1 (Trajectory Tracking: Square Trajectory)
The square trajectory is often used in practical applications such as the pick-and-place operation. It is composed of five segments and has four positioning points at the vertices of the square. The profile and the moving direction of the square trajectory are illustrated in Figure 8. Segment 1 is modeled as a fifth-order trajectory [27,33], and segments 2 to 5 are modeled as straight lines. In the experiment, the end-effector first moves from the initial position (0, 0, −66.6 cm) to (−15 cm, 15 cm, −60 cm) in 3 s. Then, the end-effector moves along the square loop path with an edge length of 30 cm and back to the positioning point (−15 cm, 15 cm, −60 cm) in 6 s. To effectively control the RPAs, the trajectories for each RPA are also modeled as fifth-order polynomial functions [27,33] with zero initial and final velocities as well as zero initial and final accelerations.
The experimental results of the RPA response for each axis are shown in Figures 9-11. The estimated end-effector position calculated from the forward kinematics is shown in Figure 12a,b. The estimated position error of the end-effector is calculated from the position error of the actuated joints, as shown in Figure 12c. The root-mean-square errors (RMSE) of the path tracking control for the cylinder A, B, and C axes are 0.214, 0.203, and 0.173 cm, respectively. The maximum estimated error of the end-effector position is approximately 0.6097 cm at 3.37 s.
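The rest-to-rest fifth-order profile used for each RPA segment has the standard closed form below (boundary conditions: zero velocity and acceleration at both ends); the duration and endpoints are illustrative:

```python
def quintic(t, T, x0, xf):
    """Rest-to-rest quintic: x(0) = x0, x(T) = xf, with zero velocity and
    zero acceleration at both endpoints."""
    tau = t / T
    s = 10 * tau ** 3 - 15 * tau ** 4 + 6 * tau ** 5   # normalized blend
    return x0 + (xf - x0) * s

def quintic_vel(t, T, x0, xf):
    """Time derivative of the rest-to-rest quintic profile."""
    tau = t / T
    ds = (30 * tau ** 2 - 60 * tau ** 3 + 30 * tau ** 4) / T
    return (xf - x0) * ds
```

Sampling this profile per axis gives the commanded joint trajectories; the zero end-point velocity and acceleration keep the pneumatic actuators from being excited at the segment transitions.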

Experiment 2 (Trajectory Tracking: Star Trajectory)
In Experiment 2, a star motion is set as the reference 3D motion trajectory. The star trajectory is composed of six segments. The profile and moving direction of the designed trajectory are illustrated in Figure 13. Segment 1 is modeled as a fifth-order trajectory [27,33] from the initial position (0, 0, −66.6 cm) to (10 cm, 0 cm, −60 cm), and segments 2 to 6 are modeled as straight lines. At the beginning, the end-effector moves along segment 1 in 3 s. Then, the end-effector moves sequentially along segments 2 to 6, where segment 2 is a straight line from P_1 (10 cm, 0 cm, −60 cm) to P_2 (−8.1 cm, 5.9 cm, −60 cm), segment 3 is a straight line from P_2 (−8.1 cm, 5.9 cm, −60 cm) to P_3 (3.1 cm, −9.5 cm, −60 cm), segment 4 is a straight line from P_3 (3.1 cm, −9.5 cm, −60 cm) to P_4 (3.1 cm, 9.5 cm, −60 cm), segment 5 is a straight line from P_4 (3.1 cm, 9.5 cm, −60 cm) to P_5 (−8.1 cm, −5.9 cm, −60 cm), and segment 6 is a straight line from P_5 (−8.1 cm, −5.9 cm, −60 cm) back to P_1 (10 cm, 0 cm, −60 cm). The traveling duration for each of segments 2 to 6 is set to 2 s. To effectively control the RPAs, the trajectories for each RPA are also modeled as fifth-order polynomial functions [27,33] with zero initial and final velocities as well as zero initial and final accelerations. The experimental results of the RPA response for each axis are presented in Figures 14-16. The estimated end-effector position calculated from the forward kinematics is depicted in Figure 17a,b, and the estimated position error of the end-effector, calculated from the position error of the actuated joints, is illustrated in Figure 17c.

Stationary Object Localization
This experiment measured the accuracy of the object-localization technique for a stationary object. A hexagonal object was placed on the conveyor belt with the conveyor power turned off. The experiment was repeated 10 times using the presented robot vision, and the average, variation, and maximum of the error for the x-coordinate, y-coordinate, and angle are shown in Table 1. The variations of the x-coordinate and y-coordinate were less than 0.03 cm, and the variation of the angle was less than 0.25°, which are within the position tolerance of the robot arm. Table 1 also shows the measuring accuracy of the visual camera: the maximum errors of the x-coordinate and y-coordinate are around 0.05 cm and 0.08 cm, respectively, and the maximum error of the angle is around 0.09°. In this paper, spatial calibration is applied to convert a pixel coordinate to a real-world coordinate. Clearly, a measuring error in the pixel coordinate will produce an error in the real-world coordinate after the coordinate transformation, and this error in turn propagates into the motion control of the RPAs. The soft pneumatic gripper installed on the end-effector is triangular with hard crossbeams, so that it can buckle and deform to conform around an object; it can easily grip an object with a radius in the interval [35 mm, 65 mm]. That is, even if a motion error arising from the measuring error occurs in the trajectory tracking control of the RPAs, the soft pneumatic gripper can still successfully pick the object up. Experiment 3 shows that the vision-based RPA-driven TPM accurately locates the object and successfully executes the pick-and-place operation, as shown in Figure 18. In this experiment, the conveyor belt conveyed the targeted object at a fixed speed of 2.1 cm/s along the y-axis of the image.
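As a sketch of the spatial-calibration step described above (the homography values and function name here are hypothetical illustrations, not the paper's calibration result), a pixel coordinate can be mapped to a real-world coordinate as:

```python
import numpy as np

def pixel_to_world(px, py, H):
    """Map a pixel coordinate (px, py) to a real-world coordinate using a
    3x3 homography H obtained from spatial calibration."""
    p = H @ np.array([px, py, 1.0])   # apply homography in homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]   # normalize by the homogeneous scale

# Example with a purely scaling calibration of 0.05 cm per pixel (hypothetical):
H = np.diag([0.05, 0.05, 1.0])
x_cm, y_cm = pixel_to_world(100, 200, H)   # -> (5.0, 10.0)
```

Any error in (px, py) is scaled through H, which is why the pixel-level measuring error bounds the real-world localization error discussed above.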
The proposed robot-vision system estimated the location of the moving object by the velocity formula. In the physical test, the estimated error of the location of the moving object was approximately 0.1 cm when the conveyor belt moved the object 20 cm along the y-axis of the image. Figure 18 illustrates the pick-and-place experiment. Figure 18a shows the object on the conveyor belt. After power was connected to the conveyor belt, the belt conveyed the object along the y-axis of the image, and the webcam captured images of the moving object. Figure 18b depicts the robot-vision system locating the object and the PM picking it up. The PM then moved the object to the desired location, as illustrated in Figure 18c,d.
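The velocity-based estimate described above can be sketched as a constant-velocity prediction (the function name and the interception-delay parameter are our assumptions; the paper does not give its exact formula):

```python
def predict_object_y(y_seen_cm, belt_speed_cm_s, delay_s):
    """Predict the object's y-coordinate (along the image y-axis) at pick-up
    time, assuming the conveyor belt moves at a constant speed."""
    return y_seen_cm + belt_speed_cm_s * delay_s
```

For example, with the belt at 2.1 cm/s and a hypothetical 2 s delay between detection and pick-up, an object seen at y = 0 would be intercepted at y = 4.2 cm.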

Conclusions
This study developed and implemented a vision-based RPA-driven 3-DOF TPM, which not only allows 3D path tracking control on a full-scale test rig but also provides robot vision to locate an object for vision-based operations. The AIT2FC-STFSMC was developed for path tracking control of the RPAs, in which the AIT2FC approximates the ideal control law and the STFSMC attenuates disturbances and uncertainties. The system's webcam collected images of objects and transferred them to LabVIEW for image processing. Two types of experiments were conducted to confirm the feasibility of the proposed system. First, the results demonstrated that the end-effector of the manipulator successfully tracked the two given complex 3D trajectories with an RMSE below 0.22 cm. Second, two experiments were conducted on the vision-based RPA-driven 3-DOF TPM: the first proved that the robot-vision system accurately located a stationary object, and the second confirmed that the TPM successfully completed the pick-and-place operation on a moving object.

Patents
There are two Taiwan utility model patents resulting from the developed PM.