Recycling and Updating an Educational Robot Manipulator with Open-Hardware-Architecture

This article presents a methodology to recycle and upgrade a 4-DOF educational robot manipulator with a gripper. The robot is upgraded by providing it with artificial vision, which obtains the position and shape of the objects it collects. A low-cost, open-source hardware solution is also proposed to achieve motion control of the robot through a decentralized control scheme. The robot joints are actuated by five direct-current motors coupled to optical encoders. Each encoder signal is fed to a proportional-integral-derivative controller with anti-windup that employs the motor velocity provided by a state observer. The motion controller requires only two open-architecture Arduino Mega boards, which carry out data acquisition of the optical encoder signals. MATLAB-Simulink is used to implement the controller as well as a friendly graphical interface that allows the user to interact with the manipulator. The communication between the Arduino boards and MATLAB-Simulink is performed in real time through the Arduino IO Toolbox. With the proposed controller, the robot follows a trajectory to collect a desired object while avoiding collisions with other objects. This is verified through the set of experiments presented in the paper.


Introduction
Robot manipulators are among the most widely used mechatronic systems in industry, with applications that include the assembly of elements as well as the welding and painting of parts. Owing to their great usefulness in industry, it is very important to study their kinematics, dynamics, and automatic control in engineering programs related to mechatronics and robotics. A characteristic of robot manipulators is that they are usually manufactured with a closed architecture in their automatic control. Once a robot reaches its end of life, it is resold, reused, or recycled, options known as the "3Rs" [1]. A manipulator is usually classified as unusable equipment when its controller is damaged, since the cost of repairing it can be high. In this case, it is convenient to propose a low-cost methodology to re-manufacture the robot, where its mechanical components are reused and its control system is redesigned with an open architecture.
In the literature, there are several motion controllers for robot manipulators, some of which run on recycled robots that are employed as experimental educational platforms to validate the theory seen in class. Bomfim et al. [2] re-manufactured the controller of a robot manipulator for the automotive industry. In our proposal, MATLAB-Simulink allows monitoring the robot signals by means of scopes and using blocks that facilitate the design of other control algorithms, such as robust, optimal, adaptive, fuzzy, and neural-network schemes, among others. It is worth mentioning that the proposed experimental educational platform is a key element of the Robotics Laboratory of the Faculty of Mechanical and Electrical Engineering (FIME) at the Universidad de Colima in Mexico, where undergraduate students validate the theory seen in robotics and automatic control courses, and they also use the robot for research purposes. For example, it was used by three undergraduate students during their final degree projects, whose achievements are reflected in this manuscript. Similarly, the robot has also been used in internal workshops to motivate students to join and remain at the FIME, as well as to show them the importance of robotics and automatic control.
The article is organized as follows. Section 2 describes the architecture of the recycled robot manipulator. Its kinematics and dynamics are presented in Sections 3 and 4, respectively. Section 5 shows the parameter identification of the robot actuators, whose parameter estimates are used in Section 6 for the design of PID controllers and state observers. The trajectory planning, the artificial vision, the GUI, and the experiments are discussed in Sections 7-10, respectively. Finally, Section 11 establishes the conclusions of the manuscript.

Robot Architecture
The architecture of the recycled robot manipulator is shown in Figure 1. It consists of a 4-DOF robotic arm, model ED7220C, developed by the ED Corporation of Korea. Its joints are shown in Figure 2, and they are located at the body, shoulder, elbow, and wrist. The manipulator also has a gripper to collect objects, and, inside it, there is a resistive force sensor, model FSR 402, from the Interlink Electronics company of the USA. This sensor determines whether an object is inside the gripper. All joints and the gripper have limit switches, which indicate their minimum and maximum displacements. Moreover, these switches allow establishing the initial position of the manipulator. To achieve this position, the manipulator also has an 11.43 cm flex sensor from the American company SparkFun Electronics; this sensor is located at the elbow joint. The body, shoulder, and elbow joints are coupled to permanent-magnet dc motors, model DME38B50G-116, from the company Servo of Japan. In the sequel, these joints are denoted as q_1, q_2, and q_3, respectively. On the other hand, the wrist and gripper are, respectively, driven by DME38B50-115 and DME33B37G-171 dc motors, also from Servo. The wrist joint, denoted as q_4, is actuated by a differential gear mechanism [28] coupled to two dc motors. Each motor of the robot includes an optical encoder to determine its position and is connected to a gearbox that increases its torque while reducing its speed.
Two Arduino Mega boards, from the Italian company Arduino, control the position of the motors. Each board acquires the position data of three motors, and it communicates with a personal computer through a USB connection. The control signal of each motor is produced by MATLAB-Simulink from the American corporation MathWorks. This signal is communicated to the Arduino Mega board via the Arduino IO toolbox, which converts the control signal to Pulse Width Modulation (PWM). The PWM signals are fed to L298N Dual H-Bridge Driver Modules from the Chinese corporation Haitronic, which provide the power to the dc motors. On the other hand, a webcam from the Swiss company Logitech, model C525, provides artificial vision to the manipulator and permits obtaining the positions of the objects taken by the manipulator. Image processing is carried out in MATLAB-Simulink using the Computer Vision Toolbox. A user-friendly graphical user interface (GUI), created in MATLAB, permits selecting the shape and color of the objects taken by the manipulator, whose kinematics is described below. It is worth mentioning that the main components of the proposed controller, such as the webcam, Arduino boards, motor drivers, force and flex sensors, connectors, and cables, have a total cost of about 230 USD. Figure 3 shows the structure of the manipulator, as well as its four DOF q_1, q_2, q_3, and q_4, which are the joint positions. This figure also shows the coordinates (x, y, z) of the end-effector, as well as the angle φ that specifies its orientation.

Forward Kinematics
Using the trigonometric relationships between the links and their lengths leads to the forward kinematics of the manipulator, which provides the position of the end-effector with respect to the joint positions. It is given by:

x = c_1 (l_1 c_2 + l_2 c_23 + l_3 c_234),
y = s_1 (l_1 c_2 + l_2 c_23 + l_3 c_234),
z = d_1 + l_1 s_2 + l_2 s_23 + l_3 s_234,
φ = q_2 + q_3 + q_4,   (1)

where d_1 = 370 mm, l_1 = l_2 = 220 mm, and l_3 = 140 mm are the lengths of the links. Furthermore, we used the shorthand notations s_i = sin(q_i), c_i = cos(q_i), s_ij = sin(q_i + q_j), c_ij = cos(q_i + q_j), s_ijk = sin(q_i + q_j + q_k), and c_ijk = cos(q_i + q_j + q_k).

Robot Workspace
The workspace is the volume that the end-effector can reach, and it is constructed through the forward kinematics in Equation (1), as well as the range covered by the joints of the manipulator, which is shown in Table 1. Figure 4 shows the arm movement range in the xy plane at a height of 115 mm with respect to the robot base. The figure shows that this range encompasses a radius of approximately 400 mm. The objects manipulated by the robot are placed over a rectangular area A located in front of the robot base. The dimensions n and l of this area are determined analytically to maximize it, as follows. Note that n and l are given by:

n = r sin α,    l = r cos α − p,   (2)

where r = 400 mm, p = 200 mm, and α is an unknown angle to be determined. Therefore, the area A can be written as

A = n l = r sin α (r cos α − p).   (3)

The derivative of Equation (3) with respect to α is given by

dA/dα = r (r cos 2α − p cos α).   (4)

Setting this derivative to zero produces the critical point α = 0.568 rad = 32.5°, which gives the maximum rectangular area A. For simplicity, a value of α = 0.524 rad = 30° is used, which yields the dimensions n = 200 mm, l = 140 mm, and the area A shown in Figure 5, represented as a purple rectangle. Note that this figure also shows that the robot camera is located 585 mm above the area A.
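As a quick numerical check of this optimization, the following Python sketch recovers the critical angle by brute force. It assumes the reconstructed relations n = r sin α and l = r cos α − p (dimensions in metres); it is an illustration, not the authors' code.

```python
import math

def rect_area(alpha, r=0.400, p=0.200):
    """Area of the n x l rectangle placed in front of the robot base.

    n = r*sin(alpha) and l = r*cos(alpha) - p follow from the geometry
    of Figure 4 (assumed relations; dimensions in metres)."""
    n = r * math.sin(alpha)
    l = r * math.cos(alpha) - p
    return n * l

# Grid search over alpha in (0, pi/2) with a 1e-4 rad step
best = max((a / 10000.0 for a in range(1, 15708)), key=rect_area)
```

The search lands near 0.568 rad, matching the critical point obtained analytically from dA/dα = 0.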

Inverse Kinematics
This section describes the inverse kinematics of the manipulator, whose objective is to obtain the joint positions q_i, i = 1, . . . , 4, so that the end-effector is placed at a specific position and orientation. The inverse kinematics of the robot is given by [29]:

q_1 = atan2(y, x),
q_3 = atan2(−√(1 − c_3²), c_3),  with  c_3 = (x*² + z*² − l_1² − l_2²)/(2 l_1 l_2),
q_2 = atan2(z*, x*) − atan2(l_2 sin q_3, l_1 + l_2 cos q_3),
q_4 = φ − q_2 − q_3,   (5)

where x* = √(x² + y²) − l_3 cos φ and z* = z − d_1 − l_3 sin φ are the coordinates of the wrist center, and atan2(y*, x*) represents the arctangent of y*/x* that takes into account the sign of each argument to determine the quadrant of the resulting angle. The next section presents the dynamic equation of the manipulator, which takes into account the torques required for the execution of the robot motion.
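The consistency of the forward and inverse kinematics can be checked with a round trip. The sketch below is an illustrative Python implementation of the standard closed-form solution for this arm geometry (elbow-up branch), not the authors' MATLAB code; the link lengths are those of Section 3.

```python
import math

D1, L1, L2, L3 = 0.370, 0.220, 0.220, 0.140  # link lengths from Section 3 (m)

def fk(q1, q2, q3, q4):
    """Forward kinematics of the 4-DOF arm (standard articulated model)."""
    r = L1*math.cos(q2) + L2*math.cos(q2+q3) + L3*math.cos(q2+q3+q4)
    z = D1 + L1*math.sin(q2) + L2*math.sin(q2+q3) + L3*math.sin(q2+q3+q4)
    return r*math.cos(q1), r*math.sin(q1), z, q2 + q3 + q4

def ik(x, y, z, phi):
    """Closed-form inverse kinematics, elbow-up branch (q3 <= 0)."""
    q1 = math.atan2(y, x)
    # wrist centre: step back l3 along the orientation phi
    xs = math.hypot(x, y) - L3*math.cos(phi)
    zs = z - D1 - L3*math.sin(phi)
    c3 = (xs*xs + zs*zs - L1*L1 - L2*L2) / (2*L1*L2)
    q3 = math.atan2(-math.sqrt(max(0.0, 1 - c3*c3)), c3)
    q2 = math.atan2(zs, xs) - math.atan2(L2*math.sin(q3), L1 + L2*math.cos(q3))
    return q1, q2, q3, phi - q2 - q3
```

Applying ik to the output of fk should recover the original joint angles on the chosen branch.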

Robot Dynamics
The dynamic behavior of the manipulator is described by the following expression [29]:

M(q) q̈ + C(q, q̇) q̇ + g(q) + f(q̇) = τ,   (6)

where q = [q_1, q_2, q_3, q_4]^T is the vector of joint positions; q̇ is the angular velocity vector; M(q) = M(q)^T ∈ R^{4×4} is the inertia matrix; C(q, q̇)q̇ ∈ R^{4×1} represents the vector of centrifugal and Coriolis forces; and g(q) ∈ R^{4×1} is the vector of gravitational forces. In addition, f(q̇) ∈ R^{4×1} is the vector of friction forces, and τ ∈ R^{4×1} is the vector of torques applied by the actuators at the joints.

Dynamic Model of the Actuators
The set of joint actuators can be represented by the following matrix differential equation [30]:

q̈ + A q̇ = B u − R τ,   (7)

where A = diag(a_1, a_2, a_3, a_4), B = diag(b_1, b_2, b_3, b_4), R = diag(1/(J_m1 r_1²), . . . , 1/(J_m4 r_4²)), and u = [u_1, u_2, u_3, u_4]^T. Terms a_i and b_i, i = 1, 2, 3, 4, are positive parameters, and u_i, τ_i, J_mi, and r_i are the input voltage, load torque, motor inertia, and gear reduction ratio of the ith joint actuator, respectively. diag(p) represents a diagonal square matrix with the elements of p on the main diagonal.

Mathematical Model of the Robot Manipulator with Actuators
Substituting τ from Equation (6) into Equation (7) yields

(I + R M(q)) q̈ + A q̇ + R (C(q, q̇) q̇ + g(q) + f(q̇)) = B u,   (9)

where I is the identity matrix of size 4 × 4. The previous model is considerably reduced when the gear ratios r_i, i = 1, 2, 3, 4, are high, i.e., r_i ≫ 1. In this case, R ≈ O and Equation (9) is approximated by:

q̈ + A q̇ = B u.   (10)

The gear ratios r_i of the dc motors corresponding to the body, shoulder, elbow, and wrist of the manipulator are 720, 576, 576, and 133, respectively. Since these gear ratios are high, the dynamics of the manipulator in Equation (6) can be neglected. Therefore, an independent controller can be designed for each robot joint using the linear model in Equation (10).
Parameters a i and b i of the dc motor models are unknown, and they are estimated using the recursive least squares algorithm described in the following section.

Parameter Identification of the Actuators
The Recursive Least Squares Method (RLSM) [31] is used to identify the parameters of the robot actuators, which permits designing: (1) controllers to obtain high-precision movements of the manipulator; and (2) state observers to estimate the motor speed. Moreover, the parameter identification is necessary to simulate the robot manipulator and to detect faults in it [32]. To estimate the parameters a_i and b_i, the actuators are operated in closed loop using a proportional controller and a sinusoidal reference input signal.
Since the signals q̈_i(t) and q̇_i(t) of the model in Equation (10) are not available, parameters a_i and b_i are estimated using only measurements of the motor voltage u_i and its position q_i. To this end, each uncoupled model in Equation (7) is filtered by means of the filter H(s) = λ_2/(s² + λ_1 s + λ_2), λ_1, λ_2 > 0, which also attenuates measurement noise, thus minimizing its effect on the parameter identification algorithm. This filtering procedure produces:

z_i(t) = ψ_i^T(t) θ_i,   (12)

where z_i(t) = L^{-1}{s² H(s) Q_i(s)}, ψ_i(t) = [−L^{-1}{s H(s) Q_i(s)}, L^{-1}{H(s) U_i(s)}]^T, and θ_i = [a_i, b_i]^T; L and L^{-1} are the Laplace operator and its inverse, respectively; similarly, Q_i(s) and U_i(s) denote the Laplace transforms of q_i(t) and u_i(t). Signals z_i(t) and ψ_i(t) in Equation (12) are sampled every T_s seconds, and they are used by the RLSM given by [31]:

θ̂_i(k) = θ̂_i(k−1) + P_i(k−1) ψ_i(k) [z_i(k) − ψ_i^T(k) θ̂_i(k−1)] / (γ_i + ψ_i^T(k) P_i(k−1) ψ_i(k)),
P_i(k) = (1/γ_i) [P_i(k−1) − P_i(k−1) ψ_i(k) ψ_i^T(k) P_i(k−1) / (γ_i + ψ_i^T(k) P_i(k−1) ψ_i(k))],

where θ̂_i = [â_i, b̂_i]^T is an estimate of θ_i and P_i is the covariance matrix. γ_i is called the forgetting factor and satisfies 0 < γ_i ≤ 1.
To experimentally identify the parameters a_i and b_i, i = 1, 2, 3, 4, the RLSM was configured with appropriate values of T_s, γ_i, and P_i(0). Figure 6 shows the time evolution of the estimates â_1 and b̂_1 corresponding to the base actuator. It is shown that the estimates converge in approximately 1 s. Table 2 shows the estimated parameters of each actuator and its corresponding joint.
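The RLSM recursion (estimate update plus covariance update with forgetting factor) can be sketched in a few lines. The example below runs the recursion on synthetic, noiseless regressor data; the true parameter values are illustrative stand-ins, not the identified values of Table 2.

```python
import random

def rls_step(theta, P, psi, z, gamma=0.98):
    """One recursive-least-squares update with forgetting factor gamma."""
    # gain direction: P * psi, and the normalising denominator
    Ppsi = [sum(P[i][j] * psi[j] for j in range(2)) for i in range(2)]
    denom = gamma + sum(psi[i] * Ppsi[i] for i in range(2))
    err = z - sum(psi[i] * theta[i] for i in range(2))   # prediction error
    theta = [theta[i] + Ppsi[i] * err / denom for i in range(2)]
    P = [[(P[i][j] - Ppsi[i] * Ppsi[j] / denom) / gamma
          for j in range(2)] for i in range(2)]
    return theta, P

# Identify theta = [a, b] from synthetic regressions z = psi^T theta
random.seed(1)
true_theta = [4.0, 2.5]                       # illustrative stand-ins for a_i, b_i
theta, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for _ in range(500):
    psi = [random.uniform(-1, 1), random.uniform(-1, 1)]
    z = psi[0] * true_theta[0] + psi[1] * true_theta[1]
    theta, P = rls_step(theta, P, psi, z)
```

With persistently exciting, noise-free data the estimates converge to the true values, mirroring the roughly 1 s convergence observed in Figure 6.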

Robot Control
A PID controller is used to regulate the position of the actuators. This controller is a modification of the basic Proportional Integral Derivative (PID) controller, and it is employed to avoid the set-point kick phenomenon, which consists of abrupt changes in the control signal due to sudden changes in the reference input [33]. The PID controller is given by [33]:

u_i(t) = k_Pi e_i(t) + k_Ii ∫_0^t e_i(σ) dσ − k_Di q̇_i(t).   (14)

Note that the derivative action is applied only to the output signal q_i(t). Here, e_i(t) is the position error of the ith joint, defined as e_i(t) = q_di − q_i, where q_di is the desired position of the ith joint. Moreover, k_Pi, k_Ii, and k_Di are, respectively, the proportional, integral, and derivative gains of the ith position controller. The Routh-Hurwitz stability criterion [34] allows determining the following ranges of the gains k_Pi, k_Ii, and k_Di that guarantee a stable closed-loop system:

k_Pi > 0,   k_Di > −a_i/b_i,   0 < k_Ii < k_Pi (a_i + b_i k_Di).   (15)
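Under the reduced model q̈_i + a_i q̇_i = b_i u_i, the closed-loop characteristic polynomial of this PID law is s³ + (a_i + b_i k_Di)s² + b_i k_Pi s + b_i k_Ii, so the Routh-Hurwitz conditions can be checked mechanically. A minimal sketch, with illustrative parameter values (not the identified ones):

```python
def pid_gains_stable(a, b, kP, kI, kD):
    """Routh-Hurwitz test for s^3 + (a + b*kD)*s^2 + b*kP*s + b*kI.

    Stable iff all coefficients are positive and c2*c1 > c0."""
    c2, c1, c0 = a + b * kD, b * kP, b * kI
    return c2 > 0 and c1 > 0 and c0 > 0 and c2 * c1 > c0
```

For example, with a = 4, b = 2.5 the gain set (kP, kI, kD) = (10, 5, 1) passes the test, while raising kI to 700 violates the c2*c1 > c0 condition.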
Since the nominal values a_i and b_i of each motor are not available, the estimates â_i and b̂_i produced by the RLSM are used in Equation (15).
In order for the integral term of the PID controller not to cause a slow transient position response due to the voltage saturation of the actuators, this term is implemented using the anti-windup compensation scheme of Figure 7, where K_ai is the anti-windup gain. In this figure, u_min and u_max denote the minimum and maximum voltages of the actuators, respectively. The PID controller with anti-windup requires the velocities of the robot actuators, which are not available; however, these signals are estimated by means of a state observer, as described below.
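A discrete-time sketch of this loop is given below, assuming the back-calculation form of anti-windup suggested by Figure 7 and illustrative gains (the actual gains are those of Table 3). For brevity, the velocity is taken directly from the simulated model rather than from the observer.

```python
def simulate(a=4.0, b=2.5, kP=12.0, kI=8.0, kD=1.0, kA=2.0,
             qd=1.0, umax=24.0, dt=0.001, T=8.0):
    """Step response of the reduced joint model with an anti-windup PID.

    Back-calculation: the integrator input also receives kA*(u_sat - u),
    which bleeds off the integral state while the actuator saturates.
    All numeric values are illustrative stand-ins."""
    q = dq = integ = 0.0
    for _ in range(int(T / dt)):
        e = qd - q
        u = kP * e + kI * integ - kD * dq       # derivative acts on the output only
        u_sat = max(-umax, min(umax, u))        # +/- 24 V actuator limits
        integ += (e + kA * (u_sat - u)) * dt    # anti-windup correction
        dq_next = dq + (-a * dq + b * u_sat) * dt   # reduced model, Eq. (10)
        q += dq * dt
        dq = dq_next
    return q
```

A small step (qd = 1 rad) never saturates the 24 V limit, while a large one (qd = 5 rad) does; in both cases the joint settles at the reference without the slow overshoot that an uncompensated integrator would produce.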

State Observer
The model of an actuator in Equation (10) can be written as the following state-space equation:

ẋ_i = A_i x_i + B_i u_i,    y_i = C x_i,

where x_i = [x_1, x_2]^T = [q_i, q̇_i]^T, A_i = [0 1; 0 −a_i], B_i = [0 b_i]^T, and C = [1 0]. To estimate the speed q̇_i of the ith motor, a Luenberger observer is programmed, whose mathematical model is given by [35]:

d x̂_i/dt = A_i x̂_i + B_i u_i + L_i (y_i − C x̂_i),

where x̂_i = [x̂_1, x̂_2]^T is the estimated state, L_i = [l_1i l_2i]^T is the observer gain, and matrices A_i and B_i are constituted with the parameter estimates â_i and b̂_i, respectively. Note that x̂_2 is the estimated velocity employed by the PID controller. Table 3 presents the gains of the PID controllers with anti-windup, as well as the gains of the state observer corresponding to each motor. On the other hand, Figure 8 shows the coupling of the state observer with the PID controller, whose gains k_Pi, k_Ii, k_Di, and K_ai are tuned so that the joint response q_i under a step input is sufficiently fast and damped. Moreover, the integral action of the controllers of the shoulder and elbow actuators allows counteracting the gravity forces of the links connected to these joints. Likewise, the observer gains are selected to produce an observer dynamics with both poles equal to −6. The next section describes the trajectory of the end-effector to reach, take, and release an object in the robot workspace; this planned trajectory generates the reference inputs q_di to the PID controllers of the actuators, which ensure that the robot executes the desired motion.

Trajectory Planning
Figure 9 shows a flowchart representing the trajectory planning in the Cartesian workspace of the manipulator. The path-planning algorithm consists of a sequence of points along the path, denoted A-D. Point A is the robot initial position, Point B is a position above the object, Point C is the object position, and Point D is the position where the manipulator deposits the object, as illustrated in Figure 10. Through the sequence of points A-B-C-B-A-D, the robot collects an object and deposits it in a container, avoiding collisions with other objects.
To execute the planned trajectory, it is necessary to resort to the robot inverse kinematics in order to convert the Cartesian Points A-D into the joint references q_di provided to the PID controllers of the actuators. Due to the high gear reduction ratios of the dc motors, the movement from one point to another is smooth. To detect the position and shape of the objects gripped by the robot, it is provided with artificial vision, as described below.
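Before moving on, the Luenberger observer described above can be exercised in simulation. The sketch below places both observer poles at −6, as in the design, using illustrative values of a_i and b_i (the identified values of Table 2 are not reproduced here); the observer starts with a deliberate state mismatch and its velocity estimate converges to the true one.

```python
import math

def run_observer(a=4.0, b=2.5, dt=0.0005, T=4.0):
    """Euler simulation of the motor and its Luenberger observer.

    Gains from (s + 6)^2 = s^2 + (l1 + a)s + (a*l1 + l2):
    l1 = 12 - a, l2 = 36 - a*l1.  Parameter values are illustrative."""
    l1, l2 = 12.0 - a, 36.0 - a * (12.0 - a)
    q, dq = 0.0, 0.0          # true motor state (position, velocity)
    qh, dqh = 0.3, 1.0        # observer starts with a deliberate mismatch
    for k in range(int(T / dt)):
        u = 5.0 * math.sin(math.pi * k * dt)   # excitation voltage
        e = q - qh                             # output (position) error
        q_n = q + dq * dt                      # plant, Eq. (10)
        dq_n = dq + (-a * dq + b * u) * dt
        qh_n = qh + (dqh + l1 * e) * dt        # observer copy + correction
        dqh_n = dqh + (-a * dqh + b * u + l2 * e) * dt
        q, dq, qh, dqh = q_n, dq_n, qh_n, dqh_n
    return dq, dqh
```

Because plant and observer share the same input and discretization, the estimation error obeys the discretized error dynamics alone and decays at the rate set by the poles at −6.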

Artificial Vision
The robot has artificial vision through a webcam located at a height of 700 mm above the robot base. The Image Acquisition Toolbox of Simulink is used to provide the artificial vision to the robot, and students can use this powerful tool for image processing, image segmentation [36], image enhancement, visual perception [37], recognition of 3D objects [36], human-like visual-attention-based artificial vision [38], visual SLAM [39], feature extraction [40], and noise reduction, to mention a few. Camera images are acquired using the From Video Device (FVD) block of this toolbox, which extracts their RGB values into a matrix that can be processed by the wide set of matrix operators and functions of MATLAB [41]. Images are acquired at five frames per second (FPS). The image of a red circle in the robot workspace is shown in Figure 11.

Color Detection
The FVD block is configured to visualize only objects within the purple rectangle of Figure 5. For this purpose, a resolution of 419 × 147 pixels is used, where each pixel is equal to 0.955 mm. The RGB values obtained from an image are processed to produce a grayscale image for each RGB plane. The grayscale images of the red circle in Figure 11 are shown in Figure 12 for each RGB plane. The grayscale image of each RGB plane is filtered in order to smooth the edges of the objects using the Median Filter block of Simulink. Subsequently, a thresholding is applied to the filtered images so that the manipulator can recognize red and yellow objects. The thresholding produces a binary image, where a value of 1 means white, whereas a value of 0 means black. The thresholding employed to detect red and yellow colors is represented by means of the following expression:

f_th(p_r, p_g, p_b) = 1 if p_j^inf ≤ p_j ≤ p_j^sup for all j = r, g, b;  f_th(p_r, p_g, p_b) = 0 otherwise,

where f_th(p_r, p_g, p_b) is the output of the thresholding and p_j, j = r, g, b, represents a pixel in the red, green, and blue planes, respectively. The upper and lower limits of p_j are written, respectively, as p_j^sup and p_j^inf, whose values in Table 4 permit detecting red and yellow colors. Table 4. Thresholding values to detect red and yellow colors.

Detected Color | Limits for the Red-Plane Grayscale Image | Limits for the Green-Plane Grayscale Image | Limits for the Blue-Plane Grayscale Image

Finally, a morphological operation, executed with the Erosion and Dilation Simulink blocks, permits smoothing the edges of the resulting binary image, thus producing Figure 13.
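The per-plane band thresholding described above can be reproduced with a few NumPy operations. The limits used here are illustrative, not the values of Table 4.

```python
import numpy as np

def threshold_mask(rgb, lo, hi):
    """Binary mask: 1 where every plane lies inside its [lo, hi] band.

    rgb: H x W x 3 uint8 array; lo/hi: per-plane limits (illustrative
    values, not the paper's Table 4 limits)."""
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((rgb >= lo) & (rgb <= hi), axis=-1).astype(np.uint8)

# A 4x4 black image with a 2x2 reddish patch
img = np.zeros((4, 4, 3), np.uint8)
img[1:3, 1:3] = (200, 30, 30)
mask = threshold_mask(img, (150, 0, 0), (255, 90, 90))
```

The mask is 1 exactly on the four patch pixels, which is the binary image that the Erosion and Dilation blocks would then smooth.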

Shape Detection
To determine the object shape, its area A, perimeter P, compaction C, and centroid C_e are calculated. These operations, except for the compaction, are performed by the Blob Analysis block of Simulink. The object compaction C is defined as [42-44]:

C = P²/A.

The manipulator collects circular and square objects, whose compactions are given by:

C_circle = (2πR)²/(πR²) = 4π,    C_square = (4L)²/L² = 16,

where L is the length of the sides of the square and R is the radius of the circle. Figure 14 briefly describes the artificial vision process and its interaction with the robot motion control.
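Since the ideal compactions 4π ≈ 12.57 and 16 do not depend on the object size, a blob can be classified by comparing its measured compaction against these two constants. A minimal sketch (the tolerance is illustrative):

```python
import math

def classify(perimeter, area, tol=1.5):
    """Classify a blob by its compaction C = P^2 / A.

    Ideal values: circle -> 4*pi (~12.57), square -> 16."""
    c = perimeter ** 2 / area
    if abs(c - 4 * math.pi) < tol:
        return "circle"
    if abs(c - 16.0) < tol:
        return "square"
    return "unknown"
```

For example, any circle (P = 2πR, A = πR²) maps to "circle" and any square (P = 4L, A = L²) maps to "square", regardless of size.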

Correction of the Object Position
The top surface of the objects, which is seen by the camera, has a height with respect to the xy plane where the objects are placed, as shown in Figure 15. For this reason, the position of an object, denoted as x_obj, does not coincide with the one provided by the camera, defined as x_cam, where both positions are measured from the point directly below the camera. The following equation is used to compute x_obj:

x_obj = x_cam (h_cam − h_obj)/h_cam,

where h_obj and h_cam are, respectively, the heights of the object and the camera with respect to the plane where the objects are placed.
It is important to mention that a similar correction of the object position is carried out on the y axis.
Figure 15. Correction of the object position due to its height.
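Assuming the similar-triangles relation x_obj = x_cam (h_cam − h_obj)/h_cam, with positions measured from the point directly below the camera, the correction reduces to one line. The numeric values in the usage are illustrative (a 55 mm object height and the 585 mm camera height over the object plane mentioned in Sections 3 and 10).

```python
def correct(x_cam, h_obj, h_cam):
    """Parallax correction by similar triangles.

    The camera ray through the object's top crosses the ground plane at
    x_cam, so the true position shrinks by (h_cam - h_obj) / h_cam."""
    return x_cam * (h_cam - h_obj) / h_cam

# e.g. a 55 mm tall cylinder seen at x_cam = 100 mm under a 585 mm camera
x_obj = correct(100.0, 55.0, 585.0)   # about 90.6 mm
```

An object of zero height needs no correction, and taller objects are pulled proportionally closer to the camera axis; the same formula is applied on the y axis.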

Graphical User Interface
A graphical user interface (GUI) was designed to facilitate the interaction between the user and the recycled manipulator. The GUIDE tool, included in MATLAB to develop high-level graphics and simple layouts, was used to design the GUI shown in Figure 16. It has buttons to place the manipulator at its initial position, to stop it in an emergency, and to select the color and shape of the objects taken by the robot. Moreover, the GUI permits visualizing the positions of the objects in the robot workspace, as well as capturing camera images.

Experimental Results
Experimental results obtained with the proposed controller and the artificial vision of the robot manipulator are presented in this section. All experiments used a sampling period T_s of 0.06 s. The aim of the manipulator was to collect four randomly arranged cylinders, on whose bases geometric figures with a circular or square shape, colored yellow or red, were attached, as shown in Figure 17. These cylinders weighed 46 g; however, the actuators of the manipulator have enough torque to move payloads of up to 1 kg [45]. The MATLAB and Simulink files of the robot control programming were uploaded to GitHub [46], including the instructions to run the proposed controller; this repository also contains videos of the experiments shown in this section. The first experiment consisted of gripping a yellow circle whose coordinates were x = 46 mm, y = 297 mm, and z = 145 mm, which were obtained with the help of the artificial vision of the robot. The trajectories of the end-effector along the x, y, and z axes, as well as its angle φ, are shown in Figure 18. In this figure, x_d, y_d, z_d, and φ_d represent the desired trajectories of the end-effector, where φ_d is fixed at −90°. The yellow circle was gripped by the robot at approximately 14 s, and it was released into a container at around 24 s. This fact is corroborated with the help of Figure 19, which shows the voltage of the force sensor placed inside the robot gripper. A voltage above 0.4 V in this sensor indicates that the object is gripped by the end-effector.
The joint trajectories q_1, q_2, q_3, and q_4 corresponding to this experiment, and their desired references q_d1, q_d2, q_d3, and q_d4, are shown in Figure 20. Note from this figure that the responses of q_1, q_2, q_3, and q_4 are overdamped. Moreover, for each desired value of q_d1, the joint q_1 has a settling time of about 1 s. On the other hand, joints q_2 and q_3 have settling times of less than 6 s. It is worth mentioning that the gains of the PID controllers for the actuators of joints q_2 and q_3 were selected so that their responses are fast enough with the least possible tracking error, despite the gravitational forces acting on them, which are considered as disturbances. Finally, note that q_4 remains close to its reference q_d4 = 0°. Figure 21 shows the velocities q̇_1, q̇_2, q̇_3, and q̇_4 estimated by the designed state observers. It can be observed that they reach velocities of up to 41.75, 23.47, 22.5, and 2.3 degrees per second, respectively. These signals are used by the proposed PID controllers of the actuators, whose control signals u_1, u_2, u_3, and u_4 can be seen in Figure 22. Note that, to reduce the tracking errors, the controllers produce signals u_1, u_2, and u_3 that reach their maximum and minimum values of 24 V and −24 V at some instants of time.
The position errors in the Cartesian space are defined as follows:

x̃ = x_d − x,   ỹ = y_d − y,   z̃ = z_d − z.

Similarly, the position error of the ith joint is defined as:

q̃_i = q_di − q_i.

Table 5 shows the position errors at the instant when the yellow circle is gripped by the robot. Moreover, this table presents the position errors obtained in the remaining three experiments, which consisted of collecting a yellow rectangle, a red circle, and a red rectangle. This table shows that the maximum error in the Cartesian coordinates is less than 6 mm, and that the maximum error in the joint space is 2.5°. Table 6 compares the position accuracy of the proposed recycled platform with respect to two platforms also based on the ED-7220C robot. The first platform, described in [45], contains the manufacturer controller of this robot, produced by the ED Corporation. The second platform, called AUTAREP, uses the controller designed in [12]. As shown in this table, the manufacturer and AUTAREP platforms have better position accuracy than the proposed recycled platform. We attribute this fact to the mechanical wear of the robot, since it has almost twenty years of service and the backlash in its gears has increased. However, the position accuracy of our proposal is adequate, since the robot gripper opens up to 60 mm and it collects cylinders with a height of 55 mm and a diameter of 40 mm. Despite this accuracy, the proposed platform has fulfilled its aim of allowing undergraduate students to experimentally validate the theory seen in robotics and automatic control courses using a low-cost controller with an open architecture. Moreover, the proposed platform has great utility for educational purposes, since its high-level programming based on MATLAB-Simulink permits students to design their own controllers in a simple way and to use several toolboxes to acquire, process, and generate signals for verifying controller performance.
Note also that this software could be used to operate the manipulator as a remote platform without attending the laboratory, thus allowing its use by students with physical disabilities. Table 6. Robot accuracy of experimental platforms based on the ED-7220C robot manipulator.

Platform | Accuracy
Proposed recycled robot | 6 mm
Manufacturer of the ED-7220C robot | 0.5 mm
AUTAREP | 2 mm

Conclusions
This article describes a methodology to recycle a 4-DOF educational robot with a gripper, which is used in Mechatronics Engineering courses at the Universidad de Colima. Its kinematics, dynamics, and artificial vision are presented, along with a proposed low-cost and decentralized controller for the actuators of the robot joints. Furthermore, the process of identifying the parameters of the manipulator actuators is also presented, as well as their use for designing PID controllers and implementing state observers that estimate the speed of the actuators. Experimental tests on the manipulator were executed through a proposed graphical user interface that allows selecting the shape and color of the objects gripped by the manipulator. Experiments with the proposed controller were successful in avoiding collisions between the robot and the objects it collects. It was verified that the maximum positioning errors in the Cartesian and joint coordinates of the robot are 6 mm and 2.5°, respectively. As future work, we will add another DOF to the manipulator to produce the roll motion of its wrist so that it can pick up objects with an irregular geometry. We will also implement adaptive and robust control schemes on the platform, and we will upgrade the mechanical components of the manipulator to obtain more precise movements. In addition, we plan to recognize 3D objects with the artificial vision of the robot and to use its GUI for remote practices.