Article

Depth-Dependent Control in Vision-Sensor Space for Reconfigurable Parallel Manipulators

Arturo Franco-López, Mauro Maya, Alejandro González, Antonio Cardenas and Davide Piovesan

1 Facultad de Ingenieria, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
2 Escuela de Ingeniería y Ciencias, Tecnologico de Monterrey, Querétaro 76130, Mexico
3 Biomedical, Industrial and Systems Engineering Department, Gannon University, Erie, PA 16541, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(16), 7039; https://doi.org/10.3390/s23167039
Submission received: 21 June 2023 / Revised: 25 July 2023 / Accepted: 2 August 2023 / Published: 9 August 2023
(This article belongs to the Topic Intelligent Systems and Robotics)

Abstract

In this paper, a control approach for reconfigurable parallel robots is designed. Based on it, controls in the vision-sensor, 3D and joint spaces are designed and implemented in target tracking tasks in a novel reconfigurable delta-type parallel robot. No a priori information about the target trajectory is required. Robot reconfiguration can be used to overcome some of the limitations of parallel robots like small relative workspace or multiple singularities, at the cost of increasing the complexity of the manipulator, making its control design even more challenging. No general control methodology exists for reconfigurable parallel robots. Tracking objects with unknown trajectories is a challenging task required in many applications. Sensor-based robot control has been actively used for this type of task. However, it cannot be straightforwardly extended to reconfigurable parallel manipulators. The developed vision-sensor space control is inspired by, and can be seen as an extension of, the Velocity Linear Camera Model–Camera Space Manipulation (VLCM-CSM) methodology. Several experiments were carried out on a reconfigurable delta-type parallel robot. An average positioning error of 0.6 mm was obtained for static objectives. Tracking errors of 2.5 mm, 3.9 mm and 11.5 mm were obtained for targets moving along a linear trajectory at speeds of 6.5, 9.3 and 12.7 cm/s, respectively. The control cycle time was 16 ms. These results validate the proposed approach and improve upon previous works for non-reconfigurable robots.

1. Introduction

Parallel robots have received increased attention in recent years, and one of the most popular applications at an industrial level is pick and place based on the parallel delta robot [1]. A parallel robot can be described as a manipulator composed of a fixed and a mobile platform connected by two or more kinematic chains generating closed loops, with the end effector mounted on the mobile platform [2,3]. The payload is therefore distributed among the kinematic chains; in addition, the actuators are mounted directly on the fixed base, which means that the kinematic chains do not carry them. This results in thinner and lighter links. Parallel robots have some advantages over serial robots: higher positioning accuracy, better rigidity, greater load-capacity-to-weight ratio, higher speeds and accelerations, lower inertia, good stability, and simple inverse kinematics [3,4,5,6]. On the other hand, parallel robots have some disadvantages: a relatively small and complex workspace, often with singularities inside it, and complex geometric (position), kinematic (velocity) and dynamic models that complicate their modeling and control design [5,7,8].

1.1. Reconfiguration and Redundancy on Parallel Robots

Reconfiguration can facilitate the adaptation and optimization of parallel robots to a wide range of applications, overcoming some of their typical difficulties such as the reduced workspace. Reconfiguration of parallel robots may involve changes to the length of the arms, the orientation of the kinematic chains, or the position of the mobile platform in relation to the fixed base. These changes can be made manually or automatically using intelligent control systems. Implementing a reconfigurable robot in which the reconfiguration is performed via a mechanism that can be controlled during operation makes the robot redundant [9,10], which requires an even more complex control design.
In parallel robots, there are multiple types of redundancy, the most important being kinematic redundancy and actuator redundancy. A way to generate kinematic redundancy consists of adding active joints to the kinematic chains [11,12,13,14]. This implies that there are infinitely many combinations of joint velocities producing the same end-effector velocity. Actuator redundancy consists of having more kinematic chains between the fixed and mobile platforms than necessary. From the static point of view, for each force and torque vector at the end effector, there will be an infinite number of force/torque (wrench) vectors for the active joints [15]. For redundant robots, obtaining a single solution to the inverse kinematic problem is called solving the redundancy problem.
Redundancy has great advantages in parallel robots, such as enlarging the workspace, avoiding or reducing singularities, increasing the payload capacity, improving the dynamics of the mechanism, and eliminating motion clearance, among others [5,16,17]. However, these advantages come at a price: increased complexity of the kinematic and dynamic models, a more challenging control design, and higher energy consumption due to the extra actuators and internal forces/torques [18]. Moreover, extra actuators imply extra costs and introduce additional friction from the supplementary active joints. In summary, robot reconfiguration can be used to tackle some of the problems of parallel robots at the cost of increasing the complexity of the manipulator, resulting in a more challenging control design.

1.2. Camera Space Manipulation

Providing parallel robots with sensors that enable them to monitor their surrounding environment is another effective method of extending the range of applications for these machines. With this information, the robot is able to adapt to its environment and the events occurring in it. In particular, for an object-tracking task, it is important for a robot to be able to respond appropriately to changes in the environment. Vision systems are one of the most frequently used options to obtain information about the interaction of the robot with the environment due to their ease of operation [19]. Vision-based control is a sensor-based control strategy, strongly resembling the processes of our central nervous system [20]. Among the control schemes that use vision systems, two of the most general and popular are Visual Servoing (VS) [21,22,23] and Camera Space Manipulation (CSM) [24,25]. VS control is commonly applied for real-time control, incorporating visual information from one or more cameras in a closed-loop control cycle. It provides increased precision in manipulation tasks based on visual feedback [22,26,27]; however, it requires the Image Jacobian, which depends on the visual features used and the camera parameters (obtained through a calibration process) and can be complex and difficult to obtain [23]. In its first applications, the CSM methodology [24] and its variants, such as Linear Camera Model–Camera Space Manipulation (LCM-CSM) [28], consisted of an open-loop, position-level control. One of their most interesting features is that it is not necessary to know the view parameters of the camera a priori. Instead, a data capture is performed in which either a set of points in three-dimensional coordinates or a set of joint coordinates is mapped to the corresponding coordinates in image space to obtain such parameters. A practical way to accomplish this is by placing visual markers on the robot's end effector and using the geometric model of the camera to associate the coordinates of the markers with their corresponding coordinates in camera space. This circumvents the need to know a priori the Jacobian transformation of the manipulator, which can be a complex task [29,30]. With the data set and the camera model, the estimation of the so-called vision parameters is performed. This process is called "preplan". An even simpler method is to approximate the orthographic camera model with the pin-hole camera model, obtaining the LCM-CSM [28] variant, where the view parameters are estimated from a linear model. Recent work has derived a camera Jacobian matrix by differentiating the pin-hole camera model with respect to time. This allowed the development of the VLCM-CSM variant, enabling the design of velocity-based closed-loop control laws [6].
Some vision techniques have been reported in the literature to guide parallel robots through various tasks. In [29], a vision system is used to estimate the pose of the Hexa parallel robot, while [31] uses a computer vision system to develop an obstacle collision detection method for the same robot; no control description is included in either work. In [32], computer vision is used to identify the geometric parameters of a Delta parallel architecture, and regular PIDs are then used to control the robot joints. Ref. [33] uses computer vision and an unscented Kalman filter to estimate the pose of a planar 3RRR parallel robot and uses a PID control to guide the robot through a point-to-point path-tracking task. In [34,35], VS is used to control a cable-driven parallel robot; pose-based (3D and 2D) schemes are developed, as well as a mix of them (2D 1/2), and a conservative sufficient condition for stability is obtained. Ref. [6] develops the VLCM-CSM approach to control a Delta-type parallel robot in target tracking tasks. In summary, there are still few works developing vision-based control for parallel robots, and some of them use vision only to detect the end-effector pose and then guide it to complete the task. However, to the best of the authors' knowledge, no general control design approach for reconfigurable parallel robots has been reported in the literature.
This work presents a control approach for reconfigurable parallel robots. Following this approach, controls are designed and implemented in a novel reconfigurable Delta-type parallel robot [16]. In this work, the geometric and kinematic models are derived for this robot for the first time, yielding an initial contribution of this paper. Moreover, the control design in image space is based on the VLCM-CSM approach. For this, an original derivation of the image Jacobian associated with this approach is obtained, where it is shown that its only variable element is a factor depending on the depth of the observed feature. This makes this Jacobian extremely simple to implement and update on each control cycle. This simple Jacobian can be used for any variant of the VLCM-CSM methodology and constitutes a second contribution of this work. Using the aforementioned elements, a general control approach for reconfigurable parallel robots, the main contribution of this work, was developed. Based on this approach, control laws in the vision-sensor, 3D and joint spaces are designed and implemented. The controls in the 3D and joint spaces have the limitation that it is in general difficult to obtain the reference information (target position) in such spaces, particularly in the case of a target moving along an unknown trajectory. Usually this information has to be obtained through indirect estimates using the available measurements and position reconstruction models prone to errors. This limitation is intrinsic to any control design in these spaces. The control in the vision-sensor space does not require reconstruction, as this type of sensor provides direct measurements of the target in that space regardless of its trajectory; this results in a robust approach.
This article is organized as follows: Section 2 analyzes the kinematic model of the reconfigurable delta robot considered in this work. Section 3 presents an original derivation of the image Jacobian based on the VLCM-CSM methodology. In Section 4, a control approach for reconfigurable parallel robots is developed and several control laws based on this approach are synthesized for a reconfigurable delta-type parallel robot. Section 5 details the experimental setup and the experiments performed. Section 6 presents and discusses the results. Finally, Section 7 describes the authors’ conclusions of this work and outlines opportunities for future work.

2. Modeling of the Reconfigurable Delta Robot

2.1. Description of the Reconfigurable Delta Robot

Let us consider the delta robot in Figure 1; it consists of two platforms, one fixed and a mobile one, connected by three kinematic chains (numbered with index i = 1 , 2 , 3 ), evenly distributed about the vertical axis Z of the robot’s fixed frame. In this typical configuration, the fixed platform is on top of the mechanism (horizontal element), and the mobile platform (i.e., the part where the end-effector is attached) is a small, horizontal element at the bottom of the mechanism. Each chain has two links; one end of the chain is connected to the fixed platform through an actuated, rotational joint placed at point A i with joint variable θ i 1 , and the other end is connected to the mobile platform through a passive universal joint (some designs use a spherical one), at point C i . The two links on each chain are connected together at point B i through a passive universal (can be spherical) joint with (passive) joint variables θ i 2 and θ i 3 . The use of links with a parallelogram configuration, together with the robot architecture, ensures the parallelism between both platforms at all times. R is the radius of the fixed platform (i.e., the distance between the origin of the robot’s frame and point A i ), r is the radius of the moving platform, (i.e., the distance between the center of the platform and point C i ), α i is the angle between axis X of the fixed frame and each of the three kinematic chains that make up the robot, measured about the fixed Z axis; L 1 and L 2 are the length of the first and second links of the kinematic chains, respectively, and P is the 3D end-effector position expressed in the fixed frame. With the reconfiguration capability considered in this work, R becomes a (joint) variable. In Figure 2, the reconfiguration mechanism for the robot is shown. More details on the reconfiguration analysis and mechanism can be found in [16].
The following key points can be defined with respect to the robot’s fixed reference frame:
$$A_i = \begin{bmatrix} R\cos(\alpha_i) \\ R\sin(\alpha_i) \\ 0 \end{bmatrix}$$
$$B_i = \begin{bmatrix} (R + L_1\cos(\theta_{i1}))\cos(\alpha_i) \\ (R + L_1\cos(\theta_{i1}))\sin(\alpha_i) \\ -L_1\sin(\theta_{i1}) \end{bmatrix}$$
$$C_i = \begin{bmatrix} P_x + r\cos(\alpha_i) \\ P_y + r\sin(\alpha_i) \\ P_z \end{bmatrix}$$
$$P = \begin{bmatrix} P_x \\ P_y \\ P_z \end{bmatrix}$$

2.2. Forward Geometric Model

Based on the work reported in [16], and considering the reconfiguration, we define the active joints and Cartesian coordinate vectors of the end-effector as
$$q = \begin{bmatrix} R & \theta_{11} & \theta_{21} & \theta_{31} \end{bmatrix}^{T}$$
and
$$x = P = \begin{bmatrix} P_x & P_y & P_z \end{bmatrix}^{T}$$
respectively. Note that R is included in the vector of joint variables because the reconfiguration mechanism can be controlled together with the three kinematic chains. On this basis, the reconfigurable robot is then modeled as a redundant robot with four control inputs.
A common way to solve the direct kinematics problem for the delta robot is to solve the intersection of three spheres centered at points B ˜ i defined as
$$\tilde{B}_i = \begin{bmatrix} B_{ix} - r\cos(\alpha_i) \\ B_{iy} - r\sin(\alpha_i) \\ B_{iz} \end{bmatrix}$$
and radii $L_2$. The spheres are associated with the free motion of points $C_i$ with respect to points $B_i$ and obviously do not intersect at $P$; however, by horizontally shifting the spheres (their centers) a distance $r$ towards the fixed axis $Z$, the shifted spheres intersect at $P$. With this, we define a sphere for each kinematic chain with center $\tilde{B}_i$, and the intersection of the three spheres with the smallest value on the $Z$ axis will be $P = x$, which is the solution to the direct kinematics problem. This can be represented by the following system of equations:
$$(\tilde{B}_{ix} - P_x)^2 + (\tilde{B}_{iy} - P_y)^2 + (\tilde{B}_{iz} - P_z)^2 = L_2^2$$
It may be observed that this approach is valid regardless of whether $R$ is constant or variable. For this reason, the solution is also valid for the reconfigurable delta robot.
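To make the sphere-intersection procedure concrete, the following minimal sketch solves the system above numerically. It is not the authors' implementation: the link lengths, radii, joint values and sign conventions are arbitrary placeholders, and SciPy (not part of the software stack reported later in the paper) is used only for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_geometric(q, L1=0.30, L2=0.60, r=0.05,
                      alphas=(0.0, 2*np.pi/3, 4*np.pi/3)):
    """Numerical forward geometric model of the reconfigurable delta robot.

    q = [R, th11, th21, th31]: reconfiguration radius plus the three actuated angles.
    Returns P = (Px, Py, Pz), the lower intersection of the three shifted spheres.
    """
    R, th = q[0], q[1:]
    # Shifted sphere centers B~_i: B_i with the moving-platform radius r subtracted
    centers = np.array([
        [(R + L1*np.cos(th[i]) - r) * np.cos(alphas[i]),
         (R + L1*np.cos(th[i]) - r) * np.sin(alphas[i]),
         -L1*np.sin(th[i])] for i in range(3)])

    def residuals(P):
        # ||P - B~_i||^2 - L2^2 = 0 for each kinematic chain
        return [np.sum((P - c)**2) - L2**2 for c in centers]

    # Start below the base so the solver converges to the lower intersection
    return least_squares(residuals, np.array([0.0, 0.0, -0.5])).x

print(forward_geometric(np.array([0.20, 0.4, 0.4, 0.4])))
```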

2.3. Forward Kinematic Model

Based on Figure 1, the following closed-loop geometric constraint system can be defined:
$$\vec{OP} + \vec{PC_i} = \vec{OA_i} + \vec{A_iB_i} + \vec{B_iC_i}$$
which has the following matrix form:
$$\begin{bmatrix} P_x\cos(\alpha_i) - P_y\sin(\alpha_i) \\ P_x\sin(\alpha_i) + P_y\cos(\alpha_i) \\ P_z \end{bmatrix} + \begin{bmatrix} r \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} R \\ 0 \\ 0 \end{bmatrix} + L_1\begin{bmatrix} \cos(\theta_{i1}) \\ 0 \\ -\sin(\theta_{i1}) \end{bmatrix} + L_2\begin{bmatrix} \sin(\theta_{i3})\cos(\theta_{i2}+\theta_{i1}) \\ \cos(\theta_{i3}) \\ -\sin(\theta_{i3})\sin(\theta_{i2}+\theta_{i1}) \end{bmatrix}$$
This can be rewritten as
$$x_i + r = R + L_{1i} + L_{2i}$$
Deriving with respect to time, we obtain the following result:
$$\dot{x}_i = \dot{R} + \dot{L}_{1i} + \dot{L}_{2i}$$
where, for the reconfigurable robot, $\dot{R}$ is in general nonzero, while for the robot without reconfiguration the derivative of $R$ is zero.
Notice that the links can only move around a pivot; hence, we can simplify the previous equation in the following form:
$$\dot{x}_i = \dot{R} + \omega_{L_{1i}} \times L_{1i} + \omega_{L_{2i}} \times L_{2i}$$
To further simplify this expression, it is convenient to write it in a reference frame located at $B_i$ and obtain an expression that depends only on the joint variables. To accomplish this, we take the dot product of the above equation with the unit vector $\hat{L}_{2i}$, obtaining
$$\hat{L}_{2i} \cdot \dot{x}_i = \hat{L}_{2i} \cdot \dot{R} + \hat{L}_{2i} \cdot \left(\omega_{L_{1i}} \times L_{1i}\right)$$
Substituting Equation (10) in the left side of the above equation, we have:
$$\hat{L}_{2i} \cdot \dot{x}_i = \sin\theta_{i3}\cos(\theta_{i2}+\theta_{i1})\left(\dot{P}_x\cos\alpha_i - \dot{P}_y\sin\alpha_i\right) + \cos\theta_{i3}\left(\dot{P}_x\sin\alpha_i + \dot{P}_y\cos\alpha_i\right) - \sin\theta_{i3}\sin(\theta_{i2}+\theta_{i1})\,\dot{P}_z = a_{ix}\dot{P}_x + a_{iy}\dot{P}_y + a_{iz}\dot{P}_z$$
where
$$a_{ix} = \sin\theta_{i3}\cos(\theta_{i2}+\theta_{i1})\cos\alpha_i + \cos\theta_{i3}\sin\alpha_i$$
$$a_{iy} = -\sin\theta_{i3}\cos(\theta_{i2}+\theta_{i1})\sin\alpha_i + \cos\theta_{i3}\cos\alpha_i$$
$$a_{iz} = -\sin\theta_{i3}\sin(\theta_{i2}+\theta_{i1})$$
Substituting the right side of Equation (14), we have
$$\hat{L}_{2i} \cdot \left(\omega_{L_{1i}} \times L_{1i}\right) = -L_1\sin\theta_{i2}\sin\theta_{i3}\,\dot{\theta}_{i1}$$
and
$$\hat{L}_{2i} \cdot \dot{R} = \dot{R}\,\sin\theta_{i3}\cos(\theta_{i2}+\theta_{i1})$$
in such a way that
$$\hat{L}_{2i} \cdot \dot{R} + \hat{L}_{2i} \cdot \left(\omega_{L_{1i}} \times L_{1i}\right) = \sin\theta_{i3}\cos(\theta_{i2}+\theta_{i1})\,\dot{R} - L_1\sin\theta_{i2}\sin\theta_{i3}\,\dot{\theta}_{i1}$$
Substituting Equations (16) and (17) for each $i$, we have
$$a_{1x}\dot{P}_x + a_{1y}\dot{P}_y + a_{1z}\dot{P}_z = \sin(\theta_{13})\cos(\theta_{12}+\theta_{11})\,\dot{R} - L_1\sin(\theta_{12})\sin(\theta_{13})\,\dot{\theta}_{11}$$
$$a_{2x}\dot{P}_x + a_{2y}\dot{P}_y + a_{2z}\dot{P}_z = \sin(\theta_{23})\cos(\theta_{22}+\theta_{21})\,\dot{R} - L_1\sin(\theta_{22})\sin(\theta_{23})\,\dot{\theta}_{21}$$
$$a_{3x}\dot{P}_x + a_{3y}\dot{P}_y + a_{3z}\dot{P}_z = \sin(\theta_{33})\cos(\theta_{32}+\theta_{31})\,\dot{R} - L_1\sin(\theta_{32})\sin(\theta_{33})\,\dot{\theta}_{31}$$
which can be rewritten in matrix form as:
$$J_x\,\dot{x} = J_q\,\dot{q}$$
where
$$J_x = \begin{bmatrix} a_{1x} & a_{1y} & a_{1z} \\ a_{2x} & a_{2y} & a_{2z} \\ a_{3x} & a_{3y} & a_{3z} \end{bmatrix}$$
is a Jacobian matrix that associates each point $B_i$ with the workspace, and where
$$J_q = \begin{bmatrix} b_{11} & b_{12} & 0 & 0 \\ b_{21} & 0 & b_{23} & 0 \\ b_{31} & 0 & 0 & b_{34} \end{bmatrix}$$
with
$$b_{i1} = \sin(\theta_{i3})\cos(\theta_{i2}+\theta_{i1})$$
$$b_{i,i+1} = -L_1\sin(\theta_{i2})\sin(\theta_{i3})$$
is a Jacobian matrix that associates the joint space with each $B_i$.
When the robot is not reconfigured, these Jacobians are square, of dimension $3 \times 3$. The new joint variable $R$ is reflected in the Jacobian $J_q$, which becomes a non-square matrix of dimension $3 \times 4$, while the matrix $J_x$ is not affected and remains of dimension $3 \times 3$.
The following forward kinematic model is obtained from Equation (18):
$$\dot{x} = J\,\dot{q}$$
where $J = J_x^{-1} J_q$ is the Jacobian matrix of the robot.
The solution to the inverse kinematics problem is the solution to the redundancy problem.
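The sketch below assembles $J_x$, $J_q$ and $J = J_x^{-1} J_q$ following the coefficient expressions reconstructed above; the sign conventions and the placeholder inputs should be checked against the actual robot, so treat it as an illustration rather than a verified implementation.

```python
import numpy as np

def robot_jacobian(theta, alphas, L1):
    """Build J_x (3x3), J_q (3x4) and J = inv(J_x) @ J_q.

    theta[i] = (th_i1, th_i2, th_i3) for chain i; q = [R, th_11, th_21, th_31].
    """
    Jx = np.zeros((3, 3))
    Jq = np.zeros((3, 4))
    for i, (t1, t2, t3) in enumerate(theta):
        a = alphas[i]
        Jx[i] = [np.sin(t3)*np.cos(t2 + t1)*np.cos(a) + np.cos(t3)*np.sin(a),
                 -np.sin(t3)*np.cos(t2 + t1)*np.sin(a) + np.cos(t3)*np.cos(a),
                 -np.sin(t3)*np.sin(t2 + t1)]
        Jq[i, 0] = np.sin(t3)*np.cos(t2 + t1)        # b_i1, multiplies R_dot
        Jq[i, i + 1] = -L1*np.sin(t2)*np.sin(t3)     # b_{i,i+1}, multiplies th_i1_dot
    J = np.linalg.inv(Jx) @ Jq                       # x_dot = J q_dot
    return Jx, Jq, J
```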

2.4. Resolution of the Redundancy Problem

In the case of kinematic redundancy in parallel robots, the inverse kinematics problem has an infinite number of solutions. Solving the redundancy problem consists of obtaining a single solution to the inverse kinematics. One of the most common ways to obtain the inverse kinematic model is to use the following solution [12]:
$$\dot{q} = J^{+}\dot{x} - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q}$$
where $J^{+}$ is the Moore–Penrose pseudo-inverse of $J$, $\mu$ is a performance index to be minimized in order to perform a secondary task, and the matrix $I - J^{+}J$ maps onto the null space of $J$. The parameter $\gamma$ is a scaling factor that changes the priority of the secondary task relative to the first. An example of such a secondary task is the minimization of the following cost function:
$$\mu = \frac{1}{2}\sum_{i=1}^{n}\left(\frac{q_i - q_{i,mid}}{q_{i,max} - q_{i,min}}\right)^{2}$$
where $q_{i,mid}$ is the mid-range value of the $i$-th active joint, and $q_{i,max}$ and $q_{i,min}$ are the maximum and minimum values that the $i$-th active joint can reach.
This cost function allows us to keep the active joints away from their joint limits and thus ensure safe operation. Yet, there may be positions reachable by the unconstrained robot but not by the robot with the actual joint limits. The gradient of (23) is given by
$$\frac{\partial\mu}{\partial q_i} = \frac{q_i - q_{i,mid}}{\left(q_{i,max} - q_{i,min}\right)^{2}}$$
This solution for inverse kinematics, including the aforementioned secondary task, is used in the control laws to be defined.
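A minimal sketch of this inverse kinematic solution with NumPy, combining the Moore–Penrose pseudo-inverse, the null-space projector and the joint-limit gradient of Equations (22)–(24); the gain $\gamma$ and the joint limits are placeholder values.

```python
import numpy as np

def redundancy_resolution(J, x_dot, q, q_min, q_max, gamma=0.5):
    """q_dot = J^+ x_dot - gamma (I - J^+ J) dmu/dq  (gradient projection)."""
    J_pinv = np.linalg.pinv(J)                       # Moore-Penrose pseudo-inverse
    q_mid = 0.5 * (q_max + q_min)
    dmu_dq = (q - q_mid) / (q_max - q_min)**2        # gradient of the joint-limit cost
    null_proj = np.eye(J.shape[1]) - J_pinv @ J      # projector onto the null space of J
    return J_pinv @ x_dot - gamma * null_proj @ dmu_dq
```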

2.5. Inverse Geometric Model

Two algorithms are proposed to solve the inverse geometric model to obtain a single solution. The first algorithm is based on the resolution of the inverse geometric model of the original delta robot, as it can be found in [16]. This algorithm consists of the following steps:
(i)
Solve the inverse kinematics for a specified value of R as if it were the original delta robot.
(ii)
Calculate the Jacobian matrix of the reconfigurable delta robot.
(iii)
Calculate the condition number and compare it with the smallest condition number found so far. If the current one is smaller, the current joint values become the candidate solution.
(iv)
Repeat the process iteratively varying R over its entire range of motion, i.e., from 85 mm to 500 mm.
(v)
The solution with the smallest condition number is obtained at the end of all iterations.
The advantages of this algorithm are that it is easy to implement and guarantees that the smallest condition number is found. As a disadvantage, it has a high computational cost and may not be suitable for applications in real-time control laws. The second strategy to solve the inverse geometric model is to use an algorithm based on an inverse kinematic model in a numerical integration process. In such an algorithm, the desired value of $x$ and the initial condition of the joints are provided, and at the end of the iterative process a solution for that $x$ is obtained.
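The first algorithm can be sketched as a simple sweep over $R$. The step size is arbitrary, and `ik_delta` and `jacobian` are hypothetical helpers standing in for the non-reconfigurable delta inverse kinematics of [16] and the Jacobian of Section 2.3.

```python
import numpy as np

def inverse_geometric_sweep(x_des, ik_delta, jacobian, R_range=(85.0, 500.0), step=1.0):
    """Algorithm 1: choose the R (and joint angles) minimizing cond(J) for a desired pose.

    ik_delta(x_des, R) -> (th11, th21, th31) or None if the pose is unreachable;
    jacobian(R, thetas) -> Jacobian matrix of the reconfigurable robot.
    """
    best = None
    for R in np.arange(R_range[0], R_range[1] + step, step):   # (iv) sweep R over its range
        thetas = ik_delta(x_des, R)                             # (i) delta IK for this R
        if thetas is None:
            continue
        kappa = np.linalg.cond(jacobian(R, thetas))             # (ii) condition number
        if best is None or kappa < best[0]:                     # (iii) keep the smallest one
            best = (kappa, np.array([R, *thetas]))
    return best[1] if best is not None else None                # (v) best joint values
```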

3. Image Jacobian

In this section, we present the development of a linear camera model based on the pinhole camera model [28], and develop an image Jacobian matrix that relates the velocities of the end effector in three-dimensional space to the velocities in camera space, from the derivative with respect to time of the linear camera model.
We start from the pinhole camera model shown in Figure 3 where we can see point P i is projected onto the image plane through a segment passing via the origin of the camera coordinate frame ( X , Y , Z ) . This way, we can relate one point in three-dimensional space to another in image space. To mathematically model such a projection, we first pose a homogeneous transformation from the global reference frame ( x , y , z ) to the camera reference frame ( X , Y , Z ) as follows:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} R_{11} & R_{12} & R_{13} & X_0 \\ R_{21} & R_{22} & R_{23} & Y_0 \\ R_{31} & R_{32} & R_{33} & Z_0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = H\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Assuming that the camera is fixed, the matrix $H$ is constant. With the pinhole camera model, it is known that the projection of a point represented in the frame $(X, Y, Z)$ onto the frame $(x_c, y_c)$ in the image plane is as follows:
$$x_c = \frac{fX}{Z}, \qquad y_c = \frac{fY}{Z}$$
where f is the focal length. The latter can be written in matrix form as:
$$\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} = \frac{1}{Z}\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \frac{1}{Z}\,F\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
where
$$F = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Assuming that the focal length is constant, then F is constant.
The coordinates in the frame $(x_c, y_c)$ are referred to the center of the image plane and are given in length units. Another mapping is needed to move the points to the frame typically used in digital imaging, that is, a $(u, v)$ frame in the upper left corner of the image plane in which the direction of the $u$ axis coincides with the direction of the $x_c$ axis, while the direction of the $v$ axis is opposite to that of the $y_c$ axis. In addition, we switch from length units to pixel units via the following homogeneous transformation:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} m_x & 0 & p_x \\ 0 & m_y & p_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} = K\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}$$
where $m_x$, $m_y$ represent the number of pixels per unit distance, and $p_x = c_x m_x$ and $p_y = c_y m_y$ are the coordinates of the center of the image plane in pixel units. All these parameters are considered constant for a given camera, so the matrix $K$ is constant.
We can perform the composition of all these homogeneous transformations to establish a direct transformation between the frame ( x , y , z ) and the frame ( u , v ) in the following way:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z}\,K\,F\,H\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
Developing the multiplication of matrices, we obtain
$$Z\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
where
$$c_{1j} = m_x\left(f R_{1j} + p_x R_{3j}\right)$$
$$c_{14} = m_x\left(f X_0 + p_x Z_0\right)$$
$$c_{2j} = m_y\left(f R_{2j} + p_y R_{3j}\right)$$
$$c_{24} = m_y\left(f Y_0 + p_y Z_0\right)$$
$$c_{3j} = R_{3j}$$
$$c_{34} = Z_0$$
with j = 1 , 2 , 3 .
Since this mapping is composed of eleven independent parameters, we can use one of the terms of the resulting matrix as a scaling factor and divide the equation by that element. The element taken is the last element of the matrix, i.e., Z 0 . With this, we obtain the following expression:
$$\frac{Z}{Z_0}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{Z_0}\begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
which can be expressed as
$$\rho\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
where
$$P = \frac{1}{Z_0}\begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{14} \\ c_{21} & c_{22} & c_{23} & c_{24} \\ c_{31} & c_{32} & c_{33} & c_{34} \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & 1 \end{bmatrix}$$
and
$$\rho = \frac{Z}{Z_0}$$
From the third row of (31), another definition of ρ can be extracted as follows:
$$\rho = P_{31}x + P_{32}y + P_{33}z + 1$$
Note that $P$ in (32) is constant and independent of the depth scale factor $\rho$, which yields the following relationship, which we call the linear camera model:
$$\rho\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} P_{11} & P_{12} & P_{13} & P_{14} \\ P_{21} & P_{22} & P_{23} & P_{24} \\ P_{31} & P_{32} & P_{33} & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$
From this result, the image Jacobian can be developed. To accomplish this, we can recast Equation (35), starting by developing the product of $\rho$ with the image coordinates, obtaining
$$u = \frac{P_{11}x + P_{12}y + P_{13}z + P_{14}}{P_{31}x + P_{32}y + P_{33}z + 1}$$
and
$$v = \frac{P_{21}x + P_{22}y + P_{23}z + P_{24}}{P_{31}x + P_{32}y + P_{33}z + 1}$$
which, when restructured in matrix form, provides:
$$\begin{bmatrix} u - P_{14} \\ v - P_{24} \end{bmatrix} = \begin{bmatrix} P_{11} - P_{31}u & P_{12} - P_{32}u & P_{13} - P_{33}u \\ P_{21} - P_{31}v & P_{22} - P_{32}v & P_{23} - P_{33}v \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
or, in contracted form:
$$s(t) = P(s(t))\,x(t)$$
where
$$s(t) = \begin{bmatrix} u - P_{14} \\ v - P_{24} \end{bmatrix},$$
and
$$P(s(t)) = \begin{bmatrix} P_{11} - P_{31}u & P_{12} - P_{32}u & P_{13} - P_{33}u \\ P_{21} - P_{31}v & P_{22} - P_{32}v & P_{23} - P_{33}v \end{bmatrix}.$$
The vectors $s(t)$ and $x(t)$ represent the coordinates in image space (after subtracting $(P_{14}, P_{24})$) and in Cartesian space, respectively. Both vectors are functions of time, since they represent the position of a target that can move in three-dimensional space at any time. Differentiating Equation (37) with respect to time, we obtain
$$\dot{s}(t) = \dot{P}(s(t))\,x(t) + P(s(t))\,\dot{x}(t)$$
where
$$x(t) = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \qquad \dot{s}(t) = \begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix}, \qquad \dot{x}(t) = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix}, \qquad \dot{P}(s(t)) = \begin{bmatrix} -P_{31}\dot{u} & -P_{32}\dot{u} & -P_{33}\dot{u} \\ -P_{31}\dot{v} & -P_{32}\dot{v} & -P_{33}\dot{v} \end{bmatrix}.$$
The term P ˙ ( s ( t ) ) x ( t ) in (38) can be developed as
$$\dot{P}(s(t))\,x(t) = \begin{bmatrix} \dot{u}\left(-P_{31}x - P_{32}y - P_{33}z\right) \\ \dot{v}\left(-P_{31}x - P_{32}y - P_{33}z\right) \end{bmatrix} = -\left(P_{31}x + P_{32}y + P_{33}z\right)\begin{bmatrix} \dot{u} \\ \dot{v} \end{bmatrix}$$
and finally
$$\dot{P}(s(t))\,x(t) = \left(1 - \rho\right)\dot{s}(t).$$
Hence, we can restate (38) as follows:
$$\dot{s}(t) = \left(1 - \rho\right)\dot{s}(t) + P(s(t))\,\dot{x}(t)$$
$$\rho\,\dot{s}(t) = P(s(t))\,\dot{x}(t)$$
$$\dot{s}(t) = \frac{1}{\rho}\,P(s(t))\,\dot{x}(t)$$
$$\dot{s}(t) = J_{csm}(s(t), x(t))\,\dot{x}(t)$$
where
$$J_{csm}(s(t), x(t)) = \frac{1}{\rho(x(t))}\,P(s(t))$$
Thus, a direct relationship between the velocities in camera space and the velocities in three-dimensional space is obtained for each camera. This is of great value in the control context, since with this relation between velocities it is possible to define closed-loop visual control laws using error signals defined as a function of the coordinates in image space. In order to establish an injective mapping in (40), a minimum of two cameras is required [6]. It can be observed that the image Jacobian $J_{csm}$ in (41) is composed of the scale factor $\rho$ and the mapping $P$, which contains the vision parameters. In previous works [6,28,36], this $\rho$ factor was assumed to be constant and equal to 1 (embedded in the vision parameters). While this approximation works locally, little is known about the size of its region of validity. Moreover, the vision parameters need to be recalculated locally (refined) at each control iteration. In contrast, with the formulation presented here, the (constant) vision parameters need only be determined once, and the variable depth factor $\rho$ can be obtained in several ways (for example, by using the robot's geometric model when the robot is close to the target, or by using 3D reconstruction from visual information, which can be as simple as a linear approximation). This not only reduces the computational cost of the calculation but also, and more importantly, makes the model globally valid, which is paramount from the automatic control point of view.
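As a small illustration of how cheap the per-cycle update is, the sketch below evaluates $\rho$ from Equation (34) and the image Jacobian of Equation (41) for one camera, given the constant vision-parameter matrix $P$ and an estimate of the 3D position of the observed feature; the variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def image_jacobian(P, x):
    """J_csm = (1/rho) * P(s) for one camera (Eq. (41)).

    P : constant 3x4 vision-parameter matrix with its last entry normalized to 1.
    x : current 3D position (x, y, z) of the observed feature.
    """
    xh = np.append(x, 1.0)                                      # homogeneous coordinates
    rho = P[2] @ xh                                             # depth factor, Eq. (34)
    u, v = (P[0] @ xh) / rho, (P[1] @ xh) / rho                 # projected image coordinates
    Ps = np.array([[P[0, 0] - P[2, 0]*u, P[0, 1] - P[2, 1]*u, P[0, 2] - P[2, 2]*u],
                   [P[1, 0] - P[2, 0]*v, P[1, 1] - P[2, 1]*v, P[1, 2] - P[2, 2]*v]])
    return Ps / rho                                             # 2x3 image Jacobian
```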

4. Control Laws for the Reconfigurable Robot

Usually, the sought behavior of most closed loop controlled systems is of the form
$$\dot{e} = G\,e$$
where $e$ is the error between the current value of the system's state vector and its reference value, and $G$ is a gain matrix. If $G$ is Hurwitz, an exponential convergence of $e$ to zero is achieved. This behavior can be achieved for the system at hand by designing the following control laws, according to the coordinates chosen to express the system behavior. The error $e$ can be defined as $e = q - q^{*}$, $e = x - x^{*}$ or $e = s - s^{*}$, where the superscript $^{*}$ denotes the reference (target) value; that is, in joint space, operational (Cartesian) space or camera space, respectively.
On this basis, and considering the joint velocities as the control input, the following control law can be defined in joint space:
$$\dot{q} = G\,e + \dot{q}^{*}$$
where $e = q - q^{*}$ and $\dot{q}^{*}$ is the reference velocity in joint space. Joint values are commonly available from the encoders with which the actuators are usually controlled. Therefore, it is natural to think of a control with an error signal in the joint space; however, the desired joint values are not always available, and obtaining them represents an additional problem, which can be addressed by using the inverse kinematic model. This, in turn, requires the knowledge of the reference position in Cartesian space, usually not straightforwardly available, and solving the redundancy problem beforehand. This problem becomes even more difficult when it comes to obtaining the reference velocity. Furthermore, these calculations are subject to the influence of model errors, which are a known source of estimation error for these elements. This constitutes an obstacle for the implementation of this control law.
On the same basis, for the dynamical system expressed in operational (Cartesian) space
$$\dot{x} = J\,\dot{q}$$
the following control law can be defined using the Jacobian and the solution of the redundancy problem:
$$\dot{q} = J^{+}\left(G\,e + \dot{x}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q}$$
where $e = x - x^{*}$ and $\dot{x}^{*}$ is the reference velocity in Cartesian space. In the case of a workspace error signal, it is common to calculate the end-effector position using the direct geometric model with the joint information. As for the target position, in general the desired position is also not known or requires indirect measurement through somewhat complex 3D reconstruction models, which are, once again, sensitive to model errors. Moreover, tracking tasks are difficult to carry out in this space, as they require the real-time determination not only of the target position but also of the target velocity. This constitutes an obstacle for the implementation of this control law.
Consider the dynamic system in camera space:
$$\dot{s}(t) = J_{csm}\,J\,\dot{q}$$
with the above conditions and using the mapping between the camera space and the operational space, the following control law can be defined:
$$\dot{q} = J^{+}J_{csm}^{+}\left(G\,e + \dot{s}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q}$$
where $e = s - s^{*}$ and $\dot{s}^{*}$ is the reference (target) velocity in camera space. Notice that both the reference position and the end-effector position in camera space are directly measured with the vision sensor, so no model is involved in acquiring them. As for the reference velocity, it can be estimated from the measurements of the reference position using a range of algorithms reported in the literature (e.g., a Kalman filter). For these reasons, camera-space control can be adapted to a wide variety of practical applications.
All of the above control laws yield the same closed-loop controlled system (42). In practice, one limitation of these control laws is the exponential decay itself, which produces high initial control signals that decrease exponentially as the robot approaches the target, slowing down the convergence. This behavior can be improved by implementing a variation of the exponential decay control law with increasing gains, as follows. We define the error dynamics as a piece-wise dynamic system where each partition has its own exponential decay dynamics. This can be written as follows:
$$\dot{e} = \begin{cases} G_0\,e & \text{if } \|e\| > d_1 \\ G_1\,e & \text{if } d_1 > \|e\| > d_2 \\ G_2\,e & \text{if } d_2 > \|e\| > d_3 \\ \;\;\vdots & \\ G_n\,e & \text{if } \|e\| < d_n \end{cases}$$
where $G_0 < G_1 < \cdots < G_n$ are matrices of the form $G = \lambda I$, where $\lambda$ is a positive scalar, and $d_1 > d_2 > \cdots > d_n > 0$ are the values of the error norm defining the partitions of the state space of the system.
In exponential decay control, as the value of $\|e\|$ decreases, the value of $\dot{e} = Ge$ decreases in the same proportion, causing a long convergence time to the target point. In the piece-wise variation, we choose the gains $G_i$ to increase in value as $\|e\|$ decreases, so that the error dynamics approaches a linear decay rate, thus improving the speed of operation. If $\Delta d = |d_i - d_{i+1}|$ with $i = 1, 2, \ldots, n-1$, we can define $G_i$ as
$$G_0 = \lambda\,I$$
and
$$G_i = \prod_{j=1}^{i}\left(1 + \frac{\Delta d}{d_1 - (j-1)\,\Delta d}\right) G_0$$
Equation (50) guarantees that the eigenvalues of the matrices $G_i$ increase, compensating for the decrease in the value of the error norm. Since all matrices $G_i$ are Hurwitz in every partition, and the union of the partitions defines the entire state space, the dynamics of system (48) is asymptotically stable. This variation can be applied to all of the above control laws.
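A small sketch of how the piece-wise gain could be selected at run time. The thresholds and the gain schedule below are arbitrary illustrative values (a simple doubling per partition), not the schedule of Equation (50).

```python
import numpy as np

def piecewise_gain(e, thresholds, gains):
    """Return the gain matrix G_i associated with the partition containing ||e||.

    thresholds = [d1, d2, ..., dn] with d1 > d2 > ... > dn > 0;
    gains      = [G0, G1, ..., Gn], one more gain than thresholds.
    """
    err = np.linalg.norm(e)
    for d, G in zip(thresholds, gains[:-1]):
        if err > d:
            return G
    return gains[-1]

# Illustrative schedule: error-norm thresholds (pixels) and gains doubling per partition
d = [20.0, 10.0, 5.0, 2.0]
gains = [lam * np.eye(2) for lam in (1.0, 2.0, 4.0, 8.0, 16.0)]
G = piecewise_gain(np.array([3.0, 1.0]), d, gains)   # selects the gain for 5.0 > ||e|| > 2.0
```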

4.1. Encoders Based Control Laws

Consider the dynamical system (44), where $\dot{q}$ can be considered as the control input. Defining the closed-loop error dynamics (48) with $e = x - x^{*}$ and using the solution to the redundancy problem (22), we obtain the following control law:
$$\dot{q} = \begin{cases} J^{+}\left(G_0\,e + \dot{x}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } \|e\| > d_1 \\ J^{+}\left(G_1\,e + \dot{x}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } d_1 > \|e\| > d_2 \\ J^{+}\left(G_2\,e + \dot{x}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } d_2 > \|e\| > d_3 \\ \;\;\vdots & \\ J^{+}\left(G_n\,e + \dot{x}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } \|e\| < d_n \end{cases}$$
Another proposed control law is based on the closed-loop error dynamics (48) with the error defined in joint values, $e = q - q^{*}$, obtaining the control law:
$$\dot{q} = \begin{cases} G_0\,e + \dot{q}^{*} & \text{if } \|e\| > d_1 \\ G_1\,e + \dot{q}^{*} & \text{if } d_1 > \|e\| > d_2 \\ G_2\,e + \dot{q}^{*} & \text{if } d_2 > \|e\| > d_3 \\ \;\;\vdots & \\ G_n\,e + \dot{q}^{*} & \text{if } \|e\| < d_n \end{cases}$$
In this case, the redundancy problem is solved before the execution of the task to obtain the value of $q^{*}$ using one of the algorithms presented in the previous section.

4.2. Control Law in Vision Sensor Space

Following the guidelines presented for the control laws based on encoder signals, and taking the dynamic system (46), where $\dot{q}$ is the control input, the resulting control law is:
$$\dot{q} = \begin{cases} J^{+}J_{csm}^{+}\left(G_0\,e + \dot{s}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } \|e\| > d_1 \\ J^{+}J_{csm}^{+}\left(G_1\,e + \dot{s}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } d_1 > \|e\| > d_2 \\ J^{+}J_{csm}^{+}\left(G_2\,e + \dot{s}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } d_2 > \|e\| > d_3 \\ \;\;\vdots & \\ J^{+}J_{csm}^{+}\left(G_n\,e + \dot{s}^{*}\right) - \gamma\left(I - J^{+}J\right)\frac{\partial\mu}{\partial q} & \text{if } \|e\| < d_n \end{cases}$$
where $e = s - s^{*}$ and $\dot{s}^{*}$ is estimated by numerical differentiation of $s^{*}$ from the measurements of the previous control cycles.
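Putting the pieces together, the following sketch shows one cycle of this control law for two stacked cameras, reusing the pseudo-inverse resolution and the image Jacobian from the previous sketches. The stacking of the two cameras into a single 4x3 Jacobian, the backward-difference velocity estimate and the negative-definite gain (so that the image-space error decays) are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

def vision_control_step(s, s_ref, s_ref_prev, dt, J, J_csm, q, q_min, q_max, gamma=0.1):
    """One cycle of the camera-space control law for two stacked cameras.

    s, s_ref : 4-vectors of end-effector / target image coordinates (u1, v1, u2, v2).
    J        : 3x4 robot Jacobian (x_dot = J q_dot); J_csm : 4x3 stacked image Jacobian.
    Returns the commanded joint velocities (R_dot, th11_dot, th21_dot, th31_dot).
    """
    G = -2.0 * np.eye(4)                               # negative-definite gain: error decays
    e = s - s_ref                                      # image-space error, directly measured
    s_ref_dot = (s_ref - s_ref_prev) / dt              # backward-difference target velocity
    x_dot_cmd = np.linalg.pinv(J_csm) @ (G @ e + s_ref_dot)   # Cartesian velocity command
    J_pinv = np.linalg.pinv(J)
    dmu_dq = (q - 0.5*(q_max + q_min)) / (q_max - q_min)**2   # joint-limit gradient, Eq. (24)
    null_proj = np.eye(4) - J_pinv @ J
    return J_pinv @ x_dot_cmd - gamma * null_proj @ dmu_dq
```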

5. Experiments

This section presents the experimental setup implemented to validate the control laws designed in the previous sections. First, a brief description of the hardware and software of the different parts of the test bench is given.

5.1. Hardware

We used a reconfigurable variant of the delta PARALLIX LKF-2040 robot. The kinematic description of the robot is presented in Section 2. The robot is driven by four Maxon 380961 motors, each of them controlled with a PICSERVO SC board. Communication with the control boards is serial over USB. The boards control the actuators using an internal PID controller and receive velocity commands. A conveyor has been placed inside the workspace of the robot to allow for the tracking of moving objects.
The vision system is composed of two Allied Vision cameras of the Alvium line with a resolution of 1296 × 964 pixels, capturing at 130 frames per second over USB 3.0. The cameras are located at a distance of approximately 1.5 m from the robot and are placed 1 m apart from each other. A personal computer with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz (6 cores, 12 threads) and 16 gigabytes of RAM was used to command the robot controller boards, run the vision system, and perform the control law calculations. Figure 4 shows a picture of the full system and Figure 5 shows a schematic of the hardware system and its workflow.

5.2. Software

The control system was programmed using Python 3.10. Numerical computations required by the control law were performed using NumPy 1.24.3 [37]. Image capture was handled via the VimbaPython 6.0 SDK provided by the camera manufacturer, while image processing used OpenCV 4.7 [38]. Finally, the PyQt 5.15.9 [39] library was used for the graphical interface and thread management. The program is composed of three main threads:
(i)
A main thread for all the cameras,
(ii)
A thread to send commands to the robot,
(iii)
A thread for the control law calculation.
The control cycle time obtained with the aforementioned set-up was 16 ms.
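A heavily simplified sketch of how one of these threads could be structured with PyQt's QThread; the class, method names and the measurement callback are illustrative and do not come from the authors' code.

```python
from PyQt5.QtCore import QThread, pyqtSignal
import numpy as np

class ControlLoopThread(QThread):
    """Control-law thread: consumes the latest image measurements, emits joint commands."""
    command_ready = pyqtSignal(object)            # carries the commanded joint velocities

    def __init__(self, get_measurement, parent=None):
        super().__init__(parent)
        self.get_measurement = get_measurement    # callable supplied by the camera thread
        self._running = True

    def run(self):
        while self._running:
            s, s_ref = self.get_measurement()     # latest image-space measurements
            q_dot = np.zeros(4)                   # placeholder: control-law computation here
            self.command_ready.emit(q_dot)        # picked up by the robot-command thread
            self.msleep(16)                       # roughly the reported 16 ms control cycle

    def stop(self):
        self._running = False
```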

5.3. Testing Configuration

First, static positioning tasks are used to validate and compare the performance of the exponential convergence control laws ((45), (43) and (47)) versus the piece-wise variants ((51), (52) and (53)) with the reference velocity equal to zero. Reference positions were distributed in the robot workspace and the robot was commanded to reach those positions using the designed control laws. For the vision-based control law, the vision parameters were initially estimated using what has been referred to as a pre-plan. During this procedure, the robot is positioned at a series of points covering its workspace. For these tests, 410 measurements were used, evenly distributed throughout the robot's workspace, with 50 mm spacing on the X and Y axes and 25 mm spacing on the Z axis.
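For reference, the pre-plan estimation can be posed as a standard linear least-squares (DLT-style) fit of the eleven parameters of the linear camera model (35) to the recorded 3D/image correspondences. The sketch below shows such a fit with NumPy; it is consistent with Equation (35) but is not necessarily the exact estimator used by the authors.

```python
import numpy as np

def estimate_vision_parameters(X, U):
    """Least-squares estimate of the 11 vision parameters of the linear camera model.

    X : (N, 3) array of 3D marker positions from the pre-plan.
    U : (N, 2) array of the matching pixel coordinates.
    Returns the 3x4 matrix P of Eq. (35), with its last entry fixed to 1.
    """
    A, b = [], []
    for (x, y, z), (u, v) in zip(X, U):
        A.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z]); b.append(u)
        A.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z]); b.append(v)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    P = np.ones((3, 4))
    P[0], P[1], P[2, :3] = p[0:4], p[4:8], p[8:11]
    return P
```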
Second, in order to fully validate the vision-based control law (53) and evaluate its performance, two types of experiments are considered:
(i)
The first set of experiments for the CSM control law consists of static positioning tasks. Four positioning tasks are performed three times each. The positions are chosen randomly inside the workspace of the robot. The maneuver is considered successful when an error vector norm of less than 2 pixels is maintained during at least 40 control cycles. This error vector contains the coordinates in the image space of the two cameras. Speed compensation is set to zero to avoid noise input due to the velocity estimator.
(ii)
The second set of experiments for the CSM control law consists of tracking an object moving on the conveyor along a linear trajectory at constant speed. No a priori information about the target trajectory is used in the experiments. The target is positioned on a conveyor belt placed within the robot's workspace. The robot first moves to the target object. The conveyor is started only once a static position has been achieved using the previously mentioned conditions. The conveyor moves at a constant speed with the robot following the object's position. The test is stopped when the object reaches the edge of the robot's workspace. The control law with speed compensation is used.
Figure 6 shows the general operation of the control system using the example of visual control for positioning tasks, where the stop criterion is met when the error norm is less than 2 pixels. The operation is similar for the tracking task, with the difference that the stop criterion is the target leaving the assigned working area.

6. Results and Discussion

6.1. Results

6.1.1. Exponential Convergence Control Laws vs. Piece-Wise Variants

Static positioning tasks using the exponential convergence control laws ((43), (45) and (47)) were executed to compare their performance with the corresponding piece-wise variants ((51)–(53)). Only results for the control laws in the operational space are shown, although similar results were obtained for the control laws in the other spaces. Figure 7 and Figure 8 show the error norm and control signal, respectively, for control law (45). Similarly, Figure 9 and Figure 10 show the error norm and control signals for the piece-wise variant (51). Comparing the figures, which correspond to the same positioning task, it can be seen that the piece-wise variant is effective in improving the convergence time from 4 s to 1.8 s, while preserving the zero steady-state error of the exponential convergence control law.

6.1.2. Control Law in Vision-Sensor Space

For the first set of experiments (i.e., static positioning) for the vision-based control law, an average error of 0.604 mm was obtained. Table 1 shows more detail about these results. The maximum and minimum errors coincide with the target point farthest from the cameras and the target point closest to the cameras. This can happen because the pixel/mm ratio decreases as the distance increases (this phenomenon is also observed in the tracking tasks reported below).
Figure 11 shows the position error norm in image coordinates, measured in pixels. It can be seen how the behavior of the norm is similar to a linear decay in most of the robot’s path. This is a direct effect of the piece-wise control law. Figure 12 shows the velocity commands sent to the control boards. The angular velocities ( θ ˙ 1 i ) of the actuated rotational joints are measured on the scale to the left, while the velocity of the prismatic reconfiguration joint ( R ˙ ) is measured on the scale to the right. In all cases, the signals tend to zero.
For the tracking task with the CSM control law, for an object moving at constant speed, average tracking errors between 2.5 mm and 11.5 mm were measured for speeds ranging from 6.5 cm/s to 12.7 cm/s. The tracking error increases as the belt speed increases. The results are shown in Table 2. Figure 13, Figure 14 and Figure 15 show the tracking error in both the image and operational spaces for the experiments at different conveyor belt speeds. The tracking error in the image space is represented by the blue line and is measured on the left-hand scale, while the tracking error in the operational space is represented by the violet line and is presented on the right-hand scale. At the beginning of the experiments at 6.5 cm/s and 9.3 cm/s, the object is motionless and the robot positions itself on the target; then, the object starts moving at a constant speed and the robot tracks the target. For the experiment at 12.7 cm/s, the target starts moving from the beginning of the experiment. Table 2 shows the tracking errors in operational space. The tracking error consists of the average of the error norm from the time the target starts its movement to the end of the experiment, when the target leaves the working area. This time can vary, depending on the tracking speed, between 2.5 s and 5 s. The working area allows for a trajectory of the object covering a distance of 38 cm. We can see that the average error is always a small fraction of the distance covered by the target and always stays below 3%.

6.2. Discussion

In the comparison between the exponential convergence control laws and the piece-wise variants, it can be seen that the piece-wise variant is effective in improving the convergence time to about half the original time, while preserving the zero steady-state error of the exponential convergence control law. In practical applications, this represents a significant advantage, as it can accelerate processes.
With respect to the control law designed in the vision-sensor space, in static positioning tasks an average positioning error of 0.6 mm was obtained. This implies an improvement in positioning precision over previous implementations of LCM-CSM/CSM control laws [6,28,36], where an accuracy of about 1 mm was achieved. This improvement can be attributed both to the implementation of the LCM-CSM Jacobian matrix that includes the depth scale information and to the higher resolution of the cameras. In the tests performed, a loss of accuracy in the positioning task was observed as the target moves away from the cameras. This can be explained by the fact that the farther the target is from the cameras, the lower the pixel/mm correspondence.
For the constant velocity trajectory tracking task, tracking errors of 2.5 mm to 5 mm were achieved for velocities of 6.5 to 12.7 cm/s. These results are competitive with other results reported in the literature. Similar to the positioning task, there is an increase in error as the target moves away from the cameras. It was found that the velocities estimated by averaging the last 10 numerical derivatives produce very noisy signals that are transferred to the control signal. This can affect the performance of the robot and generate a higher tracking error compared to a cleaner velocity signal. If a polynomial estimate of the trajectory over the last 10 steps is available, the velocity can instead be calculated analytically from it.
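The polynomial alternative mentioned above can be sketched as follows: fit a low-order polynomial to the last few image-space samples of each coordinate and differentiate it analytically. The window length and polynomial degree are illustrative choices.

```python
import numpy as np

def target_velocity(t_hist, s_hist, degree=2):
    """Estimate the target image-space velocity at the newest sample.

    t_hist : time stamps of the last samples; s_hist : matching image coordinates (N x m).
    """
    t = np.asarray(t_hist) - t_hist[-1]                # center time at the newest sample
    vel = []
    for coord in np.asarray(s_hist).T:                 # one polynomial per image coordinate
        c = np.polyfit(t, coord, degree)
        vel.append(np.polyval(np.polyder(c), 0.0))     # analytic derivative at t = 0
    return np.array(vel)

# e.g. using the last 10 samples of a two-camera measurement (4 coordinates):
# s_ref_dot = target_velocity(times[-10:], s_samples[-10:])
```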

7. Conclusions

In this article, the Jacobian analysis and the design of vision-based control laws for a reconfigurable, redundant delta-type parallel robot are presented. We investigated a target-tracking task where the target trajectory is not known in advance. Also, the image Jacobian ($J_{csm}$) based on the LCM-CSM approach is revisited, and the vision parameters are shown to be constant and independent of a depth scale factor, which is a function of the depth of the observed features and is the only variable element. This allows us to very easily update the image Jacobian matrix at each control cycle with a very low computational cost.
Using the proposed control strategy, in the case of static objectives, the average positioning error was 0.6 mm with a standard deviation of 0.445 mm. These results are competitive when compared against a number of industrial applications. Regarding mobile object tracking tasks, the results show tracking errors of 2.5 mm, 3.9 mm and 5 mm for targets moving along a linear trajectory at speeds of 6.5, 9.3 and 12.7 cm/s, respectively, with a maximum final static positioning error of 1.53 mm once the object stopped.
Parallel robots require specialized control designs because of their model complexity and open problems, which prevent simply extending existing robot manipulator controls. Moreover, while a reconfigurable parallel manipulator increases the versatility of the manipulator and tackles some of its limitations, it also increases the complexity of the robot and requires an even more specialized control. Delta robots are very popular in industry, particularly in “pick and place” tasks; however, these applications require on-line adjustments, specification changes and the tracking of moving targets. Under these circumstances, vision-based control provides the necessary flexibility over traditional control approaches and, in the authors’ opinion, the proposed vision-based control via CSM provides a satisfactory solution for said tasks.

Future Work

The results presented in this article are a starting point for the topics addressed here. On the one hand, it is possible to deepen the analysis of the characteristics resulting from the reconfiguration of the robot. On the other hand, the presented piece-wise control law shows a significant improvement in the convergence rate compared to the exponential decay control law. Comparisons with other control laws, such as PID, which are traditionally among the first choices for controlling a system, can shed light on the full potential of the proposed law. The development of a control methodology around the proposed law is desirable. Considering the VLCM-CSM visual control, comparisons can be made between the Jacobian in this article and those reported in other papers to obtain a better sense of its effectiveness.

Author Contributions

Conceptualization, A.F.-L., M.M., A.G. and A.C.; Methodology, A.F.-L., M.M. and D.P.; Software, A.F.-L.; Validation, A.F.-L.; Formal analysis, A.F.-L., M.M. and D.P.; Investigation, M.M., A.G., A.C. and D.P.; Resources, M.M.; Data curation, A.G.; Writing—original draft, A.F.-L.; Writing—review & editing, A.F.-L., M.M., A.G., A.C. and D.P.; Visualization, A.C.; Supervision, M.M.; Project administration, M.M.; Funding acquisition, M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially funded by CONACyT grant Cátedras CONACyT 2016/972 and CONAHCyT grant 712819. The APC was funded by Posgrado en Ing. Mecánica, Posgrado en Ing. Eléctrica, and Ing. en Mecatrónica, Facultad de Ingeniería, UASLP.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available upon reasonable request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Clavel, R. DELTA, a fast robot with parallel geometry. In Proceedings of the 18th International Symposium on Industrial Robots, Lausanne, Switzerland, 26–28 April 1988; Burckhardt, C.W., Ed.; Springer: New York, NY, USA, 1988; pp. 91–100.
2. Merlet, J.P. Parallel Robots; Kluwer Academic Publishers: Alphen aan den Rijn, The Netherlands, 2006.
3. Mostashiri, N.; Dhupia, J.S.; Verl, A.W.; Xu, W. A Review of Research Aspects of Redundantly Actuated Parallel Robots for Enabling Further Applications. IEEE/ASME Trans. Mechatronics 2018, 23, 1259–1269.
4. Pandilov, Z.; Dukovski, V. Comparison of the characteristics between serial and parallel robots. Acta Tech. Corviniensis-Bull. Eng. 2014, 7, 143–160.
5. Zhang, H.; Fang, H.; Zou, Q. Non-singular terminal sliding mode control for redundantly actuated parallel mechanism. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420919548.
6. Loredo, A.; Maya, M.; González, A.; Cardenas, A.; Gonzalez-Galvan, E.; Piovesan, D. A Novel Velocity-Based Control in a Sensor Space for Parallel Manipulators. Sensors 2022, 22, 7323.
7. Pandilov, Z.; Dukovski, V. Several open problems in parallel robotics. Acta Tech. Corviniensis-Bull. Eng. 2011, 4, 77–84.
8. Bentaleb, T.; Iqbal, J. On the improvement of calibration accuracy of parallel robots—Modeling and optimization. J. Theor. Appl. Mech. 2020, 58, 261–272.
9. Milutinović, D.; Slavković, N.; Kokotović, B.; Milutinović, M.; Živanović, S.; Dimić, Z. Kinematic modeling of reconfigurable parallel robots based on DELTA concept. In Proceedings of the 11th International Scientific Conference MMA 2012—Advanced Production Technologies, Novi Sad, Serbia, 20–21 September 2012; University of Novi Sad, Faculty of Technical Sciences, Department for Production Engineering: Novi Sad, Serbia, 2012; pp. 259–262.
10. Plitea, N.; Lese, D.; Pisla, D.; Vaida, C. Structural design and kinematics of a new parallel reconfigurable robot. Robot. Comput.-Integr. Manuf. 2013, 29, 219–235.
11. Luces, M.; Mills, J.K.; Benhabib, B. A Review of Redundant Parallel Kinematic Mechanisms. J. Intell. Robot. Syst. 2017, 86, 175–198.
12. Gosselin, C.; Schreiber, L.T. Redundancy in Parallel Mechanisms: A Review. Appl. Mech. Rev. 2018, 70, 010802.
13. Mostashiri, N.; Dhupia, J.; Xu, W. Redundancy in Parallel Robots: A Case Study of Kinematics of a Redundantly Actuated Parallel Chewing Robot. In Proceedings of the RITA 2018, Kuala Lumpur, Malaysia, 16–18 December 2018; Abdul Majeed, A.P.P., Mat-Jizat, J.A., Hassan, M.H.A., Taha, Z., Choi, H.L., Kim, J., Eds.; Springer: Singapore, 2020; pp. 65–78.
14. Harada, T.; Angeles, J. From the McGill pepper-mill carrier to the Kindai ATARIGI Carrier: A novel two limbs six-dof parallel robot with kinematic and actuation redundancy. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, China, 5–8 December 2017; pp. 1328–1333.
15. Moosavian, A.; Xi, F.J. Modular design of parallel robots with static redundancy. Mech. Mach. Theory 2016, 96, 26–37.
16. Maya, M.; Castillo, E.; Lomelí, A.; González-Galván, E.; Cárdenas, A. Workspace and Payload-Capacity of a New Reconfigurable Delta Parallel Robot. Int. J. Adv. Robot. Syst. 2013, 10, 56.
17. Hu, L.; Gao, H.; Qu, H.; Liu, Z. Closeness to singularity based on kinematics and dynamics and singularity avoidance of a planar parallel robot with kinematic redundancy. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2022, 236, 3717–3730.
18. Pulloquinga, J.L.; Escarabajal, R.J.; Ferrándiz, J.; Vallés, M.; Mata, V.; Urízar, M. Vision-Based Hybrid Controller to Release a 4-DOF Parallel Robot from a Type II Singularity. Sensors 2021, 21, 4080.
19. Singh, A.; Kalaichelvi, V.; Karthikeyan, R. A survey on vision guided robotic systems with intelligent control strategies for autonomous tasks. Cogent Eng. 2022, 9, 2050020.
20. Cherubini, A.; Navarro-Alarcon, D. Sensor-based control for collaborative robots: Fundamentals, challenges, and opportunities. Front. Neurorobot. 2021, 14, 113.
21. Hutchinson, S.; Hager, G.D.; Corke, P.I. A tutorial on visual servo control. IEEE Trans. Robot. Autom. 1996, 12, 651–670.
22. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90.
23. Ren, X.; Li, H. Uncalibrated Image-Based Visual Servoing Control with Maximum Correntropy Kalman Filter. IFAC-PapersOnLine 2020, 53, 560–565.
24. Skaar, S.B.; Brockman, W.H.; Hanson, R. Camera-Space Manipulation. Int. J. Robot. Res. 1987, 6, 20–32.
25. Xie, Y.; Wang, W.; Zhao, C.; Skaar, S.B.; Wang, Q. A Differential Evolution Approach for Camera Space Manipulation. In Proceedings of the 2020 2nd World Symposium on Artificial Intelligence (WSAI), Karlsruhe, Germany, 14–16 September 2020; IEEE: Manhattan, NY, USA, 2020; pp. 103–107.
26. Weiss, L.; Sanderson, A.; Neuman, C. Dynamic sensor-based control of robots with visual feedback. IEEE J. Robot. Autom. 1987, 3, 404–417.
27. Lin, C.J.; Shaw, J.; Tsou, P.C.; Liu, C.C. Vision servo based Delta robot to pick-and-place moving parts. In Proceedings of the 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, Taiwan, 14–17 March 2016; pp. 1626–1631.
28. Rendón-Mancha, J.M.; Cárdenas, A.; García, M.A.; Lara, B. Robot positioning using camera-space manipulation with a linear camera model. IEEE Trans. Robot. 2010, 26, 726–733.
29. Huynh, B.P.; Kuo, Y.L. Dynamic Hybrid Filter for Vision-Based Pose Estimation of a Hexa Parallel Robot. J. Sens. 2021, 2021, 9990403.
30. Amjad, J.; Humaidi, A.I.A. Design of Augmented Nonlinear PD Controller of Delta/Par4-Like Robot. J. Control Sci. Eng. 2019, 2019, 7689673.
31. Hoang, X.B.; Pham, P.C.; Kuo, Y.L. Collision Detection of a HEXA Parallel Robot Based on Dynamic Model and a Multi-Dual Depth Camera System. Sensors 2022, 22, 5922.
32. Kansal, S.; Mukherjee, S. Vision-based kinematic analysis of the Delta robot for object catching. Robotica 2022, 40, 2010–2030.
33. Huynh, B.P.; Kuo, Y.L. Dynamic Filtered Path Tracking Control for a 3RRR Robot Using Optimal Recursive Path Planning and Vision-Based Pose Estimation. IEEE Access 2020, 8, 174736–174750.
34. Zake, Z.; Chaumette, F.; Pedemonte, N.; Caro, S. Vision-Based Control and Stability Analysis of a Cable-Driven Parallel Robot. IEEE Robot. Autom. Lett. 2019, 4, 1029–1036.
35. Zake, Z.; Chaumette, F.; Pedemonte, N.; Caro, S. Robust 2½D Visual Servoing of A Cable-Driven Parallel Robot Thanks to Trajectory Tracking. IEEE Robot. Autom. Lett. 2020, 5, 660–667.
36. Lopez-Lara, J.G.; Maya, M.E.; González, A.; Cardenas, A.; Felix, L. Image-based control of delta parallel robots via enhanced LCM-CSM to track moving objects. Ind. Robot. Int. J. Robot. Res. Appl. 2020, 47, 559–567.
37. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362.
38. Bradski, G. The OpenCV Library. Dr. Dobb's J. Softw. Tools Prof. Program. 2000, 25, 120–123.
39. PyQt. PyQt Reference Guide. 2012. Available online: https://doc.bccnsoft.com/docs/PyQt5/ (accessed on 7 June 2023).
Figure 1. Kinematic chains of the delta robot.
Figure 2. Reconfiguration mechanism for the delta robot.
Figure 3. Pinhole camera model.
Figure 4. The reconfigurable delta robot and camera system.
Figure 5. Schematic of the hardware system and its workflow.
Figure 6. General operation of the control system.
Figure 7. Error norm vs. time for control law $\dot{q} = J^{+}G\,e - \gamma(I - J^{+}J)\frac{\partial\mu}{\partial q}$ with $e = x - x^{*}$.
Figure 8. Angular joint velocity (radians/s) vs. time for control law $\dot{q} = J^{+}G\,e - \gamma(I - J^{+}J)\frac{\partial\mu}{\partial q}$ with $e = x - x^{*}$.
Figure 9. Error norm vs. time for control law (51) with $e = x - x^{*}$.
Figure 10. Angular joint velocity (radians/s) vs. time for control law (51) with $e = x - x^{*}$.
Figure 11. Positioning error norm in image space.
Figure 12. Velocity control signal.
Figure 13. Tracking error in image and operational spaces for a speed of 6.5 cm/s.
Figure 14. Tracking error in image and operational spaces for a speed of 9.3 cm/s.
Figure 15. Tracking error in image and operational spaces for a speed of 12.7 cm/s.
Table 1. Position error for positioning task.

               Error (mm)
  Average        0.600
  Max            1.530
  Min            0.172
  Std. Dev.      0.445
Table 2. Tracking error in operational space for constant velocity task.

  Conveyor Speed    Average Error
  6.5 cm/s          2.5 mm
  9.3 cm/s          3.9 mm
  12.7 cm/s         11.5 mm
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
