Manufacturing Technology on a Mechatronics Line Assisted by Autonomous Robotic Systems, Robotic Manipulators and Visual Servoing Systems

This paper proposes the implementation of an assisting technology for a processing/reprocessing mechatronics line (P/RML), comprising the following: two autonomous robotic systems (ARSs), two robotic manipulators (RMs) and three visual servoing systems (VSSs). The P/RML has four line-shaped workstations assisted by two ARSs, i.e., wheeled mobile robots (WMRs): one of them equipped with an RM, used for manipulation, and the other one used for transport. Two types of VSSs, eye to hand and eye in hand, are used as actuators for the precise positioning of the RMs to catch and release the work-piece. The work-piece visits the stations successively as it is moved along the line for processing. If the processed piece does not pass the quality test, it is taken from the last station of the P/RML and transported to the first station, where it will be considered for reprocessing. The P/RML, assisted by ARSs, RMs and VSSs, was modelled with synchronized hybrid Petri nets (SHPN). To control the ARSs, we propose the use of trajectory-tracking and sliding-mode control (TTSMC). The precise positioning that allows the picking up and releasing of the work-piece was performed using two types of VSSs. In the first one, termed eye to hand VSS, the cameras have a fixed position, located at the last and the first workstations of the P/RML. In the second one, named eye in hand VSS, the camera is located at the end effector of the RM.


Introduction
Central to the idea of the paper is the overall proposed approach: a technology that works on a laboratory system and integrates several systems: a processing/reprocessing mechatronics line (P/RML), autonomous robotic systems (ARSs), robotic manipulators (RMs) and visual servoing systems (VSSs). The technology involves several concepts: task planning, hybrid modelling, simulation, sensors, actuators, monitoring and real-time control. The technology allows the recovery of products through reprocessing and works completely automated, without human intervention.
The rest of the paper is organised as follows: the description of the P/RML assisted by ARSs, RMs and VSSs, with task planning and preliminary assumptions useful for developing the SHPN model, is laid out in Section 2; the model structure and the SHPN model formalism, in generalised and customised forms, are presented in Section 3, together with simulation results for the customised SHPN model associated with the P/RML assisted by ARSs; the modelling and control of the eye to hand and eye in hand VSSs, based on the moments of the image, are presented in Section 4; in Section 5, the implementation and real-time control of the P/RML assisted by two ARSs, two RMs, two eye to hand and one eye in hand VSSs, based on the SHPN model and signals from sensors, are presented; comments on the robustness of the proposed technology to uncertainties and on the need for VSSs can be found in Section 6; some final remarks can be found in Section 7.

Hardware Description
The P/RML FESTO MPS-200, shown in Figure 1, is a configurable laboratory mechatronic system, which can be assimilated to a flexible manufacturing line that performs several operations. It is composed of four workstations (cells), each performing different operations. These workstations are controlled by separate Siemens S7-300 PLCs, and each of them ensures the operations of a different stage: buffering, handling, processing and sorting. The ARS Pioneer 3-DX, a WMR with two driving wheels and one free wheel, is used for the recovery operation. The ARS PeopleBot, a WMR similar to the ARS Pioneer 3-DX, is used for the transport operation. The control of the ARSs is carried out by a specialized application developed and executed on a remote PC that transmits commands via a Wi-Fi link [20,21].

On the ARS Pioneer 3-DX, an RM Pioneer 5-DOF (degrees of freedom) manipulator is mounted, also controlled via Wi-Fi. Dedicated functions from the ARIA (Advanced Robotic Interface for Applications) package are used for controlling the ARSs and the RM Pioneer 5-DOF. A 7-DOF manipulator, the RM Cyton 1500, connected via USB to the process PC, is used to return the piece to the line. For the synchronization of the P/RML with the ARSs and RMs, signals from sensors and high-definition video cameras are used, two of the cameras integrated as eye to hand VSSs and the third as an eye in hand VSS. The P/RML, ARSs, RMs, cameras and work-pieces are shown in Figures 1 and 2.

Eye to Hand and Eye in Hand VSSs
The eye to hand VSS is defined by mounting the video sensor in a fixed position relative to the work environment [8,9,19]. The eye in hand VSS refers to a system where the video sensor is located at the end effector of the RM [8,9,19]. For image-based VSSs, 2D image information is used directly to estimate the desired motion of the robot. Typical tasks such as tracking and positioning are accomplished by minimizing the error between the visual features extracted from the current image and the corresponding visual features of the desired image.
For image-based architectures, the visual sensor used to extract visual information about the work environment can be mounted either directly on the end effector of the robot (eye in hand), in which case the motion of the robot also induces the movement of the camera, or somewhere in the workspace (eye to hand), so that it can observe the robot's movement from a fixed point. The general representations of the two architectures are presented in Figure 3. The features most commonly used in object classification and shape recognition are the so-called moments of the image, which, because of their benefits, are also used for robotic and artificial intelligence purposes. Considering an image as a two-dimensional intensity distribution, the moments of the image contain information about the image area, the orientation of the image and the coordinates of the centre of gravity.
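Since the section relies on image moments (area, centre of gravity, orientation) as visual features, a minimal sketch of how these quantities are computed from a two-dimensional intensity distribution may help; the function and the toy image below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_moments(img):
    """Raw moments m_pq of a 2-D intensity (or binary) image:
    m_pq = sum_x sum_y x^p * y^q * I(x, y)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = lambda p, q: float(np.sum((xs ** p) * (ys ** q) * img))
    m00 = m(0, 0)                             # area (for a binary image)
    xc, yc = m(1, 0) / m00, m(0, 1) / m00     # centre of gravity
    # central moments give the orientation of the principal axis
    mu11 = m(1, 1) - xc * m(0, 1)
    mu20 = m(2, 0) - xc * m(1, 0)
    mu02 = m(0, 2) - yc * m(0, 1)
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return m00, (xc, yc), theta

# toy example: a 3x3 bright square centred at (2, 2) in a 5x5 image
img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0
area, (xc, yc), theta = image_moments(img)
```

For this symmetric square, the area is 9 pixels, the centre of gravity is at (2, 2) and the orientation is zero, matching the intuition that the moments encode area, centroid and orientation.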


Figure 3. P/RML assisted by two ARSs, two RMs, two eye to hand VSSs and one eye in hand VSS.

Task Planning of P/RML
The P/R operations can be broken down into a sequence of elementary tasks coupled in parallel with work-piece positioning tasks along the workstations, as in [6,8,12]. The hybrid processing/reprocessing strategy is based on the hierarchical model proposed in [6,7,8,22], which uses the block diagram representation shown in Figure 4. The ARSs carry the work-piece that fails the quality test from the place where it is stored (the last station of the P/RML) to the initial position (the first station of the P/RML) for reprocessing. Figure 4 shows the task planning for the processing of a work-piece and its transport by the ARSs.


Assumptions
Specialized technology on this type of P/RML represents the basis of a flexible industrial production line that delivers a wide range of standard products. Additionally, through reprocessing, products can be recovered and brought to the required quality standards. The technology on the P/RML, further developed, depends on aspects such as operation modes, operation lengths and types of finished products [23]. Therefore, for the P/RML, ARSs, RMs and VSSs, some assumptions have to be made for controlling the whole system:
Assumption 1. The P/RML is deterministic, with a single SHPN model, which allows obtaining suitable quality products and recoverable products through reprocessing;
Assumption 2. The number of the P/RML workstations involved in P/R is known in advance and will remain unchanged;
Assumption 3. Only one type of work-piece, in different colours, is processed or reprocessed, because the proposed technology involves processing operations specific to a certain type of product;
Assumption 4. All conditions and parameters of the technology are initially known, including task durations, costs and the quantity of work-pieces that will take part in the process;
Assumption 5. The workstations of the P/RML have a linear distribution: transport, handling, processing and storing workstations;
Assumption 6. The P/R operations are executed on the same line, and work-pieces can be processed at the same time in different system stages.
Remark 1. As a result of positioning errors or tool misalignments, it is possible for the processing operations not to be carried out properly. If these errors are subsequently eliminated by performing the operations again through reprocessing, the work-piece should be brought to the required quality standard.

Assumption 7. A red work-piece means that the quality test is not passed and reprocessing is needed;
Assumption 8. In the deposit, the first level from the top is for rejected work-pieces;
Assumption 9. Two ARSs assist the P/RML, one of them having an RM, Pioneer 5-DOF, mounted for picking up the work-piece, and the other used for transport;
Assumption 10. Two eye to hand VSS cameras are mounted on the P/RML, one at the last and another at the first workstation;
Assumption 11. One eye in hand VSS camera is mounted on the second RM, the Cyton 1500.
The use of the ARSs is justified by the need to collaborate with the P/RML in order to recover the rejected work-pieces and transport them to the beginning of the line for reprocessing or for definitive rejection. The paths and distances that the ARSs travel are shown in Figure 3.

Structure of the SHPN Model
The hybrid aspect of the model (P/RML assisted by ARSs) is given by the variables associated with the distances covered by the ARSs. These distances are covered by the ARSs between the last workstation (storage workstation) and the first workstation (transport workstation) of the P/RML. For modelling, we use the SHPN tool [12,16], which integrates the discrete aspect of the P/RML with the continuous aspect of the ARSs' displacement, as shown in [6,7,8,9]. The global model is of the SHPN type because it is interfaced with external events for synchronization, the events being signals coming from sensors and VSSs. The SHPN structure from Figure 5 corresponds to the discrete modelling of the P/R processes and the continuous dynamics of the ARSs, one of them equipped with an RM, which assist the P/RML to bring the work-piece from the last to the first workstation to be reprocessed. The internal structure of the SHPN model integrates PN models with specific typologies: TPN (timed Petri net), SPN + TPN (synchronized Petri net + timed Petri net) and THPN (timed hybrid Petri net) [15,16,17]. These models describe the following operations, which are performed automatically: processing (TPN modelling), reprocessing (TPN modelling) and the assistance of the ARSs in fastening the work-piece and bringing it back for reprocessing. Adding the external events, i.e., the signals from sensors used for the synchronization of the P/RML with the ARSs and RMs, the final model has the SHPN structure shown in Figure 5. Each of the models represents a sequence in the real-time control operation, as follows: the TPN type for P/R operations and the SPN + TPN type for work-piece retrieval, transport and manipulation.


Formalism of the SHPN Model Associated to P/RML Assisted by Two ARSs
The SHPN model from Figure 6, associated with the P/RML assisted by two ARSs, is a triplet SHPN = (THPN, E_d, Sync), where the THPN is a septuplet THPN = (P, T, Pre, Post, m_0, h, tempo), E_d is a set of external events (signals from sensors) and Sync maps the elements of the set of discrete transitions into the set of external events. The external events are signals from sensors, used for synchronization.
P is a finite set of places, where: P^d_i, i = 1, ..., 15, is the set of discrete states corresponding to the processing operations on the P/RML (buffering, handling, catching, drilling, boring, sorting, releasing, etc.); P^d_C_i, i = 1, ..., 14, are the discrete variables of the control system; P^r_j, j = 1, ..., 8, are the discrete states that define the ARSs' displacements, with P^r_j, j = 1, ..., 4, the discrete states of the ARS Pioneer 3-DX and P^r_j, j = 5, ..., 8, the discrete states of the ARS PeopleBot; P^c_k, k = 1, 2, are the continuous places of the ARS Pioneer 3-DX displacement, while P^c_k, k = 3, ..., 5, define the continuous places of the ARS PeopleBot displacement.
T is the finite set of transitions, where: T^d_i, i = 1, ..., 14, is the set of discrete transitions for executing the work-piece processing; T^r_j, j = 1, ..., 7, is the set of discrete transitions of the ARS Pioneer 3-DX displacement while returning the work-piece for reprocessing, and T^r_j, j = 8, ..., 14, is the set of discrete transitions of the ARS PeopleBot while returning the work-piece for reprocessing; T^c_k, k = 1, 2, is the set of continuous transitions of the ARS Pioneer 3-DX displacement for transporting the work-piece to the ARS PeopleBot, and T^c_k, k = 3, 4, is the set of continuous transitions of the ARS PeopleBot displacement for transporting the work-piece to the RM Cyton. The sets of places and transitions are disjoint: P ∩ T = ∅.
Pre : P × T → Q+ is the input incidence function; Post : P × T → Q+ is the output incidence function; m_0 : P → R+ is the initial marking. For Pre, Post and m_0, in the case where P_i ∈ P^d, the functions take values in N (the set of natural numbers) and, in the case where P_i ∈ P^c, they take values in Q+ (the set of positive rational numbers).
h is the hybrid function that indicates, for every node, whether it is a discrete node (sets P^D and T^D) or a continuous node (sets P^C and T^C), where {D} and {C} denote the sets of discrete and continuous nodes, respectively. tempo is the function of the timings associated with the transitions of Figure 6: if T_j ∈ T^d, then d_j = tempo(T_j) is the timing associated with T_j; if T_c ∈ T^c, then tempo(T_c) is the flow rate U_c associated with T_c, where U_c is the variable flow of the ARS displacement between continuous places, with V_r = 94 mm/s. Let ED(T_j, m) be the ED-enabling degree of a C-transition T_j for a marking m, computed after all the arcs from a C-place to a C-transition have been erased.
The maximum firing speed of a continuous transition T_c is defined as the product of its flow rate U_r and its ED-enabling degree. As shown in Figure 6, this degree depends on the mark associated with a continuous place and on the weight of the arc connecting a continuous transition to a continuous place of the ARS. For N = 2, the arcs P_i × T_j have a weight equal to one. For an SHPN, a transition is enabled if each of its input places holds enough tokens.
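The enabling rule stated above (a transition is enabled if each input place holds enough tokens) can be sketched with a toy discrete Petri net; the Pre/Post matrices and the marking below are hypothetical toy values, unrelated to the actual Figure 6 model.

```python
import numpy as np

# Toy Petri net: 3 places, 2 transitions (NOT the Figure 6 model).
# Pre[i, j]  = tokens place i must provide to enable transition j
# Post[i, j] = tokens transition j deposits into place i
Pre  = np.array([[1, 0],
                 [1, 1],
                 [0, 0]])
Post = np.array([[0, 0],
                 [0, 0],
                 [1, 1]])
m0 = np.array([1, 2, 0])          # initial marking

def enabled(m, j):
    """A transition is enabled iff each input place holds enough tokens."""
    return bool(np.all(m >= Pre[:, j]))

def fire(m, j):
    """Fire transition j: consume Pre tokens, deposit Post tokens."""
    assert enabled(m, j)
    return m - Pre[:, j] + Post[:, j]

m1 = fire(m0, 0)   # T0 consumes one token from P0 and P1, marks P2
```

Firing T0 from the marking [1, 2, 0] yields [0, 1, 1]; transition T1 is then still enabled (its only input place, P1, holds a token), while T0 is not.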

The SHPN model, shown in Figure 6, is an oriented graph described with the Petri net (PN) formalism, according to the discrete event systems (DES) approach. The SHPN model describes the dynamics of the P/RML assisted by the ARSs as a hybrid process. For building the model, the following conventions were adopted: P^d_i, i = 1, ..., 15, are the discrete states of the work-piece under processing; the actions that determine the modification of these states correspond to the operations of processing (drilling, boring, etc.), transporting, quality testing (QT1 and QT2) and handling; P^r_j, j = 1, ..., 8, are the discrete states of the two ARSs, determined by the running of the fixed-point displacement and manipulation; P^d_C_i, i = 1, ..., 14, are the control/synchronization signals necessary for controlling the sequence of tasks and the synchronization at the P/RML level, served by the ARSs; P^C = {P^c_j, j = 1, ..., 5} are the continuous states describing the variation of the position of the two ARSs in relation to the endpoint of the displacement, corresponding to a complete service cycle (see Figure 3, with the workstations and the successive positioning scheme of the two ARSs). The synchronization signals are intended for the synchronization of the P/RML, ARS Pioneer 3-DX and ARS PeopleBot actions.
In the SHPN formalism, these correspond to the set of events, which in the SHPN model are represented as follows: {e} is the neutral event that is considered to "synchronize" the transitions T\{T_r14, T_d14, T_r7}, which are neutral in terms of synchronization with other sub-processes; the signals injected by changing the states in the controlled sub-processes are:
Edd_1, synchronization signal for START_P/RML with END_ARS PeopleBot and START_Piece_Delivery to the P/RML for reprocessing;
Edd_2, synchronization signal for END_QT2 with START_ARS Pioneer 3-DX actions;
Edd_3, synchronization signal for END_ARS PeopleBot and START_ARS Pioneer 3-DX actions;
Edd_4, synchronization signal for END_ARS Pioneer 3-DX and START_ARS PeopleBot actions;
Edd_5, synchronization signal for END_Piece_Delivery to the P/RML for reprocessing with START_ARS PeopleBot actions.
Sync = Sync_0 ∪ Sync_1 represents the synchronization signals corresponding to each transition in the SHPN model with other sub-processes, so that the SHPN model, and implicitly its simulation, is consistent with the control strategy of the whole process. The paper focuses on the hybrid aspect of the process, both in modelling and in control, taking into account the scenarios of P/RML assistance by the ARSs. The hybrid control system of the P/RML serviced by two ARSs is able to control the entire process in real time, according to the control strategy.
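The role of the Sync mapping can be illustrated with a minimal sketch in which a transition fires only when it is marking-enabled and its associated external event (one of the Edd signals, or the neutral event e) has occurred; the transition names and the event association below are illustrative assumptions, not the actual Sync function of the paper.

```python
# Hypothetical Sync mapping: each discrete transition is associated
# either with an external event (an Edd sensor signal) or with the
# neutral event "e", which imposes no synchronization constraint.
sync = {
    "T_r1": "Edd2",   # assumed: fires only after END_QT2 is signalled
    "T_d1": "Edd1",   # assumed: waits for a different sensor signal
    "T_r2": "e",      # neutral: no external event required
}

occurred = {"Edd2"}   # external events seen so far (e.g. END_QT2)

def can_fire(t, marking_enabled=True):
    """A synchronized transition fires iff it is enabled by the marking
    AND its associated external event has occurred (or is neutral)."""
    ev = sync.get(t, "e")
    return marking_enabled and (ev == "e" or ev in occurred)
```

With only Edd2 observed, T_r1 and the neutral T_r2 may fire, while T_d1 must wait for its sensor signal, mirroring how the SHPN synchronizes the P/RML with the ARSs.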

Simulation of SHPN Model
Following the testing and simulation of the SHPN model described using the Sirphyco package (Figure 6), the simulation results are shown in Figures 7-9. The advantage of using a common simulation for both the discrete and the continuous aspects of the model is that the interaction between the two parts can be obtained and analysed. Additionally, as a result of the hybrid model simulation, the maximum speed of the two ARSs that assist the P/RML was determined. The speed was computed so that the cycle time for work-piece recovery and transport is minimal. In addition, the hardware constraints of the ARSs have to be taken into account in order to obtain the best speed. By using the SHPN model, a unique and precise connection is achieved between the two models: the TPN, which models the P/RML tasks, and the THPN, which models the operations of the two ARSs while retrieving and transporting the work-pieces. In the complete SHPN model, represented in Figure 6, each task performed by the P/RML (such as buffering, handling, drilling, boring, etc.) has a very precisely determined duration, together with the duration of the ARSs' displacement. Figures 7 and 8 present the simulation results of the model, and it can be observed that the time ranges correspond to reality.
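As a back-of-envelope illustration of the cycle-time computation, the travel time of an ARS over its displacement segments is the sum of segment length divided by speed; V_r = 94 mm/s is the speed given in the paper, while the segment lengths below are hypothetical placeholders, not the real P/RML layout.

```python
# V_r = 94 mm/s is the ARS displacement speed reported in the paper;
# the segment lengths are ASSUMED placeholder values for illustration.
V_r = 94.0                       # mm/s
segments_mm = [1200.0, 800.0]    # assumed Pioneer 3-DX and PeopleBot legs

# total travel time contributed to the recovery-and-transport cycle
travel_time_s = sum(d / V_r for d in segments_mm)
```

Minimizing the cycle time therefore pushes V_r toward the largest value the ARS hardware constraints allow, as noted above.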


Firstly, the ARSs are initialised, and then the ARS Pioneer 3-DX waits for an external event, namely the existence of a work-piece that could be reprocessed. The video system used for synchronization detects the appearance of the work-piece and creates this external event, which will trigger the robot for retrieval. Next, the ARS Pioneer 3-DX will put the recovered work-piece on the PeopleBot for transport. Two synchronization signals are needed:
Sync 1, synchronization signal for END_QT2 and START_ARS Pioneer 3-DX;
Sync 2, synchronization with the P/RML for the reprocessing of the work-piece, END_ARS PeopleBot action and START_Pioneer 3-DX action.

Modelling and Control Structure
The structure of a VSS is based on the following components: an autonomous system composed of a mobile robot equipped with a 5- or 6-DOF manipulator, a controller and a video sensor (web camera). The fundamental part of the architecture, the image-based controller, requires a priori information about the behaviour of the system in order to minimize the error between a current configuration of visual features, f, and a desired configuration, f*. To model the open-loop behaviour of the servoing system, the two entities that form the fixed part must be analysed separately: the manipulator robot and the visual sensor. An eye-in-hand configuration is considered for the visual sensor-robot assembly. The main idea of modelling a VSS is to minimize the error between the real and the desired features extracted from the video sensor [9,19,23,24]. The control structures of the two types of VSSs are shown in Figures 10 and 11. Let υ*_c be the signal associated with the control input of the ARS. This signal represents the reference speed of the camera and has the following structure: υ*_c = (υ*, ω*)^T, with υ* = (υ*_x, υ*_y, υ*_z)^T and ω* = (ω*_x, ω*_y, ω*_z)^T defined as the linear and the angular speed, respectively. The signal υ*_c is expressed in the Cartesian space and requires a transformation to be applicable to the manipulator robot. If we denote by s = [s_1, s_2, s_3, s_4, s_5, s_6]^T the posture obtained by the integration of υ*_c, then the robot Jacobian is defined as J_r = ∂s/∂q, where q_j, j = 1, ..., 6, represents the states of the robot's joints. Thus, the transformation of the signal υ*_c from the Cartesian space to the robot joint space is achieved by J_r^(-1) and the interaction matrix. The interaction matrix needs to fulfil a series of properties in order to obtain the optimal behaviour of the VSS: it has to be non-singular and diagonal.
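The transformation of υ*_c from Cartesian space to joint space via the inverse robot Jacobian can be sketched as follows; the numeric Jacobian is a hypothetical placeholder (in practice J_r is built from the joint states q_1, ..., q_6), and a pseudo-inverse is used to guard against near-singular configurations.

```python
import numpy as np

# Hypothetical 6x6 robot Jacobian J_r (identity plus one assumed
# coupling term, purely for illustration; the real J_r depends on q).
J_r = np.eye(6)
J_r[0, 3] = 0.2

# desired camera twist v*_c = (v*, w*)^T: linear and angular speed
v_c = np.array([0.01, 0.0, -0.02, 0.05, 0.0, 0.1])

# joint-space command: q_dot = J_r^{-1} v_c; pinv handles the case
# where J_r is rectangular or close to singular
q_dot = np.linalg.pinv(J_r) @ v_c
```

Applying J_r to the resulting q_dot recovers the commanded Cartesian twist, which is the defining property of the transformation.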
Considering the moments m_ij as a set of visual features, the analytical form of their variation in time, ṁ_ij, corresponding to the moments of order (i + j) and depending on the speed of the camera υ*_c, is:
ṁ_ij = L_m_ij υ*_c,
where L_m_ij = [m_vx, m_vy, m_vz, m_wx, m_wy, m_wz] is the interaction matrix.
Figure 11. Eye in hand VSS based closed-loop control of the RM Cyton, [9]. @2019 IEEE.

Control Input
Based on the theory presented in [9,19,23], the interaction matrix corresponding to the set of image-moment features f = [x_n, y_n, a_n, τ, ξ, α]^T, for n points, can be computed as below; its rows corresponding to x_n, y_n and a_n are:
[ -1   0   0   a_n e_11         -a_n(1 + e_12)   y_n ]
[  0  -1   0   a_n(1 + e_21)    -a_n e_11        -x_n ]
[  0   0  -1   -e_31            e_32             0   ]

The most used method to generate the control signal for robots is proportional control. The visual servoing task can be viewed as a minimization problem that finds the path of the visual sensor by minimizing a cost function attached to the error vector. Denote by f* the desired feature vector, by f the current feature vector, and by r(t) the relative position between the camera and the object at time t. The variation of the features with respect to the relative movement between the workspace and the video sensor is:

ḟ = (∂f/∂r)(dr/dt) + ∂f/∂t.    (25)

Because the time variation of the features with respect to the motion of the object is zero for a static object, ∂f/∂t = 0 and (25) becomes:

ḟ = L_f · υ_c,    (26)

where υ_c is the vector describing the relative speed between the object and the video sensor and L_f is the interaction matrix from (24). To describe the control law, it is required to define an error function between the target features f* and the current features f:

e = f − f*.    (27)

Because most implementations of visual servoing systems do not take into account the dynamics of the robot (the robot dynamics is considered unitary), υ_c = υ*_c and (26) becomes:

ḟ = L_f · υ*_c.    (28)

From (27) and (28), since f* is constant, the time variation of the error can be expressed as:

ė = L_f · υ*_c.    (29)

Since the robot control input is defined by υ*_c and an exponential decrease of the error is desired, ė = −λe, so ė = L_f · υ*_c = −λe, the following control law is obtained from (29):

υ*_c = −λ L_f^+ e,    (30)

where L_f^+ is the pseudo-inverse of the interaction matrix and can be computed with the expression:

L_f^+ = (L_f^T L_f)^(−1) L_f^T.    (31)

Because in real-time VSSs the distance Z between the points of interest and the reference system attached to the camera is not precisely known, the interaction matrix must be estimated; the estimate, denoted L̂_f^+, is based on the pseudo-inverse of the interaction matrix computed at the target features. Because this matrix remains constant during the entire control algorithm, the control law is:

υ*_c = −λ L̂_f*^+ e.    (32)
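The closed loop defined by (27) and (32) drives the feature error exponentially to zero. The following sketch simulates that loop under simplifying assumptions: the constant estimated interaction matrix is taken as the identity (purely for illustration), and the feature dynamics ḟ = L_f υ_c are integrated with a forward-Euler step.

```python
import numpy as np

# Hypothetical constant estimate of the interaction matrix at the
# desired configuration (identity here, for illustration only).
L_hat = np.eye(6)
L_hat_pinv = np.linalg.pinv(L_hat)

lam = 2.0                                         # proportional gain lambda
dt = 0.01                                         # control period
f_star = np.zeros(6)                              # desired features
f = np.array([0.3, -0.1, 0.05, 0.0, 0.0, 0.2])    # current features

for _ in range(1000):
    e = f - f_star                    # error, eq. (27)
    v_c_star = -lam * L_hat_pinv @ e  # control law, eq. (32)
    f = f + (L_hat @ v_c_star) * dt   # Euler step of f_dot = L_f * v_c

# With e_dot = -lambda * e, the error decays exponentially toward zero.
```

The design choice in (32) of freezing the interaction matrix at the target features trades some transient optimality for robustness: the matrix never has to be re-estimated online, so the unknown depth Z only affects the convergence rate, not the equilibrium.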

Real-Time Control of P/RML with Integrated ARSs, RMs and VSSs
To control the assistive technology of the four-station P/RML Festo MPS 200, three robotic systems were integrated: ARS Pioneer 3-DX equipped with RM Pioneer 5-DOF, for picking up the work-piece from the last station; ARS PeopleBot, for transport; and RM Cyton 1500, for picking up the work-piece from ARS PeopleBot and releasing it on the first workstation of P/RML.
Real-time control of the assisted technology is based on five control loops: • control loop for P/RML with SIMATIC STEP 7 and SIEMENS 300 PLC; • control loop for ARS Pioneer 3-DX equipped with RM Pioneer 5-DOF and eye to hand VSS1, whose video camera is mounted at the last station of P/RML; • control loop for ARS PeopleBot based on trajectory-tracking sliding-mode control (TTSMC); additionally, an obstacle avoidance method based on sonars and ultrasounds was implemented [5,9,10]; • control loop for RM Cyton 1500 based on eye in hand VSS for precise positioning when picking up the work-piece from ARS PeopleBot; the eye in hand VSS camera is mounted on the end effector of RM Cyton 1500; • control loop for RM Cyton 1500 based on inverse kinematics for transporting the work-piece and on eye to hand VSS2 for releasing it on P/RML; the eye to hand VSS2 camera is mounted at the first station of P/RML.
All five control loops communicate through two remote computers: one of them runs the graphic user interface (GUI) application and controls P/RML, the ARSs, VSS1 and VSS2; the other controls RM Cyton 1500 and the eye in hand VSS. Specific programming packages, SIMATIC STEP 7, Microsoft Visual Studio and MATLAB, are used to control the whole system. As shown in Figure 12, the communication between the GUI application and the ARSs is based on a TCP/IP protocol. Using the specialized image processing toolbox from MATLAB and the control input defined in equations (30) and (32), the synchronization between all control loops is implemented.
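The TCP/IP exchange between the GUI computer and an ARS can be sketched with a loopback client/server pair. This is only illustrative: the paper does not specify the message format, so the "GOTO x y" command and the "ACK:" reply used below are hypothetical.

```python
import socket
import threading

def ars_stub(server: socket.socket) -> None:
    """Minimal stand-in for an ARS endpoint: acknowledge one command."""
    conn, _ = server.accept()
    with conn:
        cmd = conn.recv(1024).decode()
        conn.sendall(f"ACK:{cmd}".encode())

# Loopback demonstration of a GUI -> ARS exchange over TCP/IP.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=ars_stub, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as gui:
    gui.sendall(b"GOTO 1.5 0.7")            # hypothetical motion command
    reply = gui.recv(1024).decode()         # acknowledgement from the ARS
```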
Figure 12. Communication block set of P/RML, assisted by two ARSs, two eye to hand and one eye in hand VSSs [9]. @2019 IEEE.

Figure 13 illustrates a series of images and frames captured during real-time control based on the eye to hand VSS located at the last station of P/RML. Additionally, Figure 13 shows the real-time steps, with precise positioning based on eye in hand VSS, of the ARS Pioneer 3-DX equipped with RM Pioneer 5-DOF: (a) displacement of the ARS Pioneer P3-DX and positioning to retrieve the work-piece from the P/RML; (b) catching the work-piece; (c) displacement of ARS Pioneer P3-DX to the ARS PeopleBot, based on real-time trajectory-tracking sliding-mode control; (d) releasing the work-piece on ARS PeopleBot; (e) return to the starting position of the ARS Pioneer P3-DX.

Figure 13. Real-time control of ARS Pioneer 3-DX equipped with RM Pioneer 5-DOF for picking up the work-piece using eye to hand VSS.
Some frames of the real-time control of ARS Pioneer 3-DX equipped with RM Pioneer 5-DOF for picking up the work-piece using eye to hand VSS are presented in Figure 14.
The sensitivity bubble used for obstacle detection is defined as:

b_i = k_i · v · T_s,

where: i – number of readings; b_i – component of the sensitivity bubble; k_i – the safety factor; v – robot speed; T_s – the sample time. An obstacle is detected when the distance measured by the sonars up to the obstacle is smaller than the sensitivity bubble.

In Figure 17, it can be observed that there is an obstacle on the desired trajectory (black line) and it is desired to find a point that allows the robot to bypass the obstacle using the bypass trajectory (blue line). Knowing the width of the robot, we checked whether the sonar detects another nearby obstacle in the chosen bypass direction.

Figure 18 shows the desired and the real trajectory of the ARS PeopleBot obtained with TTSMC in closed loop. A small deviation of the trajectory can be observed when
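The sensitivity-bubble test can be sketched as a simple per-sonar threshold check. The bubble form b_i = k_i · v · T_s is assumed here from the symbol list given with the definition; the number of sonars and the safety factors below are hypothetical.

```python
def sensitivity_bubble(k, v, T_s):
    """Per-reading sensitivity bubble components b_i = k_i * v * T_s
    (form assumed from the paper's symbol definitions)."""
    return [k_i * v * T_s for k_i in k]

def obstacle_detected(sonar, bubble):
    """An obstacle is flagged when any measured distance falls
    below the corresponding bubble component."""
    return any(d < b for d, b in zip(sonar, bubble))

k = [2.0] * 8                                     # safety factors, 8 sonars
bubble = sensitivity_bubble(k, v=0.4, T_s=0.1)    # each b_i = 0.08 m
readings = [0.5, 0.6, 0.05, 0.9, 1.2, 0.7, 0.8, 1.0]
detected = obstacle_detected(readings, bubble)    # sonar 3 is inside the bubble
```

Scaling the bubble with the robot speed v and the sample time T_s makes the detection threshold proportional to the distance travelled per control cycle, so a faster robot reacts to obstacles earlier.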
the ARS PeopleBot executes the 90° return. Figure 19 shows the tracking error when moving on the X axis and the tracking error when moving on the Y axis. When the ARS is moving on the X axis, there are two deviations due to connection loss, but they immediately tend to 0. Additionally, Figure 19 illustrates the angular error, whose maximum is ±6°. Figure 20 shows the real-time control of RM Cyton based on inverse kinematics [20], eye in hand and eye to hand VSS for handling the work-piece: on the left, the initial position; in the middle, catching the work-piece from ARS PeopleBot; on the right, releasing the work-piece on P/RML.
The real and desired trajectories are shown in: Figure 21, corresponding to the movement of RM Cyton from the initial position to the ARS PeopleBot position for picking up the work-piece from the upper platform of ARS PeopleBot, using inverse kinematics control and eye in hand VSS for precise positioning; Figure 22, corresponding to the movement of RM Cyton for transporting the work-piece from ARS PeopleBot to P/RML, using inverse kinematics control and eye to hand VSS for precise positioning; and Figure 23, corresponding to the movement of RM Cyton from P/RML to the parking position using inverse kinematics control. Some frames of the real-time control of RM Cyton for picking up the work-piece from ARS PeopleBot, based on eye in hand VSS, and releasing it on P/RML, based on eye to hand VSS, are presented in Figures 24 and 25, respectively.
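Inverse kinematics control, used above for the transport phases of RM Cyton, can be illustrated on the simplest case that admits a closed form: a planar 2-link arm. This is only a sketch of the technique (the RM Cyton 1500 has more degrees of freedom, so its solver is necessarily more involved); link lengths and target are hypothetical.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics (elbow-down solution) for a
    planar 2-link arm reaching the point (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

# Verify the solution with forward kinematics.
q1, q2 = two_link_ik(1.0, 1.0, 1.0, 1.0)
fx = math.cos(q1) + math.cos(q1 + q2)   # end-effector x
fy = math.sin(q1) + math.sin(q1 + q2)   # end-effector y
```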

Discussion
We mention in [5] that the SHPN model and its simulation in autonomous mode (HPN model) are motivated by two reasons. The first is to make the discrete dynamics of P/RML compatible with the continuous dynamics of the ARSs; this compatibility is needed because the P/RML and the ARSs have characteristics and physical constraints that should be considered. The second is the uncertainty introduced by the precise positioning of the RM for picking up or releasing the work-piece, because VSSs had not yet been implemented. Additionally, we stated in [5,9,28] that the introduction of VSSs was a research objective for the near future. Assisting the P/RML by ARSs, RMs and VSSs led to the elimination of an important part of the uncertainty during a complete cycle. One of the objectives of the control and monitoring of this manufacturing technology on the P/RML assisted by ARSs, RMs and VSSs is the minimization of the cycle time. By using TTSMC to control the ARSs and the sensitivity-bubble method to avoid obstacles, the problem of system robustness was partially addressed. The following have to be considered as uncertainties: faulty sensors/actuators, route/storage space blockage and payload variation. In future research, these will also be taken into account.

Conclusions
The assistive technology implemented on the four-workstation P/RML, FESTO MPS 200, allows the recovery, through reprocessing, of products that do not pass the quality test. To successfully implement this eco-technology, ARSs, RMs and VSSs are used for assistance, with the last ones as