Adaptive Visual Servoing Control for Hoisting Positioning Under Disturbance Condition

This paper proposes a visual servo scheme for hoisting positioning under disturbance conditions. In actual hoisting work, disturbances such as equipment and load vibration are inevitable, which brings challenges to the development of a visual servo for hoisting positioning. The main problems are as follows: (1) the correlation between visual error and disturbance is not considered or well resolved; (2) the disturbance has a great influence on the control stability, but it is difficult to model. At present, there is no detailed research on the above problems. In this paper, the visual error is defined by the image error of the feedback signal based on dynamic equations containing disturbances. An adaptive sliding mode control algorithm is employed to decrease the influence of external disturbance, and the coefficient of the sliding surface is established based on the adaptive gain. Since the disturbance terms are difficult to model, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance. On this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness of the visual servo system. We use Lyapunov's method to analyze the stability conditions of the system. Simulation results show that, compared with other state-of-the-art methods, our method has superior performance in convergence, accuracy, and disturbance rejection. Finally, the proposed algorithm is applied to the hoisting platform for experimental research, which proves the effectiveness of the controller.


Introduction
Automatic hoisting positioning technology is a core component of unmanned crane technology. Research into automatic hoisting positioning is conducive to the development of intelligent cranes and technology upgrading, which are key to the future of crane technology.
The key to improving the automation level of the hoisting operation is to realize accurate hoisting positioning (Figure 1). Port hoisting, prefabricated building construction and the installation of large energy and power equipment all need accurate hoisting positioning. The traditional positioning method is to install an absolute value encoder [1] on the driving shaft of the crane. The absolute value encoder converts the rotation of the shaft into the moving distance and transmits it to the control system, which judges whether the equipment has arrived at a designated position. For conventional cranes, this design is feasible and widely used. However, when it is used in high-precision cranes, deviations occur: the driving wheel slips, so the wheel shaft rotates but the crane does not move, resulting in the instability of the positioning system. To solve the problem of low positioning accuracy, the rack and pinion positioning system [2] was introduced into hoisting positioning technology. It improves the positioning accuracy and has been widely used in subsequent nuclear power projects. However, the rack and pinion positioning system (Figure 2) still faces many challenges: the rack needs to be laid out to its full length, and the required machining accuracy of the rack itself is very high, which causes great trouble for engineers. For a crane with long travel distances and precise positioning, researchers tried to use a cable encoder, which consists of a cable box and an absolute value encoder. As the cable is pulled out and retracted, the encoder measures the number of turns of the drum in the cable box and converts it into a measurement signal output to the system.
In actual projects, it was found that the cable is easily damaged during installation and commissioning. The magnetic ruler system [3] has also been developed and applied to hoisting positioning, but it has not been popularized because of the limitations of the working environment. Radiofrequency technology [4] has likewise been used in this field, but its performance is greatly degraded by signal shielding and interference. As a non-contact sensor, the camera can serve as the eye of the hoisting equipment and increase its ability of environmental perception, allowing the crane to be controlled in a closed loop. In recent years, visual servoing has been widely used in trajectory tracking and positioning technology [5,6]. However, in hoisting and positioning operations, the external environment is unstable and easily affected by wind load, equipment vibration and other factors, which bring disturbances to the visual servo system and result in low positioning accuracy, slow response speed and other problems. Therefore, this paper carries out research into visual servo control under disturbance conditions.

Literature Review
As in [5], visual servo control uses data from a vision system to control the movement of robots or mobile devices. In general, there are two kinds of visual servo control modes according to the installation position of the camera: one where the camera is mounted directly on a robot or robotic arm, called "eye-in-hand", and one where the camera is fixed in the workspace, called "eye-to-hand".
Kinematic and dynamic modeling under the disturbance condition is key to establishing the relationship between visual error and disturbance. For a visual servo system, a variety of modeling methods have been applied and proposed, with the interaction with the external environment taken into account. Dong [7] used vision measurement technology and the Extended Kalman Filter (EKF) algorithm to estimate the pose and motion parameters of the target; based on these, an incremental inverse kinematics model was established to obtain the desired position of the end effector. Unlike the above control method, which only considers the system kinematics, Krupínski [8] studied the kinematics and dynamics of the whole system, including nonlinearity, coupling effects, interaction with the external environment and other factors, and applied the scene feedback information of the homography matrix between different images to the system dynamics to improve the control stability of the system. Dynamic modeling under different conditions is the key to solving the stability problem, and the state feedback controller is widely used for stability control [9]. Hu [10] proposed a fault-tolerant control scheme based on a disturbance observer, which can effectively suppress the external disturbance and reduce the impact of actuator failure on the control system; based on the disturbance observer, the uncertainty caused by external disturbance or actuator failure is estimated and compensated. Aiming to solve the problem of control error convergence in a closed-loop control system, an attitude stabilization control system based on integral sliding mode was proposed. Fan [11] proposed a Model Predictive Control (MPC) strategy, including an auxiliary state feedback controller and the robot system.
The kinematic state error of the nominal system was transformed into a chain system to solve the MPC optimization problem of the nominal system and generate the optimal state trajectory of the robot system. Ke [12] used MPC to stabilize the physical constraints of the robot system; the kinematic equation of the nonholonomic chain system was transformed into the form of skew symmetry, and the exponential decay phase was introduced to solve the uncontrollable problem of the system. Obviously, this can be used as a reference to prove that it is possible to transform the kinematic state error into the chain optimization problem.
It is necessary to improve the robustness of the visual servo system in the presence of uncertainty [13][14][15][16]. Under uncertainty in the system dynamics and vision framework, Zergeroglu [17] studied the control of planar mobile devices; in order to compensate for the uncertainty of the system, a robust controller was designed to ensure the ultimate uniform boundedness of the position tracking. Ma [18] studied the singularity and local extremum problems in visual servo control, and proposed a robust design strategy to suppress image noise and external disturbance, so as to ensure the internal stability of the closed-loop system. However, when the external interference changes, the effect of this robust design strategy is not ideal. In that research, the constrained optimization problem is transformed into an H∞ control framework, which improves the anti-interference capability of the system; this means the system not only contains uncertainty, but also requires a strongly conservative performance index to be achieved.
In order to obtain better dynamic stability performance in a camera robot system, Li [19] proposed a new positive definite Lyapunov function based on the asymptotic stability of the visual servo system. Considering the more complex situation of visual systems, including uncertainty in the system dynamics and camera parameters, the asymptotic convergence of image tracking errors was proved. Liu [20] focused on ship motion control and put forward a scheme based on sliding mode control; to study wave disturbance, he introduced a nonlinear disturbance observer and studied its suppression characteristics. The above studies address stability from two different perspectives (visual function and disturbance suppression), but their environmental limitations are prominent. Obviously, the nonlinear problems of the visual servoing system are unavoidable [21][22][23]. Elastic objects deform under full constraint, which brings more nonlinear problems to visual servo control. David [21] proposed an uncalibrated Lyapunov-based algorithm to estimate the visual Jacobian matrix in the deformed state; he applied this method to a clamping operation based on visual servo control, and combined the pose information of the gripper with the visual information to realize the recognition and control of its pose. Xu [22] extracted sensitive features from image information to meet the requirements of position and direction control. The direction and position of the target were controlled using the idea that the target size is sensitive to the image depth; the feature translation caused by the rotation process was used as compensation, and the image depth was estimated from the interaction matrix and the change in the image features. However, this approach depends heavily on camera-sensitive features, and when the delay is too large it is likely to cause data distortion.
Aiming at the control problem of multi-camera visual servo systems, Kornuta [23] proposed a design method based on the embedded concept, and defined each subsystem in the multi-level system structure according to its conversion function; however, the accuracy of the multi-camera system was limited, and the processing load of the system was increased. In recent years, machine learning methods such as neural networks have been used to solve various nonlinear problems [24,25]. Gao [26] proposed an image-based visual servo (IBVS) dynamic positioning strategy. In the speed-tracking loop of the control system, an adaptive controller based on a neural network was designed; under the premise of ensuring the convergence of the speed-tracking error, the influence of the cost function, the dynamic model and the speed reference model on system performance was studied and compared with other schemes. Considering the dynamic and nonlinear problems of the system, Wang [27] proposed a visual servo scheme based on a neural network, in which an adaptive neural network was used to fit the unknown dynamic model. The advantage of this scheme is that it solves the problem of the nonlinear effect of the output. A neural-network-based controller improves control stability, but places higher requirements on the construction of the neural network. In photometric visual servo systems, the image changes because of the appearance and disappearance of parts of the scene [28]. Omar [29] focused on visual servo technology based on photometric moments, and proposed a relatively direct and simple method that does not need the feature extraction, visual tracking and image matching of traditional visual servo technology. The challenge of this method is that when the appearance of the image changes or partially disappears, its stability is difficult to guarantee.
In visual servo control that uses the whole photometric image as a dense feature, the redundancy of visual information makes the convergence domain very small. Nathan [30] proposed an analytic visual servo based on a Gaussian mixture to expand the convergence region; even when the initial position is far away, it can still achieve stable speed control and converge to the desired position. In the visual-servo-based motion control of a wheeled robot, the center of the visual system usually coincides with the center of the robot body, but some settings that deviate from the center of the robot are conducive to its motion; this brings deviation to the visual system and can cause the visual error to fail to converge. To solve this problem, Qiu [31] designed a visual servo motion tracking method that deviates from the center of the robot body, addressing the influence of the translation of the uncalibrated camera on the parameters of the visual system. In the visual servo control of most mobile robots, the trajectory and the desired position image must be given. Li [32] designed a positioning control scheme based on monocular vision, defined the image reference frame using the visual target and planar motion constraints, proposed an attitude estimation algorithm for the robot relative to the image of the desired pose, and constructed the update rate of the unknown feature parameters. However, when the visual target and motion constraint parameters change, the reference frame changes and the control precision is affected.
This paper researches hoisting positioning technology based on visual servo control under disturbance conditions and focuses on solving the following two problems: (1) the correlation between visual error and disturbance is not considered or well resolved; (2) the disturbance has a great influence on the control stability, but it is difficult to model. In this paper, the visual error model is defined by the image error of a feedback signal based on dynamic equations containing disturbance. To address the difficulty of modeling the disturbance term, a nonlinear disturbance observer is employed to obtain the equivalent disturbance, and an adaptive control algorithm with disturbance compensation is proposed.
The organization of this paper is as follows. Section 3 depicts the problem description, including dynamics modeling and IBVS modeling. In Section 4, we describe the visual servo control based on adaptive sliding mode control (SMC), propose the control law with disturbance compensation, and give the stability analysis based on Lyapunov's theory. Simulations are conducted in Section 5, which shows the superiority of the proposed method over other methods. Section 6 presents the experimental results of two projects with different initial positions. Finally, Section 7 contains a summary and outlook.

Dynamics Modeling
The schematic model is shown in Figure 3. The hoisting platform has four degrees of freedom, and the power device is composed of a Gantry driver and a Trolley driver. The driving force in each direction has no coupling effect. The state equation of the hoisting equipment can be expressed as

$$\dot{\eta} = T(\eta)v$$

where η = [x, y, z, ϕ, 0, 0]^T ∈ R^6 represents the state vector of the end-effector, which contains the displacement vector η_a = [x, y, z]^T and the angle vector; v represents the velocity vector. T(·) is the transformation matrix relating the body-fixed velocities to the global pose rates [25].

In an ideal state, the dynamic equation of the hoisting mechanism is expressed as

$$M(\eta)\dot{v} + C(\eta, v)v + G(\eta) = Q$$

where M(·) > 0 is the positive definite inertia matrix of the system, C(·) is the corresponding Coriolis and centripetal matrix, G(·) represents the gravitational vector, and Q represents the control vector.

However, in an actual hoisting positioning project, considering the nonlinear, uncertain and external interference factors in the model, the dynamic equation of the hoisting equipment takes the following form:

$$M(\eta)\dot{v} + C(\eta, v)v + D(\eta, v)v + G(\eta) = Q + \tau$$

where D(·) represents the damping matrix [21] and τ is the external disturbance.
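For illustration, the disturbed dynamic equation can be integrated numerically. The following sketch uses hypothetical diagonal inertia and damping matrices and omits the Coriolis term for brevity; none of these values are the identified parameters of the platform:

```python
import numpy as np

# Hypothetical parameters for a 4-DOF hoisting platform (illustrative only).
M = np.diag([500.0, 500.0, 800.0, 50.0])     # inertia matrix M(.) > 0
D = np.diag([20.0, 20.0, 30.0, 5.0])         # damping matrix D(.)
G = np.array([0.0, 0.0, 800.0 * 9.81, 0.0])  # gravity acts on the hoisting axis

def step(v, Q, tau, dt=0.01):
    """One Euler step of M*dv/dt + D*v + G = Q + tau (Coriolis term omitted)."""
    dv = np.linalg.solve(M, Q + tau - D @ v - G)
    return v + dt * dv

v = np.zeros(4)                        # body-fixed velocity vector
Q = G.copy()                           # control that exactly cancels gravity
tau = np.array([10.0, 0.0, 0.0, 0.0])  # constant disturbance on the gantry axis
for _ in range(20000):                 # 200 s of simulated time
    v = step(v, Q, tau)
# The gantry velocity settles near the steady state D^-1 * tau = 0.5 m/s,
# showing how an unmodeled disturbance biases the motion the controller must reject.
```

This makes concrete why a pure feedforward or proportional scheme drifts under disturbance: the steady-state velocity error is set by the disturbance itself, motivating the observer-based compensation developed later.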

IBVS Model
In this paper, the camera is fixed on the component in the form of "eye-in-hand". C denotes the frame of the camera at its current pose, C* represents the frame of camera at the desired pose, G represents the world coordinate system. We set the camera coordinate frame to coincide with the hoisting component coordinate frame, where the hoisting component is considered to be consistent with the camera's pose and speed (as Figure 4). Visual servo control uses a visual feedback signal as input, and calculates the end-effector velocity according to the Jacobian matrix. Therefore, it is necessary to associate the velocity of the end-effector in the camera coordinates with the robot reference system and establish the transformation relationship. A flow chart of the coordinate transformation is shown in Figure 5.

The camera coordinate system is transformed to the image physical coordinate system through the focal length diagonal matrix. In matrix form, it can be written as

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \frac{1}{Z_c}\begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}$$

where f > 0 is the camera focal length, [X_c, Y_c, Z_c]^T represents the camera coordinates, and [x, y, 1]^T represents the normalized physical coordinates of the image.
The image physical coordinate system is transformed to the pixel coordinate system through the pixel transformation matrix. In matrix form, it can be written as

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha_1 & 0 & u_0 \\ 0 & \alpha_2 & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where α_1, α_2 are the number of pixels per unit distance in the two directions, and u_0, v_0 are the pixel coordinates at the intersection of the camera optical axis and the imaging plane.
According to the perspective projection principle, the feature points of the current and desired images can be expressed in pixel coordinates as

$$u_i = \alpha_1 f \frac{X_{ci}}{Z_{ci}} + u_0, \qquad v_i = \alpha_2 f \frac{Y_{ci}}{Z_{ci}} + v_0$$

where u_i, v_i represent the coordinates of the feature points in the pixel coordinate system.
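The two coordinate transformations above can be composed into a single projection. The intrinsic values below (f, α1, α2, u0, v0) are hypothetical placeholders, not the calibration of the platform camera:

```python
import numpy as np

# Hypothetical camera intrinsics (placeholders, not calibrated values).
f = 0.008                   # focal length [m]
a1, a2 = 1.25e5, 1.25e5     # pixels per unit distance along u and v
u0, v0 = 320.0, 240.0       # principal point [px]

K_f = np.diag([f, f, 1.0])            # focal-length diagonal matrix
K_p = np.array([[a1, 0.0, u0],
                [0.0, a2, v0],
                [0.0, 0.0, 1.0]])     # pixel transformation matrix

def project(Xc, Yc, Zc):
    """Camera coordinates -> pixel coordinates via the two transformations."""
    xy1 = K_f @ np.array([Xc, Yc, Zc]) / Zc   # normalized image coords [x, y, 1]
    uv1 = K_p @ xy1                           # pixel coords [u, v, 1]
    return uv1[0], uv1[1]

u, v = project(0.1, -0.05, 2.0)
# u = a1*f*Xc/Zc + u0 = 370.0, v = a2*f*Yc/Zc + v0 = 215.0
```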
According to (3), we can directly define the camera velocity, which consists of the translational and rotational velocity under the disturbance condition.
In order to solve the problem of motion decoupling control, a vision servo decoupling control method [33] based on the line feature and inner region feature is adopted. The line direction feature is not affected by the camera's translation movement, but only related to the camera's rotation movement. The inner region feature is sensitive to the camera's translation movement along the Z axis, but not affected by the rotation movement around the Z axis. Therefore, we use the inner region feature and the line feature as the visual features of the camera's translation and rotation movement, respectively. The real-time target feature can be written as s = [x 0 , y 0 , a, θ, 0, 0] T , where x 0 , y 0 respectively correspond to the centroid coordinates of the circular feature, a is the circular feature of the inner area, and θ corresponds to the direction angle value of any edge of the target area. The desired feature can be written as s * = [x 0 * , y 0 * , a * , θ * , 0, 0] T .
Attitude control based on the line feature can be denoted as

$$v_\omega = -\lambda_c J_c^{-}(\theta - \theta^*)$$

where J_c^− is the Moore-Penrose pseudo-inverse of the Jacobian matrix J_c and λ_c is the pose controller gain.
The displacement velocity vector of the camera is composed of v_ca and v_ω, the camera velocities obtained with the compensation of the inner region feature vector and the center of mass, as in [33].
Considering that the real-time target feature is s and the desired target feature s * , the feature error e can be calculated as e = s * − s.
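With the feature vectors defined above, the feature error is a direct subtraction; the numeric values below are placeholders for illustration:

```python
import numpy as np

# Feature vectors s = [x0, y0, a, theta, 0, 0]^T; values are placeholders.
s      = np.array([310.0, 255.0, 1450.0, 0.12, 0.0, 0.0])  # real-time target feature
s_star = np.array([320.0, 240.0, 1500.0, 0.00, 0.0, 0.0])  # desired target feature

e = s_star - s   # feature error e = s* - s driving the controller
# e = [10, -15, 50, -0.12, 0, 0]
```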

Adaptive Control Law
In this paper, the visual error model is defined by the image error of the feedback signal based on dynamic equations containing disturbance. Considering the local asymptotic stability of the visual servo system, the visual error signal is computed through Ĵ_c^−, the Moore-Penrose pseudo-inverse of the estimated Jacobian matrix Ĵ_c.
Here, we use an estimation method [34] to obtain the interaction matrix Ĵ_c. Moreover, Ĵ_c v can be written in a linear form whose regressor matrix depends on η̇ but not on the intrinsic and extrinsic parameters of the camera, with θ̂ a vector listing the components of Ĵ_c [34].
In this paper, we adopt image-based visual servoing (IBVS) [35], in which the control loop is closed directly in the image space of the vision sensor. Compared with position-based and hybrid visual servoing, IBVS schemes dispense with the 3-D reconstruction step used to compute the visual features.
The derivative of the visual error can be calculated from (13). The estimation error of the Jacobian matrix is defined as J̃_c = Ĵ_c − J_c. Substituting (17) into (16) yields the error dynamics.

The control purpose of the visual servo system is to use the visual feedback ξ to drive the target to the desired pose, i.e., η → η*, while ensuring the asymptotic stability of the system in the case of unknown and bounded uncertainties. The proportional control law is a widely used method and can be seen in most visual servo literature; however, it is difficult for a system using only a proportional control law to obtain an ideal dynamic response. In this paper, sliding mode control (SMC) is used to compensate for the influence of external disturbance. The velocity control signal is calculated from the visual system based on the inverse of the estimated Jacobian matrix and the change in the visual frame.
We design the sliding surface S as

$$S = \dot{e} + \lambda e$$

where e = s* − s is the feature error state and s* represents the reference state vector. Since s* is constant, ṡ* = 0. λ is the positive definite gain matrix.
The derivative of the sliding surface (19) is Ṡ = ë + λė. We propose a control law combining proportional control with SMC. The proportional term makes the output U_p proportional to the input e; to reduce the deviation, improve the response speed and shorten the adjustment time, the proportional gain matrix k_p needs to be increased. Due to the existence of an external disturbance, the system error cannot converge to zero with proportional control alone. The sliding mode term uses the diagonal positive proportional gain matrix k_s. Because the signum function is discontinuous, which causes a chattering effect, we use the saturation function sat(S_i) instead of the signum function. According to (21) and (22), the overall control law combines both terms.

The sliding surface can be restricted to change within a small range, where σ_i > 0 represents the thickness of the boundary layer, and the difference between the current sliding mode variable and the specified sliding surface is defined accordingly. With sat(S_i) replacing sign(S_i), the expression is:

$$\mathrm{sat}(S_i) = \begin{cases} S_i/\sigma_i, & |S_i| \le \sigma_i \\ \mathrm{sign}(S_i), & |S_i| > \sigma_i \end{cases}$$

In this controller, a suitable value of λ will make the sliding surface reach the stable value quickly and stabilize the system against the disturbance. Nevertheless, the bound of the disturbance is difficult to measure in real engineering projects; if the control law is not adapted accordingly, the control effect cannot reach the desired level. Therefore, we design an adaptive law to adjust the gain value and modify the control law of the main controller: an adaptive sliding mode control algorithm in which λ̂, the estimate of λ, is updated with adaptation gain γ > 0.

We then use Lyapunov's method to prove the stability of the system, analyzing stability by defining a scalar function V(x).
This method avoids solving the equations of motion directly and does not require approximate linearization. If the scalar function V(x) satisfies V(x) > 0 for x ≠ 0 and V̇(x) ≤ 0, the system is said to be stable in the sense of Lyapunov. If, in addition, V̇(x) < 0 for x ≠ 0, the system is asymptotically stable.
Based on the above theory, we establish the Lyapunov function

$$V = \frac{1}{2}S^{T}S + \frac{1}{2\gamma}\tilde{\lambda}^{T}\tilde{\lambda}$$

where λ̃ = λ̂ − λ is the estimation error. Differentiating the Lyapunov function and substituting (20) and (28) into (30) gives the condition for the system to be stable when the sliding mode is asymptotically stable. We then carry out parameter selection and design an adaptive control law for the gain parameters. After verification, the sliding surface decreases to zero in finite time, which ensures that the control law is sustained.
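The boundary-layer saturation and the adaptive gain update can be sketched on a scalar error channel. The gains, the adaptation form λ̂̇ = γ|S|, and the double-integrator error dynamics with a bounded disturbance are illustrative assumptions, not the paper's identified model:

```python
import numpy as np

def sat(S, sigma):
    """Saturation replacing sign(S) inside a boundary layer of thickness sigma."""
    return float(np.clip(S / sigma, -1.0, 1.0))

dt = 0.001
kp, ks, gamma, sigma = 2.0, 1.5, 0.5, 0.05  # illustrative gains
lam_hat = 1.0                               # adaptive estimate of lambda

e, de = 1.0, 0.0   # scalar feature error and its rate
for k in range(20000):                      # 20 s of simulated time
    S = de + lam_hat * e                    # sliding surface S = de/dt + lambda*e
    U = kp * e + ks * sat(S, sigma)         # proportional + sliding-mode terms
    lam_hat += dt * gamma * abs(S)          # assumed adaptive gain update
    dist = 0.3 * np.sin(10.0 * k * dt)      # bounded external disturbance
    dde = -U + dist                         # assumed double-integrator error dynamics
    de += dt * dde
    e += dt * de
# The error settles into a small neighborhood of zero despite the disturbance,
# and the adaptive gain grows only while the sliding variable is nonzero.
```

The boundary layer trades exact convergence for smooth actuation: inside |S| ≤ σ the control is linear in S, which removes chattering at the cost of an ultimate bound proportional to σ.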

Nonlinear Disturbance Observer
In the derivation of the control law above, the disturbance term was not considered; that is, the disturbance was treated as zero, which is obviously not appropriate. Therefore, in this section we add the analysis of the disturbance term and use a nonlinear disturbance observer to obtain the equivalent disturbance.
We define an update variable, where τ̂ is the observed disturbance and q(η, v) is given in terms of m(η, v), the state mapping matrix from v to q. The observer error signal is then defined. First, for a linear disturbance observer, the derivative of τ̂ is obtained as follows. For the disturbance observer, the disturbance is assumed to vary slowly, so its derivative can be taken as zero. From (36), the derivative of the observer error can then be calculated. We obtain q(η, v) from (33); substituting (36), (40) and (41) into (33) and differentiating, the update law is obtained. Then, according to (33), (35) and (42), a nonlinear disturbance observer is proposed as follows.
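As a concrete illustration (a simplified stand-in, not the paper's exact construction), consider the 1-DOF model M·v̇ = u + τ_d with the auxiliary function chosen as p(v) = c·M·v, so the observer gain is the constant c and the estimation error obeys err-dot = −c·err under the same slow-variation assumption used above:

```python
# Sketch of a disturbance observer for the 1-DOF model M * v_dot = u + tau_d.
# With p(v) = c * M * v the observed disturbance is tau_hat = z + c * M * v
# and the estimation error decays exponentially at rate c when tau_d is
# (nearly) constant. All values here are illustrative assumptions.
M, c, dt = 1.0, 10.0, 0.001
tau_d = 2.0                        # true (unknown) constant disturbance
v, z, u = 0.0, 0.0, 0.0            # plant state, observer state, control input

for _ in range(2000):              # 2 s of simulated time
    tau_hat = z + c * M * v        # observed disturbance
    z += dt * (-c * tau_hat - c * u)   # observer update law
    v += dt * (u + tau_d) / M      # plant integration (Euler)

tau_hat = z + c * M * v            # final estimate, converges to tau_d
```

The key property is that the update of z only uses the measurable state v and the known input u, never τ_d itself, yet τ̂ tracks the disturbance.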
As mentioned above, the coefficient of the sliding surface is defined based on the adaptive gain. Since the disturbance terms are difficult to model, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance. On this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness of the visual servoing system. The control law is defined as follows.
Then, the sliding surface can be given as in (45). Differentiating (45), we choose the Lyapunov function and differentiate it; substituting (46), (28) and (39) into (48), we conclude that V is positive definite while its derivative is negative definite, so the system is asymptotically stable. The structure of the control loop is shown in Figure 6.
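Assuming the switching term uses the boundary-layer saturation function and the adaptive gain described above (an assumed per-channel form, not the paper's exact matrix expression), one control step with disturbance compensation can be sketched as:

```python
def sat(S, sigma):
    """Boundary-layer saturation: linear inside |S| <= sigma, +/-1 outside.
    Replaces sign(S) to suppress chattering."""
    return max(-1.0, min(1.0, S / sigma))

def control_step(e, S, lam_hat, tau_hat, k_p, k_s, gamma, sigma, dt):
    """One per-channel step of the adaptive sliding-mode law with
    disturbance compensation (assumed form):
        u = -k_p * e - (k_s + lam_hat) * sat(S) - tau_hat
        lam_hat_dot = gamma * |S|
    Returns the control input and the updated adaptive gain."""
    u = -k_p * e - (k_s + lam_hat) * sat(S, sigma) - tau_hat
    lam_hat += gamma * abs(S) * dt
    return u, lam_hat
```

In the full controller these operations are applied per channel of the 6-DOF error vector with the diagonal gain matrices, and tau_hat is supplied by the nonlinear disturbance observer.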

Simulations
The proposed visual servo positioning method is applied to the simulation platform. Simulations are conducted to investigate the control performance of the proposed controller in the presence of system uncertainties and disturbance.
Consistent with the actual project, we adopt the eye-in-hand configuration in the simulation platform. To simulate the real camera projection, the controller needs the parameters of the camera; we use a visual toolbox with the following parameters: focal length 8 mm, imaging pixel frame 1024 × 1024. The intention of this simulation is to drive the manipulator to the desired pose. The system parameters are set as k_p = diag{0.8, 0.8, 0.68, 0.4, 0, 0}, k_s = diag{0.6, 0.6, 0.6, 0.35, 0, 0}, and the disturbance term [5.5, 5.5, 5.5, 0.1, 0, 0]^T is exerted on the control quantities. The sampling time is set as t = 10 ms. The delay is not considered in the simulation; that is, in the ideal case, the time for visual feature extraction and processing is not counted, and the visual error curve is smooth. We take the sampled data as the updated data for subsequent analysis and calculation. In this way, the simulation of hoisting positioning with disturbance is carried out to verify the superiority of the method proposed in this paper.
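A minimal closed-loop sketch, reduced to one channel with sliding surface S = e and using the first diagonal entries of the gains above (the boundary-layer width sigma, adaptation gain gamma and observer gain c are illustrative assumptions not stated in the paper), shows the compensated controller rejecting the constant 5.5 disturbance:

```python
def sat(S, sigma):
    # Boundary-layer saturation used in place of sign(S).
    return max(-1.0, min(1.0, S / sigma))

k_p, k_s = 0.8, 0.6          # first diagonal entries of the reported gains
sigma, gamma, c = 0.05, 5.0, 20.0   # assumed layer width, adaptation, observer gains
dt, d = 0.01, 5.5            # 10 ms sampling; disturbance entry from the paper

e, lam_hat, z = 1.0, 0.0, 0.0   # visual error, adaptive gain, observer state
for _ in range(1000):           # 10 s of simulated time
    d_hat = z + c * e           # observed disturbance (NDO estimate)
    u = -k_p * e - (k_s + lam_hat) * sat(e, sigma) - d_hat  # compensated law
    z += dt * (-c * d_hat - c * u)   # observer update
    lam_hat += dt * gamma * abs(e)   # adaptive gain law
    e += dt * (u + d)                # simplified error dynamics e_dot = u + d
```

Despite the disturbance being almost four times larger than the nominal switching gain, the error converges near zero because the observer absorbs the constant component and the adaptive gain covers the residual.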
The initial pose of the manipulator is set as [−0.6, 0.4, 0.5, π/12, 0, 0]^T. The initial feature points and the desired feature points in the image plane are shown in Figure 7a; s_1, s_2, s_3, s_4 represent the initial points, and s*_1, s*_2, s*_3, s*_4 represent the desired points. In Figure 7b, the blue dotted lines represent the trajectories of the four points; we can see that, although there is a large displacement between the initial position and the desired position, the manipulator can still be driven to the exact position. During the whole motion process, we take the feature center point as the record point and display its trajectory in 3D space, as shown in Figure 7c. The curvature of the simulated curve in three-dimensional space is small, which means that the proposed method drives the controlled object to the target position along a shorter path. After convergence, the visual error remains stable at zero, which shows that our method retains satisfactory accuracy in the presence of uncertainty and disturbance. Although the position curves have tiny tilt angles, the final position converges and stabilizes at zero. In our method, an adaptive sliding-mode control algorithm is employed to decrease the influence of the external disturbance, and the coefficient of the sliding surface is established based on the adaptive gain; this drives the object to the target position smoothly. Furthermore, we compare the proposed method with other methods, including the proportional control (PC) visual servo, the PSMC visual servo [35] and the Kalman filtering (KF) visual servo [36]. The PSMC visual servo combines proportional control with the sliding-mode control method; image moments of labeled circular markers are selected as visual features to control the three translational degrees of freedom (DOFs) of the manipulator.
The KF visual servo presents an image-based servo control approach with a Kalman neural network filtering scheme, which uses a neural network to estimate and compensate for the errors of Kalman filtering (KF). A computer with an Intel Core i5 2.67 GHz CPU and 4 GB RAM is used in this comparison. The diagonal positive definite gain matrix K_p in the PC visual servo is set the same as in our controller; the diagonal positive definite gain matrices λ, K_p, K_s in [35] and the minimum sum squared error (MSE) in [36] are set to their default values.
The comparison results of the accumulated visual errors are shown in Figure 8. The quantitative results of the specific comparison are listed in Table 1. Although the computational time of our method is a little longer than that of the PC visual servo, the ASMCN visual servo (ours) has a better convergence performance. Our method has fewer servo cycles, due to the introduction of a nonlinear disturbance observer and the control law with disturbance compensation.

Experiments
In order to further verify the effectiveness of the proposed control scheme in actual hoisting work, we conduct experimental tests on the four-degrees-of-freedom hoisting platform in the laboratory. In the experiment, we drive a heavy, long load component to the target position; the weight of the load component is up to 300 kg, and its overall dimensions are 0.4 × 0.4 × 1.2 m (height × width × length). For the hoisting platform, vibration of the load and equipment is inevitable when hoisting heavy, long load components, which is consistent with the disturbance conditions of actual hoisting operations. OMRON encoders (E6B2-CWZ6C) connected to the motors are used to measure the velocities. The images are captured by a Mako camera (AVT, Germany) using a GigE Vision interface with Power-over-Ethernet. Wired network transmission enables real-time communication between the vision system and the control system. The experimental platform is shown in Figure 9. The target is composed of four black dots and a rectangular frame. The Laplacian-of-Gaussian algorithm is used to extract the line features and inner region features in the image. In actual conditions, changes in light intensity affect the image quality and hence the accuracy of image feature extraction; therefore, we use adaptive histogram equalization [37] to preprocess the image. The feature extraction results during the convergence process are shown in Figure 10.
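The preprocessing stage can be sketched in NumPy as follows. Plain global histogram equalization stands in here for the adaptive variant [37] used in the paper, and the 5 × 5 kernel is a common integer approximation of the Laplacian of Gaussian, not necessarily the one used on the platform:

```python
import numpy as np

# Common 5x5 integer approximation of the Laplacian-of-Gaussian kernel
# (zero-sum, so flat regions give zero response).
LOG_5X5 = np.array([[0, 0,  1,  0, 0],
                    [0, 1,  2,  1, 0],
                    [1, 2, -16, 2, 1],
                    [0, 1,  2,  1, 0],
                    [0, 0,  1,  0, 0]], dtype=float)

def equalize_hist(img):
    """Global histogram equalization of a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) * 255.0 / max(img.size - cdf_min, 1))
    return lut.astype(np.uint8)[img]

def log_response(img):
    """'Valid' 2-D correlation with the (symmetric) LoG kernel; blobs and
    edges give strong responses around zero crossings."""
    win = np.lib.stride_tricks.sliding_window_view(img.astype(float),
                                                   LOG_5X5.shape)
    return np.einsum('ijkl,kl->ij', win, LOG_5X5)
```

Because the kernel sums to zero, uniform illumination drops out of the response, which is what makes the filter robust to the slow lighting changes mentioned above.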
We carried out two positioning experiments (A and B) with different initial positions. Firstly, the load component is moved to the target position by the manual system to obtain the desired image, and the camera records the characteristic images of different positions. Then, we return the load component to the initial position, start the automatic control system, and drive the load component to the target position using the proposed scheme. For the IBVS system proposed in this paper, the convergence region is limited due to the nonlinearity and singularity of the mapping from the desired image to the driving frame. The extracted target points are matched to the adjacent target points in the desired image. Figure 11a,b depict the visual errors of experiment A, from which we can see that, although the visual error at the initial position is large, the curves converge quickly and finally reach zero. The visual error curves have a zigzag character, which is caused by the disturbance. However, the velocity curves in Figure 12 are relatively smooth, which indicates that the delay of the mechanical system is low and the encoder sensitivity is high. This also shows that the proposed algorithm can output stable control signals by adjusting the adaptive gain when the visual signal has small vibrations.
As for experiment B, comparing panels (e) and (f) of Figure 13, the visual error in rotation converges earlier than that in the translational directions. The velocity curves in Figure 14 show almost no oscillation and gradually approach zero, which indicates that our method achieves robust disturbance suppression and precise positioning.

Conclusions
In this paper, we propose a visual servo scheme for hoisting positioning under disturbance conditions. Through simulation and experimental verification, we can draw the following conclusions: (1) We define the visual error by the image error of the feedback signal based on dynamic equations containing the disturbance; the relationship between visual error and disturbance is thus established, which lays a foundation for improving control stability. (2) In view of the difficulty of modeling the disturbance terms, a nonlinear disturbance observer is introduced to obtain the equivalent disturbance; on this basis, an adaptive control algorithm with disturbance compensation is proposed to improve the robustness and convergence of the visual servoing scheme. (3) The experimental results show that the proposed algorithm outputs stable control signals by adjusting the adaptive gain when the visual signal has small vibrations, and our method achieves satisfactory positioning accuracy.

Visual servoing control for hoisting positioning is still a problem worthy of further study, including research on robust uncalibrated visual servoing control and velocity estimation from finite visual features.