Autonomous Landing of Quadrotor Unmanned Aerial Vehicles Based on Multi-Level Marker and Linear Active Disturbance Rejection Control

Landing on unmanned surface vehicles (USV) autonomously is a critical task for unmanned aerial vehicles (UAV) due to complex environments. To solve this problem, an autonomous landing method is proposed based on a multi-level marker and linear active disturbance rejection control (LADRC) in this study. A specially designed landing board is placed on the USV, and ArUco codes with different scales are employed. Then, the landing marker is captured and processed by a camera mounted below the UAV body. Using the efficient perspective-n-point method, the position and attitude of the UAV are estimated and further fused by the Kalman filter, which improves the estimation accuracy and stability. On this basis, LADRC is used for UAV landing control, in which an extended state observer with adjustable bandwidth is employed to evaluate disturbance and proportional-derivative control is adopted to eliminate control error. The results of simulations and experiments demonstrate the feasibility and effectiveness of the proposed method, which provides an effective solution for the autonomous recovery of unmanned systems.


Introduction
In recent years, the collaboration of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) has played an important role in water-air cross-domain operations, such as meteorological monitoring, natural exploration, and maritime rescue [1]. An unmanned system consisting of UAVs and USVs can complete river inspection tasks synergistically and without interruption, identifying and reporting various abnormal events. However, the UAV must regularly return to the USV platform for charging due to its limited battery capacity. For tasks like this, the autonomous landing of UAVs is particularly important, since this process is a stage with a high failure rate.
Autonomous landing requires comprehensively considering both the UAV itself and the USV platform, neither of which is static; both suffer complex environmental disturbances [2]. This affects the accuracy and success rate of UAV autonomous landing. Vision-based autonomous landing is a commonly used method, in which one or more specially designed markers are placed on the landing platform. A camera on the UAV then captures the markers and estimates the relative pose for landing. In other words, markers are one of the key factors determining autonomous landing [3].
Many markers with different shapes have been designed to guide UAV landing, such as H-shaped and T-shaped markers, QR codes, AprilTags, ArUco, and so on [4]. Furthermore, the scale and number of markers are also important for precise landing, especially in dynamic scenes. It has become a popular trend to place multi-level markers on landing platforms for UAV recognition at different heights [5]. This supplies more information for pose estimation and autonomous landing.
After planning a landing trajectory based on recognition, UAV flight control is the other key technology for autonomous landing. A quadrotor UAV is an under-actuated system with six degrees of freedom and four control inputs. Quadrotor UAVs are highly nonlinear, strongly coupled, multi-variable systems, which complicates their flight control [6]. Consequently, it is difficult for UAVs to land precisely along the planned trajectory.
To solve the problems described above, a multi-level marker is designed and placed on the USV landing board in this paper; the UAV then recognizes the different markers with a camera and jointly estimates the relative pose. Considering the internal and external disturbances of the quadrotor UAV, LADRC is employed to control its flight and land autonomously on the USV platform, as shown in Figure 1.
The rest of this paper is organized as follows. In Section 2, the literature related to our work is briefly introduced. In Sections 3 and 4, we present an autonomous landing method based on joint multi-level identification and LADRC for UAVs. The simulation and experimental results are shown in Section 5. Finally, conclusions and future work are discussed in Section 6.

Related Works
The autonomous landing of quadrotor UAVs is a hot topic but not a new issue. There are several good reviews of this topic, and the landing process requires one or more sensors, such as an inertial measurement unit (IMU), a global positioning system (GPS), vision, and so on [7,8].
A QR code-based marker was used to calculate the UAV attitude, and a vision transformer particle region-based convolutional neural network was employed to accelerate feature extraction [9]. The corners of an AprilTag-based marker were detected, and the UAV pose estimate was obtained by rigid body transformation [8]. A marker with three levels was designed and used for UAV landing with an image-based visual servoing technique [1]. However, it is difficult for all codes at certain levels to be recognized, especially when landing on mobile platforms such as a USV. An ArUco-based marker was used for scale estimation as well as visual odometry, which also reduced the long-term drift problem [4].
The landing pad was detected for autonomous landing using AprilTags and color segmentation; the experimental results showed that the UAV landed successfully on a ground vehicle moving at less than 3 m/s [10]. A hemispherical infrared marker was proposed for UAV autonomous landing on a moving ground vehicle, and autonomous landing experiments demonstrated its effectiveness from various angles [11]. An approach based on deep reinforcement learning was designed for UAV landing on a moving unmanned ground vehicle (UGV), achieving a high landing success rate and accuracy [12]. This approach did not require any specific communication between the UAV and UGV; however, it presupposed that the target could be identified well.
To solve various interference problems, a landing method based on YOLOv5 and SiamRPN was proposed, in which proportional-integral-derivative (PID) control was employed as the control law. The simulation was carried out in the Gazebo 7.16 simulator, where the method's effectiveness and robustness were validated [5]. A pan-tilt-based visual servoing system was used for UAV navigation and landing, in which information fusion and signal delay issues were resolved [13].
Beyond marker design and identification, UAV landing control also involves the problem of landing trajectory tracking, accompanied by various internal and external disturbances. To complete autonomous landing, many control methods have been applied to UAV flight control, for instance, PID control, model predictive control (MPC), neural network control, sliding mode control (SMC), and active disturbance rejection control.
Proportional-derivative (PD) control was employed for UAV flight control, with the parameters adapted by particle swarm optimization; the experimental results demonstrated its effectiveness and robustness [14]. A composite control method was proposed for UAV landing, in which disturbances were estimated by an observer and SMC was employed in the feedback channel for landing control [15]. An adaptive robust hierarchical algorithm was proposed to address the impact of rough seas, achieving position tracking of expected trajectories and attitude tracking of commanded postures [16].
To eliminate the influence of waves on USVs, a bidirectional long short-term memory (BiLSTM) network was used to predict the USV attitude, and PID control was employed to realize UAV-USV synchronous motion; the experimental results showed effectiveness in complex marine environments [17]. To solve the problem of height fluctuation during UAV vertical take-off and landing, active disturbance rejection control (ADRC) was utilized to improve accuracy and rapidity, and the controller parameters were optimized by a multi-strategy pigeon-inspired optimization algorithm [18].
An online nonlinear MPC was proposed for UAV deep-stall landing in a small space [19]. Model reference adaptive control was employed for parameter adjustment and reduced the UAV landing error [20]. Unlike conventional methods, MPC combined with a synthesized H∞ state feedback was used for autonomous landing; the method performed well in disturbance rejection and transient characteristics [21]. To address quadrotor UAV flight control disturbed by wind, a dynamic model based on wind tunnel tests was established, and the response characteristics under discrete and continuous wind disturbances were obtained [22].
In terms of combining vision and control, PID based on a radial basis function together with YOLOv3 was employed for UAV landing [23]. To land on a moving vehicle, AprilTag-based markers were designed, and an extended Kalman filter and nonlinear model predictive control were used for UAV autonomous tracking and landing [24]. Fuzzy-logic-based PID control was used for UAV landing, in which the PID parameters were adjusted adaptively [25]. To deal with friction, unmodeled dynamics, and other uncertainties, an adaptive super-twisting control was proposed for UAV vertical take-off and landing [26]. For landing on a UGV, a formation controller was designed for the UAVs; the proposed control structure, which simultaneously considers UGVs and UAVs, was validated by experimental results [27]. Autonomous landing of a quadrotor UAV on a UGV was also studied without considering communication between the two; instead, a compound AprilTag fiducial marker was employed and a fractional-order fuzzy PID controller was used [28].
In this study, a multi-level landing marker based on ArUco is designed for quadrotor UAV autonomous landing on USVs. On the basis of recognition, EPnP (efficient perspective-n-point) is employed to estimate the UAV pose, and the fused result is obtained by a Kalman filter. Furthermore, UAV landing control adopts LADRC, which mainly includes a disturbance estimator and a PD control law.

Landing Marker Detection
Landing marker detection mainly consists of two stages: recognizing the ArUco markers and estimating the UAV pose relative to the USV.
A landing marker is essential for the UAV to land on the USV platform autonomously. A multi-level marker based on ArUco is employed for pose estimation of the UAV at different heights. As shown in Figure 2, the largest ArUco is 175 × 175 mm and is placed in the middle of the landing marker; its identifier is 19. Four codes with a size of 35 × 35 mm are distributed in the four corners, with identifiers 1 to 4, respectively. The smallest ArUco is located in the center of the largest ArUco, and its identifier is 43.

ArUco Recognition
An ArUco is a composite square marker composed of a wide black border and an internal binary matrix that determines its identifier [4]. One or more ArUco markers may be contained in an image captured by the camera on the UAV. The identifier of each marker, as well as the pixel coordinates of its four corner points, is obtained through image detection, processing, and recognition. The detailed process includes image segmentation, contour extraction and filtering, encoding acquisition and recognition, and corner refinement.
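As an illustrative sketch (not the authors' implementation), the encoding acquisition and recognition step can be expressed as matching the extracted inner bit matrix against a marker dictionary under the four possible 90-degree rotations. The 4 × 4 bit patterns and identifiers below are hypothetical; real ArUco dictionaries (e.g., in OpenCV) are larger and include error correction.

```python
import numpy as np

# A toy two-entry dictionary of 4x4 inner bit matrices (hypothetical IDs).
DICT = {
    19: np.array([[1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 1]]),
    43: np.array([[0, 1, 1, 0],
                  [1, 0, 0, 1],
                  [1, 1, 1, 0],
                  [0, 1, 0, 1]]),
}

def identify(bits):
    """Match an extracted bit matrix against the dictionary under the
    four 90-degree rotations; return (id, rotation) or None."""
    for marker_id, ref in DICT.items():
        for rot in range(4):
            if np.array_equal(np.rot90(bits, rot), ref):
                return marker_id, rot
    return None
```

Returning the rotation as well as the identifier is what allows the corner points to be put into a canonical order before pose estimation.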
The purpose of marker recognition is to estimate the UAV pose, which is essential for autonomous landing. Owing to its high precision and speed, EPnP (efficient perspective-n-point) is adopted to obtain the pose transformation between the camera and marker coordinate systems; essentially, this means finding the rotation matrix and translation vector. The marker coordinate system is defined as the world coordinate system, and the four corner points of the ArUco in world coordinates are then obtained as follows:

P^w_1 = [-l/2, l/2, 0]^T,  P^w_2 = [l/2, l/2, 0]^T,  P^w_3 = [l/2, -l/2, 0]^T,  P^w_4 = [-l/2, -l/2, 0]^T

where l denotes the marker length, and P^w_i (i = 1, 2, 3, 4) represents the corner point positions in the world coordinate system.
The EPnP scheme represents the camera coordinates of the reference points as a weighted sum of four control points and thus transforms the problem into solving for the camera coordinates of these control points. The control points are denoted as C^w_j = [x^w_j, y^w_j, z^w_j]^T and C^c_j = [x^c_j, y^c_j, z^c_j]^T in the world and camera coordinate systems, respectively. The following linear combination can be obtained:

P^w_i = Σ_{j=1..4} α_ij C^w_j,  P^c_i = Σ_{j=1..4} α_ij C^c_j,  Σ_{j=1..4} α_ij = 1

where P^c_i denotes corner point i in the camera coordinate system, and [α_i1, α_i2, α_i3, α_i4]^T is the weight vector.
When (u_i, v_i) is the projection of point i in the pixel coordinate system, the following equation can be obtained:

z_i [u_i, v_i, 1]^T = A Σ_{j=1..4} α_ij C^c_j

where z_i is the projection depth, and A is the internal parameter matrix of the camera, which can be calibrated in advance through specific experiments. The matrix A contains the pixel focal lengths (f_u, f_v) and the optical center offset (u_c, v_c):

A = [f_u, 0, u_c; 0, f_v, v_c; 0, 0, 1]

and the projection equation can therefore be expanded into two linear equations per point:

Σ_{j=1..4} α_ij (f_u x^c_j + (u_c − u_i) z^c_j) = 0
Σ_{j=1..4} α_ij (f_v y^c_j + (v_c − v_i) z^c_j) = 0

Eight linear equations are thus obtained from the four pairs of corner and pixel points, and the rotation matrix and translation vector are finally recovered.
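The first step of EPnP, expressing a reference point as a weighted sum of four control points, can be sketched as below. The control points and the rigid transform are arbitrary illustrative values; the property being checked is the one EPnP relies on, namely that the same weights reproduce the point in the camera frame.

```python
import numpy as np

def barycentric_weights(p_w, ctrl_w):
    """Solve for alpha such that p_w = sum_j alpha_j C^w_j and sum(alpha) = 1."""
    # Stack the control points as columns and append the sum-to-one row.
    M = np.vstack([np.asarray(ctrl_w, float).T, np.ones(4)])  # 4x4 system
    return np.linalg.solve(M, np.append(p_w, 1.0))

# Illustrative, non-coplanar control points and a world point.
ctrl_w = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
p_w = np.array([0.2, 0.3, 0.1])
alpha = barycentric_weights(p_w, ctrl_w)

# A made-up rigid transform (90-degree yaw plus translation) to the camera frame.
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t = np.array([1., 2., 3.])
ctrl_c = ctrl_w @ R.T + t
p_c = R @ p_w + t

# The weights computed in the world frame reproduce the camera-frame point,
# because they sum to one and the transform is affine.
p_c_from_weights = alpha @ ctrl_c
```

This invariance is what lets EPnP solve only for the camera coordinates of the four control points rather than for every reference point.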

Multi-Level Marker Fusion
Although the UAV pose can be generated from a single ArUco, the recognition accuracy is insufficient for autonomous landing because of image noise and other factors. At some heights, more than one marker may be recognized, and more accurate pose information can be obtained through fusion.
The landing process can be divided into three stages based on the height between the UAV and the USV. In the first stage, only the largest ArUco is available for calculation; in the second stage, more codes are used, though their number is not fixed. In the last stage, only the smallest ArUco is in the camera's field of view and is employed to estimate the pose.
A Kalman filter is a method for optimal state estimation in stochastic dynamic systems, with state and observation equations as follows:

x_k = F x_{k−1} + w_{k−1}
z_k = H x_k + v_k

where x_k is the state vector, z_k is the observation vector, F is the state transition matrix, H is the observation matrix, and w_{k−1} and v_k are the process and measurement noise, respectively, assumed to be zero-mean Gaussian.
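A minimal sketch of the fusion idea is given below, using a one-dimensional random-walk Kalman filter that sequentially absorbs every marker measurement available in a frame. The noise values q and r are illustrative; the paper's filter runs on the full relative pose rather than a scalar.

```python
import numpy as np

class ScalarKalman:
    """1-D Kalman filter fusing per-marker position fixes (illustrative sketch)."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=0.05):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def step(self, measurements):
        # Predict with a random-walk model: x_k = x_{k-1} + w.
        self.p += self.q
        # Sequentially update with every marker seen in this frame;
        # more visible markers means a tighter estimate.
        for z in measurements:
            k = self.p / (self.p + self.r)   # Kalman gain
            self.x += k * (z - self.x)
            self.p *= (1.0 - k)
        return self.x
```

Feeding the filter three consistent marker fixes per frame drives the estimate quickly toward the common value while the variance shrinks.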

UAV Dynamics
A quadrotor UAV is composed of a cross bracket and four motors with propellers. Each motor generates a torque related to its speed, and their combination produces the six degrees of freedom of movement of the quadrotor UAV. As shown in Figure 3, two reference frames are introduced to establish the UAV kinematic model: the earth-fixed frame E and the body-fixed frame B.

According to Newton's second law, the motion equation of the UAV is

m V̇_e = F_e − m g − F_a

where m is the UAV's mass and g = [0, 0, g]^T is the gravitational acceleration vector. F_e is the thrust force acting on the UAV, while F_a represents the air resistance, which is related to V_e and a resistance coefficient. Let P_e and V_e be the position and velocity in frame E, respectively; then V_e = Ṗ_e. The UAV attitude Θ = [φ, θ, ψ]^T includes the roll, pitch, and yaw angles. Assuming the UAV lift force is f and V_e = [v_x, v_y, v_z]^T, the following can be obtained:

v̇_x = (cos φ sin θ cos ψ + sin φ sin ψ) f/m − k_x v_x / m
v̇_y = (cos φ sin θ sin ψ − sin φ cos ψ) f/m − k_y v_y / m
v̇_z = (cos φ cos θ) f/m − g − k_z v_z / m

where k_x, k_y, and k_z are the air resistance coefficients. The relationship between Θ and the rotation speed ω_b in frame B is

Θ̇ = W ω_b

where

W = [1, sin φ tan θ, cos φ tan θ; 0, cos φ, −sin φ; 0, sin φ / cos θ, cos φ / cos θ]

Based on the Euler equation, the rotational dynamics of the UAV are

J ω̇_b = −ω_b × (J ω_b) + G_a + M_p

where J denotes the inertia matrix, G_a is the gyroscopic torque, and M_p represents the torque generated by the propellers about the roll, pitch, and yaw axes. Let ω_b = [p, q, r]^T be the rotation rates in frame B. The gyroscopic torque G_a is

G_a = J_w [q Ω̄, −p Ω̄, 0]^T,  Ω̄ = Ω_1 − Ω_2 + Ω_3 − Ω_4

where J_w is the total moment of inertia of a rotor, and Ω_i (i = 1, 2, 3, 4) denotes the speed of motor i. Substituting the gyroscopic torque into the Euler equation yields

ṗ = [(J_y − J_z) q r + J_w q Ω̄ + M_x] / J_x
q̇ = [(J_z − J_x) p r − J_w p Ω̄ + M_y] / J_y
ṙ = [(J_x − J_y) p q + M_z] / J_z

where J_x, J_y, and J_z are the principal moments of inertia, and M_x, M_y, and M_z are the components of M_p.
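The kinematic relation Θ̇ = W ω_b and the translational model can be checked numerically. The sketch below (with an illustrative drag coefficient k, not a value from the paper) confirms two sanity properties: W reduces to the identity at level attitude, and a level hover with f = mg yields zero acceleration.

```python
import numpy as np

def euler_rate_matrix(phi, theta):
    """W maps body rates [p, q, r] to Euler-angle rates [phi', theta', psi']."""
    sp, cp = np.sin(phi), np.cos(phi)
    tt, ct = np.tan(theta), np.cos(theta)
    return np.array([[1.0, sp * tt, cp * tt],
                     [0.0, cp,     -sp],
                     [0.0, sp / ct, cp / ct]])

def v_dot(f, m, phi, theta, psi, v, k=0.0, g=9.81):
    """Translational acceleration: rotated thrust, gravity, and linear drag."""
    sp, cp = np.sin(phi), np.cos(phi)
    st, ct = np.sin(theta), np.cos(theta)
    spsi, cpsi = np.sin(psi), np.cos(psi)
    thrust_dir = np.array([cp * st * cpsi + sp * spsi,
                           cp * st * spsi - sp * cpsi,
                           cp * ct])
    return thrust_dir * f / m - np.array([0.0, 0.0, g]) - k * np.asarray(v, float) / m
```

Note the singularity of W at θ = ±90°, which is why the small-angle regime is assumed during landing.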

The controller of a quadrotor UAV is divided into a position controller and an attitude controller, with the position and attitude feedback obtained respectively, as shown in Figure 4. The inputs of the position controller are the desired position (x_d, y_d, z_d) and yaw angle ψ_d, and its outputs, the pitch angle θ_d, roll angle φ_d, and control signal u_1, are solved. θ_d and φ_d are also inputs of the attitude controller, which is used to produce u_2, u_3, and u_4.

Linear Active Disturbance Rejection Control
ADRC is a control method that improves on traditional PID control. However, the components of ADRC all use nonlinear functions, and many parameters need to be tuned. LADRC was therefore proposed; it reduces the parameters to an observer bandwidth and a control bandwidth, making controller tuning simple.
LADRC does not rely on a precise mathematical model of the plant. Unknown factors, uncertain states, and external disturbances are treated together as the total disturbance of the system, estimated by a linear extended state observer (LESO), and compensated by the PD control law.
Assuming that the total disturbance f includes internal and external disturbances, the dynamic model of one quadrotor UAV channel can be written as

ÿ = f(y, ẏ, d, t) + b_0 u

where u is the control input, y is the output, d is the external disturbance, and b_0 is the control gain. Choosing the states x_1 = y and x_2 = ẏ, and the extended state x_3 = f, the extended state space equation of the UAV is described as

ẋ = A x + B u + E ḟ,  y = C x

where the state matrix, input matrices, and output matrix are

A = [0, 1, 0; 0, 0, 1; 0, 0, 0],  B = [0; b_0; 0],  E = [0; 0; 1],  C = [1, 0, 0]
The LESO is the key to achieving active disturbance rejection. When designing the LESO, a feedback gain matrix must be selected to ensure that the observation error converges to zero; in addition, the dynamic response of the observer must be considered to ensure the accuracy and reliability of the observation results. The LESO for the extended system above is

ż = A z + B u + L (y − ŷ),  ŷ = C z
where z is the observed value of x and ŷ denotes the estimated output; z_1, z_2, and z_3 correspond to x_1, x_2, and x_3, and L is the observer error feedback gain matrix. In [29], the poles of the characteristic equation are all placed at the same location, giving L = [3ω_0, 3ω_0^2, ω_0^3]^T, where ω_0 represents the observer bandwidth of the LESO. In LADRC, the control law employs PD control, that is,

u_0 = k_p (r − z_1) − k_d z_2,  u = (u_0 − z_3) / b_0

where u_0 is the control quantity, r is the desired signal (including the roll, pitch, and yaw angles), and k_p and k_d are the proportional and derivative gains of the PD control, respectively; subtracting z_3 compensates for the total disturbance.
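The full loop can be sketched on a double integrator with an unknown constant disturbance. The plant, gains, and Euler discretization below are illustrative rather than the paper's UAV model, but the structure, a LESO with L = [3ω_0, 3ω_0², ω_0³] followed by the PD law and disturbance cancellation, is the one described above.

```python
def simulate_ladrc(r=1.0, d=2.0, b0=1.0, w0=20.0, kp=9.0, kd=6.0,
                   dt=1e-3, steps=5000):
    """LADRC on the toy plant y'' = u + d, with d an unknown constant
    disturbance.  Returns the final output y and the disturbance estimate z3."""
    l1, l2, l3 = 3 * w0, 3 * w0**2, w0**3   # bandwidth-parameterised gains
    y = yd = 0.0                            # plant state (output, velocity)
    z1 = z2 = z3 = 0.0                      # LESO state
    for _ in range(steps):
        u0 = kp * (r - z1) - kd * z2        # PD law on the observer states
        u = (u0 - z3) / b0                  # cancel the estimated disturbance
        # LESO update (forward-Euler discretisation).
        e = y - z1
        z1 += dt * (z2 + l1 * e)
        z2 += dt * (z3 + b0 * u + l2 * e)
        z3 += dt * (l3 * e)
        # Plant update.
        yd += dt * (u + d)
        y += dt * yd
    return y, z3
```

At steady state the observer drives z_3 to the true disturbance d, so the compensated loop behaves like the disturbance-free double integrator under PD control.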

Platform
As shown in Figure 5, the basic frame of the P450-Nano UAV is made of composite materials, keeping the overall weight to 1950 g including the battery (4000 mAh). The size is 335 × 335 × 230 mm, the wheelbase is 410 mm, and the maximum payload is 1600 g. Four brushless motors (model T-motor 2216) are used. An onboard camera with a 1920 × 1080 maximum resolution and a 3.6 mm focal length is mounted below the UAV body to acquire the landing marker images. An onboard computer with a Cortex-A57 CPU and a 128-core NVIDIA Maxwell GPU is employed to control flights and process images. Our USV is made of recyclable ABS engineering plastics; its maximum speed is 2 m/s, and its total weight is 8 kg including two batteries. The software is developed on ROS Melodic (Robot Operating System) based on Ubuntu 18.04. The proposed multi-level ArUco markers are placed on a landing board whose length and width are both 0.5 m. In LADRC, the observer bandwidth ω_0 is set to 50, the proportional gain k_p is 0.45, and the differential gain k_d is 0.17.

Simulation
A simulation model of UAV autonomous landing is built in Gazebo, which provides high-fidelity physical simulation and a user-friendly interaction mode. The simulation verifies the feasibility of vision-based landing, including the searching, adjusting, and landing stages shown in Figure 6.
The UAV takes off from the starting point, climbs to a height of 1 m at a fixed speed, and then activates the searching command. When the position information recognized by the UAV meets the landing condition threshold, the adjusting and landing tasks are executed. If the UAV cannot recognize the landing marker, it rises slowly to search again.
When the landing marker is recognized successfully, the current position of the UAV is used as a starting point to expand the cruising range clockwise along a square trajectory. In this process, the UAV attitude is adjusted for the final landing. Figure 6 illustrates the effectiveness of the proposed landing markers.
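The searching, adjusting, and landing logic above can be sketched as a small state machine; the thresholds below are illustrative placeholders, not the values used in the paper.

```python
from enum import Enum, auto

class Stage(Enum):
    SEARCHING = auto()   # climb slowly and look for any marker
    ADJUSTING = auto()   # centre over the marker and descend
    LANDING = auto()     # final descent onto the pad

def next_stage(marker_visible, horizontal_err, height,
               err_thresh=0.1, land_height=0.3):
    """One decision step of the landing logic (illustrative thresholds)."""
    if not marker_visible:
        return Stage.SEARCHING
    if height <= land_height and horizontal_err <= err_thresh:
        return Stage.LANDING
    return Stage.ADJUSTING
```

Losing sight of the marker at any point sends the vehicle back to the searching stage, matching the recovery behaviour described above.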


Ground Experiment
An experiment is carried out by placing the proposed multi-level ArUco markers on the ground. The UAV starts landing autonomously from a flight height of 10 m. As shown in Figure 7, the largest ArUco marker (ID = 19) is first recognized at 7.7 m, while the smallest ArUco marker (ID = 43) is recognized at a height of about 0.8 m. Throughout this process, at least one medium-sized marker can be acquired, although the quantity varies, which shows that the multi-level marker is effective and necessary.

Furthermore, for comparison, the multi-level recognition is disabled so that only the single marker in the middle of the landing board is identified. The ground experiments are conducted 30 times, and the landing errors of the two methods are shown in Figure 8. The root mean square errors are 0.082 m and 0.035 m, respectively. Clearly, the multi-level marker method achieves better landing accuracy.

Figure 1. The structure of the proposed method. The method includes marker detection and landing flight control, the purpose of which is to enable the UAV to land autonomously on the USV platform.


Figure 2. The distribution of the proposed multi-level landing marker. The first level is the ArUco with ID 19; the second level contains four ArUcos with IDs 1, 2, 3, and 4, respectively; the third level is the ArUco with ID 43.

Figure 3. The quadrotor UAV reference frames. The red arrows represent coordinate frames, and the blue arrows denote force directions. {E} represents the earth-fixed reference frame, while {B} denotes the body-fixed reference frame. F1 to F4 are the thrusts generated by the four motors, respectively, and G represents the gravity of the UAV. Let Pe and Ve be the position and velocity in frame {E}, respectively; then Ṗe = Ve. The UAV attitude is Θ = [φ, θ, ψ]^T.
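The attitude Θ = [φ, θ, ψ]^T relates the two frames through a rotation matrix that maps body-frame vectors (e.g. the total thrust along the body z-axis) into the earth frame. A minimal sketch using the common ZYX (yaw-pitch-roll) Euler convention is given below; the paper does not state its rotation order, so this convention is an assumption.

```python
import math

def rotation_body_to_earth(phi, theta, psi):
    """R_EB for ZYX Euler angles (roll phi, pitch theta, yaw psi).
    Maps a body-frame vector into the earth-fixed frame {E}."""
    cphi, sphi = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cpsi, spsi = math.cos(psi), math.sin(psi)
    return [
        [cpsi * cth, cpsi * sth * sphi - spsi * cphi, cpsi * sth * cphi + spsi * sphi],
        [spsi * cth, spsi * sth * sphi + cpsi * cphi, spsi * sth * cphi - cpsi * sphi],
        [-sth,       cth * sphi,                      cth * cphi],
    ]

# Sanity check: zero attitude gives the identity, i.e. {B} aligned with {E}.
R0 = rotation_body_to_earth(0.0, 0.0, 0.0)
print(R0[0][0], R0[1][1], R0[2][2])  # 1.0 1.0 1.0
```

The bottom-right entry, cos θ cos φ, is the fraction of total thrust that acts vertically, which is why large attitude angles reduce the effective lift during landing.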

Figure 4. Control schematic of UAV landing. The controller includes a position controller and an attitude controller. The inputs of the position controller are the desired position (xd, yd, zd) and the yawing angle ψd, and the outputs are the pitching angle θd, the rolling angle φd, and the control signal u1. θd and φd are also the inputs of the attitude controller, which produces u2, u3, and u4.
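The cascade in Figure 4 can be sketched as two nested loops. The paper's LADRC adds an extended state observer whose disturbance estimate is subtracted from the commanded accelerations; the reduced PD-only sketch below shows just the data flow from (xd, yd, zd, ψd) to (θd, φd, u1) and on to (u2, u3, u4). Gains, mass, and the small-angle thrust mapping are illustrative assumptions, not the paper's tuned values.

```python
import math

M, G = 1.5, 9.81  # illustrative UAV mass (kg) and gravity (m/s^2)

def position_loop(pos, vel, pos_d, psi, kp=1.2, kd=0.8):
    """Outer PD loop: position error -> (theta_d, phi_d, u1).
    Uses a small-angle approximation; LADRC would add the ESO's
    disturbance estimate to the commanded accelerations."""
    ax, ay, az = (kp * (pd - p) - kd * v for p, v, pd in zip(pos, vel, pos_d))
    u1 = M * (G + az)  # total thrust command
    # Rotate horizontal acceleration commands into the heading psi.
    theta_d = (ax * math.cos(psi) + ay * math.sin(psi)) * M / u1
    phi_d = (ax * math.sin(psi) - ay * math.cos(psi)) * M / u1
    return theta_d, phi_d, u1

def attitude_loop(att, rate, att_d, kp=6.0, kd=1.5):
    """Inner PD loop: (phi, theta, psi) errors -> torque commands (u2, u3, u4)."""
    return tuple(kp * (ad - a) - kd * w for a, w, ad in zip(att, rate, att_d))

# Hover check: at the target with zero velocity, only gravity is compensated.
theta_d, phi_d, u1 = position_loop((0, 0, 1), (0, 0, 0), (0, 0, 1), 0.0)
print(theta_d, phi_d)  # 0.0 0.0 (u1 ≈ M*G)
```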

Figure 5. The picture of our UAV and USV with the main components. The hardware includes a body frame, an onboard computer, an onboard camera, and four brushless motors, while the software is based on Ubuntu 18.04 and ROS Melodic.

Figure 6. The simulation result. Three main stages are shown in the simulation: searching, adjusting, and landing. Finally, the UAV lands on the ArUco markers successfully.

Figure 9. The surface experiment results. (a-h) show the process of UAV autonomous landing, in which different markers are recognized for adjusting the UAV pose.

Figure 10. The landing curves. Two control methods are used for comparison: one is PID control and the other is LADRC, which is employed in our work.

X = [px(t), py(t), pz(t), vx(t), vy(t), vz(t)]^T denotes the UAV state, At,t−1 is the state transition matrix, Ht is the observation matrix, and ωt−1 and vt are the process noise and the observation noise, respectively, both of which are zero-mean white noise. Zt = [px(t), py(t), pz(t)]^T represents the observation vector. Assuming the number of ArUco markers identified is n, and the estimated locations are [pxi(t), pyi(t), pzi(t)], i = 1, 2, ..., n, then
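The fusion equation that follows "then" is not shown in this excerpt. A plausible sketch, under the assumption that the n per-marker estimates are averaged into one observation Zt and then smoothed by a constant-velocity Kalman filter, is given below for a single axis; the time step dt and noise levels q, r are illustrative, not the paper's values.

```python
def fuse_markers(estimates):
    """Average the n per-marker position estimates along one axis (assumption)."""
    return sum(estimates) / len(estimates)

def kf_step(x, v, P, z, dt=0.05, q=0.01, r=0.04):
    """One predict/update cycle for the state [position x, velocity v].
    P is the 2x2 covariance as a nested list; H = [1, 0] observes position."""
    # Predict with the constant-velocity model A = [[1, dt], [0, 1]].
    x, v = x + dt * v, v
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with the fused position observation z.
    S = P[0][0] + r              # innovation covariance
    K0, K1 = P[0][0] / S, P[1][0] / S
    y = z - x                    # innovation
    x, v = x + K0 * y, v + K1 * y
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, v, P

# Three markers seen this frame; the filter pulls the state toward their mean.
z = fuse_markers([1.02, 0.98, 1.01])
x, v, P = kf_step(0.9, 0.0, [[1.0, 0.0], [0.0, 1.0]], z)
```

Running one filter per axis on the averaged observations is one way to obtain the smoother, more stable pose estimates described in the abstract.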