Article

Autonomous Landing of Quadrotor Unmanned Aerial Vehicles Based on Multi-Level Marker and Linear Active Disturbance Reject Control

1 School of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212003, China
2 Fujian Key Laboratory of Green Intelligent Drive and Transmission for Mobile Machinery, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(5), 1645; https://doi.org/10.3390/s24051645
Submission received: 4 January 2024 / Revised: 20 February 2024 / Accepted: 25 February 2024 / Published: 2 March 2024
(This article belongs to the Section Intelligent Sensors)

Abstract

Autonomously landing on an unmanned surface vehicle (USV) is a critical task for unmanned aerial vehicles (UAVs) operating in complex environments. To solve this problem, an autonomous landing method based on a multi-level marker and linear active disturbance rejection control (LADRC) is proposed in this study. A specially designed landing board carrying ArUco codes of different scales is placed on the USV, and the landing marker is captured and processed by a camera mounted below the UAV body. Using the efficient perspective-n-point (EPnP) method, the position and attitude of the UAV are estimated and then fused by a Kalman filter, which improves estimation accuracy and stability. On this basis, LADRC is used for UAV landing control: an extended state observer with adjustable bandwidth estimates the disturbance, and proportional-derivative control eliminates the control error. Simulation and experimental results demonstrate the feasibility and effectiveness of the proposed method, which provides an effective solution for the autonomous recovery of unmanned systems.

1. Introduction

In recent years, the collaboration of unmanned aerial vehicles (UAVs) and unmanned surface vehicles (USVs) has played an important role in water-air cross-domain operations such as meteorological monitoring, natural exploration, and maritime rescue [1]. An unmanned system consisting of UAVs and USVs can complete river inspection tasks synergistically and without interruption, identifying and reporting various abnormal events. However, the UAV must regularly return to the USV platform to recharge because of its limited battery capacity. For tasks like these, the autonomous landing of the UAV is particularly important, since this stage has a high failure rate.
Autonomous landing requires comprehensively considering the UAV itself and the USV platform, both of which are not static and suffer complex environmental disturbances [2]. This affects the accuracy and success rate of UAV autonomous landing. Vision-based autonomous landing is a commonly used method, in which one or more specially designed markers are placed on the landing platform. Then, a camera on the UAV captures markers and estimates the relative pose for landing. That is to say, markers are one of the key factors determining autonomous landing [3].
Many markers with different shapes are designed to guide UAV landing, such as H-shaped, T-shaped, QR codes, AprilTags, ArUco, and so on [4]. Furthermore, the scale and number of markers are also important for precise landing, especially in dynamic scenes. It has become a popular trend that multi-level markers are placed on landing platforms for UAV recognition at different heights [5]. This operation supplies more information for pose estimation and autonomous landing.
After planning a landing trajectory based on recognition, UAV flight control is another key technology for autonomous landing. A quadrotor UAV is an under-actuated system with six degrees of freedom and four control inputs. Quadrotor UAVs are highly nonlinear, strongly coupled, multi-variable systems, which complicates their flight control [6]. As a result, it is difficult for a UAV to land precisely along the planned trajectory.
In order to solve the problems described above, a multi-level marker is designed and placed on the USV landing board in this paper, and then the UAV recognizes different markers by a camera and jointly estimates the relative pose. Considering internal and external disturbances of quadrotor UAV, LADRC is employed to control its flight and land autonomously on the USV platform, as shown in Figure 1.
The rest of this paper is organized as follows. In Section 2, the literature related to our work is introduced briefly. In Section 3 and Section 4, we present an autonomous landing method based on joint multi-level identification and LADRC for UAVs. The simulation and experimental results are shown in Section 5. Finally, the conclusions and future work are discussed in Section 6.

2. Related Works

The autonomous landing of quadrotor UAVs is a hot topic but not a new issue. Several good reviews cover this area, and the landing process typically relies on one or more sensors, such as an inertial measurement unit (IMU), the global positioning system (GPS), vision, and so on [7,8].
A QR code-based marker was used to calculate the UAV attitude, and a vision transformer particle region-based convolutional neural network was employed to accelerate feature extraction [9]. The corners of an AprilTag-based marker were detected, and the UAV pose estimate was obtained by rigid-body transformation [8]. A marker with three levels was designed and used for UAV landing with an image-based visual servoing technique [1]. However, it is difficult for all codes at certain levels to be recognized, especially when landing on mobile platforms such as a USV. An ArUco-based marker was used for scale estimation in visual odometry, which also reduced the long-term drift problem [4].
The landing pad was detected for autonomous landing by AprilTags and color segmentation; the experimental results showed successful UAV landings on a ground vehicle moving at less than 3 m/s [10]. A hemispherical infrared marker was proposed for UAV autonomous landing on a moving ground vehicle, and autonomous landing experiments from various angles demonstrated its effectiveness [11]. An approach based on deep reinforcement learning was designed for UAV landing on a moving unmanned ground vehicle (UGV), achieving a high landing success rate and accuracy [12]. This approach required no specific communication between the UAV and the UGV; however, its premise was that the target could be identified well.
To solve various interference problems, a landing method based on YOLOv5 and SiamRPN was proposed, with proportional-integral-derivative (PID) control as the control law; its effectiveness and robustness were validated in simulations on the Gazebo 7.16 simulator [5]. A pan-tilt-based visual servoing system was used for UAV navigation and landing, in which information fusion and signal delay issues were resolved [13].
Beyond marker design and identification, UAV landing control also involves tracking the landing trajectory under various internal and external disturbances. Many control methods have been applied to UAV flight control for autonomous landing, for instance, PID control, model predictive control (MPC), neural network control, sliding mode control (SMC), and active disturbance rejection control.
Proportional-derivative (PD) control was employed for UAV flight control, with the parameters adapted by particle swarm optimization [14]; the experimental results demonstrated its effectiveness and robustness. A composite control method was proposed for UAV landing, in which disturbances were estimated by an observer and SMC was employed in the feedback channel for landing control [15]. An adaptive robust hierarchical algorithm was proposed to address the impact of rough seas, achieving position tracking of expected trajectories and attitude tracking of commanded postures [16].
Aiming at eliminating the wave influence on USVs, a bidirectional long short-term memory (BiLSTM) network was used to predict the USV attitude, and PID control was employed to realize UAV-USV synchronous motion; the experimental results showed its effectiveness in complex marine environments [17]. To solve the problem of height fluctuation during UAV vertical take-off and landing, active disturbance rejection control (ADRC) was utilized to improve accuracy and rapidity, and the controller parameters were optimized by a multi-strategy pigeon-inspired optimization algorithm [18].
An online nonlinear MPC was proposed for UAV deep-stall landing in a small space [19]. A model reference adaptive control was employed for parameter adjustment and reduced the UAV landing error [20]. MPC was also used for autonomous landing; different from conventional methods, a synthesized state feedback was formed by H∞ design, and the method showed good disturbance rejection and transient characteristics [21]. In view of quadrotor UAV flight control being disturbed by wind, a dynamic model based on wind tunnel tests was established, and the response characteristics under discrete and continuous wind disturbances were obtained [22].
In terms of combining vision and control, PID based on a radial basis function and YOLOv3 were employed for UAV landing [23]. To land on a moving vehicle, AprilTag-based markers were designed, and an extended Kalman filter and nonlinear model predictive control were used for UAV autonomous tracking and landing [24]. PID control based on fuzzy logic was used for UAV landing, with the PID parameters adjusted adaptively [25]. To deal with friction, unmodeled dynamics, and other uncertainties, an adaptive super-twisting control was proposed for UAV vertical take-off and landing [26]. For landing a UAV on a UGV, a formation controller was designed [27]; the proposed control structure, which considers the UGV and UAV simultaneously, was validated by experimental results. The autonomous landing of a quadrotor UAV on a UGV was also studied without considering communication between the two; instead, a compound AprilTag fiducial marker was employed and a fractional-order fuzzy PID controller was used [28].
In this study, a landing marker based on ArUco is designed with multiple levels for quadrotor UAV autonomous landing on USVs. On the basis of recognition, EPnP (efficient perspective-n-point) is employed to estimate the UAV pose, and the fused result is obtained by a Kalman filter. Furthermore, UAV landing control adopts LADRC, which mainly includes a disturbance estimator and a PD control law.

3. Landing Marker Detection

Landing marker detection mainly consists of two stages: one is recognizing the ArUco markers, and the other is estimating the UAV pose relative to the USV.
A landing marker is essential for the UAV to land on the USV platform autonomously. A multi-level marker based on ArUco is employed for pose estimation of the UAV at different heights. As shown in Figure 2, the largest ArUco measures 175 × 175 mm and is placed in the middle of the landing marker; its identifier is 19. Four codes with a size of 35 × 35 mm are distributed in the four corners, with identifiers 1 to 4, respectively. The smallest ArUco is located at the center of the largest one, and its identifier is 43.

3.1. ArUco Recognition

The ArUco is a composite square marker composed of a wide black border and an internal binary matrix that determines its identifier [4]. One or more ArUco markers may be contained in an image captured by a camera on the UAV. The identifier of each marker is obtained as well as the pixel coordinates of four corner points through image detection, processing, and recognition. The detailed process includes image segmentation, contour extraction and filtration, encoding acquisition and recognition, and corner adjustment.
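As a toy illustration of the encoding-recognition step, the sketch below matches an extracted bit matrix against a marker dictionary, trying the four possible rotations; real detectors (e.g., OpenCV's ArUco module) additionally perform thresholding, contour filtering, perspective removal, and error correction, and the one-entry dictionary here is hypothetical.

```python
import numpy as np

def decode_marker(bits, dictionary):
    """Look up a marker's inner bit matrix in a dictionary, trying all four
    rotations, since a marker may be observed at any orientation.

    Returns (identifier, rotation) or (None, None) if no match is found."""
    for rot in range(4):
        key = tuple(np.rot90(bits, rot).flatten())
        if key in dictionary:
            return dictionary[key], rot
    return None, None

# Hypothetical one-entry dictionary: this bit pattern stands for ID 19.
pattern = np.array([[1, 1, 0],
                    [0, 0, 0],
                    [0, 1, 0]])
dictionary = {tuple(pattern.flatten()): 19}
```

Feeding a rotated copy of the pattern back through `decode_marker` recovers ID 19 together with the rotation needed to restore the canonical corner order.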
The purpose of marker recognition is to estimate the UAV pose, which is essential for autonomous landing. Owing to its high precision and speed, the EPnP (efficient perspective-n-point) algorithm is adopted to obtain the pose transformation between the camera and marker coordinate systems; essentially, this amounts to finding the rotation matrix and translation vector. The marker coordinate system is defined as the world coordinate system, and the four corner points of the ArUco marker in world coordinates are as follows:
$$P_1^w = \left[-\frac{l}{2}, \frac{l}{2}, 0\right]^T,\quad P_2^w = \left[\frac{l}{2}, \frac{l}{2}, 0\right]^T,\quad P_3^w = \left[\frac{l}{2}, -\frac{l}{2}, 0\right]^T,\quad P_4^w = \left[-\frac{l}{2}, -\frac{l}{2}, 0\right]^T$$
where l denotes the marker length, and P i w (i = 1, 2, 3, 4) represents the corner point positions in the world coordinate system.
The EPnP scheme represents the camera coordinates of the reference points as a weighted sum of four control points, transforming the problem into solving for the camera coordinates of these control points. The control points are denoted as $C_j^w = [x_j^w, y_j^w, z_j^w]^T$ and $C_j^c = [x_j^c, y_j^c, z_j^c]^T$ in the world and camera coordinate systems, respectively. The following linear combination can be obtained:
$$P_i^w = \sum_{j=1}^{4} \alpha_{ij} C_j^w,\qquad P_i^c = \sum_{j=1}^{4} \alpha_{ij} C_j^c,\qquad \sum_{j=1}^{4} \alpha_{ij} = 1$$
where P i c denotes corner point i in the camera coordinate system, and [ α i 1 , α i 2 , α i 3 , α i 4 ] T is the weight vector.
When (ui, vi) is the projection of point i in the pixel coordinate system, the following equation can be obtained:
$$z_i \left[u_i, v_i, 1\right]^T = A P_i^c = A \sum_{j=1}^{4} \alpha_{ij} \left[x_j^c, y_j^c, z_j^c\right]^T$$
where zi is the projection depth, and A is the internal parameter matrix of the camera, which can be calculated from specific experiments in advance.
Furthermore, the matrix A contains the pixel focal lengths (fu, fv) and the optical center offset (uc, vc), so the projection equation above is expanded as follows:
$$\sum_{j=1}^{4} \left( \alpha_{ij} f_u x_j^c + \alpha_{ij} (u_c - u_i) z_j^c \right) = 0,\qquad \sum_{j=1}^{4} \left( \alpha_{ij} f_v y_j^c + \alpha_{ij} (v_c - v_i) z_j^c \right) = 0$$
Eight linear equations are obtained from the four pairs of corner points and pixel points, from which the rotation matrix and translation vector are finally solved.
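The same pose-recovery idea can be illustrated in the simpler planar case: because all four corners lie in the marker plane (z = 0), the rotation matrix and translation vector can be recovered by estimating and decomposing a homography. This is a minimal sketch offered as an illustrative alternative to EPnP proper; the camera matrix values in the usage below are hypothetical.

```python
import numpy as np

def pose_from_square(world_xy, img_uv, K):
    """Recover rotation R and translation t of a planar marker (z = 0)
    from its four projected corners by homography decomposition."""
    # Direct linear transform: each point pair contributes two rows of A.
    A = []
    for (x, y), (u, v) in zip(world_xy, img_uv):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)            # homography, up to scale
    M = np.linalg.inv(K) @ H            # ~ [r1 r2 t] up to scale
    s = 1.0 / np.linalg.norm(M[:, 0])   # fix scale so that ||r1|| = 1
    if s * M[2, 2] < 0:                 # enforce positive depth (t_z > 0)
        s = -s
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)         # project onto the nearest rotation
    return U @ Vt, t
```

Projecting the four corner points of a square through a known camera pose and feeding the pixels back into `pose_from_square` reproduces that pose, which is a convenient self-check when implementing the estimator.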

3.2. Multi-Level Marker Fusion

Although the UAV pose can be estimated from a single ArUco, the recognition accuracy is insufficient for autonomous landing because of image noise and other factors. At some heights, more than one marker may be recognized, and more accurate pose information can then be obtained by fusion.
The landing process can be divided into three stages based on the height between the UAV and USV. In the first stage, there is only the largest ArUco for calculation, and more codes are used in the second stage, of which the number is not fixed. In the last stage, only the smallest ArUco is in the camera’s field of view and is employed to estimate the pose.
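A minimal sketch of this stage logic follows, using the marker IDs from Figure 2; the numeric switching heights are illustrative assumptions, since the text does not specify them.

```python
def select_marker_ids(height_m, detected_ids):
    """Choose which ArUco IDs to fuse at the current height.

    IDs follow Figure 2 (19: largest, 1-4: corner markers, 43: smallest);
    the height thresholds below are illustrative assumptions."""
    if height_m > 4.0:        # stage 1: only the largest marker is usable
        wanted = {19}
    elif height_m > 1.0:      # stage 2: corner markers, count not fixed
        wanted = {1, 2, 3, 4}
    else:                     # stage 3: only the smallest marker stays in view
        wanted = {43}
    return sorted(wanted & set(detected_ids))
```

The intersection with the detected set captures the point made above: in the second stage, the number of usable markers varies from frame to frame.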
A Kalman filter is a method for optimal state estimation of stochastic dynamic systems, whose state and observation equations are as follows:
$$X_t = A_{t,t-1} X_{t-1} + \omega_{t-1},\qquad Z_t = H_t X_t + v_t$$
where $X_t = [p_x(t), p_y(t), p_z(t), v_x(t), v_y(t), v_z(t)]^T$ denotes the UAV state, $A_{t,t-1}$ is the state transition matrix, $H_t$ is the observation matrix, and $\omega_{t-1}$ and $v_t$ are the process noise and observation noise, respectively, both zero-mean white noise. $Z_t = [p_x(t), p_y(t), p_z(t)]^T$ represents the observation vector.
Assuming the number of recognized ArUco markers is n and the estimated positions are [pxi(t), pyi(t), pzi(t)], i = 1, 2, …, n, then
$$p_x(t) = \frac{1}{n}\sum_{i=1}^{n} p_{xi}(t),\qquad p_y(t) = \frac{1}{n}\sum_{i=1}^{n} p_{yi}(t),\qquad p_z(t) = \frac{1}{n}\sum_{i=1}^{n} p_{zi}(t)$$
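The fusion step can be sketched as follows: the per-marker position estimates are averaged and the result is smoothed by a constant-velocity Kalman filter. The sampling period and the noise covariances Q and R below are illustrative assumptions, not values from the experiments.

```python
import numpy as np

class PositionKF:
    """Constant-velocity Kalman filter over the state [p; v] in 3-D."""

    def __init__(self, dt=0.05, q=1e-3, r=1e-2):
        self.A = np.eye(6)
        self.A[:3, 3:] = dt * np.eye(3)                    # p += v * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q, self.R = q * np.eye(6), r * np.eye(3)
        self.x, self.P = np.zeros(6), np.eye(6)

    def update(self, marker_positions):
        """Fuse a list of per-marker position estimates into one filtered position."""
        z = np.mean(np.asarray(marker_positions, dtype=float), axis=0)  # average
        x_pred = self.A @ self.x                            # predict
        P_pred = self.A @ self.P @ self.A.T + self.Q
        S = self.H @ P_pred @ self.H.T + self.R             # innovation covariance
        Kg = P_pred @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = x_pred + Kg @ (z - self.H @ x_pred)        # correct
        self.P = (np.eye(6) - Kg @ self.H) @ P_pred
        return self.x[:3]
```

With several markers agreeing, the filter quickly locks onto the true trajectory and the velocity states provide the smoothing that a raw per-frame average lacks.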

4. Landing Control Method

4.1. UAV Dynamics

A quadrotor UAV is composed of a cross-shaped bracket and four motors with propellers. Each motor generates a torque related to its speed, and their combination produces the six-degree-of-freedom motion of the quadrotor UAV. As shown in Figure 3, two reference frames are introduced to establish the UAV kinematic model: the earth-fixed frame E and the body-fixed frame B.
Let Pe and Ve be the position and velocity in frame E, respectively; then $V_e = \dot{P}_e$. The UAV attitude $\Theta = [\phi, \theta, \psi]^T$ comprises the roll, pitch, and yaw angles. The relationship between $\dot{\Theta}$ and the rotation rate $\omega_b$ in frame B is as follows:
$$\dot{\Theta} = W \omega_b$$

where
$$W = \begin{bmatrix} 1 & \tan\theta \sin\phi & \tan\theta \cos\phi \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix}$$
According to Newton’s Second Law, the motion equation of the UAV is as follows:
$$m\dot{V}_e = mg - F_e + F_a$$
where m is the UAV mass, and g = [0, 0, g]^T is the gravitational acceleration vector. Fe is the resultant thrust force of the UAV, while Fa represents the air resistance, which is related to Ve and a drag coefficient.
Assuming the UAV lift force is f, and Ve = [vx, vy, vz], the following can be obtained:
$$\begin{aligned} \dot{v}_x &= \frac{f}{m}\left(\cos\psi \sin\theta \cos\phi + \sin\psi \sin\phi\right) \\ \dot{v}_y &= \frac{f}{m}\left(\sin\psi \sin\theta \cos\phi - \cos\psi \sin\phi\right) \\ \dot{v}_z &= g - \frac{f}{m}\cos\phi \cos\theta \end{aligned}$$
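The sign conventions in these accelerations are easy to check numerically: at level hover with thrust equal to the weight, all three accelerations vanish. The sketch below is a direct transcription of the equations above; the mass value used in the usage note matches the 1950 g airframe described later, while the angles are illustrative.

```python
import math

def translational_accel(f, m, phi, theta, psi, g=9.81):
    """Earth-frame accelerations from thrust f and attitude (roll phi,
    pitch theta, yaw psi), transcribing the translational dynamics above."""
    ax = (f / m) * (math.cos(psi) * math.sin(theta) * math.cos(phi)
                    + math.sin(psi) * math.sin(phi))
    ay = (f / m) * (math.sin(psi) * math.sin(theta) * math.cos(phi)
                    - math.cos(psi) * math.sin(phi))
    az = g - (f / m) * math.cos(phi) * math.cos(theta)
    return ax, ay, az
```

For level hover (phi = theta = 0) with f = mg, the function returns zero in all axes; pitching forward produces a horizontal acceleration while reducing the effective vertical thrust.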
Based on the Euler equation, the resultant moment of the UAV is as follows:
$$J\dot{\omega}_b + \omega_b \times \left(J\omega_b\right) = G_a + M_p$$
where J denotes the inertia matrix, Ga is gyroscopic torque and Mp represents the torque generated by the propellers, including the roll, pitch, and yaw.
Let $\omega_b = [p, q, r]^T$ denote its three components in frame B. The gyroscopic torque Ga is as follows:
$$G_a = \begin{bmatrix} J_w q \left(\Omega_1 - \Omega_2 + \Omega_3 - \Omega_4\right) \\ J_w p \left(-\Omega_1 + \Omega_2 - \Omega_3 + \Omega_4\right) \\ 0 \end{bmatrix}$$
where Jw is the total moment of inertia, and Ωi (i = 1, 2, 3, 4) denotes the speed of motor i.
Substituting the gyroscopic torque into the Euler equation yields the following:
$$\begin{aligned} \dot{p} &= \frac{1}{J_x}\left[M_x + qr\left(J_y - J_z\right) - J_w q \Omega\right] \\ \dot{q} &= \frac{1}{J_y}\left[M_y + pr\left(J_z - J_x\right) + J_w p \Omega\right] \\ \dot{r} &= \frac{1}{J_z}\left[M_z + pq\left(J_x - J_y\right)\right] \end{aligned}$$

where $J = \mathrm{diag}(J_x, J_y, J_z)$ and $\Omega = -\Omega_1 + \Omega_2 - \Omega_3 + \Omega_4$.
The controller of a quadrotor UAV is divided into a position controller and an attitude controller, with position and attitude feedback, respectively, as shown in Figure 4. The inputs of the position controller are the desired position (xd, yd, zd) and yaw angle ψd; its outputs are the desired roll angle ϕd, pitch angle θd, and the control signal u1. ϕd and θd, together with ψd, are the inputs of the attitude controller, which produces u2, u3, and u4.

4.2. Linear Active Disturbance Reject Control

ADRC is a control method that improves on traditional PID control. Conventional ADRC components all use nonlinear functions, and many parameters need to be tuned. LADRC simplifies the parameters to an observer bandwidth and a controller bandwidth, making controller tuning simple.
LADRC does not rely on the precise mathematical model of the object. Unknown factors, uncertain states, and external disturbances in the system are considered as the total disturbance of the system, estimated by a linear observer, and compensated by the PD control.
Assuming that the total disturbance includes internal and external disturbances, the dynamic model of a quadrotor UAV is as follows:
$$\ddot{\phi} = b_\phi u_\phi + f_\phi,\qquad \ddot{\theta} = b_\theta u_\theta + f_\theta,\qquad \ddot{\psi} = b_\psi u_\psi + f_\psi$$
Let $y = [\phi, \theta, \psi]^T$, $x_1 = y$, $x_2 = \dot{y}$, $f = [f_\phi, f_\theta, f_\psi]^T$, and $x = [x_1, x_2, f]^T$; the extended state-space equation of the UAV is described as follows:
$$\dot{x} = Mx + Nu + E\dot{f},\qquad y = Ox$$

where the state matrix $M = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix}$, the input matrix $N = \begin{bmatrix} 0 \\ B \\ 0 \end{bmatrix}$ with $B = \mathrm{diag}(b_\phi, b_\theta, b_\psi)$, the control input $u = [u_\phi, u_\theta, u_\psi]^T$, the disturbance matrix $E = \begin{bmatrix} 0 \\ 0 \\ I \end{bmatrix}$, and the output matrix $O = \begin{bmatrix} I & 0 & 0 \end{bmatrix}$, where $I$ denotes the $3 \times 3$ identity matrix and $0$ the $3 \times 3$ zero matrix.
The linear extended state observer (LESO) is the key to achieving active disturbance rejection control. When designing the LESO, an appropriate feedback gain matrix must be selected to ensure that the observation error converges to zero. In addition, the dynamic response of the observer must be considered to ensure accurate and reliable observations. According to LADRC, the LESO for the extended system above is as follows:
$$\dot{z} = Mz + Nu + L\left(y - \hat{y}\right),\qquad \hat{y} = Oz$$
where z is the observed value of x, $\hat{y}$ denotes the estimated output, with z1 and z2 corresponding to x1 and x2, and L is the gain matrix of the observer error feedback. In [29], the poles of the characteristic equation are all placed at the same location, giving $L = [3\omega_0, 3\omega_0^2, \omega_0^3]^T$, where ω0 represents the observer bandwidth of the LESO.
In LADRC, the control law employs the PD control, that is
$$u_0 = k_p \left(r - z_1\right) - k_d z_2$$
where u0 is the control quantity, r is the desired signal (the roll, pitch, or yaw angle), and kp and kd are the proportional and derivative gains of the PD control, respectively.
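A single-axis numerical sketch of the complete loop follows: a double integrator with a constant unknown disturbance is controlled by the LESO plus the PD law, with the disturbance estimate z3 subtracted from the control. The plant gain, disturbance value, and bandwidths below are illustrative choices for this toy plant, not the UAV parameters reported later.

```python
def simulate_ladrc(r=1.0, b0=2.0, f_ext=-3.0, w0=50.0, wc=10.0,
                   dt=0.001, t_end=3.0):
    """Simulate one LADRC channel on a double integrator with a constant
    unknown disturbance f_ext. Returns the final output and the LESO's
    disturbance estimate z3."""
    l1, l2, l3 = 3 * w0, 3 * w0**2, w0**3   # LESO gains, all poles at -w0
    kp, kd = wc**2, 2 * wc                  # PD gains from control bandwidth
    y, yd = 0.0, 0.0                        # plant state (position, velocity)
    z1, z2, z3 = 0.0, 0.0, 0.0              # LESO state
    for _ in range(int(t_end / dt)):
        u0 = kp * (r - z1) - kd * z2        # PD control law
        u = (u0 - z3) / b0                  # compensate estimated disturbance
        e = y - z1                          # observation error
        z1 += dt * (z2 + l1 * e)            # forward-Euler LESO update
        z2 += dt * (z3 + b0 * u + l2 * e)
        z3 += dt * (l3 * e)
        ydd = b0 * u + f_ext                # plant: double integrator + f_ext
        y += dt * yd
        yd += dt * ydd
    return y, z3
```

At steady state the observer's third state converges to the true disturbance (z3 → f_ext), so the compensation term drives the tracking error of the PD loop to zero despite the disturbance never being modeled.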

5. Simulation and Experiment

5.1. Platform

As shown in Figure 5, the basic frame of the P450-Nano UAV is made of composite materials, reducing the overall weight to 1950 g including the battery (4000 mAh). The size is 335 × 335 × 230 mm, the wheelbase is 410 mm, and the maximum payload is 1600 g. Four brushless motors (model T-motor-2216) are used. An onboard camera with a maximum resolution of 1920 × 1080 and a 3.6 mm focal length is mounted below the UAV body to acquire landing marker images. An onboard computer with a Cortex-A57 CPU and a 128-core NVIDIA Maxwell GPU controls the flight and processes images. Our USV is made of recyclable ABS engineering plastic; its maximum speed is 2 m/s, and its total weight is 8 kg including two batteries. The software is developed on ROS Melodic (Robot Operating System) under Ubuntu 18.04. The proposed multi-level ArUco markers are placed on a landing board whose length and width are both 0.5 m. In LADRC, the observer bandwidth ω0 is set to 50, the proportional gain kp to 0.45, and the differential gain kd to 0.17.

5.2. Simulation

A simulation model of UAV autonomous landing is built on Gazebo, which provides physical simulations with high fidelity and a user-friendly interaction mode. Our simulation is to verify the feasibility of vision-based landing, including searching, adjusting, and landing stages as shown in Figure 6.
The UAV takes off from the starting point, climbs to a height of 1 m at a fixed speed, and then activates the searching command. When the position information recognized by the UAV meets the landing condition threshold, the adjusting and landing tasks are executed. If the UAV cannot recognize the landing marker, it slowly rises to search again.
When the landing marker is recognized successfully, the current position of the UAV is used as a starting point to expand the cruising range in a clockwise direction, with a square trajectory. In this process, the UAV attitude is adjusted for the final landing. Figure 6 illustrates the effectiveness of the proposed landing markers.

5.3. Ground Experiment

An experiment is carried out by placing the proposed multi-level ArUco markers on the ground. The UAV starts landing autonomously when the flight height is 10 m. As shown in Figure 7, the largest ArUco marker (ID = 19) is recognized at 7.7 m, while the smallest ArUco marker (ID = 3) is at a height of about 0.8 m. In this process, at least one medium-sized marker can be acquired, but the quantity is uncertain, which shows that the multi-level marker is effective and necessary.
For comparison, the multi-level recognition method is then disabled, and only the single marker in the middle of the landing board is identified. The ground experiments are conducted 30 times, and the landing errors of the two methods are shown in Figure 8. The root mean square errors are 0.082 m and 0.035 m, respectively. Obviously, the multi-level marker method achieves better landing accuracy.

5.4. Surface Experiment

Another experiment is conducted by placing the proposed multi-level markers on a USV developed in a previous project. This experiment is an autonomous landing without GPS assistance. The initial height of the UAV is about 1.3 m, and the maximum recognition height is about 9 m. When the UAV approaches 0.2 m, the motors are turned off, and it drops freely onto the landing board. The process is shown in Figure 9, in which the left part of each picture is the image captured by the onboard camera, and the right part is from a dedicated recording camera. There is more disturbance in this landing process than when landing on the ground.
To demonstrate the accuracy of the proposed landing method, a comparison is made with PID control, in which the proportional, integral, and differential gains are set to 0.45, 0.052, and 0.17, respectively. It should be noted that both methods use the same proposed markers. The trajectories of the UAV landing and the USV motion are shown in Figure 10. Both methods land successfully, but the landing curve of LADRC is observably smoother. The landing accuracy of LADRC is 0.057 m, while that of PID is 0.11 m. The results indicate that LADRC can effectively resist internal and external disturbances.
The ground and surface experiments show that flying too high or too low can leave the marker information incomplete, which affects UAV autonomous landing on the USV platform; the proposed multi-level markers effectively solve this problem. At the same time, the LADRC-based flight controller follows the expected landing trajectory with excellent accuracy.

6. Conclusions and Future Work

In this study, a method for UAV autonomous landing on USVs was investigated. A multi-level landing marker was placed on the USV landing board and captured by a camera mounted below the UAV body. The ArUco-based landing marker contained three levels and six codes (IDs 19, 1–4, and 43) and was employed for UAV landing at different heights. The EPnP algorithm was adopted to estimate the UAV pose, and fusing the results from different ArUco markers improved landing accuracy and stability. Furthermore, an LADRC-based controller was designed for landing control: the disturbances were estimated by a linear extended state observer and eliminated by PD control. The UAV could then land on the USV autonomously with high precision, as demonstrated by simulations and experiments. The proposed method enables the UAV to land stably on the USV platform for charging and extending its range of operation.
In upcoming work, achieving UAV landing on USVs with complex motion will be the focus of research, and how to compensate for wave impacts on USV motions will be another research topic.

Author Contributions

Conceptualization, M.L.; Methodology, M.L., J.F. and J.W.; Software, B.F.; Formal analysis, J.F.; Data curation, B.F.; Writing—original draft, M.L.; Writing—review & editing, J.F. and J.W.; Funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Jiangsu Province key research and development project grant number BE2022062 and Open Foundation of Fujian Key Laboratory of Green Intelligent Drive and Transmission for Mobile Machinery grant number GIDT-202307.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cho, G.; Choi, J.; Bae, G.; Oh, H. Autonomous ship deck landing of a quadrotor UAV using feed-forward image-based visual servoing. Aerosp. Sci. Technol. 2022, 130, 107869. [Google Scholar] [CrossRef]
  2. Wang, Y.; Liu, W.; Liu, J.; Sun, C. Cooperative USV-UAV marine search and rescue with visual navigation and reinforcement learning-based control. ISA Trans. 2023, 137, 222–235. [Google Scholar] [CrossRef] [PubMed]
  3. Long, X.; Zimu, T.; Weiqi, G.; Haobo, L. Vision-Based Autonomous Landing for the UAV: A Review. Aerospace 2022, 9, 634. [Google Scholar]
  4. Lee, J.C.; Chen, C.C.; Shen, C.T.; Lai, Y.C. Landmark-Based Scale Estimation and Correction of Visual Inertial Odometry for VTOL UAVs in a GPS-Denied Environment. Sensors 2022, 22, 9654. [Google Scholar] [CrossRef] [PubMed]
  5. Wu, D.; Zhu, H.; Lan, Y. A Method for Designated Target Anti-Interference Tracking Combining YOLOv5 and SiamRPN for UAV Tracking and Landing Control. Remote Sens. 2022, 14, 2825. [Google Scholar] [CrossRef]
  6. Bouaiss, O.; Mechgoug, R.; Taleb-Ahmed, A. Visual soft landing of an autonomous quadrotor on a moving pad using a combined fuzzy velocity control with model predictive control. Signal Image Video Process. 2023, 17, 21–30. [Google Scholar] [CrossRef]
  7. Sonugür, G. A Review of quadrotor UAV: Control and SLAM methodologies ranging from conventional to innovative approaches. Robot. Auton. Syst. 2023, 161, 104342. [Google Scholar] [CrossRef]
  8. Rabah, M.; Haghbayan, H.; Immonen, E.; Plosila, J. An AI-in-Loop Fuzzy-Control Technique for UAV’s Stabilization and Landing. IEEE Access 2022, 10, 10119–101123. [Google Scholar] [CrossRef]
  9. Yuan, B.; Ma, W.; Wang, F. High Speed Safe Autonomous Landing Marker Tracking of Fixed Wing Drone Based on Deep Learning. IEEE Access 2022, 10, 3195286. [Google Scholar] [CrossRef]
  10. Alvika, G.; Mandeep, S.; Pedda, S.; Srikanth, S. Autonomous Quadcopter Landing on a Moving Target. Sensors 2022, 22, 1116. [Google Scholar]
  11. Lim, J.; Lee, T.; Pyo, S.; Lee, J.; Kim, J.; Lee, J. Hemispherical InfraRed (IR) Marker for Reliable Detection for Autonomous Landing on a Moving Ground Vehicle From Various Altitude Angles. IEEE/ASME Trans. Mechatron. 2022, 27, 485–492. [Google Scholar] [CrossRef]
  12. Wang, C.; Wang, J.; Wei, C.; Zhu, Y.; Yin, D.; Li, J. Vision-Based Deep Reinforcement Learning of UAV-UGV Collaborative Landing Policy Using Automatic Curriculum. Drones 2023, 7, 676. [Google Scholar] [CrossRef]
  13. Chen, C.; Chen, S.; Hu, G.; Chen, B.; Chen, P.; Su, K. An auto-landing strategy based on pan-tilt based visual servoing for unmanned aerial vehicle in GNSS-denied environments. Aerosp. Sci. Technol. 2021, 116, 106891.
  14. El Gmili, N.; Mjahed, M.; El Kari, A.; Ayad, H. Particle swarm optimization based proportional-derivative parameters for unmanned tilt-rotor flight control and trajectory tracking. Automatika 2020, 61, 189–206.
  15. Xu, J.; Zhang, K.; Zhang, D.; Wang, S.; Zhu, Q.; Gao, X. Composite anti-disturbance landing control scheme for recovery of carrier-based UAVs. Asian J. Control 2022, 24, 1744–1754.
  16. Xia, K.; Lee, S.; Son, H. Adaptive control for multi-rotor UAVs autonomous ship landing with mission planning. Aerosp. Sci. Technol. 2020, 96, 105549.
  17. Li, W.; Ge, Y.; Guan, Z.; Ye, G. Synchronized Motion-Based UAV-USV Cooperative Autonomous Landing. J. Mar. Sci. Eng. 2022, 10, 1214.
  18. He, H.; Duan, H. A multi-strategy pigeon-inspired optimization approach to active disturbance rejection control parameters tuning for vertical take-off and landing fixed-wing UAV. Chin. J. Aeronaut. 2022, 35, 19–30.
  19. Mathisen, S.; Gryte, K.; Gros, S.; Johansen, T.A. Precision Deep-Stall Landing of Fixed-Wing UAVs Using Nonlinear Model Predictive Control. J. Intell. Robot. Syst. 2021, 101, 24.
  20. Chen, P.; Zhang, Y.; Wang, J.; Azar, A.T.; Hameed, I.A.; Ibraheem, I.K.; Kamal, N.A.; Abdulmajeed, F.A. Adaptive Internal Model Control Based on Parameter Adaptation. Electronics 2022, 11, 3842.
  21. Latif, Z.; Shahzad, A.; Bhatti, A.I.; Whidborne, J.F.; Samar, R. Autonomous Landing of an UAV Using H∞ Based Model Predictive Control. Drones 2022, 6, 416.
  22. Li, F.; Song, W.P.; Song, B.F.; Jiao, J. Dynamic Simulation and Conceptual Layout Study on a Quad-Plane in VTOL Mode in Wind Disturbance Environment. Int. J. Aerosp. Eng. 2022, 2022, 5867825.
  23. Wang, L.; Jiang, X.; Wang, D.; Wang, L.; Tu, Z.; Ai, J. Research on Aerial Autonomous Docking and Landing Technology of Dual Multi-Rotor UAV. Sensors 2022, 22, 9066.
  24. Aoki, N.; Ishigami, G. Autonomous tracking and landing of an unmanned aerial vehicle on a ground vehicle in rough terrain. Adv. Robot. 2023, 37, 344–355.
  25. Sefidgar, M.; Landry, R., Jr. Landing System Development Based on Inverse Homography Range Camera Fusion (IHRCF). Sensors 2022, 22, 1870.
  26. Arizaga, J.M.; Noriega, J.R.; Garcia-Delgado, L.A.; Castañeda, H. Adaptive Super Twisting Control of a Dual-rotor VTOL Flight System Under Model Uncertainties. Int. J. Control Autom. Syst. 2021, 19, 2251–2259.
  27. Rabelo, M.F.; Brandão, A.S.; Sarcinelli-Filho, M. Landing a UAV on Static or Moving Platforms Using a Formation Controller. IEEE Syst. J. 2021, 15, 37–45.
  28. Ghasemi, A.; Parivash, F.; Ebrahimian, S. Autonomous landing of a quadrotor on a moving platform using vision-based FOFPID control. Robotica 2022, 40, 1431–1449.
  29. Wang, C.; Yan, J.; Li, W.; Shan, L.; Sun, L. Disturbances rejection optimization based on improved two-degree-of-freedom LADRC for permanent magnet synchronous motor systems. Def. Technol. 2023; in press.
Figure 1. The structure of the proposed method, which comprises marker detection and landing flight control and enables the UAV to land autonomously on the USV platform.
Figure 2. The layout of the proposed multi-level landing marker. The first level is a single ArUco marker with ID 19; the second level contains four ArUco markers with IDs 1, 2, 3, and 4; the third level is a single ArUco marker with ID 43.
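To illustrate how a multi-level layout of this kind can be used during descent, the sketch below selects which marker level to track from the estimated altitude: the large level-1 marker when far away, the mid-size level-2 markers at intermediate heights, and the small level-3 marker close in. The altitude thresholds and the helper name are hypothetical, not values from the paper.

```python
def select_marker_level(altitude_m):
    """Pick which ArUco level to track given the estimated altitude.

    Thresholds are illustrative only; a real system would tune them
    to the marker sizes and the camera's field of view.
    Returns the IDs of the markers expected to be well resolved.
    """
    if altitude_m > 3.0:      # far away: only the large level-1 marker resolves
        return [19]
    elif altitude_m > 1.0:    # mid range: the four level-2 markers
        return [1, 2, 3, 4]
    else:                     # close in: the small level-3 marker stays in view
        return [43]
```

The hand-off between levels keeps at least one marker fully inside the image as the camera descends, which is the motivation for the nested design.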
Figure 3. The quadrotor UAV reference frames. Red arrows represent the coordinate frames and blue arrows denote force directions. {E} is the earth-fixed reference frame and {B} is the body-fixed reference frame. F1 to F4 are the thrusts generated by the four motors, and G is the gravity acting on the UAV.
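The relation between the two frames in Figure 3 is the standard ZYX (yaw-pitch-roll) Euler rotation. As a self-contained sketch, the function below builds the rotation matrix that maps body-frame {B} vectors into the earth-fixed frame {E}; the function name is ours, and the convention is the usual aerospace one rather than anything specific to the paper.

```python
import math

def body_to_earth(roll, pitch, yaw):
    """ZYX Euler rotation matrix mapping body-frame {B} vectors
    into the earth-fixed frame {E} (standard aerospace convention).
    Angles are in radians."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
```

With zero roll, pitch, and yaw the matrix is the identity, and the total thrust F1 + F2 + F3 + F4, which acts along the body z-axis, maps through this matrix to its earth-frame components.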
Figure 4. Control schematic of UAV landing. The controller includes a position controller and an attitude controller.
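The LADRC loop behind the schematic in Figure 4 can be sketched on a simple double-integrator plant: a third-order linear extended state observer (bandwidth wo) estimates position, velocity, and the total disturbance, and a PD law (bandwidth wc) drives the tracking error to zero while the disturbance estimate is cancelled in the control. The plant model, gains, and time step below are illustrative assumptions, not the paper's tuned values.

```python
def simulate_ladrc(setpoint=1.0, wo=10.0, wc=2.0, b0=1.0,
                   disturbance=-2.0, dt=0.001, t_end=10.0):
    """One LADRC axis on a double integrator x'' = b0*u + d.

    z1, z2 estimate position and velocity; z3 estimates the total
    disturbance d, which the control law subtracts out.
    Returns the final position after t_end seconds.
    """
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3    # ESO gains from observer bandwidth
    kp, kd = wc**2, 2 * wc                   # PD gains from controller bandwidth
    x = v = 0.0                              # plant state
    z1 = z2 = z3 = 0.0                       # observer state
    u = 0.0
    for _ in range(int(t_end / dt)):
        e = x - z1                           # observer innovation
        z1 += dt * (z2 + l1 * e)
        z2 += dt * (z3 + l2 * e + b0 * u)
        z3 += dt * (l3 * e)
        u0 = kp * (setpoint - z1) - kd * z2  # PD on the estimates
        u = (u0 - z3) / b0                   # cancel the estimated disturbance
        v += dt * (b0 * u + disturbance)     # plant integration (Euler)
        x += dt * v
    return x
```

Because z3 tracks the constant disturbance, the loop settles at the setpoint with no steady-state offset, which is the behaviour a plain PD controller on the same plant would lack.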
Figure 5. Photograph of our UAV and USV with the main components. The hardware includes a body frame, an onboard computer, an onboard camera, and four brushless motors, while the software is based on Ubuntu 18.04 and ROS Melodic.
Figure 6. The simulation result. Three main stages are shown: searching, adjusting, and landing. Finally, the UAV lands on the ArUco markers successfully.
Figure 7. The ground experiment results: (a) only the largest marker (ID 19) is recognized; (b) markers 19, 1, and 2; (c) markers 19, 1, 2, and 4; (d) markers 19, 1, 2, 3, and 4; (e) markers 19, 1, 2, 4, and 43; (f) all markers are recognized.
Figure 8. The landing errors from 30 ground experiments. The results show that the multi-level marker performs better than the single marker.
Figure 9. The surface experiment results. (a–h) show the process of UAV autonomous landing, in which different markers are recognized to adjust the UAV pose.
Figure 10. The landing curves. Two control methods are used for comparison: one is PID control and the other is LADRC, which was employed in our work.

Share and Cite

MDPI and ACS Style

Lv, M.; Fan, B.; Fang, J.; Wang, J. Autonomous Landing of Quadrotor Unmanned Aerial Vehicles Based on Multi-Level Marker and Linear Active Disturbance Reject Control. Sensors 2024, 24, 1645. https://doi.org/10.3390/s24051645

