Article

Collision Detection and Recovery Control of Drones Using Onboard Inertial Measurement Unit

Xisheng Huang 1, Guangjun Liu 1,* and Yugang Liu 2,*
1 Department of Aerospace Engineering, Toronto Metropolitan University, Toronto, ON M5B 2K3, Canada
2 Department of Electrical and Computer Engineering, Royal Military College of Canada, Kingston, ON K7K 7B4, Canada
* Authors to whom correspondence should be addressed.
Drones 2025, 9(5), 380; https://doi.org/10.3390/drones9050380
Submission received: 7 April 2025 / Revised: 14 May 2025 / Accepted: 15 May 2025 / Published: 18 May 2025
(This article belongs to the Special Issue Flight Control and Collision Avoidance of UAVs)

Abstract

This paper presents a strategy for collision detection and recovery control of drones using an onboard Inertial Measurement Unit (IMU). The collision detection algorithm compares the expected response of the drone with the measurements from the IMU to identify and characterize collisions. The recovery controller implements a gain scheduling approach, adjusting its parameters based on the characteristics of the collision and the drone’s attitude. Simulations were conducted to compare the proposed collision detection strategy with a popular detection method based on fixed thresholds, and the simulation results showed that the proposed approach outperformed the existing method in terms of detection accuracy. Furthermore, the proposed collision detection and recovery control approaches were tested with physical experiments using a custom-built drone. The experimental results confirmed that the proposed collision detection algorithm was able to distinguish between actual collisions and aggressive flight maneuvers, and the recovery controller recovered the drone within 0.8 s.

1. Introduction

With the rapid growth in applications of drones or UAVs in various fields, potential in-flight collisions have become an important safety concern: a collision can damage the drone itself, and the resulting crash can endanger the surrounding environment. To improve drone safety, collision prevention has become a popular research topic, focusing on flight planning, obstacle detection, and obstacle avoidance. In related research, mixed integer linear programming was implemented for both collision avoidance and flight path optimization in [1]. In [2], an RGB-D camera was fused with an onboard inertial measurement unit (IMU) to identify static and dynamic obstacles, compute collision risks, and generate escape maneuvers. In [3], a vision-based obstacle avoidance system was proposed using two fisheye cameras, which utilized a Gaussian-mixture filter to fuse the images from the two cameras, allowing the drone to track multiple neighbours in real time. In [4], a relaxed-constraint triple integrator planner was proposed, which allowed the drone to maintain safety under a limited field of view by recalling a previously generated motion primitive. In [5], a causal representation disentanglement module was introduced into a deep reinforcement learning (DRL) framework to address the challenge of collision avoidance for UAVs in previously unseen environments, minimizing irrelevant factors to improve generalizability. In [6], a nonlinear model predictive controller was developed to generate dynamic drone obstacle avoidance maneuvers in real time, incorporating the trajectories of the obstacles into the optimization. In [7], the vision-based navigation, control, and obstacle avoidance tasks were formulated as an optimal control problem for drones, combined with ellipsoidal shaping areas as a safety margin, and showed robust obstacle avoidance performance. In [8], an obstacle-free corridor quadratic program formulation was proposed for real-time path generation of drones through cluttered indoor spaces. While promising, not all drone collisions can be predicted or avoided. When a drone collision does occur, protecting the drone and preventing a crash become important concerns, so as to minimize the damage.
The relevant research works on drone protection can be categorized into passive and active approaches. The passive drone protection approaches focus on modifying the drone frame to reduce the impact during a collision. On the other hand, the active approaches utilize onboard sensors for collision detection and recovery control since recovering quickly from a collision to a steady flight, rather than crashing to the ground, can enhance safety and minimize potential damage.
Regarding passive drone protection, some researchers have used additional cages or guards around the drone to protect it from impacts [9,10,11,12], and others have employed origami-style designs to absorb impact energy during a collision [13,14,15]. For example, in [13], an origami-style guard was developed, which collapses and absorbs the impact energy upon collision. Besides adding cages or guards, other studies have focused on redesigning the drone structure itself to make it more collision-resilient [16,17,18]. For example, in [16], soft materials were added around the exterior of the drone to improve its impact-absorbing capability. In [17], more propellers were introduced to create a fully actuated system that is capable of reacting to collisions. In [18], springs and retractable parts were incorporated into the drone’s arms to reduce the impact upon collision.
The aforementioned passive protection approaches are effective in reducing the damage to the drone itself upon collision. However, the introduction of additional guards, electronics, or contraptions, or the redesign of the structure, increases the drone’s weight, dimensions, and fabrication cost. Moreover, these methods do not address the concern of the drone crashing, which poses a threat to the surrounding environment. To address the limitations of passive protection approaches, active collision detection and recovery control provide a promising solution. The active approaches take advantage of onboard sensors already used for flight control and navigation, such as the IMU, to detect collisions [19,20], or introduce extra sensors, e.g., hall sensors, for collision detection [18].
As for the research on IMU-based collision detection, in [19], fixed thresholds of horizontal acceleration and angular velocity were set for collision detection. In this reported approach, a collision was flagged when the magnitude of the horizontal acceleration from the onboard IMU exceeded the acceleration threshold, and the angular velocity threshold was used to detect the angle of impact [19]. In [20], fuzzy logic was implemented to combine the IMU data with the drone’s pre-collision inclination and flipping angles to detect and characterize collisions. In this method, a fixed preset threshold on linear acceleration was used for the detection of a collision [20]. The approaches reported in [19,20] were verified by both simulations and experiments, which demonstrated high success rates for collision detection. While promising, collision detection relying only on a comparison of the IMU data with a preset threshold may flag false alerts when the drone performs aggressive maneuvers, such as rapid turns.
In relation to the research works on collision detection with extra sensors, in [18], hall sensors were introduced to actively detect drone collisions by monitoring their readings. The hall sensor-based approach can effectively distinguish collisions from aggressive maneuvers. However, it requires additional sensors and frame modifications, resulting in a more complex setup.
Upon the detection of a collision, recovery control is needed to return the drone to steady flight. Since impacts cause unpredictable disturbances to the drone’s flight, nonlinear control techniques are more suitable for drone recovery than linear ones. For example, the sliding mode control approach has been widely used in studies on drone recovery control and was proven, through both simulation and physical experiments, to handle sudden external disturbances such as wind gusts while maintaining a stable flight [21,22,23,24]. While effective, sliding mode control techniques may induce oscillations, which could lead to instability in a recovery scenario. In [25,26], a model predictive control approach was implemented to improve drone tracking performance in complex environments. While promising, model predictive control requires considerable computational power, making it less attractive for recovery control where a quick response is essential.
To address the limitations of sliding mode control and model predictive control, self-tuning approaches, including fuzzy logic and gain scheduling, are widely used. These self-tuning methods dynamically adapt to the operating conditions and have demonstrated strong performance in attitude and altitude control, including short settling times and small overshoots [27,28,29,30,31]. However, fuzzy logic-based methods are difficult to validate, and their parameters are hard to tune [32]. Compared to fuzzy logic-based methods and model predictive control, gain scheduling approaches are more suitable for drone recovery control, as they can cover a wider range of operating conditions and are not computationally demanding, although they may require a careful predefinition of the operating conditions.
The main contributions of this work can be summarized as follows:
An adaptive collision detection approach was proposed that compares the predicted attitude response of the drone with the measurements from the onboard IMU using dynamically adjusted thresholds, which improves detection accuracy and avoids false or missed alerts under different maneuver conditions.
A gain scheduling-based recovery controller was developed that tracks a desired acceleration and orientation of the drone after the impact caused by a collision, and adapts its gains to the collision intensity.
A comparison study was conducted using simulation tests in ROS Gazebo, and a series of physical experiments were performed with a custom-built quadrotor drone. The simulation and experimental results confirmed the effectiveness of the proposed approaches.
The rest of this paper is organized as follows: Section 2 presents a dynamic model of the drone under consideration. Section 3 details the proposed approach in collision detection, characterization, and recovery control. Section 4 reports the experimental results. Section 5 provides the conclusion and future research directions.

2. Dynamic Modelling

There are various types of drone frames, including tricopters, quadcopters, and hexacopters, amongst which quadcopters are the most widely used due to their balance of performance [33,34,35]. Without loss of generality, in this paper, the term drone refers to quadcopters. There are two major configurations for quadcopters, the ‘X’ and the ‘+’ configurations, which are defined by the direction and location of the drone’s arms and motors relative to its body frame, as shown in Figure 1 [36]. In the ‘+’ configuration, the arms and motors of the drone align with the X and Y axes of its body frame, whereas in the ‘X’ configuration, the arms and motors are positioned diagonally, at a 45-degree angle to the X and Y axes of the body frame. The study in [37] compared the two configurations on the same drone and found that the ‘X’ configuration provides more stability in maneuvering. In this paper, the drone is modelled in the ‘X’ configuration due to this additional stability.
The dynamics of the drone are modelled in the East–North–Up (ENU) convention. The world and body frames are defined as { X W Y W Z W } and { X B Y B Z B }, respectively. For the body frame, the origin is located at the center of mass of the drone; the positive X-axis points forward; the positive Z-axis is perpendicular to the drone frame, pointing upward; and the Y-axis is determined by the right-hand rule, as shown in Figure 2.
To simplify the modeling process, the following assumptions were made [38]:
  • All the propellers and motors are identical, and the dynamic actions are instantaneous;
  • The drone is assumed to be a rigid body, and the deformation is assumed to be negligible;
  • The drone is assumed to be able to remain fully functional after collisions, i.e., no structural damage, and all the electronics stay unaffected by the collisions.
The motion of the drone is described using Newton–Euler equations. Let x and v denote the position and the velocity vectors of the drone in the world frame, and let R denote the rotation matrix from the body frame to the world frame.
$\dot{x} = v$ (1)
$m\dot{v} = mg + R T_t$ (2)
$\dot{R} = R\hat{\omega}$ (3)
$I\dot{\omega} + \omega \times I\omega = M$ (4)
where $m$ represents the mass of the drone; $I$ denotes the moment of inertia of the drone; $\omega$ denotes the angular velocity vector and $\hat{\omega}$ represents its skew-symmetric matrix; $g$ is the acceleration vector due to gravity; and $T_t$ and $M$ represent the total thrust and moment acting on the drone, respectively.
The total thrust force $T_t$ and moment $M_t$ generated by the propellers and acting on the drone can be expressed as follows:
$T_t = \sum_{j=1}^{4} k_t\, \omega_{p_j}^2$ (5)
$M_t = \sum_{j=1}^{2} k_d\, \omega_{p_j}^2 - \sum_{j=3}^{4} k_d\, \omega_{p_j}^2$ (6)
where $k_t$ and $k_d$ represent the thrust and drag coefficients of the propellers, respectively; $\omega_p$ represents the rotational velocity of the propellers; and the subscript $j$ denotes the $j$-th propeller on the drone.
The control of the drone can be expressed as follows:
$\begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} = \begin{bmatrix} k_t & k_t & k_t & k_t \\ -k_t l & k_t l & k_t l & -k_t l \\ -k_t l & k_t l & -k_t l & k_t l \\ -k_d & -k_d & k_d & k_d \end{bmatrix} \begin{bmatrix} \omega_1^2 \\ \omega_2^2 \\ \omega_3^2 \\ \omega_4^2 \end{bmatrix}$ (7)
where $u_1$ represents the total thrust; $u_2$, $u_3$, and $u_4$ represent the moments about the x-, y-, and z-axes, respectively; and $l$ is the distance from each motor to the drone’s center of mass.
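As an illustration of the control allocation in Eq. (7), the following minimal sketch maps squared propeller speeds to the total thrust and body moments. The exact sign pattern depends on the motor numbering and spin directions in Figure 1a, so the signs below are an assumed standard ‘X’ layout, and the motor speeds are arbitrary example values; the coefficients follow Table 1.

```python
# Sketch of the X-configuration control allocation (Eq. (7)); sign layout is assumed.
import numpy as np

kt = 1.895e-6   # propeller thrust coefficient (N/rpm^2), from Table 1
kd = 8.548e-8   # propeller drag coefficient (N/rpm^2), from Table 1
l = 0.23        # arm length (m), from Table 1

# Each row maps the squared propeller speeds to one control input.
allocation = np.array([
    [ kt,      kt,      kt,      kt     ],   # u1: total thrust
    [-kt * l,  kt * l,  kt * l, -kt * l ],   # u2: roll moment (assumed signs)
    [-kt * l,  kt * l, -kt * l,  kt * l ],   # u3: pitch moment (assumed signs)
    [-kd,     -kd,      kd,      kd     ],   # u4: yaw moment (assumed spin directions)
])

omega_p = np.array([7000.0, 7200.0, 6900.0, 7100.0])  # example motor speeds (rpm)
u = allocation @ omega_p**2                            # [u1, u2, u3, u4]
print(u)
```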

3. Collision Detection and Recovery Control

This section presents the proposed collision detection and recovery control approaches.
a. Collision Detection and Characterization
The proposed collision detection method compares the magnitudes of the expected and actual attitude response in terms of linear acceleration and angular velocity, and the differences are calculated as follows:
$\Delta a_{xy} = a_{e,xy} - a_{\mathrm{IMU},xy}$ (8)
$\Delta \omega_z = \omega_{e,z} - \omega_{\mathrm{IMU},z}$ (9)
where $\Delta a$ and $\Delta \omega$ represent the differences in linear acceleration and angular velocity, respectively; the subscript $e$ denotes the expected values calculated from the motor speeds; and the subscript IMU denotes the measurements from the onboard IMU sensor.
With the assumption that the drone collides with a vertical wall while flying with a small tilt angle, only the horizontal components of the acceleration and angular velocity around the z-axis need to be considered. The expected drone attitudes, the horizontal acceleration and the yaw rate, are derived from the following equations:
$a_{e,xy} = \left( \dfrac{R\,T_t}{m} + g \right)_{xy}$ (10)
$\omega_{e,z} = \left( \dfrac{M_t}{J}\, dt \right)_{z}$ (11)
where R represents the rotational matrix from the body frame to the world frame.
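To make the expected-response calculation concrete, the following minimal sketch evaluates Eqs. (10) and (11), assuming the rotation matrix R, the total thrust Tt, and the yaw component of the drag moment have already been computed from the commanded motor speeds via Eqs. (5)–(7). The yaw inertia Jz and the function name are placeholders, not values from the paper.

```python
# Sketch of the expected horizontal acceleration and yaw-rate response (Eqs. (10)-(11)).
import numpy as np

def expected_response(R, Tt, Mt_z, m=1.47, Jz=0.02, dt=0.008, g=9.81):
    """Return the expected horizontal acceleration (x, y) and yaw-rate change per step."""
    thrust_world = R @ np.array([0.0, 0.0, Tt])          # thrust rotated into the world frame
    accel_world = thrust_world / m + np.array([0.0, 0.0, -g])
    a_e_xy = accel_world[:2]                              # keep only the horizontal components
    omega_e_z = (Mt_z / Jz) * dt                          # expected yaw-rate change over one step
    return a_e_xy, omega_e_z
```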
To compensate for sensor noise and control input delays, a first-order lag filter is implemented:
$y[n] = (1 - \beta)\, y[n-1] + \beta\, x[n]$ (12)
where $y[n]$ and $y[n-1]$ denote the outputs at the current and previous sampling times, respectively; $x[n]$ represents the input at the current sampling time; and $\beta$ is the smoothing factor, with $0 < \beta < 1$, defined as follows:
$\beta = \dfrac{dt}{\tau + dt}$ (13)
where τ and d t represent the time constant and time step, respectively.
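A minimal sketch of the first-order lag filter in Eqs. (12) and (13) is given below. The time constant tau is an assumed illustrative value; dt matches the 0.008 s step reported in Section 4.

```python
# Sketch of the first-order lag filter used to smooth the IMU and expected signals.
class LagFilter:
    def __init__(self, tau, dt):
        self.beta = dt / (tau + dt)   # smoothing factor, 0 < beta < 1 (Eq. (13))
        self.y = 0.0

    def update(self, x):
        # y[n] = (1 - beta) * y[n-1] + beta * x[n]  (Eq. (12))
        self.y = (1.0 - self.beta) * self.y + self.beta * x
        return self.y

lp = LagFilter(tau=0.02, dt=0.008)   # tau is an assumed value
filtered = lp.update(1.0)
```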
The thresholds for collision detection were adjusted dynamically based on the current flight conditions to allow a wider range of collisions to be detected:
$H_{acc}(t) = k_{acc}(t)\, \|a_{e,xy}\| + c_{acc}$ (14)
$H_{yaw}(t) = k_{yaw}(t)\, |\omega_{e,z}| + c_{yaw}$ (15)
where $k_{acc}$ and $k_{yaw}$ represent the coefficients for the thresholds $H_{acc}$ and $H_{yaw}$ at time $t$. The two constants $c_{acc}$ and $c_{yaw}$ are set to mitigate the signal noise, which can cause false detections. The two coefficients, $k_{acc}$ and $k_{yaw}$, are determined by the variance in the most recent IMU signals over a sliding window of $W$ samples:
$k_{acc}(t) = \dfrac{1}{W} \sum_{i=0}^{W-1} a(t-i)^2$ (16)
$k_{yaw}(t) = \dfrac{1}{W} \sum_{i=0}^{W-1} \omega_z(t-i)^2$ (17)
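The adaptive thresholds of Eqs. (14)–(17) can be sketched as follows. The window size W and the noise constants c_acc and c_yaw are illustrative placeholders, not the values used in the paper.

```python
# Sketch of the adaptive collision-detection thresholds (Eqs. (14)-(17)).
from collections import deque
import numpy as np

class AdaptiveThreshold:
    def __init__(self, window=20, c_acc=0.5, c_yaw=0.2):
        self.acc_buf = deque(maxlen=window)   # recent horizontal-acceleration magnitudes
        self.yaw_buf = deque(maxlen=window)   # recent yaw-rate magnitudes
        self.c_acc, self.c_yaw = c_acc, c_yaw

    def update(self, a_imu_xy, w_imu_z, a_e_xy, w_e_z):
        self.acc_buf.append(np.linalg.norm(a_imu_xy))
        self.yaw_buf.append(abs(w_imu_z))
        k_acc = np.mean(np.square(self.acc_buf))      # sliding-window mean square (Eq. (16))
        k_yaw = np.mean(np.square(self.yaw_buf))      # sliding-window mean square (Eq. (17))
        H_acc = k_acc * np.linalg.norm(a_e_xy) + self.c_acc   # Eq. (14)
        H_yaw = k_yaw * abs(w_e_z) + self.c_yaw               # Eq. (15)
        return H_acc, H_yaw
```

A collision is then flagged whenever the filtered difference magnitudes exceed the returned thresholds, as described below.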
When a collision is detected, the colliding angle and the rotation direction of the drone need to be estimated to enable effective recovery control. The residual horizontal acceleration, $\Delta a_{xy}$, can be calculated as $a_{e,xy} - a_{\mathrm{IMU},xy}$, and a unit vector that indicates the direction of the collision can be obtained by normalizing the residual:
$d_{linear} = \dfrac{a_{e,xy} - a_{\mathrm{IMU},xy}}{\left\| a_{e,xy} - a_{\mathrm{IMU},xy} \right\|}$ (18)
Similarly, the direction of the torque induced by the collision can be calculated as follows:
$d_{yaw} = \dfrac{\omega_{e,z} - \omega_{\mathrm{IMU},z}}{\left| \omega_{e,z} - \omega_{\mathrm{IMU},z} \right|}$ (19)
In (19), the sign of $d_{yaw}$ indicates whether the collision introduced a clockwise or counterclockwise torque to the drone around the z-axis. The values of all the terms in Equations (18) and (19) are taken at the time when a collision is detected.
The proposed collision detection and characterization approach is outlined in the flowchart shown in Figure 3. During each step, the magnitudes of the differences $\Delta a_{xy}$ and $\Delta \omega_z$ are compared against the thresholds $H_{acc}$ and $H_{yaw}$. A collision is flagged if either of the two magnitudes exceeds its corresponding threshold.
Upon detection of a collision, the maximum difference between the calculated and measured attitude responses, denoted as $\|\Delta a_{xy}\|_{max}$ and $|\Delta \omega_z|_{max}$, is estimated. The magnitudes calculated at the current time step are compared with the ones obtained from the previous time step until they stop increasing. The estimated maximum difference is used as the intensity level, denoted as $S$, in terms of horizontal acceleration and yaw rate, i.e.,
$S_{acc} = \|\Delta a_{xy}\|_{max}$ (20)
$S_{yaw} = |\Delta \omega_z|_{max}$ (21)
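The characterization step of Eqs. (18)–(21) can be sketched as below, assuming the expected and measured values at the detection instant and short histories of the filtered differences (collected until the magnitudes stop increasing) are available; all function and variable names are illustrative.

```python
# Sketch of collision characterization: impact direction (Eqs. (18)-(19)) and intensity (Eqs. (20)-(21)).
import numpy as np

def characterize(a_e_xy, a_imu_xy, w_e_z, w_imu_z, da_history, dw_history):
    resid = a_e_xy - a_imu_xy
    d_linear = resid / np.linalg.norm(resid)              # unit vector of the linear impact direction
    d_yaw = np.sign(w_e_z - w_imu_z)                       # +1 / -1: induced torque direction about z
    S_acc = max(np.linalg.norm(d) for d in da_history)     # peak acceleration difference (intensity)
    S_yaw = max(abs(d) for d in dw_history)                # peak yaw-rate difference (intensity)
    return d_linear, d_yaw, S_acc, S_yaw
```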
b. Recovery Control
The proposed recovery control approach is inspired by the recovery control strategy proposed by Faessler et al. [39], where the primary objective during the recovery process is to track a desired acceleration and to minimize unwanted yaw movements. In the proposed recovery control approach, the desired acceleration is proportional to the intensity value obtained from the collision characterization process, and the desired yaw rate is selected as zero, i.e.,
$a_{des} = k_{acc}^{d} \cdot S_{acc} \cdot d_{acc}$ (22)
$\dot{r}_{des} = 0$ (23)
where the vector $a_{des}$ represents the desired acceleration; $\dot{r}_{des}$ denotes the desired yaw rate; and $k_{acc}^{d}$ represents the proportional gain that scales the desired acceleration based on the intensity of the impact.
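A minimal sketch of the recovery setpoints in Eqs. (22) and (23) is shown below; the gain value is a hypothetical placeholder, not the one used in the experiments.

```python
# Sketch of the recovery setpoints: desired acceleration scaled by impact intensity, zero yaw rate.
import numpy as np

def recovery_setpoints(S_acc, d_acc, k_acc_d=0.5):
    a_des = k_acc_d * S_acc * np.asarray(d_acc)   # Eq. (22): scale the impact direction by intensity
    yaw_rate_des = 0.0                            # Eq. (23): suppress yaw motion during recovery
    return a_des, yaw_rate_des
```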
The proposed recovery controller orients the drone to the desired direction and tracks its acceleration by controlling the pitch, roll, and yaw rates, as well as the thrust of the drone. During our simulations and experiments, it was noticed that the drone lost control faster and started to lose height earlier with a higher intensity of impact. Thus, it is advisable to have the recovery controller act according to the intensity of the impact: high-intensity impacts require faster and more aggressive recoveries, while lower-intensity impacts can be recovered from less aggressively. Furthermore, observations from the simulation studies on collision response indicate that the time from the initial collision to the beginning of the drone crash does not necessarily relate to the peak horizontal acceleration resulting from the impact; this duration is more closely related to the pitch, roll, and yaw rates after the collision. In other words, the higher these rates are, the faster the drone starts to crash. The pitch, roll, and yaw rates are controlled by PI controllers through a gain scheduling technique, as shown in Figure 4. The maximum magnitudes of the pitch/roll rate and the yaw rate after the collision were selected as the scheduling variables:
$\omega_{pq,max} = \max \sqrt{\dot{p}^2 + \dot{q}^2}$ (24)
$\dot{r}_{max} = \max |\dot{r}|$ (25)
where $\dot{p}$, $\dot{q}$, and $\dot{r}$ represent the drone’s pitch, roll, and yaw rates, respectively. The controller was tuned to meet the desired performance at different operating conditions using a gradient descent approach to obtain suitable controller parameters. For a general PI controller that handles the pitch and roll rate control, the cost function $J$ is defined based on the errors in the desired performance:
$J(K_p, K_i) = OS\%_{error} + T_{s,error}$ (26)
where
$OS\%_{error} = \left| OS\% - OS\%_{des} \right|$ (27)
$T_{s,error} = \left| T_s - T_{s,des} \right|$ (28)
and $K_p$ and $K_i$ are the parameters of the PI controller. $T_s$ and $OS\%$ are the settling time and the overshoot percentage, and the subscript des represents the desired values. The gradient of the cost function with respect to each parameter can be calculated as follows:
$\dfrac{\partial J}{\partial K_p} = \dfrac{J(K_p + \delta, K_i) - J(K_p, K_i)}{\delta}$ (29)
$\dfrac{\partial J}{\partial K_i} = \dfrac{J(K_p, K_i + \delta) - J(K_p, K_i)}{\delta}$ (30)
and the gains can be updated by the following:
$K_p \leftarrow K_p - \alpha \dfrac{\partial J}{\partial K_p}$ (31)
$K_i \leftarrow K_i - \alpha \dfrac{\partial J}{\partial K_i}$ (32)
where α represents the learning rate.
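The finite-difference gradient-descent tuning of Eqs. (26)–(32) can be sketched as follows, assuming an evaluate() routine that runs the closed-loop simulation for a given gain pair and returns the overshoot and settling-time errors; delta, alpha, and the iteration count are assumed values.

```python
# Sketch of gradient-descent tuning of a PI rate controller (Eqs. (26)-(32)).
def tune_pi(evaluate, Kp, Ki, delta=1e-3, alpha=0.05, iterations=100):
    def cost(kp, ki):
        os_err, ts_err = evaluate(kp, ki)   # |OS% - OS%_des|, |Ts - Ts_des| from a simulation run
        return os_err + ts_err              # Eq. (26)

    for _ in range(iterations):
        J = cost(Kp, Ki)
        dJ_dKp = (cost(Kp + delta, Ki) - J) / delta   # forward-difference gradient (Eq. (29))
        dJ_dKi = (cost(Kp, Ki + delta) - J) / delta   # forward-difference gradient (Eq. (30))
        Kp -= alpha * dJ_dKp                          # Eq. (31)
        Ki -= alpha * dJ_dKi                          # Eq. (32)
    return Kp, Ki
```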
A series of values for the maximum magnitude of the pitch/roll rate after the collision were selected to represent different flight conditions, and the rate controllers were tuned individually for each value. The lookup maps in terms of the desired settling time used in this study are shown in Figure 5. Linear interpolation was used to calculate the controller parameters for conditions that fall between the tuned flight conditions.
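The gain-scheduling lookup with linear interpolation can be sketched as below. The breakpoints and gain values are illustrative placeholders only, not the tuned curves of Figure 5.

```python
# Sketch of the gain-scheduling lookup: interpolate tuned gains between operating points.
import numpy as np

rate_breakpoints = np.array([1.0, 3.0, 6.0, 10.0])   # max pitch/roll rate after impact (rad/s), illustrative
Kp_table = np.array([0.8, 1.1, 1.6, 2.2])            # illustrative tuned proportional gains
Ki_table = np.array([0.05, 0.08, 0.12, 0.18])        # illustrative tuned integral gains

def scheduled_gains(omega_pq_max):
    Kp = np.interp(omega_pq_max, rate_breakpoints, Kp_table)   # linear interpolation between points
    Ki = np.interp(omega_pq_max, rate_breakpoints, Ki_table)
    return Kp, Ki
```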
The pitch and roll rates are also utilized to determine the desired orientation of the drone for recovery after the impact:
$\begin{bmatrix} p \\ q \end{bmatrix}_{des} = k_{ori}^{des} \begin{bmatrix} \dot{p} \\ \dot{q} \end{bmatrix}$ (33)
where $k_{ori}^{des}$ is a coefficient used to determine the desired orientation, and the sign of the coefficient is determined by comparing the direction $d_{acc}$ with the direction of the pitch and roll. The maximum magnitude of the desired orientation is typically set at 45° to prevent over-rotation.
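A minimal sketch of Eq. (33) with the 45° magnitude clamp is given below; the coefficient is a hypothetical value whose sign is assumed to have already been selected from the impact direction as described above.

```python
# Sketch of the desired recovery orientation (Eq. (33)) with a 45-degree magnitude clamp.
import numpy as np

def desired_orientation(p_rate, q_rate, k_ori_des, max_angle_deg=45.0):
    pq_des = k_ori_des * np.array([p_rate, q_rate])   # scale post-impact rates into a tilt command
    mag = np.linalg.norm(pq_des)
    limit = np.radians(max_angle_deg)
    if mag > limit:                                    # clamp the commanded tilt to prevent over-rotation
        pq_des *= limit / mag
    return pq_des
```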

4. Simulations and Experiments

To verify the effectiveness of the proposed collision detection strategy, simulations were conducted in ROS Gazebo to compare the proposed approach with an existing method. To further validate the proposed collision detection and recovery control approaches in the physical world, experiments were conducted in the lab environment using a custom-built quadrotor drone, as shown in Figure 6. The primary parameters of the drone are listed in Table 1. For the simulation studies, a detailed 3D model of the drone was created with SolidWorks 2010, and its parameters match those of the physical drone.

4.1. Comparison Study

The proposed collision detection approach was compared with an existing collision detection method reported in [16], which flags a collision when the following condition holds:
$\max\left( |a_x|, |a_y| \right) > \gamma$ (34)
where $\gamma = 1\,g$ is the fixed threshold, as reported in [16].
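For reference, a one-line sketch of this fixed-threshold baseline in (34) is shown below; it simply compares the larger horizontal acceleration component against 1 g.

```python
# Sketch of the fixed-threshold baseline detector used for comparison.
def baseline_detect(a_x, a_y, gamma=9.81):
    # Flag a collision when either horizontal acceleration component exceeds gamma = 1 g.
    return max(abs(a_x), abs(a_y)) > gamma
```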
The comparison study was conducted in ROS Gazebo with PX4 software-in-the-loop (SITL) in Ubuntu 20.04. A wall was placed in the simulation world on the positive X-axis of the world frame, five meters away from the origin, with varying orientations for the different scenarios. Fifty Monte Carlo simulation trials were conducted for each scenario. For each trial, the drone started at the origin of the world frame and was commanded to fly along the positive X-axis direction after takeoff, with a random velocity of up to 2 m/s, hitting the wall from different angles. Two collision scenarios were investigated: (a) vertical wall collisions, where the drone flies horizontally into a vertical wall with different incidence angles, ranging from 90 degrees (head-on) to 45 degrees, and (b) glancing impacts, where the drone approaches the wall at a shallower angle, selected as 10 degrees in this simulation. The glancing impacts result in a brief bump and an unexpected yaw rotation of the drone. The success rate of the proposed collision detection approach was compared with that of the existing method reported in [16], calculated as follows:
$\text{Success Rate} = \dfrac{\text{Number of Successful Detections}}{\text{Total Number of Trials}}$ (35)
The simulation results are summarized in Table 2 and Table 3. Table 2 shows the success rate of detection for each scenario, and Table 3 presents the average time needed for collision detection. Across all scenarios, both strategies detected impacts in roughly 0.012 s, which is about one time step after the collision. Regarding the detection success rate, the proposed detection strategy successfully identified all fifty vertical wall collisions, outperforming the existing baseline, which missed two collision events. The two missed detections may have been caused by a lower flying velocity, so the IMU signal did not reach the fixed threshold of the existing method. In comparison, the proposed approach introduced adaptive thresholding, which allowed the drone to detect collisions even with subtle impacts. With the more challenging glancing impacts, the proposed method detected 47 collisions, still outperforming the existing method, which only detected 35 impact events.

4.2. Physical Experiment

The drone was powered by a 4000 mAh Li-Po battery and equipped with four 1750 KV motors, each paired with a three-blade, 3.5-inch propeller. Flight control was handled by an open-source Pixhawk32 V6 flight controller, which estimates the drone’s orientation using signals from an onboard 9-DOF IMU. An NVIDIA Jetson Xavier NX GPU module, running Ubuntu 20.04 and ROS, served as the companion computer to execute the collision detection and recovery algorithms. The Jetson Xavier NX communicates with the flight controller through the MAVLink protocol, receiving data for collision detection and sending commands for recovery control.
The drone was manually controlled in ‘Stabilized Mode’ from the Pixhawk and switched to ‘Offboard Mode’ only after a collision was detected and during the recovery phase. For safety, a rope was attached to the center of the drone’s frame from above to prevent it from crashing in case of failure. The rope was held loosely to minimize its effect on the flight. The flight data were recorded and saved on the Jetson Xavier NX companion computer for post-experiment analysis.
The test environment was carefully designed, with a hard, solid wood plate placed vertically as a ‘wall’ for the drone to collide with, surrounded by a safety net for additional protection. A total of six test trials were conducted, each preceded by a thorough restart of the collision detection and recovery code to ensure a consistent starting state. During the tests, all of the IMU readings were timestamped when the companion computer received them. The collision detection algorithm also logged a timestamp when a collision was detected. The difference between the timestamps was treated as the time required for detection. During the test, the round-trip delay was identified to be around 5–7 ms.
The physical experiments were conducted with six trials. In the first five trials, the drone collided with the wall at an angle of approximately 90 degrees. The drone was initially placed about 1 m away from the wall for each trial, and it was directed to fly towards the wall after takeoff with a velocity of up to 2 m/s. The velocity of the drone for each trial was recorded, as shown in Table 4. During the first five trials, the drone’s tilt angle was kept under 15 degrees when approaching the wall before the impacts. Keeping the tilt angle consistent avoids additional control adjustments before the impacts, which helps to obtain more reliable experimental results.
Trial six was conducted with more aggressive flight maneuvers to determine if the proposed approach could distinguish collisions from aggressive maneuvers. In Trial six, the maximum tilt angle and the maximum pitch/roll rate were increased to 70 degrees and 720 deg/s, respectively, to allow the drone to change direction rapidly from left to right in the air. During the tests, the smoothing factor and time step were selected as $\beta = 0.74$ and $dt = 0.008$ s.
In the first five trials, the proposed detection approach successfully detected the collisions, and the developed recovery controller managed to recover the drone to stable flight after each collision. Across the five trials, the collision detection algorithm took about 0.032 s to flag a collision, which was a bit slower than in simulations. The longer collision detection time in physical experiments could be due to the latency of communication and onboard computations. During the experiments, the recovery controller effectively reacted to the disturbance and rotation caused by the collision impacts, controlling the drone to move away from the wall. Figure 7 shows the pitch rate of the drone for all five tests, and Figure 8 shows the corresponding yaw angles and rates. All the lines plotted in Figure 7 and Figure 8 are aligned such that the impact occurs at time t = 0.
In Figure 7 and Figure 8, some small oscillations were observed, which may be a result of the drone’s structural vibrations after the impact. From Figure 7, it can be seen that the drone was able to rotate itself away from the impact after collision detection. From Figure 8, it can also be seen that the drone stabilized its yaw motion within 0.2 s after the collision. Snapshots of Trial #5 are shown in Figure 9, illustrating the collision detection and drone recovery process. In this trial, the total time for drone recovery was approximately 0.6 s. The flight control mode switched to ‘Offboard Mode’ 0.08 s after the impact, taking about 0.3 s to reach its desired attitude and another 0.14 s for stabilization.
In the flight test with aggressive maneuvers, the magnitudes of the measured and expected horizontal accelerations and their difference, $\|a_{\mathrm{IMU},xy}\|$, $\|a_{e,xy}\|$, and $\|\Delta a_{xy}\|$, are plotted in Figure 10.
The experimental results demonstrated that the drone could successfully detect and recover from collisions using the proposed approaches. The recovery controller effectively reoriented the drone and stabilized it after the impact. It is also demonstrated that the proposed detection method can distinguish between collisions and aggressive flight maneuvers.

5. Conclusions

This paper proposed a drone collision detection method based on onboard IMU data and a recovery control strategy using a gain scheduling technique. The proposed collision detection method compares the calculated attitudes with the measurements from the IMU to determine whether a collision has occurred. When a collision is detected, a collision characterization approach characterizes it, and a gain scheduling-based recovery controller performs the recovery control. The proposed methods can effectively detect a drone collision and bring the drone back to a stable, upright flying condition after colliding with a vertical wall. A comparison study was performed in simulation under two scenarios, vertical wall collisions and glancing impacts. In both scenarios, the proposed detection strategy outperformed the existing method in terms of detection success rate. Physical experiments were also conducted to validate the proposed collision detection and recovery control approaches. The experimental results demonstrated that the proposed approaches could detect a collision and recover the drone within 0.8 s. Furthermore, the dynamically adjusted thresholds used in the proposed collision detection method allow the drone to distinguish collisions from aggressive maneuvers, which is crucial for operational scenarios that require rapid direction changes.
In our future research, the proposed drone collision detection and recovery control strategy will be extended to various scenarios, including detection of and recovery from collisions with horizontal obstacles. The proposed collision detection and recovery control approaches only work under the condition that there are no sensor failures or structural damages. Generalizing the proposed approaches to relax these conditions will form an important part of our future research. Furthermore, we will continue to investigate and improve the proposed approaches to enhance overall performance and test them with different kinds of drones in various environments.

Author Contributions

Investigation, X.H.; Resources, G.L.; Writing—original draft, X.H.; Writing—review & editing, G.L. and Y.L.; Supervision, G.L. and Y.L.; Funding acquisition, G.L. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the Directorate Technical Airworthiness and Engineering Support (DTAES) 6 under the project number TC301 and in part by Natural Sciences and Engineering Research Council (NSERC) of Canada.

Data Availability Statement

Data are contained within the article. Further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richards, A.; How, J.P. Aircraft trajectory planning with collision avoidance using mixed integer linear programming. In Proceedings of the 2002 American Control Conference (IEEE Cat. No. CH37301), Anchorage, AK, USA, 8–10 May 2002; Volume 3, pp. 1936–1941. [Google Scholar] [CrossRef]
  2. Santos, M.C.P.; Santana, L.V.; Brandão, A.S.; Sarcinelli-Filho, M. UAV Obstacle Avoidance Using RGB-D System. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS 2015), Denver, CO, USA, 9–12 June 2015; pp. 312–319. [Google Scholar] [CrossRef]
  3. Roelofsen, S.; Gillet, D.; Martinoli, A. Reciprocal collision avoidance for quadrotors using on-board visual detection. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 4810–4817. [Google Scholar] [CrossRef]
  4. Lopez, B.T.; How, J.P. Aggressive collision avoidance with limited field-of-view sensing. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 1358–1365. [Google Scholar] [CrossRef]
  5. Fan, Z.; Xia, Z.; Lin, C.; Han, G.; Li, W.; Wang, D.; Chen, Y.; Hao, Z.; Cai, R.; Zhuang, J. UAV Collision Avoidance in Unknown Scenarios with Causal Representation Disentanglement. Drones 2025, 9, 10. [Google Scholar] [CrossRef]
  6. Lindqvist, B.; Mansouri, S.S.; Agha-Mohammadi, A.-A.; Nikolakopoulos, G. Nonlinear MPC for collision avoidance and control of UAVs with dynamic obstacles. IEEE Robot. Autom. Lett. 2020, 5, 6001–6008. [Google Scholar] [CrossRef]
  7. Potena, C.; Nardi, D.; Pretto, A. Joint vision-based navigation, control and obstacle avoidance for UAVs in dynamic environments. In Proceedings of the 2019 European Conference on Mobile Robots (ECMR), Prague, Czech Republic, 4–6 September 2019; pp. 1–7. [Google Scholar] [CrossRef]
  8. Arshad, M.A.; Ahmed, J.; Bang, H. Quadrotor Path Planning and Polynomial Trajectory Generation Using Quadratic Programming for Indoor Environments. Drones 2023, 7, 122. [Google Scholar] [CrossRef]
  9. Mulgaonkar, Y.; Makineni, A.; Guerrero-Bonilla, L.; Kumar, V. Robust Aerial Robot Swarms Without Collision Avoidance. IEEE Robot. Autom. Lett. 2018, 3, 596–603. [Google Scholar] [CrossRef]
  10. Naldi, R.; Torre, A.; Marconi, L. Robust Control of a Miniature Ducted-Fan Aerial Robot for Blind Navigation in Unknown Populated Environments. IEEE Trans. Control Syst. Technol. 2015, 23, 64–79. [Google Scholar] [CrossRef]
  11. Edgerton, K.; Throneberry, G.; Takeshita, A.; Hocut, C.M.; Shu, F.; Abdelkefi, A. Numerical and Experimental Comparative Performance Analysis of Emerging Spherical-Caged Drones. Aerosp. Sci. Technol. 2019, 95, 105512. [Google Scholar] [CrossRef]
  12. Triviño, I.V.; Skrivervik, A.K. Small Antennas with Broad Beamwidth Integrated on a Drone Enclosed in a Protective Structure. In Proceedings of the 17th European Conference on Antennas and Propagation (EuCAP), Florence, Italy, 26–31 March 2023; pp. 1–5. [Google Scholar] [CrossRef]
  13. Sareh, P.; Chermprayong, P.; Emmanuelli, M.; Nadeem, H.; Kovac, M. Rotorigami: A Rotary Origami Protective System for Robotic Rotorcraft. Sci. Robot. 2018, 3, eaah5228. [Google Scholar] [CrossRef]
  14. Shu, J.; Chirarattananon, P. A Quadrotor with an Origami-Inspired Protective Mechanism. IEEE Robot. Autom. Lett. 2019, 4, 3820–3827. [Google Scholar] [CrossRef]
  15. Mintchev, S.; Shintake, J.; Floreano, D. Bioinspired Dual-Stiffness Origami. Sci. Robot. 2018, 3, eaau0275. [Google Scholar] [CrossRef]
  16. Bakir, A.; Özbek, D.; Abazari, A.; Özcan, O. SCoReR: Sensorized Collision Resilient Aerial Robot. In Proceedings of the IEEE International Conference on Soft Robotics (RoboSoft), Singapore, 3–7 April 2023; pp. 1–7. [Google Scholar] [CrossRef]
  17. Li, B.; Liu, K.; Ma, L.; Huang, D.; Li, Y. A Bionics-Based Recovery Strategy for Micro Air Vehicles. IEEE Trans. Ind. Electron. 2023, 70, 6068–6077. [Google Scholar] [CrossRef]
  18. Liu, Z.; Karydis, K. Toward Impact-Resilient Quadrotor Design, Collision Characterization and Recovery Control to Sustain Flight after Collisions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 183–189. [Google Scholar] [CrossRef]
  19. Mulgaonkar, Y.; Liu, W.; Thakur, D.; Daniilidis, K.; Taylor, C.J.; Kumar, V. The Tiercel: A Novel Autonomous Micro Aerial Vehicle That Can Map the Environment by Flying into Obstacles. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 7448–7454. [Google Scholar] [CrossRef]
  20. Dicker, G.; Chui, F.; Sharf, I. Quadrotor Collision Characterization and Recovery Control. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5830–5836. [Google Scholar] [CrossRef]
  21. Elagib, R.; Karaarslan, A. Sliding Mode Control-Based Modeling and Simulation of a Drone. J. Eng. Res. Rep. 2023, 24, 32–41. [Google Scholar] [CrossRef]
  22. Mathewson, R.; Fahimi, F. Nonlinear Adaptive Sliding Mode Control with Application to Drones. Nonlinear Eng. 2023, 12, 20220268. [Google Scholar] [CrossRef]
  23. Thanh, H.L.N.N.; Hong, S.K. Quadcopter Robust Adaptive Second Order Sliding Mode Control Based on PID Sliding Surface. IEEE Access 2018, 6, 66850–66860. [Google Scholar] [CrossRef]
  24. Darwito, P.A.; Indayu, N. Adaptive Neuro-Fuzzy Inference System Based on Sliding Mode Control for Drone Trajectory Tracking with the Presence of External Disturbance. J. Intell. Syst. Control 2023, 2, 33–46. [Google Scholar] [CrossRef]
  25. Merabti, H.; Bouchachi, I.; Belarbi, K. Nonlinear Model Predictive Control of Quadcopter. In Proceedings of the 16th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Monastir, Tunisia, 21–23 December 2015; pp. 208–211. [Google Scholar] [CrossRef]
  26. Kimura, T.; Hoshino, K.; Asano, Y.; Honda, A.; Motooka, N.; Ohtsuka, T. Application of Nonlinear Model Predictive Control to Quadcopter Equipped with Internal Control System. In Proceedings of the 2022 SICE International Symposium on Control Systems (SICE ISCS), Yokohama, Japan, 8–10 March 2022; pp. 51–57. [Google Scholar] [CrossRef]
  27. Baharuddin, A.; Basri, M.A. Self-Tuning PID Controller for Drone Using Fuzzy Logic. Int. J. Robot. Control Syst. 2023, 3, 728–748. [Google Scholar] [CrossRef]
  28. Dounis, A.I.; Kofinas, P.; Alafodimos, C.; Tseles, D. Adaptive Fuzzy Gain Scheduling PID Controller for Maximum Power Point Tracking of Photovoltaic System. Renew. Energy 2013, 60, 202–214. [Google Scholar] [CrossRef]
  29. Fekik, A.; Azar, A.T.; Lamine, H.M.; Denoun, H. Modeling and Simulation of Drone Using Self-Tuning Fuzzy-PI Controller. In Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2023; pp. 231–251. [Google Scholar] [CrossRef]
  30. Coulombe, C.; Saussié, D.; Achiche, S. Modeling and Gain-Scheduled Control of an Aerial Manipulator. Int. J. Dyn. Control 2021, 10, 217–229. [Google Scholar] [CrossRef]
  31. Derrouaoui, S.H.; Bouzid, Y.; Guiatni, M. PSO Based Optimal Gain Scheduling Backstepping Flight Controller Design for a Transformable Quadrotor. J. Intell. Robot. Syst. 2021, 102, 67. [Google Scholar] [CrossRef]
  32. Julián-Iranzo, P.; Moreno, G.; Riaza, J.A. The Fuzzy Logic Programming Language Fasill: Design and Implementation. Int. J. Approx. Reason. 2020, 125, 139–168. [Google Scholar] [CrossRef]
  33. Yoo, D.-W.; Oh, H.-D.; Won, D.-Y.; Tahk, M.-J. Dynamic Modeling and Control System Design for Tri-Rotor UAV. In Proceedings of the 2010 3rd International Symposium on Systems and Control in Aeronautics and Astronautics, Harbin, China, 8–10 June 2010; pp. 762–767. [Google Scholar] [CrossRef]
  34. Singh, R.; Kumar, R.; Mishra, A.; Agarwal, A. Structural Analysis of Quadcopter Frame. Mater. Today Proc. 2020, 22, 3320–3329. [Google Scholar] [CrossRef]
  35. Nguyen, D.-T.; Saussié, D.; Saydy, L. Fault-Tolerant Control of a Hexacopter UAV Based on Self-Scheduled Control Allocation. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; pp. 385–393. [Google Scholar] [CrossRef]
  36. Gupte, S.; Mohandas, P.I.T.; Conrad, J.M. A Survey of Quadrotor Unmanned Aerial Vehicles. In Proceedings of the 2012 IEEE Southeastcon, Orlando, FL, USA, 15–18 March 2012; pp. 1–6. [Google Scholar] [CrossRef]
  37. Mohsin, A.K.; Abdulhady, J.A. Comparing Dynamic Model and Flight Control of Plus and Cross Quadcopter Configurations. FME Trans. 2022, 50, 683–692. [Google Scholar] [CrossRef]
  38. Lee, T.; Leok, M.; McClamroch, N.H. Geometric Tracking Control of a Quadcopter UAV on SE(3). In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 5420–5425. [Google Scholar] [CrossRef]
  39. Faessler, M.; Fontana, F.; Forster, C.; Scaramuzza, D. Automatic Re-Initialization and Failure Recovery for Aggressive Flight with a Monocular Vision-Based Quadcopter. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1722–1729. [Google Scholar] [CrossRef]
Figure 1. (a) The ‘X’ configuration; (b) the ‘+’ configuration.
Figure 2. The world coordinate and the frame of the drone.
Figure 3. A flowchart of the detection algorithm.
Figure 4. Structure of the gain-scheduled recovery controller.
Figure 5. Gain-scheduling curves for controller gains as functions of maximum rate magnitude (rad/s): (a) pitch/roll rate controller; (b) yaw rate controller.
Figure 6. The drone and main components.
Figure 7. Pitch rate of the drone for all five tests.
Figure 8. Yaw angles (a) and rates (b) of the drone after collision for each trial.
Figure 9. Keyframes of the experiments. The impact between the drone and the vertical wall is shown in (a). In (b), the drone became slightly unstable due to the impact. In (c), the recovery controller re-orientated the drone and made it fly away from the wall, and in (d) the drone returned to a level attitude.
Figure 10. Magnitude of measured and expected horizontal linear acceleration and the value of the difference in Trial 6.
Table 1. Main parameters of the drone.
Item            Value          Description
m (kg)          1.47           Mass
l (m)           0.23           Diagonal arm length
k_t (N/rpm²)    1.895 × 10⁻⁶   Propeller thrust coefficient
k_d (N/rpm²)    8.548 × 10⁻⁸   Propeller drag coefficient
Table 2. Comparison of collision detection accuracy for the proposed strategy vs. the existing method.
Collision Scenario    Proposed Strategy: Successful Detections / Success Rate    Existing Method: Successful Detections / Success Rate
Vertical Wall         50 / 100%                                                  48 / 96%
Glancing impact       47 / 94%                                                   35 / 70%
Table 3. Average time needed to detect a collision.
Collision Scenario    The Proposed Strategy    The Existing Method
Vertical Wall         0.011 s                  0.012 s
Glancing impact       0.013 s                  0.013 s
Table 4. The velocity of the drone before the impact for Trials 1–5.
Trial Number    Impact Velocity
1               0.55 m/s
2               1.92 m/s
3               1.06 m/s
4               1.31 m/s
5               0.96 m/s