Vision-Based Flying Obstacle Detection for Avoiding Midair Collisions: A Systematic Review

This paper presents a systematic review of articles on computer-vision-based flying obstacle detection with a focus on midair collision avoidance. Publications from inception until 2022 were searched in the Scopus, IEEE, ACM, MDPI, and Web of Science databases. From the initial 647 publications obtained, 85 were finally selected and examined. The results show an increasing interest in this topic, especially in relation to object detection and tracking. Our study hypothesizes that the widespread access to commercial drones, the improvements in single-board computers, and their compatibility with computer vision libraries have contributed to the increase in the number of publications. The review also shows that the proposed algorithms are mainly tested using simulation software and flight simulators, and only 26 papers report testing with physical flying vehicles. This systematic review highlights other gaps to be addressed in future work. Several identified challenges are related to increasing the success rate of threat detection and testing solutions in complex scenarios.


Introduction
Although the sky may seem too big for two flying vehicles to ever collide, the facts show that midair collisions still occur from time to time and are a major concern for aviation safety authorities. As a preventative measure, pilots are instructed to keep one eye out of the cockpit, scan the sky for potential threats, and be prepared to maneuver to avoid a potential accident [1,2]. However, this "see-and-avoid" rule has several important limitations. First, pilots cannot look outside all the time, as they must also check the instruments inside the cockpit, which measure, for example, airspeed or engine temperature. The time spent looking inwards and outwards must therefore be carefully balanced. Pilots who spend most of their time looking at the instruments, and this is especially true of novice pilots, endanger the aircraft by ignoring other traffic around them.
The "80-20" rule suggests that pilots should spend no less than 80% of their time looking out and no more than 20% of their time checking instruments. The 80 does not just refer to looking for other traffic, as the pilot also looks for visual cues used for navigation. In any case, even if a pilot or crew member could spend 100% of their time scanning the sky, this would not mean that no threat could escape the human eye. In fact, the fraction of our visual field that allows us to detect anything in the sky is extremely small. Therefore, for practical scanning, pilots are also instructed to follow a pattern, dividing the horizon into regions and taking a moment (1-2 s) to focus before moving on to the next region. Hence, the horizon is divided into nine regions, and the pilot's eye scans one ninth at a time. In other words, at least 89% of the horizon remains unattended at all times. This gives a clear idea of the chances of a threat escaping the human eye, especially when you consider that a light aircraft, such as a 9-meter-wingspan Piper PA-28 Cherokee, approaching head-on at 90 knots on a collision course, 5 seconds before impact, looks no bigger than a sparrow [3]. To make matters worse, the performance of the human eye can be reduced by cloud cover, glare from the sun, fatigue, and many other factors.
With today's technologies, such as SSR (secondary surveillance radar), transponders, TCAS (traffic collision avoidance system), and, more recently, ADS-B (automatic dependent surveillance-broadcast), one might think that midair collisions should no longer occur. However, they do, because these technologies are not mandatory in all countries, airspaces, or aircraft. They are also fallible, and human factors still cause accidents. The new century has also brought a new player onto the scene. Since 2005, the use of unmanned aerial vehicles (UAVs) or drones in commercial applications has grown exponentially [4,5], increasing the need for safe path planning [6] and obstacle avoidance [7]. New commercial applications are not without risk, potentially causing damage and disrupting other aerial activities [8]. In 2017, a DJI Phantom 4 collided with a US Army UH-60M helicopter near Coney Island, New York. This was the first documented case of a collision between a UAV and a manned aircraft, but the number of UAV sightings by pilots has increased dramatically in recent years. Aircraft collision avoidance is therefore a challenging problem due to the stochastic environment and uncertainty about the intentions of other aircraft [9].
For these reasons, and particularly in the case of manned and unmanned light aircraft in uncontrolled airspace and at aerodromes, various safety agencies and pilot associations are encouraging pilots and UAV users to install some form of electronic conspicuity (EC) device on their vehicles to be more aware of nearby aircraft [10][11][12]. An example of such EC technology is FLARM (FLight alARM, https://flarm.com/, accessed on 11 September 2023). A FLARM device predicts potential conflicts and alerts the pilot with visual and audible warnings [13]. The device was originally designed for gliders, which are slower than light aircraft. The main limitation of these devices is compatibility, as a FLARM can only display air traffic that uses another matching FLARM. Incompatibility occurs, for example, when the communication solution is different due to the use of different frequencies (the US version of FLARM devices uses a different band than the European one) or different protocols (a FLARM device that has not been updated for a year is not compatible with the latest version of the protocol and will automatically shut down). In addition, some devices are active, i.e., they transmit and share their position with others, while others are passive, i.e., they listen to the transmissions of others but remain invisible to them (e.g., many portable air traffic detectors only listen to the transponders of other aircraft). In this "Tower of Babel" scenario, when communications fail or are absent, pilots should rely not solely on their eyes to detect threats, but on an artificial eye capable of scanning the sky faster, farther, wider, sharper, and more consistently.
The most widely used solution for preventing accidents is radar. There are many types of radar used in aviation, but the most important is the primary surveillance radar (PSR). PSR detects the position and altitude of an aircraft by measuring the time it takes for radar waves to bounce off the aircraft and return to the radar antenna. These radar systems can detect both transponder-equipped and non-transponder-equipped aircraft. PSR is not perfect, however: it cannot identify the detected obstacle, and the required equipment is expensive [14]. Computer vision solutions have an advantage here because the equipment is relatively cheap and, depending on the implementation, the solution can identify the incoming obstacle. Nevertheless, the effectiveness of computer vision can be impacted by light conditions. This is why some researchers combine both approaches [15] to overcome each one's limitations.
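As a rough back-of-the-envelope sketch (not taken from any of the reviewed papers), the PSR ranging described above reduces to a time-of-flight calculation:

```python
# Sketch of the primary-surveillance-radar (PSR) ranging principle:
# distance = (speed of light * round-trip echo time) / 2.
C = 299_792_458.0  # speed of light in m/s


def psr_range_m(round_trip_s: float) -> float:
    """Distance to a target from the echo's round-trip time, in meters."""
    return C * round_trip_s / 2.0


# An echo returning after 1 ms corresponds to a target roughly 150 km away.
range_km = psr_range_m(1e-3) / 1000.0
```

The fraction of the energy reflected back depends on the target's radar cross section, which is why small UAVs are hard for PSR to see at all; the calculation above only covers the ranging geometry.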
This systematic review, therefore, focuses on solutions that implement computer vision for obstacle avoidance in flight. Systematic reviews help researchers to learn about the current state of the art and to extend it with new studies. Computer vision is a combination of several algorithms that typically mimic human vision [16][17][18][19]. Inspired by the visual stimulus received by animals moving through the world, optical flow is defined as the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene [20][21][22]. Optical flow can be applied to object segmentation, object detection, stereo disparity measurement, and motion compensation [23,24]. Another prominent approach to calculating apparent motion is the Kalman filter, an algorithm that estimates unknown variables given observed measurements over time. The Kalman filter computes the motion vector of moving objects, making it possible to track them [25]. Because computer vision applications are relatively easy to implement, the number of publications on collision avoidance systems based on computer vision has increased greatly. Other computer vision algorithms for motion detection can be found in recent reviews [26,27].
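For illustration only, here is a minimal sketch of the Kalman filter idea mentioned above, assuming a one-dimensional constant-velocity motion model and noisy position measurements; all numeric parameters are hypothetical, not taken from any reviewed paper:

```python
# Minimal 1-D constant-velocity Kalman filter.
# State x = [position, velocity]; only the position is measured.

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    x = [measurements[0], 0.0]            # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    for z in measurements[1:]:
        # Predict: x <- F x and P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with position measurement z (H = [1, 0], noise variance r)
        S = P[0][0] + r                   # innovation covariance
        K = [P[0][0] / S, P[1][0] / S]    # Kalman gain
        y = z - x[0]                      # innovation (measurement residual)
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x  # [estimated position, estimated velocity]


# An object moving at ~2 px/frame, observed through noisy position readings:
est = kalman_track([0.0, 2.1, 3.9, 6.2, 7.8, 10.1])
```

Even though velocity is never measured directly, the filter infers it from the sequence of positions, which is exactly what makes it useful for computing the motion vector of a tracked obstacle.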
Although several reviews have been carried out on the detection of moving objects using computer vision [28,29], to our knowledge, none has focused on the application of obstacle avoidance to light and unmanned aircraft. Furthermore, there are several reasons for undertaking the present systematic review. First, it was considered pertinent to conduct a comprehensive literature review on the detection of moving objects by computer vision on light aircraft, with the aim of providing an overview of existing solutions. Second, it sought to identify possible gaps in existing studies on the detection of flying objects by computer vision in light and unmanned aircraft. Finally, it was relevant to identify topics for future studies related to the main topic of this systematic review.
This paper is organized as follows: The systematic review methodology, data extraction, research questions, search strategy, and answers to the research questions are described in Section 2. Section 3 contains the results and limitations. The discussion of the results is provided in Section 4. Finally, Section 5 presents the conclusions of the review.

Research Methodology
This systematic review was conducted using the common framework [30], and the reporting was done according to the PRISMA 2020 statement: an updated guideline for reporting systematic reviews [31].

Search Criteria
Five databases were searched for this systematic review: Scopus (Elsevier), IEEE Xplore (Institute of Electrical and Electronics Engineers), ACM Digital Library (Association for Computing Machinery), Multidisciplinary Digital Publishing Institute (MDPI), and WoS (Web of Science). The studies used in this systematic review were identified using the following search query: TITLE-ABS-KEY(((collision OR crash OR accident OR impact OR hit OR "air proximity" OR sense) AND (avoidance OR evasion OR prevention OR avoid)) AND (aircraft OR aerospace OR ((flying OR "unmanned aerial vehicle")) AND (robot OR vehicle)) AND (vision OR visual) AND (detection OR tracking) AND (simulation OR simulator OR "virtual reality" OR "augmented reality")).
Starting with the papers obtained through the search string described in the previous section, the following criteria were used to include the publications in our review:

• The papers used computer vision only to detect moving obstacles or threats.
• Object detection was used to avoid midair collisions in manned or unmanned aircraft.
• The papers were written in English.

Conversely, the following types of publications were excluded:
• Abstracts without full text.
• Systematic reviews, meta-analyses, and survey publications.

Search Process
The general description of the publication extraction and selection process is shown in Figure 1. The search string was queried in each of the five databases, yielding a total of 647 records. After removing 158 duplicate references and conference papers, one researcher independently filtered the remaining 489 articles by examining the full content of each publication. If the researcher was unsure whether to include or exclude an article, the publication was presented to the remaining researchers for discussion to reach a consensus decision. On completion of the screening process, 404 articles were excluded according to the remaining exclusion criteria, resulting in a final selection of 85 publications.

Research Directives
The six research questions (RQs) for this systematic review and their rationale were as follows:

RQ1. How many papers have been published on computer-vision-based moving obstacle detection for midair collision avoidance? Is there a time trend?
Finding out how many articles have been published on the use of computer vision for a light aircraft to detect possible obstacles is intended to understand which technologies are most commonly used and to identify possible next research steps. It also seems appropriate to determine whether there is a time trend (increasing or decreasing) in the production of publications on this topic.

RQ2. How many cameras did the algorithms need?
Investigating which type and number of cameras gives the best results for a particular algorithm or situation seems interesting for future studies. In general terms, a stereo camera has two or more image sensors to simulate human binocular vision, giving it the ability to perceive depth, unlike a monocular camera. Both stereo and monocular cameras are able to perform object detection, but only stereo cameras are able to calculate the distance to an object with high accuracy [32].
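To illustrate why a stereo pair can recover distance, here is a sketch of the standard pinhole-model relation between depth and disparity; the focal length and baseline values below are made up for illustration, not drawn from any reviewed system:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity under the pinhole model: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the object between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


# With a 700 px focal length and a 12 cm baseline, a 4 px disparity
# places the object 21 m away.
z = stereo_depth_m(700, 0.12, 4)
```

Because depth is inversely proportional to disparity, distant obstacles produce sub-pixel disparities, which is one reason stereo ranging of faraway aircraft is hard and why a monocular camera alone cannot provide accurate distance at all.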

RQ3. What are the most commonly used computer vision techniques?
The goal of vision recognition is to imitate or even surpass the human eye. Vision recognition processes are grouped as follows:

• Feature extraction is the identification of unique data in an image. Lines and corners are often good features because they provide large intensity contrasts. Feature extraction algorithms are the basis for object tracking and detection [33].
• Motion detection is the detection of changes in the physical position of an object. For static cameras, background subtraction algorithms can be used to detect motion. For moving cameras, optical flow can be used to detect the movement of pixels in the image [34].
• Single-view geometry is the calculation of the geometry of an object using images from a single camera.
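The background-subtraction idea for static cameras can be sketched with simple frame differencing on toy intensity grids; this is our own minimal illustration, not an algorithm taken from the reviewed papers:

```python
# Toy frame-differencing motion detector for a static camera: pixels whose
# intensity changes by more than a threshold between frames are flagged.

def moving_pixels(prev, curr, threshold=20):
    """Return (row, col) coordinates where two frames differ noticeably."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, val in enumerate(row)
            if abs(val - prev[r][c]) > threshold]


# A bright "object" moves from column 0 to column 2 between two 2x3 frames:
frame_a = [[200, 10, 10],
           [ 10, 10, 10]]
frame_b = [[ 10, 10, 200],
           [ 10, 10, 10]]
changed = moving_pixels(frame_a, frame_b)
```

Real background-subtraction algorithms maintain a statistical background model rather than comparing consecutive frames, but the principle is the same: motion shows up as pixels that deviate from the expected static scene. This is also why the technique breaks down for moving cameras, where every pixel changes and optical flow is needed instead.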
Visual recognition techniques are implemented using a variety of algorithms. The classification of published articles according to the visual recognition algorithms used provides an overview of the most commonly used algorithms in the literature and the situations in which they are most applicable.

RQ4. Which tools have been most commonly used to check the algorithms' performance?
The testing tools can be divided into those carried out on aircraft themselves, those carried out using simulators, and a combination of the two. A flight simulator is a device that artificially recreates the flight of an aircraft and the environment in which it flies. It has the advantage that it can be used safely for pilot training, aircraft development, and the study of aircraft characteristics and controls. A simulator must be as close to reality as possible. It should therefore include the equations that govern the flight of the aircraft, its response to the application of flight controls, the effects of other aircraft systems, and the response of the aircraft to any external factors that threaten its flight. Obviously, different test tools may be used at each stage of an investigation into vision-based obstacle detection in flight. Testing in a controlled environment, such as a simulation, can help investigators identify problems with the algorithm at an early stage. A simulator can also prevent accidents caused by faulty algorithms. Once an algorithm has passed several tests in a controlled environment, the authors can start testing in the real world. Testing in an uncontrolled environment presents different challenges, such as wind, light sources, or clouds. Therefore, a review of the testing methods used and their limitations is of interest for future studies.

RQ5. What kinds of flying vehicles were used to test the collision avoidance algorithms?
The classification of the published papers according to the type of aircraft used to test their algorithms gives us an idea of the characteristics and limitations of the proposed solutions, and whether they are ready to be applied on a light or unmanned aircraft. Furthermore, the choice of UAVs for testing depends on many factors, such as resources, objectives, or legislation. Multirotor UAVs, airplane models, or helicopter models can be used for low-altitude test cases or environments with multiple obstacles, such as a city or a forest. On the other hand, aircraft can be used for test cases with high altitudes and speeds, taking into account that special permissions may be required.
RQ6. What kind of software has been used to perform computer simulations over the years?
The classification of published articles according to the type of software used to perform their simulations indicates which type of software is trending or being abandoned. Different software may be used at different stages of the investigation, and some may be used as support. For testing the basic components of an algorithm in a numerical environment, Matlab and Simulink are probably a good choice. Flight simulators provide a more realistic environment in which investigators can infer the performance of the proposed solution. Finally, other software, such as the Robot Operating System (ROS) and Google Earth, could be used as supporting tools for the flight simulators.

Results
In total, 85 publications were extracted using the process described above. References to the 85 papers are included in Appendix A. The number of resulting publications shows the growth and evolution of moving obstacle avoidance using computer vision from flying vehicles over the years. Here, we answer the six research questions presented above in relation to the selected bibliography.

RQ1. How many papers have been published on computer-vision-based moving obstacle detection for midair collision avoidance? Is there a time trend?
Figure 2 shows the number of articles published per year throughout the period studied. The period begins with only one publication found between 1964 and 1999; for this reason, it was excluded from the trend. This is followed by a 5-year period between 2000 and 2004 with no publications. From 2005 to 2010, there is a constant and regular output of 2 or 3 articles per year, with a drop to 1 publication in 2008 and 2009. The year 2011 is a turning point, where a rapid increase in published articles is observed, with 9 in that year, the third-highest count after 13 in 2020 and 12 in 2021. Recent years also show a significant interest in moving object avoidance based on computer vision, with 7 articles in 2019, 13 in 2020, and 12 in 2021. The year 2022 is a special case with only 2 articles, but the authors found several papers published in that year that focus on automatic flight navigation avoiding environmental obstacles. Overall, publications on this topic show an increasing trend line over the years.

RQ2. How many cameras did the algorithms need?
The selected papers mainly distinguish between two types of cameras, stereoscopic and monocular. In addition, the works that use monocular cameras can be divided into four subgroups according to the number of monocular cameras used in the system: one, two, or even omnidirectional cameras, plus one study that uses an optical mouse sensor ([P12]). As can be seen in Figure 3, 61 articles use a single monocular camera, followed by 15 publications using stereo cameras ([P82]) and 7 papers using two monocular cameras ([P1], [P6], [P9], [P23], [P30], [P70], [P85]). Finally, we have 1 paper with an omnidirectional camera ([P42]). Thus, most of the papers use monocular cameras, which means that the authors focused on the detection of obstacles.

RQ3. What are the most commonly used computer vision techniques?
As can be seen in Figure 4, the most commonly used techniques are object detection (56 articles) and object tracking (39 articles). In this figure, some of the articles are counted more than once because the authors combine more than one technique in their solution. These combinations are shown in Figure 5. In fact, the most common combination is object detection plus object tracking (OD + OT), with a total of 17 articles. Furthermore, each process can be implemented using different algorithms. Table 1 shows the algorithms used in each of them. According to our results, 65.88% of the selected publications used object detection alone or combined it with another computer vision algorithm, as shown in Table 2.

RQ4. Which tools have been most commonly used to check the algorithms' performance?
As can be seen in Figure 6, the papers used three different methods to test their findings: using only aerial vehicles (30.58%), using a combination of aerial vehicles and simulators (22.35%), and, in most papers, using only simulators (47.05%). Testing with aerial vehicles is more expensive and can cause accidents. This may explain why most papers only use simulation to test their solutions.

RQ6. What kind of software has been used to perform computer simulations over the years?
The result of extracting the software used for simulations is shown in Table 3. Not all articles that carried out simulations mentioned the software used. Ten other articles describe developing their own software for the study. As a special case, the paper [P69] used the 3D computer graphics software tool Blender to create a simulation to test their solution. Figure 9 shows the number of publications using simulation over the years.

Discussion
To the best of our knowledge, this is the first systematic review of computer-vision-based moving obstacle detection for midair collision avoidance. The study included articles searched from inception to 2022, although only 23 years of publications were found. A total of 85 articles were selected from the 647 initial publications obtained from five databases (Scopus, IEEE, ACM, MDPI, and WoS). The results made it possible to evaluate the current situation of this growing field of knowledge.
The issue has attracted attention in recent years due to several factors. First, the availability of nonmilitary drones to the general public since 2006 [37,38] poses several challenges. The possibility of drones colliding not only with other unmanned aerial vehicles in the air but also with manned and passenger aircraft is a major concern today. In addition, the market offers a wide catalog of drones with distinctive features, such as integrated cameras, global positioning system devices, wireless connectivity, accelerometers, and altimeters. This easy access to drones also allows researchers to test different solutions without major risks [39].

Computer Vision
The second factor is computer vision. Computer vision began in the early 1970s as a way to mimic human vision, and it has continued to grow and improve ever since [40]. Thanks to many advances, computer vision libraries require less computing power to run their algorithms, making it more feasible for researchers to move their solutions to lower-end hardware [41].
Finally, single-board computers give researchers the ability to test on the fly without the need for heavy equipment [42]. A single-board computer is a single PC board with a processor, memory, and some form of I/O that allows it to function as a computer [43]. The size and weight of single-board computers make them perfect for mounting on a drone or light aircraft without affecting its performance in flight. In the mid-1970s, the "dyna-micro" was one of the first true single-board computers. The next major development in single-board computers came in 2008 from BeagleBoard.org, which created a low-cost, open-source development board called the BeagleBoard [43].
According to the reviewed papers, the detection of moving obstacles with computer vision starts with the images provided by a camera or a group of cameras. Computer vision cannot be accurate without good images at the best possible resolution. In the included studies, the majority of publications used a single monocular camera (71.76%). Papers using stereo cameras represent 17.64% of the publications. This is probably due to the fact that applications using stereo cameras are computationally expensive compared with those using monocular cameras [44].
The captured images must then be processed using computer vision algorithms to detect possible obstacles. The most commonly used vision recognition process identified in the papers was object detection. Object detection involves identifying a specific object in successive images. The perfect object detection algorithm must be fast and accurate. Object detection can be complemented by object tracking, which uses information from the previous frame to track objects and is faster than object detection [45].
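The detection-plus-tracking combination is usually motivated by cost: a heavy detector runs only occasionally, and a cheap tracker propagates its result in between. A sketch of that pattern follows, with `detect` and `track` as hypothetical stand-ins for a real detector and tracker:

```python
# Sketch of the OD + OT pattern: run a costly detector only every N frames
# and propagate its result with a fast tracker on the frames in between.

def process_stream(frames, detect, track, redetect_every=5):
    box, calls = None, {"detect": 0, "track": 0}
    results = []
    for i, frame in enumerate(frames):
        if box is None or i % redetect_every == 0:
            box = detect(frame)           # slow but accurate
            calls["detect"] += 1
        else:
            box = track(frame, box)       # fast, seeded by the previous box
            calls["track"] += 1
        results.append(box)
    return results, calls


# Dummy detector/tracker for demonstration: both just report a box per frame.
detect = lambda frame: (frame, frame)
track = lambda frame, prev_box: (frame, frame)
boxes, calls = process_stream(range(10), detect, track)
```

With a re-detection period of 5 over 10 frames, the expensive detector runs only twice while the tracker handles the other eight frames, which is the latency saving the reviewed papers exploit.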
Grouping the selected papers by the method used to test the collision avoidance algorithm shows that almost half of the publications (47.05%) use only computer simulations to verify and validate their solutions. Using these simulations is cheaper and safer than using a manned or unmanned aircraft. It is safer because it avoids the risks of testing a collision avoidance algorithm in real flight, where accidents can occur and have costly, or even fatal, consequences, especially in the early stages of development when the solution is less mature and more prone to error.

Testing Tools
Researchers prefer a controlled environment to carry out tests and improve their solutions before real-world trials take place. Matlab, a programming and numerical computing platform released to the public in 1984, can be used for data analysis, data visualization, algorithm development, and more [46]. Matlab's ease of use and large database of built-in algorithms make it one of the preferred methods for testing algorithms. However, since 2018, authors prefer to use flight simulators (5 papers from 2018 to 2021), Gazebo (4 papers from 2019 (Acropolis) to 2021 (Fortress)), and Blender v2.91 (1 paper in 2020). New improvements and increased realism in off-the-shelf flight simulators may be the reason for authors to switch to this software, in particular FlightGear v2020.3.18 and X-Plane v12.0. FlightGear is a free, open-source, multiplatform flight simulator whose first version was released in 1997. FlightGear was used in two papers described in RQ6, immediately after the launch of a major release, FlightGear 2.0, in 2010. X-Plane is a flight simulator available since 1995. In 2017, its engine received a major update (v11.0), providing greater graphical realism [47,48]. The recent release of the popular Microsoft Flight Simulator (v1.31.22.0), after many years since the last update, may make it another option to consider in future studies.
The use of real drones as a testing method for validating collision avoidance algorithms began in 2012, as shown in Figure 8. The use of drones helps researchers create a more realistic yet controlled testing environment, reducing interference when assessing the effectiveness of an algorithm. Although the Federal Aviation Administration issued a commercial drone permit in 2006, drones were not widely available at the time [49]. It was not until 2010 that the company Parrot released the Parrot AR.Drone 1.0 [50]. This drone was a huge commercial success, selling over half a million units. The Parrot AR.Drone 2.0 was released in 2012 [51]. In 2013, the Chinese company DJI launched the first camera-equipped drone, the Phantom 1 [52]. In 2018, the same company launched the DJI Mavic Air. This drone was designed to be portable, powerful, and accessible to enthusiasts of all levels. More interestingly, the DJI Mavic Air added an obstacle avoidance system for safety in the air [53]. Unfortunately, the authors could not find any details on the obstacle avoidance system used by DJI.
It is noteworthy that only four publications reported the use of a manned aircraft in the tests. As discussed above, the use of simulators does not incur the cost of using real vehicles and reduces the hazards. Small quadrotor UAVs are also affordable and have a very low cost compared with manned aircraft. However, it should be noted that the solutions tested on quadrotor UAVs may not be directly applicable to light manned aircraft or even larger drones due to factors such as vehicle speed, flight altitude, and weather conditions, to name a few.

Obstacles and Future Work
The algorithms, solutions, and proposals described in the articles included in this systematic review are not yet free from shortcomings that should be addressed in future revisions, which represents an opportunity for future work and developments in this field. Some problems are related to errors, inaccuracies, and false positives ([P7], [P24], [P41], [P42]). For example, [P42] reports 90% hits in near scenarios but 40% false alarms in far ones, which also shows the importance of testing in different scenarios to reveal the limitations of a proposal. Indeed, many authors are willing to test their solutions in additional scenarios, especially in more complex, crowded, and noisy environments, as the next step of their research ([P4], [P7], [P15], [P25], [P31], [P39], [P59], [P61]). Simulation software plays an important role here. For example, [P48] uses the minimalist proprietary DJI flight simulator, but the authors claim that another, more appropriate simulation software would be needed to further improve their algorithms. A ground-based video dataset, such as the one presented in [P32], may be the solution for evaluating video-based detection and avoidance systems.
However, beyond more realistic simulations and video datasets, many authors would like to extend their tests to the real world, i.e., to perform real flight tests with their solution embedded in a real manned or unmanned aircraft. Real tests reveal other problems related to vehicle speed ([P11], [P30], [P61]) or energy consumption ([P48]). For example, in [P11], a single 2D camera is used to generate a 3D model of the scene, but the lower the speed, the less information is obtained. On the contrary, in [P30], the authors point out that the faster the drone moves, the more blurred the video becomes, which reduces the accuracy of the algorithm.
The limited field of view of the cameras is another problem ([P25]). Some authors propose to address this in future work using additional cameras ([P61]) or other sensors, such as a depth sensor ([P60]), laser rangefinder ([P11]), or global positioning system (GPS) ([P18]). For example, in [P18], the authors plan to use GPS and stereo vision to determine the positions of both the vehicle and the obstacles in real tests. The tracking of obstacles and their direction and speed is another problem to be solved ([P8], [P10], [P18], [P25], [P30], [P41]). In particular, distance estimation is a real challenge ([P41]). The correct spatial positioning of an obstacle is key for avoidance algorithms ([P8], [P27], [P33], [P41], [P54]), where further research is needed to improve maneuvering, minimize effort, and reduce deviation after avoidance. Finally, one paper proposed to investigate collision avoidance approaches for multiple intruders ([P39]).
For manned aircraft, one problem with detecting and avoiding obstacles in the air is how to warn pilots of a potential threat. Again, preparing test scenarios with manned aircraft is expensive, and we believe that technologies such as virtual reality and augmented reality would help. Such technologies have grown considerably in recent years [54,55]. For example, immersive virtual reality could be combined with flight simulator software to reproduce test scenarios for prototyping and testing warning solutions. Augmented reality could be used to simulate approaching threats in flight and could also lead to new ways of alerting and guiding pilots to avoid a collision. It is worth noting that these technologies were included as keywords in the search, but no matching results were found. We believe that they are promising technologies that should be explored in future approaches.
From the discussions so far, it can be concluded that the publications are still at an early stage of research and that further progress is needed to find solutions to the problems identified.

Conclusions
"See and avoid" is a basic procedure that pilots must learn and apply during flight. Various technologies have been introduced to avoid midair collisions, but accidents still occur because these technologies are neither mandatory in all airspaces nor suitable for all aircraft. Technology can also fail and human error can occur, as was sadly demonstrated in 1986 when a Piper Cherokee light aircraft collided with the vertical stabilizer of a DC-9 airliner over Los Angeles International Airport (California) because the Piper did not have a Mode C transponder, which would have alerted others to its presence. In addition, neither pilot appeared to have seen the other aircraft. Computer vision can assist pilots in this task, leading to solutions that match or exceed the performance of the human eye, complementing the pilot's functions, or being used in autonomous systems to avoid obstacles and other flying vehicles.
This systematic review has shown that there is continuing interest in research into computer-vision-based moving obstacle detection for midair collision avoidance. A total of 85 papers were analyzed. The results show that researchers' attention is mostly focused on motion and object detection, as well as object tracking. In addition, the preferred way to test solutions is to use simulation software, such as Matlab in early papers or flight simulators in recent ones. Only a few papers have reported testing with physical flying vehicles, namely quadrotor UAVs, aircraft and helicopter models, and only one manned aircraft. Some of the current shortcomings, and therefore future challenges, relate to increasing the detection success rate and, in particular, to testing solutions in many different, more complex, and noisy scenarios. The use of real vehicles in real scenarios is considered by most authors to be a pending task.
In terms of possible improvements, the articles reviewed point to improving the detection of moving obstacles, with a focus on avoiding target loss due to occlusion. Multiple-obstacle avoidance is also an important feature that deserves more attention. Testing in new environments, such as obstacle detection at night, which will likely require night vision cameras, will improve the effectiveness of the algorithms. The authors agree that the algorithms need to extract more information about the obstacle, such as direction, speed, and distance. There is growing support for the use of stereoscopic cameras to improve distance estimation. It is also worth mentioning that knowing the aircraft's current position via GPS can be very helpful. As for simulators, there is a desire for more accurate simulated environments. A good simulator can help scientists extensively test their solutions in a variety of situations where an incident in a real environment could be dangerous or costly.
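As a brief illustration of why stereoscopic cameras help with distance estimation, the standard pinhole stereo relation Z = f·B/d recovers depth from the disparity between the two camera views. The sketch below is a minimal illustration of that relation, not taken from any of the reviewed systems; the parameter values in the comment are hypothetical.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from stereo disparity using the pinhole model Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in meters
    disparity_px -- horizontal pixel shift of the obstacle between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: an 800 px focal length, 10 cm baseline, and a
# 4 px disparity place the obstacle 20 m away.
# stereo_depth(800.0, 0.1, 4.0) -> 20.0
```

The inverse relationship between disparity and depth also explains a practical limitation noted above: distant obstacles produce tiny disparities, so small measurement errors translate into large depth errors.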
Looking beyond the single UAV or manned aircraft, networking technologies in the Internet of Things (IoT) era could help share obstacle data detected by one or more vehicles, adding information such as distance or bearing that could be helpful in congested areas, and even acting as a distributed multicamera obstacle detection system in a group of vehicles, such as UAV swarms. Integrating computer-vision-based algorithms with current electronic conspicuity (EC) devices, which already broadcast their own position to other compatible devices within range, would facilitate this.
Finally, it is essential to note that, for manned aircraft, none of the selected publications investigated how to alert the pilot or the person controlling the UAV that an incoming obstacle has been detected. Future publications should investigate more effective user interfaces for alerting a pilot to incoming obstacles. Virtual and augmented reality are technologies that could play an essential role in this regard [56].
Informed Consent Statement: Not applicable.

Figure 2. Publications over the years.

Figure 3. Categorization of papers by camera type.

Figure 4. Number of papers using each vision recognition method.

Figure 5. Combination of vision recognition processes per paper.

Figure 6. Categorization of papers by test method.

Figure 7. Categorization of papers using physical equipment by aerial vehicle.

Figure 8. Use of multirotor UAVs over the years.

Figure 9. Use of simulation over the years.
• Object detection is a set of computer vision tasks involving the identification of objects in images. This task requires a data set of labeled features to compare with an input image. Feature extraction algorithms are used to create the data sets [35].
• Object tracking. Given the initial state of a target object in one frame (position and size), object tracking estimates the states of the target object in subsequent frames. Tracking relies on object detection to initialize the target, but it is much faster than detection because it already knows the appearance of the object [36].

Table 2. Articles related to computer vision methods.
RQ4. Which tools have been most commonly used to check the algorithms' performance?

Table 3. Software used to run simulations (some publications use more than one).