A Runway Safety System Based on Vertically Oriented Stereovision

In 2020, over 10,000 bird strikes were reported in the USA, with average repair costs exceeding $200 million annually, rising to $1.2 billion worldwide. These collisions of avifauna with airplanes pose a significant threat to human safety and wildlife. This article presents a system dedicated to monitoring the space over an airport in order to localize and identify moving objects. The solution is a stereovision based real-time bird protection system, which uses the IoT and distributed computing concepts together with an advanced HMI to provide the setup's flexibility and usability. To achieve a high degree of customization, a modified stereovision system with freely oriented optical axes is proposed. To provide a market tailored solution affordable for small and medium size airports, a user-driven design methodology is used. The mathematical model was implemented and optimized in MATLAB, and the implemented system prototype was verified in a real environment. The quantitative validation of the system performance was carried out using fixed-wing drones with GPS recorders. The results obtained prove the system's high efficiency of real-time detection and size classification, as well as a high degree of localization certainty.


Introduction
The first collision of a bird with an aircraft, a so-called bird strike, was reported in 1905, and the first fatality was noted in 1912 [1]. Since then, the number of cases has risen, presenting a significant threat to flight safety and causing a number of tragic accidents worldwide [2]. In 2020, in the USA alone, over 10,000 bird strikes were reported [3]. The reports show that the average bird strike rate, counted per 10,000 flights, increased from 11 in 2011 to 33 in 2017 [4]. According to the International Civil Aviation Organization (ICAO), most bird strikes occur during the approach (33%), take-off (31%), and landing (26%), which means that 90% of incidents occur in the airspace under the airport's legal responsibility [4]. The administrative and legal regulations introduced by the ICAO and the European Union Aviation Safety Agency (EASA) oblige each airport to minimize the bird and wildlife strike risk under Wildlife Hazard Management (WHM) [5,6].
Currently, different techniques and methods allowing the mitigation of the bird strike risk, such as ornithological observations and radar based solutions [7], are the most widespread at medium and large airports. There are also some attempts to develop vision based monitoring systems [8][9][10]. However, enhancing the automation and improving the performance of WHM in terms of detection efficiency and localization accuracy remain a research challenge.
To meet the requirements of a market tailored product, which may be customized to any small or medium size airport, a stereovision based real-time solution embedded into the Internet of Things (IoT) and distributed computing paradigms is proposed. A new stereovision method with the cameras' optical axes freely oriented is modeled, evaluated, and implemented in a prototype. The real-time field verification at an airport runway using drones with GPS recorders shows the system's capacity to detect moving objects at a range of 300 m and to localize them within the required accuracy of 10%. It is proven that the proposed solution is able to classify detected objects into one of three size categories corresponding to bird sizes.

Background and Related Works
The problem of bird strikes is multifaceted and can be approached from the point of view of sustainable development, economy, law, and technology.

Non-Technological Approaches
The non-technological aspect of the presented solution could be analyzed from the perspective of bird presence near the airports, as well as the legal and financial consequences of potential bird strikes.
Increasing volumes of air traffic and the adaptation of some bird species to the living conditions in the vicinity of urban areas, which also increases their activity around airports, are the main causes of the increase in bird collisions [11]. Birds have modified their behavior and learned to tolerate the presence of both humans and man-made structures including air traffic and accompanying noises. Therefore, it is getting more difficult to control or limit their presence in the airport's vicinity [12].
The problem with the increasing bird strike rate was noted by national and international organizations including the ICAO [13], the World Birdstrike Association (WBA), and states' aviation authorities [14,15]. These organizations are responsible for sharing information and experiences, as well as for the development of the best practices regarding collision prevention. Currently, environmental monitoring of airports is regulated by the EASA [5] and the ICAO [6]. There are also national civil and military authorities and organizations responsible for aviation safety, like the Civil Aviation Authority [16] in Poland or the Swedish Civil Aviation Administration (Luftfartsverket) [17] in Sweden, who are responsible for wildlife risk management.
The data analysis performed by the ICAO and the EASA shows the critical areas where most of the accidents occur [18]:
• Ninety percent of collisions are below an altitude of 150 m;
• Sixty-one percent of events are at heights of less than 30 m;
• Eight percent of collisions are at an altitude above 900 m and are outside the aerodrome area;
• Seventy-five percent of accidents happen during the day.
Bird strikes against the windshield and the engine of the aircraft are the most dangerous and the most frequent events [19]. The resulting damage costs over $200 million annually in the USA alone [20] and up to $1.2 billion worldwide [21].

Technological Approaches
So far, the most widespread solution for bird strike prevention at large and medium size airports is still the visual observation of the runway. At many airports, various methods such as trained dogs, falconry, pyrotechnics, and green lasers are used as the most effective tools. Sometimes, deterrents are also installed, which emit predator or banging cannon sounds [20].
There have been a number of attempts to develop reliable autonomous bird detection and localization systems [10,22]. Besides the aforementioned automation of WHM at airports [23], the bird preservation at wind farms [10,24] and autonomous analysis of migrant bird behavior [25,26] are the main application fields.
Mainly, there are two types of sensors used for bird detection: radar [27,28] and vision cameras [9]. One of the first bird detection systems, which used the radar technology, was developed [29,30] in the early 1950s. Since then, the radar based solutions have improved their capabilities of bird detection in wide observation areas. Because of their capacity to estimate the bird's position, velocity, and movement [31], they have become widely used in airports [32]. Radar systems for bird detection are characterized by long-range detection [33] in any weather and light conditions [34,35]. It is worth noting that the radar based solutions require additional permissions for emission in the frequency band, which should not disrupt the airport's flight control systems [31].
The vision based solutions can be split into two groups: monoscopic and stereoscopic systems. Whereas monoscopic systems are able to detect birds [26,36] and identify particular species [37], stereoscopic systems additionally allow bird localization and size estimation [10,37,38].
The growth of CPU and GPU capabilities allows the application of advanced algorithms, which are more reliable in moving object detection [39] and identification [40,41]. The parallel enhancement of the resolution of image sensors and the advance in optics make it possible to detect and identify even small objects from far distances [10,26,39,42].
The core component of each vision based system is the detection algorithm. Bird detection in a video stream can be performed using motion detection [22,25,43], AI based identification [26,44-49], or a combination of both [10,38]. Whereas the motion detection algorithms allow the reduction of the computational complexity of the safety system [50], the application of AI methods allows bird identification [48,49] and the reduction of false positive rates [10]. Among the AI based solutions, the Convolutional Neural Networks (CNNs) [51-53] outperform other methods, for instance, the Haar feature based cascade classifier [45,54] or Long Short-Term Memory (LSTM) networks [48]. The most recent studies reported that a dense CNN [54] shows good feature extraction capabilities allowing for bird identification [49,55], reaching nearly 99% accuracy after 100 epochs. Other CNNs, implemented in the distributed computing and IoT paradigms, ensure 99.8% precision with 99.0% recall in bird identification with real-time performance [10]. This allows the development of a reliable vision based safety system at airports.
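As an illustration of the motion detection stage that precedes the AI based identification, a minimal frame differencing sketch is shown below. This is only a didactic example, not the authors' algorithm (which is described in [10]); the function name and threshold are illustrative.

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, frame: np.ndarray, threshold: int = 25):
    """Return the bounding box (x, y, w, h) of the changed region, or None."""
    # Absolute difference between consecutive grayscale frames
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None                       # nothing moved
    ys, xs = np.nonzero(mask)             # pixels that changed
    x, y = int(xs.min()), int(ys.min())
    return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)
```

The bounding box produced by such a stage would then be cropped from the image and passed to a CNN, which decides whether the moving object is a bird or a sky artifact.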
There are several examples of vision based systems allowing WHM at airports. Chen et al. proposed an automatic bird-blocking network and intelligent bird-repelling system [56]. The proposed algorithm with the use of IoT technology allowed automatic repelling, which minimizes the habituation effect [56]. The company Pharovision developed a commercially available system that is based on the infrared camera and allows scanning of the ground and the air, day and night [9]. Using the FLIR and CCTV cameras, their system detects and tracks even a single bird from up to 7 km [9]. Another complex system allowing multiple bird detections and repelling is provided by Volacom [8]. Detection is supported by thermal and stereovision cameras, which in real-time scan the airport's vicinity for flying objects [8]. An additional acoustic module focuses a deflection sound signal at the targeted birds, to deter them up to 300 feet [8].
After detection, an automatic repellent method could be applied to minimize the bird strike risk. One of the first repelling methods tested in various scenarios was pulsing light [23]. This method was successfully used at an airport [57], other man-made structures [58], and wind farms [10]. Since the year 2015, pulsing light at 2 Hz in the landing light system has been recommended by the Federal Aviation Administration (FAA) and successfully used in airplanes and helicopters as a tool, allowing a substantial drop in bird collisions [59]. The other solution mounted in airports near the runway is large screens displaying a specific visual sight [60].
To deter a bird, a loud sound can also be used. Bishop et al. [61] showed that high frequency sound in the ultrasonic range above 20 kHz is ineffective and therefore has no biological basis for its use. In [62], the authors combined the effect of sound between 90 dB and 135 dB and a frequency of 2 kHz with white light. To reduce the habituation effect of the repellent method, the particular deterrent method used should vary and be implemented as rarely as possible [61].

Problem Statement, Objectives, and Main Contributions
As the survey of related works shows, there is a need for a reliable and cost-effective system mitigating the collision of avifauna with airplanes around airport runways. The biggest drawback of existing solutions, mostly based on stereovision, is their basically horizontally oriented Field of View (FoV), limiting the observation area and therefore requiring multiplied installations, which are heavy and costly.
The main objective of the paper is to determine the hardware and software structures of a stereovision based detection system for monitoring the space over the airport runway to identify, localize, and classify moving objects. Such a highly reliable real-time system has to assure a wide observation area without compromising its size and price, whilst also providing a wide range of customizability.
The proposed hardware configuration is composed of two cameras coupled in stereovision mode, wherein the first and second cameras are oriented with their optical axes at an angle α to the baseline, wherein α is a substantially non-right angle. The cameras can be equally rotated in any direction to cover the selected observation area, which can be horizontal, vertical, or even oblique. The system software configuration is based on the IoT technology, the distributed computing concept, and deep learning algorithms to ensure a real-time operation mode.
The user-driven design methodology is used to provide a market tailored solution that may be customized to any small or medium size airport. The proposed solution was modeled and optimized using MATLAB software. The system prototype was installed in a real environment and verified using fixed-wing drones with GPS recorders.

System Design
The proposed avifauna monitoring system for runways was designed based on the User-Driven Design (UDD) methodology presented in [63,64]. Besides airport stakeholders and designers, the design process involved several domain experts such as ornithologists and experts in aviation law. Furthermore, the future users who contributed to the design were falconers, airport security and safety staff, pilots, maintenance service workers, and environmental workers.
It is beyond doubt that such a system is needed to minimize the collision risk due to:
• passenger and staff safety;
• wildlife protection;
• the financial consequences related to damages and delays;
• the legal, administrative, and marketing consequences of a potential catastrophe.
To achieve the listed goals, the designed system needs to fulfill the following functionalities and constraints:
• to detect and localize suspected moving objects within the customizable safe zones, with high reliability and low positioning uncertainty;
• to distinguish individual birds, multiple birds, or flocks simultaneously;
• to work in real time with a very short detection latency;
• to ensure that the bird risk management has no side effects;
• to eliminate the human factor by autonomous monitoring and repelling methods;
• to ensure the affordability of the system, including a price, cost of installation, and cost of maintenance acceptable for small airports;
• to facilitate and automate the reporting process recommended by the ICAO and the EASA regulations.
General and itemized functionalities and particular related constraints along with selected technologies and algorithms are summarized in Table 1. The motivation analysis of the technologies and algorithm selection is beyond the scope of this paper. However, it can be observed that the system is based on stereovision and the distributed computing and IoT concepts. The chosen algorithms belong to the machine learning and AI categories. The details of the applied solutions are presented in Sections 5 and 6.

Modeling
The system conceptualization is presented in Figure 1. Since there is a need to cover a wide observation space, the system consists of several monitoring modules and other subsystems interconnected via the network, which becomes the central component of the proposed structure. On the network's right side, there are components allowing the user to interact with the system. On its left side, there are the control unit along with the sensors and actuators responsible for data acquisition and system reactions. The system is based on the IoT and distributed computing concepts [50], facilitating communication between modules and providing easy access to the stored data through an intuitive GUI.
The system can be deployed along the runway and consists of the control system, the monitoring modules, and the repellent part. Each monitoring module includes the stereovision sensing unit and a Local Processing Unit (LPU) responsible for motion detection, object identification, and localization. Data from all monitoring modules are sent to the control system, where the detected object is cropped from the picture and processed, and a decision is made about using the repellent part. The control system handles the connection with several monitoring modules, repellent parts, the database, and the Human Machine Interface (HMI).
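As an illustration of the module-to-control-unit exchange described above, a detection event could be serialized as a compact, language-neutral message. The field names and JSON encoding below are illustrative assumptions, not the system's actual protocol:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DetectionEvent:
    """Illustrative payload a monitoring module could push to the control system."""
    module_id: str
    timestamp: float     # UNIX time of the detection
    x_c: int             # object's center on the image plane (px)
    y_c: int
    distance_m: float    # estimated distance D
    height_m: float      # estimated altitude H
    size_class: str      # "small" | "medium" | "large"

def encode(event: DetectionEvent) -> str:
    # A text encoding keeps the message readable on the Ethernet link
    return json.dumps(asdict(event))
```

Such a payload carries everything the control system needs to archive the event, update the HMI, and decide whether to trigger the repellent part.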
The database collects the data of the detected events, such as the bird's position, estimated flight path, images, and movies, as well as information regarding any actions undertaken. Archived data are accessible through the HMI, such as the web and mobile applications. The HMI can also be used to manually activate the repellent part and to maintain and test the system.

Model of the Modified Stereovision Method
A stereovision based monitoring module oversees a selected runway zone. The proposed new solution ensures that the observation space is freely selectable through the adjustment of the orientation angle α of the cameras' optical axes; see Figure 2.
In classical stereovision, the cameras' optical axes are perpendicular to the baseline, B, where the baseline is the line segment connecting the cameras' centers. Then, the baseline and the cameras' image planes are placed on the same Euclidean plane; see Figure 2a.
In the proposed modified stereovision method, the cameras' optical axes are set at an angle α with respect to the baseline, in such a way that the cameras' image planes are placed on two parallel planes, as shown in Figure 2b. The cameras' alignment is presented in Figure 2c [65,66].
To understand the extraction of the 3D features, the coordinates of the modified stereovision system can be transformed using some geometric features. The transformation is carried out in relation to the first camera C1 (see Figure 3) in such a way that the coordinates and the scene are rotated by the angle α. Using this geometrical transformation, the modified mathematical model of the method can be derived using the variables and parameters defined in Table 2.

Symbol  Name  Unit
α   The rotation angle, defined as the angle between the (parallel) optical axes of the cameras and the baseline. The rotation of the first camera C1 is around the first axis, perpendicular to the optical axis of the first camera, and the rotation of the second camera C2 is around the second axis, parallel to the first axis.  (°)
y0  The cameras' resolution along the Y axes, wherein the Y axis of a camera is perpendicular to the rotational axis of the camera (the first axis for the first camera and the second axis for the second camera) and lies within the image plane of the corresponding camera.  (px)
y1  The pixel number of the object's center projection on the image plane of camera C1 along the Y1 axis, wherein the Y1 axis is perpendicular to the rotational axis of C1 and lies within the image plane of the camera.  (px)
y2  The pixel number of the object's center projection on the image plane of camera C2 along the Y2 axis, wherein the Y2 axis is perpendicular to the rotational axis of C2 and lies within the image plane of the camera.  (px)

Figure 3. Definition of the variables and basic system settings.

Distance Measurement Using Modified Stereovision
The distance, D, from the first camera C1 to the plane of the object, wherein the plane of the object is a plane perpendicular to the optical axes of the cameras, is equal to the distance Dk from the object to the baseline, D = Dk. From the basic geometry, the rotation by the angle α shifts the cameras by B sin α along the optical axes and leaves an effective baseline of B cos α across them, so that B cos α = b1 + b2, where b1 and b2, the offsets of the object from the optical axes of C1 and C2 measured in the plane of the object, are defined as:

b1 = D tan ϕ1, (1)

b2 = −(D − B sin α) tan ϕ2. (2)

Then, after substitution:

D tan ϕ1 − (D − B sin α) tan ϕ2 = B cos α, (3)

which can be simplified to:

D (tan ϕ1 − tan ϕ2) = B (cos α − sin α tan ϕ2). (4)

From this, the distance D can be calculated as:

D = B (cos α − sin α tan ϕ2) / (tan ϕ1 − tan ϕ2). (5)

The angles ϕ1 and ϕ2 may be found from the relationships:

tan ϕ1 = (2 y1/y0 − 1) tan(ϕ0/2), (6)

tan ϕ2 = (2 y2/y0 − 1) tan(ϕ0/2). (7)

Then, the distance D can be defined as:

D = y0 B (cos α − sin α tan ϕ2) / (2 (y1 − y2) tan(ϕ0/2)), (8)

which, for α = 0, gives the distance for classical stereovision:

D = y0 B / (2 (y1 − y2) tan(ϕ0/2)). (9)

Knowing the distance D and the angle ϕ0, the object altitude can be calculated using the formula:

H = D (sin α + cos α tan ϕ2), (10)

where ϕ2 can be found from (7).
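The distance and altitude relations can be sketched numerically as follows. The closed forms used here, tan ϕi = (2yi/y0 − 1) tan(ϕ0/2) and D = y0 B (cos α − sin α tan ϕ2) / (2 (y1 − y2) tan(ϕ0/2)), are reconstructions chosen to be consistent with the classical limit α = 0 quoted in the text; the function names and default parameters (matching the prototype in Section 5.1) are illustrative:

```python
import math

def distance(y1: int, y2: int, y0: int = 1440, B: float = 1.0,
             phi0_deg: float = 48.8, alpha_deg: float = 24.4) -> float:
    """Distance D (m) to the object plane in the modified stereovision setup."""
    k = math.tan(math.radians(phi0_deg) / 2.0)       # tan(phi0 / 2)
    tan_phi2 = (2.0 * y2 / y0 - 1.0) * k             # object angle at camera C2
    a = math.radians(alpha_deg)
    # D = y0 * B * (cos(a) - sin(a) * tan(phi2)) / (2 * (y1 - y2) * tan(phi0 / 2))
    return y0 * B * (math.cos(a) - math.sin(a) * tan_phi2) / (2.0 * (y1 - y2) * k)

def altitude(D: float, y2: int, y0: int = 1440,
             phi0_deg: float = 48.8, alpha_deg: float = 24.4) -> float:
    """Object altitude H (m) above the camera from D and the elevation angle."""
    k = math.tan(math.radians(phi0_deg) / 2.0)
    tan_phi2 = (2.0 * y2 / y0 - 1.0) * k
    a = math.radians(alpha_deg)
    return D * (math.sin(a) + math.cos(a) * tan_phi2)
```

For α = 0 the `distance` function reduces to the classical stereovision formula, and the altitude of an object on the optical axis (y2 = y0/2) is zero, as expected.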

Quantization Uncertainty of the Distance Measurement
The distance D defined by (8) is a non-linear discrete function of y0, B, (y1 − y2), y2, and ϕ0. The measurement uncertainty, ∆D, determined by the exact differential method [10,67,68], can be expressed by:

∆D = |∂D/∂(y1 − y2)| ∆(y1 − y2) + |∂D/∂y2| ∆y2, (11)

where ∆(y1 − y2) = ∆y2 = 1 px is the quantization step. The quantization uncertainty ∆D is a discrete function of (y1 − y2) ∈ N+ and y2 ∈ N+. Since ∆D depends also on the value of y2, the uncertainty increases not only with the distance D, but also with the object altitude H. The quantization uncertainty of H depends on the distance estimation and may be considered per analogiam.

Figure 4 shows how, for a varying pixel difference (y1 − y2), the quantized value of the distance measurement D and its uncertainty ∆D depend on the y2 value, which is a measure of the object elevation. The simulations for the highest altitude, y2max, and the lowest altitude, y2min, were performed for y0 = 1440 px and ϕ0 = 48.8°, corresponding to the off-the-shelf IMX219 camera with a focal length of f = 3 mm and a large baseline of B = 1 m [10].

Figures 5 and 6 illustrate how the quantized distance measurement D and its quantization uncertainty ∆D, respectively, depend on the pixel difference (y1 − y2) and the object position on the C2 image plane, y2. The simulation was done within the range of 300 m. It proves that, in the worst case, the quantization uncertainty ∆D can reach 70 m, which gives a measurement uncertainty of ±35 m.

Figure 7 presents the processing architecture of the system, which is based on the distributed computing and IoT paradigms. The proposed architecture, supported by a stable Ethernet connection, enables reliable real-time communication between the monitoring modules, where images are collected, and the control unit, where the measurement data are processed.

System Processing
The monitoring modules with the on-board LPU provide the video streaming from the stereovision set consisting of two cameras. The flying bird identification is based on the motion detection and object identification algorithms presented in the authors' previous studies [10]. The CNN distinguishes bird-like objects from sky artifacts such as clouds, snow, rain, etc. When a detected moving object is identified as a bird, a warning trigger is activated, and the information from the motion detection algorithm including the estimated object's center coordinates, x c and y c , is sent to the 3D localization unit. The optimization procedure of the detection and identification algorithm was described in the authors' previous work [10].
Via Ethernet, the control unit receives information including the object's 3D position along with the image miniature and the object contour [10]. In the data filtering block, a statistical analysis is performed to conclusively distinguish birds from other bird-like objects such as drones, airplanes, and insects. Then, based on the data about the object width, height, and contour received from the motion detection algorithm, as well as the estimated distance calculated in the localization algorithm, the size classification algorithm estimates the object's size to sort it into one of three categories: small, medium, or large [10]. After classification, a notification is provided via the HMI to the users' apps and archived in a local database. The deterrence module can be activated if needed.

Size Classification
Knowing the distance D and the size of a detected object on the image plane, pW (px) and pH (px) [10], the bird's wingspan PW (m) and height PH (m) can be estimated from:

PW = pW D SIA / (y0 f),   PH = pH D SIA / (y0 f), (12)

where SIA is the camera's Sensor Image Area and f is the focal length. Previous studies showed that approximating the bird with an isosceles triangle enables classification of its size as small, medium, or large [10]. Figure 8 illustrates how the bird size can be estimated. The triangle base corresponds to the bird's wingspan pW (px), and the height of the triangle pH (px) denotes the bird's height. Then, the triangle area,

Oapprox = PW PH / 2, (13)

is a measure of the bird's size.

Since the representation of an object on an image depends significantly on the object's distance from the monitoring modules, the size classification accuracy depends on the quantization error. The uncertainties of the measurement of PW as a function of the distance for typical small, medium, and large objects are presented in Figure 9. Within the requested distance ranges, there are no overlaps between the shown classes; however, fuzziness resulting from the distance measurement uncertainty can be observed for birds of sizes close to the inter-category boundaries. The presented simulations were performed for the parameters selected in Section 5.1, and the SIA was set to 3.76 mm, which corresponds to the Sony IMX219 sensor. The calculations were done for average birds representative of each class, i.e., 1 m, 1.32 m, and 1.67 m wingspans for small, medium, and large, respectively. The measures of PH and Oapprox show similar uncertainty and may be considered per analogiam.
The estimate of the object area Oapprox is used for the classification of the birds [10] into three categories, with the boundary values, Ob1 and Ob2, defined based on ornithologists' suggestions. The common buzzard and the red kite were selected as the boundary representatives of the medium and large objects. Therefore, each object smaller than Ob1 = 0.22 m2, corresponding to the size of the common buzzard, is considered small, and each object bigger than or equal to Ob2 = 0.
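The size estimation and category rule can be sketched as follows. The pixel-to-metre scaling via the pinhole model (using SIA and the focal length) is an assumption consistent with the definitions above, and since the Ob2 value is truncated in the source text, the large-object boundary below is only a placeholder parameter:

```python
def bird_size(p_w: float, p_h: float, D: float, y0: int = 1440,
              sia: float = 3.76e-3, f: float = 3e-3):
    """Estimated wingspan P_W (m), height P_H (m), and triangle area O_approx (m^2)."""
    scale = D * sia / (y0 * f)          # metres per pixel at distance D (pinhole sketch)
    P_W, P_H = p_w * scale, p_h * scale
    return P_W, P_H, 0.5 * P_W * P_H    # isosceles triangle area

def classify(o_approx: float, o_b1: float = 0.22, o_b2: float = 0.60) -> str:
    """Sort an object into a size category. o_b1 = 0.22 m^2 follows the text
    (common buzzard); o_b2 = 0.60 m^2 is a placeholder, since the red kite
    boundary value is truncated in the source."""
    if o_approx < o_b1:
        return "small"
    return "medium" if o_approx < o_b2 else "large"
```

In practice the boundary values would be taken from the ornithologists' reference species rather than the placeholder used here.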

Prototyping
This section firstly considers the optimization of the parameters within a range of constraints stated in Section 4, and then, the prototype of the system is presented.

Parameter Optimization
From (8) and Figure 10, it can be seen that the core structural parameters of the proposed method are the baseline B, the image resolution y0, and the FoV ϕ0; therefore, the selection of their values is crucial.
A camera image resolution of y0 = 1440 px was selected due to the limitation of the computational complexity of the applied algorithms and the corresponding capabilities of the local processing units. The camera's focal length f and its FoV, defined by ϕ0, are interdependent. Previous studies [10] showed that the maximum possible FoV can be realized using the IMX219 with a focal length of f = 3 mm and an FoV of ϕ0 = 48.8°.
As a rule of thumb, the spatial vision is correct when the baseline is between 1/100 and 1/30 of the system range [70]. However, due to technical reasons, the baseline should not exceed 1.5 m. To select an acceptable baseline length, an evaluation of the distance measurement and its uncertainty was done. The simulation results of D and ∆D for an object image detected at the top and at the bottom of the camera matrix were collected for B = {0.75 m, 1 m, 1.25 m, 1.5 m}; see Figure 10. From their analysis, it can be concluded that, in the worst case at 300 m, (y1 − y2) = 4 px and y2 = 1440 px, the measurement quantization uncertainty is ∆D = ±81 m for B = 1 m and ∆D = ±61 m for B = 1.5 m. Therefore, the stereoscopic baseline B = 1 m was selected as fulfilling the requirement of a 10% localization accuracy with the shortest baseline.
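The baseline trade-off can be illustrated numerically. The sweep below assumes the distance relation reconstructed in Section 5.1 (an assumption, so the absolute values need not match Figure 10 exactly), but it reproduces the trend that motivated the selection: at a fixed range, a longer baseline yields a larger quantized disparity and hence a smaller 1 px quantization uncertainty:

```python
import math

Y0, PHI0_DEG, ALPHA_DEG = 1440, 48.8, 24.4    # prototype parameters (px, deg, deg)

def disparity_and_uncertainty(D_target: float, B: float, y2: int = 1440):
    """Quantized disparity (px) at range D_target and the 1 px distance uncertainty."""
    k = math.tan(math.radians(PHI0_DEG) / 2.0)
    tan_phi2 = (2.0 * y2 / Y0 - 1.0) * k
    a = math.radians(ALPHA_DEG)
    geom = math.cos(a) - math.sin(a) * tan_phi2
    d = max(1, round(Y0 * B * geom / (2.0 * D_target * k)))  # nearest whole pixel
    D = Y0 * B * geom / (2.0 * d * k)                        # quantized distance
    return d, D / d                    # Delta_D ~ |dD/d(disparity)| * 1 px

for B in (0.75, 1.0, 1.25, 1.5):
    d, dD = disparity_and_uncertainty(300.0, B)
    print(f"B = {B:.2f} m: disparity {d} px, Delta_D ~ {dD:.0f} m")
```

The monotonic drop of the uncertainty with B is what makes B = 1 m the shortest baseline still meeting the 10% localization requirement after statistical averaging.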

Hardware Prototyping of the Monitoring Modules
The prototype of the monitoring modules is presented in Figure 11, and the installation spot is shown in Figure 12. Each module was composed of two IMX219 cameras with f = 3 mm lenses, having a vertical FoV of ϕ0 = 48.8° and capturing images with a resolution of y0 = 1440 px. To optimize the monitoring space, the rotation angle of the system (of the optical axes of both detection cameras) was set to α = ϕ0/2. The computational core of the LPU was an ARM v8.2 processor with 8 GB RAM, 384 CUDA cores, and 48 Tensor cores for the AI based object identification algorithm. The monitoring modules were equipped with a switch allowing the IoT configuration. To ensure low weight, the modules were enclosed in acrylic covers.
The prototype of the system included an auxiliary recording camera allowing real-time video streaming and recording for the verification and validation of the detection system. The configuration of three monitoring modules allowed monitoring of the area within a field of view of ϕ = 180°, as shown in Figure 13, where the small dead zones near the construction could be neglected as having no impact on the detection efficiency.
The control system ran on a Dell database server equipped with a 3.6 GHz Xeon X5687 processor and 8 GB of RAM. Two 8 TB hard drives were used as the storage. The connection between the monitoring modules and the control system was provided by the Ethernet protocol. The monitoring modules were powered by safety extra-low voltage.

Validation and Testing
The system prototype was installed on a dedicated stand in a test field, a flat open space near the runway of Reymont Airport in Lodz, Poland (IATA: LCJ, ICAO: EPLL), as shown in Figures 11 and 12. The prototype was equipped with three monitoring modules and one control unit. The mutual placement of the stereoscopic cameras was manually set based on a fixed distant object. The positions of the images were manually determined using the Handle Transform tool in the GIMP software. The system reported approaches by birds in flight; an example of an observed dangerous approach of a bird toward an airplane is presented in Figure 14.

For the quantitative evaluation of the system performance in terms of detection efficiency and localization precision, bird-like drones equipped with GPS recorders were used. Two fixed-wing drones and one quadrocopter, representing small, medium, and large objects, are presented in Figure 15, and their dimensions provided by the manufacturer, in terms of the wingspan, height, and total area, are shown in Table 3. The drones were programmed to fly along a given path within the system vicinity.

To evaluate the system detection efficiency, test flights of the three drones were performed. The drones flew at random speeds and altitudes within the desired system detection range. The system detected the small drone 1565 times, the medium drone 2248 times, and the large drone 2875 times, during the 3 min, 12 min, and 10 min flights, respectively. The detection efficiency presented in Table 4 was calculated as the ratio between the time when the drone was visible to the monitoring module and the time of flight in the defined range. Table 4 summarizes the results. The presented results prove that the desired efficiency was achieved within the requested detection range defined in Table 1.
Table 4. Detection results within consecutive distance ranges (registered detections, samples in which the drone was visible, and the resulting efficiency in %).

Range (m)     Small drone        Medium drone       Large drone
(0-50>        29   32   91       26   26   100      -    -    -
(50-100>      80   85   94       82   82   100      -    -    -
(100-150>     20   40   50       226  235  96       24   24   100
(150-200>     -    -    -        322  329  98       242  252  96
(200-250>     -    -    -        64   69   93       102  105  98
(250-300>     -    -    -        -    -    -        26   28   92
(300-350>     -    -    -        -    -    -        206  362  57

To quantitatively evaluate the developed system's ability to carry out 3D object localization, it was tested in nine different scenarios defined in Table 5. The drones were flown in autopilot mode, set using the remote controller, around the module at an approximately constant predefined distance and altitude, with different distances D and altitudes H used for different scenarios. The subscripts S, M, and L in the scenario numbers denote the small, medium, and large drones, respectively. The average speed of the small, medium, and large drones during the tests was 4.0 m/s, 20.0 m/s, and 15.0 m/s, respectively.
For each test flight, the mean distance D and height H, with the corresponding standard deviations σD and σH, were estimated for the GPS and the detection module data, respectively. The GPS measurements were treated as the reference values for the analysis of the system uncertainty presented in the last four columns, where ∆Dk and ∆H denote the mean absolute accuracy of the distance and height measurements, and δDk and δH denote the corresponding relative accuracies. Table 5. Test plan of the designed system. N is the number of samples registered during the test, and the error is the difference between the means of the GPS and the system measurements.

[Table 5 data not reproduced here; its columns group, for each scenario, the GPS data, the detection module data, and the uncertainty estimates.]
Examples of the graphical illustration of the test results are presented in Figures 16-17 for the small, medium, and large drones. The flight scenarios were chosen to show the system capabilities at the borders of the detection range for each drone. The green and red dots represent localization measurement samples from the GPS and from the system, respectively. The ellipses illustrate the measurement statistics: their center coordinates (D, H) correspond to the mean values of the distance and height measurements, their semi-major axes depict the standard deviation σD, and their semi-minor axes correspond to the standard deviation σH. At distances of more than 200 m, the quantization error of a single measurement was greater than the desired localization precision. However, averaging over many samples reduces the quantization error, which satisfies the user requirements; see Table 1. The mean values of the distance and height uncertainty dropped below the expected 10% even for distances of more than 300 m, which is above the quantization uncertainty of a single distance measurement; see Figure 6. The system detection range and localization precision depend on the object size. The system was able to detect the small drone up to 100 m, the medium drone up to 200 m, and the large drone up to 300 m. Table 3 includes the test drones' data-sheet parameters, which were treated as reference values. Table 6 shows the test results for the size estimates and their quality, along with the results of bird-size classification, presented in the last three columns of the table. For each scenario defined in Table 5, the drones' width, P_w, height, P_h, and size, O_approx, were estimated from the images, and then the variances of the estimates, σ_Pw, σ_Ph, and σ_Oapprox, were calculated. Despite relatively high estimation uncertainties, the system was capable of classifying the drones into their correct categories.
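The claim that averaging reduces the quantization error can be illustrated with a short simulation; the quantization step, target distance, and noise levels below are illustrative assumptions, not the system's actual values:

```python
import random

random.seed(0)

TRUE_DISTANCE = 320.0   # m, illustrative target distance
QUANT_STEP = 40.0       # m, illustrative single-measurement quantization at long range

def quantize(d, step):
    """Stereovision disparity is discrete, so distances snap to a grid."""
    return round(d / step) * step

# Each sample: true distance with small continuous noise, then quantized.
# The noise dithers samples across adjacent grid points, so their mean
# recovers the true distance far more precisely than any single sample.
samples = [quantize(TRUE_DISTANCE + random.uniform(-25, 25), QUANT_STEP)
           for _ in range(500)]

mean_d = sum(samples) / len(samples)
single_err = max(abs(s - TRUE_DISTANCE) for s in samples)
mean_err = abs(mean_d - TRUE_DISTANCE)
print(f"worst single-sample error: {single_err:.1f} m")
print(f"error of the mean:         {mean_err:.1f} m")
```

With these numbers, individual samples can be off by a full quantization step, while the error of the mean is well below one metre, which mirrors the behaviour reported above for distances beyond 200 m.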
Object classification into one of the three categories, small, medium, and large, was based on the estimate O_approx and defined heuristically. The selected boundaries between the categories were O_b1 = 0.22 m² between small and medium and O_b2 = 0.48 m² between medium and large, as introduced and presented in Section 5.3. The test results proved that, within the desired ranges, the system classified small and large objects with a reliability of 99.6% and 91.4%, respectively. The classification reliability for medium objects was 65.4%. Nevertheless, medium objects were more likely to be classified as large (25.4%) than as small (9.0%), which errs on the safe side from an application point of view. It is worth noting that the classification of objects should be treated as a fuzzy categorization, because the real sizes of birds of the same species vary, and the size estimates are biased by measurement uncertainties. Nevertheless, the test results confirmed that the average size O_approx calculated for each scenario allowed the object size to be evaluated correctly in each case.
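Using the boundaries quoted above (O_b1 = 0.22 m² and O_b2 = 0.48 m²), the size classification reduces to a pair of threshold comparisons; the function name and the placement of ties at the boundaries are our own illustration:

```python
# Size-category boundaries from Section 5.3 (m²)
O_B1 = 0.22  # boundary between small and medium
O_B2 = 0.48  # boundary between medium and large

def classify(o_approx):
    """Map the estimated object area O_approx (m²) to a size category."""
    if o_approx <= O_B1:
        return "small"
    if o_approx <= O_B2:
        return "medium"
    return "large"

for area in (0.10, 0.35, 0.80):
    print(f"O_approx = {area:.2f} m² -> {classify(area)}")
# prints small, medium, large
```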

Discussion, Conclusions, and Future Work
This work proposes a stereovision based detection system for monitoring the space near airports to identify and localize moving objects. The system is a reliable and cost-effective solution for the prevention of bird strikes around airport runways.
A new stereovision structure is proposed, composed of two cameras coupled in stereovision mode, whose optical axes can be freely oriented to cover the desired monitoring space from one installation spot within the cameras' common FoV. A set of detection modules could extend the system observation FoV up to 360°. One can estimate that a medium-size airport with a 2600 m runway can be covered using up to seven systems, each equipped with eight monitoring modules. The system software configuration, based on the distributed computing concept powered by machine learning algorithms embedded in the IoT paradigm, ensures real-time performance. Apart from detecting moving objects, the system is capable of localizing and classifying them based on their size. To make the system desirable and flexible for airports of different sizes, a user-driven design was applied, which included many actors such as airport stakeholders, local and ecological authorities, designers, and future users. This drove the design toward a customizable system that ensures cost-effectiveness without compromising reliability.
The system was modeled and optimized using MATLAB software. The evaluation method included the analysis of the localization uncertainty and enabled system optimization. The quantitative evaluation of the system performance showed that the proposed solution meets the desired requirements regarding detection range and localization precision.
The modeled system was implemented and prototyped and then installed in a test field, a flat open space near the runway of Reymont Airport in Lodz, Poland. To validate the system performance, two fixed-wing drones with wingspans of 2.0 m and 1.2 m and a 0.24 m quadrocopter were used, imitating large, medium, and small birds, respectively. Nine test scenarios, three for each device, were applied to prove the system's localization and size estimation accuracy, as well as its detection efficiency and ability to correctly classify the objects.
The tests proved that the system detects small objects within a range of 100 m with an efficiency of 94%. Medium objects were detected within a range of 250 m with an efficiency of 93%, and large objects within a range of 300 m with an efficiency of 93%; see Table 4.
The estimates of the localization uncertainty for both distance and height measurements varied from 0.7% up to 9.7%, but did not exceed the required 10%, as shown in Table 5.
Estimates of drone size, which is used for object classification, were obtained for all nine scenarios; see Table 6. The test results proved that the system is capable of distinguishing small and large objects with a reliability of 99.6% and 91.4%, respectively. The classification reliability for medium objects was 65.4%. The results show that the approximated sizes were overestimated compared with the reference ones. However, this bias is not critical, and the applied classification algorithm was able to sort the objects into the correct categories. Moreover, the test results confirmed that, by means of statistics, the object size estimation can be enhanced.
The system validation proved that the system implements all the desired functionalities and fulfills all the regulatory requirements and therefore can be used for standalone autonomous bird monitoring, complementing ornithologists' work to minimize the risk of bird collisions with airplanes.
Among other future developments, a tracking algorithm anticipating bird flight paths could be implemented to improve system reliability and localization accuracy. Multiple Hypothesis Tracking (MHT), Kalman filtering, and Probability Hypothesis Density (PHD) filtering are considered possible solutions. Moreover, the classification could be extended to include the recognition of bird species, which could improve long-term wildlife monitoring. Other possible work may also concern the detection of mammals or Foreign Object Debris (FOD) within an airport's proximity.
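As an illustration of the simplest of these candidate trackers, the sketch below implements a one-dimensional constant-velocity Kalman filter in Python; the noise parameters, measurement model, and simulated target are illustrative assumptions, not the system's design values:

```python
import random

DT = 0.1                   # s, frame interval (illustrative)
Q_POS, Q_VEL = 0.01, 0.01  # process noise variances (illustrative)
R = 1.0                    # position measurement noise variance (illustrative)

class CVKalman1D:
    """Constant-velocity Kalman filter on one coordinate.

    State is [position, velocity]; only position is measured.
    The 2x2 covariance P is stored as (p00, p01, p11).
    """
    def __init__(self):
        self.pos, self.vel = 0.0, 0.0
        self.p00, self.p01, self.p11 = 100.0, 0.0, 100.0

    def step(self, z):
        # Predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        self.pos += DT * self.vel
        p00 = self.p00 + DT * (2 * self.p01 + DT * self.p11) + Q_POS
        p01 = self.p01 + DT * self.p11
        p11 = self.p11 + Q_VEL
        # Update with position measurement z: H = [1, 0]
        s = p00 + R                      # innovation variance
        k0, k1 = p00 / s, p01 / s        # Kalman gain
        y = z - self.pos                 # innovation
        self.pos += k0 * y
        self.vel += k1 * y
        self.p00 = (1 - k0) * p00
        self.p01 = (1 - k0) * p01
        self.p11 = p11 - k1 * p01
        return self.pos, self.vel

# Feed noisy position measurements of a target moving at 5 m/s
random.seed(1)
kf = CVKalman1D()
for k in range(200):
    z = 5.0 * DT * k + random.gauss(0.0, 1.0)
    kf.step(z)
print(f"estimated velocity: {kf.vel:.1f} m/s")  # should be close to 5 m/s
```

A full tracker would run one such filter per coordinate (or a joint 3D state) and add data association; the point here is only that the predict/update cycle both smooths the localization samples and yields a velocity estimate for anticipating the flight path.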
Furthermore, ornithological long-term observations should be performed to verify the system performance in terms of bird detection efficiency and false positive rate. These observations could also validate the system performance in overcast weather conditions, which would be required before its implementation at airports in autonomous operational mode.
The precise calibration of a large-base stereovision system is complex and may cause a large positioning uncertainty [74]. Therefore, our future work will focus on an autonomous in situ calibration of the system. Aviation safety at airports requires also the detection of FOD, as well as land mammals. The monitoring area of the proposed detection system could be extended to cover the whole runway.
Future work may also concern the deployment of a multi-module configuration along an airport's runway to ensure full coverage of the skies within an airport's legal jurisdiction.

Acknowledgments: The authors would like to acknowledge Sandy Hamilton's support to improve this article. In addition, the enormous participation of all Bioseco employees in the project's implementation should be emphasized.

Conflicts of Interest:
The authors declare no conflict of interest.