Fixed Wing Aircraft Automatic Landing with the Use of a Dedicated Ground Sign System

Abstract: The paper presents automatic control of an aircraft in the longitudinal channel during automatic landing. There are two crucial components of the system presented in the paper: a vision system and an automatic landing system. The vision system processes pictures of dedicated on-ground signs which appear to an on-board video camera to determine a glide path. Image processing algorithms used by the system were implemented into an embedded system and tested under laboratory conditions according to the hardware-in-the-loop method. An output from the vision system was used as one of the input signals to an automatic landing system. The major components are control algorithms based on the fuzzy logic expert system. They were created to imitate pilot actions while landing the aircraft. Both systems were connected with one another for cooperation and to control an aircraft model in a simulation environment. Selected results of tests presenting control efficiency and precision are shown in the final section of the paper.


Introduction
Systems of various types of Unmanned Aerial Vehicles (UAV), including fixed wings [1][2][3], multirotor [4][5][6] and other hybrid type aircraft [7,8] are increasingly being used in both military and civilian applications. An increasing number of flying platforms, greater availability of entire UAV systems and new types of missions, make issues related to the full automation of the entire flight, including its terminal phases, more and more important. This is especially true of the landing phase, which is the most critical phase of each flight and a subject of interest for much research. To some degree, it determines the type of functions an automatic flight control system capable of executing this phase should offer. Moreover, both the type and class of the aircraft have an impact on the type of on-board equipment and airfield infrastructure necessary for a safe automatic landing [9,10].
When analyzing the operation of autopilots offering the automatic landing function, the control algorithms supporting this phase can be, in general, considered on two levels [9][10][11]. The first covers low level functions responsible for the stabilization of angles of aircraft attitude and flight speed. They are used as a tool by the functions of the second, master level, which in turn, guides the aircraft along the desired landing trajectory.
Currently, for manned aircraft as well as for MALE (Medium Altitude Long Endurance) or HALE (High Altitude Long Endurance) UAV class aircraft, an automatic landing is generally supported by extra aerodrome infrastructure based on ILS (Instrument Landing System), occasionally MLS (Microwave Landing System), or GBAS (Ground-Based Augmentation System), which nowadays serves as an alternative [12]. Some vendors develop their own specific systems supporting this phase, e.g., the mobile MAGIC ATOLS system. These are short-range radio-navigation-based systems composed of on-ground and airborne components. Both the technological development of digital optical systems and progress in picture processing algorithms have enabled vision systems to enter this field efficiently. Optical systems have recently become available for helicopters and multirotors, e.g., systems capable of identifying the 'H' letter sign on the ground while hovering over it. They were a catalyst for the research presented in this paper.
In fact, the automatic landing of mini or even MALE-class fixed-wing aircraft can be performed with relatively primitive, or with some simple dedicated, infrastructure. The only condition is that proper information on the aircraft's position relative to the runway must be available [13,14]. From this perspective, it seems reasonable to develop solutions which could support the automatic landing of such aircraft with reduced ground facilities. A simple and easy-to-install ground sign system could transform any agricultural or other area into an airfield capable of hosting automatically-landing fixed-wing UAVs.
An automatic landing requires proper information about the position of the aircraft in relation to the desired landing trajectory, or at the very least to the theoretical touchdown point [15]. This data is usually provided by landing assist systems (e.g., ILS, satellite systems) or could be achieved by utilizing visual signals coming from systems which typically assist pilots of manned aircraft [16][17][18][19][20] as well. In this case, it is obvious that to extract proper information from visual signals, some image processing methods must be employed. It should be mentioned here that image processing has already been successfully applied both for the identification of some aircraft flight parameters and determining the aircraft attitude [21], and also during the landing phase [22]. One of the systems whose visual signals have already been used to determine an aircraft's relative position to the landing trajectory and deployed by a control system is the PAPI (Precision Approach Path Indicator) light system [23]. However, small aircraft most often operate at airfields which are not equipped with any ground infrastructure or marked with any airfield ground signs. Additionally, these only play the role of airfields occasionally, so it seems reasonable to develop a system which could both determine an aircraft's relative position to the desired glide path and touchdown point and provide signals to control algorithms sufficient for automatic landing based on image processing of a simple ground signs system located near to the runway threshold.
The paper presents a concept for a longitudinal channel aircraft flight control system which uses the functions of a conventional autopilot in combination with information obtained from dedicated ground sign image analysis and dedicated control algorithms based on expert knowledge. It is designed to imitate the behavior of a human operator who, while piloting the aircraft, maintains the correct approach path and performs the flare phase, leading the aircraft down to touch the runway (Figure 1a). Pilot-style control is set as a reference point because of one key feature: although the accuracy of pilot control is often lower than the accuracy of automatic control, it is much more robust and disturbance-proof. The pilot can accomplish the landing even with very sparse data on the aircraft's position relative to the runway, of dubious accuracy or reliability.
The method employed to develop the control algorithms should take into consideration the overall goal of the control, the key features of both the inputs and the outputs, as well as the approach to defining the dependencies expressed in the control rules. Since the algorithms used in the control system are to imitate the pilot's control, the method should provide a means of linking real sample data characterizing the landing, the way human operators assess key flight parameters, and expert knowledge. Because of this, using some kind of expert system seems the most reasonable. Additionally, human operators use linguistic terms (variables) to describe the flight parameters and the magnitudes of the controls. A natural linkage between linguistic variables and numerical values exists in fuzzy logic and fuzzy neural systems. Unfortunately, the latter requires a huge training data set, and the results of learning are not always reliable [24,25]. Consequently, the fuzzy logic expert system approach was selected to develop the control algorithms.

Ground Signs System
During a VFR (Visual Flight Rules) landing, pilots of manned aircraft observe the runway as well as characteristic objects at the airfield and in its surroundings, their position and orientation relative to the aircraft, and the indications given by key on-board instruments. In this way, they are able to build a mental picture of the entire landing, correctly catching and maintaining an optimal glide path which runs down to an aiming point inside an aiming zone (Figure 1), while also maintaining the right aircraft flight parameters [26,27]. Since a vision system is assumed as the main source of data on the aircraft's position on the landing trajectory, some kind of ground signs, serving as a reference point indicating the aiming zone as well as the runway, must be defined. In the case of landing at well-equipped runways, classical runway signs or lights can be used. Unfortunately, in the case of operation at runways not equipped with any lights or ground signs, e.g., grass runways or even plain grasslands, an additional easy-to-install ground sign system contrasting with its immediate vicinity should be considered [28,29] (Figure 1).
The solution discussed in the paper assumes the use of dedicated ground signs whose picture is processed by an on-board image processing system to provide the autopilot with feedback data regarding the aircraft position on the desired landing trajectory. The dedicated sign system consists of two circles with radius R = 2 m, one yellow and one red, located at the runway threshold and arranged along the runway direction (Figure 1). The colors were selected to increase the contrast with the runway and its surroundings. The dimensions were set experimentally to make the signs visible to the employed vision system from a height of 120 m, the height recognized as proper for starting the final approach for the type of aircraft used in the experiments (Figure 2), plus a 20% safety margin. The aiming point, the point at which the virtual glide path touches the runway, is the center of the yellow circle. The configuration of the circles (yellow first) defines the right approach direction.
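As a rough plausibility check (not from the paper), the apparent angular size of one sign seen from the start of the final approach can be estimated from the stated dimensions; the sketch below assumes viewing roughly along the line of sight, which is a simplification.

```python
import math

def angular_diameter_deg(R, distance):
    """Angular diameter in degrees of a circle of radius R at the given distance."""
    return math.degrees(2 * math.atan(R / distance))

# Sign radius 2 m and final-approach start height 120 m (values from the paper)
print(round(angular_diameter_deg(2.0, 120.0), 2))  # ~1.91 degrees
```

At roughly two degrees of apparent size, the sign spans many pixels on a high-resolution camera, which is consistent with the experimentally chosen dimensions.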

The Experimental Aircraft
The system presented in the paper is developed for the MINI class of unmanned airplanes in general. Therefore, such an aircraft was also selected as the control plant and for the in-flight test campaigns (Figure 2). The aircraft was equipped with an automatic flight control system offering a set of core, low-level flight control functions capable of stabilizing the aircraft at the desired spatial attitude [30,31]. This system cooperates with the tested master systems, executing desired functions or forwarding control signals produced by the master system directly to the actuators. In addition, it can record a number of flight parameters.
Dynamics of the aircraft in a configuration for the final approach and landing were identified and linearized. For the state vector X = [u, w, q, θ]^T and the control input vector [δ_E, δ_T]^T, the state matrix A and the input matrix B are given in (1),
where: u-airspeed along the aircraft X_B axis in m/s; w-airspeed along the aircraft Z_B axis in m/s; q-pitch rate in 1/s; θ-pitch angle in radians; δ_E-elevator position in radians; δ_T-thrust in percent [32].
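The linearized model can be propagated in time in the usual state-space way. The sketch below is only a minimal illustration of that mechanism: the A and B matrices are illustrative placeholders, since the paper's matrices in (1) are not reproduced here.

```python
import numpy as np

# ILLUSTRATIVE placeholder matrices for x_dot = A @ x + B @ u with
# x = [u, w, q, theta] and u = [delta_E, delta_T]; NOT the paper's values.
A = np.array([[-0.05,  0.1,   0.0, -9.81],
              [-0.4,  -1.2,  15.0,  0.0],
              [ 0.02, -0.3,  -2.0,  0.0],
              [ 0.0,   0.0,   1.0,  0.0]])
B = np.array([[  0.0, 2.0],
              [ -5.0, 0.0],
              [-12.0, 0.0],
              [  0.0, 0.0]])

def step(x, u, dt=0.01):
    """One explicit Euler integration step of the linear model."""
    return x + dt * (A @ x + B @ u)

x = np.zeros(4)                  # start at trim: [u, w, q, theta]
u_ctrl = np.array([0.01, 0.0])   # small elevator deflection, no thrust change
for _ in range(100):             # simulate 1 s at dt = 0.01 s
    x = step(x, u_ctrl)
```

A small elevator input perturbs the pitch rate and pitch angle, which is the qualitative behavior the autopilot's pitch channel relies on.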
On the basis of aerodynamic, mass, geometrical, and other characteristics of the aircraft, a model was built in the simulation environment and then used in final simulations.
The aircraft was also equipped with a vision system capable of analyzing the picture of the ground sign system (Figures 1 and 2) appearing on an on-board video camera (Figure 2). The camera was mounted in the aircraft's vertical plane of symmetry, with the angle between the aircraft's X_B axis and its optical axis, ζ = 0°, identified and set on the basis of manual landings during flight tests.
This aircraft was also used for sensor flights in order to obtain information for the synthesis of the control rules and to get pictures for the image processing algorithms. Models of its dynamics were used in the synthesis of the control algorithms and in final real-time simulations.
According to the approach adopted in this paper, the selection of a specific type of aircraft is not critical for the structure of the developed algorithms. However, the specific features of the selected aircraft determine the numerical values of the coefficients and parameters appearing in the developed rules and algorithms.

Structure of the System
A general structure of the developed flight control system is shown in Figure 3. A similar one was previously applied to control MALE class aircraft on the glide path with the use of PAPI light indications [33]. It is composed of a classical measurement system, an autopilot [30,31], and two extra modules expanding its basic functions:
• Vision System-provides information on the aircraft's vertical position relative to the selected glide path, δ_V; this information results from processing of the ground signs image appearing on the on-board camera.
• Automatic Landing System-controls the aircraft with the use of fuzzy logic and expert knowledge-based control algorithms. It also uses specific autopilot spatial attitude and speed stabilization functions, or utilizes the autopilot to control the actuators directly, to maintain the desired glide path. The Automatic Landing System is a master control module to the autopilot. It manages the landing from the initial moment, when the aircraft gets on the desired approach path, through the flare phase, to touchdown at the end. It engages the autopilot's pitch angle and airspeed channels. One output, the desired pitch angle θ_d, is stabilized by the autopilot in the pitch angle channel; in the airspeed channel, because of its criticality and specific features, the throttle δ_T is controlled directly.
The control algorithms of the Automatic Landing System rely on the set of aircraft flight state data provided by the standard measurement devices, as well as on the Vision System outputs. In particular, information on the aircraft's relative position to the desired glide path, supported by classical sensors and systems, is utilized to control the aircraft on the vertical trajectory, so as to maintain it correctly until the flare phase. It should also make it possible to conduct the flare and touchdown maneuvers safely when visual signals are not readily available.

Determination of Aircraft Position Vertical Deviation from the Selected Glide Path
During landing, pilots of manned aircraft constantly and very carefully observe the runway and runway signs, if present. In this way, they are able to assess the actual deviation from the required glide path angle [26,27,34-37]. It is assumed that the developed system should be capable of estimating the deviation from the desired landing trajectory using the picture of the runway, marked with its signs, just as human pilots do. The goal of the vision system described in this work is to detect and interpret the position of the dedicated ground signs (Figure 1) on the picture appearing on the on-board camera and feed the autopilot with any detected deviation from the predefined glide path. It is assumed that the aircraft fixes on the aiming point, keeping it in the center of the picture, which, together with airspeed stabilization at the desired approach airspeed, provides the capability of proper glide path angle control. In this way, the vertical position of the signs on the recorded picture is indirectly linked with the vertical deviation of the aircraft's position from the desired glide path. Thus, the entire process is based on locating the image of the dedicated signs in relation to the center of the gathered picture (Figure 4). The ground signs system is revealed on a video image by extraction of the red and yellow elliptical areas, highly contrasted with the background. This is achieved by a specific sequence of standard mathematical operations on pixel arrays [38] (Figure 5). It is assumed that the vertical dimension of the picture D_V is given in pixels, the pixel with (0,0) coordinates matches the upper left corner of the picture, and δ_V is the vertical deviation of the signs system from the center of the picture as a percentage of the image height.
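The δ_V definition above reduces to a simple pixel calculation. A minimal sketch is given below; the sign convention (positive when the sign sits below the image center) is an assumption, as the paper does not state it explicitly.

```python
def vertical_deviation(sign_y, image_height):
    """Vertical deviation of the sign from the image center, as a percentage
    of the image height D_V. Positive when the sign lies below the center
    (row index grows downward from the (0,0) upper-left corner);
    this sign convention is an assumption."""
    center_y = image_height / 2
    return 100.0 * (sign_y - center_y) / image_height

# Example: 1080-line frame, sign centroid detected at pixel row 756
print(vertical_deviation(756, 1080))  # 20.0 (% of image height below center)
```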
A raw picture captured by the on-board video camera, containing both the red and yellow circles, is first transformed from the RGB to the HSV color model (Figure 6a) [39]. Next, a sixfold binarization, with double thresholding of the H, S, V components for yellow and for red, is defined by (2). The first three operations extract objects in red, and the next three extract objects in yellow. The results of these transformations are six separate monochromatic pictures, merged into two final pictures (separately for yellow and red) by a per-pixel binary operation (Figure 6b).
Figure 5. The image processing algorithm for a single frame of the picture determining the position of the ground signs. H, S, V-pixel color components for the yellow (y) and red (r) circles; M-number of identified elliptical objects; P_r(x_r, y_r), P_y(x_y, y_y)-positions of the red and yellow circles on the picture in pixels; δ_v-vertical deviation of the signs system from the center of the picture.
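The double-threshold binarization and the per-pixel merge can be sketched with NumPy alone. The HSV threshold values below are illustrative assumptions, not the tuned values from (2); note that red wraps around the hue circle, so two hue bands are combined.

```python
import numpy as np

def in_range(img, lo, hi):
    """Per-pixel mask: True where all three HSV components fall in [lo, hi]
    (the double-thresholding of a single color band)."""
    return np.all((img >= lo) & (img <= hi), axis=-1)

def binarize_signs(hsv):
    """Return (F_Rbin, F_Ybin, F_bin) masks; thresholds are ILLUSTRATIVE,
    using the common 0..180 hue scale."""
    red = in_range(hsv, (0, 100, 100), (10, 255, 255)) | \
          in_range(hsv, (170, 100, 100), (180, 255, 255))
    yellow = in_range(hsv, (25, 100, 100), (35, 255, 255))
    return red, yellow, red | yellow

# 2x2 toy "image": one red pixel, one yellow pixel, two background pixels
hsv = np.array([[[5, 200, 200], [30, 200, 200]],
                [[90, 50, 50], [0, 0, 0]]], dtype=np.uint8)
r, y, both = binarize_signs(hsv)
```

In the paper the equivalent low-level operations are taken from OpenCV [38]; the sketch only mirrors the logic of thresholding each component and merging the six monochromatic results.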
where: x, y-coordinates of a single pixel; F_Hr, F_Sr, F_Vr-monochromatic pictures from binarization of the H, S, V components for red; F_Hy, F_Sy, F_Vy-monochromatic pictures from binarization of the H, S, V components for yellow; F_Rbin, F_Ybin, F_bin-monochromatic pictures representing objects in red, yellow, and red-yellow, respectively. Erosion with an 8 × 8 pixel square kernel removes small artefacts from the picture, making it less ambiguous and more convenient for the next step of processing: finding the elliptical shape contours in both red and yellow. A single sign contour is defined in the form of a vector of points [38,40].
Zero-order geometric moments m^r_{0,0} and m^y_{0,0} of the areas inside the contours (3) [41] make it possible to find the two contours having the largest red and yellow areas, respectively.
where: m_{p,q}-geometric moment of order p, q; i-pixel index; x_i, y_i-coordinates of the i-th pixel. The next phase concentrates on calculating the coordinates of the geometric center point P_c for each sign detected on the picture (4) [41], where: x, y-coordinates of the centers of the objects; m_{00}, m_{10}, m_{01}-the relevant geometric moments for the largest red and yellow objects, respectively. The condition y_y > y_r (Figure 7) checks the correctness of the flight direction. In the assumed correct direction, the yellow sign is closer to the aircraft than the red one, so it should be located lower on the picture.
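The moments in (3)-(4) and the direction check can be sketched directly over a binary mask; this is a minimal illustration of the centroid computation, not the paper's OpenCV-based implementation.

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of the True pixels of a binary mask via the
    geometric moments m_{0,0}, m_{1,0}, m_{0,1}."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                   # m_{0,0}: area in pixels
    m10, m01 = xs.sum(), ys.sum()   # m_{1,0} and m_{0,1}
    return m10 / m00, m01 / m00     # x = m10/m00, y = m01/m00

mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 2:5] = True               # a 3x3 blob of "sign" pixels
x_c, y_c = centroid(mask)
print(x_c, y_c)                     # 3.0 5.0

def direction_ok(y_yellow, y_red):
    """Flight-direction check from the text: the yellow sign must appear
    lower on the picture (larger row index) than the red one."""
    return y_yellow > y_red
```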

Automatic Landing Algorithms
The automatic landing algorithms hosted by the Automatic Landing System (Figure 3) make the execution of the landing phase possible from the moment the aircraft is on the desired approach path until it touches the runway. They are designed to imitate a UAV operator's activity and should be able to land the aircraft relying on the same set of data. Thus, the method a human uses to control the plane is a key driver for defining the set of system input signals [42]. The implemented algorithms form a Mamdani-type fuzzy expert system. It is composed of fuzzy sets describing the values the inputs and outputs take, a set of rules defining dependencies between inputs and outputs, and logical conditions managing when specific rules are active. It has three inputs assigned to three specific aircraft flight parameters. The first and most crucial is the position of the ground sign system on the image appearing on the on-board camera, δ_V, with the desired value set to zero. It is reduced by the use of the desired pitch angle, simultaneously supported by the thrust control, in the way defined by rules R.1 to R.12. Thus, the δ_V signal is used by the Automatic Landing System as a control error. The other two are supplementary but important inputs from the point of view of the entire process: the height above the runway H and the true airspeed U.
There are also two outputs from the system: the desired pitch angle θ_d and the throttle control signal δ_T.
Using the information about airplane control during landing applied by the human, a linguistic space, as well as the linguistic values it contains (i.e., fuzzy sets), was defined for all linguistic variables:
• linguistic spaces of the input variables: X_δV (vertical deviation of the yellow circle from the center of the picture) = {extremely low, very low, low, correct, high, very high, extremely high}, X_H (height above the runway) = {very low, low, high}, X_U (airspeed) = {very low, low, correct, high, very high};
• linguistic spaces of the output variables: X_θd (desired pitch angle) = {negative big, negative small, negative, zero, positive, positive small, positive big}, X_δT (desired throttle position) = {very small, small, medium, big, very big}.
The next step in designing the expert system was the assignment of the membership functions of linguistic variables to fuzzy sets [43,44]. Triangular, trapezoidal typical forms of membership functions were used for some practical reasons, facilitating the final tuning of the control algorithms. For each fuzzy set, values of membership function parameters were determined based on measurement data recorded during real flights performed by the operator as well as on information obtained from experts (skilled in manual remote-control UAV operations) [45].
In the first step, the vertical deviation of the yellow sign from the center of the picture δ_V and the height of the plane above the runway H were linked to the proper fuzzy sets (Figures 8 and 9, respectively). Due to both the ground proximity and the relatively low flight speed, it was necessary to pay increased attention to maintaining the proper airspeed at all times. For this reason, the control algorithms also take into account the airspeed U, associated with the fuzzy sets: very low, low, correct, high, very high (Figure 10). The label "correct" is associated with the correct final approach airspeed defined for the aircraft used in the experiments (Figure 2). It results from the aircraft mass, configuration, and aerodynamic characteristics and is recognized as safe for all actions during the approach. There are two output signals from the automatic landing algorithms controlling the aircraft (Figure 3): the desired pitch angle θ_d (Figure 11), maintained by the autopilot, and δ_T, directly controlling the throttle lever position (Figure 12). The number of fuzzy sets selected for each linguistic variable and their widths result from expert knowledge supported with data recorded during test flights. Ranges of both input and output data were associated with fuzzy sets in such a way as to obtain the appropriate granularity (accuracy) of the information, securing both the proper utilization of expert knowledge and the precision of the control process [44,46,47].
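The triangular and trapezoidal membership functions mentioned above can be sketched in a few lines. The breakpoints below are illustrative assumptions, not the tuned values from Figures 8-12; only the "correct" approach airspeed of 16 m/s is taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trap(x, a, b, c, d):
    """Trapezoidal membership function with a flat top between b and c."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Example fuzzification of airspeed U; breakpoints are ILLUSTRATIVE
airspeed_sets = {
    "low":     lambda u: trap(u, 10.0, 11.0, 13.0, 15.0),
    "correct": lambda u: tri(u, 14.0, 16.0, 18.0),   # peak at 16 m/s
    "high":    lambda u: trap(u, 17.0, 19.0, 21.0, 22.0),
}
degrees = {name: f(16.5) for name, f in airspeed_sets.items()}
print(degrees["correct"])  # 0.75
```

Piecewise-linear shapes like these keep the final tuning simple, which matches the practical reasons given in the text for choosing them.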
A fuzzy rule base for the expert system contains fourteen rules defining the aircraft control strategy during the particular phases of the landing (Figure 13). Although the study focuses on the use of visual signals by control rules, which are fully feasible only on the approach segment, the next segment, not being the main objective of this research, is also considered in the rules so as to make the landing complete. It is worth noting that the rules were not generated automatically. Consequently, if the set were analyzed literally, some formal cases would likely not be covered by them. However, when looking at the expert system as a whole, these cases are indirectly covered by other rules, or are not feasible at all.
It is assumed that the system can only work if the following conditions are met:
• the system can be activated if the aircraft is in a landing configuration on the glide path, with the ground signs located no further than 25% of the picture height from the center of the picture, which indirectly predefines the real path angle to be maintained;
• the ground signs are visible and identifiable until the flare phase;
• only normal procedures are supported; no emergency situations are considered.
Rules R.1 to R.7 are responsible for keeping the airplane on the approach path, aiming at the signs system, when the flight height H above the ground is greater than the defined flare height. They produce the desired pitch angle θ_d, which should be slightly negative to maintain the proper sink rate when the aircraft follows the glide path correctly. When the aircraft is under or over the desired trajectory, the angle should be more positive or more negative, respectively, to ensure that the aircraft soon rejoins the approach path. Rules R.8 to R.12 control the airspeed. They are meant to maintain the desired (constant in the ideal case) approach airspeed, regardless of the actual pitch angle. They are active during the approach phase, when the height above the ground is greater than the assumed flare height and the real aiming point, i.e., the ground signs, appears to the camera. If the airspeed U is adequate, i.e., equal to the predefined aircraft approach airspeed of 16 m/s (58 km/h), the expert system sets the throttle lever δ_T to a position between the middle and the minimum setting. At a negative pitch angle, this allows the aircraft to continue the flight with the appropriate approach airspeed. If the aircraft flies too fast or too slowly, the system decreases or increases the thrust. When the flare height is reached, visual signals are no longer considered because they are not visible: the aircraft is moved nose up to reduce the sink rate, so the signs move down beyond the frame of the camera. The information on the aircraft's relative position to the glide path becomes unreliable at the beginning of this phase and totally unavailable at the end of it. Thus, because of the differences in the nature of the approach and flare phases, the set of control rules must be exchanged. The rules controlling the approach are deactivated, and rule R.13 is triggered.
This rule does not request any information from the Vision System; it slowly increases the pitch angle (to small positive values) and retracts the throttle lever to the minimum position. These actions imitate the operator's activity during the flare maneuver, causing a reduction of the sink rate and a gradual reduction of the airspeed, leading the aircraft as close to the touchdown as possible [48].

R.13 IF (H IS very low) THEN (θ_d IS negative small) (δ_T IS very small)
Once the aircraft is very near the ground signs, and they start moving beyond the frame of the camera the control is given to the rule R.14. Its task is to continue maintaining the pitch angle to reduce the sink rate to a safe value.
R.14 IF (H IS very low) THEN (θ_d IS zero) (δ_T IS very small)
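To make the Mamdani mechanics concrete, the sketch below shows min-clipping, max-aggregation, and centroid defuzzification in the spirit of the approach rules R.1 to R.7. All set breakpoints and rule pairings are illustrative assumptions, not the tuned rule base from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Antecedent sets over the deviation delta_V (% of image height); ILLUSTRATIVE
dev_sets = {"low":     lambda d: tri(d, -50.0, -25.0, 0.0),
            "correct": lambda d: tri(d, -10.0, 0.0, 10.0),
            "high":    lambda d: tri(d, 0.0, 25.0, 50.0)}
# Consequent sets over the desired pitch angle theta_d (degrees); ILLUSTRATIVE
pitch_sets = {"climb":   lambda t: tri(t, 0.0, 4.0, 8.0),
              "hold":    lambda t: tri(t, -3.0, -1.5, 0.0),
              "descend": lambda t: tri(t, -8.0, -4.0, 0.0)}
rules = [("low", "climb"), ("correct", "hold"), ("high", "descend")]

def infer(delta_v):
    """Clip each consequent by its rule strength (min), aggregate by max,
    then defuzzify with a sampled centroid over -10..10 degrees."""
    universe = [t / 10.0 for t in range(-100, 101)]
    agg = [max(min(dev_sets[a](delta_v), pitch_sets[c](t)) for a, c in rules)
           for t in universe]
    total = sum(agg)
    return sum(t * m for t, m in zip(universe, agg)) / total if total else 0.0

print(round(infer(0.0), 2))  # about -1.5: slightly nose-down on the path
```

With zero deviation, the output is a slightly negative pitch angle, matching the behavior described for the on-path case; large deviations pull the centroid toward the climb or descend sets.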

Simulation Results
The issues theoretically discussed so far were then checked and verified in practical tests. The tests of the Vision System were intended to analyze the work of the algorithm determining the vertical deviation of the aircraft from the desired flight path. The tests of the Automatic Landing System were intended to verify the control algorithms guiding the aircraft along the glide path with the use of information about deviations obtained from the Vision System. Software in the Loop (SIL) and Hardware in the Loop (HIL) tests, supported by flight tests, were performed. For this purpose, a Cularis aircraft simulation model, the target autopilot, and the Embedded Vision System and Automatic Landing System modules with target software, whose key functions were automatically generated using Matlab/Simulink v.2016b, were integrated into laboratory rigs [30].

Tests of Ground Signs Indication and Interpretation Algorithms
The Vision System (Figure 3) was tested individually at the beginning of this stage. Its hardware features, as well as the video camera performance, affected the test configuration and the campaign scenarios. The system is composed of the following components:
• embedded computer-based on a quad-core ARM processor with a 256-core GPU graphics chip allowing parallel computing, managed by a real-time operating system including a built-in compiler, a debugger, and system libraries containing basic functions and algorithms for image processing and calculation support (an Nvidia Jetson TX1 from Nvidia, Santa Clara, CA, a computer based on the NVIDIA Maxwell architecture, was used as the hardware platform, with the Linux Ubuntu 16.04 LTS operating system);
• video camera-4K resolution, 155 degree view angle, with a picture stabilization mechanism, adapted to variable lighting conditions (FPV Legend 3 from Foxeer, Shenzhen, China).
The software realizing the image processing algorithms offered all the functions necessary to extract essential information from the image. These functions identified objects on video frames, generated information about their location, and provided those data for further calculations. The image processing methods for this purpose were supported by computer vision methods and techniques contained in the OpenCV programming library [38]. Low-level library functions that perform basic operations on images were not developed as part of the research. The goal was only to use them as "bricks" in the development of the algorithms interpreting the indicated ground signs.
The software analyzing the ground signs image was tested in three steps, i.e., elements of desktop simulations, SIL/HIL tests, and real flights were successively merged (Figure 14):
1. The operator landed a Cularis aircraft (Multiplex Modellsport GmbH & Co. KG, Bretten, Germany) during dedicated real flights, and movies containing the ground signs image were recorded in *.avi format.
2. The Vision System was fed with the movie off-line, but the real time of calculations was maintained.
3. The information about the deviation δ_v, being a system output, was compared and verified against manual calculations.
Figure 14. The approach adopted to test the kernel functions of the Vision System.
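The off-line replay in step 2, where recorded frames are fed to the system while real-time pacing is preserved, can be sketched with a simple frame scheduler. The `process_frame` callback is a hypothetical stand-in for the Vision System's per-frame pipeline, and the frame rate is an assumption.

```python
import time

def replay(frames, fps=25.0, process_frame=lambda f: None):
    """Feed frames to process_frame while keeping wall-clock pacing, so the
    computation budget per frame matches real-time operation."""
    period = 1.0 / fps
    next_due = time.monotonic()
    outputs = []
    for frame in frames:
        outputs.append(process_frame(frame))
        next_due += period
        delay = next_due - time.monotonic()
        if delay > 0:              # sleep only if processing finished early
            time.sleep(delay)
    return outputs

# Replay 5 dummy "frames" at 50 fps; should take roughly 0.1 s of wall time
t0 = time.monotonic()
out = replay(range(5), fps=50.0, process_frame=lambda f: f * 2)
elapsed = time.monotonic() - t0
```

Scheduling against an absolute deadline (`next_due`) rather than sleeping a fixed period prevents per-frame processing jitter from accumulating into drift.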

Tests of the Control Algorithms
The next stage of the tests focused on verifying whether the data generated by the image processing algorithm hosted by the Vision System are sufficient for the control algorithms to guide the aircraft safely to the flare phase and, possibly, land successfully. Because both parts of the system are mutually dependent, the control algorithms could not be tested in separation from the image-processing component.
An overall assessment of the combined work of the Vision System and the Automatic Landing System took into consideration the correct execution of all phases. Special attention was paid to the stabilization of airspeed, vertical deviation from the desired glide path, and vertical speed (sink rate) during the approach. During flare and touchdown (because those are only phases supplementary to the final approach, the main objective of the research), these flight parameters were not given such priority. The assessment was composed of subjective expert opinions taking into consideration all specific phases: firstly, the overall impression of the entire landing and, secondly, some typical indexes applicable to a control system (e.g., the ranges over which selected flight parameters vary, the dynamics of these variations).
Testing began with SIL type tests conducted on a dedicated rig [30,31] (Figure 15) consisting of the following components:
• a computer hosting the software functions of picture processing and the fuzzy control algorithms; a USB interface was used as the video input, and an RS-232 interface was applied as the output for the transmission of the calculated deviations of the aircraft position from the glide path.
The main feature of that particular campaign was the verification of the major software functions automatically generated from MATLAB/Simulink models and then downloaded onto the simulating computer. These functions were intended to be downloaded into the target hardware in further steps.
A HIL test campaign was the next step in testing, carried out with the use of a modified SIL rig (Figure 16). The simulating computer was replaced with two target hardware units:

• the Vision System, with software realizing the picture-processing algorithms. A CSI-2 interface was used as the video input, and an RS-232 interface was applied for the transmission of the identified vertical deviation of the aircraft position from the desired glide path;
• the Automatic Landing System, with fuzzy logic algorithms capable of controlling the aircraft during the landing phase. It generated the desired pitch angle, stabilized by a real autopilot unit, and the desired throttle lever position.

Figure 16. HIL rig configuration for testing picture processing and control algorithms. Adapted from [33].
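The control chain above maps the identified vertical deviation to a desired pitch angle, which the autopilot then stabilizes. The role of the fuzzy stage can be illustrated with a tiny Mamdani-style sketch; the membership shapes, rule consequents, and gains below are illustrative assumptions, not the rule base from the paper.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def desired_pitch(dev_percent: float) -> float:
    """Map vertical deviation (% of path width, + = above the path)
    to a desired pitch command (deg) via weighted-average defuzzification."""
    # memberships: below / on / above the glide path (illustrative shapes)
    below = tri(dev_percent, -100.0, -50.0, 0.0)
    on_path = tri(dev_percent, -50.0, 0.0, 50.0)
    above = tri(dev_percent, 0.0, 50.0, 100.0)
    # rule consequents (deg): pull up / hold / push down (illustrative values)
    num = below * 4.0 + on_path * 0.0 + above * (-4.0)
    den = below + on_path + above
    return num / den if den else 0.0
```

In this arrangement, the fuzzy stage supplies only the outer-loop pitch reference; the inner-loop attitude stabilization remains the task of the real autopilot unit, as in the rig described above.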
Due to the basic feature of HIL-type tests, i.e., the use of real hardware modules running real software, the quality of the results is higher and comparable with real systems. The vertical profile of the aircraft trajectory achieved in the simulation is composed of three main sections (Figures 17-21). In the first section, the aircraft aimed at the set aiming point all through the approach; in the second, it lost the point from the camera frame in the final phase of the flare. The nose-up maneuver took effect when the flare height of 6 m was reached at 114 s of the simulation (Figure 18). At 123 s, the signs moved entirely beyond the picture, activating the touchdown rule (R.14), which constitutes the third section.
The system kept the ground signs in the center of the picture (Figure 19) until the flare height was reached. Activation of the flare rule at a height of 6 m caused the signs to slip out of the image seen by the camera. At this moment, the system artificially set the magnitude of the vertical deviation above 100%, as a signal that no signs were detected in the picture.
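The phase-switching behavior described above can be sketched as a simple state machine. The thresholds (the 6 m flare height and the over-100% deviation sentinel) follow the text; the function names, the sentinel constant, and the overall structure are illustrative assumptions, not the authors' implementation.

```python
SIGNS_LOST = 101.0  # sentinel: deviation set "over 100%" when no signs detected

def vertical_deviation(signs_detected: bool, measured_dev: float) -> float:
    """Return the measured deviation, or the sentinel once the signs
    have left the camera frame."""
    return measured_dev if signs_detected else SIGNS_LOST

def next_phase(phase: str, height_m: float, deviation: float) -> str:
    """Advance the landing phase based on height and the deviation signal."""
    if phase == "approach" and height_m <= 6.0:
        return "flare"       # flare rule activates at the 6 m flare height
    if phase == "flare" and deviation >= SIGNS_LOST:
        return "touchdown"   # signs fully out of picture -> touchdown rule (R.14)
    return phase
```

Encoding the loss of the signs as an out-of-range deviation value lets the control rules react to it through the same input channel as ordinary tracking errors, which matches the behavior reported above.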
Actions taken by the control system meant that the vertical speed (Figure 20) was approximately constant from the time of system activation until the flare phase. It was then significantly reduced by the flare rules (R.13, R.14) to a safe level. The system ensured the aircraft made contact with the ground at a vertical speed of 0.5 m/s, which was recognized as safe.
The airspeed (Figure 21), maintained at approximately 50-52 km/h (13.9-14.4 m/s) during the approach, was reduced seamlessly to 32 km/h (8.9 m/s) just before touchdown, which was safe and also typical of manual landings.
Although the main objectives of the paper were to obtain data on the aircraft's position relative to the landing trajectory from visual signals, and then to develop fuzzy control rules usable during the approach, the entire landing was simulated in the tests. Rules R.13 and R.14 were applied at flare, when the signs moved out of the picture and image processing could no longer be engaged, solely to complete the landing (Figure 22).
The pilot's actions were imitated, and safe flight parameters, especially the sink rate and pitch angle, were maintained. The flight in the last five seconds before touchdown cannot be considered perfect because of an overshoot in the pitch channel and a "jump over" effect. However, the flight parameters remained within safe ranges and the aircraft finally landed. This experiment proved that the linkage and seamless transition from the approach phase to the next stage are feasible and could be successful. The solution discussed in the paper could thus serve as a proposal for, and an alternative to, systems supporting the automatic landing of small fixed-wing UAVs.

Conclusions
Issues related to the automatic landing of small UAVs have become more important nowadays because of the increasing number of operators and operations, and the diversity of mission types and purposes. Extra attention should be paid to fixed-wing aircraft because of the necessity of their permanent horizontal movement: unlike helicopters and multirotor aircraft, they cannot stop flying forward in order to correct their height or path angle. Moreover, small UAVs mostly operate from airfields which are not equipped with any electronic infrastructure supporting automatic landing, including radio navigation. UAV remote pilots use only visual information, together with indications from basic instruments, to build a mental image of the landing and attempt to land successfully. This paper presents an alternative, whereby, with the use of simple ground signs, the on-board vision system can assess the glide path and feed these data to the control system. An Automatic Landing System applies fuzzy logic-based rules to guide the aircraft along the desired vertical landing trajectory. The results achieved in both SIL and HIL simulations proved that the system imitating a pilot's actions was capable of controlling the aircraft during the approach and of switching between rules during the flare phase to accomplish the landing. Moreover, testing with real videos recorded during real flights and processed by the data-processing section of the vision system revealed that the system was able to assess the aircraft's vertical deviation from the desired flight path using the recordings. The reliability of the results is noteworthy because real devices hosting the target software were used in the tests.
The flight parameters maintained by the control system were within the regimes acceptable to experts at all times. The pattern of the flare phase in general imitated a manual landing, but was not recognized as perfect. However, it must be noted that the main objectives of the paper concerned the flight before the start of the flare phase. Transition to the flare phase and controlling the aircraft during this phase were only considered in order to accomplish the maneuver and show that this solution could be seamlessly linked with other controls performing the final phase of the landing.
Although the presented solution uses a predefined ground sign system, images of real objects located near a potential runway could be used in the next step of the research. The system is also independent of the image source: the daylight camera could be replaced with another source of pictures (e.g., LIDAR, an infrared camera, radar) with proper adjustment of the image-processing algorithm, making the system more universal and robust in the future.
If the research continues along the route presented in this paper, an autonomous landing system composed only of on-board elements, independent of satellite or radio aids, could be developed in the future.
The results achieved are promising and could form the background for further development and real implementation.

Conflicts of Interest:
The authors declare no conflict of interest.