A State-of-the-Art Analysis of Obstacle Avoidance Methods from the Perspective of an Agricultural Sprayer UAV’s Operation Scenario

Abstract: Over the last decade, Unmanned Aerial Vehicles (UAVs), also known as drones, have been broadly utilized in various agricultural fields, such as crop management, crop monitoring, seed sowing, and pesticide spraying. Nonetheless, autonomy is still a crucial limitation faced by Internet of Things (IoT) UAV systems, especially when used as sprayer UAVs, where data needs to be captured and preprocessed for robust real-time obstacle detection and collision avoidance. Moreover, because of the objective and operational differences between general UAVs and sprayer UAVs, not every obstacle detection and collision avoidance method will be sufficient for sprayer UAVs. In this regard, this article seeks to review the most relevant developments on all correlated branches of the obstacle avoidance scenarios for agricultural sprayer UAVs, including a UAV sprayer's structural details. Furthermore, the most relevant open challenges for current UAV sprayer solutions are enumerated, thus paving the way for future researchers to define a roadmap for devising new-generation, affordable autonomous sprayer UAV solutions. Agricultural UAV sprayers require data-intensive algorithms for the processing of the images acquired, and expertise in the field of autonomous flight is usually needed. The present study concludes that UAV sprayers are still facing obstacle detection challenges due to their dynamic operating and loading conditions.


Introduction
The integration of Unmanned Aerial Vehicles (UAVs) with IoT (Internet of Things) devices, such as embedded sensors and communication elements, for agricultural operations is growing at a significantly faster pace than expected [1,2]. These IoT devices greatly enhance the capabilities of UAVs and enable UAVs to be used in a wide range of agricultural crop management operations, including field mapping [3,4], plant-stress detection [5,6], biomass estimation [7,8], weed management [9,10], inventory counting [11], etc.
Moreover, disease and pest control are mostly achieved by applying different chemical elements using different distribution systems [12,13]. Among these distribution systems, manual air-pressure and battery-powered backpack sprayers constitute the leading spraying equipment [14][15][16]. Because of the toxicity of manual spraying to human health, sprayer UAVs integrated with IoT devices have been deployed to replace the crude and ineffective manual spraying methods [17][18][19]. These kinds of IoT sprayer UAVs are expected to revolutionize agronomy, finishing tasks in hours instead of days, reducing human intervention during pest outbreaks, balancing pesticide deposition on crops, and being environmentally friendly. The main contributions of this article are as follows:
1. The most relevant obstacle detection and collision avoidance techniques are reviewed and discussed, as well as their application perspective with agricultural sprayer UAVs.
2. The latest path planning algorithms that are used in agricultural UAVs and the structural challenges of sprayer UAVs are described.
3. The operational pattern, detection sensors, obstacles in agricultural farmlands, and control architecture for collision avoidance are thoroughly highlighted to pave the way for future researchers to design their own agricultural sprayer UAV systems.
4. The core open challenges and recent technical limitations associated with agricultural sprayer UAVs are enumerated.
The rest of this article is organized as follows. Section 1.1 introduces the general background and related work on obstacle detection and collision avoidance of UAVs. Section 2 analyses the constraints of agricultural sprayer UAVs. A review of the operational pattern and the general spraying architecture of sprayer UAVs, including farmland obstacles, is also presented in this section. In Section 3, details of the various obstacle detection and collision avoidance algorithms, including the various sensing techniques, are presented. Finally, Section 4 highlights the currently open challenges, and concluding remarks are presented in Section 5.
The main objective of a sprayer UAV is to spray the maximum farming area with adequate spraying coverage and good droplet deposition. However, when spraying, some constraints within the agricultural field have to be taken into consideration. These include (a) field shapes; (b) different weather conditions; and (c) obstacles in the field [59]. Other spraying performance constraints are inherent to the sprayer UAV itself, and these include (a) the liquid load, which will eventually decrease; (b) liquid capacity or vehicle weight; (c) battery/fuel capacity; (d) vehicle type; and (e) spray pressure [24,60,61]. The basic diagram of the agricultural sprayer UAV is shown in Figure 1. For the sprayer UAV to be able to perform within the abovementioned constraints, some significant farmland spraying issues have to be considered, and these "spraying issues" are highlighted in the subsequent sections.

Challenges
A manually flown UAV needs an expert pilot; a lack of such machinery expertise may decrease spraying efficiency. Because of human error, autonomous sprayer UAVs are becoming more popular. Similar to agricultural autonomous vehicles, the autonomous sprayer UAV also follows a coverage route plan, and numerous studies have been conducted on route planning for agricultural field coverage. Some route planning research focuses on different geometrical field shapes [62][63][64][65], while other work disregards the geometry of the field [66][67][68][69]. A typical path planning structure is shown in Figure 2: (a) path plan without considering field geometry [67]; (b) path plan considering geometry [63]. Torres et al. [70] proposed a path planning algorithm working from a 2D image that optimizes battery consumption. Other path-planning methods have been constructed using satellite imagery, focusing on coverage plans that exclude larger obstacles visible from satellite [71][72][73]. Zhang et al. [74] also developed a path-planning method excluding obstacle circles for crop protection UAVs using satellite imagery. Figure 3 shows the filtered path plan excluding the visible satellite obstacles.
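The back-and-forth coverage pattern underlying most of these route plans can be sketched in a few lines. The following is a minimal illustrative sketch; the field dimensions and spray swath width are assumed values, not taken from any cited method:

```python
# Minimal boustrophedon (back-and-forth) coverage plan for a rectangular
# field. Field size and swath width are assumed example values.

def boustrophedon_waypoints(width, height, swath):
    """Return (x, y) waypoints covering a width x height field with
    parallel passes spaced one spray swath apart."""
    waypoints = []
    x, direction = 0.0, 1
    while x <= width:
        # Fly the full length of the pass, alternating direction each time.
        y_start, y_end = (0.0, height) if direction == 1 else (height, 0.0)
        waypoints.append((x, y_start))
        waypoints.append((x, y_end))
        x += swath
        direction *= -1
    return waypoints

# A 20 m x 50 m field with a 5 m swath yields five parallel passes.
path = boustrophedon_waypoints(width=20.0, height=50.0, swath=5.0)
```

Real planners additionally account for field geometry, turns, and battery constraints, as in the methods cited above.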

Liquid Load and Sloshing
As is well known, the agricultural sprayer UAV carries a liquid tank, and that liquid tank presents three situations: (a) the liquid level continuously decreases during flight; (b) vehicle flight movement changes the angle of the vehicle as well as the tank; and (c) the tank liquid sloshes when changing direction. The first situation is that the decreasing level of liquid continuously changes the tank's center of gravity.
Second, flight maneuvers change the tank's angle, which also changes the center of gravity of the liquid tank. Khorsandi, Ayers, Freeland, and Wang [75] have shown how a tilted tank changes the center of gravity, as illustrated in Figure 4. The third situation, liquid sloshing, is related to the liquid level, tank tilt angle, and velocity [76,77,78].

In Figure 5, the sloshing impact inside a tank with a 30% filling rate is shown, as experimented by Li Xi [79]. The experiment was performed specifically for sprayer UAV liquid sloshing and showed that inner horizontal and vertical grilles can effectively reduce tank liquid sloshing and the oscillation of the sprayer UAV's tank. In the literature, several works have also been conducted to reduce the slosh of large liquid tankers [80][81][82][83][84].
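The center-of-gravity shift described above can be illustrated with a simple quasi-static model. The sketch below assumes a rectangular tank whose free surface stays horizontal at a small tilt angle; it is purely illustrative and is not the model used in [75] or [79]:

```python
import math

def liquid_cog_shift(length, level_depth, tilt_deg):
    """Horizontal shift (along the tank axis, in tank coordinates) of the
    liquid centre of gravity when a rectangular tank is tilted so that the
    free surface stays horizontal. Valid while the surface still meets
    both side walls (small tilt, tank not near empty or full).

    The depth profile is d(x) = level_depth - x * tan(tilt) for
    x in [-length/2, length/2]; the centroid of that trapezoidal
    cross-section sits at -tan(tilt) * length^2 / (12 * level_depth).
    """
    t = math.tan(math.radians(tilt_deg))
    return -t * length ** 2 / (12.0 * level_depth)

# Assumed example: a 0.6 m tank, liquid 0.2 m deep, tilted by 10 degrees.
shift = liquid_cog_shift(0.6, 0.2, 10.0)
```

Even this crude model shows the centre of gravity moving toward the low side of the tank, which is why grille baffles and tilt-aware control matter for sprayer UAVs.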

Obstacles on Farmland
Due to geographical position and seasonal changes, the agricultural sector faces uncontrollable weather changes such as strong wind, drought, freezing wind, fog, etc. Thus, even for the same geographical position, an agricultural UAV faces different situations and may change its flight parameters, such as motor speed and PID gains, in order to maintain stability and position [85,86]. Moreover, sprayer UAVs are relatively heavier than other UAVs due to the liquid load [87,88]. During spraying operation, droplet deposition is the primary concern for an agricultural UAV, and it is directly related to flying parameters such as flying velocity and altitude [89]. These parameter settings are selected by operators depending on plant growth stage and type, terrain, topography of the farmland, etc. [90,91,92].
With regard to these spraying issues, the safety of the sprayer UAV should always be ensured. Because of the low-altitude flying, farmland-specific obstacles become relevant, such as ladders, pump houses, electrical substations, power lines, telephone towers, lighting towers, groups of trees, scattered trees, flying birds or bats, etc. Example obstacle images (satellite images) are shown in Figure 6. To avoid these kinds of obstacles successfully and intelligently, it is essential to analyze the features of the obstacles on farmland. Therefore, we categorized all the possible obstacles on farmlands, and their details are given in Table 1.

Figure 6. Example farmland obstacles (caption fragment): … [94]; (c) big bush [95]; (d) trees [96]. After categorizing all possible obstacles on farmland, it can be seen that larger obstacles can be detected by the global detection system, while some need to be detected by the local detection system [70]. The local detection and obstacle avoidance operations need real-time analysis, intelligent identification, potential area detection, a suitable path, and so on [97]. Thus, the sprayer UAV needs a suitable detection sensor or sensor fusion; obstacle detection sensors are therefore discussed in the next section.

Obstacle Avoidance Scenario
The primary motivation for obstacle avoidance, specifically for sprayer UAVs, comes from the fast-growing number of commercial sprayer UAVs and their push toward full autonomy. Unlike other types of UAVs, the avoidance scenarios of sprayer UAVs are different for three reasons: (a) the UAV carries a comparatively heavy liquid load whose mass shifts in flight; (b) the UAV must maximize its spray coverage; and (c) the UAV must make full use of its battery capacity so that it can spray more. Various techniques have been proposed for obstacle avoidance. The basic idea behind the obstacle avoidance scenarios for a sprayer UAV is to detect the obstacle precisely and create a suitable avoidance trajectory with respect to spraying and safety. This section describes the most important obstacle avoidance scenarios.
Several technologies have been developed for obstacle detection, and numerous works have been performed using sonar, radar, laser, vision-based, and fusion methods. For mission-following UAVs, three control architectures have been developed, namely, the reactive control architecture, deliberative planning architecture, and hybrid control architecture. Thus, combining different obstacle detection methods and control architectures, several techniques have been developed for obstacle avoidance. We categorize the detection methods and describe some obstacle avoidance approaches with the requirements of sprayer UAVs in mind. Finally, we summarize the scenario using existing works on control, detection sensors, detection methods, and avoidance approaches, as shown in Figure 7.

Avoidance Plan and Control Architecture
The control architecture for obstacle avoidance describes how a UAV organizes its avoidance actions from perceived environmental data. Three types of control architectures meet the obstacle avoidance requirement: reactive control, deliberative planning, and hybrid integration control [36]. The functional diagram is shown in Figure 8. Typically, the flying mission and the obstacle avoidance operation are separate tasks. In the reactive control architecture, the UAV starts flying according to the mission plan and, when it senses an obstacle locally during the mission, avoids it by following real-time sensor data.
In the deliberative planning architecture, the UAV first starts flying according to the flight plan; then, during operation, it senses the working environment and updates the map model. After the initial flight, the final map is determined, and the optimal sequence of the collision-free path is generated. However, this method takes a longer time to determine the definitive plan, and without accurate positioning data, it will not be sufficient. Using reactive control and deliberative planning, Nakhaeinia et al. [98] made a hybrid architecture in which execution is divided into three layers, namely, the deliberative layer, the control execution layer, and the reactive control layer, which are also shown in Figure 8c. Here, the deliberative layer generates an optimal collision-free plan, which is then transferred to the reactive layer to generate the UAV's control action. The execution layer connects the other two layers.
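The division of labor in such a hybrid architecture can be sketched in a few lines. The layer roles follow the description above; the waypoints and the 5 m safety threshold are assumed values:

```python
# Illustrative sketch of a hybrid control step: a deliberative layer
# supplies a planned waypoint, and a reactive layer overrides it when a
# range sensor reports an obstacle closer than a safety threshold.

SAFE_DISTANCE = 5.0  # metres; assumed safety threshold

def reactive_override(obstacle_distance):
    """Reactive layer: decide whether real-time sensing must take over."""
    return obstacle_distance is not None and obstacle_distance < SAFE_DISTANCE

def hybrid_step(planned_waypoint, obstacle_distance):
    """Execution layer: choose between the deliberative plan and a
    reactive avoidance action for one control step."""
    if reactive_override(obstacle_distance):
        return ("avoid", obstacle_distance)   # reactive layer in command
    return ("follow", planned_waypoint)       # deliberative plan in command

# During the mission: no obstacle first, then one sensed at 3 m.
actions = [hybrid_step((10, 20), None), hybrid_step((10, 25), 3.0)]
```

The essential design choice, as in [98], is that the deliberative plan is never discarded; the reactive layer only takes command for as long as the safety condition is violated.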

Detection Sensors
Due to atmospheric conditions, such as lighting differences, temperature differences, spray drift or spray fog, water vaporization, etc., selecting the type of sensors for obstacle detection is an essential factor for an agricultural sprayer UAV [99][100][101][102]. Different kinds of sensors generate the required information about the obstacle and its surrounding environment so that the UAV can calculate the obstacle distance and perform the processing necessary for safe path generation around the obstacle; this information contains the size, shape, and location of the obstacles [103,104], and, according to the task requirements, different systems use various sensor setups [105].
Sensors can be classified along two functional axes [106]: proprioceptive/exteroceptive and passive/active. Proprioceptive sensors measure a vehicle's internal state, such as position, orientation, and speed, using sensors such as velocity sensors [107], tilt sensors [108], position sensors [109], heading sensors [110], accelerometers [111], etc. Exteroceptive sensors collect information and features from the surrounding environment, including the surrounding obstacle data of the vehicle, using sensors such as ToF [112], lidar [113], laser [114], camera [115], sonar [116], microwave radar [117], etc. These sensors provide information collected from the surroundings and help vehicles in their decision-making and interaction with the environment.
From the viewpoint of technical operation, the other classification is passive versus active. These are different sensing systems used to detect obstacles and record information about an obstacle's presence on the path. Passive sensors use the available environmental energy sources to gather data about the surroundings [106]. The sun provides energy in the visible spectrum, which reflects off objects. These visible wavelengths and reflected light can be detected by light- or wavelength-detecting sensors and different camera types, such as CCD, CMOS, and thermal cameras [118,119]. That is why these kinds of sensors are often described as vision-based sensors, which generate information based on pictures [120][121][122][123][124][125][126][127][128][129]. On the other hand, active sensors emit their own generated energy, such as light and soundwaves, and detect the energy reflections via sonar, microwave radar, laser, ToF, etc. From the reflected energy, the sensor characterizes the surrounding environment. Because active sensors generate information through their own controlled interactions, their performance is sometimes better. However, the emitted energy can be influenced by other energy sources, which may cause erroneous readings [106,[130][131][132][133][134][135][136].
For agricultural UAVs in different environments, a single sensor may not fulfill the task requirements. In that case, a multi-sensor combination, i.e., sensor fusion, can be used for better performance [137,138]. Hrabar et al. [139], McGuire et al. [140], and Santos et al. [141] proposed optical flow and stereo-vision sensor fusion to improve the accuracy of obstacle avoidance. Gageik et al. [142] presented infrared combined with ultrasonic sensors for obstacle avoidance at a low cost. On farmland, however, the obstacles and situations are different and more specific. Kragh et al. [143] compiled a performance comparison of different sensor types using a ground robot on farmland. After studying the Chinese agricultural UAV industry and the different types of detection sensors for obstacle avoidance, Wang, Lan, Zhang, Zhang, Tahir, Ou, Liu, and Chen [97] compared and summarized the features, advantages, and disadvantages of the sensors from the perspective of agricultural field operation. The different types of real-time obstacle detection sensors, with their advantages and disadvantages, are outlined in Table 2.
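The simplest form of such sensor fusion, combining two independent range readings (e.g., ultrasonic and infrared) by inverse-variance weighting, can be sketched as follows. This is a standard textbook combination, not the specific method of [142]; the distances and variances are assumed values:

```python
def fuse_ranges(readings):
    """Inverse-variance weighted fusion of independent range readings.
    Each reading is a (distance_m, variance) pair; the fused estimate
    trusts low-variance sensors more."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * d for (d, _), w in zip(readings, weights)) / sum(weights)
    return fused

# Ultrasonic: 4.2 m with high variance; infrared: 4.0 m with low variance.
d = fuse_ranges([(4.2, 0.30), (4.0, 0.05)])
```

The fused distance lands between the two readings but much closer to the more reliable infrared value, which is the behavior a fusion scheme is after.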

Obstacle Detection Technologies
A system can obtain various kinds of information about its surrounding obstacles from light or soundwave reflections via different sensors. From the previous section, we know that these noncontact sensors have different information-sensing capacities. This information includes an obstacle's size, distance, shape, color, and direction. After a farmland has been selected for spraying, its obstacles effectively become no-fly or no-spray zones for the UAV. Thus, the primary concerns for the UAV are the shape and position of the obstacle. To achieve obstacle-detection capacity, the sprayer UAV can use a single detection method or a fusion of several. Due to the emerging development and future demand of image processing and 3D mapping, obstacle detection using image processing has attracted the attention of several researchers [161][162][163][164][165][166][167][168][169][170][171][172]. Moreover, because of their rapid response and low computational requirements, many detection-and-ranging-based or active-sensor-based studies have also addressed the detection of a surrounding obstacle's position, coordinates, and mapping [125,144,[173][174][175][176][177][178][179][180]. Other studies on such detection systems use the fusion of active and passive sensing [181][182][183]. For autonomous navigation, the obstacle detection method is an important part. From analyzing previous studies, we identified the major obstacle detection methods, which are given in the next sections.

Sonar Mapping
The sonar (sound navigation and ranging) mapping system uses the acoustic wave reflection time from different angles to make an image or diagram of the surrounding environment. Sonar is the oldest obstacle detection technology, first used to measure the depth of the underwater floor in 1912 [184]. Later on, this technology was broadly used in modern warfare to detect obstacles [185]. Eventually, it found its way into robotics. Elfes [186] used sonar-based surround mapping for obstacle detection, where a sonar sensor measures the obstacle's range from different points of view to create a 2D map. Flynn [187] used multiple sonar sensors from different angles for more accuracy in creating the 2D map on a ground robot. Kleeman and Kuc [188] used a sonar array for targeting, localizing, classifying, and creating 2D maps. Akbarally and Kleeman [189] used the sonar sensor for localizing and classifying accurate 3D targets. Later, simultaneous localization and mapping (SLAM) technology using sonar sensors was applied to 3D imaging underwater by Ribas et al. [190]. Steckel and Peremans [191] used a biomimetic SLAM called BatSLAM, a mapping module using sonar to create a map for mobile robots. Steckel and Peremans [192] used a 3D sonar sensor and the BatSLAM sonar module together to improve localization and mapping. Two fully embedded real-time 3D imaging sonar architectures, also known as RTIS, were also demonstrated by Kerstens et al. [193]. A series of sonar sensors was also used on a UAV by bin Misnan et al. [194] for low-altitude mapping, where the sensors were set up at different angles. Gageik, Muller, and Montenegro [144], Gageik, Benz, and Montenegro [142], Gupta et al. [195], and Becker et al. [196] used a circular array of ultrasonic sensors on a small quadrotor UAV for obstacle mapping.
Further studies on mapping obstacles using sonar sensors have also been conducted by various other researchers [197][198][199][200][201].
Sonar mapping and obstacle detection sensors are mostly used in underwater vehicles because of the shortage of light in the deep sea. Because the human body is mostly made of water, sonar sensors are also used in medical science to image internal body structures in humans [202][203][204][205]. Besides that, sonar mapping has also been used on ground robots and small quadcopters in some research as a proof of concept for algorithms [142,189,191,195,196].
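The core of sonar mapping, converting each echo's time-of-flight and beam angle into a 2D obstacle point, can be sketched as follows. This is a minimal illustration of the principle, not any of the cited systems; the echo timings are assumed values:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def sonar_point(echo_time_s, beam_angle_deg):
    """Convert one sonar echo into a 2D obstacle point: range from the
    round-trip time-of-flight, position from the beam angle."""
    r = SPEED_OF_SOUND * echo_time_s / 2.0  # sound travels out and back
    a = math.radians(beam_angle_deg)
    return (r * math.cos(a), r * math.sin(a))

def sonar_map(echoes):
    """Build a crude 2D point map from (echo_time, angle) pairs."""
    return [sonar_point(t, a) for t, a in echoes]

# Echoes at 0 and 90 degrees, both with a 20 ms round trip (~3.43 m).
points = sonar_map([(0.02, 0.0), (0.02, 90.0)])
```

Scanning such points from many angles, as in [186,187], yields the 2D occupancy maps described above.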

Radar Mapping
Radar (radio detection and ranging) imaging uses the same ranging principle as sonar. Both use time-of-flight, or echo ranging, to calculate distance. Radar uses radio wave signals, which are part of the radio spectrum [206]. Radar imaging requires massive mathematical calculation but has powerful long-range capability through space, and therefore has extensive use in aerospace technologies [207,208]. This is why many works have been published on radar in rocket science and military technology [209][210][211][212][213]. However, our concern is obstacle detection for smaller vehicles with lower computational resources. A simple, short-range obstacle localization system was proposed by Giubbolini [214], where multiple 13-24 GHz radars were set up around a vehicle with a central digital signal processing (DSP) system. Compressive sensing (CS) technology, addressed by Baraniuk and Steeghs [215], is appropriate for monostatic, bistatic, and multistatic scenarios. Viquerat, Blackhall, Reid, Sukkarieh, and Brooker [174] used four microwave Doppler radars on a fixed-wing UAV to illuminate four forward field quadrants to determine an obstacle's position. A rotating radar was used for localization and mapping of the surroundings by Vivet et al. [216]. Zhu et al. [217] proposed a 60 GHz imaging algorithm that can detect a nearby object and its location, orientation, curvature, and surface boundaries. Iyer et al. [218] experimented with 77-81 GHz radar for obstacle detection. Guo et al. [219] used range-angle mapping with a single radar pulse to create a map, while Feger et al. [220] used radar integrated with a MIMO array to determine an object's location. The frequency-modulated continuous wave (FMCW) radar sensor is another popular radar sensor for obstacle imaging [221][222][223][224].
The rapid growth of autonomous car manufacturing is increasing miniature radar usage [225,226]. Autonomous UAV usage is also growing, and several studies have used radar for obstacle detection [47,[227][228][229]. The benefit of a radar detection system is that it can work under various hazardous conditions, such as rain, dust, sunlight, etc. [218].
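The basic FMCW ranging relation mentioned above can be illustrated directly: the beat frequency between the transmitted and received chirps is proportional to the target range. The chirp parameters below are assumed values, not those of any cited sensor:

```python
# FMCW range from beat frequency: R = c * f_b * T / (2 * B), where f_b is
# the beat frequency, T the sweep time, and B the sweep bandwidth.

C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
    """Target range implied by the FMCW beat frequency."""
    return C * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# Assumed chirp: 1 GHz sweep over 1 ms; a 200 kHz beat implies a 30 m target.
r = fmcw_range(200e3, 1e9, 1e-3)
```

In a real sensor the beat frequency is extracted from the mixed signal with an FFT, and range resolution is set by the sweep bandwidth.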

Laser Ranging
Laser imaging uses single or multiple laser sensors to generate an image or map of the surroundings. The most used laser-imaging technology is light detection and ranging, or lidar [230]. Lidar works like radar, except that it uses laser light for ranging. A single-point laser provides one-point distance measurements [231]. Sweeping a single laser, or combining multiple lasers, into a line or 2D beam for area or obstacle imaging creates an array of point clouds [232,233]. Using a four-layer laser scanner, Yu and Zhang [234] proposed an obstacle detection algorithm that can be used on autonomous land vehicles. Lidar technology scans continuously with laser point clouds to create a 3D map [235,236]. Demantké et al. [237] presented a multi-scaling method to compute geometric structures on lidar point clouds to retrieve the optimal neighborhood size for each point. Li et al. [238] designed a bounding box encoding, which uses a fully convolutional network with lidar to detect vehicles. Kim et al. [239] used 2D lidar scanning on an agricultural helicopter to scan obstacles. Peng et al. [240] also used 2D lidar scanning to detect obstacles, which can effectively filter noise from raw laser data, on a ground robot. Zheng et al. [241] used clustering based on relative distance and density (CBRDD) and a point-cloud correction method to detect surrounding obstacles from airborne UAVs. Basically, the 2D lidar scanning method is used for avoidance operations [242][243][244]. Usage of 3D lidar scanning is frequent in airborne geographical monitoring, such as agricultural plant monitoring, forest canopy monitoring, urban area monitoring, etc. [230,[245][246][247][248][249]. Imaging with lidar is very precise, but it is relatively costly. Laser-ranging systems use highly intense light, which can be disrupted by fog, rain, or dust. However, thanks to advances in precision engineering, laser imaging is used extensively.
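The conversion of a 2D lidar scan into Cartesian points, followed by grouping consecutive points into obstacle clusters, can be sketched as follows. The simple gap-based grouping is only loosely in the spirit of clustering approaches such as CBRDD [241], not that algorithm itself, and the scan values are assumed:

```python
import math

def scan_to_points(ranges, angle_start_deg, angle_step_deg):
    """Convert a 2D lidar scan (one range per beam) into Cartesian points."""
    pts = []
    for i, r in enumerate(ranges):
        a = math.radians(angle_start_deg + i * angle_step_deg)
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

def cluster_points(points, gap=0.5):
    """Group consecutive scan points into obstacle clusters, starting a new
    cluster whenever the jump between neighbours exceeds `gap` metres.
    Assumes a non-empty, angle-ordered scan."""
    clusters, current = [], [points[0]]
    for p, q in zip(points, points[1:]):
        if math.dist(p, q) > gap:
            clusters.append(current)
            current = []
        current.append(q)
    clusters.append(current)
    return clusters

# Assumed scan: three beams hit a near object (~2 m), two a far one (~6 m).
pts = scan_to_points([2.0, 2.0, 2.0, 6.0, 6.0], 0.0, 2.0)
obstacles = cluster_points(pts, gap=0.5)
```

The range jump between the third and fourth beams splits the scan into two obstacle clusters, which is the behavior an avoidance layer then acts on.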

Computer Vision
Computer vision is a popular way to detect obstacles, and many studies have been carried out in different domains [250][251][252][253]. Computer vision, or image processing, methods use different types of cameras for obstacle detection and mapping, such as monocular vision, stereo vision, binocular vision, infrared cameras, etc. [254][255][256][257]. Computer vision performs obstacle detection and recognition and can measure distance using different methods. Using vision as the only exteroceptive sensor for simultaneous localization and mapping (SLAM) is known as visual SLAM [258]. The first use of computer vision was for navigation based on a binocular stereo configuration [259,260]. However, because of the expense of binocular or multi-camera systems, monocular vision became more popular [261]. Vision-based obstacle detection is often used in complex environments, where obtaining the surrounding data is complicated for active sensors. Using computer vision for obstacle detection needs higher computational facilities, but nowadays the rapid development of microcomputers is filling this gap. Various methods are used for computer-vision obstacle detection, and some essential techniques are discussed in the subsequent subsections.

Target Based
The target-based obstacle detection method using computer vision uses known obstacle features to find obstacles in a known environmental situation. Target-based obstacle detection using computer vision on various platforms has a long history of research [262,263]. The use of target-based methods on UAVs is also rising. The classical morphological filtering method has been used to detect and avoid obstacles from UAVs by Carnie et al. [264], and the same method has been employed in finding landmine-like objects using UAVs by Rodriguez et al. [265]. Aoude, Luders, Levine, and How [165] used an Ecological Recognizer architecture, which uses a pattern matcher trained offline to make a path estimation. A color-based, robust tracking method, in which tracking is performed through particle filtering, was developed by Teuliere et al. [266]. Mori and Scherer [161] designed a scale expansion detector to detect obstacles with a monocular camera and used SURF (Speeded-Up Robust Features) to match obstacle details against given data. Targeting forward motion, Barry, Florence, and Tedrake [166] proposed an integration method using push-broom stereo perception and control. Wang and Li [124] proposed a local object-based subtraction method to obtain an object's outline.
One advantage of target-based obstacle detection is that it usually uses a single camera for detection; another is that it is faster than other methods because of its low computational load. However, even though it can correctly detect the position of the obstacle in the image, it cannot measure the actual distance. Thus, it requires additional distance sensor fusion to solve this problem [267,268].
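The principle of target-based detection can be illustrated with the simplest possible matcher: sliding a known obstacle template over an intensity grid and picking the position with the lowest sum of absolute differences (SAD). Real systems use richer features such as SURF [161]; the tiny image below is an assumed toy example:

```python
# Toy target-based detection by exhaustive template matching (SAD).

def sad(patch, template):
    """Sum of absolute differences between two equal-size 2D patches."""
    return sum(abs(a - b) for row_p, row_t in zip(patch, template)
                          for a, b in zip(row_p, row_t))

def find_target(image, template):
    """Return (row, col) of the best template match in a 2D intensity grid."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = sad(patch, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

image = [
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 8, 9, 0],
    [0, 0, 0, 0],
]
template = [[9, 8], [8, 9]]  # known obstacle appearance (assumed)
pos = find_target(image, template)
```

The matcher locates the obstacle in image coordinates, but, as noted above, it says nothing about metric distance without an additional range sensor.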

Optical Flow Based
The optical flow-based method uses camera images frame by frame, monitoring pixel-level movement between frames to find the motion and temporal variation in each image's grayscale version. Braillon et al. [269] used the optical flow method with two-frame pixel matching to detect an obstacle in real time. Using optical flow information, Souhila and Karim [167] developed an algorithm that can locate any obstacle by detecting changes in the data. Naito et al. [270] developed an algorithm to find obstacle edges and their changes from the optical flow images. Gharani and Karimi [271] proposed an algorithm that combines optical flow with point-tracking algorithms to detect stationary and moving obstacles and indicate a safe path. Agrawal et al. [272] used the optical flow images from two sides to generate the turn rate of a UAV through an urban canyon. The kernelized correlation filter (KCF) framework, proposed by Bharati et al. [273], uses an adaptive obstacle detection and tracking approach. Capito et al. [274] used a sequence of images with sparse optical flow to generate an artificial potential field, or visual potential field. Even though this method can be applied to any unknown environment, it cannot identify an obstacle's specific characteristics. In addition, it performs poorly on stationary and relatively slow-moving obstacles.
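The frame-to-frame idea behind optical flow can be sketched with simple block matching: estimate how a small patch moved between two grayscale frames by searching for the shift with the lowest sum of absolute differences. Published optical-flow methods are far more refined; the frames below are assumed toy data:

```python
# Toy block-matching flow: find the (dr, dc) shift of a patch between frames.

def block_at(frame, r, c, size):
    """Extract a size x size patch with its top-left corner at (r, c)."""
    return [row[c:c + size] for row in frame[r:r + size]]

def block_flow(prev, curr, r, c, size, search=2):
    """Return the (dr, dc) shift that best matches prev's patch in curr,
    searching a +/- `search` pixel window."""
    ref = block_at(prev, r, c, size)
    best, best_shift = None, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr <= len(curr) - size and 0 <= cc <= len(curr[0]) - size:
                cand = block_at(curr, rr, cc, size)
                score = sum(abs(a - b) for ra, rb in zip(ref, cand)
                                        for a, b in zip(ra, rb))
                if best is None or score < best:
                    best, best_shift = score, (dr, dc)
    return best_shift

prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
prev[1][1] = prev[1][2] = prev[2][1] = prev[2][2] = 9  # bright 2x2 blob
curr[3][3] = curr[3][4] = curr[4][3] = curr[4][4] = 9  # blob moved by (+2, +2)
flow = block_flow(prev, curr, 1, 1, 2)
```

Repeating this for many patches yields a flow field; expanding flow vectors then signal an approaching obstacle, which is what the cited UAV methods exploit.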

Stereo-Vision Based
Stereo-vision, also known as binocular stereo-vision, uses multiple camera feeds from different angles to generate depth information and detect obstacles [275]. Because it provides more accurate data than other vision-based detection systems, several research studies have been conducted on stereo-vision-based object tracking. Using a simplified stereo-vision algorithm, Bertozzi et al. [276] identified vehicles and estimated their distance. Nedevschi et al. [277] presented a stereo-vision-based obstacle detection method that reconstructs and matches the 3D points of object edges. Moore, Thurrowgood, Bland, Soccol, and Srinivasan [168] used two rigidly mounted cameras in a coaxial stereo configuration to capture stereo images of the environment and process them for navigation. Gao, Ai, Wang, Rarity, and Dahnoun [169] used a 3D camera to produce depth information and converted it into the UV-disparity domain, which presents obstacles and ground surfaces as lines. Kramm and Bensrhair [170] proposed an algorithm using stereo-vision and data clustering to localize obstacles. Iacono and Sgorbissa [171] used an RGB-D camera to generate the 3D surface of the surroundings, generating a radial function for every obstacle and then creating a safe UAV path. Ma et al. [278] proposed a novel insulator detection approach based on RGB-D saliency detection and structural feature searching for aerial images captured by a UAV power transmission line inspection system.
Stereo-vision works like the human eyes, which capture views of an object and estimate its distance. However, a binocular stereo-vision detection system must process a massive amount of image data, and real-time processing requires a powerful system [54,275,279,280].
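The depth estimation underlying binocular stereo-vision follows the classical disparity relation Z = fB/d, where f is the focal length in pixels, B the camera baseline, and d the pixel disparity between the matched left and right views. The sketch below uses illustrative calibration values, not figures from any of the cited systems.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from binocular disparity: Z = f * B / d.

    focal_px: focal length in pixels, baseline_m: camera separation in
    metres, disparity_px: horizontal pixel offset of the matched feature.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# Illustrative rig: 700 px focal length, 12 cm baseline, 28 px disparity.
print(stereo_depth(700, 0.12, 28))  # 3.0 (metres)
```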

Fusion
From the review of different obstacle detection methods, it can be observed that each detection method has limitations. Optical sensing cannot generate accurate ranging data, while active sensor-based methods can measure distances more accurately. In view of this, researchers use fusion methods depending on the application, and several fusion combinations can be found in the literature [126,176,267,281–287].

Obstacle Avoidance Techniques
As presented in the previous sections, the flying complexity of a sprayer UAV is quite different from that of a general UAV. Besides that, the obstacles on farmland are static and scattered, as described in Section 2.3. This is why the obstacle avoidance techniques for sprayer UAVs need to differ from others. Although there are some research papers on obstacle avoidance and path planning using satellite images (Section 2.1), to the best of our knowledge, no research work specifically on local obstacle avoidance for sprayer UAVs has been conducted.
Using local satellite images for obstacle avoidance and path planning on farmlands may be hazardous, because satellite data is only updated on a fixed schedule. An image may have been captured before an obstacle appeared, which can cause accidents; conversely, an imaged obstacle may have since disappeared, causing unnecessary path deviations. Sometimes a small or narrow obstacle cannot be seen in the satellite image at all. Thus, avoiding obstacles locally is very important for sprayer UAVs. Several works on local obstacle avoidance exist for mobile robots as well as UAVs. The subsequent subsections present some of these obstacle avoidance approaches from different studies.

Bug Algorithm
The bug method is the simplest of all obstacle avoidance methods. Lumelsky and Stepanov [288] proposed this method, inspired by the movement of bugs. They devised two versions of the bug algorithm, Bug1 and Bug2, which are shown in Figure 9. Here, the robot moves from start "s" to target "t". In Bug1, the robot circumnavigates the obstacle completely and then departs from the boundary point closest to the target. In Bug2, the robot draws a line from start to target; if it finds an obstacle, it follows the obstacle boundary until it re-encounters the line, and then continues along it. The Bug1 algorithm travels a long path to reach the goal point, while Bug2 uses a shorter route. To reach the goal by an even shorter path, several improvements to the bug algorithm have been made, such as tangent bug [289], I-Bug [290], improved bug [291], splitting bug [292], etc. [293]. Bug algorithms are not very reliable in more complex environments, and in some tricky conditions one version works better than another. In general, however, bug algorithms work well for single-obstacle avoidance [294].
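The key test in Bug2, deciding when the robot may stop following the obstacle boundary and resume moving along the start-target line (the m-line), can be sketched as below. This is a minimal 2D illustration; the coordinates and tolerance are illustrative.

```python
import math

def bug2_leave_condition(p, hit_point, start, goal, tol=1e-6):
    """Bug2 boundary-leaving test.

    The robot may leave the obstacle boundary when it is back on the
    m-line (the straight segment from start to goal) AND strictly closer
    to the goal than the point where it first hit the obstacle.
    """
    (px, py), (sx, sy), (gx, gy) = p, start, goal
    # On the m-line: zero cross product and projection within the segment.
    cross = (gx - sx) * (py - sy) - (gy - sy) * (px - sx)
    if abs(cross) > tol:
        return False
    dot = (px - sx) * (gx - sx) + (py - sy) * (gy - sy)
    if not (0 <= dot <= (gx - sx) ** 2 + (gy - sy) ** 2):
        return False
    return math.dist(p, goal) < math.dist(hit_point, goal)

# Hit the obstacle at (1, 1); the robot may leave the boundary at (3, 3)
# because that point lies on the line to the goal and is closer to it.
print(bug2_leave_condition((3, 3), (1, 1), (0, 0), (4, 4)))  # True
print(bug2_leave_condition((3, 2), (1, 1), (0, 0), (4, 4)))  # False: off the m-line
```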

Artificial Potential Field Algorithm
The artificial potential field (APF) algorithm, a unique real-time obstacle avoidance approach for mobile robots, was proposed by Khatib [295]. This algorithm assigns an artificial potential to every point of the known area and moves the vehicle toward lower-potential regions, with the target point set as the global minimum. The vehicle is thus always attracted toward the lowest-potential area and eventually reaches the target point. In Figure 10, a robot avoids an obstacle using the potential field. Cetin et al. [296] used the APF algorithm to avoid existing and newly appearing obstacles, and also applied APF to multiple connected vehicles to generate suitable paths.
The APF algorithm has some limitations, such as the local-minimum and dead-point problems. Chen et al. [297] recast the APF as a constrained optimization to solve the traditional dead-point problem. Since the traditional algorithm only works for a single-vehicle trajectory, Sun et al. [298] proposed an optimized APF algorithm for multi-UAV operation in a 3D environment. Fan et al. [299] improved the APF algorithm by solving some of its inherent problems, such as local minima and the inaccessibility of the target.
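A minimal gradient-descent sketch of the classical APF idea is given below: an attractive force pulls toward the goal, and any obstacle within an influence radius adds a repulsive force. The gains, influence radius, step size, and obstacle layout are illustrative choices, not tuned values from the cited works.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One steepest-descent step on an artificial potential field (2D)."""
    # Attractive force: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force from each obstacle within influence radius d0.
    for ox, oy in obstacles:
        d = math.hypot(pos[0] - ox, pos[1] - oy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            fx += mag * (pos[0] - ox) / d
            fy += mag * (pos[1] - oy) / d
    # Move a fixed step along the net force direction.
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos = (0.0, 0.0)
for _ in range(1000):
    pos = apf_step(pos, goal=(10.0, 0.0), obstacles=[(5.0, 1.0)])
print(pos)  # ends near the goal after skirting the obstacle
```

Note that this naive descent can stall in a local minimum when goal, obstacle, and vehicle are exactly collinear, which is precisely the problem addressed in [297,299].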

Collision Cone Method
The collision cone concept was first proposed by Chakravarthy and Ghose [45] for a 2D movement scenario. The authors modeled any obstacle as a circular area, and the distance from the UAV's position to the obstacle area is evaluated within a collision cone built from the UAV's velocity vector. This concept works for any irregular-shaped unknown obstacle and prevents collision between two irregular-shaped objects or vehicles. Following the same strategy, Ajith Kumar and Ghose [47] calculated a radar detection cone to find a possible collision-free path. To resolve collisions between two aircraft in 3D space, Goss et al. [300] used the collision cone method with mixed geometry. Watanabe et al. [301] used a 2D passive vision sensor and the collision cone approach to examine obstacles from a critical distance. In Figure 11, the vehicle's position is X_v, the obstacle position X_obs is inside the safe boundary, and X_ap is the vehicle's aiming point. Later on, Chakravarthy and Ghose [302] extended the collision cone approach to detect moving obstacles in 3D space. Sunkara et al. [303] used this method to avoid shape-shifting targets such as shape-shifting snake robots, swarms of vehicles, and oil spills. They first developed the collision cone between a point object and a deformable object and subsequently extended it to the case of an engagement between a circular object and a deforming object. A real-time collision avoidance algorithm addressing multiple obstacles, called Tangent Plane Coordinate, was proposed by Park and Baek [51]. They used a stereo-vision sensor with a limited field of view to approximate the unknown obstacle. The collision cone was calculated from straight lines, obtained via an affine transformation, that are tangent to the ellipsoid and pass through the position of the quadrotor.
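For a static circular obstacle, the collision cone test reduces to checking whether the velocity vector lies inside the cone of directions tangent to the safety circle; a 2D sketch follows. The original formulation [45] uses the relative velocity, which also covers moving obstacles; the positions, radius, and velocities here are illustrative.

```python
import math

def in_collision_cone(pos, vel, obs_center, obs_radius):
    """True when the 2D velocity points inside the collision cone of a
    static circular obstacle, i.e. the vehicle is on a collision course
    if neither party manoeuvres."""
    rx, ry = obs_center[0] - pos[0], obs_center[1] - pos[1]
    dist = math.hypot(rx, ry)
    if dist <= obs_radius:
        return True                      # already inside the safe boundary
    # Half-angle of the cone of directions that intersect the circle.
    half_angle = math.asin(obs_radius / dist)
    speed = math.hypot(*vel)
    if speed == 0:
        return False                     # hovering: no collision course
    cos_between = (rx * vel[0] + ry * vel[1]) / (dist * speed)
    cos_between = max(-1.0, min(1.0, cos_between))
    return math.acos(cos_between) < half_angle

print(in_collision_cone((0, 0), (1, 0), (10, 0), 2))  # True: heading straight at it
print(in_collision_cone((0, 0), (1, 1), (10, 0), 2))  # False: heading well clear
```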

Fuzzy Logic Algorithm
Fuzzy logic was proposed by Zadeh [304]; fuzzy-logic-based avoidance systems use a fuzzy controller. To use fuzzy logic in any system, the operator needs to assign a set of data or knowledge to create the fuzzy sets that will be used to avoid obstacles or navigate the mobile robot. This process of assigning fuzzy input sets is called fuzzification. A set value usually lies anywhere between two traditional logic values, such as (0, 1), (Low, High), or (Cold, Hot). For this reason, vehicles that use fuzzy logic for navigation and avoidance typically employ multiple sensors or sensor fusion. Lian [305] used a fuzzy controller to control an obstacle-avoiding mobile robot. In Figure 12, the usage procedure is given. This classic method is used in many vehicle navigation systems [306]. Reignier [307] used fuzzy logic techniques to build a reactive navigation system and avoid obstacles. Several research works created a fuzzy logic controller using fuzzy sets to avoid obstacles in real time. Dong et al. [308] used a fuzzy-based approach to track paths and avoid obstacles. Jin [309] proposed a navigation algorithm using a fuzzy controller and sensor fusion (camera and sonar) with a mobile robot to avoid obstacles and generate trajectories. Using a fuzzy logic system and a three-way ultrasonic sensor, Li and Choi [310] proposed an avoidance algorithm for a mobile robot. Pandey et al. [311] designed a fuzzy logic controller to improve the vehicle's movement according to the obstacle's position.
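The fuzzification-rules-defuzzification pipeline can be illustrated with a toy single-input controller mapping obstacle distance to a speed command. The membership ranges and output speeds below are invented for illustration and are not taken from any of the cited controllers.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Toy fuzzy controller: obstacle distance (m) -> speed command (m/s).

    Fuzzification: 'near', 'medium', 'far' sets over roughly 0-10 m.
    Rules: near -> slow (0.5), medium -> cruise (2.0), far -> fast (4.0).
    Defuzzification: weighted average of the rule outputs.
    """
    near = tri(distance_m, -1, 0, 4)
    medium = tri(distance_m, 2, 5, 8)
    far = tri(distance_m, 6, 10, 15)
    weights = [(near, 0.5), (medium, 2.0), (far, 4.0)]
    total = sum(w for w, _ in weights)
    if total == 0:
        return 0.5  # default to slow outside all sets
    return sum(w * s for w, s in weights) / total

print(fuzzy_speed(1.0))  # firmly 'near' -> slow
print(fuzzy_speed(9.0))  # firmly 'far' -> fast
```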

Vector Field Histogram Method
The vector field histogram (VFH) is a real-time obstacle avoidance method for mobile robots developed by Borenstein and Koren [41]. This method performs obstacle avoidance in three steps. In the first step, the robot generates a two-dimensional sensory histogram around its body, or over a limited angle, and updates the histogram data at every stage. In the second step, the two-dimensional histogram data are reduced to a one-dimensional polar histogram. Finally, the robot selects the sector with the lowest polar obstacle density and moves in the calculated direction. In Figure 13, the 2D and 1D histograms are presented. The VFH algorithm was improved to VFH+ [312] and VFH* [313], respectively. VFH+ reduces the parameter tuning of the original VFH, and the VFH* method verifies that a particular candidate direction guides the robot around an obstacle. Lidar is a suitable sensor for applying VFH methods, since it can take multiple high-resolution ranging measurements in two dimensions. For example, Sary et al. [314] used VFH+ with lidar to avoid obstacles with a hexacopter, and Bolbhat et al. [315] used the original VFH with lidar for obstacle avoidance of automated guided vehicles.
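The three VFH steps can be sketched as follows. This minimal version builds the 1D polar histogram directly from (bearing, distance) readings and picks the emptiest sector; the full method additionally thresholds, smooths, and weighs candidate directions against the target heading. The sector count and scan data are illustrative.

```python
def vfh_steer(readings, n_sectors=36, d_max=10.0):
    """Pick a heading (degrees) from planar range readings via a 1D polar histogram.

    readings: list of (bearing_deg, distance_m) pairs, e.g. from a lidar.
    Each reading adds obstacle density (d_max - distance) to its sector;
    the center of the lowest-density sector is returned.
    """
    width = 360.0 / n_sectors
    hist = [0.0] * n_sectors
    for bearing, dist in readings:
        k = int(bearing % 360 // width)
        hist[k] += max(0.0, d_max - dist)
    best = min(range(n_sectors), key=hist.__getitem__)
    return best * width + width / 2

# Dense obstacle dead ahead (around 0 degrees), mostly clear elsewhere.
scan = [(-5, 1.0), (0, 0.8), (5, 1.2), (90, 9.5)]
print(vfh_steer(scan))  # heads for an empty sector away from the obstacle
```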

Neural Network
Neural network algorithms are inspired by the human brain. A neural network takes in data, trains itself to recognize the patterns in the data, and then predicts the outputs for new, similar data. The computational model repeats training until the best result is obtained. A dynamic neural network is capable of automatically adjusting its structure according to the complexity of the vehicle's environment, learning the mapping between the vehicle's state and its obstacle avoidance decision in real time, and efficiently decreasing the vehicle's computational load [316].
Glasius et al. [317] designed a Hopfield-type neural network with nonlinear analog neurons for path planning and obstacle avoidance. Using a reinforcement learning neural network, an obstacle avoidance approach was proposed by Huang et al. [318]. Yadav et al. [319] designed a controller to find the obstacle-free shortest trajectory in 3D space for a UAV, using a vision-based Grossberg Neural Network. Later, using a modified Grossberg neural network, Wang, Yadav, and Balakrishnan [50] proposed an algorithm to avoid dynamic obstacles in 3D space. Chi and Lee [320] proposed a neural network control system to guide mobile robots around arbitrary obstacles in a maze. Kim and Chwa [321] used a fuzzy neural network to avoid obstacles with a wheeled mobile robot, with fuzzy sets as members of the neural network layer. Back et al. [322] proposed a vision-based trail-following UAV that avoids obstacles on the route using a Convolutional Neural Network (CNN). Dai et al. [323] also used a CNN to learn obstacle avoidance schemes in an unknown environment for a quadrotor UAV. Using neural network algorithms for obstacle avoidance requires a lot of training data, but they are suitable for real-time obstacle avoidance. Example training data for a complex environment are shown in Figure 14, and the resulting performance, which exceeds that of several other obstacle avoidance methods, is shown in Figure 15.
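The forward pass that maps a sensed state to an avoidance decision can be illustrated with a tiny two-layer network. Here the weights are set by hand purely for illustration (they simply compare left and right clearance), whereas the cited works learn such weights from large training datasets.

```python
import math

def forward(x, w1, b1, w2, b2):
    """Forward pass of a tiny two-layer network: input -> tanh hidden -> scalar output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Inputs: (left_range, right_range) in metres. Output > 0 means "steer left".
# Hand-set illustrative weights; real systems learn these from data.
w1 = [[1.0, -1.0], [-1.0, 1.0]]
b1 = [0.0, 0.0]
w2 = [1.0, -1.0]
b2 = 0.0

print(forward([8.0, 2.0], w1, b1, w2, b2) > 0)  # True: more clearance on the left
print(forward([2.0, 8.0], w1, b1, w2, b2) > 0)  # False: more clearance on the right
```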

Obstacle Detection and Collision Avoidance Challenges
If precise spraying is to be achieved, autonomous sprayer UAVs must be capable of coherently deciding which scenarios involve hazards and of avoiding obstacles, because wrong actions are not only ineffective but harmful to other farming tools and farm workers. This section highlights the major open challenges of obstacle detection and collision avoidance algorithms.

•
A major obstacle detection challenge arises from severe weather and environments with changing illumination conditions. Wind and fog can obscure the detection sensors or cameras, rendering the processed data inadequate for obstacle avoidance. Even though this challenge can be addressed with radar, which has no problem detecting in fog, rain, or heavy snow, a major drawback of radar is its limited lateral vision: it covers only a relatively small angular section of about 15 degrees. To enhance lateral vision, multiple sensors have to be employed, which complicates the system. Thus, to obtain an accurate reconstruction of the environment in such unfavorable conditions, robust algorithms have to be developed to effectively detect and classify obstacles.

•
Obstacle avoidance techniques such as neural networks and fuzzy logic require the extraction of hierarchical abstractions from preprocessed data throughout the training or learning stages, and their ability to generalize relies on the availability of a large dataset. The large computational cost of these methods is a major drawback that has to be highlighted.

•
Realistically, most obstacle detection algorithms have to be trained offline through simulation before they are integrated into the sprayer UAV system. The huge gap between real and virtually simulated environments limits the applicability of offline simulation policies to the real world. The development of a realistic virtual dataset is still an open challenge.

•
The accuracy of temporal and spatial alignment among the different sensors used in the sprayer UAV system also impacts the quality of the collected data.

Other Challenges
In addition to the abovementioned obstacle detection and collision avoidance challenges, the following open challenges exist.

•
For agricultural sprayer UAVs, spray deposition and coverage are a primary concern, and these parameters are directly related to the drone weight and payload. Moreover, there is always a trade-off between payload and cost, and reliability should be maintained when selecting the type of agricultural sprayer UAV. In most instances, the selection is between single-rotor and multi-rotor designs. Quadrotors, despite several spraying limitations, are currently the preferred choice for agricultural sprayer UAVs.

•
Taking into consideration the current stage of sprayer UAV technology, the high cost of the intelligent sensors and of the UAV system itself is a major issue. Improvements in this area will enable farmers to benefit more from the use of sprayer UAVs for remote sensing in precision spraying.

•
Even though the use of UAVs for agricultural spraying is increasing, several limitations prevent wider usage. Among these is the absence of a standardized workflow, which leads to the use of ad-hoc procedures for deploying agricultural sprayer UAVs, a fact that discourages stakeholders.

•
As agricultural sprayer UAVs require data-intensive algorithms for processing the acquired images, expertise in the field of autonomous flight is usually needed. This suggests that the average farmer will require training or may be compelled to hire experts to assist in the image processing, which may be costly. This may deter the adoption of agricultural sprayer UAVs by farmers with less technical expertise.

•
Most agricultural sprayer UAVs have a short flight time, usually from 10 min to barely half an hour. Sprayer UAVs that offer a longer flight time are relatively expensive. Moreover, the effective usage of a sprayer UAV is highly dependent on climatic conditions; for instance, on windy or rainy days, the flight operation has to be postponed.

Avoidance Technique Comparison
Proper obstacle avoidance techniques for sprayer UAVs on farmland during a planned mission were discussed in the previous sections on the basis of several key parameters. Based on those studies, we summarized and compared the obstacle avoidance techniques for sprayer UAVs, as shown in Table 3. The multisensory system mostly uses low-cost sensors, which may reduce the UAV's retail price and increase availability for low-income farmers, although it still has issues with heading changes.

Conclusions
This article reviewed the current advances in obstacle detection and collision avoidance scenarios concerning sprayer UAVs. In doing so, the most relevant obstacle detection and collision avoidance techniques were reviewed and discussed, including their application to agricultural sprayer UAVs. In addition, the recent obstacle detection algorithms used in agricultural UAVs and the structural challenges of sprayer UAVs were described. After analyzing the main issues of autonomous sprayer UAVs on farmlands, a thorough survey of recent articles on obstacle detection and collision avoidance techniques was presented. Besides, various constraints of agricultural sprayer UAVs were detailed together with their operational pattern. The detection sensors and control architectures for collision avoidance were thoroughly highlighted to pave the way for future researchers to design their own agricultural sprayer UAV systems. Specific to the physical structure of the sprayer UAV, the liquid load and sloshing, and the most widely used detection sensors were described. For autonomous navigation, obstacle detection is an essential component. A major obstacle detection challenge arises from severe weather conditions and environments with changing illumination conditions. As agricultural sprayer UAVs require data-intensive algorithms for processing the acquired images, expertise in the field of autonomous flight is usually needed. The present study provides a comprehensive review of obstacle detection methods under UAV spraying conditions and concludes that UAV sprayers still face obstacle detection challenges due to their dynamic operating and loading conditions. Moreover, and most importantly, the relevant obstacle detection and collision avoidance algorithms were also presented, wherein a comparative analysis of the obstacles on farmland, including obstacle detection technologies, was tabulated.
The Bug2 algorithm is suitable for rotorcraft UAVs. It can ensure spraying coverage because it follows the exact border of the obstacle, and modifying the algorithm by fixing the heading direction may reduce the mission duration. The other algorithms also have their merits under various working conditions. Furthermore, the most relevant open challenges concerning agricultural sprayer UAVs were highlighted. Among these challenges are spray deposition and coverage, parameters directly related to the drone weight and payload. Another highlighted challenge is the high cost of the intelligent sensors and of the UAV system itself. The short UAV flight times and the high cost of operation, which are major concerns for farmers, were also discussed. Finally, the gap between real and virtually simulated environments, which limits the applicability of offline simulation policies to the real world, was stated. In all, this review work defines a clear roadmap for future research.