Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review

Abstract: Disease diagnosis is one of the major tasks for increasing food production in agriculture. Although precision agriculture (PA) takes less time and provides more precise application of agricultural activities, the detection of disease using an Unmanned Aerial System (UAS) remains a challenging task. Several Unmanned Aerial Vehicles (UAVs) and sensors have been used for this purpose. The UAV platforms and their peripherals have their own limitations in accurately diagnosing plant diseases. Several types of image processing software are available for vignetting correction and orthorectification. The training and validation of datasets are important steps in data analysis. Currently, different algorithms and architectures of machine learning models are used to classify and detect plant diseases; these models support image segmentation and feature extraction to interpret results. Researchers also fit the values of vegetative indices, such as the Normalized Difference Vegetation Index (NDVI), the Crop Water Stress Index (CWSI), etc., acquired from different multispectral and hyperspectral sensors, into statistical models to deliver results. There are still various gaps in the automatic detection of plant diseases, as imaging sensors are limited by their spectral bandwidth, resolution, image background noise, etc. The future of crop health monitoring using UAVs should include gimbals carrying multiple sensors, large datasets for training and validation, the development of site-specific irradiance systems, and so on. This review briefly highlights the advantages of automatic detection of plant diseases for growers.


Introduction
Simply put, UAVs are aerial vehicles that are remotely operated, with no pilot on board. They are considered one of the important innovations of present-day precision agriculture [1][2][3][4][5][6][7][8]. Precision agriculture (PA) is a method of transforming agriculture by reducing time and labor and increasing production and management efficiency [9]. With developments in technology and computational capability, agricultural patterns have changed, for example through the use of digital planters, harvesters, and sprayers. Agriculture has transformed over time from manual to mechanized labor through the adoption of technological change. Previously, plant diseases in agricultural fields were monitored visually by people experienced in scouting and monitoring plant diseases. This type of observation is subjective and prone to bias, optical illusion, and error [1]. This generated the need for external image-based tools that can replace unreliable human observations and allow extended coverage in a limited amount of time.
The potential of UAVs for conducting detailed surveys in precision agriculture has been demonstrated for a range of applications such as crop monitoring [10,11], field mapping [12], biomass estimation [13,14], weed management [15,16], plant population counting [17][18][19], and spraying [20]. A large amount of data and information is collected by UAVs to improve agricultural practices [21]. Different kinds of data recording instruments, cameras, and sensor installation equipment have been developed for agricultural use. Nowadays, as discussed earlier, many agricultural activities are carried out by UAVs, and among them, UAVs are increasingly used for disease monitoring and identification. They work together with different components such as cameras, sensors, motors, rotors, and controllers. One of the basic uses of a UAV is to capture images. The information contained in images is extracted and transformed into useful information by image processing and deep learning tools [1,26]. Electromagnetic spectra also provide useful information for decisions regarding plant physiological stress [33,36]. Comparing spectra in specific regions helps to assess the condition of plants in real time under field conditions [33]. Plant disease can be identified in a UAV-captured image by observing physiological disturbances expressed as changes in foliar reflectance in the near-infrared portion of the spectrum [36]. In addition, the disturbances that many diseases cause in the photosynthetic activity of crops are also observed as changes in reflectance in the red wavelength range.
There are various types of UAVs for different agricultural purposes. The UAV structures are referred to as platforms [4]. Primarily, there are two types of agricultural UAV platforms: fixed wing and rotary wing [25]. A fixed-wing UAV is comparatively larger and used for large-area coverage [4,25,37,38]. Rotary-wing UAVs are further divided into two types: helicopter and multirotor. Helicopters have a large propeller at the top of the aircraft and are used for aerial photography and spraying. Multirotors come in different varieties depending on the number of motors the aircraft possesses: the quadcopter (four rotors) [39][40][41], hexacopter (six rotors) [42], and octocopter (eight rotors) [43].
For plant disease monitoring and identification, both fixed-wing and rotary UAVs have been used. To monitor leaf stripe disease of grapevine, one of the diseases of the esca complex, Di Gennaro and his colleagues used a modified multirotor Mikrocopter OktoXL (HiSystems GmbH, Moormerland, Germany) [36]. Similarly, target spot and powdery mildew in soybean have been identified using a popular quadcopter, the Phantom-3 (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) [26]. Xavier and colleagues (2019) used a multirotor UAV to identify Ramularia leaf blight disease in cotton [44]. A hexacopter, the DJI Matrice 600 Pro (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China), was used to detect target spot and bacterial spot in tomato leaves [45]. In 2013, Garcia-Ruiz and colleagues used fixed-wing and multirotor UAVs to compare aerial imaging platforms for identifying citrus greening disease in Florida [33]. Gomez Selvaraj et al. [46] used a multicopter, the UAV Phantom 4 Pro (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China), to identify major diseases in bananas. An Altura X8 octocopter (Aerialtronics DV B.V., Katwijk, The Netherlands) was used to monitor fire blight in pear orchards in Belgium [47]. In a similar fashion, an octocopter (DJI S1000) (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) was used in spatio-temporal monitoring of yellow rust in wheat in China [48]. A helicopter was modified to carry a camera system, an autopilot, and sensors to obtain thermal and multispectral imagery over agricultural fields [49]. A package of sensors was built into a DJI S800 hexacopter (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China) to identify citrus greening disease [50]. Özgüven [51] used a rotary UAV, the DJI Phantom 3 Advanced (Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China), to determine Cercospora leaf spot in sugar beet. Valasek et al. [52] used a rotary UAV to distinguish leaf spot (Cercosporidium personatum) among healthy and diseased peanuts. Sugiura et al. [53] used a multirotor UAV to identify late-blight-resistant cultivars of potato.
As we can see, both fixed-wing and rotary-wing UAVs have been used for crop monitoring. The choice of platform depends on the area to be covered, the payload, and the nature of the study. For example, fixed-wing UAVs can cover a larger area than rotary-wing UAVs in the same amount of time; they also fly faster and carry comparatively larger payloads. Rotary-wing UAVs are small and maneuverable enough to access congested areas. Rotary-wing UAVs are mostly used for aerial photography and videography, whereas fixed-wing UAVs are best suited for aerial mapping. Because of their shape and size, fixed-wing UAVs cannot hover in place and need dedicated spots to launch and land. Multirotors are easier to handle and cost-effective over small areas, while larger fixed-wing UAVs, despite being harder to control, can be used effectively in large fields because of their cost-efficiency in total field coverage per flight duration [54].

Cameras and Sensors
Aerial imaging is one of the most important factors in the application of UAVs. Usually, the required image quality determines the selection of the UAV; the type of sensor and the purpose of the study also dictate the choice of platform. Thus, UAV platforms are loaded with different cameras and sensors. For example, a DJI Mavic 2 UAV can be upgraded with a Sentera single NDVI camera (Sentera, Saint Paul, MN, USA) to monitor drought stress in crapemyrtle plants (Figure 1). However, UAV platforms are limited by their payload: an increase in payload decreases the speed, stability, and flight time of the UAV [4,23]. The choice of sensors depends on the purpose of the study. Drought stress is better observed using thermal sensors in the early stages [55,56], while multispectral and hyperspectral sensors are used for long-term results. Pathogen infections in crops are better diagnosed using hyperspectral and thermal sensors in the early stages, but RGB, multispectral, and hyperspectral sensors can be used to assess the severity of infection in later stages [57]. In this subsection, we discuss the different cameras and sensors used in plant disease monitoring and observation.

RGB Camera
RGB (red-green-blue) cameras are commonly used cameras that produce images measuring the intensity of three colors, red, green, and blue, and recording a value for each color per pixel. RGB cameras are used to generate 3-dimensional (3D) models of agricultural crops [58][59][60] and to estimate crop biomass [61][62][63]. RGB cameras are also used together with NIR and multispectral cameras to improve accuracy when calculating biomass [62]. If the near-infrared filter is replaced by a red filter, the camera is called a modified RGB camera [64,65]. Commercial RGB cameras are cheap but have poor spectral resolution [65]. However, Grüner, Astor, and Wachendorf [61] calculated grassland biomass with an RGB camera using only SfM processing. RGB cameras have also been successfully used to identify plant diseases. It is worth mentioning that RGB cameras only cover the electromagnetic spectrum from 380 nm to 750 nm, and not all of these wavelengths are suitable for crop disease detection [66]. The optical properties and spectral range of the camera are important factors in plant disease detection. Mattupalli and his colleagues confirmed Phymatotrichopsis root rot in an alfalfa field using RGB imaging with a maximum likelihood classification algorithm [67]. RGB images were used to detect banana bunchy top virus and banana Xanthomonas wilt in the African landscape of the Congo [46]. Similarly, a 12-megapixel RGB camera was used to classify leaf spot (Cercospora beticola Sacc.) severity in sugar beets in Turkey [51]. Grapevines were classified as diseased or healthy in France using RGB sensors [68]. Valasek, Thomasson, Balota, and Oakes [52] classified peanut leaf spot caused by Cercosporidium personatum using RGB cameras. Potato late-blight-resistant cultivars have been screened using RGB cameras [53]. In a study conducted by Ashourloo et al. [69], wheat leaf rust (caused by Puccinia triticina) was detected using RGB digital images with varying reflectance at 605, 695, and 455 nm wavelengths. RGB images also provide information in other color spaces such as LAB (L = lightness, A and B = color-opponent dimensions), YCbCr (Y = luma component, Cb = blue-difference and Cr = red-difference chroma components), and HSV (hue, saturation, value), which are very helpful in recognizing plant diseases.
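As an illustration of such color-space conversions, the snippet below converts a single RGB pixel to HSV using Python's standard colorsys module. It is a minimal sketch, not taken from any of the reviewed studies, and the pixel values are hypothetical.

```python
import colorsys

def rgb_pixel_to_hsv(r, g, b):
    """Convert one 8-bit RGB pixel to HSV (h in [0, 1), s and v in [0, 1])."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# A saturated green pixel, typical of healthy foliage in RGB imagery
h, s, v = rgb_pixel_to_hsv(30, 180, 40)
print(round(h, 3), round(s, 3), round(v, 3))
```

In practice the same conversion is applied per pixel over a whole image (image libraries do this vectorized), after which hue or saturation thresholds can separate healthy from symptomatic tissue.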
RGB cameras are readily available and are typically less expensive. They can also be used to collect high-resolution still images. However, they can only measure three bands (red, green and blue) of the electromagnetic spectrum. This causes RGB images to be less accurate than multispectral or hyperspectral images in terms of the spectral resolution of the camera system. Images from RGB cameras do not provide sufficient information to differentiate levels of sheath blight in rice [70]. However, one of the major advantages of RGB cameras is their ability to capture high spatial resolution images in comparison to multispectral systems and, in turn, provide finer spatial details for plant disease detection and monitoring. RGB cameras should be carefully operated in order to have uniform coloring and lighting in the images. Uniform images will reflect fewer errors in differentiating healthy and diseased plants.

Multispectral Cameras
Multispectral cameras are considered among the most appropriate sensors for agricultural analytics, as they can capture images at high spatial resolution and determine reflectance in near-infrared bands [71]. Multispectral and NIR cameras support vegetative indices that rely on the near-infrared or other bands of light [72][73][74]. Multispectral cameras use different spectral bands: mostly red, blue, green, red-edge, and near-infrared. They can be divided into two groups based on bandwidth: narrowband and broadband [75]. Most aerial imaging for monitoring crop health uses multispectral cameras [4], as they are used to calculate indices such as NDVI and others involving NIR [63,71,73,76,77]. The absence of multispectral sensors on agricultural UAVs hinders the early detection of plant diseases. The evaluation of multispectral image bands captured aerially at different heights was carried out in 2019 by Xavier and colleagues to detect Ramularia leaf blight in cotton; however, the differentiation in the severity index was not significant [44]. In a study conducted by Abdulridha, Ampatzidis, Kakarla, and Roberts [45], 35 vegetative indices were used to detect target spot and bacterial spot in tomatoes under laboratory and field conditions. A fixed-wing UAV equipped with multiband sensors and capable of capturing hyperspectral images was used in 2012 in Florida to compare aerial imaging platforms for identifying citrus greening disease [33]. In that study, both a multispectral and a hyperspectral camera were installed on the UAV. The multispectral camera was a six-narrow-band imager (miniMCA6, Tetracam, Inc., Chatsworth, CA, USA) with six digital sensors with customizable 10 nm bands arranged in a 3 × 2 array. The hyperspectral camera was the AISA EAGLE Very Near Infra-red (VNIR) hyperspectral imaging sensor (Specim Ltd., Oulu, Finland).
This imaging sensor has a spectral range of 398-998 nm and a resolution of around 5 nm, with 128 spectral bands in the VNIR region [33]. A combination of RGB and multispectral images captured aerially using UAVs was used for pixel-based classification of banana diseases in the Democratic Republic of the Congo. The camera system, a Micasense RedEdge (MicaSense, Inc., Seattle, WA, USA), was capable of acquiring 16-bit raw images in five narrow bands. Su, Liu, Hu, Xu, Guo, and Chen [48] used a multispectral camera, the RedEdge, with a resolution of 1280 × 960 pixels and five narrow bands. A six-band multispectral camera (MCA-6, Tetracam, Inc., Chatsworth, CA, USA) was also used, with an image resolution of 1280 × 1024 pixels, 10-bit radiometric resolution, and an optical focal length of 8.5 mm [49]. An RGB-Depth (RGB-D) camera, employing two grayscale cameras (mvBlueFOX-MLC202bG) with light sources covered by polarizing films, together with multispectral sensors mounted on a UAV platform, was used to monitor orange orchards for citrus greening disease in Florida [50]. Al-Saddik et al. [78] used multispectral sensors to differentiate Flavescence dorée diseased and healthy grapevines in a vineyard without using common vegetative indices. Several other studies also used multispectral cameras [36,48,70,[79][80][81][82][83]. A combination of visible and infrared images was used to form a multispectral approach for the identification of vine diseases [84]. The platform combined an RGB camera and an infrared light sensor with a wavelength of 850 nm; both cameras had high-resolution 16-megapixel sensors. In that study, the accuracy varied between 70% and 90% depending on the surface area, with larger surfaces yielding higher accuracy. Similarly, a multispectral sensor, the OptRx (Ag Leader), was used at a visual range of 670 nm, red edge of 730 nm, and NIR of 775 nm to identify the esca complex and Flavescence dorée in vineyards [85].
A multispectral camera with blue (475 nm), green (560 nm), red edge (720 nm), and near-infrared (840 nm) bands was used to identify pine wilt disease caused by the pine wood nematode Bursaphelenchus xylophilus [86]; the accuracy observed in the study was 79%. Ye et al. [87] used a five-band multispectral camera with blue (465-485 nm), green (550-570 nm), red (653-673 nm), red edge (712-722 nm), and near-infrared (800-880 nm) spectral ranges to classify banana wilt disease and obtained an accuracy of 80%. Multispectral sensors show high practicability for new approaches to the automatic identification of plant diseases. They can capture images in both visible and near-infrared regions but may be limited in detecting subtle changes in biophysical and biochemical parameters.
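The band combinations described above are typically turned into vegetative indices. As a minimal illustration, the sketch below computes NDVI, (NIR − Red)/(NIR + Red), from hypothetical red and near-infrared reflectance arrays using NumPy; it is not code from any of the cited studies.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # eps guards against division by zero on dark pixels
    return (nir - red) / (nir + red + eps)

# Hypothetical per-pixel reflectances: healthy canopy reflects strongly in NIR
red_band = np.array([[0.08, 0.10], [0.30, 0.28]])   # ~660 nm reflectance
nir_band = np.array([[0.50, 0.55], [0.35, 0.33]])   # ~840 nm reflectance
print(ndvi(nir_band, red_band))
```

Healthy vegetation typically yields NDVI well above 0.5, while stressed or diseased canopy drifts toward lower values, which is why per-pixel NDVI maps are a common first diagnostic layer.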
Various studies have reported that multispectral cameras are well suited to identifying plant diseases and pests in the field. The principle behind their high accuracy is the use of multiple bands of the electromagnetic spectrum, which not only add information to the acquired images but also allow vegetative indices to be computed. Vegetative indices are one of the most important factors in the identification of crop diseases. Multispectral cameras have a few disadvantages, including their expense and the increased calibration effort required for specific tasks such as disease identification and image processing.

Hyperspectral Cameras
The major difference between multispectral and hyperspectral cameras is that hyperspectral cameras collect light in many narrow bands for every pixel in the captured image [72,88]. Although multispectral cameras can also capture light reflected by biomolecules, the differences lie in the bandwidth and placement of the bands, which help to isolate responses from different molecules. Hyperspectral cameras are particularly suited to detecting light reflected by constituents such as chlorophyll [89,90], mesophyll [88], xanthophyll [91], and carotenoids [89,90]. The major drawbacks of hyperspectral cameras are their high cost [72,92] and the huge amount of unnecessary data produced when they are not properly calibrated [88,[93][94][95].
Abdulridha and colleagues [45] used a hyperspectral imaging system (a line-scan imager), the Pika L 2.4 (Resonon Inc., Bozeman, MT, USA) with a 23 mm lens, a spectral range of 380-1020 nm, and 281 spectral channels to detect diseases in tomato leaves. Fire blight in pears was monitored using a frame-based hyperspectral camera (COSI cam, VITO NV, Boeretang, Belgium) with a spectral range of 600-900 nm [47,96]. Images were captured in rapid succession at 340 frames/s in 8-bit mode [47,97]. The overall accuracy was 52% for separating healthy and infected trees; however, the red (611 nm) and NIR (784 nm) wavelengths distinguished healthy and diseased trees with 85% accuracy [47]. Calderón et al. [98] and Sandino et al. [99] both used line-scan hyperspectral cameras in their experiments to monitor crop health. Thermal, multispectral, and hyperspectral cameras together successfully captured the crop crown temperature, structural indices, fluorescence, and health index of olive trees [100]. Similarly, in the study conducted by Sandino, Pegg, Gonzalez, and Smith [99], diseased trees were identified with 97% accuracy and healthy trees with 95% accuracy, while the global multiclass detection rate was 97%. The main reason hyperspectral cameras are used is to overcome the shortcomings of multispectral cameras: they capture fine spectral differences and can identify and discriminate target objects.
The major breakthrough that hyperspectral cameras have provided is that, unlike other cameras [101], they are able to detect plant stress with possible causative agents (pathogen/disease). Hyperspectral sensors basically measure several hundred bands of the electromagnetic spectrum to derive accurate results. They are able to measure visible spectrum (400-700 nm), near infrared (700-1000 nm), and also short-wave infrared (1000-2500 nm). As a result, they not only collect images but a huge amount of spectral data as well, which causes difficulties in extracting relevant information.
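To make the band structure concrete, the sketch below selects the band nearest a target wavelength from a hypothetical hyperspectral cube with 128 bands spanning 398-998 nm (the range reported for the sensor in [33]); the cube itself is random placeholder data, not real imagery.

```python
import numpy as np

# Hypothetical hyperspectral cube: 128 bands spanning 398-998 nm
wavelengths = np.linspace(398, 998, 128)   # nm, one entry per band
cube = np.random.rand(128, 64, 64)         # (bands, rows, cols) reflectance

def band_at(cube, wavelengths, target_nm):
    """Return (actual wavelength, image plane) of the band closest to target_nm."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return wavelengths[idx], cube[idx]

wl, red_edge = band_at(cube, wavelengths, 720)   # red-edge region
print(f"selected band at {wl:.1f} nm, shape {red_edge.shape}")
```

Extracting a handful of diagnostic planes like this is one common way to tame the data volume: downstream analysis then works on a few informative bands rather than the full cube.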

Thermal Cameras
Thermal cameras capture infrared radiation in the range of 0.75 to 1000 µm and provide the temperatures of objects in the form of a thermal image [102]. Their advantages include lower cost compared to other spectral cameras, and RGB cameras can reportedly be converted to thermal cameras with certain modifications [103]. Originally, thermal cameras were used for inspecting drought stress in crops [92,[103][104][105]. Thermal images include the temperature of surrounding objects and have low resolution compared to images captured by the other major camera types [102]. Thermal sensors are also used in detecting crop diseases and monitoring crops [80,98,106].
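Thermal readings of this kind are often summarized with the Crop Water Stress Index (CWSI) mentioned in the abstract. Below is a minimal sketch of the normalized form of the index with hypothetical canopy and reference temperatures; it is an illustration, not a reproduction of any cited workflow.

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 = well-watered, 1 = fully stressed.

    t_wet and t_dry are reference temperatures of a non-stressed and a
    non-transpiring canopy; all values must be in the same unit (e.g. deg C).
    """
    if t_dry <= t_wet:
        raise ValueError("t_dry must exceed t_wet")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# Hypothetical readings taken from a thermal image
print(cwsi(t_canopy=31.0, t_wet=27.0, t_dry=37.0))  # prints 0.4
```

Applied per pixel over a thermal map, this yields a stress map in which pathogen- or drought-affected zones stand out against well-watered canopy.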
Thermal cameras can help to identify the agents responsible for plant stress. As a pathogen infects a plant, its structure and metabolism change, which can be detected by thermal sensors [107,108]. Baranowski et al. [109] reported that fungal stress caused by Alternaria in oilseed rape was identified using thermal imaging. Similarly, early detection of red leaf blotch on almonds has also been achieved using hyperspectral and thermal imagery [110]. Other studies carried out using thermal cameras include detection of Huanglongbing disease of citrus [111], tobacco mosaic virus [112], tomato powdery mildew [110], Fusarium wilt of cucumber [113], etc. A thermal FLIR camera was used to detect disease-induced spots on banana leaves with an accuracy of 92.8% [114]. Similar technology was used by Raza, Prince, Clarkson, and Rajpoot [108] to detect tomato powdery mildew caused by Oidium neolycopersici.
This shows that the use of thermal cameras in crop health monitoring and disease identification is increasing. However, thermal imaging technology has not been fully explored, possibly because of the environmental factors that must be accounted for during image acquisition. Additionally, thermal images carry large amounts of information, which makes extracting the relevant portion challenging. The advantages of thermal imaging are low cost and early identification together with causative agents. Yang et al. [115], for example, developed a method for early detection of diseases in tea using thermal imagery.

Depth Sensors
Depth sensors are common peripherals on agricultural UAVs that add an extra feature, depth, to the RGB pixels. The depth measured by a depth sensor is defined as the distance between the sensor and a point on an object at the time of image capture [116]. The most prevalent depth-sensing technology is Light Detection and Ranging (LiDAR). The major difference between RGB-D sensors and LiDAR is that RGB-D sensors depend on light reflection intensities, whereas LiDAR uses laser pulses to calculate distance [23]. RGB-D sensors are among the least commonly used sensors for plant health monitoring but are commonly used in spraying [4], 3D modelling [117,118], and phenotyping [116].
Nowadays, depth sensors are used to increase detection accuracy. Sarkar et al. [50] used RGB-D cameras for detecting citrus greening disease. Similarly, Xia et al. [119] used depth sensors to create 3D segmentations of individual leaves. Many sensors on the market, such as LiDAR and time-of-flight (ToF) devices, provide accurate 3D information but are highly expensive and impractical for general agricultural use. Depth cameras provide both the pixel intensity and the depth in a captured image, which can be used to develop a classifier for disease identification. Moreover, depth sensors provide data on the intensity of the light reflected from stressed objects or plants [120]. Paulus et al. [121] used 3D laser technology to detect Fusarium spp. in wheat kernels. A challenge of using depth sensors is that they may not detect objects beyond a certain distance, resulting in a lower intensity count; this can be mitigated by using a configurable camera and calibrating it together with the depth sensor.
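As a simple illustration of how depth readings can feed such classifiers, the sketch below masks the pixels of a hypothetical depth map that fall within a working range; the map, thresholds, and invalid-reading convention are invented for illustration.

```python
import numpy as np

def foreground_mask(depth, near_m, far_m):
    """Boolean mask of pixels whose depth lies in [near_m, far_m].

    Depth readings of 0 are treated as invalid (out of sensor range),
    a common convention for consumer RGB-D sensors.
    """
    depth = np.asarray(depth, dtype=float)
    return (depth >= near_m) & (depth <= far_m)

# Hypothetical 2x3 depth map in metres; 0.0 marks a dropped reading
depth_map = np.array([[1.2, 0.0, 3.5],
                      [1.8, 2.0, 6.0]])
mask = foreground_mask(depth_map, near_m=1.0, far_m=4.0)
print(int(mask.sum()))  # number of in-range pixels
```

A mask like this can be intersected with the RGB image so that color features are computed only over canopy at a known distance, removing soil and background clutter before classification.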

Image Pre-Processing
Image pre-processing involves a series of steps carried out before data are extracted from the images. Its major objective is to reduce errors and prepare the images for data extraction. After high-spatial-resolution, geo-referenced aerial images are captured using UAVs, a large amount of data has to be extracted, so it is very important that the images are error-free; images may have been degraded during capture by noise, shadow, etc. Critical knowledge of plant diseases is needed to pre-process the images and choose an appropriate method to increase the accuracy of identification [122]. Typical pre-processing steps include image enhancement, segmentation, color space conversion, and filtering [123]. However, Sonka et al. [124] categorize image pre-processing into pixel brightness transformation, geometric transformation, local pre-processing, and image restoration. Pixel brightness transformation modifies the brightness of the image depending on pixel values and has two classes: brightness correction and grey-scale transformation. Geometric transformation correlates the coordinates of the input image pixels with points in the output image and determines the brightness of the transformed point in the digital raster. Local pre-processing uses the neighborhood of a pixel to generate the new brightness value in the output image. Finally, image restoration discards degradation factors, such as lens defects, wrong focus, etc., from the image.
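As an example of the pixel brightness (grey-scale) transformations just described, the sketch below applies a simple gamma transformation to a hypothetical 8-bit patch; it is a generic illustration, not a method taken from the cited works.

```python
import numpy as np

def gamma_correct(img, gamma):
    """Pixel brightness transformation: out = 255 * (in / 255) ** gamma.

    gamma < 1 brightens shadows; gamma > 1 darkens the image.
    """
    img = np.asarray(img, dtype=float) / 255.0
    return np.clip(255.0 * img ** gamma, 0, 255).astype(np.uint8)

# A hypothetical under-exposed 8-bit patch
patch = np.array([[10, 40], [90, 200]], dtype=np.uint8)
print(gamma_correct(patch, gamma=0.5))
```

Because the mapping depends only on each pixel's own value, it is a pure pixel brightness transformation in Sonka et al.'s taxonomy, as opposed to local operations that use a pixel's neighborhood.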
After image pre-processing, data are extracted from the images for data processing. The accuracy of image classification depends on the pre-processing and extraction techniques used. Studies have reported that processed images carry clearer information and are more easily interpreted than non-processed images [70]. The images are generally pre-processed using different available software to orthorectify spectral bands in reflectance. A variety of software is available to process raw images depending on the sensors and cameras used. Ghosal et al. [125] used an unsupervised explanation framework to isolate symptoms during image pre-processing. In a study conducted by Ferentinos, an unexpected result was found while detecting diseases using open-dataset images: the overall accuracy of real images was higher than that of pre-processed images [126]. Headwall SpectralView® (Headwall Photonics Inc., Bolton, MA, USA) and CSIRO|Data61 Scyven 1.3.0 (Scyllarus Inc., Canberra, Australia) software were used for isolating regions of interest and reflectance data by Sandino and colleagues [99,127,128]. The Simple Linear Iterative Clustering (SLIC) superpixels method was used by Tetila, Machado, Menezes, Da Silva Oliveira, Alvarez, Amorim, De Souza Belete, Da Silva, and Pistori [27] to segment leaves from UAV-captured images. Environment for Visualizing Images (ENVI) software (version 4.7, ITT VSI, White Plains, NY, USA) was used to reflect bands in the images from UAVs by Garcia-Ruiz, Sankaran, Maja, Lee, Rasmussen, and Ehsani [33]. PixelWrench 2 software (Tetracam Inc., Chatsworth, CA, USA) was used for correcting radiometric distortion and vignetting in identifying Ramularia leaf blight [44] and grape leaf stripe disease [36]. Precision Hawk (Precision Hawk USA Inc., Raleigh, NC, USA) provides an image pre-processing facility on its cloud server [6]. Image processing for detection of cotton root rot was carried out using Pix4D software (Pix4D SA, Lausanne, Switzerland).
The selection of image processing features depends on the purpose of the study. For example, if the image of a diseased leaf contains an unwanted object, it can simply be removed by cropping to obtain the target image; this process, image clipping, is a part of image pre-processing. Similarly, image enhancement can be performed using histogram equalization to distribute the color intensities of diseased plant images evenly. Additionally, an image that contains shadow and has low contrast can be enhanced by removing blur [129]. Image pre-processing helps to identify target regions in the images, reduces noise, and increases the reliability of optical inspection. Images captured from multispectral and hyperspectral sensors require atmospheric and radiometric calibration and correction in order to ensure consistency of the data derived from the image.
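The histogram equalization mentioned above can be sketched in a few lines of NumPy. The snippet below implements the classic cumulative-distribution mapping on a hypothetical low-contrast 8-bit patch; it is an illustrative sketch only (a constant image would need an extra guard).

```python
import numpy as np

def equalize_histogram(img):
    """Spread the grey-level histogram of an 8-bit image across [0, 255]."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first non-zero CDF value
    # Classic histogram-equalization lookup table
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# A hypothetical low-contrast 8-bit patch
patch = np.array([[52, 55], [61, 59]], dtype=np.uint8)
print(equalize_histogram(patch))
```

The grey levels 52-61, originally crowded into a narrow range, are remapped across the full 0-255 range, which is exactly the contrast stretch that helps lesions stand out against healthy tissue.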

Data Processing
Data processing of the UAV images is carried out using various tools. Most researchers use vegetation indices for data processing because they are simple to compute and easy to interpret. Currently, different Artificial Neural Networks (ANNs) are used to produce results. In many studies, researchers also take numerical values from the sensors and apply different statistical tools, such as K-means clustering, Receiver Operating Characteristic (ROC) analysis, and regression. The selection of data processing tools, however, depends upon the purpose of the study. Some of these tools are discussed below.

Image Data Processing
Data processing of UAV imagery is conducted in several ways. After vignetting correction and orthorectification of the images, some results are derived using vegetation indices (VIs) alone. However, imagery data analysis involves more than that: it also includes image segmentation and the interpretation of results in the form of images. For these types of image data processing, various tools are used, including Artificial Neural Networks (ANNs), Decision Trees, K-means clustering, k-Nearest Neighbors, Support Vector Machines (SVMs), and regression analysis. In this review, a short description of K-means clustering and regression analysis is given.

K-Means Clustering
Altas and his colleagues converted RGB images into the L*a*b color space to identify Cercospora leaf spot in sugar beet, where a* and b* are the components in which the information about disease in the leaves is stored. The colors in a*b* space were classified using K-means clustering [51]. K-means clustering is a common clustering algorithm used in various application domains, such as image segmentation [130], and divides a dataset into k groups [131]. In K-means clustering, k initial cluster centers are defined, and the algorithm then iteratively assigns each data point to its nearest center and recomputes the centers from the assigned points. For datasets in which the value of k is already known (such as unique client identifiers, e.g., customer IDs), that k value is used directly; where the k value is not known, it must be identified separately [132]. For the study to determine leaf spot in sugar beet, the k value was set to 3, since three images are obtained when the pixels in the RGB image are separated according to color [51].
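The assign-then-recompute loop described above can be sketched in a few lines of NumPy. This is plain Lloyd's algorithm on pixel color values (e.g., the a*b* components), not the exact implementation used in the sugar beet study; the `init` parameter is added here purely so that initial centers can be fixed for reproducibility.

```python
import numpy as np

def kmeans(pixels, k=3, iters=20, init=None, seed=0):
    """Lloyd's algorithm: assign each pixel to the nearest of k centers,
    then move each center to the mean of its assigned pixels."""
    rng = np.random.default_rng(seed)
    centers = (np.asarray(init, dtype=float) if init is not None
               else pixels[rng.choice(len(pixels), k, replace=False)].astype(float))
    for _ in range(iters):
        # Distance of every pixel to every center, then nearest-center labels.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

With k = 3, as in the sugar beet study, the labels partition the pixels into three color groups, each of which can be rendered as a separate image.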

Regression Analysis
Regression analysis is one of the most common methods of analyzing UAV imagery. After orthorectification of the images, the values for each band, such as NIR, red, green, blue, etc., are extracted. A regression model is then run on these extracted values to investigate the spectral characteristics of the parameters. Generally, different types of regression models are used depending upon the type of dataset acquired: linear and non-linear, and simple and multiple regression. For large datasets, cross-validation of the regression analysis (RA) model is important because a chosen model is assumed to predict future observations equally well, which may not always hold [133]. Thus, the data are split into two portions: one set is used to build the regression model, while the other is reserved as 'future' observations to test against the model. The two split datasets, however, need not be equal in size [133].
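A minimal sketch of this split-and-validate workflow is given below, assuming hypothetical extracted band values as predictors; the helper names are illustrative, not from any cited study.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([np.ones(len(X)), X]) @ coef

def holdout_split(X, y, frac=0.7, seed=0):
    """Split extracted band values into a training set and a held-out 'future' set.
    The two portions need not be equal in size."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(frac * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```

The model is fitted on the first portion only; its predictions on the reserved portion stand in for the unseen future observations the review describes.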

Vegetation Indices
Vegetation indices are among the major tools for analyzing aerial images. They are numeric representations of the relationship between different wavelengths of light reflected from the plant surface [90]. There are various vegetation indices to describe the status of plants, including NDVI [4], the Optimized Soil-Adjusted Vegetation Index (OSAVI) [134,135], and CWSI [104,136]. A detailed study of different remote sensing vegetation indices was conducted by Xue and Su [137]. These indices are correlated with different imaging sensors to derive a conclusion. Excess Red (ExR), Excess Green (ExG), and Excess Blue (ExB) can be used with RGB imaging. The equations for ExR, ExG, and ExB are ExR = 1.4R - G, ExG = 2G - R - B, and ExB = 1.4B - G, where R represents red, G represents green, and B represents blue [4,15,18,138]. The physiological and morphological properties of the plant, such as water content, biochemical composition, nutrient status, biomass content, and diseased tissue, are reflected in the values of vegetation indices [70,76,139]. The most common vegetation index for determining diseased tissue is NDVI [104]. The range of NIR is 780-800 nm and that of R is 670-700 nm, as both come from reflected light [62,73,104]. The intensities of these lights are measured by multispectral or hyperspectral cameras and later formatted into an NDVI map, in which each pixel represents the value of the crop. It is also important to note that vegetation indices can be combined to form other vegetation indices (VIs); new VIs are therefore likely emerging even as this paper is being written. Studies using VIs to process data include the detection of the esca complex in a grape vineyard using NDVI [36]. Similarly, Albetis et al. [140] used NDVI and 10 other VIs to differentiate symptomatic and asymptomatic Flavescence dorée in grapevines.
Analyzing and comparing vegetation indices between diseased and healthy crop samples is a widely used approach in monitoring crop health. The use of VIs is considered simple, easy, and comparatively reliable for disease identification and monitoring.
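As a sketch, NDVI and ExG can be computed per pixel as follows, using the standard formulations from the literature (NDVI = (NIR - R)/(NIR + R); ExG on normalized chromatic coordinates). The small epsilon guarding against division by zero is our own addition, not part of the index definitions.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R), computed per pixel."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def excess_green(r, g, b):
    """ExG = 2g - r - b on chromatic (sum-normalized) coordinates."""
    total = r + g + b + 1e-9
    r, g, b = r / total, g / total, b / total
    return 2 * g - r - b
```

Applied to whole reflectance arrays, `ndvi` yields the per-pixel map described above; healthy, vigorous vegetation gives values closer to 1, while stressed or diseased tissue gives lower values.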

Deep Learning Models
Deep Learning (DL) is an approach under Machine Learning (ML) in which a computer model resembles the biological neural pathways of the human brain [141]. Deep learning uses artificial neural networks that contain many processing layers, unlike traditional neural networks [126]. In practice, it encompasses several steps, from data collection to image classification and the interpretation of results.

Artificial Neural Networks (ANNs)
The use of neural networks for disease recognition in agricultural crops is rapidly increasing [126,142,143]. A commonly used deep learning tool for image processing and classification is the Artificial Neural Network (ANN) [8]. ANNs are mathematical models that work in the same fashion as the human brain, with neurons connected to one another by synapses [126]. The neural networks are trained into a model using previously known data and are programmed to work on similar sets of data. There are different kinds of artificial neural networks for image classification: Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), and Generative Adversarial Networks (GANs). The neural networks most commonly used for plant disease detection and classification are CNNs.

Convolutional Neural Networks (CNNs)
CNNs are basic deep learning tools for identifying plant diseases using aerial imagery [144]. They are powerful modelling techniques for complex pattern recognition in tasks with large amounts of data, such as image recognition [126]. Studies that lack large amounts of data to run neural networks augment the data [27]. CNNs are the successors of earlier ANNs and were designed for fields with repeating patterns, such as image recognition of diseased plants. For the identification of diseases in plants using CNNs, several algorithms have been successfully applied, making crop health monitoring easier than before. The major CNN algorithms or architectures used are AlexNet [145,146], AlexNetOWTBn [147], GoogLeNet [146], Overfeat [148], and VGG [149]. The different studies that used CNNs to identify plant diseases are listed in Table 1 below. The performance of the deep architectures also depends upon the number and modification of training images, minibatch sizes, weight differences, and the bias learning rate [158]. A study by Too et al. [159] reported that DenseNets showed higher accuracy, with no overfitting or degraded performance, when compared with VGG 16, Inception V4, and ResNet. Similarly, AlexNet had higher accuracy than SqueezeNet in classifying tomato diseases [160], whereas AlexNet and VGG16 had similar accuracy in classifying tomato disease [158]. Mohanty, Hughes, and Salathé [151] used AlexNet and GoogLeNet to classify plant diseases and achieved an accuracy of 99.35%; however, the models performed poorly when tested on different sets of images.
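To make the convolutional building blocks concrete, the NumPy sketch below implements the convolution, activation, and pooling pattern that architectures such as AlexNet and VGG stack many times. It is a didactic single-channel toy, not an implementation of any of the cited models.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as used in CNN layers)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation, applied element-wise."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, which downsamples the feature map."""
    H, W = x.shape
    H, W = H - H % size, W - W % size
    return x[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))
```

A real architecture repeats `max_pool(relu(conv2d(...)))` with many learned kernels per layer, so that early layers respond to edges and textures while deeper layers respond to disease-specific patterns.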
Deep learning architectures are considered accurate compared with previously used models such as SVMs and random forest methods [158]. The correct prediction percentage of CNNs has been reported to be 1-4% higher than that of SVMs [161][162][163][164] and 6% higher than that of random forests [165]. However, Song et al. [166] reported a case in which CNN models gave 18% lower correct prediction in terms of Root Mean Square Error (RMSE). CNNs are widely used in the identification and classification of plant diseases using images. Crop classification using deep learning models helps with pest control, cropping activities, yield prediction, etc. [167]. Deep learning models have eased the work of growers in that they can capture an image in the field and identify the disease by uploading it to the software. Feature engineering, a complex process, is eliminated in CNN models, as the important features are learned during training on the dataset. However, different architectures have their own pros and cons. As the layers are stacked deeper, neural networks suffer from performance degradation, resulting in lower accuracy. Training is also time-consuming, as a large number of images are required. Deep networks experience internal covariate shift, which disrupts the distribution of inputs to each training layer. However, different techniques, such as skip connections [168], layer-wise training [169], transfer learning, initialization strategies, and batch normalization, are now used to overcome these challenges [159].
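The skip connections mentioned above can be sketched as a residual block, y = x + F(x): because the identity path bypasses the learned transform F, stacking many such blocks avoids the degradation seen with plain stacked layers. The weights and shapes below are illustrative only.

```python
import numpy as np

def dense(x, W, b):
    """A fully connected layer: x @ W + b."""
    return x @ W + b

def residual_block(x, W1, b1, W2, b2):
    """y = x + F(x): the identity shortcut lets signal (and gradients)
    bypass the inner transform F."""
    h = np.maximum(dense(x, W1, b1), 0)  # inner transform F with ReLU
    return x + dense(h, W2, b2)          # identity shortcut added back
```

When the inner weights are near zero, the block behaves as an identity mapping, which is why very deep residual stacks remain trainable where plain stacks degrade.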

Challenges of Automatic Plant Disease Identification Using UAVs
Automatic plant disease detection primarily means the identification of biotic injury caused by pathogens in plants with no direct human resources involved in the field. One way of automatically identifying plant disease is to deploy UAVs with machine learning algorithms installed on them. Information gathered by the algorithms helps not only in identifying diseases but also in estimating their severity [17]. Plants are infected by hundreds of pathogens in the field, and most of them exhibit similar symptoms. Appropriate identification of plant disease is one of the basic but challenging tasks in agricultural activities. Manually identifying plant disease is subject to bias and optical illusions, which result in errors [170]. It also involves intensive labor with economic costs. Nevertheless, automatic disease identification still requires expert opinion for disease confirmation in specific cases. Scouting each plant using laboratory and molecular techniques is not practically possible for disease identification over a large area. The large and complex data obtained from optical sensors make it possible to detect disease quickly and to distinguish between diseases, stress, and disease intensities [120]. This incentivizes scientists and researchers to develop tools that are programmable and can read every plant through images to detect diseases. There has been progress in automatic monitoring of crop health using UAVs and imagery. However, the automatic detection and identification of plant diseases still experiences difficulty with programming accuracy. Some of the challenges in automatic identification of plant diseases using UAVs are discussed by Barbedo [1,170]; these include background noise, unfavourable field conditions, sensor limitations, symptom variations, limitation of resources (peripherals and cameras), and training-validation discrepancy.
In addition, most crop health monitoring activities using UAVs are dominated by RGB and NIR sensors or their combination. The accurate detection of plant disease requires more advanced sensors with wider spectral ranges, such as hyperspectral sensors. These sensors have the potential to distinguish specific features of an object across several hundred narrow spectral bands [171]. Moreover, the platform used and the quality of the captured images also matter considerably for accurate crop health monitoring. There are different platforms and sources for acquiring images. Satellite images, such as Landsat, are available free of charge but lack the resolution to correctly detect plant disease. Even the satellite images that are available at a cost fall within the visible-NIR region. The development of sensors providing high-quality spatial, temporal, and spectral information is ongoing [171,172]. Different new technologies have also been introduced to monitor crop phenology, such as sun-induced fluorescence, and short-wave infrared for greenhouse gas monitoring. However, these sensors are highly affected by climatic conditions, such as clouds and sunlight, and are only available at regional or global scales [173].
The limitations extend to image processing, segmentation, and classification as well. Different machine learning tools and architectures are available but have limitations. Some of the major limitations in image processing, classification, and segmentation are low-resolution images, low accuracy, the unavailability of large datasets to train the models, etc. Nguyen and Shah [174] found a huge discrepancy between the accuracy on their dataset and on the PlantDisease dataset. The authors recommend a semi-supervised approach to classifying disease, which would create more diverse images. Similarly, Arsenovic et al. [175] suggested an improvement in the decision-making process. In a nutshell, the process of automatic identification of plant diseases using UAVs and deep learning can be improved by choosing a high-quality image-capturing camera, appropriate sensors (RGB, multispectral, or hyperspectral) depending upon the purpose of the study, datasets large enough to accurately train the model, and an appropriate architecture for the deep learning model.
Apart from these challenges of image analysis and result interpretation during automatic detection of plant diseases using UAVs, challenges regarding their usage and application in the field also exist. The regulatory body controlling UAV applications in the United States, the Federal Aviation Administration (FAA), imposes various regulations, such as limits on altitude, operating areas, and zones. In 2018, the FAA applied legal clauses to the application of pesticides using UAVs, requiring the operator to receive permission through three exemptions and a waiver process [176]. Privacy in the operating and surrounding areas is strictly addressed [177]. Regulations on UAV applications and their handling in the international context are driven by the International Civil Aviation Organization (ICAO). More details about the different authorities, protocols, and information about UAVs and their regulations in the global context are available in Stöcker et al. [178]. UAV technology is expanding and becoming cheaper over time but is still not accessible to many smallholding growers. The addition of hyperspectral and multispectral sensors, which are of the utmost importance for monitoring plant health, to an already purchased UAV adds more than $10,000 to the cost [4,72]. Similarly, affording trained manpower is still unrealistic for a small-scale farmer.

Future Considerations
The Unmanned Aerial Vehicle has been a boon to automatic monitoring of crop status. Nonetheless, identifying the type of stress, whether biotic or abiotic, is still vague. Researchers and scientists are joining with UAV system manufacturers to broaden the possibilities, although many challenges still limit progress. The different platforms and types of sensors are discussed in the sections above; each has its own pros and cons. Lightweight UAVs with high-resolution cameras can capture better images, which helps with the proper detection of diseases and reduces the chance of error. Sensors also play a vital role in disease detection. Abdulridha, Ampatzidis, Kakarla, and Roberts [45] used a benchtop hyperspectral imaging system with a 23 mm lens having a spectral range of 380-1030 nm, 281 spectral channels, a 15.3° field of view, and a spectral resolution of 2.1 nm to detect powdery mildew in squash. The wider the spectral range, the better the differentiation of disease symptoms, which eventually helps to reduce error. The selection of sensors and their spectral range is guided by the nature of the disease. For example, Xu et al. [179] used NIR spectroscopy and found that the best wavelengths for disease monitoring in tomatoes were 1450 nm and 1900 nm. Similarly, Al-Ahmadi et al. [180] utilized the same technique in the range of 900-2400 nm to monitor charcoal rot (Macrophomina phaseolina) in soybean (Glycine max). Scientists are also working to develop hybrid UAVs that work as both fixed-wing and multirotor systems. The ALTI Transition [181,182] is one such system, serving as a fixed-wing or multirotor platform when and where needed. A combined platform consisting of multiple sensors, such as RGB, NIR, red-edge, infrared, and many others, could emerge in the future, decreasing the payload and measuring a variety of physiological parameters from the same unit [183].
Increased flight duration is the future of agricultural UAVs; with limited flight duration, it is challenging to assess the overall status of the crop. Interpreting results in the form of images, rather than as parametric values such as NDVI or other indices, could help growers learn and execute management practices. Most decisions regarding crop health are currently based on the values of vegetation indices. Recent developments in crop monitoring integrating UAVs and deep learning techniques offer concomitant crop counting, yield prediction, and crop disease and nutrient deficiency detection [184]. Nebiker, Lack, Abächerli, and Läderach [83] utilized low-weight multispectral UAV sensors to predict grain yield and diseases in rape and barley. However, most of these integrations of concomitant yield and disease monitoring are only prototypes and are not available for commercial purposes [185]. The universal system developed by the American Society for Testing and Materials (ASTM) is called the G173 standard. The G173 standard is used by different software packages to derive vegetation indices such as NDVI, Green NDVI (GNDVI), the Soil Adjusted Vegetation Index (SAVI), etc. However, the G173 standard assumes a sun-facing surface tilted at 37°, corresponding to the average latitude of the continental United States [186]. This motivates the development of site-specific irradiance systems that can provide a precise description of vegetation indices and help get closer to more accurate results using the Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS). SMARTS builds on the ASTM G173/G177 standards to provide more local solar irradiance spectra [187]. Scientists from Israel and Italy have launched hyperspectral imaging sensors into the Earth's orbit under the Space-borne Hyperspectral Applicative Land and Ocean Mission (SHALOM).
SHALOM is expected to work in the field of environmental quality and assist precision agriculture in Israel and Italy [9,188]. Similarly, the Fluorescence Explorer (FLEX) is a satellite to be launched by the European Space Agency (ESA) in 2022 [189,190]. FLEX comprises three instrumental arrays measuring fluorescence, hyperspectral reflectance, and canopy temperature. This satellite is expected to observe the vegetative fluorescence of crops at the global level [189,191,192]. Further emphasis should be given to chemometric and spectral decomposition for derivative methods of analysis. Many researchers are creating their own datasets for training and validation; it has become important to develop agricultural datasets that will aid machine learning algorithms and help with accurate disease diagnosis [163]. In this review, we saw that the future of automatic detection of plant diseases lies in combining agriculture with machine learning. The development of robotic arm facilities on UASs, which could retrieve samples and return them for confirmation in cases of confusion, would help better diagnose plant diseases. Thus, machine learning, agriculture, and UAVs can act together to extend the realm of food security by limiting food loss due to pests and diseases.

Conclusions
Rapid population growth and climate change are leading causes of food insecurity. Advancements in UAVs and their systems for diagnosing crop stress, pests, and diseases have greatly benefitted growers. Increasing farm productivity and lowering the cost of production using advanced technology are helping growers increase yield and sustainability on their farms. The automatic detection of plant diseases using UAVs has emerged as a novel technology of precision agriculture. UAVs are accurate and provide large amounts of data regarding crop status, which aids in making management decisions. However, there is still immense opportunity in plant disease diagnosis. As discussed in the future considerations section, the development of various machine learning algorithms and collaboration with other fields will help to reach this milestone.