GeoDLS: A Deep Learning-Based Corn Disease Tracking and Location System Using RTK Geolocated UAS Imagery

Abstract: Deep learning-based solutions for precision agriculture have recently achieved promising results. Deep learning has been used to identify crop diseases at the initial stages of disease development in an effort to create effective disease management systems. However, the use of deep learning and unmanned aerial system (UAS) imagery to track the spread of diseases, identify diseased regions within cornfields, and notify users with actionable information remains a research gap. Therefore, in this study, high-resolution, UAS-acquired, real-time kinematic (RTK) geotagged, RGB imagery at an altitude of 12 m above ground level (AGL) was used to develop the Geo Disease Location System (GeoDLS), a deep learning-based system for tracking diseased regions in corn fields. UAS images (resolution 8192 × 5460 pixels) were acquired in cornfields located at Purdue University’s Agronomy Center for Research and Education (ACRE), using a DJI Matrice 300 RTK UAS mounted with a 45-megapixel DJI Zenmuse P1 camera during corn stages V14 to R4. A dataset of 5076 images was created by splitting the UAS-acquired images using tile and simple linear iterative clustering (SLIC) segmentation. For tile segmentation, the images were split into tiles of sizes 250 × 250 pixels, 500 × 500 pixels, and 1000 × 1000 pixels, resulting in 1804, 1112, and 570 image tiles, respectively. For SLIC segmentation, 865 and 725 superpixel images were obtained using compactness (m) values of 5 and 10, respectively. Five deep neural network architectures, VGG16, ResNet50, InceptionV3, DenseNet169, and Xception, were trained to identify diseased, healthy, and background regions in corn fields. DenseNet169 identified diseased, healthy, and background regions with the highest testing accuracy of 100.00% when trained on images of tile size 1000 × 1000 pixels. Using a sliding window approach, the trained DenseNet169 model was then used to calculate the percentage of diseased regions present within each UAS image.
Finally, the RTK geolocation information for each image was used to update users with the location of diseased regions with an accuracy of within 2 cm through a web application, a smartphone application, and email notifications. The GeoDLS could be a potential tool for an automated disease management system to track the spread of crop diseases, identify diseased regions, and provide actionable information to users.


Introduction
As diseases pose a serious threat to crop production systems worldwide [1], research is underway to develop high-throughput precision agricultural solutions for disease management in fields. Most current solutions rely on pesticide application over entire fields, which is expensive and destructive to healthy crops [2]. Furthermore, these approaches are subjective and often ineffective [3]. Therefore, there is a need to develop effective solutions capable of identifying diseased regions, which would help overcome the limitations of widely practiced approaches.
In particular, deep learning has been used extensively for crop disease diagnosis since the introduction of the PlantVillage dataset in 2015 [15]. Deep learning was used for disease identification in corn with accuracies of up to 95.99% [16]. Soybean diseases were identified with 94.29% accuracy [17]. Diseases in strawberry, grape, tomato, and cucumber were identified with accuracies of up to 95.59% [18], 97.22% [19], 98.4% [20], and 93.4% [21], respectively. For detailed coverage of deep learning for disease identification and monitoring, readers are encouraged to refer to sample review articles [15,22-24]. Though deep learning has shown great promise for disease identification, developing an effective plant disease diagnosis system requires scouting entire fields to locate diseased regions. Therefore, deploying UAS mounted with sensors capable of scouting entire fields to identify crop diseases is becoming a preferred approach within the research community.
UAS imagery acquired using hyperspectral sensors has been useful for identifying crop diseases using deep learning [25-29]. Multispectral imagery has also been useful for disease identification [30,31]. Although spectral sensors can help locate diseased regions, they are costly and difficult to operate [32]. On the other hand, RGB sensors cost less and are easy to operate [33]. Therefore, the use of RGB sensors for identifying diseases is gaining popularity.
UAS imagery acquired using RGB sensors was recently used to train deep learning models for identifying northern leaf blight (NLB) disease in corn [34]. NLB disease lesions in UAS imagery were also identified using deep learning techniques [35]. Although high accuracies were achieved, the locations of the diseased regions within the field were not reported. In addition, these studies resized the UAS images, which could lead to the loss of features needed for deep learning-based disease identification. Thus, splitting large UAS images into smaller images or segments can be advantageous [11].
Segmentation has been used in the literature for preparing datasets to train deep learning models, and multiple computer vision-based segmentation approaches have been proposed over the years. Recently, the simple linear iterative clustering (SLIC) segmentation approach was proposed [36]. SLIC segmentation offers a fast and computationally efficient method for creating superpixels corresponding to similar regions within an image [37]. SLIC segmentation has been used for precision agricultural applications such as insect counting [8], tree detection in urban areas [38], and plant disease identification [39-42], including the creation of superpixels from UAS imagery to train deep learning models for identifying diseases in soybean [8,39] and rice [46]. UAS imagery has also been used for disease identification in potato [43], wheat [44], and rice [45]. To the best of our knowledge, SLIC segmentation has not been used for corn disease diagnosis from UAS imagery.
Current studies have reported promising results for disease diagnosis. However, developing an effective disease management system requires accurately identifying and locating diseased regions in fields. One practical approach relies on a sliding window guided by a deep learning model to identify regions within an image. A deep learning-based approach using a sliding window was recently reported to identify diseased regions in corn fields with testing accuracies of up to 97.84% [5]. The sliding window approach was also used for identifying diseased regions using hyperspectral imagery [47]. Although the sliding window with deep learning has been used in different domains [48-51], its application to crop disease identification is limited. In addition, different segmentation approaches and the GNSS (Global Navigation Satellite System) information embedded in UAS imagery have not been harnessed to develop an application that alerts users of diseased hot spots within corn fields. The RTK geolocation information from images could help farmers or robots navigate to specified locations within fields.
In this study, a new system named the Geo Disease Location System (GeoDLS) was developed to track and locate diseased regions within corn fields and notify users. Deep learning was used to train disease region identification models on tile-segmented images and on superpixels created using SLIC segmentation. A total of 25 deep learning models were trained using state-of-the-art deep neural network architectures: VGG16, ResNet50, InceptionV3, DenseNet169, and Xception. After comparing the different techniques for splitting the images, the real-time kinematic (RTK) geolocation information for each uploaded image was obtained. The user was then notified of diseased regions in corn fields using the RTK geolocation and the deep learning model to indicate the percentage of the field infected at the location where the image was acquired. Five primary objectives were identified for developing a disease region identification and location tool:
1. Acquire a UAS imagery dataset in diseased corn fields;
2. Use tile segmentation on UAS imagery to create datasets;
3. Use SLIC segmentation on UAS imagery to create superpixel datasets;
4. Train deep learning models to compare the different segmentation approaches for disease region identification;
5. Develop an application for alerting users of diseased regions in corn fields using RTK geolocated images.

Dataset
For this study, a custom dataset consisting of a total of 5076 images was created by subjecting UAS imagery to two segmentation techniques, tile and SLIC segmentation, to develop a deep learning-based disease region identification tool. A DJI Matrice 300 quadcopter UAS with an RTK mobile station was utilized for collecting images in diseased corn fields in 2021. The UAS was mounted with a Zenmuse P1 45-megapixel RGB camera capable of acquiring images at a resolution of 8192 × 5460 pixels. Flights were conducted at an altitude of 12 m above ground level (AGL), resulting in a ground sampling distance (GSD) of 0.15 cm/pixel. A DJI D-RTK 2 mobile station was used, which helped geotag images with an accuracy of up to 2 cm [52]. The location of the mobile station is marked in Figure 1. The mobile station was connected to the remote control and the Matrice 300 UAS. The DJI Pilot 2 application (DJI, Shenzhen, China) then automatically corrected the geolocation error and stored the corrected coordinates in the image EXIF data [53,54]. A total of 151 images corresponding to diseased regions in the corn field, from flights conducted on July 30th, August 2nd, August 4th, and August 13th, were segmented into tiles and superpixels. Data collection started 65 days after planting in corn field 21B (Figure 1), located at Purdue University's Agronomy Center for Research and Education (ACRE), when the crop was at stage V14, the recommended time to scout for diseases [55]. Regions in the field were infected with northern leaf blight, gray leaf spot, northern leaf spot, and common rust diseases. Tile and SLIC segmentation were then used to split the UAS-acquired images into tiles and superpixels for training deep learning models to identify diseased, healthy, and background regions within corn fields.
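The reported GSD is consistent with the standard ground-sampling-distance relation. The figures below assume the 35 mm lens option of the Zenmuse P1 and its full-frame sensor (35.9 mm wide across 8192 pixels); these lens and sensor values are assumptions for illustration, not stated in the text:

```latex
\mathrm{GSD} = \frac{H \, p}{f}
             = \frac{12\,\mathrm{m} \times \left(35.9\,\mathrm{mm} / 8192\right)}{35\,\mathrm{mm}}
             \approx 1.5\,\mathrm{mm} = 0.15\,\mathrm{cm\ per\ pixel},
```

where $H$ is the flight altitude, $p$ the pixel pitch, and $f$ the focal length.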

Tile Segmentation
The UAS-acquired images from the different dates were first split into a total of 3486 tiles using tile segmentation. The images were split into tiles of sizes 250 × 250 pixels, 500 × 500 pixels, and 1000 × 1000 pixels to prepare the datasets for training deep learning models. Each original image of size 8192 × 5460 pixels was split as per the three tile sizes, resulting in 672, 160, and 40 tiles per image, respectively (Figure 2). Each tile was manually labeled as diseased, healthy, or background and organized into training and testing folders using a 50-50% training-testing split ratio to train deep learning models. Overall, the datasets corresponding to tile sizes of 250 × 250 pixels, 500 × 500 pixels, and 1000 × 1000 pixels comprised 1804, 1112, and 570 images, respectively (Table 1).
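The per-image tile counts follow directly from integer division of the 8192 × 5460 pixel frame by each tile size, discarding partial edge tiles. A minimal sketch of this splitting step (the function name and the use of NumPy are illustrative, not the study's actual code):

```python
import numpy as np

def split_into_tiles(image, tile_size):
    """Split an H x W x C image array into non-overlapping square tiles.

    Partial tiles at the right and bottom edges are discarded, which
    reproduces the per-image tile counts reported for the
    8192 x 5460 pixel UAS images.
    """
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tiles.append(image[top:top + tile_size, left:left + tile_size])
    return tiles

# Dummy array standing in for one UAS image (height 5460, width 8192, RGB).
dummy = np.zeros((5460, 8192, 3), dtype=np.uint8)
print(len(split_into_tiles(dummy, 1000)))  # 40 tiles per image
```

With tile sizes of 500 and 250 pixels, the same function yields 160 and 672 tiles per image, matching the counts above.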

SLIC Segmentation
Superpixels are segments of an image created by grouping pixels into perceptually meaningful atomic regions that may be similar in color, texture, and shape [38]. Although different algorithms exist for creating superpixels, simple linear iterative clustering (SLIC) segmentation is a popular and computationally efficient method for segmenting an image into multiple superpixels [36].
When creating superpixels, the SLIC algorithm relies on two primary parameters: the number of segments to be created (K) and the compactness (m). The compactness controls the trade-off between spatial proximity and color similarity when pixels are grouped into superpixels. Increasing the compactness (m) generates more regular, nearly quadrilateral contours, whereas reducing it produces more irregular superpixels, which was observed to be better for differentiating between diseased and healthy regions within UAS-acquired corn field imagery. Therefore, different compactness (m) values were tested to assess their impact, while the number of segments (K) was fixed to maintain consistency. The parameters for SLIC segmentation were chosen after experimenting with different values, as shown by [39,56].
For SLIC segmentation in this study, a total of 1590 superpixels were created using different combinations of parameters (Figure 3). After testing various compactness (m) values and numbers of segments, compactness (m) values of 5 and 10 were used (Table 2), and the number of segments (K) was fixed at 100. Individual segments were labeled as diseased, healthy, or background to prepare the dataset for training deep learning models.

Deep Learning
Deep learning is a machine learning technique that relies on deep neural networks (DNNs), which can accurately learn important features from training data for identification tasks.
A DNN typically consists of input, hidden, and output layers. The input layer takes in the input images as tensors of a size specified by the DNN architecture. Multiple hidden layers follow the input layer and comprise convolutional, dense, pooling, or batch normalization layers. Fully connected layers then follow, ending in an output layer. The output layer consists of neurons corresponding to the number of classes, using either the sigmoid activation function for a binary classification problem or the softmax activation function for a multiclass classification problem.
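For the three-class problem in this study (diseased, healthy, background), the softmax output layer converts the network's raw scores into class probabilities. A minimal NumPy sketch, with made-up logit values purely for illustration:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# Hypothetical output-layer logits for the three classes used in this
# study: index 0 = diseased, 1 = healthy, 2 = background.
probs = softmax(np.array([2.0, 0.5, -1.0]))
print(probs.argmax())  # 0 -> the "diseased" class
```

The probabilities always sum to one, and the predicted class is simply the index with the highest probability.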
Image classification is a deep learning technique in which a probability is assigned to an image for each of the classes used to train the model. Unlike object detection and semantic segmentation, traditional image classification cannot locate the identified objects using bounding boxes or masks. Therefore, in this study, each UAS image was split using tile or SLIC segmentation, which helped overcome the tedious annotation required for training object detection and semantic segmentation models. Image classification was then used to accurately identify the diseased, healthy, and background regions.
Training robust deep learning-based image classification models requires access to large imagery datasets consisting of thousands of images. One of the most popular datasets, ImageNet, comprises 14 million images. Due to the effort and resources required, access to such large datasets for disease identification is limited. Therefore, transfer learning was used for training each model in this study.
Transfer learning is a technique commonly used to train deep learning models when access to large datasets and computational resources is limited. It trains deep learning models by utilizing pre-trained weights from models trained for similar but different tasks. For image classification, pre-trained ImageNet weights are most commonly used.
A total of five state-of-the-art DNN architectures, namely VGG16 [57], ResNet50 [58], InceptionV3 [59], DenseNet169 [60], and Xception [61], were utilized in this study. Transfer learning with pre-trained ImageNet weights was used to train deep learning models capable of locating diseased regions in corn fields from UAS imagery.
A total of 25 deep learning models were trained for this study using the datasets created with the tile and SLIC segmentation approaches. Each of the five DNN architectures was trained on the tile datasets created using tile sizes of 250 × 250, 500 × 500, and 1000 × 1000 pixels, and on the superpixel datasets created using compactness (m) values of 5 and 10. Before training, data augmentation was applied: each image was augmented using built-in TensorFlow functions by rotating, flipping, and zooming. In addition, each image was converted into a tensor matching the input size requirement of each DNN architecture; for VGG16, ResNet50, DenseNet169, and Xception, the training images were resized to 224 × 224 pixels, while the input size for InceptionV3 was 299 × 299 pixels. Each model was trained for 25 epochs with a learning rate of 0.0001, the ADAM optimizer, a batch size of 32, and the categorical cross-entropy loss function. After training all the models, different metrics were used to evaluate and compare their performances.
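The rotate, flip, and zoom augmentations can be sketched with plain Pillow. The study used TensorFlow's built-in functions, so the function name and the specific crop fraction below are illustrative assumptions:

```python
from PIL import Image, ImageOps

def augment(tile):
    """Return simple augmented variants of a training tile: a 90-degree
    rotation, a horizontal flip, and a centre zoom (crop then resize
    back to the original size)."""
    w, h = tile.size
    rotated = tile.rotate(90)
    flipped = ImageOps.mirror(tile)
    # Zoom: crop the central 80% of the tile and resize it back up.
    zoomed = tile.crop((w // 10, h // 10, w - w // 10, h - h // 10)).resize((w, h))
    return [rotated, flipped, zoomed]

tile = Image.new("RGB", (250, 250), "green")
variants = augment(tile)
print(len(variants))  # 3
```

Each variant keeps the original tile dimensions, so the augmented images can be fed to the same input pipeline as the originals.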

Evaluation Metrics
Two primary evaluation metrics were utilized to evaluate the trained deep learning models: confusion matrices and testing accuracies.
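A confusion matrix tabulates, for each true class, how the model's predictions are distributed over the predicted classes; the overall accuracy is the trace (correct predictions) divided by the total count. A small NumPy sketch with made-up labels for the study's three classes:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels: 0 = diseased, 1 = healthy, 2 = background.
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]
cm = confusion_matrix(y_true, y_pred, 3)
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 0.75
```

Off-diagonal entries show which classes are confused with each other, which a single accuracy number cannot reveal.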

GeoDLS: Web and Smartphone Application for Disease Region Identification
After training and comparing the deep learning models for accurately locating and identifying diseased regions within corn fields from UAS imagery, a disease region identification tool named the Geo Disease Location System (GeoDLS) was developed for use via web browsers and smartphones. The Streamlit Python library was used for creating the application. Streamlit makes it easy to deploy deep learning models for various tasks and offers additional tools that help users easily upload images for analysis.
The home page was developed to allow users to choose tile segmentation or SLIC segmentation for identifying diseased regions. The application's title is displayed at the top of the home page, along with a map of the farm where the data were collected.
After the user selects the type of segmentation, another prompt is provided to upload an image for analysis. The uploaded image is fed into a sliding window algorithm that iterates over each segment and classifies it as diseased, healthy, or background using the trained deep learning model. If a region is identified as diseased, it is highlighted in orange on the analyzed image.
Using the Python "Exif" library, the name of the image, the time at which the image was acquired, and the RTK geolocation coordinates at which it was acquired were extracted. The area of the diseased regions and the EXIF information obtained from the image were then sent to the user via email. The Python "smtplib" library was used to set up an SMTP server for sending email notifications. For this study, a temporary Gmail account named farmerhack1@gmail.com was created to send emails with information corresponding to diseased regions identified from UAS imagery acquired in corn fields.
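The notification itself can be composed with Python's standard `email` module before handing it to `smtplib` for delivery. In this sketch the function name, the recipient address, and the coordinate and percentage values are placeholders, not values from the study:

```python
from email.message import EmailMessage

def build_notification(image_name, timestamp, lat, lon, diseased_pct,
                       sender, recipient):
    """Compose the disease-alert email; the fields mirror the EXIF
    metadata extracted from each RTK-geotagged UAS image."""
    msg = EmailMessage()
    msg["Subject"] = f"GeoDLS alert: {diseased_pct:.1f}% diseased"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(
        f"Image: {image_name}\n"
        f"Acquired: {timestamp}\n"
        f"Location: {lat}, {lon}\n"
        f"Diseased area: {diseased_pct:.1f}% of image\n"
        f"Map: https://www.google.com/maps?q={lat},{lon}\n"
    )
    return msg

# Placeholder timestamp and coordinates for illustration only.
msg = build_notification("DJI_0042.JPG", "2021-08-04 10:32",
                         40.4705, -86.9920, 12.5,
                         "farmerhack1@gmail.com", "user@example.com")
print(msg["Subject"])
```

The composed message would then be sent with `smtplib.SMTP_SSL` and the Gmail credentials, which is omitted here because it requires live network access.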

Computational Resources
This study's code was primarily written in the Python programming language. The TensorFlow 2.0 deep learning framework was utilized for training the deep learning models, and each model was trained using an NVIDIA RTX 3090 GPU. In addition, Python was used to develop the web application.

Tile Segmentation
For the first set of experiments, the datasets created by splitting the UAS images into tiles of sizes 250 × 250 pixels, 500 × 500 pixels, and 1000 × 1000 pixels were used.

Tile Size of 250 × 250 Pixels
The first set of models was trained using the tile segments of size 250 × 250 pixels. After the models were trained, the training and validation accuracy and loss plots were created (Figure 4). It was observed that, apart from the ResNet50 model, the validation accuracy for all models reached 100.00%. The plots also showed that VGG16 had a higher degree of overfitting, and that the validation accuracy of the InceptionV3 model started to decrease towards the end of training, which indicates some degree of overfitting. Nevertheless, it is important to evaluate the models by comparing their testing accuracies.
In addition, the testing accuracies and testing losses were obtained using the testing dataset, as shown in Table 3. A testing accuracy of 100.00% was achieved for the VGG16, DenseNet169, and Xception models, and the lowest testing loss was achieved for the Xception model. Therefore, when a tile size of 250 × 250 pixels was used, the Xception model performed the best.

Tile Size of 500 × 500 Pixels
Next, five models were trained to identify diseased regions within the UAS imagery of diseased corn fields using a tile size of 500 × 500 pixels. First, the training and validation accuracies and losses were plotted (Figure 5). Almost no overfitting was observed, as there were only very small fluctuations in the generated plots. The ResNet50 model again failed to train, and its validation accuracy did not cross 50%.
After the plots were generated, the testing accuracies and testing losses were obtained and compared (Table 4). The testing accuracies were 100.00% for InceptionV3, VGG16, DenseNet169, and Xception. After evaluating the testing losses, it was observed that InceptionV3 achieved the best performance, with the lowest testing loss of 0.0045.

Tile Size of 1000 × 1000 Pixels
Finally, tile segments of size 1000 × 1000 pixels were used to train the models. A low degree of overfitting was observed, as almost no fluctuation existed in the training and validation accuracy and loss plots (Figure 6). With the larger tile size, the testing accuracy for ResNet50 improved.
Testing accuracies and losses were again compared to evaluate the overall performance of the models. The testing accuracies for InceptionV3, VGG16, DenseNet169, and Xception were 100.00%. Unlike the ResNet50 models trained on tile segments of sizes 250 × 250 pixels and 500 × 500 pixels, the ResNet50 testing accuracy was high at 87.50% when tile segments of 1000 × 1000 pixels were used. However, the best model was DenseNet169, as it achieved the highest testing accuracy of 100.00% at the lowest testing loss of 0.0003 (Table 5).

SLIC Segmentation
For the second set of experiments, the datasets were created by splitting the UAS images into superpixels using SLIC segmentation with two compactness (m) values, i.e., 5 and 10.

Superpixels Created Using Compactness (m) Value of 5
When the compactness (m) value was set to 5, the created superpixels had more irregular boundaries. After the dataset was prepared with diseased, healthy, and background superpixels, the five DNN architectures were used to train five different models. After training, the training and validation accuracy and loss plots were created, as shown in Figure 7. A larger degree of overfitting was observed, as the validation loss values fluctuated throughout training. The ResNet50 model once again failed to train well, and its validation accuracy did not cross 50%.
In addition, the testing accuracies and testing losses were obtained using the testing dataset, as shown in Table 6. The highest testing accuracy of 93.75% was achieved for the VGG16 model, with a corresponding testing loss of 0.1872. No other model achieved a testing accuracy greater than 90%.

Superpixels Created Using Compactness (m) Value of 10
For the superpixels created using a compactness (m) value of 10, the testing accuracies and losses of the five trained models were compared in the same manner. The DenseNet169 model achieved the highest testing accuracy and lowest testing loss of 93.75% and 0.2469, respectively (Table 7).

Sliding Window Disease Region Identification
After comparing the performances of the different segmentation types, it was observed that tile segmentation yielded higher overall results for accurately identifying the diseased regions present within corn fields. The testing accuracies using tile segmentation reached up to 100%, whereas for SLIC segmentation, the highest testing accuracy was 93.75%. Therefore, the DenseNet169 model trained on tiles of size 1000 × 1000 pixels was selected to identify diseased, healthy, and background regions. A sliding window was then guided by the DenseNet169 model to identify and highlight the diseased regions within the image. Diseased regions were highlighted in orange, as shown in Figure 9. In addition, the area of the diseased regions was calculated with respect to the area of the entire image and reported in the title of the image.
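The reported percentage reduces to counting how many window positions the classifier labels as diseased. A self-contained sketch with a toy stand-in classifier (the study used the trained DenseNet169 model; the threshold rule below is purely illustrative):

```python
import numpy as np

def diseased_fraction(image, tile_size, classify):
    """Slide a non-overlapping window over the image and return the
    fraction of window positions the classifier labels as diseased."""
    h, w = image.shape[:2]
    total, diseased = 0, 0
    for top in range(0, h - tile_size + 1, tile_size):
        for left in range(0, w - tile_size + 1, tile_size):
            tile = image[top:top + tile_size, left:left + tile_size]
            total += 1
            if classify(tile) == "diseased":
                diseased += 1
    return diseased / total

# Toy classifier: flags a tile as diseased when its mean green value is
# low. The study used a trained DenseNet169 model in its place.
def toy_classifier(tile):
    return "diseased" if tile[..., 1].mean() < 100 else "healthy"

img = np.full((2000, 4000, 3), 150, dtype=np.uint8)
img[:1000, :1000, 1] = 0  # one dark 1000 x 1000 corner region
frac = diseased_fraction(img, 1000, toy_classifier)
print(frac)  # 1 diseased tile out of 8 -> 0.125
```

In the application, the same loop also records the coordinates of each diseased window so it can be highlighted in orange on the output image.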
Figure 9. The sliding window algorithm identifies and highlights diseased regions in UAS imagery acquired in diseased corn fields.

GeoDLS Web and Smartphone Applications
After training the deep learning models capable of accurately locating the diseased regions present within corn fields using the two segmentation approaches, and after developing the sliding window algorithm for highlighting the diseased regions, a web application was developed using the Streamlit API.
The application's home page displays the title and a map with a pinpoint on the locations of the farms or fields where data were collected. In addition, a drop-down box at the top left of the screen prompts the user to select the segmentation algorithm, as shown in Figure 10. If the user selects the tile segmentation algorithm, the model that achieved the highest testing accuracy is used for identifying the diseased regions. After the user chooses the option to identify diseased regions within corn fields, the user is prompted to upload an image, as shown in Figure 11. The image can be selected from the computer. Once the image is uploaded, the Pillow library in Python is used to read the image and perform the segmentation. If tile segmentation was selected, the original image is split into tiles of size 1000 × 1000 pixels. Each tile is then passed into the trained deep learning model and identified as diseased, healthy, or background. All the diseased regions are highlighted in orange to indicate the diseased parts of the image corresponding to regions in the field, as shown in Figure 12. In addition, the percentage of the image consisting of diseased regions is calculated and displayed to the user. Finally, the RTK geolocation information stored in the image's EXIF data is used to locate the image on the map (Figure 13). The total area corresponding to the diseased regions, the name of the image, the date and time of image acquisition, and the coordinates are then sent to the user in an email. The emailed link can be opened using a smartphone or a web browser to help update users on diseased regions in their fields, and once the email is sent, a notification is displayed on the user's smartphone. A sample of the email sent/received is shown in Figure 14. This information can help users keep track of disease information in different parts of their fields until harvest.

Although a web application is useful, many users are also likely to use the application in the field. Therefore, the application has also been designed for smartphones, as shown in Figure 15. The task can be selected, and the location is shown on the home page. Images can be uploaded from the gallery for analysis and disease diagnosis, and the smartphone application provides the added benefit of taking an image on the go. After the image is uploaded, the diseased regions are again identified, and a map is displayed with the information for the diseased corn field. Finally, the information is also sent to the user in an email.

Discussion
The use of deep learning-based solutions for agricultural applications is on the rise. Disease identification is a complex task, and deep learning has shown great promise in the literature. Accurate disease identification is necessary for the development of disease management systems; in particular, accurately identifying diseased regions within corn fields is essential to help farmers control and track disease spread. Although traditional approaches for disease region identification that rely on manual scouting are common, it is important to explore efficient modern solutions. Therefore, this study relied on RGB UAS imagery with RTK geolocation information for identifying and locating diseased regions in corn fields.
The availability of UAS imagery data from diseased corn fields is limited. A publicly available UAS imagery dataset was, however, acquired in corn fields [13]. This NLB dataset was used for disease identification, resulting in high accuracies. However, the dataset only comprises UAS images corresponding to the NLB foliar disease of corn and does not harness geolocation information; thus, it cannot be used to train deep learning models for identifying and locating diseased regions in corn fields. Therefore, in this study, a UAS imagery dataset was acquired to train deep learning models for accurately identifying diseased regions. The UAS-acquired images were then subjected to two different techniques, tile and SLIC segmentation, which were used to split each image into multiple smaller tiles or superpixels. Tiles of sizes 250 × 250 pixels, 500 × 500 pixels, and 1000 × 1000 pixels were created, with 1804, 1112, and 570 images, respectively. Superpixels with compactness (m) values of 5 and 10 were created, with 865 and 725 images, respectively. Overall, a total of 5076 images were created and used for training and evaluating deep learning models.
The images were used to train a total of 25 deep learning models using state-of-the-art neural network architectures to compare the performance of the different segmentation approaches for diseased region identification. The tile segmentation approach performed better for identifying diseased regions than the SLIC segmentation approach. For SLIC segmentation, testing accuracies of up to 93.75% were achieved using the DenseNet169 model. Similar testing accuracies of up to 93.82% were reported for soybean pest identification using UAS imagery subjected to SLIC segmentation [8], and testing accuracies of up to 99.04% were reported for soybean disease identification using UAS imagery and SLIC segmentation [39]. However, both studies were conducted by flying the UAS 2 m above the canopy. UAS imagery and SLIC segmentation were also used for rice disease diagnosis; however, accuracy values were not reported [46]. For SLIC segmentation, we achieved an accuracy of 93.75% for corn disease region identification from UAS imagery acquired at 12 m. Conducting flights at a higher altitude helps cover larger areas in a shorter time. Furthermore, as a high-resolution sensor capable of acquiring images at a resolution of 8192 × 5460 pixels was used, we maintained a high spatial resolution with a GSD of 0.15 cm/pixel.
In this study, tile segmentation outperformed SLIC segmentation for deep learning-based corn disease region identification at each of the different tile sizes. In the literature, deep learning was used for disease diagnosis using hyperspectral UAS imagery with an accuracy of 85% [47]. Deep learning was also used along with a sliding window for identifying diseased regions in a corn field, with a testing accuracy of up to 97.84% [5]. In this study, however, testing accuracies of up to 100% were observed for disease region identification in corn fields. The DenseNet169 model trained on tile segments of size 1000 × 1000 pixels was the best, as it achieved the lowest loss value of 0.0003. Therefore, after achieving testing accuracies of up to 100% in this study, it can be concluded that RGB imagery has great potential for identifying diseased regions with confidence.
After the best deep learning model was identified, the model was deployed in the form of a web and smartphone application. Although different tools have been created for plant disease diagnosis [62-64], most are not capable of UAS-based corn disease diagnosis. The GeoDLS tool, however, supports UAS imagery-based corn disease region identification by providing an interactive user interface via a web and smartphone application. Additionally, the location of diseased regions was identified and communicated using email. The application will be further enhanced in a future study by supporting UAS-based disease identification in mosaicked images and identifying different disease types.

Conclusions
The development of tools for managing crop diseases using modern solutions, such as UAS and deep learning, is vital to help overcome yield losses. Therefore, this study proposed a deep learning-based disease region identification tool, GeoDLS, to notify users about the presence of diseased regions within corn fields. Five DNN architectures, namely VGG16, ResNet50, InceptionV3, DenseNet169, and Xception, were trained to identify diseased, healthy, and background regions from UAS imagery acquired in corn fields using two segmentation techniques, namely tile and SLIC segmentation. The findings and achievements of the study are as follows: (1) DenseNet169 achieved the highest testing accuracy of 100.00% with 1000 × 1000 pixel tile segmentation; (2) SLIC segmentation performed worse than tile segmentation, with testing accuracies of at most 93.75%; (3) a sliding window algorithm quantified the percentage of diseased regions in each UAS image; (4) the trained model was deployed in a web and smartphone application to log and update users about diseased regions in corn fields.
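The sliding window quantification in finding (3) amounts to classifying each window of a frame and reporting the diseased fraction. The sketch below uses a toy threshold classifier as a stand-in for the trained DenseNet169 model; the function name `diseased_percentage` and the toy image are illustrative assumptions.

```python
import numpy as np

def diseased_percentage(img, window, classify):
    """Slide a non-overlapping window over the image and report the
    percentage of windows the classifier labels 'diseased'.
    `classify` stands in for the trained CNN's per-window prediction."""
    h, w = img.shape[:2]
    labels = [classify(img[y:y + window, x:x + window])
              for y in range(0, h - window + 1, window)
              for x in range(0, w - window + 1, window)]
    return 100.0 * labels.count("diseased") / len(labels)

# toy stand-in: call a window "diseased" when its mean intensity is low
toy = np.full((4000, 4000), 255, dtype=np.uint8)
toy[:1000, :] = 0                       # darken the top quarter (4 of 16 windows)
pct = diseased_percentage(
    toy, 1000, lambda win: "diseased" if win.mean() < 128 else "healthy")
print(pct)  # 25.0
```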
Overall, this study developed a deep learning-based tool to help users analyze diseased corn fields using UAS imagery. The tool will be enhanced in the future by allowing the UAS to send acquired images directly to the GeoDLS in real time.

Figure 1. Field 21B located at Purdue University's ACRE farm and the UAS flight path for data collection.

Figure 4. Training and validation accuracy and loss plots for training deep learning models to identify diseased regions using tile segments of size 250 × 250 pixels.

Figure 5. Training and validation accuracy and loss plots for training deep learning models to identify diseased regions using tile segments of size 500 × 500 pixels.

Figure 6. Training and validation accuracy and loss plots for training deep learning models to identify diseased regions using tile segments of size 1000 × 1000 pixels.

Figure 7. Training and validation accuracy and loss plots for training deep learning models to identify diseased regions using SLIC segments with a compactness (m) value of 5.

3.2.2. Superpixels Created Using Compactness (m) Value of 10
Superpixels created using a compactness (m) value of 10 were used to conduct further experiments. With the higher compactness (m) value, the validation accuracy and loss did not closely follow the training accuracy and loss, indicating a higher degree of overfitting. The training and validation accuracy and loss plots are shown in Figure 8.

Figure 8. Training and validation accuracy and loss plots for training deep learning models to identify diseased regions using SLIC segments with a compactness (m) value of 10.

Figure 10. Home page of the GeoDLS web application.

Figure 13. Pinpointing diseased regions on maps using RTK geolocation information.

Figure 14. Email notification corresponding to diseased corn fields sent to users by the GeoDLS.

Table 1. Dataset distribution for training deep learning models for identifying diseased regions using tile segmentation.

Table 3. Testing accuracies and testing loss when a tile size of 250 × 250 pixels was used.

Table 4. Testing accuracies and testing loss when a tile size of 500 × 500 pixels was used.

Table 5. Testing accuracies and testing loss when a tile size of 1000 × 1000 pixels was used.

Table 6. Testing accuracies and testing loss when SLIC segmentation was used to create superpixels with a compactness (m) value of 5.

Table 7. Testing accuracies and testing loss when SLIC segmentation was used to create superpixels with a compactness (m) value of 10.