Article

Cuscuta spp. Segmentation Based on Unmanned Aerial Vehicles (UAVs) and Orthomosaics Using a U-Net Xception-Style Model

by Lucia Gutiérrez-Lazcano 1, César J. Camacho-Bello 1,*, Eduardo Cornejo-Velazquez 2, José Humberto Arroyo-Núñez 1 and Mireya Clavel-Maqueda 2
1 Artificial Intelligence Laboratory, Universidad Politécnica de Tulancingo, Tulancingo 43629, Hidalgo, Mexico
2 Research Center on Technology of Information and Systems, Universidad Autónoma del Estado de Hidalgo, Pachuca 42039, Hidalgo, Mexico
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(17), 4315; https://doi.org/10.3390/rs14174315
Submission received: 24 July 2022 / Revised: 22 August 2022 / Accepted: 30 August 2022 / Published: 1 September 2022

Abstract
Cuscuta spp. is a weed that infests many crops, causing significant losses. Traditional assessment methods and onsite manual measurements are time consuming and labor intensive. The precise identification of Cuscuta spp. offers a promising solution for implementing sustainable farming systems and applying appropriate control tactics. This document comprehensively evaluates a Cuscuta spp. segmentation model based on unmanned aerial vehicle (UAV) images and the U-Net architecture to generate orthomosaics with infected areas for better decision making. The experiments were carried out on an arbol pepper (Capsicum annuum Linnaeus) crop with four separate missions spaced roughly three weeks apart to identify the evolution of the weeds. The study involved tests with different input image sizes, the best of which exceeded 70% mean intersection-over-union (MIoU). In addition, the proposal outperformed DeepLabV3+ in terms of prediction time and segmentation rate. The high segmentation rates allowed approximate quantification of the infested area, ranging from 0.5 to 83 m². The findings of this study show that the U-Net architecture is robust enough to segment pests and provide an overview of the crop.


1. Introduction

Invasive weeds in agricultural fields cause problems that result in decreased yields, reduced product quality, and increased production costs. Weed management in crops aims to control invasive species to a level where their economic impact is reduced. Cuscuta spp. is included in the list of noxious and invasive weeds in many countries [1]. It is a parasitic plant that affects various crops of agricultural and forestry importance. The Centre for Agricultural Bioscience International (CABI) reports that the species affected include forage legumes, herbaceous plants, shrubs, trees, alfalfa, clover, beans, soybean, blueberry, carrot, citrus, tomato, and grasses [2]. Cuscuta spp. is an obligate holoparasitic species that, to complete its life cycle, obtains nutrients, water, and carbohydrates through vascular connections with other plants [3,4,5]. It is a cosmopolitan species that grows in a wide variety of climates and ecosystems on almost all continents [6]. The weed is native to Asia, Africa, and the Mediterranean region of Europe [7]. Cuscuta spp. has more than 170 species distributed worldwide [8], most of which are found in North America in regions with warm and humid climates [9]. In Mexico, more than 60 species have been reported [10] in the states of Baja California Sur, Colima, Mexico City, Guerrero, Hidalgo, Jalisco, Michoacan, Morelos, Oaxaca, Puebla, Querétaro, San Luis Potosi, Sonora, Tamaulipas, and Veracruz [11]. It spreads through the dispersal of its seeds and grows rapidly, causing massive damage to crop fields. It can cause losses of 50–75% of host crop yields [12]. Losses of 87% have been reported in Cicer arietinum crops [13], 60–65% in Capsicum frutescens, 31–34% in Vigna mungo, 60–65% in Guizotia abyssinica, and 87% in Lens culinaris [14]. Control of infested crops is difficult because Cuscuta spp. grows by approximately 7 centimeters daily, forming a dense and thick layer on the host crop. The invaded crop declines until it dies as its nutrients are absorbed. In addition, the weed blocks sunlight and reduces photosynthesis [8].
The classification of Cuscuta spp. is a difficult task, since the weed must be recognized and identified early, mainly to prevent its spread. Farmers generally do this through direct observation at stages where the expansion is visible to the human eye. Traditional manual weed identification is labor intensive and time consuming. The impact of an infestation is reflected in the development and production of the crop, so it is necessary to design innovative alternatives for early identification and to strengthen crop management by farmers.
Currently, computer vision technology solves problems in various fields of engineering related to agriculture, such as remote-sensing-based analysis for monitoring [15], high-accuracy multi-camera reconstruction for point cloud correction [16], multi-target recognition and positioning [17], and automated fruit picking [18]. Regarding invasive species, a series of studies based on convolutional neural networks (CNNs) has been carried out to segment crop weeds in color images [19,20].
On the other hand, UAVs that monitor crops offer great possibilities for acquiring field data in an easy, fast, and cost-effective way compared with other methods [21]. Among the most popular applications of UAVs in agriculture are weed mapping [22,23], automatic identification and monitoring of plant diseases [24], and early-stage detection [25]. Since 2019, there has also been growing interest in segmenting UAV-acquired images with U-Net. These applications mainly focus on segmenting forest areas [26,27], urban areas [28,29], coastal areas [30], and mining areas [31]. Unfortunately, individual UAV images do not give an overview of an area for locating points of interest. Few works have explored the generation of orthomosaics because of the image resolution required and the computational difficulty of segmenting large images with U-Net.
In this work, we explore the generation of orthomosaics from images with Cuscuta spp. segmented by the U-Net Xception-style architecture, using reduced input images to speed up training and identify the affected areas. The proposal is compared with the DeepLabV3+ semantic segmentation model [32,33], which performs well among latest-generation models [34]. The experiments were carried out on an arbol pepper (Capsicum annuum Linnaeus) crop with four separate missions spaced roughly three weeks apart to identify the evolution of the Cuscuta spp. The proposed methodology helps farmers access emerging technologies to promote intelligent aerial monitoring systems and obtain a general overview of the infected crop.
The document is organized as follows. Section 2 briefly describes the location of the case studies, data acquisition, dataset generation, orthomosaics, and image segmentation with U-Net. Section 3 presents the segmentation results obtained with the U-Net architecture and the infestation area of the four case studies. Section 4 discusses the advantages of the proposed method. Finally, we offer the conclusions of the investigation.

2. Materials and Methods

2.1. Study Area

The experimental site of the study was located in Tezontepec de Aldama, which belongs to the Valle del Mezquital in the state of Hidalgo, Mexico, as shown in Figure 1. The analysis was carried out in a field with arbol pepper plants during the spring production cycle. The experimental field covered a rectangular area of 1.4 hectares of seedlings with a similar density throughout the plot, distributed in equidistant double rows. In addition, flood irrigation was used periodically to provide water for the development of the plants according to traditional farming practices. The experiment was carried out from July to September 2021, beginning in week 17 and concluding in week 32 of the growth and flowering of the arbol pepper (Capsicum annuum L.).

2.2. Data Acquisition

In this study, the low-cost DJI Mavic Pro platform (DJI Innovations, Shenzhen, China) was used, which is a UAV with applications in various fields, including the agricultural sector. The UAV had a CMOS camera capable of taking 12-megapixel (4000 × 3000 pixels) photographs with a 78.8-degree field-of-view (FOV) lens, 28 mm focal length, f/2.2 aperture, less than 1.5% distortion, and a 1.6 ft focus range. The drone carried a 3830 mAh intelligent LiPo 3S battery, providing approximately 20 min of flight time. In addition, it had a global positioning system (GPS) that provided precise positioning data with real-time corrections, which were stored in the images' metadata. On the other hand, the ground station used a 2.4 GHz remote control with a range of 7 km. The flight was carried out through automatic route planning with the Pix4D Capture software from a ground control station (GCS) connected to the remote control to create and follow the flight pattern. Flight plans were made at an altitude of 30 m with the same camera parameter settings and winds below 10 m/s. The trajectory was flown over the crop field around 1:00 p.m. to maintain similar lighting conditions. Four field datasets of Capsicum annuum L. with about 244 images were collected between weeks 17 and 32 of crop development (14 July 2021, 8 August 2021, 29 August 2021, and 11 September 2021). Figure 2 presents some images captured by the UAV.

2.3. Generation of Datasets

Training requires a large set of images. We took the images collected on 29 August and 11 September as a training set. The samples from these locations were labeled to create segmentation masks. We obtained 200 masks of Cuscuta spp. at 4000 × 3000 pixels, which were generated and manually annotated using the “Labelme” package [35]. The software masks the pixels belonging to the weed in each image of the flight mission. The reduced number of samples with Cuscuta spp. required more data to improve the training set's diversity, which was obtained by applying transformations such as image rotation and flipping. The data augmentation was intended to generate new areas with Cuscuta spp. at different points so that the model would learn to identify the weed at various positions in the image. In addition, it helped address overfitting and insufficient data, making the model more robust and allowing it to perform better. Table 1 shows the transformations used to augment the images of the training set. The data augmentation produced 1000 images, each with a hand-drawn binary mask denoting the area infested with Cuscuta spp. The collection contained training and validation data, with 900 locations represented in the training set and 100 locations represented in the validation set.
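As an illustration of this kind of paired augmentation (the exact transformations and library used by the authors are not specified), a minimal NumPy sketch that applies identical flips and rotations to an image and its mask could look as follows:

```python
import numpy as np

def augment_pair(image: np.ndarray, mask: np.ndarray):
    """Yield flipped and rotated copies of an image together with its mask.

    A minimal sketch of the augmentation described above; the exact set of
    transformations used in the study is assumed, not documented in detail.
    """
    yield image, mask                              # original sample
    yield np.fliplr(image), np.fliplr(mask)        # horizontal flip
    yield np.flipud(image), np.flipud(mask)        # vertical flip
    for k in (1, 2, 3):                            # 90°, 180°, 270° rotations
        yield np.rot90(image, k), np.rot90(mask, k)
```

Applying the same transform to the image and its mask keeps the pixel-wise labels aligned, which is the property the training set relies on.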

2.4. Orthomosaic

An orthomosaic corresponds to a set of images that have areas of overlap between them and that are joined and combined into a single image to expand the range of vision of the scene [36]. Orthomosaics are used to obtain geographic information and give a general perspective of a study area. Currently, it is easy to generate orthomosaics with specialized software. WebODM is open-source software developed by OpenDroneMap that is designed to generate maps, point clouds, georeferenced digital elevation models, and 3D models from aerial imagery. WebODM extracts the geolocation and camera information contained in the Exif metadata of each image in order to compare the images while taking into account the effects of the radial distortion of the sensor. As a result of this processing for the extraction and comparison of features from neighboring images, a digital elevation model (DEM) and a georeferenced orthomosaic are generated from the dense point cloud and the textured mesh, respectively [37]. Figure 3 shows the resulting orthomosaics of the case study.
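For illustration, the Exif geolocation that such tools rely on can be inspected directly in Python. The snippet below is a sketch using Pillow; it is not part of the authors' workflow, the file name is hypothetical, and `Exif.get_ifd` requires a reasonably recent Pillow version.

```python
from PIL import Image, ExifTags

def read_gps_ifd(path: str) -> dict:
    """Return the GPS Exif fields (latitude, longitude, altitude, ...) of an image."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the standard GPSInfo tag
    return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Hypothetical usage with a drone photograph:
# print(read_gps_ifd("DJI_0001.JPG"))
```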

2.5. Image Segmentation with U-Net Xception-Style

U-Net is an efficient and easy-to-use architecture that is widely used for semantic segmentation tasks. It was first proposed by Olaf Ronneberger et al. [38] for medical image segmentation. The main idea of U-Net is to combine the feature maps produced during downsampling with those of the upsampling path so that spatial detail is preserved for segmentation, and several fully convolutional variants of this strategy have since been proposed. The Xception architecture, in turn, is a linear stack of depthwise-separable convolution layers with residual connections [39], i.e., a spatial convolution is performed independently over each channel of an input, followed by a pointwise convolution; this makes the architecture very easy to define and modify. The U-Net Xception-style architecture is therefore divided into an encoder and a decoder [40]. The encoder repeatedly downsamples through multiple depthwise-separable convolution layers to obtain image features at different levels. The decoder performs multilayer deconvolution on the top-level feature map to restore it to the original input image size and completes the task of end-to-end semantic segmentation, as shown in Figure 4.
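A condensed Keras sketch of this encoder–decoder, following the publicly available example cited as [40], is shown below. The filter counts and default input size are those of that example and are assumptions rather than the exact configuration reported in this study.

```python
from tensorflow import keras
from tensorflow.keras import layers

def unet_xception_style(img_size=(160, 160), num_classes=2):
    """U-Net Xception-style sketch: separable-convolution encoder and
    transposed-convolution decoder, both with residual connections."""
    inputs = keras.Input(shape=img_size + (3,))
    x = layers.Conv2D(32, 3, strides=2, padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    previous = x

    # Encoder: downsampling blocks with depthwise-separable convolutions
    for filters in (64, 128, 256):
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)
        residual = layers.Conv2D(filters, 1, strides=2, padding="same")(previous)
        x = layers.add([x, residual])        # residual connection
        previous = x

    # Decoder: upsampling blocks with transposed convolutions
    for filters in (256, 128, 64, 32):
        x = layers.Activation("relu")(x)
        x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.UpSampling2D(2)(x)
        residual = layers.UpSampling2D(2)(previous)
        residual = layers.Conv2D(filters, 1, padding="same")(residual)
        x = layers.add([x, residual])        # residual connection
        previous = x

    # Per-pixel classification layer (background vs. Cuscuta spp.)
    outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)
    return keras.Model(inputs, outputs)
```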

2.6. Methods

This proposal seeks to give farmers an overview of pests in order to implement sustainable farming systems, apply appropriate control tactics, and facilitate decision making in reasonable computation times. The model is divided into six stages:
  • Image acquisition by the UAV.
  • Reduction of the image size.
  • Prediction of a mask with the U-Net Xception-style model.
  • Enlargement of the mask to the original size.
  • Segmentation of the infected areas in blue.
  • Generation of an orthomosaic from the set of images.
Figure 5 shows a diagram of the main steps of the proposed method. In this study, the network was adapted for the image segmentation of Cuscuta spp. in arbol pepper crops based on the U-Net Xception-style architecture. The proposal aims to reduce computation by reducing the input images for the model. The minimum input size for the model is 64 × 64 due to the architecture's design, and it must increase in multiples of 32. The following subsections briefly describe the implementation and results of the proposal.
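To make the intermediate stages concrete, the following sketch covers steps 2 through 5 for a single UAV image. The array conventions (uint8 RGB input, class 1 as Cuscuta spp.) and the specific TensorFlow utilities are assumptions for illustration, not details reported in the text.

```python
import numpy as np
import tensorflow as tf

def segment_full_image(model, image: np.ndarray, model_size=(320, 320)) -> np.ndarray:
    """Reduce the image, predict the mask, enlarge the mask, and paint the
    infected pixels blue. Sketch only; assumes a uint8 RGB image and that
    class 1 corresponds to Cuscuta spp."""
    h, w = image.shape[:2]
    small = tf.image.resize(image, model_size) / 255.0            # step 2: reduce size
    probs = model.predict(small[tf.newaxis, ...], verbose=0)[0]   # step 3: predict mask
    labels = tf.cast(tf.argmax(probs, axis=-1), tf.uint8)[..., tf.newaxis]
    labels = tf.image.resize(labels, (h, w), method="nearest")    # step 4: enlarge mask
    weed = tf.squeeze(labels, axis=-1).numpy().astype(bool)
    overlay = image.copy()
    overlay[weed] = (0, 0, 255)                                   # step 5: mark in blue
    return overlay  # step 6: the overlays are then mosaicked externally (e.g., WebODM)
```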
Usually, the performance of a segmentation model is expressed in terms of the intersection-over-union (IoU). This metric helps to determine the degree of overlap between the ground truth and the prediction. The IoU ranges from 0 to 1, and is defined as
IoU = \frac{\text{Area of Intersection}}{\text{Ground Truth Area} + \text{Predicted Area} - \text{Area of Intersection}}
On the other hand, the MIoU quantifies a set of segmented images and is defined as the mean value of the IoU over all label classes.
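A minimal NumPy sketch of these two metrics for the binary masks used here (the helper names are ours, not from the original implementation):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union of two binary masks, following the equation above."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return 1.0 if union == 0 else float(intersection / union)

def mean_iou(pred_labels: np.ndarray, true_labels: np.ndarray, num_classes: int = 2) -> float:
    """MIoU: the IoU averaged over all label classes (background and weed here)."""
    return float(np.mean([iou(pred_labels == c, true_labels == c)
                          for c in range(num_classes)]))
```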

3. Results

This section shows the main characteristics used to evaluate the models and the training results. In addition, the performance is compared with that of DeepLabV3+ in terms of prediction time and segmentation. Finally, we show the orthomosaics of the case study, the infected area, and the infestation rate.

3.1. U-Net Xception-Style Training

The neural network model was trained in Python 3.8 using the PyCharm IDE, and the TensorFlow 2.5 library was used to build the U-Net Xception-style network. The code ran on a 64-bit Windows 10 system with an Intel Core i7-9700 CPU, 32 GB of memory, and an NVIDIA GeForce GTX 1660 SUPER (6 GB) GPU. In training, the root-mean-square propagation (RMSprop) optimization algorithm was used, and the loss function was cross-entropy, which is commonly used in the field of segmentation. The learning rate was set to 0.001, the mini-batch size to 16, and a total of 100 epochs were used. The model with the highest accuracy on the validation dataset was selected as the final model and applied to the test dataset. However, the training results do not reflect the error introduced by enlarging the mask to segment the images taken by the UAV. Table 2 shows the training results on the test dataset.
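In Keras terms, this training setup could be expressed as in the sketch below. The function and variable names are placeholders for the network and datasets built in the previous sections, and the sparse categorical cross-entropy flavor of the loss is an assumption.

```python
from tensorflow import keras

def train(model, x_train, y_train, x_val, y_val):
    """Compile and fit with the settings reported above; keeps the best model."""
    model.compile(
        optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
        loss="sparse_categorical_crossentropy",  # assumed cross-entropy variant
        metrics=["accuracy"],
    )
    checkpoint = keras.callbacks.ModelCheckpoint(
        "cuscuta_unet.h5", monitor="val_accuracy", save_best_only=True
    )
    return model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     batch_size=16, epochs=100, callbacks=[checkpoint])
```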

3.2. Model Evaluation

Accuracy was the first control point for evaluating the model; however, it does not determine the degree of overlap of the prediction with the segmented image. The MIoU is the most widely used metric for evaluating datasets for semantic segmentation [34]. It is also essential to consider the prediction time of each image when developing applications. Table 3 shows the results of the comparison with DeepLabV3+ in terms of time and segmentation rate.
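As a rough illustration of how per-image prediction times like those in Table 3 can be measured (a naive wall-clock approach; the authors' exact timing procedure is not specified):

```python
import time
import numpy as np

def mean_prediction_time(model, images: np.ndarray) -> float:
    """Average per-image prediction time; results depend on hardware and batching."""
    model.predict(images[:1], verbose=0)       # warm-up call, excluded from timing
    start = time.perf_counter()
    model.predict(images, verbose=0)
    return (time.perf_counter() - start) / len(images)
```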

3.3. Cuscuta spp. Segmentation in the Cultivation of Arbol Peppers

The high segmentation rate allowed the accurate identification of the areas of Cuscuta spp. in the images acquired by the UAV. In addition, it facilitated the generation of orthophotos by superimposing the infected areas. The model with the lowest segmentation error with the enlarged mask on the validation dataset was selected as the final model and applied to the test dataset. Figure 6 shows the results of the generation of orthomosaics; the best results were compared against the manually segmented image with 4000 × 3000 pixel masks.
The best reductions of the input images were able to identify the same areas with Cuscuta spp. as the manually segmented ones. The proposal aimed to identify the infestation in the early stages to facilitate its removal; therefore, we sought to find the invasive weeds in the samples collected. Intuitively, the model obtained by comparing segmentation errors was able to identify the parts where weeds were hosted. We used the model input size of 320 × 320, which had the highest MIoU, as shown in Table 3. Figure 7 shows the results of the four case studies; the infected areas are shown in blue.
In our case study, the crop area of 243 m × 77 m corresponded to 0.0025 m² per pixel in the orthophoto. The results made it easy to determine the percentage of affectation of the entire crop according to the infested area, as shown in Table 4.
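As a worked example of this conversion (only the pixel-to-area step is shown; the reference area used for the percentage is left to the reader, since it is not reproduced here):

```python
# Converting segmented pixel counts to infested area, as in Table 4.
PIXEL_AREA_M2 = 0.0025  # area of one orthophoto pixel, from the resolution above

def infested_area_m2(weed_pixels: int) -> float:
    """Infested area in square meters for a given count of weed pixels."""
    return weed_pixels * PIXEL_AREA_M2

print(infested_area_m2(17_741))  # ≈ 44.35 m², matching the 8 August 2021 row
```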

4. Discussion

The end-to-end nature of the U-Net Xception-style architecture allowed us to focus on the input and output of the task without having to extract complex features from the input data. Table 2 shows the average performance of the model for the segmentation of Cuscuta spp. and the generation of the orthomosaics on the test dataset. Intuitively, the model obtained by using the datasets was able to segment the infected crop (blue zone) in the orthomosaic. The best segmentation rate in Table 3 was above 70%, which confirms this point. The correct segmentation allowed the determination of the area of infestation by Cuscuta spp. and the monitoring of the evolution of the weeds, as shown in Table 4. The proposal outperformed DeepLabV3+ in terms of time and segmentation at all test sizes. In addition, DeepLabV3+ was not as capable of generating orthomosaics with infested areas comparable to the manual segmentation, unlike the proposal, as shown in Figure 6. DeepLabV3+ is a model designed for multiclass segmentation rather than binary classes, so it requires changes in its design and fine-tuning of parameters to improve its performance. On the other hand, the present study only needed to pay attention to the input of the image data and the output of the evaluation results, even when there were differences in the image data, such as input size changes.
The contribution of this work is the exploration of the ability to segment Cuscuta spp. with reduced input images to improve computation times. The results show that, with input sizes of 320 × 320, 160 × 160, and 96 × 96, the same regions of the validation data could be identified. It is also interesting that, even when the input image is reduced to 96 × 96, it contains enough information on Cuscuta spp. to identify it in the crop in the same way as with manual segmentation. Moreover, the method proposed in this study is suitable for the natural environment, is highly robust to different external factors, and has reasonable computation times.
The identification and location of invasive weeds help farmers make decisions to mitigate and minimize impacts on crop productivity. Therefore, there is a great need to compare the results of the described proposal with field observations and clarify the differences between expectations and reality for the early detection of weeds. Unfortunately, the survival and expansion mechanism of Cuscuta spp. is based on a fissure induced by the weed in the stem of the host plant, in this case, the arbol pepper plant. Farmers generally use a traditional method based on cutting and removing infected plants, which are then burned, while infested areas are quarantined [41]. This prevents the exact, on-site quantification of the presented results. However, we believe that the U-Net Xception-style architecture's high accuracy allows us to give results that are close to reality.

5. Conclusions

Smallholder farming operations are exploring the benefits of employing UAVs and emerging technologies to improve crop sustainability. Using a DJI Mavic UAV platform equipped with a high-resolution digital camera, they can acquire aerial photos of crops and use them to locate invasive weeds such as Cuscuta spp. This proposal demonstrates that the U-Net Xception-style model effectively detects the presence of Cuscuta spp. The results show that there is no linear relationship between the input image size and the segmentation performance, which is an interesting subject for future work. In our case study, we presented how a low-cost tool was developed to detect invasive weeds in their early stages. In addition, this allowed the generation of an orthomosaic that gives a general view of the entire crop and the invasion's progress, which helped label the specific coordinates where the highest concentration of weeds was found. This finding is promising given the challenging conditions of smallholder farming systems. However, the proposal has the following limitations:
  • The analysis considered a single arbol pepper crop, which caused the overfitting of the trained model.
  • There is still no exact quantification of Cuscuta spp. that allows an objective comparison.
  • There was no substantial change in the prediction times by decreasing the size of the input images; this only significantly reduced the training times.
  • The proposed model was adapted exclusively for Cuscuta spp. due to its characteristic yellow color.
We know that there is a long way to go with this research in order to improve our results; however, this study provides an essential methodological reference for monitoring research on Cuscuta spp. in arbol pepper crops, thus supporting decision making and facilitating its elimination.

Author Contributions

Conceptualization, C.J.C.-B. and E.C.-V.; methodology, C.J.C.-B. and L.G.-L.; software, L.G.-L. and M.C.-M.; validation, C.J.C.-B., J.H.A.-N. and E.C.-V.; writing—original draft preparation, L.G.-L. and J.H.A.-N.; writing—review and editing, L.G.-L., C.J.C.-B., E.C.-V., J.H.A.-N. and M.C.-M.; visualization, L.G.-L. and M.C.-M.; project administration, C.J.C.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

We extend our gratitude to the reviewers for their valuable suggestions and to Beatriz Flores Vargas for her comments on grammar and writing.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

List of important symbols and abbreviations:
UAV: Unmanned Aerial Vehicle
MIoU: Mean Intersection-over-Union
CNN: Convolutional Neural Network
FOV: Field of View
GPS: Global Positioning System
GCS: Ground Control Station
DEM: Digital Elevation Model
IoU: Intersection-over-Union

References

  1. Costea, M.; García-Ruiz, I.; Dockstader, K.; Stefanović, S. More problems despite bigger flowers: Systematics of Cuscuta tinctoria clade (subgenus Grammica, Convolvulaceae) with description of six new species. Syst. Bot. 2013, 4, 1160–1187. [Google Scholar] [CrossRef]
  2. CABI. Datasheets Cuscuta. 2022. Available online: https://www.cabi.org/isc/search/index?q=cuscuta (accessed on 19 July 2022).
  3. Costea, M.; García, M.A.; Stefanović, S. A phylogenetically based infrageneric classification of the parasitic plant genus Cuscuta (Dodder, Convolvulaceae). Syst. Bot. 2015, 1, 269–285. [Google Scholar] [CrossRef]
  4. Ahmadi, K.; Omidi, H.; Dehaghi, M.A. A Review on the Botanical, Phytochemical and Pharmacological Characteristics of Cuscuta spp. In Parasitic Plants; IntechOpen: London, UK, 2022. [Google Scholar]
  5. Le, Q.V.; Tennakoon, K.U.; Metali, F.; Lim, L.B.; Bolin, J.F. Impact of Cuscuta australis infection on the photosynthesis of the invasive host, Mikania micrantha, under drought condition. Weed Biol. Manag. 2015, 15, 138–146. [Google Scholar]
  6. Stefanović, S.; Kuzmina, M.; Costea, M. Delimitation of major lineages within Cuscuta subgenus Grammica (convolvulaceae) using plastid and nuclear DNA sequences. Am. J. Bot. 2007, 4, 568–589. [Google Scholar]
  7. Iqbal, M.; Hussain, M.; Abid, A.; Ali, M.; Nawaz, R.; Qaqar, M.; Asghar, M.; Iqbal, Z. A review: Cuscuta (Cuscuta planifora) major weed threat in Punjab–Pakistan. Int. J. Adv. Res. Biol. Sci. 2014, 4, 42–46. [Google Scholar]
  8. Kogan, M.; Lanini, W. Biology and management of Cuscuta in crops. Cienc. E Investig. Agrar. Rev. Latinoam. Cienc. Agric. 2005, 32, 165–180. [Google Scholar]
  9. Dawson, J.H.; Musselman, L.; Wolswinkel, P.; Dörr, I. Biology and control of Cuscuta. Rev. Weed Sci. 1994, 6, 265–317. [Google Scholar]
  10. Carranza, E. Flora del Bajío y de Regiones Adyacentes; Instituto de Ecología: Mexico City, Mexico, 2008; Volume 155. [Google Scholar]
  11. Ríos, V.; Luis, J.; García, E. Catálogo de Malezas de México; Fondo de Cultura Económico: Mexico City, Mexico, 1998. [Google Scholar]
  12. Aly, R.; Dubey, N.K. Weed management for parasitic weeds. In Recent Advances in Weed Management; Springer: Berlin/Heidelberg, Germany, 2014; pp. 315–345. [Google Scholar]
  13. Kannan, C.; Kumar, B.; Aditi, P.; Gharde, Y. Effect of native Trichoderma viride and Pseudomonas fluorescens on the development of Cuscuta campestris on chickpea, Cicer arietinum. J. Appl. Nat. Sci. 2014, 2, 844–851. [Google Scholar] [CrossRef]
  14. Mishra, J. Biology and management of Cuscuta species. Indian J. Weed Sci. 2009, 41, 1–11. [Google Scholar]
  15. Hazaymeh, K.; Sahwan, W.; Al Shogoor, S.; Schütt, B. A Remote Sensing-Based Analysis of the Impact of Syrian Crisis on Agricultural Land Abandonment in Yarmouk River Basin. Sensors 2022, 22, 3931. [Google Scholar]
  16. Chen, M.; Tang, Y.; Zou, X.; Huang, K.; Li, L.; He, Y. High-accuracy multi-camera reconstruction enhanced by adaptive point cloud correction algorithm. Opt. Lasers Eng. 2019, 122, 170–183. [Google Scholar] [CrossRef]
  17. Wu, F.; Duan, J.; Chen, S.; Ye, Y.; Ai, P.; Yang, Z. Multi-target recognition of bananas and automatic positioning for the inflorescence axis cutting point. Front. Plant Sci. 2021, 12, 705021. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, H.; Lin, Y.; Xu, X.; Chen, Z.; Wu, Z.; Tang, Y. A Study on Long-Close Distance Coordination Control Strategy for Litchi Picking. Agronomy 2022, 12, 1520. [Google Scholar] [CrossRef]
  19. Yu, J.; Sharpe, S.M.; Schumann, A.W.; Boyd, N.S. Deep learning for image-based weed detection in turfgrass. Eur. J. Agron. 2019, 104, 78–84. [Google Scholar] [CrossRef]
  20. You, J.; Liu, W.; Lee, J. A DNN-based semantic segmentation for detecting weed and crop. Comput. Electron. Agric. 2020, 178, 105750. [Google Scholar] [CrossRef]
  21. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef]
  22. Selvi, C.T.; Subramanian, R.S.; Ramachandran, R. Weed Detection in Agricultural fields using Deep Learning Process. In Proceedings of the 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 19–20 March 2021; Volume 1, pp. 1470–1473. [Google Scholar]
  23. Abouzahir, S.; Sadik, M.; Sabir, E. Lightweight Computer Vision System for Automated Weed Mapping. In Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC), Virtual, 26–29 January 2022. [Google Scholar]
  24. Neupane, K.; Baysal-Gurel, F. Automatic Identification and Monitoring of Plant Diseases Using Unmanned Aerial Vehicles: A Review. Remote Sens. 2021, 13, 3841. [Google Scholar] [CrossRef]
  25. Ahmadi, P.; Mansor, S.; Farjad, B.; Ghaderpour, E. Unmanned Aerial Vehicle (UAV)-Based Remote Sensing for Early-Stage Detection of Ganoderma. Remote Sens. 2022, 14, 1239. [Google Scholar] [CrossRef]
  26. Wagner, F.H.; Sanchez, A.; Tarabalka, Y.; Lotte, R.G.; Ferreira, M.P.; Aidar, M.P.; Gloor, E.; Phillips, O.L.; Aragao, L.E. Using the U-net convolutional network to map forest types and disturbance in the Atlantic rainforest with very high resolution images. Remote Sens. Ecol. Conserv. 2019, 5, 360–375. [Google Scholar] [CrossRef]
  27. Reder, S.; Mund, J.P.; Albert, N.; Waßermann, L.; Miranda, L. Detection of Windthrown Tree Stems on UAV-Orthomosaics Using U-Net Convolutional Networks. Remote Sens. 2021, 14, 75. [Google Scholar] [CrossRef]
  28. Yao, X.; Yang, H.; Wu, Y.; Wu, P.; Wang, B.; Zhou, X.; Wang, S. Land use classification of the deep convolutional neural network method reducing the loss of spatial features. Sensors 2019, 19, 2792. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, P.; Ke, Y.; Zhang, Z.; Wang, M.; Li, P.; Zhang, S. Urban land use and land cover classification using novel deep learning models based on high spatial resolution satellite imagery. Sensors 2018, 18, 3717. [Google Scholar] [CrossRef] [PubMed]
  30. Li, R.; Liu, W.; Yang, L.; Sun, S.; Hu, W.; Zhang, F.; Li, W. DeepUNet: A deep fully convolutional network for pixel-level sea-land segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3954–3962. [Google Scholar] [CrossRef]
  31. Giang, T.L.; Dang, K.B.; Le, Q.T.; Nguyen, V.G.; Tong, S.S.; Pham, V.M. U-Net convolutional networks for mining land cover classification based on high-resolution UAV imagery. IEEE Access 2020, 8, 186257–186273. [Google Scholar] [CrossRef]
  32. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  33. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef]
  34. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  35. Wada, K. Labelme: Image Polygonal Annotation with Python. 2016. Available online: https://github.com/wkentaro/labelme (accessed on 19 July 2022).
  36. Cheng, Y.; Xue, D.; Li, Y. A fast mosaic approach for remote sensing images. In Proceedings of the 2007 International Conference on Mechatronics and Automation, Harbin, China, 5–8 August 2007; pp. 2009–2013. [Google Scholar]
  37. Lam, O.H.Y.; Dogotari, M.; Prüm, M.; Vithlani, H.N.; Roers, C.; Melville, B.; Zimmer, F.; Becker, R. An open source workflow for weed mapping in native grassland using unmanned aerial vehicle: Using Rumex obtusifolius as a case study. Eur. J. Remote Sens. 2021, 54, 71–88. [Google Scholar] [CrossRef]
  38. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  39. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  40. Chollet, F. Image Segmentation with a U-Net-Like Architecture. 2019. Available online: https://keras.io/examples/vision/oxford_pets_image_segmentation (accessed on 19 July 2022).
  41. Winston, R.; Schwarzländer, M.; Hinz, H.L.; Day, M.D.; Cock, M.J.; Julien, M.; Julien, M.H. Biological Control of Weeds: A World Catalogue of Agents and Their Target Weeds; USDA Forest Service, Forest Health Technology Enterprise Team: Morgantown, WV, USA, 2014.
Figure 1. Schematic diagram of the study area.
Figure 2. Images of the study area captured with the UAV.
Figure 3. Orthomosaics from test missions: (a) 14 July 2021, (b) 8 August 2021, (c) 29 August 2021, and (d) 11 September 2021.
Figure 4. U-Net Xception-style model.
Figure 5. Proposed method for Cuscuta spp. identification.
Figure 6. Generation of segmented orthomosaics with (a) manual segmentation, (b) the U-Net Xception-style model, and (c) DeepLabV3+.
Figure 7. Segmentation results of Cuscuta spp. in the crop with input images of 320 × 320 : (a) 14 July 2021, (b) 8 August 2021, (c) 29 August 2021, and (d) 11 September 2021.
Table 1. Image operations for data augmentation.
Columns: Original Image | Flip | Rotate | Flip. (The example image tiles are not reproduced here.)
Table 2. Training results with different input sizes.
Size | Training Time | Loss | Accuracy
64 × 64 | 2 s (43 ms/step) | 0.0375 | 98.87%
96 × 96 | 3 s (58 ms/step) | 0.0298 | 98.99%
128 × 128 | 4 s (83 ms/step) | 0.0260 | 99.02%
160 × 160 | 5 s (122 ms/step) | 0.0240 | 99.10%
192 × 192 | 7 s (158 ms/step) | 0.0242 | 99.10%
224 × 224 | 9 s (204 ms/step) | 0.0249 | 99.10%
256 × 256 | 11 s (260 ms/step) | 0.0260 | 99.13%
288 × 288 | 15 s (340 ms/step) | 0.0225 | 99.18%
320 × 320 | 19 s (433 ms/step) | 0.0264 | 99.17%
352 × 352 | 23 s (533 ms/step) | 0.0257 | 99.13%
Table 3. Comparison with different input sizes.
Size | Time for U-Net Xception-Style | Time for DeepLabV3+ | MIoU for U-Net Xception-Style | MIoU for DeepLabV3+
64 × 64 | 0.03861 s | 0.04080 s | 47.67% | 42.92%
96 × 96 | 0.03910 s | 0.04145 s | 61.27% | 35.43%
128 × 128 | 0.03959 s | 0.04249 s | 56.11% | 47.84%
160 × 160 | 0.04038 s | 0.04453 s | 63.59% | 39.65%
192 × 192 | 0.04000 s | 0.04505 s | 60.22% | 43.83%
224 × 224 | 0.04075 s | 0.04667 s | 57.48% | 35.71%
256 × 256 | 0.04114 s | 0.04667 s | 52.34% | 42.01%
288 × 288 | 0.04261 s | 0.04961 s | 59.56% | 56.23%
320 × 320 | 0.04371 s | 0.05149 s | 71.20% | 52.07%
352 × 352 | 0.04505 s | 0.05343 s | 60.28% | 44.77%
Table 4. Infested areas in the study cases.
Date | Pixels | Cuscuta spp. (m²) | Cuscuta spp. (%)
14 July 2021 | 182 | 0.46 | 0.003%
8 August 2021 | 17,741 | 44.35 | 0.303%
29 August 2021 | 28,376 | 70.94 | 0.485%
11 September 2021 | 33,218 | 83.05 | 0.568%