Article

UAVs and Machine Learning Revolutionising Invasive Grass and Vegetation Surveys in Remote Arid Lands

1
Institute for Future Environments, Robotics and Autonomous Systems, Queensland University of Technology (QUT), 2 George St, Brisbane City QLD 4000, Australia
2
School of Mathematical Sciences, ARC Centre of Excellence for Mathematical & Statistical Frontiers (ACEMS), Queensland University of Technology (QUT), 2 George St, Brisbane City QLD 4000, Australia
3
Environment and Sustainability Institute, University of Exeter, Penryn, Cornwall TR10 9FE, UK
*
Author to whom correspondence should be addressed.
Sensors 2018, 18(2), 605; https://doi.org/10.3390/s18020605
Submission received: 30 November 2017 / Revised: 25 January 2018 / Accepted: 11 February 2018 / Published: 16 February 2018
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications)

Abstract

The monitoring of invasive grasses and vegetation in remote areas is challenging, costly, and, on the ground, sometimes dangerous. Satellite and manned aircraft surveys can assist, but their use may be limited by ground sampling resolution or cloud cover. Straightforward and accurate surveillance methods are needed to quantify rates of grass invasion, produce appropriate vegetation tracking reports, and apply optimal control methods. This paper presents a pipeline process to detect and generate a pixel-wise segmentation of invasive grasses, using buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) as examples. The process integrates unmanned aerial vehicles (UAVs), also commonly known as drones, high-resolution red, green, blue colour model (RGB) cameras, and a data processing approach based on machine learning algorithms. The methods are illustrated with data acquired in Cape Range National Park, Western Australia (WA), Australia, orthorectified in Agisoft PhotoScan Pro, and processed in the Python programming language with the scikit-learn and eXtreme Gradient Boosting (XGBoost) libraries. In total, 342,626 samples were extracted from the obtained data set and labelled into six classes. Segmentation results provided an individual detection rate of 97% for buffel grass and 96% for spinifex, with a global multiclass pixel-wise detection rate of 97%. The obtained results were robust to illumination changes, object rotation, occlusion, background clutter, and variation in floral density.

1. Introduction

Over recent decades, invasive grasses have caused substantial losses to native ecosystems around the world. Governmental, scientific, and community efforts to monitor and control these and other introduced plant species have been extremely challenging due to restricted and difficult access to remote areas, expensive operational costs, and, in some cases, hazardous data collection campaigns [1]. In Australia, the U.S., South Africa, and other parts of the world, introduced grasses have flourished in arid landscapes due to their tenacity under heat, heavy grazing, and drought [2,3]. Moreover, they have been fostered by farmers because of the economic benefits that they bring through land rehabilitation and livestock production. However, many of these plant species have invaded some of the wetter and more fertile parts of the landscape and affected the survival of native plant and animal populations [4,5,6]. As a result, these species have now been catalogued as invasive plants or weeds. An increasing body of research focused on assessing the biodiversity effects of invasive grasses shows that their expansion rates are likely to reach new highs under climate change [6,7,8,9,10,11]. Straightforward, efficient, and accurate surveillance methods are required to quantify expansion rates of invasive grasses and to apply reliable and efficient control methods.
Among present efforts to monitor invasive grasses and other vegetation, different investigations have developed diverse solutions, using various image sensors and detection methods to meet a range of needs. Satellite and manned aircraft imagery was previously used to map invasive grass infestations. More recently, advances in unmanned aerial vehicle (UAV) design and path planning [12,13] have seen an increased application of remote sensing for ecological assessments and biosecurity applications [14,15,16,17]. Research by Olsson et al. [18], for instance, demonstrated the advantage of hyperspectral imagery over satellite imagery for invasive grass detection. A feasibility study of sensing technology by Marshall et al. [19] illustrates the potential of high-resolution aerial photography in the red, green, blue (RGB) colour model at a cm/pixel scale for regional mapping of buffel grass infestations in arid landscapes, with overall detection rates surpassing those of multi- and hyperspectral technologies.
Weed mapping using image sensors capable of sensing multiple spectral bands is also an active field of research. Alexandridis et al. [20], for instance, developed an approach integrating UAVs and multispectral imagery for weed mapping, achieving detection rates of up to 96%. Moreover, Blaschke et al. [21] and Torres-Sánchez et al. [22] showed the use of Geographic Object-Based Image Analysis (GEOBIA) with UAVs and multispectral imagery to obtain detection rates of approximately 90%.
The development of image and data processing techniques for vegetation assessment is also increasing. Among the popular methods, the use of spectral indices for weed detection has gained considerable popularity, as explored by Ashourloo et al. [23], Robinson et al. [24], and Lin et al. [25]. In these cases, both supervised and unsupervised segmentation algorithms were strongly influenced by image quality, spectral bands, and ground sampling distance (GSD), among other complex properties of the scene. In sum, no universal criterion has yet been defined for choosing a sensing technology and data processing pipeline that meets every application need [26]. This paper proposes a broadly applicable approach for the surveillance of invasive grasses and related biosecurity applications: an automatic surveillance solution integrating UAV technology with high-resolution RGB cameras and a machine learning-based classification algorithm to process and segment the data. The presented pipeline process is illustrated with the automatic detection of buffel grass (Cenchrus ciliaris) and spinifex (Triodia sp.) in arid and semi-arid ecosystems in Australia.

2. Materials and Methods

2.1. Process Pipeline

We developed a pipeline process consisting of four main components: Acquisition, Preprocessing, Training, and Prediction, as illustrated in Figure 1. High-resolution digital images are initially captured from a UAV flight mission. Images are downloaded, orthorectified, and preprocessed in order to extract samples with key features and label them subsequently. Data are then fed into a supervised machine learning classifier to train and optimise its detection capabilities. Finally, the entire orthorectified imagery is processed to predict the location of invasive grasses and vegetation in the studied area.
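A minimal sketch of these four stages as plain Python functions is given below; the function names, paths, and stub bodies are illustrative only and do not come from the paper's code base.

```python
# A sketch of the four pipeline stages; every name here is illustrative.

def acquire():
    # 1. Acquisition: capture and download high-resolution UAV images.
    return ["IMG_0001.JPG", "IMG_0002.JPG"]

def preprocess(raw_images):
    # 2. Preprocessing: orthorectify, tile, and extract per-pixel features.
    return [f"tile_{i:04d}.tif" for i in range(len(raw_images))]

def train(labelled_samples):
    # 3. Training: fit and tune a supervised classifier on labelled pixels.
    return lambda tile: tile.replace("tile", "segmented")

def predict(classifier, tiles):
    # 4. Prediction: segment every orthorectified tile with the trained model.
    return [classifier(t) for t in tiles]

tiles = preprocess(acquire())
print(predict(train("labelled_samples"), tiles))
```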

2.2. Site

The study site is located in Cape Range National Park, Western Australia (WA), Australia (−22.190429, 113.865478). The site contains buffel grass, spinifex, remains of dry and decomposed vegetation, bushes, and arid soil. Images were taken in a successive series of four flight campaigns conducted on 10 July 2016, from 12:20 p.m. until 2:20 p.m. Meteorological conditions for that day were sunny, with south-easterly winds from 17 to 26 km/h, 46% relative humidity, a mean temperature of 21.2 °C, and no precipitation [27].
At the site, invasive grass species such as buffel grass and spinifex were found with negligible size variation, viewpoint variation, background clutter, and occlusion. However, they occurred at various densities, as shown in Figure 2.

2.3. Image Sensors

A Canon EOS 5DsR digital camera (Canon Inc., Tokyo, Japan) was utilised to capture high-resolution images. The camera specifications include a 50.6 MP resolution, a 28 mm focal length, ISO 400 speed, a full-frame complementary metal–oxide–semiconductor (CMOS) sensor of 36 mm × 24 mm, a 625 μs exposure time, and a global positioning system (GPS) sensor.

2.4. The UAV and Sample Acquisition

A DJI S800 EVO hexa-rotor UAV (DJI, Guangdong, China) was flown over the study area following a mission route designed in DJI Ground Station 4.0 software. As shown in Figure 3, the UAV featured high-performance brushless motors, a customised dampened gimbal providing active three-axis stabilisation of the sensor payload (levelled out to ensure the sensor pointed permanently at the ground), a total weight of 3.9 kg, and dimensions of 1180 mm × 1000 mm × 500 mm. The flight mission was performed at an altitude of 66.9 ± 4.6 m, with a forward overlap of 80%, a side lap of 50%, and a route length of 6.6 km flown at 16.2 km/h. The horizontal and vertical GSD were both approximately 1.0152 cm/pixel.
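As a sanity check, the reported GSD can be reproduced from the camera and flight parameters above; the sketch below assumes a horizontal resolution of 8688 pixels for the Canon EOS 5DsR, which is not stated in this paper.

```python
# Back-of-the-envelope GSD check from the survey parameters above.
sensor_width_mm = 36.0     # full-frame CMOS width (Section 2.3)
focal_length_mm = 28.0     # lens focal length (Section 2.3)
altitude_m = 66.9          # mean flight altitude (Section 2.4)
image_width_px = 8688      # assumed horizontal resolution at 50.6 MP

gsd_cm = (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)
print(f"GSD = {gsd_cm:.2f} cm/pixel")   # ~0.99 cm/pixel, near the reported 1.0152
```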

2.5. Software

Various software solutions were used throughout the development of this research. To prepare the data, more than 500 raw images were filtered and orthorectified using Agisoft PhotoScan 1.2. With this software, an orthomosaic image of 44,800 × 17,200 pixels (2.4 GB) was generated. Due to this large image size and possible random-access memory (RAM) limitations, the image was split into 4816 tiles of 400 × 400 pixels in Tagged Image File (TIF) and Keyhole Markup Language (KML) formats. A group of representative samples in cropped regions was extracted and subsequently labelled using GNU Image Manipulation Program (GIMP) 2.8.22 to fit the classifier. The generated image set was processed using the Python 2.7.14 programming language and several third-party libraries for data manipulation and machine learning, including eXtreme Gradient Boosting (XGBoost) 0.6 [28], Scikit-learn 0.19.1 [29], OpenCV 3.3.0 [30], and Matplotlib 2.1.0 [31].
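A minimal sketch of the tiling step follows, assuming the orthomosaic fits in memory as a NumPy array (for a raster of this size a windowed reader would be preferable); file names are illustrative, and the KML sidecar files mentioned above are omitted.

```python
# Split a large orthomosaic into 400 x 400 px tiles (illustrative paths).
import cv2

mosaic = cv2.imread("orthomosaic.tif")   # H x W x 3 BGR array
tile = 400
h, w = mosaic.shape[:2]
for r in range(0, h - h % tile, tile):
    for c in range(0, w - w % tile, tile):
        cv2.imwrite(f"tiles/tile_{r:05d}_{c:05d}.tif",
                    mosaic[r:r + tile, c:c + tile])
```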

2.6. Data Labelling

Due to the variety of conditions in which invasive grasses were found in Cape Range National Park, 10 images were selected and analysed using photo interpretation. Invasive grasses (buffel grass and spinifex), as well as common objects in the area, were highlighted in bright, distinguishable colours, as depicted in Figure 4. Regions were coloured using the “Bucket fill” tool of the GIMP software.
To perform image labelling, a mask for each image sample was generated by assigning an integer value to every highlighted pixel. Each bright-coloured pixel was filtered from every sample using Equation (1):
$$ H(x, y) = \begin{cases} a & \text{if } S(x, y) = F(R, G, B) \\ 0 & \text{otherwise} \end{cases} \qquad (1) $$
where H is the mask for each sample S and a is the integer value for each bright colour value F(R, G, B). Values for a were set as follows: 1 = buffel grass; 2 = soil and road; 3 = bushes; 4 = shadow; 5 = dry vegetation (Dry Veg.); 6 = spinifex.
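The sketch below applies Equation (1) to an annotated sample; the specific BGR annotation colours are our assumptions, since the paper does not list them.

```python
# Convert a bright-colour annotation overlay S into an integer mask H.
import cv2
import numpy as np

CLASS_COLOURS = {            # assumed annotation colours (BGR) -> label a
    (0, 0, 255): 1,          # buffel grass
    (0, 255, 255): 2,        # soil and road
    (255, 0, 0): 3,          # bushes
    (128, 128, 128): 4,      # shadow
    (0, 128, 255): 5,        # dry vegetation
    (255, 0, 255): 6,        # spinifex
}

S = cv2.imread("sample_highlighted.png")     # sample with painted regions
H = np.zeros(S.shape[:2], dtype=np.uint8)    # 0 everywhere else (Equation (1))
for colour, a in CLASS_COLOURS.items():
    H[np.all(S == colour, axis=-1)] = a      # H(x, y) = a where S matches F(R, G, B)
cv2.imwrite("sample_mask.png", H)
```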

2.7. Classification Algorithm

Algorithm 1 was utilised for the training and prediction stages. It identifies and filters the highlighted regions mentioned in Section 2.6, trains a gradient boosted decision tree classifier, cross-validates the classification rates, predicts unlabelled data, and displays the results.
The training section of Algorithm 1 comprises several steps to load and preprocess the data and to fit an XGBoost classifier. The preprocessing stage transforms the read data into an array of features, or attributes, which is subsequently processed by the classifier. As described in the algorithm, in order to obtain the feature array D, representative sample images G are first converted from their default RGB colour model into the hue, saturation, value (HSV) colour model in Step 3. Then, a set of filters is applied to G and their outputs are appended to D, as mentioned in Step 5. The two-dimensional (2D) filters calculate the variance within a subset of pixel neighbours contained in a window, following Equations (2) and (3).
Algorithm 1 Detection and segmentation of invasive grasses using high-resolution RGB images.
Require: orthorectified image set I; representative samples set G; sample masks set H
Training
1: for i ← 1, n do ▹ n = total images in G (labelled data)
2:     Load G_i and H_i images
3:     Convert colour space of G_i into HSV
4:     Insert each colour channel into a feature array D
5:     Use 2D filters on G_i and insert their outputs into D
6:     From G_i and H_i, filter only the pixels with assigned labelling on D
7: end for
8: Split D into training data D_T and testing data D_E
9: Create an XGBoost classifier X and fit it using D_T
10: Use K-fold cross-validation with D_E ▹ number of folds = 10
11: Perform grid search to tune X parameters
Prediction
12: for i ← 1, m do ▹ m = total images in I
13:     Load I_i image
14:     Convert colour space of I_i into HSV
15:     Scan every pixel and predict the object using X
16:     O_i ← Convert the data into a 2D image
17:     Export O_i into TIF format
18: end for
19: return O_i
$$ X = \frac{1}{w^2} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}_{w \times w} \qquad (2) $$
$$ s^2 = E[X^2] - E[X]^2 \qquad (3) $$
where X is the kernel of the filter that estimates the mean value of the processed image, w is the window size, and s² is the variance, defined as the difference between the mean of the square and the square of the mean. Thus, the array of features D for this case study contains 10 items: hue, saturation, value, variance filters on hue with w equal to 3 and 15, variance filters on saturation with w equal to 3 and 15, and variance filters on the grayscale image from G_i with w equal to 3, 7, and 15. A sketch of these filters is given below.
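The following minimal sketch implements Equations (2) and (3) with OpenCV box filters and assembles the 10-feature array for one tile; the tile path is illustrative.

```python
# Local variance filter: s^2 = E[I^2] - E[I]^2, with E[.] a w x w box filter.
import cv2
import numpy as np

def variance_filter(channel, w):
    img = channel.astype(np.float32)
    mean = cv2.blur(img, (w, w))           # E[I] via the averaging kernel X
    mean_sq = cv2.blur(img * img, (w, w))  # E[I^2]
    return mean_sq - mean ** 2             # s^2, Equation (3)

sample = cv2.imread("tiles/tile_00000_00000.tif")
hsv = cv2.cvtColor(sample, cv2.COLOR_BGR2HSV)
gray = cv2.cvtColor(sample, cv2.COLOR_BGR2GRAY)

# The 10 per-pixel features used in this case study:
features = [hsv[..., 0], hsv[..., 1], hsv[..., 2],
            variance_filter(hsv[..., 0], 3), variance_filter(hsv[..., 0], 15),
            variance_filter(hsv[..., 1], 3), variance_filter(hsv[..., 1], 15),
            variance_filter(gray, 3), variance_filter(gray, 7),
            variance_filter(gray, 15)]
D = np.stack(features, axis=-1).reshape(-1, 10)  # rows = pixels, cols = features
```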
Then, as described in Step 6, the pixel locations that were previously labelled are filtered using the masks H, following Equation (4):
$$ D_j = \begin{cases} [\, G_i(x, y),\ H_i(x, y) \,] & \text{if } H_i(x, y) \neq 0 \\ \text{null} & \text{otherwise} \end{cases} \qquad (4) $$
where D_j is the 2D output array of the operation, G_i(x, y) is the sample image, and H_i(x, y) is the labelled counterpart of G_i at position (x, y). In total, 342,626 pixel-wise samples were filtered and subsequently split randomly into training (75%) and testing (25%) data arrays. In Step 9, the data are processed by the XGBoost classifier, a state-of-the-art decision-tree and gradient-boosting model created by Chen and Guestrin [28] that is optimised for large tree structures, high execution speed, and excellent performance. Hyper-parameters for this classifier, such as the number of estimators, the learning rate, and the maximum depth, are estimated by running a grid search in Step 11. This technique evaluates combinations of multiple values for each hyper-parameter and returns the optimal combination for the classifier. For this case study, the optimal hyper-parameter values to obtain an accuracy–robustness balance without causing over-fitting are:
estimators = 100, learning rate = 0.1, maximum depth = 3
where “estimators” is the number of trees, “learning rate” is the step size of each boosting step, and “maximum depth” is the maximum depth of each tree, which bounds the complexity of the model. For the prediction stage (Steps 13–17), all the orthorectified images are processed in a loop using the trained classifier and the same data conversion considerations applied at the training stage. Finally, classified pixels for each image are painted in distinguishable colours and exported in TIF format, which is compatible with geographic information system (GIS) platforms.
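A minimal sketch of Steps 8–11 follows, assuming D and labels hold the per-pixel feature rows and class labels built as in the earlier sketches; the parameter grids are illustrative.

```python
# Split the labelled pixels, fit an XGBoost classifier, and grid-search
# its main hyper-parameters (Steps 8-11 of Algorithm 1, as a sketch).
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from xgboost import XGBClassifier

# D: (n_pixels, 10) feature array; labels: NumPy array of class values 1-6.
y = labels - 1   # shift 1-6 to 0-5; recent XGBoost versions expect zero-based classes
X_train, X_test, y_train, y_test = train_test_split(
    D, y, test_size=0.25, random_state=42)        # 75% training / 25% testing

grid = GridSearchCV(
    XGBClassifier(),
    param_grid={"n_estimators": [50, 100, 200],
                "learning_rate": [0.05, 0.1, 0.3],
                "max_depth": [3, 5, 7]},
    cv=3)
grid.fit(X_train, y_train)    # best combination reported above: 100, 0.1, 3

scores = cross_val_score(grid.best_estimator_, X_test, y_test, cv=10)
print(scores.mean(), scores.std())                # 10-fold cross-validation (Step 10)
```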

3. Results

Segmented images for photo interpretation, as well as accuracy indicators, were used for validation purposes. In total, 85,657 labelled pixels from the test data set D_E were evaluated to assess Algorithm 1. The confusion matrix of the classifier is presented in Table 1.
Of the 25,795 pixels labelled as buffel grass, the algorithm correctly predicted 25,256 pixels and misclassified 362 pixels as spinifex, 156 pixels as bushes, 17 pixels as soil, and 4 pixels as dry vegetation. Similarly, 25,196 pixels were successfully predicted as soil, with 17 misclassifications; 3913 pixels as bushes, with 737 misclassifications; 7729 pixels as shadow, with 1 misclassification; 5734 pixels as dry vegetation, with 185 misclassifications; and 15,649 pixels as spinifex, with 701 misclassifications. Based on these numbers, a classification report is generated, as shown in Table 2.
Here, precision is the ratio of true positives to the sum of true positives and false positives, recall is the ratio of true positives to the sum of true positives and false negatives, f-score is the harmonic mean of precision and recall, and support is the number of tested pixels per class. For this case study, precision errors indicate the output of misleading results through wrongly labelled classes, whereas recall errors indicate incomplete class detection.
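These metrics can be reproduced from the fitted classifier with scikit-learn, as in the short sketch below (using the X_test and y_test arrays and the grid object from the training sketch).

```python
# Reproduce Table 1 (confusion matrix) and Table 2 (classification report).
from sklearn.metrics import classification_report, confusion_matrix

y_pred = grid.best_estimator_.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(
    y_test, y_pred,
    target_names=["Buffel", "Soil", "Bushes", "Shadow", "Dry Veg.", "Spinifex"]))
```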
Overall, the majority of the classes were successfully classified. For buffel grass and spinifex, most of the misclassified pixels were attributed to the counterpart class; these misclassification rates were small, representing precision and recall errors of 1.92% and 1.40% for buffel grass, respectively, and 2.23% and 3.11% for spinifex. Due to the high variation in greenness values of labelled buffel grass, specific areas of dry grass were classified as spinifex and vice versa. Similarly, misclassification between dry vegetation and spinifex instances (2.88% and 2.69%, 0.98% and 1.05%) occurred owing to the many occurrences of these plants in senescent condition. The classification rates were excellent for the shadow and soil classes, mainly due to the small variation in their visual properties, such as colour intensity, luminosity, and smooth texture. In contrast, classification of the bushes class was noticeably weaker, especially its recall rate, as indicated by a greater proportion of pixels classified as buffel grass (3.81% and 13.59%) and, to a lesser degree, spinifex (0.49% and 1.74%).
The proposed algorithm is capable of classifying invasive grasses and other vegetation with remarkable global precision and recall rates of 97.32% and 95.76%, respectively. The method does, however, have an increased likelihood of classifying certain bush regions as buffel grass, with a recall rate of 84.15% for bushes. Considering equal relevance of precision and recall for this investigation, the overall detection rate of the proposed method is 96.54%. The 10-fold cross-validation analysis achieved mean accuracy and standard deviation values of 97.54% and 0.042%, respectively. Furthermore, a feature-relevance analysis was conducted for the XGBoost classifier. The embedded estimation function counts the number of times each feature is used to split the data across the decision-tree structure. The importance of each feature is depicted in Figure 5.
Each bar on the x-axis represents the relative frequency with which the corresponding feature is used in the classifier. The hue, value, and saturation scores demonstrate the significant relevance these features had in the model, representing up to 65% of the total split instances. These ratings are followed by the 2D variance-filtered images, such as the grayscale image with a window size of 7 pixels and the saturation image with a 15-pixel window size. Filters with substantially larger window sizes proved important for accurately classifying objects whose pixels appear as textures, such as bushes and spinifex. A sketch of this importance query is given below.
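XGBoost exposes this split-count (“weight”) importance directly, as in the following minimal sketch using the grid object from the training sketch.

```python
# Plot split-count feature importance, as used for Figure 5.
import matplotlib.pyplot as plt
from xgboost import plot_importance

plot_importance(grid.best_estimator_, importance_type="weight")
plt.show()
```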
An illustration of the prediction and segmentation outputs is depicted in Figure 6. Figure 6a,c,e,g depict representative samples where buffel grass, spinifex, bushes, soil, and dry vegetation are displayed at different densities and light conditions, whereas Figure 6b,d,f,h show the segmentation obtained from the proposed algorithm. As seen in the confusion matrix of Table 1, highly accurate segmentation results are obtained for the buffel grass, spinifex, soil, and shadow classes. However, the segmentation results for the “bushes” class are unstable in some images and can in many cases be regarded as image noise. The segmented images can be loaded and displayed in any GIS software, as shown in Figure 7.

4. Discussion

The accuracy and segmentation indicators presented in Section 3 validate the proposed pipeline approach to map vegetation and invasive grasses in arid lands. The small proportions of observed misclassifications for the “buffel” and “spinifex” classes may be attributed to human error during the labelling of sample data. This is most evident in the results for the “bushes” class, where the number of misclassified pixels is attributed to a challenging image labelling task. These inaccuracies occurred because the visible colour properties of bushes from the RGB sensor showed many similarities with other vegetation. From environmental monitoring and biosecurity perspectives, the proposed method is capable of providing critical information such as the distribution of invasive grasses, the density of invasive species in arid lands, and estimates of their expansion in the short and mid-term, among others.
The present study represents a competitive approach for the use of UAVs and machine learning-based classification models compared with alternative solutions. It complements the research outcomes on buffel grass of Marshall et al. [19] by confirming a feasible, accurate, lightweight and relatively cheap solution for invasive grass mapping. With regard to invasive grasses in arid lands, this paper has demonstrated that using only high-resolution RGB images and single pixel-wise classification satisfies the need for accurate and efficient detection and segmentation solutions.
It is noteworthy that the invasive grasses in this study had negligible size variation, background clutter, occlusion, and viewpoint variation, which apparently constituted an advantage. Unlike senescence, varied levels of grass density and illumination did not present additional challenges. However, the acquired data are insufficient for further classification tests with changes in illumination in the study area, such as acquisition tasks at different times of the day and under cloudy conditions. These parameters might alter the detection rates of the presented approach, and further research should be conducted under these conditions. The processing of imagery with small GSD values demonstrates how UAV-based remote sensing equipment has improved sensing capabilities compared with satellite and manned aircraft for invasive grass assessments.
Future research should analyse the efficacy of supervised and unsupervised algorithms to label vegetation, and specifically invasive grass species, accurately, and integrate the best approaches into the proposed pipeline. Additional effort should focus on improving the performance of the entire pipeline process, as well as the aggregation and evaluation of unsupervised classification algorithms for image labelling tasks using RGB pictures only. Although a significant amount of previous research has addressed the optimisation of machine learning models, specific areas might be improved for real-time applications, such as orthomosaic-based processes and better integration of the software into a single solution.

5. Conclusions

This paper proposed an integrated pipeline methodology for mapping vegetation and invasive grasses in arid lands. The methods were demonstrated by mapping buffel grass and spinifex in remote areas of WA through the use of UAVs, high-resolution RGB imagery, and gradient boosted decision trees. The presented approach achieved detection rates of 96.75% and 96.00% for individual mapping of buffel grass and spinifex, respectively, and a multiclass detection rate of 96.54%. Invasive grasses were accurately detected at different spatial concentrations with a GSD of up to 1.015 cm/pixel, demonstrating how UAV data collection can be useful for invasive grass detection at early stages. This case study demonstrates the implementation of unmanned aerial systems and machine learning for a feasible, accurate, and lightweight assessment of invasive grasses in arid and semi-arid lands. Future work will focus on integrating unsupervised and supervised methods for vegetation data labelling in order to reduce processing times.

Supplementary Materials

Supplementary File 1

Acknowledgments

This work was funded by the Plant Biosecurity Cooperative Research Centre (PBCRC) 2164 project, the Agriculture Victoria Research and the Queensland University of Technology (QUT). The authors would like to acknowledge Derek Sandow and WA Parks and Wildlife Service for the logistic support and permits to access the survey areas at Cape Range National Park. The authors would also like to acknowledge Eduard Puig-Garcia for his contributions in co-planning the experimentation phase. The authors gratefully acknowledge the support of the QUT Research Engineering Facility (REF) Operations Team (Dirk Lessner, Dean Gilligan, Gavin Broadbent and Dmitry Bratanov), who operated the DJI S800 EVO UAV and image sensors, and performed ground referencing. We thank Gavin Broadbent for the design, manufacturing, and tuning of a two-axis gimbal for the camera. We also acknowledge the High-Performance Computing and Research Support Group at QUT, for the computational resources and services used in this work.

Author Contributions

Felipe Gonzalez and Kerrie Mengersen contributed to experimentation and data collection planning. Felipe Gonzalez supervised the ground and airborne surveys, quality of acquired data, and logistics. Juan Sandino designed the proposed pipeline and conducted the data processing phase. Felipe Gonzalez, Kerrie Mengersen, and Kevin J. Gaston provided definitions, assistance and essential advice. Juan Sandino analysed the generated outputs, and validated and optimised the algorithm. All the authors contributed significantly to the composition and revision of the paper.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
2D: Two-dimensional
CMOS: Complementary metal–oxide–semiconductor
Dry Veg.: Dry vegetation
GEOBIA: Geographic Object-Based Image Analysis
GIMP: GNU Image Manipulation Program
GIS: Geographic information system
GPS: Global positioning system
GSD: Ground sampling distance
HSV: Hue, saturation, value colour model
KML: Keyhole Markup Language
MDPI: Multidisciplinary Digital Publishing Institute
RAM: Random-access memory
RGB: Red, green, blue colour model
TIF: Tagged Image File
UAV: Unmanned aerial vehicle
WA: Western Australia
XGBoost: eXtreme Gradient Boosting

References

1. Godfree, R.; Firn, J.; Johnson, S.; Knerr, N.; Stol, J.; Doerr, V. Why non-native grasses pose a critical emerging threat to biodiversity conservation, habitat connectivity and agricultural production in multifunctional rural landscapes. Landsc. Ecol. 2017, 32, 1219–1242.
2. Schlesinger, C.; White, S.; Muldoon, S. Spatial pattern and severity of fire in areas with and without buffel grass (Cenchrus ciliaris) and effects on native vegetation in central Australia. Austral Ecol. 2013, 38, 831–840.
3. Fensham, R.J.; Wang, J.; Kilgour, C. The relative impacts of grazing, fire and invasion by buffel grass (Cenchrus ciliaris) on the floristic composition of a rangeland savanna ecosystem. Rangel. J. 2015, 37, 227.
4. Grice, A.C. The impacts of invasive plant species on the biodiversity of Australian rangelands. Rangel. J. 2006, 28, 27.
5. Marshall, V.; Lewis, M.; Ostendorf, B. Buffel grass (Cenchrus ciliaris) as an invader and threat to biodiversity in arid environments: A review. J. Arid Environ. 2012, 78, 1–12.
6. Bonney, S.; Andersen, A.; Schlesinger, C. Biodiversity impacts of an invasive grass: Ant community responses to Cenchrus ciliaris in arid Australia. Biol. Invasions 2017, 19, 57–72.
7. Jackson, J. Impacts and Management of Cenchrus ciliaris (buffel grass) as an Invasive Species in Northern Queensland. Ph.D. Thesis, James Cook University, Townsville, Australia, 2004.
8. Jackson, J. Is there a relationship between herbaceous species richness and buffel grass (Cenchrus ciliaris)? Austral Ecol. 2005, 30, 505–517.
9. Martin, T.G.; Murphy, H.; Liedloff, A.; Thomas, C.; Chadès, I.; Cook, G.; Fensham, R.; McIvor, J.; van Klinken, R.D. Buffel grass and climate change: A framework for projecting invasive species distributions when data are scarce. Biol. Invasions 2015, 17, 3197–3210.
10. Miller, G.; Friedel, M.; Adam, P.; Chewings, V. Ecological impacts of buffel grass (Cenchrus ciliaris L.) invasion in central Australia—Does field evidence support a fire-invasion feedback? Rangel. J. 2010, 32, 353.
11. Smyth, A.; Friedel, M.; O'Malley, C. The influence of buffel grass (Cenchrus ciliaris) on biodiversity in an arid Australian landscape. Rangel. J. 2009, 31, 307.
12. Gonzalez, L.; Whitney, E.; Srinivas, K.; Periaux, J. Multidisciplinary aircraft design and optimisation using a robust evolutionary technique with variable fidelity models. In Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, NY, USA, 30 August–1 September 2004; pp. 3610–3624.
13. Whitney, E.; Gonzalez, L.; Periaux, J.; Sefrioui, M.; Srinivas, K. A robust evolutionary technique for inverse aerodynamic design. In Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä, Finland, 24–28 July 2004.
14. Gonzalez, L.; Montes, G.; Puig, E.; Johnson, S.; Mengersen, K.; Gaston, K. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation. Sensors 2016, 16, 97.
15. Chahl, J. Unmanned Aerial Systems (UAS) Research Opportunities. Aerospace 2015, 2, 189–202.
16. Allison, R.S.; Johnston, J.M.; Craig, G.; Jennings, S. Airborne Optical and Thermal Remote Sensing for Wildfire Detection and Monitoring. Sensors 2016, 16, 1310.
17. Thomas, J.E.; Wood, T.A.; Gullino, M.L.; Ortu, G. Diagnostic Tools for Plant Biosecurity. In Practical Tools for Plant and Food Biosecurity: Results from a European Network of Excellence; Gullino, M.L., Stack, J.P., Fletcher, J., Mumford, J.D., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 209–226.
18. Olsson, A.D.; van Leeuwen, W.J.; Marsh, S.E. Feasibility of Invasive Grass Detection in a Desertscrub Community Using Hyperspectral Field Measurements and Landsat TM Imagery. Remote Sens. 2011, 3, 2283–2304.
19. Marshall, V.M.; Lewis, M.M.; Ostendorf, B. Detecting new Buffel grass infestations in Australian arid lands: Evaluation of methods using high-resolution multispectral imagery and aerial photography. Environ. Monit. Assess. 2014, 186, 1689–1703.
20. Alexandridis, T.; Tamouridou, A.A.; Pantazi, X.E.; Lagopodi, A.; Kashefi, J.; Ovakoglou, G.; Polychronos, V.; Moshou, D. Novelty Detection Classifiers in Weed Mapping: Silybum marianum Detection on UAV Multispectral Images. Sensors 2017, 17, 2007.
21. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; Tiede, D. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191.
22. Torres-Sánchez, J.; López-Granados, F.; Peña, J. An automatic object-based method for optimal thresholding in UAV images: Application for vegetation detection in herbaceous crops. Comput. Electron. Agric. 2015, 114, 43–52.
23. Ashourloo, D.; Aghighi, H.; Matkan, A.A.; Mobasheri, M.R.; Rad, A.M. An Investigation Into Machine Learning Regression Techniques for the Leaf Rust Disease Detection Using Hyperspectral Measurement. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4344–4351.
24. Robinson, T.; Wardell-Johnson, G.; Pracilio, G.; Brown, C.; Corner, R.; van Klinken, R. Testing the discrimination and detection limits of WorldView-2 imagery on a challenging invasive plant target. Int. J. Appl. Earth Obs. Geoinform. 2016, 44, 23–30.
25. Lin, F.; Zhang, D.; Huang, Y.; Wang, X.; Chen, X. Detection of Corn and Weed Species by the Combination of Spectral, Shape and Textural Features. Sustainability 2017, 9, 1335.
26. Schmittmann, O.; Lammers, P. A True-Color Sensor and Suitable Evaluation Algorithm for Plant Recognition. Sensors 2017, 17, 1823.
27. Bureau of Meteorology. Learmonth, WA—July 2016—Daily Weather Observations; Bureau of Meteorology: Learmonth Airport (station 005007), Australia, 2016.
28. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; ACM: New York, NY, USA, 2016; pp. 785–794.
29. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
30. Bradski, G. The OpenCV library. Dr. Dobb's J. Softw. Tools 2000, 25, 122–125.
31. Hunter, J.D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 2007, 9, 90–95.
Figure 1. Primary pipeline for mapping of invasive grasses and related vegetation.
Figure 2. Main features of the study site. (a) Geographical location. (b) Area with high density of buffel grass. (c) Area with high density of spinifex. (d) Area with low density of invasive grasses. (e) Buffel grass. (f) Spinifex.
Figure 3. The DJI S800 EVO (DJI, Guangdong, China) unmanned aerial vehicle (UAV) flying in Cape Range National Park, Western Australia (WA), Australia.
Figure 4. Image labelling. (a) Representative sample. (b) Highlighted regions using bright colours.
Figure 5. Relevance of each feature for the tuned classifier.
Figure 6. Pixel-wise segmentation from acquired red, green, blue (RGB) colour model images using Algorithm 1. (a,c,e,g) Orthorectified RGB images. (b,d,f,h) Final segmentation with predicted classes.
Figure 7. Prediction of invasive grasses in Cape Range National Park and its display in Google Earth.
Table 1. The eXtreme Gradient Boosting (XGBoost) classifier confusion matrix.

Labelled \ Predicted    Buffel     Soil    Bushes   Shadow   Dry Veg.   Spinifex
Buffel                  25,256       17       156        0          4        362
Soil                        15   25,196         1        0          1          0
Bushes                     632        1      3913        2         21         81
Shadow                       0        1         0     7729          0          0
Dry vegetation               8       10         6        2       5734        159
Spinifex                   508        2        20        0        171     15,649
Table 2. Classification report from the confusion matrix of Table 1.

Class              Precision (%)   Recall (%)   F-Score (%)     Support
Buffel                     95.60        97.91         96.75      25,795
Soil                       99.88        99.93         99.90      25,213
Bushes                     95.53        84.15         89.84        4650
Shadow                     99.95        99.99         99.97        7730
Dry vegetation             96.68        96.87         96.78        5919
Spinifex                   96.30        95.71         96.00      16,350
Mean                       97.32        95.76         96.54  Σ = 85,657
