Article

Automatic Detection of Uprooted Orchards Based on Orthophoto Texture Analysis

1 Department of Projects and Rural Engineering, Campus Arrosadía, Public University of Navarre, Pamplona 31006, Spain
2 Tracasa, 31621 Sarriguren, Navarre, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(5), 492; https://doi.org/10.3390/rs9050492
Submission received: 14 March 2017 / Revised: 10 May 2017 / Accepted: 13 May 2017 / Published: 17 May 2017

Abstract
Permanent crops, such as olive groves, vineyards and fruit trees, are important in European agriculture because of their spatial and economic relevance. Agricultural geographical databases (AGDBs) are commonly used by public bodies to gain knowledge of the extension covered by these crops and to manage related agricultural subsidies and inspections. However, the updating of these databases is mostly based on photointerpretation, and thus keeping this information up-to-date is very costly in terms of time and money. This paper describes a methodology for the automatic detection of uprooted orchards (parcels where fruit trees have been eliminated) based on the textural classification of orthophotos with a spatial resolution of 0.25 m. The textural features used for this classification were derived from the grey level co-occurrence matrix (GLCM) and the wavelet transform, and were selected through principal component analysis (PCA) and separability analyses. Next, a Discriminant Analysis classification algorithm was used to detect uprooted orchards. Entropy, contrast and correlation were found to be the most informative textural features obtained from the co-occurrence matrix. The minimum and standard deviation in plane 3 were the selected features based on the wavelet transform. The classification based on these features achieved a true positive rate (TPR) of over 80% and an accuracy (A) of over 88%. As a result, this methodology enabled a 60–85% reduction in the number of fields to photointerpret, depending on the membership threshold selected. The proposed approach could be easily adopted by different stakeholders and could significantly increase the efficiency of agricultural database updating tasks.

Graphical Abstract

1. Introduction

The European Common Agricultural Policy provides financial support to agricultural producers via a number of different subsidies and measures [1]. Thus, the administrations of the European Community states make great efforts to meet the necessary requirements to receive this aid. In this context, and due to their significant spatial presence and economic relevance, agricultural geographical databases (AGDBs) of olive groves, vineyards, orchards and citrus trees should be kept up-to-date by regional public bodies.
The combination of photointerpretation and field visits is a very popular updating technique because of the high level of accuracy it provides [2]. However, it is very costly in terms of both the economic and time investments it requires. For this reason, developing an effective methodology to detect changes in AGDBs is a topic of high interest among public administrations at both the regional and national levels. Automatic crop identification methodologies based on the processing of remotely sensed images reduce the cost and accelerate the updating process, making this technique a viable option for detecting changes in AGDBs [3]. Different authors have assessed the utility of remotely sensed image processing for the automatic identification and management of crops [4,5,6,7], as well as for updating GDBs [8,9].
To update AGDBs using remotely sensed sub-metric resolution images, there should be, at minimum, an annual coverage of the entire territory. However, the acquisition of very high spatial resolution multi-spectral satellite data over a large territory is very costly. Moreover, cloud cover further compromises the acquisition of useful observations in many regions of the world. This suggests the need to find alternative sources of information with sufficient spatial resolution to tackle this issue.
Digital orthophotos are routinely acquired in many countries of the world, since they are a crucial information source for land planning and related activities. Orthophotos are high resolution aerial images that have been subjected to a photogrammetric process. This process gives the image the right scale and coordinates, allowing its use as a map. Historically, the orthophoto has been used in cartography generation and in photointerpretation processes. Therefore, the development of photogrammetric systems has focused on obtaining geospatial information that is as accurate as possible. In this context, orthophoto radiometry has been utilized from a visual, rather than quantitative, point of view. Nowadays, an intense research effort is being made to introduce a rigorous radiometric correction into orthophoto production. This correction will be useful for: (1) eliminating discontinuities in the radiometry of orthophoto mosaics; (2) obtaining "real" color orthophotos; and (3) using the orthophoto to calculate biophysical parameters. Generating a uniform regional or country-level orthophoto from individual ones (in most cases acquired on different dates, under different illumination conditions, etc.) involves a process of mosaicking that necessarily modifies the spectral values of pixels to homogenize their color and brightness settings. This fact precludes the use of orthophotos as a source of spectral information for the automatic detection of crops in studies that consider wide areas. However, due to their high spatial resolution, their textural information can still be useful for identifying different crops and land covers [10,11,12]. The use of spectral information from orthophotos should be limited to small areas covered by a single orthophoto, or by contiguous orthophotos captured under the same conditions, to achieve comparable spectral values.
Therefore, spatial characteristics such as texture become an interesting alternative for studies based on orthophotos at the regional level.
Image texture is an important spatial component of remotely sensed images. The texture of an image can be defined as the relationship between the grey levels of neighboring pixels that contribute to the general appearance of the image [13,14]. Texture is essential for the perception of different regions and cover types on images [15]. Because the textural characteristics of images provide useful information for discrimination, it is important to set features to quantitatively evaluate it [16].
Many scientists have tackled the problem of texture feature extraction; thus, a variety of methods for quantifying spatial patterns have been proposed, such as variograms [17], fractals [18], the scale-invariant feature transform (SIFT) [19,20,21], the wavelet transform [10] and both first- and second-order statistics [22]. One of the most common procedures used to measure texture involves the grey level co-occurrence matrix [13,23,24]. This matrix contains the relative frequencies with which two neighboring pixels, separated by a distance d in a direction a, occur in the image, one with grey tone i and the other with grey tone j. The set of features proposed by Haralick et al. [16] based on the GLCM has recently been assessed for diverse applications using very high resolution (VHR) imagery, such as GeoEye [25,26], WorldView-2 [27,28,29,30], WorldView-3 [31] or Unmanned Aerial Vehicle (UAV) imagery [32,33]. GLCM based features (H) have been used successfully in the agroforestry field for diverse applications, such as land cover mapping of forested areas [24] and crop identification in agriculture [3,34], including vineyards and orchards [4,34,35,36,37]. Regarding previous work on VHR image texture analysis using wavelet decomposition, different texture features have been extracted and assessed with VHR imagery, such as the local energy [38], the variance filter [39] or histogram signatures [40], among others. In the field of agriculture, Recio [41] applied features based on wavelet planes (W) to update AGDBs.
According to Tuceryan and Jain [42], texture is a property of the context, and the definition of the context is associated with the choice of a neighborhood of pixels, the so-called objects. Working at the object level allows neighboring pixels to be grouped into homogeneous areas with physical meaning, which are later characterized texturally and identified as independent units. In agriculture, it is common to use pixel grouping methodologies based on Geographic Information System (GIS) data superimposed on the image [9,43]. This process is known as parcel-level segmentation and is applied especially in agriculture, where the dominant spatial unit is the cadastral parcel [44]. The boundaries between adjacent parcels are helpful for detecting changes and updating agricultural databases because these boundaries are relatively stable, while crops change frequently [45]. This approach assumes that cartographic boundaries are stable over time and homogeneous in relation to a selected legend, so crops can be identified while the boundaries between them are not modified. In parcel-level classification, every pixel of a parcel is assigned to the same category, which reduces the typical salt-and-pepper effect obtained when classifying very high-resolution images at the pixel level. This technique reduces the amount of data that must be classified and produces results that are easily interpretable. Multiple classification algorithms have been developed; the choice of one depends on the nature of the data and the sample size. Classifiers can differ greatly in terms of the training information they require for an accurate classification [46]. A training set that yields a highly accurate classification with one classifier may yield a lower accuracy when used with another [47].
For example, unequal class sizes and small training samples do not influence the results of the Discriminant Analysis classifier [48], which can be useful when a minority class must be identified within a large population. The Discriminant Analysis classifier has been assessed in Remote Sensing (RS) to identify rainforest types [49], forest types [50], grass weeds [51,52] and crop stress [52,53,54].
The objective of this study is to determine whether it is possible to automatically detect uprooted orchards using only textural information calculated from orthophotos, with the aim of reducing the amount of photointerpretation needed in the process of updating agricultural databases. The use of orthophotos in automated RS applications would extend the utility of these images, which are freely available in most cases, and thus reduce the need to purchase VHR satellite imagery for this particular application. To this aim, a combination of textural features based on the GLCM and wavelet planes was used in this paper, due to their acceptance in research scenarios analogous to the one analyzed in this investigation [4,34,37,41] and because it is widely acknowledged that properly combining multiple features results in good classification performance [55]. The proposed approach is based on a supervised Discriminant Analysis classification performed at the parcel level in order to discriminate between orchard and uprooted orchard parcels. Due to the large number of textural features that can be drawn from each parcel, a statistical analysis was performed based on the study of both relevancy and redundancy. This analysis aims to reduce the number of features without losing information while maximizing the separability between uprooted orchard parcels and orchard parcels.

2. Materials and Methods

2.1. Study Area

Navarre, with a surface area of 10,400 km², is located in the northern part of Spain at the western end of the Pyrenees. Permanent crops represent approximately 10% of the agricultural area of Navarre and normally consist of small parcels scattered throughout the territory. Figure 1 shows the location of the study sites and the area occupied by orchards in each of the municipalities of Navarre. The study focuses on the parcels of the Orchard Register of 2006 that belong to the municipalities of Sartaguda, Fontellas and Carcar, a total of 2072 study parcels. These parcels are candidates likely to contain an uprooted orchard, as uprooted orchards can only occur in parcels in which an orchard was previously located. Parcel size ranges approximately from 0.01 ha to 20 ha, with a mean of 0.4 ha and a median of 0.23 ha.

2.2. Data Preparation

The textural data were obtained from the digital orthophoto of the National Plan of Aerial Orthophotography (PNOA). Since 2004, the PNOA has provided annual, complete, country-wide coverage of Spain with a spatial resolution of at least 0.5 m. These images are freely available and can be requested from the website of the Spanish National Institute of Geography. Specifically, the orthophoto used was an 8-bit RGB-NIR image with a spatial resolution of 0.25 m, acquired in June 2010 with a Digital Mapping Camera (DMC) sensor.
The definition of the study parcels, which constitute the basic unit of analysis in this research, was based on the cartographic limits obtained from the cadaster corresponding to the study site. The class Orchard (O) included different orchard species, i.e., apricot (Prunus armeniaca), almond (Prunus dulcis), cherry (Prunus cerasus), plum (Prunus domestica), peach (Prunus persica), apple (Malus domestica), walnut (Juglans regia), pear (Pyrus communis) and blackthorn (Prunus spinosa). In orchards, the trees are grown following three geometric patterns: regular (i.e., open-vase planting system following square, rectangular, and triangular planting arrangements), linear (i.e., hedgerow planting system following a rectangular planting arrangement in which the plant-to-plant distance within the row is 1/3 to 1/2 of the distance between rows) and random (i.e., open-vase planting system following an arbitrary distribution). The Uprooted class (U) consisted of parcels in which fruit trees have been eliminated and which thus show no defined texture pattern. An orchard owner may decide to uproot it for multiple reasons; however, in commercial fruit cultivation, this practice is mainly related to low productivity and cost effectiveness. After the removal of the trees, a new orchard may be planted, or the land use of the parcel may change (e.g., arable crops, fallow land, greenhouses, and urban covers). The Uprooted class includes this last type of parcel.
To train the classification algorithm and to assess the results obtained with this methodology, reliable land use data are needed. With the aim of obtaining an accurate ground truth (GT), the 2072 study parcels of the Orchard Register of 2006 were photointerpreted using the image from 2010. During this period, 1813 parcels remained orchards, while 259 were found to be uprooted. This imbalance between classes reflects a real scenario, in which the number of uprooted parcels is much smaller than the number of tree orchards. Once the ground truth was established, 15% of the parcels were randomly chosen as training samples for the classification process. The training sample is composed of 14% uprooted parcels and 86% orchard parcels. The remaining 85% (approximately 1760 parcels) were used to verify the classification model. Figure 2 presents the general process.

2.3. Parcel Level Image Segmentation

Segmentation is the process by which an image is completely divided into non-overlapping regions or objects [56] with real physical meaning [57]. The objective is to simplify the representation of an image into a form that is easier to analyze [58]. The segmentation process creates the minimum units of classification, and whether it is correct or not depends on the pursued objectives.
In this study, the parcel-level segmentation of the images of 2006 and 2010 was performed using the cartographic boundaries of the Cadaster of Navarre of 2006, so the cadastral parcel is the basic unit of classification.

2.4. Textural Feature Extraction

The textural features based on GLCM and wavelet planes were calculated in this step of the methodology.

2.4.1. Textural Features Based on GLCM

One of the most commonly used methods for mathematically measuring texture is based on the extraction of second-order statistical features from the GLCM [59]. Haralick et al. [16] proposed spatial co-occurrence as a basic structure in the derivation of quantitative features that characterize the texture of digital images. They assumed that the spatial variability information of an image was contained in the “average” spatial relationships that grey tones in the image had with one another. These relationships are summarized in co-occurrence matrices, which are calculated for four directions (0°, 45°, 90°, and 135°) between neighboring pixels within a given parcel. The GLCM is a two-dimensional histogram of grey level for a pair of pixels that are separated by a fixed spatial relationship and distance. The GLCM approximates the joint probability distribution of a pair of pixels.
Figure 3B represents the GLCM of an orchard (Figure 3(B.1)) and an uprooted orchard (Figure 3(B.2)), together with the logarithmic transformation of both GLCMs. Since the GLCM is calculated from an 8-bit orthophoto, a square matrix of 256 × 256 pixels is obtained. Each pixel in the matrix represents a co-occurrence combination, with the pixels of the main diagonal representing pairs of pixels in the orthophoto with the same grey level. Lighter pixels represent more frequent co-occurrence combinations of their corresponding grey levels. As pixels move away from the diagonal, they represent greater differences in grey levels. In the case of the Uprooted class (Figure 3(B.2)), the information is located near the diagonal because, in a homogeneous image, most pixels show a grey level identical to that of their neighbors. On the other hand, the highly contrasted texture of an orchard (Figure 3(B.1)) generates a GLCM that spreads further from the diagonal. Haralick et al. [16] proposed a set of textural features that describe the distribution of data in the GLCM. In this study, we calculated 8 of these features: homogeneity, entropy, angular second moment (ASM), contrast, dissimilarity, mean, standard deviation and correlation (definitions are provided in Table 1). All features were calculated considering four different directions (0°, 45°, 90°, and 135°) and a multi-directional relationship (All) on the semi-sum of the red and green bands; thus, each parcel is characterized by 40 textural features based on the GLCM.
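To illustrate the computation described above, the three GLCM features ultimately selected in this paper (entropy, contrast and correlation) can be derived from a grey-level image with a few lines of numpy. This is a minimal sketch for a single displacement, with a symmetric, normalised matrix; it is not the software used by the authors, and the function names are ours:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    """Normalised, symmetric grey level co-occurrence matrix for one
    displacement (dx, dy). `img` must be an integer array in [0, levels)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    # Grey-level pairs (i, j) separated by the chosen displacement.
    a = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    m += m.T                      # count both orderings (symmetric GLCM)
    return m / m.sum()

def haralick(p):
    """Entropy, contrast and correlation of a normalised GLCM `p`.
    Assumes a non-constant texture (otherwise correlation is undefined)."""
    levels = p.shape[0]
    i, j = np.indices((levels, levels))
    nz = p[p > 0]                              # avoid log(0)
    entropy = -(nz * np.log2(nz)).sum()
    contrast = (p * (i - j) ** 2).sum()
    mu_i, mu_j = (p * i).sum(), (p * j).sum()
    sd_i = np.sqrt((p * (i - mu_i) ** 2).sum())
    sd_j = np.sqrt((p * (j - mu_j) ** 2).sum())
    correlation = (p * (i - mu_i) * (j - mu_j)).sum() / (sd_i * sd_j)
    return entropy, contrast, correlation
```

Averaging the matrices for the four displacements (0°, 45°, 90° and 135°) would give the multi-directional relationship ("All") used in the paper.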

2.4.2. Plane-Based Wavelet Features

The wavelet transform is a powerful tool for discriminating textures [60,61], as it allows an image's spatial variability to be decomposed into different frequency components. If an object exhibits a smooth, homogeneous texture, as observed in uprooted parcels, its information is mostly concentrated in the low-frequency region. On the contrary, if an object exhibits a coarse, highly contrasted texture, most of its information is centered in the high-frequency region.
The discrete wavelet transform can be performed with several different algorithms; one of the most popular is the à trous algorithm [62]. In this study, the à trous algorithm is implemented using a bi-dimensional low-pass filter associated with a B3 cubic spline function. The convolution of an original image at resolution 2^j with this filter results in a degraded image at resolution 2^(j-1). The difference between an image and its degraded version generates a wavelet plane, or wavelet image, that contains the spatial detail information lost between the 2^j and 2^(j-1) resolution images, that is, the high-frequency information. To perform this analysis, we obtained wavelet images at three different levels (plane 1, plane 2 and plane 3), which are added to obtain a total wavelet or detail image (plane T). From these wavelet images, the maximum, minimum, standard deviation and range are calculated for each parcel, resulting in a vector of 16 features per parcel (definitions are provided in Table 2).
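The à trous scheme described above can be sketched in pure numpy: at each level the B3 spline kernel is dilated with "holes" (zeros), the image is smoothed separably, and the wavelet plane is the difference between successive smoothings. This is an illustrative sketch under our own naming, not the authors' implementation:

```python
import numpy as np

B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 1-D B3 cubic spline kernel

def _conv1d_reflect(c, k, axis):
    """1-D correlation along `axis` with reflected borders (pure numpy)."""
    pad = len(k) // 2
    width = [(pad, pad) if a == axis else (0, 0) for a in range(c.ndim)]
    padded = np.pad(c, width, mode="reflect")
    out = np.zeros_like(c, dtype=float)
    for off, w in enumerate(k):
        sl = [slice(None)] * c.ndim
        sl[axis] = slice(off, off + c.shape[axis])
        out += w * padded[tuple(sl)]
    return out

def atrous_planes(img, n_levels=3):
    """À trous decomposition: wavelet planes 1..n plus the smooth residual.
    By construction, the planes and the residual sum back to the image."""
    c = img.astype(float)
    planes = []
    for j in range(n_levels):
        step = 2 ** j
        k = np.zeros(4 * step + 1)
        k[::step] = B3                       # dilate the kernel with 'holes'
        s = _conv1d_reflect(c, k, axis=0)    # separable smoothing: rows...
        s = _conv1d_reflect(s, k, axis=1)    # ...then columns
        planes.append(c - s)                 # high-frequency detail plane j+1
        c = s
    return planes, c

def plane_features(p):
    """Per-parcel features used in the paper: max, min, std and range."""
    return p.max(), p.min(), p.std(), p.max() - p.min()
```

Summing the three planes yields the total detail image (plane T) mentioned in the text.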
Figure 3C shows an example of the wavelet images in plane 1 corresponding to the orchard (Figure 3(C.1)) and the uprooted parcel (Figure 3(C.2)). Wavelet planes obtained from uprooted parcels are very homogeneous, while those calculated from orchard parcels present high-frequency information derived from the trees' borders. As observed, in the wavelet planes of uprooted parcels (Figure 3(C.2)) the difference between maximum and minimum is lower than in those of orchard parcels (Figure 3(C.1)), which indicates a greater range and standard deviation in the latter. This fact could allow for discrimination between these two classes.

2.5. Feature Selection

After calculating the textural features, the most appropriate ones for discriminating between the Uprooted and Orchard classes are selected. Selecting the most suitable features reduces the computational complexity and increases the generalization capability of the classification algorithm [63]. Therefore, suitable feature selection is expected to enhance the discrimination between uprooted and orchard parcels in this study. To this end, the features evaluated are classified as: (1) relevant, i.e., those that influence the discrimination of a specific class; (2) not relevant, i.e., those that do not influence the discrimination; and (3) redundant, i.e., those that do not provide any different information [64]. Redundant features can be either relevant or not relevant. The objective of any feature selection analysis is to identify relevant and non-redundant features. The methodology used here is based on the study of relevance and redundancy.
To analyze the relevance of each feature, Student's t-test was performed. The T statistic is particularly suitable for measuring the separability between two groups [64] and can be used as a distance measure to analyze the efficacy of each feature in discriminating between the Orchard and Uprooted classes [65]. T between two classes is defined as follows:
T = \frac{|m_1 - m_2|}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}
where m_1, \sigma_1^2, n_1 and m_2, \sigma_2^2, n_2 are the mean, variance and size of each class, respectively. Low T values indicate that the differences between the Uprooted and Orchard classes are small, whereas T increases as the differences between the classes grow. Before performing the t-test, the assumption of normality was evaluated using a Kolmogorov–Smirnov test. In the case of a negative result, a logarithmic transformation was applied.
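The T statistic defined above amounts to the (absolute) Welch-type t value and is straightforward to compute for one feature sampled from the two classes. A minimal sketch, with hypothetical sample names:

```python
import numpy as np

def t_separability(x1, x2):
    """Separability T between two class samples of one textural feature:
    |m1 - m2| / sqrt(s1^2/n1 + s2^2/n2), with unbiased sample variances."""
    m1, m2 = np.mean(x1), np.mean(x2)
    v1, v2 = np.var(x1, ddof=1), np.var(x2, ddof=1)
    return abs(m1 - m2) / np.sqrt(v1 / len(x1) + v2 / len(x2))
```

Larger values of T indicate a feature that separates the Uprooted and Orchard samples more strongly.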
As previously stated, T measures the statistical distance between classes for each feature but does not provide any information about the redundancy or correlation between features. For this reason, a principal component analysis was performed to qualitatively explore the interrelationships between the features and thus complete the separability analysis. Loading plots are dispersion graphs that show the location of each feature in the space defined by the obtained principal components (PCs), also called factors. The coordinates of each feature in each PC represent the correlation between that feature and each of the factors. In addition, the proximity between features indicates the degree of correlation between them; when the correlation equals one, the points coincide. Visual analysis of these graphs is useful for understanding how the features are grouped. Once the groups of features providing the same information were identified, the most relevant textural feature of each group was selected, i.e., the one providing the highest T. To carry out these feature selection analyses, 100 representative parcels for each of the Uprooted and Orchard classes were randomly selected from the study parcels.
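The redundancy analysis can be sketched as a PCA of the feature correlation matrix; the loadings (feature–PC correlations) are exactly what a loading plot displays, and features that plot close together form a correlated group from which only the highest-T member is kept. This is an illustrative numpy sketch on standardised features, not the authors' statistical software:

```python
import numpy as np

def pca_loadings(X):
    """Eigenvalues and loadings (feature-PC correlations) of the
    correlation matrix of a data matrix X with shape (n_samples, n_features)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardise features
    corr = np.cov(Z, rowvar=False)                     # = correlation matrix
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1]                     # sort PCs by variance
    vals, vecs = vals[order], vecs[:, order]
    # loading of feature f on PC k = eigenvector * sqrt(eigenvalue)
    loadings = vecs * np.sqrt(np.maximum(vals, 0.0))   # clip tiny negatives
    return vals, loadings
```

Features whose loading-plot points nearly coincide are mutually redundant; per the paper's procedure, one would retain from each such group the feature with the highest T.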

2.6. Parcel-Level Classification

A Discriminant Analysis classification algorithm was applied to identify uprooted parcels using textural information. This algorithm creates a linear combination of textural features, called a discriminant function, that maximizes the discrimination between the Uprooted and Orchard classes. The features were incorporated into this function using a method that can add as many textural features as desired to the discriminant function. Several studies have successfully used this classification method in remote sensing [66,67].
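The core idea of the discriminant function, a linear combination of features maximising between-class separation, can be sketched as a two-class Fisher discriminant. This is a simplified stand-in (whitened mean difference with a midpoint threshold), not the stepwise procedure or software the authors used; class labels 0 (Orchard) and 1 (Uprooted) are our own convention:

```python
import numpy as np

class FisherLDA:
    """Two-class Fisher linear discriminant:
    w = Sw^{-1} (m1 - m0), threshold at the midpoint of projected class means."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class scatter matrix.
        Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
              + np.cov(X1, rowvar=False) * (len(X1) - 1))
        self.w = np.linalg.solve(Sw, m1 - m0)          # discriminant direction
        self.t = 0.5 * ((X0 @ self.w).mean() + (X1 @ self.w).mean())
        return self

    def predict(self, X):
        return (X @ self.w > self.t).astype(int)       # 1 = positive class
```

In the paper's setting, `X` would hold the selected textural features per parcel and class 1 would be Uprooted.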
The identification of uprooted parcels was carried out considering: (1) all calculated features; and (2) the selected features after performing the t-test analysis and PCA. These two types of classifications were performed using GLCM based textural features (H) and wavelet planes-based features (W), both independently and in combination (H+W).

2.7. Performance Evaluation for Uprooted Parcel Identification

The automatically detected uprooted parcels were compared, parcel by parcel, with the ground truth. Each parcel used in the study fell into one of the four categories shown in the confusion matrix presented in Table 3. Quality indices for binary classification (see Table 4) were computed considering the Uprooted class as positive class.
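The quality indices reported in the results (TPR, TNR, A and P, with Uprooted as the positive class) follow directly from the four confusion-matrix counts. A small, self-contained sketch with hypothetical argument names:

```python
def binary_metrics(y_true, y_pred):
    """Quality indices for binary classification, with the Uprooted
    class encoded as 1 (positive) and Orchard as 0 (negative)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "TPR": tp / (tp + fn),            # true positive rate (sensitivity)
        "TNR": tn / (tn + fp),            # true negative rate (specificity)
        "A":   (tp + tn) / len(y_true),   # overall accuracy
        "P":   tp / (tp + fp),            # precision for the Uprooted class
    }
```

With the strong class imbalance described in Section 2.2, TPR and P are more informative than accuracy alone.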

3. Results

3.1. Feature Selection

3.1.1. Evaluation of GLCM Based Textural Features

(a) Redundancy study
The principal component analysis of GLCM based textural features extracted five principal components that reproduce 93.32% of the original variability. Most of the variability of the data is concentrated in the first three components; thus, visualization of loading plots formed by these components will provide information about data redundancy (Figure 4).
Entropy, dissimilarity, ASM and homogeneity are assigned to the first principal component (PC1) and are divided into two groups. Entropy and, to a lesser extent, dissimilarity are on the negative side of the axis, while ASM and homogeneity are on the positive side. Entropy and the group formed by ASM and homogeneity are close to the axis but located in maximally opposed positions, which indicates a strong negative correlation. On the other hand, dissimilarity is neither very close to the PC1 axis nor in an extreme position, indicating that this feature correlates to some extent with the second principal component. The axis of the second principal component (PC2) is represented mainly by contrast, with which it is highly correlated, and the axis of the third principal component (PC3) is represented mainly by the correlation and mean features, which correlate negatively. Standard deviation is not particularly correlated with any of the principal components. In PC1, PC2 and PC3, the features extracted in different directions are highly correlated with each other and are thus located at close positions.
(b) Separability Analysis
Figure 5A shows the values of T obtained for the different GLCM based features according to the direction used to generate the GLCM. The axes of the diagram represent the studied directions (0°, 45°, 90°, 135°, and All), while the lines that form a pentagon show the observed T for each feature. These pentagons are different in shape and size, indicating that the discriminant power varies for each feature, and in some cases, with the calculation angle. In addition, the pentagons intersect each other, indicating that the T ranking is not maintained for all spatial relationships.
Correlation, ASM and entropy are the features showing the highest T. ASM and entropy provide comparable values for all spatial directions, while correlation shows a slight decrease in T for 135° and 45°. On the other hand, dissimilarity and standard deviation show the lowest T, with dissimilarity meeting the equal-means assumption for all angles and standard deviation for 45°. In general, the pentagons tend to be regular and show no noticeable orientation toward any given direction, indicating analogous separability in all spatial relationships. However, for contrast, correlation and dissimilarity, there is a decline in T for 135° and 45°. Homogeneity also shows a slight variation in separability for these two spatial relationships, with a slight increase for 135° and a decrease for 45°.
The results of the studies of relevance and redundancy justify choosing entropy, contrast and correlation in the spatial direction All as representatives of GLCM based textural features.

3.1.2. Evaluation of Textural Features Based on Wavelet Planes

(a) Redundancy Study
The position of the variables in the factorial space is shown in the two-dimensional graph of factor loadings of the first and second components (Figure 6). Two clear groups of variables lie along the axis representing the first principal component (PC1) and in the area adjacent to it. On the positive side of this axis are maximum and range (and, to a lesser extent, standard deviation), while minimum is on the negative side. The fact that both sets are close to the axis and in maximally opposed positions indicates that they are negatively correlated. Meanwhile, standard deviation is the only feature with weight in the second principal component (PC2), indicating that it provides information different from that of the features grouped along the first principal component.
(b) Separability analysis
Figure 5B shows the T obtained for the studied features, varying the wavelet plane used for the calculation. The axes of the diagram represent each of the three wavelet planes and their sum (plane T), while the lines show the observed T for each feature. These diamond-shaped lines differ in size and shape but do not intersect, indicating that although the discriminatory power varies in each plane, the T ranking is maintained.
Standard deviation is the feature with the highest T, while maximum has the lowest. The rhomboids point towards plane 3; this is particularly noticeable for maximum and minimum, which show higher values of T, and therefore higher discriminant power, in this plane. The distance between the Uprooted and Orchard classes increases for all the textural features in plane 3; the separability decreases slightly for plane 2 and more pronouncedly for plane 1. These results suggest that, in regard to differentiating between the Uprooted and Orchard classes, 0.25 m resolution images have high-frequency or detail components that may be considered noise. The detail eliminated in plane 1, when going from 0.25 m to 0.5 m resolution, corresponds to small structures that are not representative of the main texture. However, when going from 1 m to 2 m, the detail corresponds to larger structures, or textures to be differentiated, making plane 3 more appropriate for discriminating between the defined classes.
The studies of relevance and redundancy justify choosing the minimum and standard deviation in plane 3 as representatives of all extracted wavelet plane-based features.

3.2. Parcel-Level Classification

Table 5 shows the results of a comparison between the automatic classifications and the ground truth. Texture features (Tex) proved useful for the identification of uprooted parcels. The highest TNR is provided by Tex_Total_H+W, with 0.9, while the maximum TPR is 0.849, which is achieved with Tex_Sel_W. The highest accuracy and precision are obtained with Tex_Sel_H+W, reaching A = 0.881 and P = 0.516.

3.3. Effectiveness of Textural Feature Selection

In this section, the effectiveness of textural feature selection is studied; thus, classifications that use all textural features (Total) and classifications that only consider features selected in Section 3.1 (Sel) are compared.
Classifications performed using all wavelet features (Tex_Total_W) provided greater accuracy and a higher true negative rate than classifications considering only the selected features. However, the Tex_Sel_W approach achieves the highest true positive rate of the entire study, reaching 0.849. Classifications produced using GLCM-based textural features follow the same pattern: they provide better accuracy and true negative rate values in the Tex_Total_H classification and a superior true positive rate in Tex_Sel_H, with 0.757. When GLCM- and wavelet-based textural features are combined, this pattern changes: whereas with each feature type used separately the Sel classifications only show higher true positive rates, with both types combined the Sel classification also achieves higher accuracy than Total.
All classifications performed using only the selected features provide higher true positive rates than classifications performed with all the calculated textural features. This justifies extracting only the GLCM entropy, contrast and correlation and the wavelet minimum and standard deviation in plane 3. Restricting the calculation to the relevant, non-redundant features greatly decreases the computation time and simplifies data management.
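As a minimal sketch of how a discriminant analysis classifier separates the two classes using only selected features, the following fits a two-class Fisher linear discriminant on hypothetical (entropy, plane-3 standard deviation) pairs. This is an illustrative stand-in, not the statistical software implementation used in the study.

```python
from statistics import mean

def fisher_lda(a, b):
    """Fit a minimal two-class Fisher linear discriminant on 2-D feature
    vectors; returns a function that is True when a parcel falls on the
    side of class `a` (here, Uprooted)."""
    ma = [mean(x[i] for x in a) for i in range(2)]
    mb = [mean(x[i] for x in b) for i in range(2)]
    # pooled within-class scatter matrix
    s = [[0.0, 0.0], [0.0, 0.0]]
    for cls, m in ((a, ma), (b, mb)):
        for x in cls:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    # discriminant direction w = S^-1 (ma - mb), threshold at the midpoint
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    thr = sum(w[i] * (ma[i] + mb[i]) / 2 for i in range(2))
    return lambda x: w[0] * x[0] + w[1] * x[1] > thr

# Hypothetical (entropy, plane-3 std) pairs per parcel
uprooted = [(6.1, 3.9), (5.8, 4.2), (6.3, 3.7), (5.9, 4.0)]
orchard = [(7.2, 7.1), (7.5, 6.8), (7.0, 7.3), (7.4, 7.0)]
classify = fisher_lda(uprooted, orchard)
```

The same construction extends directly to the five selected features by replacing the 2 × 2 scatter matrix inversion with a 5 × 5 one.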

3.4. Comparison of Textural Feature Performance

Table 5 also allows comparing the textural features based on the GLCM and on the wavelet transform. GLCM features provide better true negative rate values than wavelet features in both the Total and Sel classifications, and the combination of GLCM and wavelet features (H+W) increases the TNR in both cases. The remaining performance measures are also better when using GLCM-based features in the Total and Sel classifications. The only quality measure for which wavelet features give better values is the true positive rate with selected features.

4. Discussion

4.1. Optimisation of the Uprooted Detection Process

The present study shows that selecting the features that maximized discrimination between the Orchard and Uprooted classes improved the results. The highest true positive rates were obtained using only the selected features, indicating a better identification of uprooted parcels with this methodology. The finding that better results were achieved with fewer features is particularly interesting because it decreases the time needed to extract textural information. Reducing the computational cost is very important when extrapolating methodologies developed in pilot areas to large tracts of land. This applies especially when working with very high-resolution images such as orthophotos, where reducing the processing time and computing resources can make the difference between an efficient methodology and an impractical one. The feature selection reduces the number of calculated features from 56 to 5, a decrease of approximately 91%. In this case, calculating and exporting (into a format compatible with standard statistical analysis programs) only the selected features saves nearly 90% of the processing time.
The classification based on the selected features achieved an uprooted identification rate (i.e., true positive rate) of over 80%, an orchard identification rate (i.e., true negative rate) close to 90% and an overall accuracy of over 88%. Although no other studies centered on the identification of uprooted orchards were found, several authors have proposed approaches to automatically identify permanent crops using VHR imagery. In those studies, permanent crops were detected as the target class [4,34,37,68] or within a multi-class Land Use/Land Cover (LULC) approach [69], using different combinations of spectral and textural information. This allows an indirect comparison of our results with those obtained in previous studies.
Kass et al. [4] compared the performance of QuickBird images and orthophotos in identifying orchards and vineyards using GLCM textural features, obtaining a high overall accuracy (92% and 88% using QuickBird and orthophotos, respectively), in line with our findings. The user's accuracy obtained for the orchard class reached values close to 90% with QuickBird images and close to 80% with orthophotos. They stated that applying their methodology to the orthophotos alone resulted in a lower but still acceptable accuracy; consequently, they considered orthophotos an alternative source of information for this kind of detection, especially when QuickBird data were not available. Warner and Steinmaus [34] performed a spatial classification of orchards and vineyards with high spatial resolution IKONOS panchromatic imagery, comparing the performance of GLCM and spatial autocorrelation-based features. The latter classification resulted in an estimated accuracy of 95%, whereas a maximum likelihood classification of co-occurrence texture bands had a lower accuracy (86%). Similarly, Akar and Güngör [37] aimed to determine the extent and distribution of hazelnut and tea growing areas using spectral information and textural features derived from the GLCM and Gabor filters, with WorldView-2 images as the source of information. Their results showed that integrating spectral data and texture information provided accuracies of 83.54% and 87.89% when the textural features were derived from the GLCM and Gabor filters, respectively, and that the accuracy increased when different groups of texture variables were combined. Finally, Ruiz et al. [70] analyzed textural features based on the GLCM, energy filters and edgeness factor, Gabor filters, and the wavelet transform to classify VHR imagery in different types of landscapes; citrus orchards were also identified in a multi-class LULC approach [68]. The combination of the four types of textural features reached 80% producer's accuracy and 86% user's accuracy. The authors concluded that combining GLCM-based features with any of the other methods (energy filters, Gabor filters or wavelets) produced a significant increase in the overall accuracy levels, especially with the latter. This is probably due to the complementary nature of the methods based on wavelet filtering with respect to the direct statistical method based on the GLCM.
The results of the methodology proposed in this paper are consistent with other studies, both in accuracy and in general conclusions. This consistency confirms the potential and usefulness of orthophotos as a source of information in automatic RS procedures.

4.2. Error Sources

Several issues can be identified as possible sources of classification error. First, a lack of agreement between image boundaries and the cartography could produce objects that are not representative of reality during the parcel-level segmentation process; for this reason, it is very important to have high-quality cartography and a correctly geo-referenced image. Second, the existence of two different land uses in a single parcel is infrequent but adds complexity to the technique; it could be addressed by refining the initial segmentation using multi-level segmentation [71,72]. Third, the Uprooted class is very heterogeneous because, after the removal of the trees, the land use can be diverse; in some cases, the new texture of the parcel can be similar to the texture presented by the Orchard class, which may lead to misclassification (false negatives). Fourth, the Orchard class is more homogeneous than the Uprooted class; however, false positives can be obtained when there is low contrast between the trees and the ground (e.g., grass growing between the trees), when the trees are very small (i.e., the distribution pattern is negligible) or when the trees are randomly located throughout the parcel (Figure 7). In randomly distributed parcels, false positives occur due to the lack of a regular texture.

4.3. Automatic Identification of Uprooted Parcels as an Aid to Photointerpretation

The methodology for automatic detection of uprooted orchards presented in this paper does not aim to completely replace photointerpretation in the updating of agricultural databases, since automatic identification is restricted to parcels that are classified with high certainty [9]. Instead, it is proposed as a phase of the updating process, incorporated to reduce the number of parcels that must be photointerpreted.
Photointerpretation effort should be focused as much as possible on incorrectly classified parcels, i.e., false positives and false negatives. For example, Table 4 shows classifications in which the percentage of correctly classified parcels is near 85%. In such cases, photointerpretation should be limited to the incorrectly classified parcels, which account for approximately 15%.
In reality, when updating, there is no ground truth. Therefore, the delimitation of these false positives and negatives can be performed using the probability of group membership. When classifying a parcel using discriminant analysis, the probabilities of membership in each class (Uprooted/Orchard) are obtained. These probabilities provide significant information about the confidence of the attribution of a case to a class [73]: the expected confidence of an assignment is higher when the probability of membership in the class is high [74], whereas if a parcel has similar membership probabilities for both classes, the classification is unstable and unreliable [75]. Candidates for revision belong to this last case.
Figure 8a shows a histogram of the probability of membership in the Uprooted class provided by the classification Tex_Sel_H+W (the histogram of the probability of membership in the Orchard class is its mirror image). Parcels classified as uprooted have a membership probability in the Uprooted class higher than 0.5, while parcels classified as Orchard have a membership probability lower than 0.5. The figure shows an accumulation of parcels at the extremes of the graph: these parcels have been classified as uprooted (high probability area) or orchard (low probability area) with a high degree of confidence, whereas the parcels located close to 0.5 have been assigned to either class with a low degree of confidence. When comparing the histogram of the probability of membership in the Uprooted class with the ground truth (Figure 8b), confusion between classes can be observed in the central part of the figure, where the parcel candidates for revision are located.
The number of parcels that need to be revised depends on the chosen threshold value: the stricter the threshold, the greater the number of parcels to revise. For example, setting the threshold at 0.9 involves reviewing all the parcels with a probability of membership of less than 0.9 in both classes (probability of membership in the Uprooted class between 0.1 and 0.9). In the example, this decision involves a 60% reduction in the photointerpretation effort. If the threshold is fixed at 0.8, the photointerpretation decreases by 75%, while, if it is fixed at 0.7, this percentage reaches 85%. Few studies have addressed this issue, so it is not easy to set a threshold based on the literature; however, as an example, if the threshold is set at 0.7, extrapolating the results of this study to the total Orchard Register of Navarre 2006 would reduce the photointerpretation from 23,000 parcels to fewer than 3500.
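The effect of the review threshold can be sketched as follows: a parcel requires photointerpretation when its Uprooted-membership probability falls inside the ambiguous band around 0.5. The probability values below are hypothetical, chosen only to mimic the accumulation at the extremes visible in Figure 8a.

```python
def review_fraction(probs, threshold):
    """Share of parcels whose Uprooted-membership probability falls in the
    ambiguous band (1 - threshold, threshold) and must be photointerpreted."""
    flagged = [p for p in probs if 1 - threshold < p < threshold]
    return len(flagged) / len(probs)

# Hypothetical membership probabilities, clustered at the extremes
probs = [0.02, 0.05, 0.08, 0.12, 0.35, 0.48, 0.55, 0.72, 0.91, 0.97]
# A stricter threshold (e.g., 0.9) flags more parcels for review than 0.7
```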
In general, the obtained precision is lower than the accuracy, true positive rate and true negative rate. Precision indicates over-classification in the Uprooted class, i.e., a proportion of the parcels classified as uprooted actually are not. In the example shown in Figure 8b (P = 0.516), 48.4% of the parcels classified as uprooted are actually orchards. However, as observed in this figure, the over-classification in the Uprooted class does not involve a large proportion of all study parcels. Moreover, a large number of these parcels will be reviewed and re-assigned to their correct class during the post-classification review, based on their membership probability values. In this example, setting the review threshold at 0.9 would imply the photointerpretation and correct identification of more than 90% of the over-classified uprooted parcels. If the threshold is fixed at 0.8, the photointerpretation and re-assignment of the over-classified uprooted parcels decreases to 68%, while, if it is fixed at 0.7, this percentage reaches 51%. The truly problematic parcels are those that are incorrectly classified and also exhibit membership probability values outside the review threshold. These are parcels of the Orchard or Uprooted classes that are barely discriminable by texture alone, such as those presented in Figure 7. In the example shown in Figure 8, if the post-classification review threshold is set at 0.9, the problematic parcels that are incorrectly classified and not included in the post-classification photointerpretation phase involve a methodological error of 3%, while, if the threshold is fixed at 0.7, this error reaches 7%. This means that 3% or 7% of the parcels in the whole study area would remain incorrectly classified. The review threshold must be set by the technician depending on the admissible error of the database.

5. Conclusions

The usefulness of GLCM- and wavelet-based textural features, calculated from orthophotos at the parcel level, for automatically detecting uprooted parcels by means of a discriminant analysis classification algorithm has been demonstrated. The obtained classification results, with a true positive rate of over 80% and an accuracy of over 88%, justify the use of the orthophoto as an alternative information source to satellite imagery for differentiating between uprooted parcels and orchards. In Spain, where the PNOA provides annual orthophoto coverage for topographic mapping, this is an interesting finding from both a technical and an economic point of view. Many countries acquire orthophotos periodically for topographic and cadastral purposes. The use of orthophotos in automated RS applications, such as the one presented in this article, could extend the utility of these datasets, which are freely available in most cases, and thus reduce the need to purchase satellite imagery for this particular application.
The proposed methodology could facilitate the updating of the analyzed agricultural database by reducing the number of fields to photointerpret by 60–85%, depending on the membership threshold value selected. It could be easily adopted by different stakeholders and could significantly increase the efficiency of agricultural database updating tasks. The approach also allows visualizing the results in a GIS environment, facilitating the location of the parcels to be photointerpreted.
While this study focuses on the identification of uprooted orchards, the methodology can also be applied to locate other permanent crops, since they often have a regular plantation pattern that can be represented by textural features. The identification of uprooted vineyards, citrus groves or olive groves helps with the maintenance of the Permanent Crops Inventory, which in turn helps to meet the requirements of the European Common Agricultural Policy for subsidy assignment.

Acknowledgments

This study was funded by the Spanish National Institute for Agricultural and Food Research and Technology (INIA) through its training program for researchers.

Author Contributions

Raquel Ciriza, Lourdes Albizua, Jesús Álvarez-Mozos and María González-Audícana provided the overall conception of this research and designed the methodologies and experiments. Raquel Ciriza and Ion Sola implemented the proposed algorithms and conducted the experiments. Raquel Ciriza, Ion Sola, Jesús Álvarez-Mozos and María González-Audícana performed the data analyses and shared equally in the writing of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript, in order of appearance:
AGDBs: Agricultural geographical databases
SIFT: Scale-Invariant Feature Transform
GLCM: Grey Level Co-occurrence Matrix
VHR: Very High Resolution
UAV: Unmanned Aerial Vehicle
GIS: Geographic Information System
RS: Remote Sensing
PNOA: National Plan of Aerial Orthophotography
DMC: Digital Mapping Camera
U: Uprooted class
O: Orchard class
GT: Ground Truth
H: GLCM based features
W: Wavelet plane based features
ASM: Angular Second Moment
PCA: Principal Component Analysis
PC: Principal Component
TPR: True Positive Rate
TNR: True Negative Rate
A: Accuracy
P: Precision

References

  1. European Commission. Europeans, Agriculture and the Common Agricultural Policy (CAP). Available online: https://ec.europa.eu/agriculture/survey.es (accessed on 11 April 2017).
  2. Lister, T.W.; Lister, A.J.; Alexander, E. Land use change monitoring in Maryland using a probabilistic sample and rapid photointerpretation. Appl. Geogr. 2014, 51, 1–7. [Google Scholar] [CrossRef]
  3. Castillejo-González, I.L.; López-Granados, F.; García-Ferrer, A.; Peña-Barragán, J.M.; Jurado-Expósito, M.; de la Orden, M.S.; González-Audicana, M. Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using Quickbird imagery. Comput. Electron. Agric. 2009, 68, 207–215. [Google Scholar] [CrossRef]
  4. Kass, S.; Notarnicola, C.; Zebisch, M. Identification of orchards and vineyards with different texture-based measurements by using an object-oriented classification approach. Int. J. Geogr. Inf. Sci. 2011, 25, 931–947. [Google Scholar] [CrossRef]
  5. Martinez-Casasnovas, J.A.; Agelet-Fernandez, J.; Arno, J.; Ramos, M.C. Analysis of vineyard differential management zones and relation to vine development, grape maturity and quality. Span. J. Agric. Res. 2012, 10, 326–337. [Google Scholar] [CrossRef]
  6. Reis, S.; Taşdemir, K. Identification of hazelnut fields using spectral and gabor textural features. ISPRS J. Photogramm. Remote Sens. 2011, 66, 652–661. [Google Scholar] [CrossRef]
  7. Karakizi, C.; Oikonomou, M.; Karantzalos, K. Vineyard detection and vine variety discrimination from very high resolution satellite data. Remote Sens. 2016, 8, 235. [Google Scholar] [CrossRef]
  8. Knudsen, T.; Olsen, B.P. Automated change detection for updates of digital map databases. Photogramm. Eng. Remote Sens. 2003, 69, 1289–1296. [Google Scholar] [CrossRef]
  9. Schmedtmann, J.; Campagnolo, M. Reliable crop identification with satellite imagery in the context of common agriculture policy subsidy control. Remote Sens. 2015, 7, 9325. [Google Scholar] [CrossRef]
  10. Arivazhagan, S.; Ganesan, L. Texture classification using wavelet transform. Pattern Recognit. Lett. 2003, 24, 1513–1521. [Google Scholar] [CrossRef]
  11. Puissant, A.; Hirsch, J.; Weber, C. The utility of texture analysis to improve per-pixel classification for high to very high spatial resolution imagery. Int. J. Remote Sens. 2005, 26, 733–745. [Google Scholar] [CrossRef]
  12. Sali, E.; Wolfson, H. Texture classification in aerial photographs and satellite data. Int. J. Remote Sens. 1992, 13, 3395–3408. [Google Scholar] [CrossRef]
  13. Marceau, D.J.; Howarth, P.J.; Dubois, J.M.M.; Gratton, D.J. Evaluation of gray-level cooccurrence matrix method for land-cover classification using spot imagery. IEEE Trans. Geosci. Remote Sens. 1990, 28, 513–519. [Google Scholar] [CrossRef]
  14. Yu, C.; Qiu, Q.; Zhao, Y.; Chen, X. Satellite image classification using morphological component analysis of texture and cartoon layers. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1109–1113. [Google Scholar] [CrossRef]
  15. Coggins, J.M.; Jain, A.K. A spatial-filtering approach to texture analysis. Pattern Recognit. Lett. 1985, 3, 195–203. [Google Scholar] [CrossRef]
  16. Haralick, R.M.; Shanmuga, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC3, 610–621. [Google Scholar] [CrossRef]
  17. Curran, P.J. The semivariogram in remote sensing: An introduction. Remote Sens. Environ. 1988, 24, 493–507. [Google Scholar] [CrossRef]
  18. Myint, S.W. Fractal approaches in texture analysis and classification of remotely sensed data: Comparisons with spatial autocorrelation techniques and simple descriptive statistics. Int. J. Remote Sens. 2003, 24, 1925–1947. [Google Scholar] [CrossRef]
  19. Song, S.; Xu, B.; Yang, J. SAR target recognition via supervised discriminative dictionary learning and sparse representation of the sar-hog feature. Remote Sens. 2016, 8, 683. [Google Scholar] [CrossRef]
  20. Yang, Y.; Newsam, S. Comparing SIFT descriptors and Gabor texture features for classification of remote sensed imagery. In Proceedings of the IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 1852–1855. [Google Scholar]
  21. Chehata, N.; Le Bris, A.; Lagacherie, P. Comparison of VHR panchromatic texture features for tillage mapping. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Melbourne, Australia, 21–26 July 2013; pp. 3128–3131. [Google Scholar]
  22. Berberoglu, S.; Curran, P.J.; Lloyd, C.D.; Atkinson, P.M. Texture classification of mediterranean land cover. Int. J. Appl. Earth Observ. Geoinf. 2007, 9, 322–334. [Google Scholar] [CrossRef]
  23. Baraldi, A.; Parmiggiani, F. An investigation of the textural characteristics associated with gray-level cooccurrence matrix statistical parameters. IEEE Trans. Geosci. Remote Sens. 1995, 33, 293–304. [Google Scholar] [CrossRef]
  24. Tuominen, S.; Pekkarinen, A. Performance of different spectral and textural aerial photograph features in multi-source forest inventory. Remote Sens. Environ. 2005, 94, 256–268. [Google Scholar] [CrossRef]
  25. Kohli, D.; Warwadekar, P.; Kerle, N.; Sliuzas, R.; Stein, A. Transferability of object-oriented image analysis methods for slum identification. Remote Sens. 2013, 5, 4209. [Google Scholar] [CrossRef]
  26. Aguilar, M.A.; Saldaña, M.M.; Aguilar, F.J. Geoeye-1 and Worldview-2 pan-sharpened imagery for object-based classification in urban environments. Int. J. Remote Sens. 2013, 34, 2583–2606. [Google Scholar] [CrossRef]
  27. Eckert, S. Improved forest biomass and carbon estimations using texture measures from Worldview-2 satellite data. Remote Sens. 2012, 4, 810. [Google Scholar] [CrossRef]
  28. Daliman, S.; Rahman, S.A.; Bakar, S.A.; Busu, S. Segmentation of oil palm area based on GLCM-SVM and NDVI. In Proceedings of the IEEE Region 10 Symposium, Kuala Lumpur, Malaysia, 14–16 April 2014; pp. 645–650. [Google Scholar]
  29. Chuang, Y.-C.; Shiu, Y.-S. A comparative analysis of machine learning with Worldview-2 pan-sharpened imagery for tea crop mapping. Sensors 2016, 16, 594. [Google Scholar] [CrossRef] [PubMed]
  30. Kuffer, M.; Pfeffer, K.; Sliuzas, R.; Baud, I. Extraction of slum areas from VHR imagery using GLCM variance. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1830–1840. [Google Scholar] [CrossRef]
  31. Wang, T.; Zhang, H.; Lin, H.; Fang, C. Textural–spectral feature-based species classification of mangroves in mai po nature reserve from Worldview-3 imagery. Remote Sens. 2016, 8, 24. [Google Scholar] [CrossRef]
  32. Feng, Q.; Liu, J.; Gong, J. Urban flood mapping based on unmanned aerial vehicle remote sensing and random forest classifier—A case of Yuyao, China. Water 2015, 7, 1437. [Google Scholar] [CrossRef]
  33. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074. [Google Scholar] [CrossRef]
  34. Warner, T.A.; Steinmaus, K. Spatial classification of orchards and vineyards with high spatial resolution panchromatic imagery. Photogramm. Eng. Remote Sens. 2005, 71, 179–187. [Google Scholar] [CrossRef]
  35. Kupidura, P.; Gwadera, L. Comparison of different approaches to extract heterogeneous objects from an image using an orchards example. In Proceedings of the International Archives of the Photogrammetry, Saint-Mandé, France, 1–3 September 2010. [Google Scholar]
  36. Sertel, E.; Yay, I. Vineyard parcel identification from Worldview-2 images using object-based classification model. J. Appl. Remote Sens. 2014, 8, 083535. [Google Scholar] [CrossRef]
  37. Akar, Ö.; Güngör, O. Integrating multiple texture methods and NDVI to the random forest classification algorithm to detect tea and hazelnut plantation areas in northeast turkey. Int. J. Remote Sens. 2015, 36, 442–464. [Google Scholar] [CrossRef]
  38. Randen, T.; Husoy, J.H. Filtering for texture classification: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 291–310. [Google Scholar] [CrossRef]
  39. Ferro, C.J.S.; Warner, T.A. Scale and texture in digital image classification. Photogramm. Eng. Remote Sens. 2002, 68, 51–63. [Google Scholar]
  40. Simard, M.; DeGrandi, G.; Thomson, K.P.B. Adaptation of the wavelet transform for the construction of multiscale texture maps of sar images. Can. J. Remote Sens. 1998, 24, 264–285. [Google Scholar] [CrossRef]
  41. Recio, J.A. Técnicas de Extracción de Características y Clasificación de Imágenes Orientada a Objetos Aplicadas a la Actualización de Bases de Datos de Ocupación del Suelo. Ph.D. Thesis, Universidad Politécnica de Valencia, Valencia, Spain, 2009. [Google Scholar]
  42. Tuceryan, M.; Jain, A.K. Texture analysis. In The Handbook of Pattern Recognition and Computer Vision; Pau, L.F., Wang, P.S.P., Eds.; World Scientific Publishing Co.: New York, NY, USA, 1993. [Google Scholar]
  43. Walter, V. Object-based classification of remote sensing data for change detection. ISPRS J. Photogramm. Remote Sens. 2004, 58, 225–238. [Google Scholar] [CrossRef]
  44. Blaschke, T.; Lang, S.; Lorup, E.; Strobl, S.; Zeil, P. Object-oriented image processing in an integrated GIS/Remote Sensing environment and perspectives for environmental applications. In Environmental Information for Planning, Politics and the Public; Cremers, A., Greve, K., Eds.; Metropolis-Verlag: Marburg, Germany, 2000; Volume 2, pp. 555–570. [Google Scholar]
  45. Walter, V. Automated GIS Data Collection and Update; Wichmann, H., Ed.; Huthig Gmbh: Heidelberg, Germany, 1999; pp. 267–280. [Google Scholar]
  46. Foody, G.M.; Mathur, A. The use of small training sets containing mixed pixels for accurate hard image classification: Training on mixed spectral responses for classification by a svm. Remote Sens. Environ. 2006, 103, 179–189. [Google Scholar] [CrossRef]
  47. Foody, G.M. The significance of border training patterns in classification by a feedforward neural network using back propagation learning. Int. J. Remote Sens. 1999, 20, 3549–3562. [Google Scholar] [CrossRef]
  48. Hintze, J.L. NCSS Statistical System, User’s Guide IV; NCSS: Kaysville, UT, USA, 2007. [Google Scholar]
  49. Thessler, S.; Sesnie, S.; Ramos Bendaña, Z.S.; Ruokolainen, K.; Tomppo, E.; Finegan, B. Using k-nn and discriminant analyses to classify rain forest types in a landsat tm image over northern costa rica. Remote Sens. Environ. 2008, 112, 2485–2494. [Google Scholar] [CrossRef]
  50. Blackard, J.A.; Dean, D.J. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Comput. Electron. Agric. 1999, 24, 131–151. [Google Scholar] [CrossRef]
  51. Lopez-Granados, F.; Gomez-Casero, M.T.; Pena-Barragan, J.M.; Jurado-Exposito, M.; Garcia-Torres, L. Classifying irrigated crops as affected by phenological stage using discriminant analysis and neural networks. J. Am. Soc. Hortic. Sci. 2010, 135, 465–473. [Google Scholar]
  52. Karimi, Y.; Prasher, S.O.; McNairn, H.; Bonnell, R.B.; Dutilleul, P.; Goel, P.K. Classification accuracy of discriminant analysis, artificial neural networks, and decision trees for weed and nitrogen stress detection in corn. Trans. Am. Soc. Agric. Eng. 2005, 48, 1261–1268. [Google Scholar] [CrossRef]
  53. Gómez-Casero, M.T.; López-Granados, F.; Peña-Barragán, J.M.; Jurado-Expósito, M.; García-Torres, L.; Fernández-Escobar, R. Assessing nitrogen and potassium deficiencies in olive orchards through discriminant analysis of hyperspectral data. J. Am. Soc. Hortic. Sci. 2007, 132, 611–618. [Google Scholar]
  54. Karimi, Y.; Prasher, S.O.; McNairn, H.; Bonnell, R.B.; Dutilleul, P.; Goel, P.K. Discriminant analysis of hyperspectral data for assessing water and nitrogen stresses in corn. Trans. Am. Soc. Agric. Eng. 2005, 48, 805–813. [Google Scholar] [CrossRef]
  55. Zhang, L.; Zhang, L.; Tao, D.; Huang, X. On combining multiple features for hyperspectral remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 879–893. [Google Scholar] [CrossRef]
  56. Schiewe, J.; Tufte, L.; Ehlers, M. Potential and problems of multi-scale segmentation methods in Remote Sensing. Potenzial und Probl. Multiskaliger Segmentierungsmethoden der Fernerkund. 2001, 14, 34–39. [Google Scholar]
  57. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  58. Shapiro, L.G.; Stockman, C.G. Computer Vision; Prentice Hall: London, UK, 2001. [Google Scholar]
  59. Zhang, Q.; Wang, J.; Gong, P.; Shi, P. Study of urban spatial patterns from spot panchromatic imagery using textural analysis. Int. J. Remote Sens. 2003, 24, 4137–4160. [Google Scholar] [CrossRef]
  60. Chitre, Y.; Dhawan, A.P. M-band wavelet discrimination of natural textures. Pattern Recognit. 1999, 32, 773–789. [Google Scholar] [CrossRef]
  61. Zhang, J.; Tan, T. Brief review of invariant texture analysis methods. Pattern Recognit. 2002, 35, 735–747. [Google Scholar] [CrossRef]
  62. Nunez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1204–1211. [Google Scholar] [CrossRef]
  63. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar] [CrossRef]
  64. Ladha, L.; Deepa, T. Feature selection methods and algorithms. Int. J. Comput. Sci. Eng. 2011, 3, 1787–1797. [Google Scholar]
  65. Nguyen, T.H.; Chng, S.; Li, H. T-test distance and clustering criterion for speaker diarization. In Proceedings of the Interspeech, Brisbane, Australia, 22–26 September 2008; pp. 36–39. [Google Scholar]
  66. Lark, R.M. Geostatistical description of texture on an aerial photograph for discriminating classes of land cover. Int. J. Remote Sens. 1996, 17, 2115–2133. [Google Scholar] [CrossRef]
  67. Lobo, A.; Chic, O.; Casterad, A. Classification of mediterranean crops with multisensor data: Per-pixel versus per-object statistics and image segmentation. Int. J. Remote Sens. 1996, 17, 2385–2400. [Google Scholar] [CrossRef]
  68. Aksoy, S.; Yalniz, I.Z.; Taşdemir, K. Automatic detection and segmentation of orchards using very high resolution imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3117–3131. [Google Scholar] [CrossRef]
  69. Balaguer-Beser, A.; Ruiz, L.A.; Hermosilla, T.; Recio, J.A. Using semivariogram indices to analyse heterogeneity in spatial patterns in remotely sensed images. Comput. Geosci. 2013, 50, 115–127. [Google Scholar] [CrossRef]
  70. Ruiz, L.A.; Fdez-Sarría, A.; Recio, J.A. Texture feature extraction for classification of remote sensing data using wavelet decomposition: A comparative study. In Proceedings of the 20th ISPRS Congress, London, UK, 2004; pp. 1109–1114. [Google Scholar]
  71. Carleer, A.P.; Wolff, E. Urban land cover multi-level region-based classification of vhr data by selecting relevant features. Int. J. Remote Sens. 2006, 27, 1035–1051. [Google Scholar] [CrossRef]
  72. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. Object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  73. Foody, G.M.; Cambell, N.A.; Trodd, N.M.; Wood, T.F. Derivation and applications of probabilistic measures of class membership from the maximum likelihood classification. Photogramm. Eng. Remote Sens. 1992, 58, 1335–1341. [Google Scholar]
  74. Dean, A.M.; Smith, G.M. An evaluation of per-parcel land cover mapping using maximum likelihood class probabilities. Int. J. Remote Sens. 2003, 24, 2905–2920. [Google Scholar] [CrossRef]
  75. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for gis-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
Figure 1. Study sites (in red) superimposed on the map of municipalities of Navarre, colored according to the area in hectares (ha) of orchards cultivated per municipality.
Figure 2. Multi-stage diagram for detection of uprooted orchards by textural analysis of orthophotos using Haralick features based on GLCM and plane-based Wavelet features, both independently and in combination. Orchard and uprooted parcels are represented in green and yellow, respectively.
Figure 3. (A.1) Orthophoto of an Orchard; (A.2) orthophoto of an Uprooted orchard; (B.1) co-occurrence matrix for an Orchard, in real and logarithmic scales; (B.2) co-occurrence matrix for an Uprooted orchard; (C.1) wavelet plane 1 image for an Orchard; and (C.2) wavelet plane 1 image for an Uprooted orchard.
Figure 4. Loading plots for GLCM based features generated by: (a) the first (PC1) and the second (PC2) principal components; and (b) the first and the third (PC3) principal components.
Figure 5. T distance between Orchard and Uprooted classes by: (A) GLCM based features; and (B) features based on wavelet planes.
Figure 6. Loading plot for wavelet features generated by the first (PC1) and the second (PC2) principal components.
Figure 7. Examples of parcels that have been incorrectly classified.
Figure 8. (a) Histogram of the probability of membership in the class Uprooted obtained with Tex_Sel_H+W; (b) the same histogram compared against the ground truth.
Table 1. Textural features based on GLCM.
| GLCM-Based Feature | Equation | Direction (°) | Name |
|---|---|---|---|
| Homogeneity | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{P_{i,j}}{1+(i-j)^2}$ | 0, 45, 90, 135, All | h_h_0, h_h_45, h_h_90, h_h_135, h_h_All |
| Dissimilarity | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}\,\lvert i-j \rvert$ | 0, 45, 90, 135, All | h_d_0, h_d_45, h_d_90, h_d_135, h_d_All |
| Contrast | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}\,(i-j)^2$ | 0, 45, 90, 135, All | h_cn_0, h_cn_45, h_cn_90, h_cn_135, h_cn_All |
| Entropy | $-\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j} \ln P_{i,j}$ | 0, 45, 90, 135, All | h_e_0, h_e_45, h_e_90, h_e_135, h_e_All |
| Angular second moment | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}^{2}$ | 0, 45, 90, 135, All | h_a_0, h_a_45, h_a_90, h_a_135, h_a_All |
| Mean | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} i\,P_{i,j} = \mu_I$ | 0, 45, 90, 135, All | h_m_0, h_m_45, h_m_90, h_m_135, h_m_All |
| Standard deviation | $\sqrt{\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}\,(i-\mu_I)^2} = \sigma_I$ | 0, 45, 90, 135, All | h_s_0, h_s_45, h_s_90, h_s_135, h_s_All |
| Correlation | $\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{(i-\mu_I)(j-\mu_J)\,P_{i,j}}{\sigma_I \sigma_J}$ | 0, 45, 90, 135, All | h_cr_0, h_cr_45, h_cr_90, h_cr_135, h_cr_All |

where N is the number of grey levels, P is the normalized symmetric GLCM of dimension N × N, and $P_{i,j}$ is the (i,j)-th element of P, with
$\mu_J = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} j\,P_{i,j}$ and $\sigma_J = \sqrt{\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} P_{i,j}\,(j-\mu_J)^2}$.
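As an illustration of how the GLCM-based features above are obtained, the following numpy sketch computes entropy, contrast and correlation (the three features found most informative) from a normalized symmetric co-occurrence matrix at distance 1 in the 0° direction. The function name and the 16-level quantization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def glcm_features(img, levels=16):
    """Entropy, contrast and correlation from a normalized symmetric GLCM
    at distance 1, direction 0 degrees (horizontal neighbours).
    The 16-level quantization is an illustrative choice."""
    # Quantize grey values into `levels` bins.
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    # Count co-occurrences of horizontally adjacent pixels.
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    P = P + P.T          # make the matrix symmetric
    P /= P.sum()         # normalize to joint probabilities
    i, j = np.indices(P.shape)
    nz = P > 0
    entropy = -np.sum(P[nz] * np.log(P[nz]))
    contrast = np.sum(P * (i - j) ** 2)
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sig_i = np.sqrt(np.sum(P * (i - mu_i) ** 2))
    sig_j = np.sqrt(np.sum(P * (j - mu_j) ** 2))
    correlation = np.sum(P * (i - mu_i) * (j - mu_j)) / (sig_i * sig_j)
    return entropy, contrast, correlation
```

In practice these statistics would be computed per parcel and per direction (0°, 45°, 90°, 135° and their average) to populate the feature set of Table 1.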
Table 2. Plane-based wavelet features.
| Wavelet Feature | Equation | Plane | Name |
|---|---|---|---|
| Standard deviation | $\sqrt{\frac{1}{L-1} \sum_{i=1}^{L} (x_i - \bar{x})^2}$ | 1, 2, 3, 4 | w_s_1, w_s_2, w_s_3, w_s_4 |
| Maximum | $x_{\max}$ | 1, 2, 3, 4 | w_mx_1, w_mx_2, w_mx_3, w_mx_4 |
| Minimum | $x_{\min}$ | 1, 2, 3, 4 | w_mn_1, w_mn_2, w_mn_3, w_mn_4 |
| Range | $x_{\max} - x_{\min}$ | 1, 2, 3, 4 | w_r_1, w_r_2, w_r_3, w_r_4 |

where L is the number of pixels of the parcel, $x_i$ is the value of each pixel of the parcel, and $\bar{x} = \frac{1}{L} \sum_{i=1}^{L} x_i$.
Table 3. Confusion matrix for binary classification.
| | Classified as Uprooted | Classified as Orchard |
|---|---|---|
| GT Uprooted | True Positive (TP) | False Negative (FN) |
| GT Orchard | False Positive (FP) | True Negative (TN) |
Table 4. Quality indices for binary classification.
| Accuracy | Precision | True Positive Rate | True Negative Rate |
|---|---|---|---|
| $A = \frac{TP + TN}{TP + FP + FN + TN}$ | $P = \frac{TP}{TP + FP}$ | $TPR = \frac{TP}{TP + FN}$ | $TNR = \frac{TN}{TN + FP}$ |
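The four quality indices of Table 4 follow directly from the confusion-matrix counts of Table 3. A small sketch; the function name and the counts used in the example are illustrative, not results from the paper.

```python
def binary_quality_indices(tp, fp, fn, tn):
    """Accuracy, precision, TPR and TNR from a binary confusion matrix."""
    return {
        "A":   (tp + tn) / (tp + fp + fn + tn),  # overall accuracy
        "P":   tp / (tp + fp),                   # precision
        "TPR": tp / (tp + fn),                   # true positive rate (recall)
        "TNR": tn / (tn + fp),                   # true negative rate
    }

# Illustrative counts only:
idx = binary_quality_indices(tp=8, fp=2, fn=2, tn=88)  # A = 0.96, P = TPR = 0.8
```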
Table 5. Classifications and their results in terms of accuracy, precision, true positive rate and true negative rate.
| Feature Set | Features | Number of Features | Name | A | P | TPR | TNR |
|---|---|---|---|---|---|---|---|
| All features | GLCM | 40 | Tex_Total_H | 0.876 | 0.504 | 0.741 | 0.896 |
| All features | Wavelet | 16 | Tex_Total_W | 0.838 | 0.416 | 0.722 | 0.855 |
| All features | GLCM+Wavelet | 56 | Tex_Total_H+W | 0.871 | 0.489 | 0.672 | 0.900 |
| Selected features | GLCM | 3 | Tex_Sel_H | 0.866 | 0.477 | 0.757 | 0.881 |
| Selected features | Wavelet | 2 | Tex_Sel_W | 0.789 | 0.355 | 0.849 | 0.780 |
| Selected features | GLCM+Wavelet | 5 | Tex_Sel_H+W | 0.881 | 0.516 | 0.803 | 0.892 |

Ciriza, R.; Sola, I.; Albizua, L.; Álvarez-Mozos, J.; González-Audícana, M. Automatic Detection of Uprooted Orchards Based on Orthophoto Texture Analysis. Remote Sens. 2017, 9, 492. https://doi.org/10.3390/rs9050492
