Article

Crops Fine Classification in Airborne Hyperspectral Imagery Based on Multi-Feature Fusion and Deep Learning

1 Faculty of Resources and Environmental Science, Hubei University, Wuhan 430062, China
2 Hubei Key Laboratory of Regional Development and Environmental Response, Hubei University, Wuhan 430062, China
3 Key Laboratory of Urban Land Resources Monitoring and Simulation, MNR, Shenzhen 518034, China
4 School of Printing and Packaging, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(15), 2917; https://doi.org/10.3390/rs13152917
Submission received: 31 May 2021 / Revised: 18 July 2021 / Accepted: 19 July 2021 / Published: 24 July 2021
(This article belongs to the Special Issue Precision Agriculture Using Hyperspectral Images)

Abstract

Hyperspectral imagery has been widely used in precision agriculture due to its rich spectral characteristics. With the rapid development of remote sensing technology, airborne hyperspectral imagery provides detailed spatial information and temporal flexibility, which opens a new way toward accurate agricultural monitoring. To extract crop types from airborne hyperspectral images, we propose a fine classification method based on multi-feature fusion and deep learning. In this research, morphological profiles, GLCM texture and endmember abundance features are leveraged to exploit the spatial information of the hyperspectral imagery. The multiple spatial features are then fused with the original spectral information, and the classification result is generated by a deep neural network with conditional random field (DNN+CRF) model. Specifically, the deep neural network (DNN) is a deep recognition model that can extract depth features and mine the potential information of the data. As a discriminative model, the conditional random field (CRF) considers both spatial and contextual information to reduce misclassification noise while keeping the object boundaries. Moreover, three multi-feature fusion approaches, namely feature stacking, decision fusion and probability fusion, are taken into account. In the experiments, two airborne hyperspectral remote sensing datasets (the Honghu dataset and the Xiong’an dataset) are used. The experimental results show that the classification performance of the proposed method is satisfactory: the salt-and-pepper noise is decreased, and the boundaries of the ground objects are preserved.

1. Introduction

Accurate and timely information about agricultural resources is extremely important for agricultural development, and obtaining the area and spatial distribution of crops is an important way to acquire such information [1,2]. Traditional methods obtain crop classification results through field measurement, investigation and statistics, which are time-consuming, labor-intensive and costly [3,4]. Remote sensing technology has advanced by leaps and bounds: the resolution and timeliness of remote sensing images have improved, and hyperspectral remote sensing data have been widely used [5,6]. In particular, hyperspectral data play a great role in agricultural surveys [7,8,9,10] and have been used for crop condition monitoring, agricultural yield estimation, pest monitoring and so on. In agricultural surveys, the fine classification of hyperspectral images provides information on crop distribution [11,12,13]. Fine classification of crops requires images with high spatial and spectral resolution [14]. In recent years, airborne hyperspectral technology has developed rapidly, and airborne hyperspectral imagery can meet these requirements.
Scholars have carried out numerous studies on crop fine classification [15,16]. Galvão et al. used hyperspectral data acquired by the EO-1 satellite and established a discriminant model through stepwise regression analysis to identify and classify five sugarcane varieties in southeastern Brazil, with a classification accuracy of 87.5% [17]. Li et al. used Hyperion data and adopted a linear spectral mixture model together with the support vector machine (SVM) method to extract the litchi planting area in the north of Guangzhou. The results showed that linear spectral unmixing of mixed pixels combined with SVM can overcome the difficulty of extracting training samples and make full use of the features of Hyperion hyperspectral images; the extraction accuracy of litchi was 85.3% [18]. Bhojaraja et al. used the spectral angle mapper (SAM) method on Hyperion data to map the areca area in Karnataka, India, with an accuracy of 73.68% [19]. Airborne hyperspectral remote sensing images have higher spatial and spectral resolution. Therefore, airborne hyperspectral imaging technology has been increasingly applied in disaster monitoring, precision agriculture, forest pest monitoring and prevention and other fields, and good results have also been obtained in crop fine classification. Melgani and Bruzzone compared two support vector machine methods, a linear SVM without kernel transformation and a nonlinear SVM based on a Gaussian kernel function; using AVIRIS data to identify and classify corn and other crops, the accuracy exceeded 80% [20]. Taking AVIRIS data as the data source, Tarabalka et al. first used a support vector machine to classify the image pixel by pixel and then used a Markov random field (MRF) to refine the classification results with spatial context information; combining SVM and MRF, they classified soybean and wheat with an accuracy of 92% [21]. Wei et al. selected the Hanchuan and Honghu UAV datasets for experiments, extracted spatial features from UAV hyperspectral images and fused them with spectral features as the unary potential function of a conditional random field; the accuracy of strawberry, cotton and Chinese cabbage was over 98% [22]. Liu et al. layered the experimental data (Beijing Shunyi) step by step to extract and mine crop information, choosing different parameters and extraction methods for each layer according to different goals [23]; the accuracy of wheat, corn and other ground objects was above 95%. Yu et al. used a support vector machine to obtain probability images of ground objects in the Salinas dataset and classified them with a conditional random field model, with class accuracies above 94% [24]. Li et al. proposed two spatial-contextual support vector machine methods for crop classification: one based on MRFs, which uses spatial features in the original space, and another that uses nearest-neighbor spatial features in the feature space; the overall classification accuracy over the 16 classes was 95.5% [25].
However, the airborne hyperspectral image contains much richer spatial information, the above methods are insufficient for mining it, and it is difficult for them to obtain good results [26,27]. Therefore, we propose a method for the fine classification of crops in airborne hyperspectral imagery based on multi-feature fusion and deep learning. First, three spatial features are extracted from the hyperspectral images, namely morphological profiles, GLCM texture and endmember abundance features, and the extracted spatial features are fused with the original spectral information for crop recognition. Mathematical morphology is a non-linear image processing theory that analyzes the spatial relationship between pixels in an image with a structuring element of given size and shape. The GLCM expresses texture by calculating the joint conditional probability density between the gray levels of pixel pairs; this feature reflects the correlation between the gray levels of pixels in the image and the statistical properties of the texture, so GLCM texture can mine the internal information of the image and exploit its structure and subtle properties. Hyperspectral remote sensing images also contain mixed pixels: the same object can exhibit different spectra, and different objects can share similar spectra, which motivates the endmember abundance features. Moreover, a deep neural network model is used. The DNN model has multiple fully connected hidden layers; in the experiment, it is used to learn the potential features of the airborne hyperspectral imagery, mine the internal information and obtain the probability image. The conditional random field is used as a classifier to remove noise and preserve the boundaries of ground features. As a discriminative model, the CRF directly models the posterior probability of the label field given the observation field. The probability image is taken as the unary potential function of the conditional random field model, thus reducing the salt-and-pepper noise.
The remainder of this article is organized as follows: Section 2 describes the spatial features, fusion methods and classification model. Section 3 introduces the datasets and analyzes the experimental results. Section 4 concludes the article.

2. Materials and Methods

Figure 1 shows the flowchart of the proposed method for the fine classification of crops in airborne hyperspectral remote sensing images using multi-feature fusion and deep learning. The original hyperspectral image is reduced in dimensionality by principal component analysis (PCA), and the first four principal components are selected as the base images. The morphological features, texture information and endmember abundance features are extracted from these base images to mine the spatial information. Subsequently, the DNN-CRF is employed as the classification model to mine the potential information and obtain the classification results.
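As a minimal, hedged sketch of this preprocessing step (the use of NumPy and scikit-learn, and all function and variable names, are our own assumptions rather than the authors' implementation), the base images could be produced as follows:

import numpy as np
from sklearn.decomposition import PCA

def pca_base_images(cube, n_components=4):
    # cube: hyperspectral image of shape (rows, cols, bands)
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(np.float64)
    components = PCA(n_components=n_components).fit_transform(flat)
    # return the first principal components as single-band "base images"
    return components.reshape(rows, cols, n_components)

The spatial features of Section 2.1 are then computed from these base images.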

2.1. Multiple Feature Extraction

Hyperspectral images contain abundant information, and different features express different details of the image. Fusing multiple features is beneficial for overcoming the problem of insufficient information in any single feature, and multi-feature fusion has therefore greatly promoted the accuracy of image classification.

2.1.1. Texture Features

Hyperspectral remote sensing images contain rich texture information. Texture is an intrinsic feature common to all object surfaces and carries important information about the distribution of objects and their neighborhood relations [28]. The gray-level co-occurrence matrix (GLCM) is often used to extract texture characteristics [29]. By calculating the correlation between the gray levels of two pixels at a certain distance and in a certain direction, it reflects comprehensive information about the direction, interval, amplitude and speed of change of the image [30].
Suppose f(x, y) is a two-dimensional digital image of size M × N with N_g gray levels, #X denotes the number of elements in a set X, and P is an N_g × N_g matrix. If the distance between (x_1, y_1) and (x_2, y_2) is d and the angle is θ (0°, 45°, 90° and 135°), the GLCMs for the various distances and angles are:
P(i, j, d, 0^{\circ}) = \#\{((x_1, y_1), (x_2, y_2)) \in (M \times N) \mid x_1 - x_2 = 0,\ |y_1 - y_2| = d,\ f(x_1, y_1) = i,\ f(x_2, y_2) = j\}
P(i, j, d, 45^{\circ}) = \#\{((x_1, y_1), (x_2, y_2)) \in (M \times N) \mid (x_1 - x_2 = d,\ y_1 - y_2 = -d)\ \text{or}\ (x_1 - x_2 = -d,\ y_1 - y_2 = d),\ f(x_1, y_1) = i,\ f(x_2, y_2) = j\}
P(i, j, d, 90^{\circ}) = \#\{((x_1, y_1), (x_2, y_2)) \in (M \times N) \mid |x_1 - x_2| = d,\ y_1 - y_2 = 0,\ f(x_1, y_1) = i,\ f(x_2, y_2) = j\}
P(i, j, d, 135^{\circ}) = \#\{((x_1, y_1), (x_2, y_2)) \in (M \times N) \mid (x_1 - x_2 = d,\ y_1 - y_2 = d)\ \text{or}\ (x_1 - x_2 = -d,\ y_1 - y_2 = -d),\ f(x_1, y_1) = i,\ f(x_2, y_2) = j\}
In this method, we use six measures, namely mean, homogeneity, contrast, dissimilarity, entropy and angular second moment, to depict the textural information of the image. Specifically, the mean represents the regularity of the image gray values, and homogeneity represents the uniformity of the local gray levels [30]. Contrast describes the sharpness and texture depth of the image, and dissimilarity measures the degree of difference. Entropy expresses the complexity or unevenness of the image texture, and the angular second moment indicates the uniformity of the local gray-level distribution and the width of the texture.
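In the paper, these measures are computed in a moving window (7 × 7 in Section 3.2.1). The following sketch, assuming scikit-image (graycomatrix is named greycomatrix in older versions) and a single quantized window, illustrates the per-window computation; the manual mean and entropy terms are our additions because graycoprops does not provide them in all versions:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_measures(window, levels=32, distance=1):
    # quantize the window to a small number of gray levels
    q = np.digitize(window, np.linspace(window.min(), window.max(), levels + 1)[1:-1]).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]          # 0, 45, 90 and 135 degrees
    glcm = graycomatrix(q, [distance], angles, levels=levels, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean() for p in ("homogeneity", "contrast", "dissimilarity", "ASM")}
    p = glcm.mean(axis=(2, 3))                                  # average matrix over distances and angles
    i = np.arange(levels)
    feats["mean"] = float((p * i[:, None]).sum())               # GLCM mean
    feats["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return feats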

2.1.2. Endmember Abundance Features

Affected by sensor mixing effects and atmospheric transmission, hyperspectral imagery contains plenty of mixed pixels. To reduce the limitations imposed by mixed pixels on the classification of hyperspectral images, endmember abundance features are extracted [31]. An endmember is a characteristic object with a relatively fixed spectrum. Sequential Maximum Angle Convex Cone (SMACC) is a method based on the convex cone model, which uses constraint conditions to identify the endmember spectra of the image [32]. First, a convex cone is determined by the extreme point and the first endmember spectrum. Then, the next endmember spectrum is generated by applying an oblique projection under the constraint conditions, and the cone is extended to generate a new endmember spectrum until the specified number of endmembers is reached. In this paper, the SMACC method is used to extract the endmember spectra from the image. The mathematical formula of the SMACC method is:
H(c, i) = \sum_{k=1}^{N} R(c, k)\, A(k, j)
where H is the endmember spectrum, c is the band index and i is the pixel index, k is the index running from 1 to the largest endmember, R is the matrix containing the endmember spectra, and A is the abundance matrix describing the contribution of endmember j to endmember k in each pixel.
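SMACC itself is usually run in dedicated software such as ENVI. Purely as an illustrative stand-in (not the SMACC algorithm), once the endmember spectra R are available, per-pixel abundances can be approximated by non-negative least squares; the helper below and its names are our assumptions:

import numpy as np
from scipy.optimize import nnls

def abundance_maps(cube, endmembers):
    # cube: (rows, cols, bands); endmembers: (bands, n_endmembers) matrix R
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    abundances = np.empty((pixels.shape[0], endmembers.shape[1]))
    for idx, spectrum in enumerate(pixels):
        abundances[idx], _ = nnls(endmembers, spectrum)   # solve R a ≈ x subject to a >= 0
    return abundances.reshape(rows, cols, -1)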

2.1.3. Morphological Profiles

Mathematical morphology is a non-linear image analysis theory used to mine the morphological profiles of target objects [33]. The basic operations of the morphological algorithm are erosion, dilation, opening and closing [34], where opening and closing are combined operations of erosion and dilation. The opening operation performs dilation on an eroded image, which removes brighter structures in the image. In contrast, the closing operation performs erosion on a dilated image, which removes darker structures in the image.
Using opening and closing by morphological reconstruction, the shape and structure of objects can be preserved while fine noise is removed; the opening and closing operators have proved effective in exploiting spatial information for the classification of hyperspectral images. Let γ_SE(I) be the morphological opening by reconstruction of image I with structuring element SE (an SE has properties such as size and shape), and φ_SE(I) the corresponding morphological closing. The morphological profiles (MPs) are defined over a series of SEs of increasing size:
MP_{\gamma} = \{ MP_{\gamma}^{\lambda}(I) = \gamma_{\lambda}(I) \}, \quad \lambda \in [0, n]
MP_{\varphi} = \{ MP_{\varphi}^{\lambda}(I) = \varphi_{\lambda}(I) \}, \quad \lambda \in [0, n]
\text{with } \gamma_{0}(I) = \varphi_{0}(I) = I
In the formula, λ is the radius of the SE, for which a disk is commonly used. A grayscale image can thus be used to generate MPs by opening/closing by reconstruction, and a set of SEs of gradually increasing size reveals the multi-scale information of the image.
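A minimal sketch of how such a profile could be built for one base image with scikit-image (the disk radii follow Section 3.2.1; the function name and stacking layout are our choices):

import numpy as np
from skimage.morphology import disk, erosion, dilation, reconstruction

def morphological_profile(image, radii=(1, 3, 5, 7)):
    profile = [image]                                          # lambda = 0: the original base image
    for r in radii:
        se = disk(r)
        # opening by reconstruction: erode, then reconstruct by dilation
        opened = reconstruction(erosion(image, se), image, method="dilation")
        # closing by reconstruction: dilate, then reconstruct by erosion
        closed = reconstruction(dilation(image, se), image, method="erosion")
        profile.extend([opened, closed])
    return np.stack(profile, axis=-1)                          # (rows, cols, 2*len(radii)+1)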

2.2. Fusion Strategy

2.2.1. Decision Fusion

The decision fusion method is often used to fuse multiple features in image classification [35]. According to their mathematical foundations, decision fusion methods can be roughly divided into four types: methods based on evidence theory, methods based on probability, methods based on fuzzy logic, and voting and election strategies. The basic idea of the decision fusion strategy is that each voter evaluates and ranks the different candidates; the votes of all voters are then counted, and the candidate with the largest number of votes wins the competition.
Decision fusion is a data reduction process that maps multiple inputs to a smaller number of outputs [36]. First, each of the three features extracted from the original image is fused with the spectral information, and the respective classification result is obtained for each feature. Then, the classification results of the individual features are combined by decision fusion to obtain the final classification result: for each pixel, the most frequently occurring category is used as the label. In this way, a single classification image is produced from the classification results of the multiple features.
Here, A_m is the number of votes counted for candidate m and n is the number of features. If candidate m receives the largest number of votes after the k classifier evaluations, it wins the competition and is considered the best, i.e., it is selected as the final label.
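A minimal sketch of this per-pixel majority vote (assuming each feature's classification result is an integer label map of the same shape; the helper name is ours):

import numpy as np

def decision_fusion(label_maps):
    stacked = np.stack(label_maps, axis=-1)                    # (rows, cols, n_features)
    n_classes = int(stacked.max()) + 1
    # count, for every pixel, how many feature-specific classifiers voted for each class
    votes = np.stack([(stacked == c).sum(axis=-1) for c in range(n_classes)], axis=-1)
    return votes.argmax(axis=-1)                               # label with the most votes wins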

2.2.2. Probability Fusion

Probability fusion is based on the probability outputs of the classifier. The probability outputs obtained with the different features are calculated, and the probability fusion is performed on them. The main steps are as follows: first, the classification probability image of each spatial feature and of the spectral feature is obtained through the classifier; then, the probability images of the multiple features are fused into a single probability image, and the classification result is obtained from this fused probability image.
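The paper does not spell out the exact combination rule; the sketch below assumes a simple per-class average of the probability maps followed by an argmax, which is one common choice:

import numpy as np

def probability_fusion(prob_maps):
    # prob_maps: list of (rows, cols, n_classes) probability images, one per feature
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)       # averaged probability image
    return fused.argmax(axis=-1), fused                        # hard labels and fused probabilities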

2.2.3. Stacking Fusion

Stacking fusion classifies on the combination of feature vectors. The steps of the stacking fusion strategy are as follows: first, the extracted spatial features are combined with the spectral information to form a new feature vector, which is used as the input to the classifier; the classification image is thus the result of fusing the spatial and spectral features. In other words, stacking fusion fuses the features before classification. Stacking fusion is represented as:
\gamma = [\varphi_{spec}^{T} X_{spec},\ \varphi_{spat}^{T} X_{spat}]
where X_spec is the spectral feature and X_spat denotes the spatial features, i.e., the extended morphological profiles, GLCM texture and endmember abundance features; γ is the fused feature and φ is the linear mapping matrix of the corresponding feature.
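A minimal sketch of the stacking step (assuming the spectral image and each spatial feature are arrays sharing the same spatial size; names are ours):

import numpy as np

def stacking_fusion(spectral, spatial_features):
    # concatenate spectral bands and spatial feature maps along the channel axis
    stacked = np.concatenate([spectral] + list(spatial_features), axis=-1)
    # flatten to (n_pixels, n_features) so the result can be fed to the classifier
    return stacked.reshape(-1, stacked.shape[-1])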

2.3. Image Classification

2.3.1. Deep Neural Networks

The deep neural network (DNN) has a strong learning ability and has often been used for image classification. Here, the DNN is used as the classification model to learn the potential features of the images. The basic structure of a DNN is composed of an input layer, several hidden layers and an output layer. After the input, linear relationships are learned in the hidden layers, and the output is obtained through activation functions.
The training of a deep neural network includes forward propagation and back propagation. The forward propagation algorithm performs a series of linear operations and activation operations on the input vector using multiple weight coefficient matrices and bias vectors. The back propagation algorithm minimizes the selected loss function and updates the series of weight matrices and bias vectors. Deep features of the high-dimensional target data are mined by constructing multiple hidden layers of connected neurons. The structure of the DNN is shown in Figure 2.
The forward propagation algorithm uses the weight coefficient matrices W and bias vectors b. After the data are input, the result of each layer is calculated from the output of the previous layer. The output is not limited to a single neuron; the output layer can have multiple neurons. The forward propagation formula is:
a^{l} = \sigma(z^{l}) = \sigma(W^{l} a^{l-1} + b^{l})
where l is the layer index, W^l is the weight matrix of hidden or output layer l, b^l is its bias vector, and a^l is the output of layer l. Back propagation is the core of deep learning: a loss function is defined to measure the gap between the probability output of the model and the real samples, and here the cross entropy is selected as the loss function. The back propagation algorithm is the reverse of forward propagation: it propagates the error backwards from layer L to the first layer and revises W and b through repeated iterations until the final classification parameters W and b are obtained.
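A minimal NumPy sketch of the forward pass defined by the equation above (a sigmoid activation is assumed for illustration; the softmax output layer, cross-entropy loss and back-propagation updates are omitted):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(a0, weights, biases):
    # a0: input feature vector; weights[l], biases[l]: W and b of layer l+1
    a = a0
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)                                 # a^l = sigma(W^l a^(l-1) + b^l)
    return a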

2.3.2. Conditional Random Field

Conditional Random Field (CRF) is a class of statistical modeling method often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering “neighboring” samples, a CRF can take context into account. To do so, the prediction is modeled as a graphical model, which implements dependencies between the predictions. What kind of graph is used depends on the application.
The CRF model, as a discriminative model, is extensively used for image classification and target labeling. The conditional random field uses a unified probabilistic framework to model the local neighborhood interactions between random variables. It directly models the posterior probability of the labels and the corresponding Gibbs energy, and the label image that maximizes the posterior probability is obtained through the maximum a posteriori (MAP) rule. In other words, the CRF model directly models the posterior distribution of the labels x given the observation y.
The unary potential function models the relationship between the labels and the observed image data: it scores a single pixel with a specific category label through its feature vector. The binary potential function models the spatial context between a pixel and its neighborhood by considering both the label field and the observation field. This paper uses the probability output of the DNN classification to define the unary potential function of the CRF model.
The calculation process of the conditional random field is as follows:
E(x \mid y) = \sum_{i \in V} \psi_{i}(x_i, y) + \lambda \sum_{i \in V,\, j \in N_i} \psi_{ij}(x_i, x_j, y)
where V is the set of all pixels of the observed data and N_i is the neighborhood of pixel i; ψ_i(x_i, y) and ψ_ij(x_i, x_j, y) are the unary and binary potential functions defined on the local region of pixel i, and E(x | y) is the total energy. The adjustment parameter λ of the binary potential function is a non-negative constant used to balance the influence of the unary and binary potential functions.
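As a hedged illustration of this energy (not the authors' implementation), the sketch below reduces the pairwise term to a simple Potts penalty on the 4-neighborhood, takes the unary term from the negative log of the DNN class probabilities, and uses λ = 1.6 from Section 3.2.1; MAP inference itself (e.g., graph cuts or mean field) is not shown:

import numpy as np

def crf_energy(labels, unary, lam=1.6):
    # labels: (rows, cols) integer label map; unary: (rows, cols, n_classes), e.g. -log(DNN probabilities)
    rows, cols = labels.shape
    e_unary = unary[np.arange(rows)[:, None], np.arange(cols)[None, :], labels].sum()
    # Potts pairwise term: penalize neighbouring pixels that carry different labels
    e_pairwise = (labels[1:, :] != labels[:-1, :]).sum() + (labels[:, 1:] != labels[:, :-1]).sum()
    return e_unary + lam * e_pairwise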

3. Experiments

3.1. Experimental Data

3.1.1. Honghu Dataset

Honghu City is under the jurisdiction of Jingzhou City, Hubei Province. It is located in the central-southern part of Hubei Province, between 113°07′ and 114°05′ east longitude and 29°39′ and 30°12′ north latitude (Figure 3). There are 102 lakes in Honghu City, and Honghu is known as the “Kidney of Hubei”.
The first set of experimental data is the open-source high-resolution hyperspectral dataset (Honghu dataset, Figure 4) acquired in Honghu City, Hubei Province, in November 2017 using a 17-mm focal length Headwall Nano-Hyperspec imaging sensor mounted on a DJI Matrice 600 Pro UAV platform. The dataset is provided by the Intelligent Data Extraction and Remote Sensing Analysis Group (RSIDEA) of Wuhan University. The original image is shown in Figure 4a; it has a spatial resolution of 0.4 m, a size of 400 × 400 pixels and 270 bands from 400 to 1000 nm. Table 1 shows the land-cover types and corresponding pixel numbers of the Honghu dataset.

3.1.2. Xiong’an Dataset

Xiong’an New District is located in Baoding City, Hebei Province, China (Figure 5). The planning scope covers Xiongxian, Rongcheng, Anxin and some surrounding areas in Hebei Province. The Xiong’an New Area is located in the mid-latitude zone, with a warm temperate monsoon continental climate.
In October 2017, the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences and the Shanghai Institute of Technical Physics of the Chinese Academy of Sciences conducted an aerial hyperspectral remote sensing data acquisition experiment in Xiong’an New District, Hebei Province (Xiong’an dataset, Figure 6). The hyperspectral image of Horseshoe Bay Village in Xiong’an New District was collected by the full-spectrum multimodal imaging spectrometer of the high-resolution special aviation system, with a spatial resolution of 0.5 m, a size of 3750 × 1580 pixels and 250 bands from 400 to 1000 nm. Table 2 shows the land-cover types and corresponding pixel numbers of the Xiong’an dataset.

3.2. Experiment Description

3.2.1. Experimental Setup

In order to verify the effectiveness of the proposed method, we compared the following seven sets of experiments: the original spectral image, GLCM texture, morphological profiles, endmember abundance features, decision fusion, probability fusion and stacking fusion.
The airborne hyperspectral image has rich spectral characteristics. For dimensionality reduction, we use PCA to reduce the airborne hyperspectral image to the first eight principal components. Using the data after PCA as the basic data source, texture features are extracted through the GLCM: the window size is set to 7 × 7, the directions are set to 0°, 45°, 90° and 135°, and the average of the results in the four directions is used to represent the GLCM texture. The endmember spectra are extracted with the RMS error tolerance set to 0, so that the abundance images and spectra can be obtained. The morphological profiles are obtained by morphological opening and closing by reconstruction, with the radius of the disk operator set to 1, 3, 5 and 7.
The deep neural network has five hidden layers with 29 neurons in each layer. The learning rate is set to 0.00001, the number of iterations is 1800, and the minibatch size is set to 27. In order to avoid overfitting, the dropout method is used to randomly drop 30% of the neural nodes, which reduces the network complexity and improves the generalization ability of the model. Based on a large number of experiments, the CRF parameters λ and θ are set to 1.6 and 3.0, respectively. The accuracy of each crop, the overall accuracy (OA) and the Kappa coefficient (Kappa) are used to evaluate the classification results. The Kappa coefficient is a measure of classification accuracy, which can be calculated by:
Kappa = \frac{P_o - P_e}{1 - P_e}
where P_o is the sum of the correctly classified samples of each class divided by the total number of samples, that is, the overall classification accuracy. Suppose the number of real samples of each class is A_1, A_2, …, A_C, the number of predicted samples of each class is B_1, B_2, …, B_C, and the total number of samples is n. Then P_e is given by:
P_e = \frac{\sum_{i=1}^{C} A_i B_i}{n^2}
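A minimal sketch of this computation for integer-labeled samples (comparable to sklearn.metrics.cohen_kappa_score; variable names are ours):

import numpy as np

def kappa_coefficient(y_true, y_pred, n_classes):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = y_true.size
    p_o = float((y_true == y_pred).mean())                     # overall accuracy
    a = np.bincount(y_true, minlength=n_classes)               # real samples per class (A_i)
    b = np.bincount(y_pred, minlength=n_classes)               # predicted samples per class (B_i)
    p_e = float((a * b).sum()) / (n * n)                       # chance agreement
    return (p_o - p_e) / (1.0 - p_e)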

3.2.2. Experimental Results

The classification results for the Honghu dataset are shown in Figure 7. Figure 7a is the classification result of the original image; it still contains many misclassified ground objects. The white radish and small brassica chinensis in the lower left corner of the image are misrecognized, and the carrot in the upper right corner is classified as tuber mustard. Figure 7b is the endmember abundance classification result: the cabbage in the lower left corner is misclassified as film-covered lettuce, some brassica chinensis is mistakenly classified as rape and small brassica chinensis, and the romaine lettuce in the middle is also mistakenly classified as film-covered lettuce. The result of the GLCM texture is shown in Figure 7c; in addition to the misclassification of lactuca sativa and carrot, some small brassica chinensis is classified as pakchoi cabbage. The classification result of the morphological profiles is slightly improved in Figure 7d; however, the cabbage is still classified as small green vegetables and film-covered lettuce, and some of the green vegetables are mistakenly classified as small green vegetables and rape. The results of decision fusion, probability fusion and stacking fusion are shown in Figure 7e–g. The classification results of the three fusion strategies are satisfactory: almost all categories are classified correctly, although some misclassifications remain. For example, in the decision fusion result, some Chinese cabbage is wrongly classified as bare soil and rape, cabbage is wrongly classified as small green vegetables, and some small green vegetables are mixed with rape. In the probability fusion result, the various classes are better distinguished, and the accuracy of carrot, sprouting garlic, celtuce, etc., is greatly improved.
Table 3 gives the classification accuracy of the different features and fusion strategies on the Honghu dataset. The OA of the original spectral image is 91.05%, and the accuracy of celtuce, romaine lettuce and carrot is 0%. The overall accuracy of classification using endmember abundance is 91.77%; compared with the classification result of the original image, the accuracy of some classes, such as celtuce and carrot, is improved, but pakchoi cabbage is misclassified. The OA of GLCM texture and morphological profiles is 91.92% and 93.64%, respectively, and the accuracy of some crops such as bare soil, cotton and lettuce is improved. The accuracy of multi-feature fusion is more than 95%, and the 18 categories in the classification result are basically consistent with the ground truth. The probability fusion and stacking fusion classification results are generally better, with OA reaching 96.89% and 98.71%, respectively. The accuracy of multi-feature fusion is higher than that of single-feature classification, which shows that the fusion of multiple features is helpful for the fine classification of crops.
Figure 8 shows the classification results for the Xiong’an dataset. Figure 8a is the classification result of the original image: the large areas are classified well, but the accuracy of small classes such as peach, vegetable field and locust is 0%. The classification result of endmember abundance is shown in Figure 8b; the peach in the upper part is still misclassified, and many small patches in the upper right corner are misclassified. Figure 8c is the result of the GLCM texture; compared with the first two sets of experiments, the preservation of the object boundaries is greatly improved. The morphological profiles can well maintain the shape characteristics of the image, and the result shown in Figure 8d clearly displays almost all categories regardless of their area size.
Figure 8e is the result of decision fusion, which improves the classification result of a single feature. The advantages of fusion can be seen from the classification results. The upper white wax, Koelreuteria Paniculata and the upper right of the image all have complete classification results. The classification results of probability fusion and stacking fusion are shown in Figure 8f,g. As can be seen in the classification diagram, the misclassification phenomenon in the upper middle and upper right is reduced. The classification results of multi-features fusion have been significantly improved.
The classification accuracy of the different features is shown in Table 4. In the classification result of the original spectral image, the classification accuracy of sparse forest, peach and soybean is 0%, as all of them are mistaken for pear tree; the accuracy of acer compound and corn is less than 60%, and the OA is 85.46%. In the classification result using endmember abundance, the classification accuracy of most crops is improved, but the accuracy of peach, soybean and locust is still 0%. In the classification result of the GLCM texture, the accuracy of peach increases by 3.17%, the accuracy of acer compound reaches 91.28%, and the OA reaches 90.85%. In the classification result of the morphological profiles, the accuracy of peach increases by more than 60%, rice achieves the highest accuracy of 98.83%, and the OA is 94.08%. The last three groups are the results of decision fusion, probability fusion and stacking fusion. The OA of decision fusion is 94.34% with a Kappa coefficient of 0.915; except for vegetable field and sparse forest, the classification accuracy of all crops is above 50%. The OA of probability fusion is 95.74% with a Kappa coefficient of 0.928; only the accuracy of the vegetable field is below 60%, and seven crop categories exceed 95%. In the classification result of stacking fusion, 12 categories reach more than 99%, including rice, water and willow; the OA is 99.71%, and the overall accuracy is satisfying.

3.3. Discussion

3.3.1. Effect of Sample Size

In order to verify the effect of the sample size on the results of this method, 3%, 5% and 10% of the labeled samples were used as training samples. The experimental results of the different algorithms with different training sample sizes are shown in Table 5.
From Table 5, we can see that as the number of training samples increases, the classification accuracy of the image also increases. For the Honghu dataset, the accuracy with 10% training samples reaches 99.42%, and for the Xiong’an dataset the highest accuracy is 99.94%. The accuracy of the different fusion strategies also increases with the sample size, and even with only 3% training samples the accuracy of the original image is above 90%. Therefore, the number of training samples also plays a great role in image classification.

3.3.2. Effect of Classifier

In order to verify the effect of the classifier on the results, we chose different classifiers for the experiments. Here, we compare the SVM classifier and the DNN classifier, and we still use the three fusion strategies combined with each classifier.
The sample size is 3% of the image, and the results of the different classifiers are shown in Table 6. The classification accuracy of the DNN classifier is the highest, followed by the SVM classifier. By mining the potential information of the image with deep learning, building a deep network classifier and combining it with the conditional random field, the classification accuracy is increased by several percentage points compared with the SVM. The inherent information of airborne hyperspectral images is difficult for ordinary classifiers to mine, whereas the combination of multiple features and deep learning can mine the deep information. This indicates that the DNN classifier is suitable for airborne hyperspectral image classification.

3.3.3. Effect of CRF

We also discuss the effect of the conditional random field on the classification results: one group of experiments uses the CRF, and the other group does not. In the first case, the fused features are input into the DNN model to obtain the probability image, and the classification result of the DNN method is evaluated directly from this probability image. The other case is the method proposed in this paper, in which the multi-feature fusion data are input into the model with the CRF to obtain the classification result, whose accuracy is then evaluated. The accuracy of the two settings is compared in Table 7. From the table, we can see that the accuracy of the model with CRF is about 5% higher than that of the model without CRF. Therefore, we can conclude that the conditional random field can improve the accuracy in the process of crop fine classification.

4. Conclusions

In this paper, we proposed a method for the fine classification of crops in airborne hyperspectral imagery based on multi-feature fusion and deep learning. We extracted GLCM texture, morphological profiles and endmember abundance features from the airborne hyperspectral imagery. To fuse the spatial and spectral information of the image, decision fusion, probability fusion and stacking fusion were used. At the same time, a classification model consisting of a deep neural network and a conditional random field was employed: the deep learning model mines the deep information of the image, and the CRF keeps the boundaries of the ground features intact while reducing noise. We conducted experiments on the Honghu dataset and the Xiong’an dataset. The results show that the DNN-CRF method proposed in this paper helps to improve the accuracy of crop classification. Specifically, the classification accuracy of multi-feature fusion is higher than that of a single feature, which proves that multi-feature fusion helps to improve the classification accuracy. The larger the number of training samples, the higher the accuracy that can be obtained. Additionally, the classification accuracy of the DNN is higher than that of the SVM, as the DNN mines the deep features of the image for crop fine classification. Moreover, the accuracy of the experiments with CRF is higher than that without CRF, which shows that the CRF improves the accuracy of crop classification. In future work, we will consider other types of neural networks, such as CNNs, as well as the integration of UAV and aerial images, to meet the needs of larger-scale fine classification of crops and to apply our research results to agriculture faster and better.

Author Contributions

L.W., K.W. and Y.L. were responsible for the construction of the overall framework of the paper. Y.L. collected the data and performed the data preprocessing. Q.L. and H.L. designed and implemented the experiments. Z.W., R.W. and L.C. sorted out and analyzed the experimental results. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “National Key Research and Development Program of China” (2019YFB2102902, 2017YFB0504202), the Open Fund of Key Laboratory of Urban Land Resources Monitoring and Simulation, MNR (KF-2019-04-006), the ‘‘Natural Science Foundation Key projects of Hubei Province’’ under Grant 2020CFA005, the Central Government Guides Local Science and Technology Development Projects (2019ZYYD050), the Opening Foundation of Hunan Engineering and Research Center of Natural Resource Investigation and Monitoring (2020-2), the Open Fund of the State Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University (18R02), and the Open Fund of Key Laboratory of Agricultural Remote Sensing of the Ministry of Agriculture (20170007).

Data Availability Statement

Not applicable.

Acknowledgments

The datasets are provided by the Intelligent Data Extraction and Remote Sensing Analysis Group of Wuhan University (RSIDEA). The Remote Sensing Monitoring and Evaluation of Ecological Intelligence Group (RSMEEI) helped to process the datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, S.; Lei, Y.; Wang, L.; Li, H.; Zhao, H. Crop Classification Using MODIS NDVI Data Denoised by Wavelet: A Case Study in Hebei Plain, China. Chin. Geogr. Sci. 2011, 3, 68–79.
2. Prodhan, F.A.; Zhang, J.; Yao, F.; Shi, L.; Pangali Sharma, T.P.; Zhang, D.; Cao, D.; Zheng, M.; Ahmed, N.; Mohana, H.P. Deep Learning for Monitoring Agricultural Drought in South Asia Using Remote Sensing Data. Remote Sens. 2021, 13, 1715.
3. Yang, N. Application and development of remote sensing technology in geological disaster prevention and mineral exploration. Value Eng. 2020, 39, 242–243.
4. Zhang, H.; Wang, L.; Tian, T.; Yin, J. A Review of Unmanned Aerial Vehicle Low-Altitude Remote Sensing (UAV-LARS) Use in Agricultural Monitoring in China. Remote Sens. 2021, 13, 1221.
5. Peng, X.; Han, W.; Ao, J.; Wang, Y. Assimilation of LAI Derived from UAV Multispectral Data into the SAFY Model to Estimate Maize Yield. Remote Sens. 2021, 13, 1094.
6. Bo, Y.; Liu, X. Object-Based Crop Species Classification Based on the Combination of Airborne Hyperspectral Images and LiDAR Data. Remote Sens. 2015, 7, 922–950.
7. Pádua, L.; Marques, P.; Hruška, J.; Adão, T.; Peres, E.; Morais, R.; Sousa, J. Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery. Remote Sens. 2018, 10, 1907.
8. Xiao, Z.; Gong, Y.; Long, Y.; Li, D.; Wang, X.; Liu, H. Airport Detection Based on a Multiscale Fusion Feature for Optical Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1469–1473.
9. Lianze, T.; Yong, L.; Hongji, Z.; Sijia, L. Summary of UAV Remote Sensing Application Research in Agricultural Monitoring. Sci. Technol. Inf. 2018, 16, 122–124.
10. Vincent, G.; Antin, C.; Laurans, M.; Heurtebize, J.; Durrieu, S.; Lavalley, C.; Dauzat, J. Mapping plant area index of tropical evergreen forest by airborne laser scanning. A cross-validation study using LAI2200 optical sensor. Remote Sens. Environ. 2017, 198, 254–266.
11. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
12. Camps-Valls, G.; Bruzzone, L. Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1351–1362.
13. Meyer, A.; Paglieroni, D.; Astaneh, C. K-means reclustering: Algorithmic options with quantifiable performance comparisons. In Optical Engineering at the Lawrence Livermore National Laboratory; International Society for Optics and Photonics: Bellingham, WA, USA, 2003; Volume 5001, pp. 84–92.
14. Yi, C.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral Image Classification via Kernel Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 217–231.
15. Makantasis, K.; Karantzalos, K.; Doulamis, A.; Doulamis, N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4959–4962.
16. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
17. Galvao, L.S.; Formaggio, A.R.; Tisot, D.A. Discrimination of sugarcane varieties in Southeastern Brazil with EO-1 Hyperion data. Remote Sens. Environ. 2005, 94, 523–534.
18. Li, D.; Chen, S.; Chen, X. Research on method for extracting vegetation information based on hyperspectral remote sensing data. Trans. Chin. Soc. Agric. Eng. 2010, 26, 181–185.
19. Bhojaraja, B.E.; Hegde, G. Mapping agewise discrimination of arecanut crop water requirement using hyperspectral remote sensing. In Proceedings of the International Conference on Water Resources, Coastal and Ocean Engineering, Mangalore, India, 12–14 March 2015; pp. 1437–1444.
20. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
21. Tarabalka, Y.; Fauvel, M.; Chanussot, J.; Benediktsson, J.A. SVM- and MRF-Based Method for Accurate Classification of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2010, 7, 736–740.
22. Wei, L.; Yu, M.; Zhong, Y.; Zhao, J.; Liang, Y.; Hu, X. Spatial–Spectral Fusion Based on Conditional Random Fields for the Fine Classification of Crops in UAV-Borne Hyperspectral Remote Sensing Imagery. Remote Sens. 2019, 11, 780.
23. Liu, L.; Jiang, X.; Li, X.; Tang, L. Study on Classification of Agricultural Crop by Hyperspectral Remote Sensing Data. J. Grad. Sch. Chin. Acad. Sci. 2006, 23, 484–488.
24. Wei, L.; Yu, M.; Liang, Y.; Yuan, Z.; Huang, C.; Li, R.; Yu, Y. Precise Crop Classification Using Spectral-Spatial-Location Fusion Based on Conditional Random Fields for UAV-Borne Hyperspectral Remote Sensing Imagery. Remote Sens. 2019, 11, 2011.
25. Li, C.H.; Kuo, B.C.; Lin, C.T.; Huang, C.S. A Spatial–Contextual Support Vector Machine for Remotely Sensed Image Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 784–799.
26. Zhao, C.; Luo, G.; Wang, Y.; Chen, C.; Wu, Z. UAV Recognition Based on Micro-Doppler Dynamic Attribute-Guided Augmentation Algorithm. Remote Sens. 2021, 13, 1205.
27. Singh, J.; Mahapatra, A.; Basu, S.; Banerjee, B. Assessment of Sentinel-1 and Sentinel-2 Satellite Imagery for Crop Classification in Indian Region During Kharif and Rabi Crop Cycles. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3720–3723.
28. Bai, J.; Xiang, S.; Pan, C. A Graph-Based Classification Method for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 803–817.
29. Ding, H.; Wang, X.; Wang, Y.; Luo, H. Ensemble Classification of Hyperspectral Images by Integrating Spectral and Texture Features. J. Indian Soc. Remote Sens. 2019, 47, 113–123.
30. AlSuwaidi, A.; Grieve, B.; Yin, H. Combining spectral and texture features in hyperspectral image analysis for plant monitoring. Meas. Sci. Technol. 2018, 29, 104001.
31. Wang, Y.; Yu, W.; Fang, Z. Multiple Kernel-Based SVM Classification of Hyperspectral Images by Combining Spectral, Spatial, and Semantic Information. Remote Sens. 2020, 12, 120.
32. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
33. Xuan, H.; Qikai, L. Hyperspectral Image Classification Algorithm Based on Saliency Profile. Acta Opt. Sin. 2020, 40, 1611001.
34. Huang, X.; Guan, X.; Benediktsson, J.A.; Zhang, L.; Li, J.; Plaza, A.; Dalla Mura, M. Multiple Morphological Profiles from Multicomponent-Base Images for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4653–4669.
35. Wang, Z.; Miao, X.; Huang, Z.; Luo, H. Research of Target Detection and Classification Techniques Using Millimeter-Wave Radar and Vision Sensors. Remote Sens. 2021, 13, 1064.
36. Licciardi, G.; Pacifici, F.; Tuia, D.; Prasad, S.; West, T.; Giacco, F.; Thiel, C.; Inglada, J.; Christophe, E.; Chanussot, J.; et al. Decision fusion for the classification of hyperspectral data: Outcome of the 2008 grs-s data fusion contest. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3857–3865.
Figure 1. The whole procedure of our proposed method.
Figure 2. Basic structure of DNN.
Figure 3. Experimental data: Map of Honghu.
Figure 4. Honghu dataset: (a) the original image; (b) the ground truth.
Figure 5. Experimental data: Map of Xiong’an region.
Figure 6. Xiong’an dataset: (a) the original image; (b) the ground truth.
Figure 7. Classification results of the Honghu dataset: (a) Original Spectral, (b) Endmember Abundance, (c) GLCM Texture, (d) Morphological Profiles, (e) Decision Fusion, (f) Probability Fusion, (g) Stacking Fusion, (h) Ground Truth.
Figure 8. Classification results of the Xiong’an dataset: (a) Original Spectral, (b) Endmember Abundance, (c) GLCM Texture, (d) Morphological Profiles, (e) Decision Fusion, (f) Probability Fusion, (g) Stacking Fusion, (h) Ground Truth.
Table 1. Honghu dataset.

Type | Pixel | Type | Pixel | Type | Pixel
roof | 66 | cabbage | 309 | celtuce | 30
bare soil | 354 | tuber mustard | 343 | film-covered lettuce | 217
cotton | 42 | brassica parachinensis | 189 | romaine lettuce | 90
rape | 1137 | brassica chinensis | 217 | carrot | 83
Chinese cabbage | 323 | small brassica chinensis | 477 | white radish | 122
pakchoi cabbage | 121 | lactuca sativa | 158 | sprouting garlic | 61
Table 2. Xiong’an dataset.

Type | Pixel | Type | Pixel | Type | Pixel
rice | 135,662 | peach | 19,686 | koelreuteria paniculata | 6992
water | 49,695 | corn | 17,750 | bare land | 11,523
grassland | 126,569 | pear tree | 308,285 | rice stubble | 58,149
acer compound | 67,695 | soybean | 2146 | locust | 1684
willow | 54,384 | poplar | 27,322 | sparse forest | 449
sophora japonica | 142,827 | vegetable field | 8745 | house | 8885
white wax | 50,834 | elm | 4606 | |
Table 3. The results of Honghu classification.

Types | Training Samples | Test Samples | Original Image | Endmember Abundance | GLCM Texture | Morphological Profiles | Decision Fusion | Probability Fusion | Stacking Fusion
Red roof | 66 | 2138 | 99.50 | 100.00 | 100.00 | 100.00 | 99.53 | 99.53 | 99.67
Bare soil | 354 | 11,456 | 99.38 | 99.67 | 99.03 | 99.68 | 98.91 | 99.52 | 99.53
Cotton | 42 | 1382 | 98.75 | 98.77 | 96.74 | 99.13 | 98.91 | 99.28 | 99.93
Rape | 1137 | 36,783 | 100.00 | 99.58 | 100.00 | 99.72 | 99.31 | 99.96 | 99.83
Chinese cabbage | 323 | 10,472 | 99.73 | 99.69 | 99.64 | 99.67 | 98.42 | 99.68 | 99.27
Pakchoi cabbage | 121 | 3934 | 99.73 | 0.00 | 99.75 | 0.00 | 74.91 | 99.75 | 95.32
Cabbage | 309 | 9998 | 99.89 | 99.95 | 99.92 | 99.95 | 99.54 | 99.88 | 99.97
Tuber mustard | 343 | 11,098 | 90.10 | 98.59 | 99.20 | 98.57 | 96.27 | 98.13 | 98.88
Brassica parachinensis | 189 | 6114 | 88.72 | 96.34 | 88.39 | 96.65 | 93.62 | 88.17 | 96.24
Brassica chinensis | 217 | 7036 | 99.86 | 75.99 | 99.62 | 75.68 | 88.03 | 99.05 | 97.29
Small brassica chinensis | 477 | 15,451 | 93.58 | 94.34 | 93.12 | 94.16 | 95.92 | 92.63 | 98.33
Lactuca sativa | 158 | 5114 | 79.33 | 92.65 | 78.29 | 92.71 | 92.45 | 95.44 | 96.05
Celtuce | 30 | 973 | 0.00 | 0.00 | 87.67 | 86.84 | 89.00 | 94.71 | 97.43
Film-covered lettuce | 217 | 7046 | 99.65 | 99.52 | 99.29 | 98.85 | 97.39 | 99.87 | 99.45
Romaine lettuce | 90 | 2921 | 0.00 | 0.00 | 99.65 | 90.96 | 91.34 | 95.52 | 95.31
Carrot | 83 | 2710 | 0.00 | 94.32 | 78.65 | 94.32 | 90.00 | 98.38 | 96.64
White radish | 122 | 3960 | 86.74 | 83.43 | 64.49 | 83.43 | 90.73 | 83.66 | 97.85
Sprouting garlic | 61 | 2005 | 86.74 | 90.57 | 95.46 | 90.52 | 89.83 | 97.56 | 97.71
OA (%) | | | 91.05 | 91.77 | 91.92 | 93.64 | 95.98 | 96.89 | 98.71
Kappa | | | 0.909 | 0.906 | 0.908 | 0.927 | 0.954 | 0.964 | 0.985
Table 4. The results of Xiong’an classification.

Types | Training Samples | Test Samples | Original Image | Endmember Abundance | GLCM Texture | Morphological Profiles | Decision Fusion | Probability Fusion | Stacking Fusion
Rice | 135,662 | 316,544 | 98.40 | 98.21 | 98.28 | 74.58 | 98.60 | 98.83 | 99.95
Waters | 49,695 | 115,955 | 92.46 | 93.74 | 93.87 | 94.19 | 97.16 | 96.61 | 99.72
Grassland | 126,569 | 295,329 | 91.13 | 90.63 | 90.73 | 88.52 | 91.83 | 93.42 | 99.69
Acer compound | 67,695 | 157,954 | 53.11 | 88.56 | 91.28 | 87.40 | 91.53 | 95.08 | 99.86
Willow | 54,384 | 126,897 | 81.02 | 78.85 | 84.23 | 77.66 | 93.26 | 93.39 | 99.98
Sophora japonica | 142,827 | 333,263 | 86.21 | 83.81 | 85.73 | 85.36 | 90.62 | 94.61 | 99.95
White wax | 50,834 | 118,612 | 80.32 | 73.32 | 82.74 | 94.06 | 97.18 | 97.27 | 99.76
Peach | 19,686 | 45,934 | 0.00 | 0.00 | 3.17 | 10.71 | 52.76 | 64.27 | 98.72
Corn | 17,750 | 41,417 | 58.75 | 39.34 | 52.14 | 70.76 | 82.43 | 81.22 | 99.07
Pear tree | 308,285 | 719,331 | 97.98 | 97.68 | 97.21 | 96.56 | 96.52 | 97.60 | 99.81
Soybean | 2146 | 5007 | 0.00 | 0.00 | 12.36 | 53.55 | 54.61 | 35.97 | 98.24
Poplar | 27,322 | 63,752 | 68.62 | 60.39 | 70.49 | 71.60 | 77.16 | 80.56 | 98.66
Vegetable field | 8745 | 20,405 | 0.00 | 0.00 | 0.00 | 36.48 | 37.38 | 24.61 | 97.02
Elm | 4606 | 10,748 | 88.49 | 74.78 | 85.64 | 83.41 | 86.95 | 90.69 | 98.68
Koelreuteria paniculata | 6992 | 16,314 | 95.69 | 95.65 | 94.84 | 95.42 | 99.35 | 98.62 | 99.91
Bare land | 11,523 | 26,887 | 96.49 | 96.07 | 96.37 | 96.41 | 97.62 | 97.24 | 99.64
Rice stubble | 58,149 | 135,682 | 96.95 | 96.90 | 96.86 | 97.50 | 98.99 | 98.71 | 99.97
Locust | 1684 | 3929 | 0.00 | 0.00 | 0.00 | 17.29 | 64.89 | 0.00 | 98.07
Sparse forest | 449 | 1048 | 0.00 | 0.00 | 0.00 | 0.83 | 50.31 | 0.00 | 88.84
House | 8885 | 20,732 | 88.46 | 90.99 | 90.08 | 92.48 | 94.38 | 94.14 | 98.69
OA (%) | | | 88.86 | 87.46 | 88.85 | 89.17 | 89.34 | 92.74 | 99.71
Kappa | | | 0.836 | 0.846 | 0.863 | 0.885 | 0.888 | 0.910 | 0.995
Table 5. The results of different sample size classification (%).

Method | Honghu 3% | Honghu 5% | Honghu 10% | Xiong’an 3% | Xiong’an 5% | Xiong’an 10%
Original image | 91.05 | 91.85 | 92.32 | 88.86 | 90.78 | 91.71
Decision Fusion | 95.98 | 96.39 | 96.87 | 89.34 | 91.67 | 93.26
Probability Fusion | 96.89 | 97.22 | 97.51 | 92.74 | 93.26 | 94.92
Stacking Fusion | 98.71 | 99.09 | 99.42 | 99.71 | 99.78 | 99.94
Table 6. The results of different classifier classification.

Data | Accuracy | Decision Fusion (SVM) | Decision Fusion (DNN) | Probability Fusion (SVM) | Probability Fusion (DNN) | Stacking Fusion (SVM) | Stacking Fusion (DNN)
Honghu | OA | 89.06% | 95.98% | 91.95% | 96.89% | 95.11% | 98.71%
Honghu | Kappa | 0.875 | 0.908 | 0.908 | 0.954 | 0.946 | 0.985
Xiong’an | OA | 87.52% | 89.34% | 90.28% | 92.74% | 95.15% | 99.71%
Xiong’an | Kappa | 0.845 | 0.915 | 0.903 | 0.928 | 0.929 | 0.995
Table 7. The results of different model classification.

Data | Accuracy | Decision Fusion (without CRF) | Decision Fusion (with CRF) | Probability Fusion (without CRF) | Probability Fusion (with CRF) | Stacking Fusion (without CRF) | Stacking Fusion (with CRF)
Honghu | OA | 89.89% | 95.98% | 92.76% | 96.89% | 94.91% | 98.71%
Honghu | Kappa | 0.884 | 0.908 | 0.910 | 0.954 | 0.942 | 0.985
Xiong’an | OA | 88.72% | 89.34% | 91.03% | 92.74% | 91.56% | 99.71%
Xiong’an | Kappa | 0.852 | 0.915 | 0.865 | 0.928 | 0.866 | 0.995
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
