# Decision Fusion Framework for Hyperspectral Image Classification Based on Markov and Conditional Random Fields


## Abstract


## 1. Introduction

## 2. Methodology

#### 2.1. Preliminaries

#### 2.1.1. MRF Regularization

#### 2.1.2. CRF Regularization

#### 2.2. The Decision Sources

#### 2.3. MRF with Cross Links for Fusion (MRFL)

#### 2.4. CRF with Cross Links for Fusion (CRFL)

The CRFL model includes the observed data, i.e., the abundance vectors **α** and the probability vectors **p**, in the pairwise potentials (see Figure 2).

- $\psi_{i,j}^{\alpha}(y_i^{\alpha}, y_j^{\alpha} \mid \boldsymbol{\alpha}_i, \boldsymbol{\alpha}_j) = \left(1 - \delta(y_i^{\alpha}, y_j^{\alpha})\right) \exp\!\left(-\frac{\|\boldsymbol{\alpha}_i - \boldsymbol{\alpha}_j\|_2^2}{\sigma^{\alpha}}\right),$
- $\psi_{i,j}^{p}(y_i^{p}, y_j^{p} \mid \mathbf{p}_i, \mathbf{p}_j) = \left(1 - \delta(y_i^{p}, y_j^{p})\right) \exp\!\left(-\frac{\|\mathbf{p}_i - \mathbf{p}_j\|_2^2}{\sigma^{p}}\right),$
- $\psi_{i,i}^{\alpha p}(y_i^{\alpha}, y_i^{p} \mid \boldsymbol{\alpha}_i, \mathbf{p}_i) = \left(1 - \delta(y_i^{\alpha}, y_i^{p})\right) \exp\!\left(-\frac{\|\boldsymbol{\alpha}_i - \mathbf{p}_i\|_2^2}{\sigma^{\alpha p}}\right).$
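As a minimal numerical illustration of these contrast-sensitive potentials (the toy abundance vectors and the value $\sigma = 1.0$ below are hypothetical, chosen only for demonstration):

```python
import numpy as np

def pairwise_potential(y_i, y_j, f_i, f_j, sigma):
    """Contrast-sensitive Potts potential: zero when the two labels agree,
    otherwise an exponential penalty driven by the squared Euclidean distance
    between the observed feature vectors (abundances or probabilities)."""
    if y_i == y_j:  # the (1 - delta(y_i, y_j)) factor vanishes
        return 0.0
    return float(np.exp(-np.sum((f_i - f_j) ** 2) / sigma))

# Toy abundance vectors for two neighboring pixels (hypothetical values)
a_i = np.array([0.7, 0.2, 0.1])
a_j = np.array([0.6, 0.3, 0.1])

print(pairwise_potential(0, 1, a_i, a_j, sigma=1.0))  # disagreeing labels: penalty in (0, 1]
print(pairwise_potential(0, 0, a_i, a_j, sigma=1.0))  # agreeing labels: no penalty
```

Similar feature vectors thus make label disagreement expensive, while dissimilar vectors (a likely class boundary) leave disagreement nearly free.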

## 3. Experimental Results and Discussion

#### 3.1. Hyperspectral Data Sets

#### 3.1.1. University of Pavia

#### 3.1.2. Indian Pines

#### 3.2. Parameter Settings

- the performance of the sparse representation obtained from the pixels' fractional abundances from SunSAL as decisions, when combined with classification probabilities in a decision fusion scheme;
- the comparison of the performances of MRFL and CRFL as decision fusion methods;
- the flexibility of the proposed fusion methods, by including additional decision sets;
- the performance of the method in the case of small training sample sizes.

- The OA initially improves with increasing $\beta $ and $\gamma $, demonstrating that the spatial neighborhood and consistency terms in our proposed methods effectively correct labels that were initially assigned wrongly by the individual sources.
- In general, the OA is more sensitive to changes of $\beta $, and remains relatively stable for a large range of values of $\gamma $.
- A significant accuracy drop can be observed for higher values of $\beta $ and $\gamma $ in the MRFL method, whereas the CRFL method produces more stable results for different combinations of $\beta $ and $\gamma $. This allows for applying the CRFL method to other images without having to perform extensive and exhaustive parameter grid searches.
- The optimal values of $\beta $ and $\gamma $ are substantially higher for CRFL than for MRFL. This is because the CRFL method inherently uses observed data in the pairwise potentials, and thus heavily penalizes small differences between decisions that correspond to different class labels.
- For the Indian Pines image, $\gamma $ is much higher than $\beta $ in the case of CRFL. This can be attributed to the presence of large homogeneous regions, which imply a low influence of the spatial neighborhood compared to the consistency terms. In contrast, the University of Pavia image contains fewer large homogeneous regions, increasing the influence of the spatial neighborhood, with larger values of $\beta $ in the case of CRFL.
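The parameter sensitivity described above is typically mapped out with a grid search over $(\beta, \gamma)$. A schematic sketch of such a search, assuming a hypothetical `run_fusion` callable that trains and evaluates a fusion model for one parameter pair and returns its OA (the function and the grids below are placeholders, not the authors' code):

```python
import itertools

def grid_search(run_fusion, betas, gammas):
    """Evaluate the overall accuracy (OA) for every (beta, gamma) pair
    and return the best-performing setting as (beta, gamma, oa)."""
    best = (None, None, -1.0)
    for beta, gamma in itertools.product(betas, gammas):
        oa = run_fusion(beta=beta, gamma=gamma)  # assumed to return OA in [0, 1]
        if oa > best[2]:
            best = (beta, gamma, oa)
    return best

# Stand-in objective that peaks at beta = 5, gamma = 25 (purely illustrative)
fake_oa = lambda beta, gamma: 1.0 - abs(beta - 5) / 50 - abs(gamma - 25) / 50
print(grid_search(fake_oa, betas=[1, 5, 25], gammas=[1, 5, 25]))
```

Because CRFL is reported to be stable over a wide range of $(\beta, \gamma)$, a coarse grid like the one above would usually suffice for it, whereas MRFL would call for a finer search.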

#### 3.3. Experiments

#### 3.3.1. Experiment 1: Complementarity of the Abundances

#### 3.3.2. Experiment 2: Validation of the Decision Fusion Framework

- SunSAL [29]—sparse spectral unmixing is applied to each test pixel, producing the abundance vector $\boldsymbol{\alpha}(\mathbf{x}_i) = (\alpha_1(\mathbf{x}_i), \dots, \alpha_C(\mathbf{x}_i))$. From this vector, the pixel is labeled as the class corresponding to the largest abundance value: $\widehat{y}_i^{\alpha} = \arg\max_{c} \alpha_c(\mathbf{x}_i)$. This is a single-source, spectral-only method.
- MLR—the Multinomial Logistic Regression classifier [36], generating the class probabilities $\mathbf{p}(\mathbf{x}_i) = (p_1(\mathbf{x}_i), \dots, p_C(\mathbf{x}_i))$. From this vector, the pixel is labeled as the class corresponding to the largest probability: $\widehat{y}_i^{p} = \arg\max_{c} p_c(\mathbf{x}_i)$. This is also a single-source, spectral-only method.
- LC—linear combination, a simple decision fusion approach that linearly combines the obtained abundances and class probabilities by applying the linear opinion pool rule from [15]. This is a spectral-only fusion method. It was applied in [30] to the same sources as initialization for a semi-supervised approach.
- MRFG_a [23]—a decision fusion framework from the recent literature. This method linearly combines different decision sources, weighted by the accuracy of each source; the resulting single source is then regularized by an MRF, as in Equation (1). In [23], three different sources were used. For a fair comparison, we apply their fusion method with the abundances and class probabilities from our method as decision sources.
- MRFG—the same decision fusion method as MRFG_a, but this time, the posterior classification probabilities from the abundances as obtained in [23] are employed. In that work, the abundances were obtained with a matched filtering technique. To produce the posterior classification probabilities, the MLR classifier was used.
- MRF_a—this method applies an MRF regularization to the output of SunSAL as a single source. This is a spatial-spectral, single-source method.
- MRF_p—a spatial-spectral, single-source method, applying an MRF as a regularizer on the output of the MLR classifier.
- CRF_a—a spatial-spectral, single-source method, applying a CRF as a regularizer on the output of SunSAL.
- CRF_p—a spatial-spectral, single-source method, applying a CRF as a regularizer on the output of the MLR classifier.
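The two single-source decision rules and the LC baseline all reduce to an arg-max over per-class scores. A minimal sketch of that per-pixel step, with hypothetical toy vectors and a mixing weight `w` standing in for the linear opinion pool parameter:

```python
import numpy as np

def label_from_scores(scores):
    """Assign a pixel the class with the largest per-class score
    (abundance for SunSAL, probability for MLR)."""
    return int(np.argmax(scores))

def linear_opinion_pool(alpha, p, w=0.5):
    """LC-style fusion: convex combination of abundances and probabilities.
    The weight w is a hypothetical choice for illustration."""
    return w * alpha + (1.0 - w) * p

alpha = np.array([0.6, 0.3, 0.1])   # toy SunSAL abundances
p     = np.array([0.2, 0.7, 0.1])   # toy MLR class probabilities
fused = linear_opinion_pool(alpha, p, w=0.5)
print(label_from_scores(alpha), label_from_scores(p), label_from_scores(fused))  # -> 0 1 1
```

Here the two sources disagree on the pixel's class, and the pooled score settles the conflict; the MRFL/CRFL graphical models replace this purely per-pixel vote with spatial and cross-link consistency terms.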

#### (a) University of Pavia dataset

#### (b) Indian Pines dataset

#### 3.3.3. Experiment 3: Comparison of Different Decision Sources

- Pair 1: probabilities based on the spectra and probabilities based on the fractional abundances. The first source is the same as in the previous experiments, and the fractional abundances were obtained using SunSAL. Subsequently, the abundances were used as input to an MLR classifier to produce posterior classification probabilities for this set. Finally, these two sources of information were fused with the proposed MRFL and CRFL fusion schemes. The only difference with the previous experiment is therefore that classification probabilities derived from the abundances are used instead of the abundances themselves.
- Pair 2: probabilities based on morphological profiles and probabilities based on the fractional abundances. Initially, (partial) morphological profiles were extracted as in [6] and used as input to an MLR classifier, to produce posterior classification probabilities. These were fused with the probabilities from the abundances using the proposed MRFL and CRFL fusion schemes. The difference with before is that the morphological profiles contain spatial-spectral information.
- Pair 3: probabilities based on morphological profiles and probabilities based on the spectra.
- Pair 4: for the CRFL pairwise fusion, we conducted one additional pairwise fusion, between the pure fractional abundances and the probabilities based on the morphological profiles.

- In general, accuracies drop when class probabilities computed from the abundances are used instead of the abundances themselves (Pair 1).
- For the University of Pavia image, accuracies slightly improve when the spectral features are replaced by contextual features, but part of this gain is lost again due to the above-mentioned effect of deriving probabilities from the abundances (Pair 2 and Pair 3). The best result is obtained with a direct use of abundances along with contextual features (CRF_Pair4).
- For the Indian Pines image, no improvement is observed when including contextual features.

#### 3.3.4. Experiment 4: Additional Sources in the Fusion Framework

## 4. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

1. Hughes, G. On the Mean Accuracy of Statistical Pattern Recognizers. IEEE Trans. Inf. Theor. **2006**, 14, 55–63.
2. Plaza, A.; Martinez, P.; Plaza, J.; Perez, R. Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations. IEEE Trans. Geosci. Remote Sens. **2005**, 43, 466–479.
3. Dalla Mura, M.; Benediktsson, J.A.; Waske, B.; Bruzzone, L. Extended profiles with morphological attribute filters for the analysis of hyperspectral data. Int. J. Remote Sens. **2010**, 31, 5975–5991.
4. Liao, W.; Bellens, R.; Pizurica, A.; Philips, W.; Pi, Y. Classification of hyperspectral data over urban areas using directional morphological profiles and semi-supervised feature extraction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2012**, 5, 1177–1190.
5. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. **2005**, 43, 480–491.
6. Liao, W.; Chanussot, J.; Dalla Mura, M.; Huang, X.; Bellens, R.; Gautama, S.; Philips, W. Taking optimal advantage of fine spatial information: Promoting partial image reconstruction for the morphological analysis of very-high-resolution images. IEEE Geosci. Remote Sens. Mag. **2017**, 5, 8–28.
7. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. IEEE Geosci. Remote Sens. Lett. **2012**, 9, 447–451.
8. Song, B.; Li, J.; Dalla Mura, M.; Li, P.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A.; Chanussot, J. Remotely sensed image classification using sparse representations of morphological attribute profiles. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 5122–5136.
9. Fauvel, M.; Benediktsson, J.; Chanussot, J.; Sveinsson, J. Spectral and spatial classification of hyperspectral data using SVMs and morphological profiles. IEEE Trans. Geosci. Remote Sens. **2008**, 46, 3804–3814.
10. Tuia, D.; Matasci, G.; Camps-Valls, G.; Kanevski, M. Learning the relevant image features with multiple kernels. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 2, pp. II-65–II-68.
11. Li, J.; Huang, X.; Gamba, P.; Bioucas-Dias, J.M.; Zhang, L.; Benediktsson, J.A.; Plaza, A.J. Multiple feature learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 1592–1606.
12. Licciardi, G.; Pacifici, F.; Tuia, D.; Prasad, S.; West, T.; Giacco, F.; Thiel, C.; Inglada, J.; Christophe, E.; Chanussot, J.; et al. Decision fusion for the classification of hyperspectral data: Outcome of the 2008 GRSS data fusion contest. IEEE Trans. Geosci. Remote Sens. **2009**, 47, 3857–3865.
13. Song, B.; Li, J.; Li, P.; Plaza, A. Decision fusion based on extended multi-attribute profiles for hyperspectral image classification. In Proceedings of the 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013.
14. Li, W.; Prasad, S.; Tramel, E.W.; Fowler, J.E.; Du, Q. Decision fusion for hyperspectral image classification based on minimum-distance classifiers in the wavelet domain. In Proceedings of the 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), Xi’an, China, 9–13 July 2014; pp. 162–165.
15. Benediktsson, J.A.; Kanellopoulos, I. Classification of multisource and hyperspectral data based on decision fusion. IEEE Trans. Geosci. Remote Sens. **1999**, 37, 1367–1377.
16. Kalluri, H.R.; Prasad, S.; Bruce, L.M. Decision-level fusion of spectral reflectance and derivative information for robust hyperspectral land cover classification. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 4047–4058.
17. Yang, H.; Du, Q.; Ma, B. Decision fusion on supervised and unsupervised classifiers for hyperspectral imagery. IEEE Geosci. Remote Sens. Lett. **2010**, 7, 875–879.
18. Li, S.; Lu, T.; Fang, L.; Jia, X.; Benediktsson, J.A. Probabilistic fusion of pixel-level and superpixel-level hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. **2016**, 54, 7416–7430.
19. Khodadadzadeh, M.; Li, J.; Ghassemian, H.; Bioucas-Dias, J.; Li, X. Spectral-spatial classification of hyperspectral data using local and global probabilities for mixed pixel characterization. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 6298–6314.
20. Khodadadzadeh, M.; Li, J.; Plaza, A.; Ghassemian, H.; Bioucas-Dias, J.M. Spectral-spatial classification for hyperspectral data using SVM and subspace MLR. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 2180–2183.
21. Xia, J.; Chanussot, J.; Du, P.; He, X. Spectral–spatial classification for hyperspectral data using rotation forests with local feature extraction and Markov random fields. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 2532–2546.
22. Lu, Q.; Huang, X.; Li, J.; Zhang, L. A novel MRF-based multifeature fusion for classification of remote sensing images. IEEE Geosci. Remote Sens. Lett. **2016**, 13, 515–519.
23. Lu, T.; Li, S.; Fang, L.; Jia, X.; Benediktsson, J.A. From subpixel to superpixel: A novel fusion framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 4398–4411.
24. Gómez-Chova, L.; Tuia, D.; Moser, G.; Camps-Valls, G. Multimodal classification of remote sensing images: A review and future directions. Proc. IEEE **2015**, 103, 1560–1584.
25. Solberg, A.H.S.; Taxt, T.; Jain, A.K. A Markov random field model for classification of multisource satellite imagery. IEEE Trans. Geosci. Remote Sens. **1996**, 34, 100–113.
26. Wegner, J.D.; Hansch, R.; Thiele, A.; Soergel, U. Building detection from one orthophoto and high-resolution InSAR data using conditional random fields. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2011**, 4, 83–91.
27. Albert, L.; Rottensteiner, F.; Heipke, C. A higher order conditional random field model for simultaneous classification of land cover and land use. Int. J. Photogramm. Remote Sens. **2017**, 130, 63–80.
28. Tuia, D.; Volpi, M.; Moser, G. Decision fusion with multiple spatial supports by conditional random fields. IEEE Trans. Geosci. Remote Sens. **2018**, 56, 3277–3289.
29. Bioucas-Dias, J.; Figueiredo, M. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Reykjavik, Iceland, 14–16 June 2010.
30. Dopido, I.; Li, J.; Gamba, P.; Plaza, A. A new hybrid strategy combining semisupervised classification and unmixing of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2014**, 7, 3619–3629.
31. Chen, Y.; Nasrabadi, N.M.; Tran, T.D. Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. **2011**, 49, 3973–3985.
32. Li, W.; Du, Q. Joint within-class collaborative representation for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2014**, 7, 2200–2208.
33. Sun, X.; Qu, Q.; Nasrabadi, N.M.; Tran, T.D. Structured priors for sparse-representation-based hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. **2014**, 11, 1235–1239.
34. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
35. Scheunders, P.; Tuia, D.; Moser, G. Contributions of machine learning to remote sensing data analysis. In Comprehensive Remote Sensing; Liang, S., Ed.; Elsevier: Amsterdam, The Netherlands, 2017; Volume 2, Chapter 10.
36. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer: New York, NY, USA, 2009.
37. Namin, S.T.; Najafi, M.; Salzmann, M.; Petersson, L. A multi-modal graphical model for scene analysis. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; pp. 1006–1013.
38. Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. **2004**, 26, 1124–1137.
39. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. **2001**, 23, 1222–1239.
40. Kohli, P.; Ladicky, L.; Torr, P. Robust higher order potentials for enforcing label consistency. Int. J. Comput. Vis. **2009**, 82, 302–324.
41. Kohli, P.; Ladicky, L.; Torr, P. Graph Cuts for Minimizing Robust Higher Order Potentials; Technical Report; Oxford Brookes University: Oxford, UK, 2008.
42. Boykov, Y.; Jolly, M.P. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001.
43. Weinmann, M.; Schmidt, A.; Mallet, C.; Hinz, S.; Rottensteiner, F.; Jutzi, B. Contextual classification of point cloud data by exploiting individual 3D neighborhoods. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. **2015**, II-3/W4, 271–278.
44. Iordache, M.D.; Bioucas-Dias, J.; Plaza, A. Collaborative sparse regression for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. **2013**, 52, 341–354.
45. Iordache, M.D.; Bioucas-Dias, J.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. **2012**, 50, 4484–4502.

**Figure 1.** The graph representation of MRFL. Green nodes denote the random variables associated with ${y}^{\alpha}$, blue nodes denote the random variables associated with ${y}^{p}$. Black lines denote the edges that model the spatial neighborhood dependencies. Red lines denote the cross links between ${y}^{\alpha}$ and ${y}^{p}$, encoding the potential interactions ${\psi}_{i,i}^{\alpha p}({y}_{i}^{\alpha},{y}_{i}^{p})$. $\gamma $ is the parameter that controls the influence of these interaction terms.

**Figure 2.** Graph representation of CRFL. The purple nodes denote random variables associated with the observed data, the green nodes denote random variables associated with the labels ${y}^{\alpha}$, and the blue nodes denote random variables associated with the labels ${y}^{p}$. The turquoise lines denote the link of the labels with the observed data. Black lines denote the edges that model the spatial neighborhood dependencies. Red lines denote the cross links between $(\mathit{\alpha},{y}^{\alpha})$ and $(\mathit{p},{y}^{p})$, encoding the potential interactions ${\psi}_{i,i}^{\alpha p}({y}_{i}^{\alpha},{y}_{i}^{p}|\mathit{\alpha},\mathit{p})$. $\gamma $ is the parameter that controls the influence of these interaction terms.

**Figure 3.** University of Pavia: (**a**) false color composite image (R:40, G:20, B:10); (**b**) ground reference map.

**Figure 5.** Effect of $\beta $ and $\gamma $ on the Overall Accuracy (OA) for both proposed methods: MRFL and CRFL.

**Figure 6.** Confusion matrices between (**a**) SunSAL and the MLR classifier on the University of Pavia image; (**b**) the SVM and the MLR classifier on the University of Pavia image; (**c**) SunSAL and the MLR classifier on the Indian Pines image; (**d**) the SVM and the MLR classifier on the Indian Pines image.

**Figure 7.** Boxplot of Overall Accuracies (OA) for several methods on the University of Pavia image, including the proposed MRFL and CRFL (100 experiments).

**Figure 8.** University of Pavia classification maps generated from different methods: (**a**) SunSAL, (**b**) MLR, (**c**) LC, (**d**) MRFG_a, (**e**) MRFG, (**f**) MRF_a, (**g**) MRF_p, (**h**) MRFL, (**i**) CRF_a, (**j**) CRF_p, (**k**) CRFL, (**l**) Ground truth.

**Figure 9.** Boxplot of Overall Accuracies (OA) for several methods on the Indian Pines image, including the proposed MRFL and CRFL (100 experiments).

**Figure 10.** Indian Pines classification maps generated from different methods: (**a**) SunSAL, (**b**) MLR, (**c**) LC, (**d**) MRFG_a, (**e**) MRFG, (**f**) MRF_a, (**g**) MRF_p, (**h**) MRFL, (**i**) CRF_a, (**j**) CRF_p, (**k**) CRFL, (**l**) Ground truth.

Image | $\lambda$ | $\beta_{\mathrm{MRFL}}$ | $\gamma_{\mathrm{MRFL}}$ | $\beta_{\mathrm{CRFL}}$ | $\gamma_{\mathrm{CRFL}}$
---|---|---|---|---|---
University of Pavia | $5\times {10}^{-4}$ | 1.0 | 1.0 | 25 | 25
Indian Pines | ${10}^{-3}$ | 1.0 | 0.8 | 5 | 25

**Table 2.** Classification accuracies [%] with their standard deviations for the University of Pavia image (the highest accuracies are denoted in bold).

Class | Train | Test | SunSAL | MLR | LC | MRFG_a | MRFG | MRF_a | MRF_p | MRFL | CRF_a | CRF_p | CRFL
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Asphalt | 10 | 6621 | 33.04 ± 9.09 | 50.72 ± 9.30 | 50.42 ± 8.98 | 66.60 ± 12.90 | 58.05 ± 13.20 | 52.26 ± 18.50 | 58.80 ± 11.40 | 77.56 ± 12.80 | 50.40 ± 17.60 | 59.28 ± 11.90 | **77.86 ± 17.80**
Meadows | 10 | 18,639 | 68.95 ± 8.46 | 67.16 ± 9.91 | 73.46 ± 7.46 | 78.08 ± 18.00 | 71.18 ± 13.60 | 80.97 ± 11.30 | 71.80 ± 11.10 | 82.13 ± 8.30 | 81.66 ± 10.90 | 72.70 ± 11.38 | **88.74 ± 8.10**
Gravel | 10 | 2089 | 60.59 ± 9.12 | 70.93 ± 7.51 | 77.01 ± 5.54 | 87.85 ± 7 | 71.18 ± 13.60 | 88.30 ± 10.30 | 78.82 ± 9.40 | 90.90 ± 7.90 | 86.49 ± 10.10 | 76.86 ± 9.20 | **92.71 ± 9.80**
Trees | 10 | 3054 | 84.61 ± 7.38 | 88.95 ± 5.96 | 91.49 ± 4.41 | 91.79 ± 4.70 | 91.72 ± 5.50 | 91.56 ± 5.80 | 88.92 ± 6.20 | **91.95 ± 5.00** | 90.80 ± 5.80 | 88.76 ± 6.20 | 89.51 ± 6.80
Metal Sheet | 10 | 1335 | 95.43 ± 3.72 | 97.81 ± 1.53 | 98.71 ± 0.85 | 98.96 ± 0.77 | 98.85 ± 0.75 | 99.30 ± 1.50 | 98.11 ± 1.30 | **99.46 ± 0.40** | 98.59 ± 2.30 | 97.88 ± 1.47 | 99.28 ± 1.30
Bare Soil | 10 | 5019 | 46.80 ± 10.18 | 55.85 ± 9.02 | 56.10 ± 8.59 | 59.9 ± 11.00 | 59.90 ± 14.60 | 53.44 ± 18.50 | 59.18 ± 11.20 | 61.29 ± 13.60 | 52.20 ± 18.90 | 59.04 ± 11.40 | **63.07 ± 27.00**
Bitumen | 10 | 1320 | 48.80 ± 11.87 | 80.75 ± 8.68 | 83.24 ± 6.74 | 95.43 ± 2.70 | 87.82 ± 12.12 | 91.07 ± 10.30 | 90.98 ± 6.90 | **97.14 ± 2.80** | 90.15 ± 10.00 | 89.87 ± 7.40 | 96.55 ± 7.80
Bricks | 10 | 3672 | 36.74 ± 11.04 | 61.60 ± 9.37 | 55.99 ± 8.59 | 71.85 ± 13.80 | 63.11 ± 17.00 | 34.75 ± 23.52 | 72.33 ± 11.70 | **72.64 ± 17.60** | 34.72 ± 22.00 | **72.64 ± 11.40** | 58.25 ± 28.00
Shadows | 10 | 937 | 98.61 ± 10.11 | 95.52 ± 2.42 | 99.31 ± 0.47 | 99.83 ± 0.16 | 99.66 ± 0.50 | 99.96 ± 0.09 | 96.88 ± 2.10 | 99.88 ± 0.07 | 99.92 ± 0.10 | 96.74 ± 2.10 | **99.97 ± 0.10**
OA (OA-SD) | - | - | 59.56 ± 3.37 | 66.53 ± 3.80 | 69.45 ± 3.10 | 76.74 ± 4.00 | 70.59 ± 5.40 | 71.71 ± 4.70 | 71.88 ± 4.30 | 80.67 ± 3.60 | 71.39 ± 4.60 | 72.19 ± 4.48 | **82.47 ± 4.40**
AA (AA-SD) | - | - | 63.73 ± 2.00 | 74.37 ± 1.74 | 76.19 ± 1.40 | 83.37 ± 2.14 | 81.35 ± 3.50 | 76.84 ± 3.00 | 79.55 ± 2.00 | **85.88 ± 2.30** | 76.11 ± 3.00 | 79.31 ± 2.00 | 85.10 ± 3.90
$\kappa $ ($\kappa $-SD) | - | - | 0.48 ± 0.03 | 0.57 ± 0.04 | 0.61 ± 0.03 | 0.70 ± 0.04 | 0.70 ± 0.05 | 0.63 ± 0.05 | 0.64 ± 0.04 | 0.74 ± 0.04 | 0.63 ± 0.05 | 0.64 ± 0.04 | 0.77 ± 0.05

**Table 3.** Classification accuracies [%] with their standard deviations for the Indian Pines image (the highest accuracies are denoted in bold).

Class | Train | Test | SunSAL | MLR | LC | MRFG_a | MRFG | MRF_a | MRF_p | MRFL | CRF_a | CRF_p | CRFL
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Corn-notill | 10 | 1418 | 53.97 ± 10.00 | 55.83 ± 8.32 | 64.71 ± 8.82 | 61.85 ± 14.10 | 58.39 ± 14.90 | 70.13 ± 12.80 | 60.96 ± 9.60 | **75.12 ± 11.05** | 73.01 ± 14.73 | 63.27 ± 10.30 | 74.50 ± 10.78
Corn-mintill | 10 | 820 | 43.20 ± 10.00 | 59.15 ± 8.87 | 59.57 ± 10.33 | 68.86 ± 12.50 | 63.91 ± 13.00 | 58.74 ± 16.20 | 66.48 ± 10.70 | 69.46 ± 14.90 | 60.00 ± 20.30 | 65.64 ± 11.50 | **69.53 ± 14.80**
Grass pasture | 10 | 473 | 81.02 ± 6.20 | 82.41 ± 8.76 | 84.83 ± 6.64 | **89.50 ± 5.90** | 89.27 ± 5.50 | 80.52 ± 7.40 | 84.23 ± 9.90 | 82.38 ± 8.50 | 78.97 ± 7.60 | 83.49 ± 10.20 | 81.60 ± 8.10
Grass trees | 10 | 720 | 88.24 ± 4.73 | 91.75 ± 3.77 | 96.12 ± 2.07 | 97.31 ± 2.40 | 96.47 ± 2.50 | 98.45 ± 2.30 | 95.66 ± 3.70 | **99.52 ± 1.40** | 99.15 ± 1.60 | 94.94 ± 3.74 | 98.94 ± 1.80
Hay Windrowed | 10 | 468 | 99.62 ± 0.43 | 99.82 ± 0.47 | **100 ± 0.03** | **100 ± 0.00** | **100 ± 0.00** | **100 ± 0.00** | 99.98 ± 0.14 | **100 ± 0.00** | **100 ± 0.00** | 99.9 ± 0.28 | **100 ± 0.00**
Soybean-notill | 10 | 962 | 49.08 ± 9.04 | 58.56 ± 11.69 | 64.28 ± 6.74 | 68.50 ± 12.90 | 65.87 ± 13.80 | 71.24 ± 8.90 | 64.85 ± 12.00 | 75.23 ± 5.10 | 75.81 ± 6.20 | 68.74 ± 11.77 | **76.47 ± 3.70**
Soybean-mintill | 10 | 2445 | 47.98 ± 10.18 | 48.15 ± 10.78 | 55.24 ± 9.42 | 55.83 ± 12.00 | 53.76 ± 12.40 | 62.88 ± 15.60 | 51.54 ± 11.70 | 65.73 ± 13.80 | 66.07 ± 18.80 | 53.47 ± 12.85 | **67.80 ± 14.90**
Soybean-clean | 10 | 583 | 64.55 ± 10.58 | 62.75 ± 8.96 | 79.55 ± 10.07 | 77.18 ± 12.06 | 72.54 ± 12.54 | 81.72 ± 15.50 | 69.10 ± 10.42 | **89.05 ± 11.70** | 86.19 ± 16.60 | 71.23 ± 10.11 | 88.20 ± 12.20
Woods | 10 | 1255 | 78.41 ± 8.94 | 84.04 ± 8.11 | 88.39 ± 6.23 | 89.85 ± 8.20 | 89.00 ± 9.00 | 91.99 ± 8.70 | 87.04 ± 8.50 | 92.38 ± 7.10 | 92.34 ± 8.90 | 86.32 ± 8.73 | **92.42 ± 7.30**

Buildings | 10 | 376 | 56.36 $\pm \phantom{\rule{3.33333pt}{0ex}}9.96$ | 59.92 $\pm \phantom{\rule{3.33333pt}{0ex}}6.61$ | 63.48 $\pm \phantom{\rule{3.33333pt}{0ex}}7.00$ | 71.54 $\pm \phantom{\rule{3.33333pt}{0ex}}9.20$ | 69.24 $\pm \phantom{\rule{3.33333pt}{0ex}}9.70$ | 70.86 $\pm \phantom{\rule{3.33333pt}{0ex}}13.40$ | 69.30 $\pm \phantom{\rule{3.33333pt}{0ex}}7.90$ | 80.12$\pm \phantom{\rule{3.33333pt}{0ex}}12.49$ | 70.38 $\pm \phantom{\rule{3.33333pt}{0ex}}16.60$ | 64.15 $\pm \phantom{\rule{3.33333pt}{0ex}}7.42$ | 70.00 $\pm \phantom{\rule{3.33333pt}{0ex}}12.64$ |

OA (OA-SD) | - | - | 61.11 $\pm \phantom{\rule{3.33333pt}{0ex}}2.43$ | 64.86 $\pm \phantom{\rule{3.33333pt}{0ex}}3.47$ | 71.06 $\pm \phantom{\rule{3.33333pt}{0ex}}2.29$ | 72.45 $\pm \phantom{\rule{3.33333pt}{0ex}}3.40$ | 70.16 $\pm \phantom{\rule{3.33333pt}{0ex}}3.35$ | 75.11 $\pm \phantom{\rule{3.33333pt}{0ex}}3.70$ | 69.31 $\pm \phantom{\rule{3.33333pt}{0ex}}3.90$ | 78.95 $\pm \phantom{\rule{3.33333pt}{0ex}}3.66$ | 77.22 $\pm \phantom{\rule{3.33333pt}{0ex}}4.67$ | 70.22 $\pm \phantom{\rule{3.33333pt}{0ex}}4.26$ | 79.00$\pm \phantom{\rule{3.33333pt}{0ex}}3.70$ |

AA (AA-SD) | - | - | 66.21 $\pm \phantom{\rule{3.33333pt}{0ex}}1.73$ | 70.23 $\pm \phantom{\rule{3.33333pt}{0ex}}2.49$ | 75.62 $\pm \phantom{\rule{3.33333pt}{0ex}}1.58$ | 78.04 $\pm \phantom{\rule{3.33333pt}{0ex}}2.43$ | 75.85 $\pm \phantom{\rule{3.33333pt}{0ex}}2.48$ | 78.65 $\pm \phantom{\rule{3.33333pt}{0ex}}2.60$ | 74.90 $\pm \phantom{\rule{3.33333pt}{0ex}}2.90$ | 82.90$\pm \phantom{\rule{3.33333pt}{0ex}}2.38$ | 80.20 $\pm \phantom{\rule{3.33333pt}{0ex}}3.20$ | 75.12 $\pm \phantom{\rule{3.33333pt}{0ex}}2.90$ | 81.95 $\pm \phantom{\rule{3.33333pt}{0ex}}2.30$ |

$\kappa $ ($\kappa $-SD) | - | - | 0.55 $\pm \phantom{\rule{3.33333pt}{0ex}}0.02$ | 0.59 $\pm \phantom{\rule{3.33333pt}{0ex}}0.02$ | 0.66 $\pm \phantom{\rule{3.33333pt}{0ex}}0.02$ | 0.68 $\pm \phantom{\rule{3.33333pt}{0ex}}0.03$ | 0.65 $\pm \phantom{\rule{3.33333pt}{0ex}}0.03$ | 0.71 $\pm \phantom{\rule{3.33333pt}{0ex}}0.04$ | 0.65 $\pm \phantom{\rule{3.33333pt}{0ex}}0.04$ | 0.75 $\pm \phantom{\rule{3.33333pt}{0ex}}0.04$ | 0.73 $\pm \phantom{\rule{3.33333pt}{0ex}}0.05$ | 0.66 $\pm \phantom{\rule{3.33333pt}{0ex}}0.04$ | 0.75 $\pm \phantom{\rule{3.33333pt}{0ex}}0.04$ |
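The summary rows above report overall accuracy (OA), average accuracy (AA), and Cohen's kappa ($\kappa$), each with its standard deviation over repeated runs. As a reference for how these standard metrics are derived from a confusion matrix (a generic sketch, not the authors' code; `accuracy_metrics` is a hypothetical helper name):

```python
import numpy as np

def accuracy_metrics(conf):
    """Compute OA, AA, and Cohen's kappa from a confusion matrix.

    conf[i, j] = number of samples of true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                   # overall accuracy
    per_class = np.diag(conf) / conf.sum(axis=1)  # per-class (producer's) accuracy
    aa = per_class.mean()                         # average accuracy
    # agreement expected by chance, from the row and column marginals
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / total**2
    kappa = (oa - pe) / (1 - pe)                  # Cohen's kappa
    return oa, aa, kappa

# Toy two-class example: 85 of 100 samples on the diagonal.
oa, aa, kappa = accuracy_metrics([[40, 10], [5, 45]])
```

Note that $\kappa$ discounts the agreement expected by chance, which is why it can rank two methods with identical OA differently when their errors are distributed unevenly across classes.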

Data set | MRF_Pair1 | MRF_Pair2 | MRF_Pair3 | CRF_Pair1 | CRF_Pair2 | CRF_Pair3 | CRF_Pair4 |
---|---|---|---|---|---|---|---|
University of Pavia | 74.70 ± 5.00 | 80.59 ± 3.70 | 80.69 ± 3.80 | 76.73 ± 5.00 | 78.50 ± 4.12 | 79.07 ± 4.03 | 83.22 ± 4.10 |
Indian Pines | 76.51 ± 3.73 | 77.70 ± 2.70 | 77.26 ± 2.91 | 73.70 ± 3.29 | 73.25 ± 2.87 | 74.70 ± 2.90 | 77.03 ± 2.72 |

**Table 5.** Classification accuracies [%] based on the fusion of three sources (fractional abundances, probabilities based on spectra, and probabilities based on morphological profiles) for the University of Pavia and Indian Pines images.

Data set | MRFL_3 | CRFL_3 |
---|---|---|
University of Pavia | 83.52 ± 3.52 | 88.51 ± 3.87 |
Indian Pines | 82.85 ± 2.62 | 82.16 ± 2.77 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Andrejchenko, V.; Liao, W.; Philips, W.; Scheunders, P.
Decision Fusion Framework for Hyperspectral Image Classification Based on Markov and Conditional Random Fields. *Remote Sens.* **2019**, *11*, 624.
https://doi.org/10.3390/rs11060624
