Statistics Learning Network Based on the Quadratic Form for SAR Image Classification
Abstract
1. Introduction
1.1. Related Work
1.2. Motivations
- The intrinsic randomness of the SAR signal makes statistical models an effective tool for SAR image analysis. The parameters of a statistical model capture valuable information for describing the SAR image, for example, the mean and variance of a normal distribution. However, fitting these distribution parameters with CNNs remains a challenge, especially when insufficient training data are available. Consider the variance as an example. This parameter can be estimated by the method of moments as
$$\hat{\sigma}^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \left(\frac{1}{N}\sum_{i=1}^{N} x_i\right)^2,$$
which contains the quadratic terms $x_i^2$ and, once the squared mean is expanded, the cross terms $x_i x_j$.
- The CNN has demonstrated a powerful ability to learn features, but this ability relies primarily on the availability of big data. However, collecting massive amounts of SAR data is difficult in practice. Moreover, the coherent imaging mechanism causes SAR signals to exhibit strong fluctuations. This distinctive nature demands many degrees of freedom (DoFs) for SAR image description, which increases the difficulty of applying a CNN to SAR image interpretation.
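The moment-based variance estimate mentioned above can be sketched in a few lines of NumPy; the function name `variance_mom` is illustrative, not from the paper:

```python
import numpy as np

def variance_mom(x):
    """Method-of-moments variance estimate: second moment minus squared first moment."""
    m1 = x.mean()          # first sample moment
    m2 = (x ** 2).mean()   # second sample moment (quadratic terms x_i^2)
    return m2 - m1 ** 2    # expanding m1**2 introduces cross terms x_i * x_j

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100_000)
print(variance_mom(x))  # close to the true variance, 4.0
```

Note that the estimate is built entirely from sums of quadratic and cross products of the observations, which is exactly the structure the quadratic primitive is designed to learn.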
1.3. Contributions
- A quadratic primitive is designed to comprehensively learn elementary statistical features. The novelty of this primitive lies in adapting and extending the standard convolutional primitive to cope with SAR images. Concretely, a weighted combination of high-order components, including quadratic and cross terms, is implemented in this primitive. As shown in Section 3.1, the motivation behind this primitive derives from the finding that quadratic and cross terms are often needed for fitting elementary statistical parameters.
- With the aid of the quadratic primitive, the SLN is presented for SAR image classification as depicted in Figure 3. The SLN is a type of deep model that seeks to automatically fit statistical features for SAR image representation.
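As a rough illustration of what such a primitive computes, the sketch below evaluates a second-order response over a flattened patch. This is a hypothetical simplification, not the authors' implementation; the exact formulation is given in Section 3.1:

```python
import numpy as np

def quadratic_primitive(patch, w_lin, w_quad, b):
    """Hypothetical quadratic primitive on a flattened patch x:
    y = b + w_lin . x + x^T W_quad x,
    so the response mixes linear, quadratic (x_i^2), and cross (x_i x_j) terms."""
    x = patch.ravel()
    return b + w_lin @ x + x @ w_quad @ x

rng = np.random.default_rng(1)
patch = rng.normal(size=(4, 4))          # one 4x4 receptive field
n = patch.size
y = quadratic_primitive(patch, rng.normal(size=n), rng.normal(size=(n, n)) / n, 0.0)
print(float(y))
```

In contrast, a standard convolutional primitive keeps only the `w_lin @ x + b` part, so it cannot directly express the second-moment statistics discussed in the motivations.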
2. Preliminaries
2.1. SAR Image Statistics
2.2. Deep Learning Framework
- Activation and pooling module: This module is mainly devoted to nonlinear transformation and dimensionality reduction, implemented by activation functions and pooling operations, respectively. Popular activation functions include the sigmoid, tanh, and rectified linear unit (ReLU). For the pooling operation, max and average pooling are the most widely used at present [54].
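A minimal NumPy sketch of the two operations this module performs (illustrative only; deep learning frameworks provide optimized versions):

```python
import numpy as np

def relu(x):
    """Element-wise rectified linear unit: max(x, 0)."""
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling on an (H, W) map with even H and W."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[ 1., -2.,  3.,  0.],
                 [-1.,  5., -3.,  2.],
                 [ 0.,  1., -1., -2.],
                 [ 4., -5.,  2.,  6.]])
print(max_pool_2x2(relu(fmap)))  # [[5. 3.] [4. 6.]]
```

ReLU applies the nonlinear transformation; pooling then halves each spatial dimension, achieving the dimensionality reduction described above.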
3. Methodology
3.1. Quadratic Primitive
3.2. Comparison between Quadratic and Convolutional Primitives
3.3. Statistics Learning Network
3.3.1. Forward Pass
3.3.2. Optimization
4. Experiments
4.1. Datasets
4.2. Experimental Setup
4.3. Experiment Results
4.3.1. Classification Accuracy
4.3.2. Confusion Matrix
4.3.3. Classification Map
4.3.4. Average Accuracy vs. Training Samples
4.4. Analysis and Discussion
- Given a random vector, its linear transformation is still a random variable, because the transformed output can be regarded as a statistic, that is, a function of observable random variables. This fact implies that the strategy used for elementary statistical feature extraction can be extended to the deeper layers of the SLN. Such an extension is expected to be more effective for SAR image interpretation; however, it also increases the number of weights that must be learned. Therefore, a tradeoff should be considered, and the effectiveness of this extension requires further investigation.
- The SLN is a patchwise approach; thus, when used to process large images, it suffers from the same disadvantages as other patch-based methods [64,65]. For semantic labeling or segmentation, post-processing (e.g., smoothness constraints) or end-to-end methods such as fully convolutional networks [66] should be considered in further research.
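The point that a linear transformation of a random vector remains a random variable, with statistics determined by the transformation weights, can be checked numerically. The distribution, covariance, and weights below are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=200_000)
w = np.array([0.3, -0.7])

y = x @ w           # linear transform of a random vector: itself a random variable
print(y.var())      # empirical variance of the transformed samples
print(w @ cov @ w)  # theoretical variance w^T Sigma w = 0.46
```

The empirical variance of `y` matches the closed-form value, so the statistical-fitting strategy applied to the input layer remains meaningful for deeper, linearly transformed features.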
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
CNN | Convolutional Neural Network |
SAR | Synthetic Aperture Radar |
DoFs | Degrees of Freedom |
SLN | Statistics Learning Network |
FMM | Finite Mixture Models |
GLCM | Gray-Level Co-occurrence Matrix |
GMRF | Gaussian Markov Random Field |
LBP | Local Binary Pattern |
SVM | Support Vector Machine |
DBN | Deep Belief Network |
RBM | Restricted Boltzmann Machine |
WRBM | Wishart-Bernoulli RBM |
PolSAR | Polarimetric SAR |
g-DBN | Generalized Gamma Deep Belief Network |
GMM | Gamma Mixture Model |
ReLU | Rectified Linear Unit |
MoM | Method of Moments |
CETC | China Electronics Technology Group Corporation |
AA | Average Accuracy |
References
- Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43.
- Lee, I.K.; Shamsoddini, A.; Li, X.; Trinder, J.C.; Li, Z. Extracting hurricane eye morphology from spaceborne SAR images using morphological analysis. ISPRS J. Photogramm. Remote Sens. 2016, 117, 115–125.
- Ma, P.; Lin, H.; Lan, H.; Chen, F. Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings. ISPRS J. Photogramm. Remote Sens. 2015, 106, 118–128.
- Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70.
- Werninghaus, R.; Buckreuss, S. The TerraSAR-X mission and system design. IEEE Trans. Geosci. Remote Sens. 2010, 48, 606–614.
- Seguin, G.; Srivastava, S.; Auger, D. Evolution of the RADARSAT Program. IEEE Geosci. Remote Sens. Mag. 2014, 2, 56–58.
- Aschbacher, J.; Milagro-Pérez, M.P. The European Earth monitoring (GMES) programme: Status and perspectives. Remote Sens. Environ. 2012, 120, 3–8.
- Gu, X.; Tong, X. Overview of China Earth Observation Satellite Programs [Space Agencies]. IEEE Geosci. Remote Sens. Mag. 2015, 3, 113–129.
- Mathieu, P.P.; Borgeaud, M.; Desnos, Y.L.; Rast, M.; Brockmann, C.; See, L.; Kapur, R.; Mahecha, M.; Benz, U.; Fritz, S. The ESA’s Earth Observation Open Science Program [Space Agencies]. IEEE Geosci. Remote Sens. Mag. 2017, 5, 86–96.
- Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31.
- Deledalle, C.A.; Denis, L.; Tabti, S.; Tupin, F. MuLoG, or How to apply Gaussian denoisers to multi-channel SAR speckle reduction? IEEE Trans. Image Process. 2017, 26, 4389–4403.
- Argenti, F.; Lapini, A.; Bianchi, T.; Alparone, L. A tutorial on speckle reduction in synthetic aperture radar images. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–35.
- Li, H.C.; Hong, W.; Wu, Y.R.; Fan, P.Z. On the empirical-statistical modeling of SAR images with generalized gamma distribution. IEEE J. Sel. Top. Signal Process. 2011, 5, 386–397.
- Joughin, I.R.; Percival, D.B.; Winebrenner, D.P. Maximum likelihood estimation of K distribution parameters for SAR data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 989–999.
- Fukunaga, K. Introduction to Statistical Pattern Recognition, 2nd ed.; Academic Press: Boston, MA, USA, 1990.
- Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004.
- Bombrun, L.; Beaulieu, J.M. Fisher distribution for texture modeling of polarimetric SAR data. IEEE Geosci. Remote Sens. Lett. 2008, 5, 512–516.
- Gao, G. Statistical modeling of SAR images: A survey. Sensors 2010, 10, 775–795.
- Liu, G.; Jia, H.; Rui, Z.; Zhang, H.; Jia, H.; Bing, Y.; Sang, M. Exploration of Subsidence Estimation by Persistent Scatterer InSAR on Time Series of High Resolution TerraSAR-X Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 159–170.
- Li, H.; Krylov, V.A.; Fan, P.Z.; Zerubia, J.; Emery, W.J. Unsupervised Learning of Generalized Gamma Mixture Model With Application in Statistical Modeling of High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2153–2170.
- Nicolas, J.M.; Anfinsen, S.N. Introduction to second kind statistics: Application of log-moments and log-cumulants to the analysis of radar image distributions. Trait. Signal 2002, 19, 139–167.
- Deng, X.; López-Martínez, C.; Chen, J.; Han, P. Statistical Modeling of Polarimetric SAR Data: A Survey and Challenges. Remote Sens. 2017, 9, 348.
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
- Gleich, D.; Datcu, M. Wavelet-based despeckling of SAR images using Gauss–Markov random fields. IEEE Trans. Geosci. Remote Sens. 2007, 45, 4127–4143.
- Lee, T.S. Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 959–971.
- Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
- Maulik, U.; Chakraborty, D. Remote Sensing Image Classification: A survey of support-vector-machine-based advanced techniques. IEEE Geosci. Remote Sens. Mag. 2017, 5, 33–52.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
- Han, W.; Feng, R.; Wang, L.; Cheng, Y. A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 23–43.
- Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147.
- Anwer, R.M.; Khan, F.S.; van de Weijer, J.; Molinier, M.; Laaksonen, J. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification. ISPRS J. Photogramm. Remote Sens. 2018, 138, 74–85.
- Liu, X.; Jiao, L.; Tang, X.; Sun, Q.; Zhang, D. Polarimetric Convolutional Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2018.
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
- Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
- Liu, F.; Jiao, L.; Hou, B.; Yang, S. POL-SAR image classification based on Wishart DBN and local spatial information. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3292–3308.
- Jiao, L.; Liu, F. Wishart deep stacking network for fast POLSAR image classification. IEEE Trans. Image Process. 2016, 25, 3273–3286.
- Lv, Q.; Dou, Y.; Niu, X.; Xu, J.; Xu, J.; Xia, F. Urban land use and land cover classification using remotely sensed SAR data through deep belief networks. J. Sens. 2015, 2015, 538063.
- Zhao, Z.; Guo, L.; Jia, M.; Wang, L. The Generalized Gamma-DBN for High-Resolution SAR Image Classification. Remote Sens. 2018, 10, 878.
- Geng, J.; Wang, H.; Fan, J.; Ma, X. Deep supervised and contractive neural network for SAR image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2442–2459.
- Geng, J.; Fan, J.; Wang, H.; Ma, X.; Li, B.; Chen, F. High-resolution SAR image classification via deep convolutional autoencoders. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2351–2355.
- Zhao, Z.; Jiao, L.; Zhao, J.; Gu, J.; Zhao, J. Discriminant deep belief network for high-resolution SAR image classification. Pattern Recognit. 2017, 61, 686–701.
- He, C.; Liu, X.; Han, G.; Kang, C.; Chen, Y. Fusion of statistical and learnt features for SAR images classification. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5490–5493.
- Hänsch, R. Complex-Valued Multi-Layer Perceptrons—An Application to Polarimetric SAR Data. Photogramm. Eng. Remote Sens. 2010, 76, 1081–1088.
- Zhang, Z.; Wang, H.; Xu, F.; Jin, Y.Q. Complex-valued convolutional neural network and its application in polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
- Graves, A.; Mohamed, A.R.; Hinton, G. Speech Recognition with Deep Recurrent Neural Networks. arXiv, 2013; arXiv:1303.5778.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 2–8 December 2012; pp. 1097–1105.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv, 2014; arXiv:1409.4842.
- Huang, G.; Liu, Z.; Weinberger, K.Q.; van der Maaten, L. Densely connected convolutional networks. arXiv, 2016; arXiv:1608.06993.
- Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y.; Manzagol, P.A. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res. 2010, 11, 3371–3408.
- Fischer, A.; Igel, C. Training restricted Boltzmann machines: An introduction. Pattern Recognit. 2014, 47, 25–39.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
- Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv, 2012; arXiv:1207.0580.
- Kang, G.; Li, J.; Tao, D. Shakeout: A New Approach to Regularized Deep Neural Network Training. IEEE Trans. Pattern Anal. Mach. Intell. 2017.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015; arXiv:1502.03167.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
- Olah, C. Neural Networks, Manifolds, and Topology. Available online: https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ (accessed on 5 August 2018).
- Bottou, L.; Curtis, F.E.; Nocedal, J. Optimization methods for large-scale machine learning. arXiv, 2016; arXiv:1606.04838.
- China Electronics Technology Group Corporation 38 Institute. Available online: http://www.cetc38.com.cn/ (accessed on 21 August 2018).
- Fulkerson, B.; Vedaldi, A.; Soatto, S. Class segmentation and object localization with superpixel neighborhoods. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Kyoto, Japan, 29 September–2 October 2009; pp. 670–677.
- Sherrah, J. Fully Convolutional Networks for Dense Semantic Labelling of High-Resolution Aerial Imagery. arXiv, 2016; arXiv:1606.02585.
- Mnih, V. Machine Learning for Aerial Image Labeling. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 2013.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
Type | Model | Application Cases
---|---|---
Empirical Distribution | Log-Normal [15] | homogeneous, amplitude/intensity
 | Weibull [16] | homogeneous, single look, amplitude/intensity
 | Fisher [17] | homogeneous/heterogeneous, single look/multi-look, amplitude/intensity
Priori Hypothesis | Rayleigh [12] | homogeneous, single look, amplitude
 | Gamma [13] | homogeneous, multi-look, intensity
 | K [14] | heterogeneous, single look/multi-look, amplitude/intensity
Type | Model | Parameters | MoM Estimation
---|---|---|---
Empirical Distributions | Log-Normal [15] | |
 | Weibull [16] | |
 | Fisher [17] | |
Priori Hypothesis | Rayleigh [12] | b |
 | Gamma [13] | |
 | K [14] | |
Dataset | Date | Band | #Class | #Samples per Class | #Image Patch |
---|---|---|---|---|---|
Guangdong | 2008 | X-band | 5 | 1000 | 5 × 1000 |
Orchard | 2010 | X-band | 7 | 200 | 7 × 200 |
Rice | 2010 | X-band | 7 | 200 | 7 × 200 |
Layer | Parameter | #Channel |
---|---|---|
Input | 64 × 64 | 1 |
Quadratic Module | 4 × 4, 2 | 4 |
conv1 | 3 × 3, 1 | |
relu1 | - | 16 |
pooling1 | 2 × 2, 2 | |
conv2 | 3 × 3, 1 | |
relu2 | - | 64 |
pooling2 | 2 × 2, 2 | |
conv3 | 3 × 3, 1 | |
relu3 | - | 128 |
pooling3 | 2 × 2, 2 | |
ip1 | 3 × 3, 1 | 256 |
relu3 | - | |
ip2 | 1 × 1, 1 | number of classes |
Class | GLCM | Gabor | LBP | CNN | SLN
---|---|---|---|---|---
Vegetation | 80.00 | 66.80 | 89.60 | 89.20 ± 0.11 | 90.00 ± 0.10 |
Pool | 89.20 | 89.20 | 78.00 | 91.20 ± 0.03 | 92.80 ± 0.03 |
River | 88.00 | 74.00 | 93.20 | 93.60 ± 0.01 | 94.00 ± 0.01 |
LD Area | 65.20 | 70.80 | 67.20 | 74.40 ± 0.19 | 72.40 ± 0.12 |
HD Area | 82.00 | 78.80 | 76.40 | 70.40 ± 0.15 | 79.60 ± 0.20 |
AA | 80.88 | 75.92 | 80.88 | 83.76 | 85.76 |
Kappa | 0.76 | 0.70 | 0.76 | 0.79 | 0.82 |
Class | GLCM | Gabor | LBP | CNN | SLN
---|---|---|---|---|---
Mango1 | 86.00 | 94.00 | 82.00 | 98.00 ± 0.20 | 98.00 ± 0.16 |
Mango2 | 66.00 | 60.00 | 50.00 | 72.40 ± 0.14 | 72.80 ± 0.12 |
Mango3 | 86.00 | 78.00 | 68.00 | 86.86 ± 0.17 | 84.00 ± 0.25 |
Betel Nut | 88.00 | 82.00 | 80.00 | 97.20 ± 0.25 | 97.60 ± 0.07 |
Longan | 84.00 | 80.00 | 76.00 | 92.00 ± 0.05 | 94.00 ± 0.04 |
Forest | 76.00 | 82.00 | 72.00 | 78.40 ± 0.20 | 80.40 ± 0.14 |
Building | 74.00 | 78.00 | 80.00 | 82.40 ± 0.09 | 91.20 ± 0.15 |
AA | 79.99 | 79.14 | 72.57 | 86.74 | 88.28 |
Kappa | 0.76 | 0.75 | 0.67 | 0.84 | 0.86 |
Class | GLCM | Gabor | LBP | CNN | SLN
---|---|---|---|---|---
Rice1 | 90.00 | 47.50 | 75.00 | 90.00 ± 0.17 | 95.00 ± 0.10 |
Rice2 | 82.50 | 62.50 | 85.00 | 100.00 ± 0.02 | 100.00 ± 0.02 |
Rice3 | 70.00 | 75.00 | 75.00 | 80.00 ± 0.26 | 87.50 ± 0.07 |
Rice4 | 27.50 | 45.00 | 55.00 | 67.50 ± 0.25 | 65.00 ± 0.16 |
Rice5 | 67.50 | 72.50 | 55.00 | 87.50 ± 0.19 | 82.50 ± 0.15 |
Rice6 | 82.50 | 80.00 | 90.00 | 95.00 ± 0.06 | 95.00 ± 0.06 |
Grass | 100.00 | 82.50 | 100.00 | 97.50 ± 0.01 | 100.00 ± 0.01 |
AA | 74.28 | 66.42 | 76.42 | 88.21 | 89.28 |
Kappa | 0.70 | 0.60 | 0.72 | 0.86 | 0.87 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
He, C.; He, B.; Liu, X.; Kang, C.; Liao, M. Statistics Learning Network Based on the Quadratic Form for SAR Image Classification. Remote Sens. 2019, 11, 282. https://doi.org/10.3390/rs11030282