# Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks


## Abstract


## 1. Introduction

- Use of CNNs (both pre-trained and trained-from-scratch models) to tackle the classification of quantized versus original MS images on a recently released dataset.
- Investigation of the training set size with which a CNN should be trained to classify real satellite MS images efficiently.
- Exploration of the effect of the quantization and subsampling processes on the MS image classification task.
- A recovery algorithm that reconstructs the real-valued measurements of high-dimensional data from their quantized, and possibly corrupted, observations.
- Quantification of the classification scheme’s performance on real quantized & subsampled recovered satellite MS images, highlighting its clear merits when operating on the recovered images vis-à-vis their quantized counterparts.

## 2. Related Work

## 3. Problem Formulation and Proposed Method

#### 3.1. Quantized Multispectral Imagery Classification

#### 3.2. Deep Neural Networks for Land-Cover Classification

#### 3.2.1. Spatial Feature Learning with Convolutional Neural Networks

#### 3.2.2. Multispectral Image Prediction Modeling

- Load the pre-trained ResNet-50 network, which has been trained on a task of similar flavor (i.e., RGB image classification) to the one at hand (i.e., MS image classification).
- Replace the classification layers (designed for the 1000 classes of the ImageNet database) with new ones for the MS image classification task.
- Train the network on the available dataset for the MS image classification task.
- Test the accuracy of the trained network.

#### 3.3. Tensor Recovery from Quantized Measurements

#### 3.3.1. Quantization and Statistical Model

- The logistic model (logistic noise), which is common in statistics, with ${\mathcal{E}}_{{i}_{1}\dots {i}_{N}}$ i.i.d. according to the logistic distribution with zero mean and unit scale, and ${\Phi}_{\mathrm{log}}\left(x\right)=\frac{1}{1+{e}^{-x}}$.
- The probit model (standard normal noise) with ${\mathcal{E}}_{{i}_{1}\dots {i}_{N}}$ i.i.d. according to the standard normal distribution $\mathcal{N}(0,1)$, and ${\Phi}_{\mathrm{pro}}\left(x\right)={\int}_{-\infty}^{x}\mathcal{N}(s\mid 0,1)\phantom{\rule{0.166667em}{0ex}}ds.$
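Both link functions are ordinary CDFs and can be evaluated directly. Below is a minimal sketch of $\Phi_{\mathrm{log}}$ and $\Phi_{\mathrm{pro}}$; the boundary value `b` is a hypothetical quantization bin edge used only for illustration.

```python
import math

def phi_log(x: float) -> float:
    """Logistic CDF: Phi_log(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def phi_pro(x: float) -> float:
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Under either noise model, the probability that a noisy measurement
# x + noise falls below a quantization boundary b is Phi(b - x).
x, b = 0.3, 1.0
print(phi_log(b - x), phi_pro(b - x))
```

The logistic model is often preferred computationally because its CDF and log-likelihood gradients have closed forms, while the probit model requires evaluating the normal CDF.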

#### 3.3.2. Quantized Tensor Recovery

#### 3.3.3. Dynamic Weights

## 4. Experimental Evaluation

#### 4.1. Dataset Description

#### 4.2. Experimental Setup

#### 4.3. Effect of the Training Set Size on the Classification Performance

- Class Highway is most frequently confused with classes Industrial and Residential.
- Class Permanent Crop is most frequently confused with classes Herbaceous Vegetation, Annual Crop, and Pasture.

#### 4.4. Effect of the Compression Ratio on the Recovery

#### 4.5. Effect of the Number of Quantization Bits on the Recovery

#### 4.6. Effect of the Tensor Unfolding and Dynamic Weights on the Recovery

#### 4.7. Effect of the Quantization on the Classification Performance

#### 4.8. Effect of the Quantization and the Recovery on the Classification Performance

#### 4.9. Joint Effects of the Quantization and the Recovery on the Classification Performance

#### 4.10. Effect of Missing Values on the Recovery

- 10 patches of size 3 × 3 × 13 pixels
- 20 patches of size 3 × 3 × 13 pixels
- 10 patches of size 7 × 7 × 13 pixels
- 20 patches of size 7 × 7 × 13 pixels
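The four sampling scenarios above can be simulated by knocking out randomly placed spatial patches across all 13 spectral bands. A minimal NumPy sketch follows; the 64 × 64 spatial size matches the EuroSAT patches, while the uniformly random patch placement is an assumption of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_mask(height=64, width=64, bands=13, n_patches=10, patch=3):
    """Boolean mask marking the entries removed by n_patches randomly
    placed (patch x patch x bands) blocks (True = missing)."""
    mask = np.zeros((height, width, bands), dtype=bool)
    for _ in range(n_patches):
        r = rng.integers(0, height - patch + 1)
        c = rng.integers(0, width - patch + 1)
        mask[r:r + patch, c:c + patch, :] = True  # all bands dropped
    return mask

# Scenario 1 above: 10 patches of size 3 x 3 x 13.
m = patch_mask(n_patches=10, patch=3)
print(m.sum())  # at most 10 * 3 * 3 * 13 = 1170 entries (patches may overlap)
```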

#### 4.11. Effect of Missing Values and the Recovery on the Classification Performance

#### 4.12. Effect of Missing Values and Quantization on the Recovery

#### 4.13. Effect of the Quantization & Missing Values on the Classification Performance

#### 4.14. Effect of the Quantization & Missing Values and the Recovery on the Classification Performance

#### 4.15. Joint Effects of Quantization & Missing Values and Recovery on the Classification Performance

#### 4.16. Comparison of the Proposed Scheme with Existing Methods

- The pre-trained ResNet-50 model was originally designed for RGB image recognition (not for MS imagery), where it achieves a success rate of over $98.5\%$ (on the EuroSAT RGB dataset). Adapting it appropriately led to an expected performance drop to $92.8\%$, since the problem is of a similar flavor but not exactly the same. In contrast, the comparison-CNN reaches a success rate of up to $95.3\%$, indicating that models designed for video processing purposes can scale back well to image processing tasks.
- The comparison-CNN model is considerably faster than the pre-trained ResNet-50, a fact that can be attributed to the number of trainable parameters of each architecture. To that end, Table 3 presents the parameters that must be learned by each model. Since the comparison-CNN model is approximately 12 times “lighter” than the ResNet-50, its training time was expected to be shorter, as shown in Figure 22b.
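Trainable-parameter counts like those reported in Table 3 reduce to a one-line sum over a model's parameter list. A minimal PyTorch sketch is shown below; the toy network is a hypothetical stand-in, not either of the paper's architectures.

```python
import torch.nn as nn

def trainable_params(model: nn.Module) -> int:
    """Count the parameters updated during training (requires_grad=True)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Tiny stand-in network; running the same count on the actual ResNet-50
# and comparison-CNN graphs yields the figures in Table 3.
toy = nn.Sequential(nn.Conv2d(13, 8, 3), nn.ReLU(), nn.Flatten())
print(trainable_params(toy))  # 13*8*3*3 weights + 8 biases = 944
```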

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

Abbreviation | Definition |
---|---|
RS | Remote Sensing |
MS | Multispectral |
HS | Hyperspectral |
DWT | Discrete Wavelet Transform |
ML | Machine Learning |
SVM | Support Vector Machine |
RF | Random Forest |
DL | Deep Learning |
CNN | Convolutional Neural Network |
LEOP | Launch and Early Orbit Phase |
k-NN | k-Nearest Neighbors |
PCA | Principal Component Analysis |
NN | Neural Network |
ILSVRC | ImageNet Large-Scale Visual Recognition Challenge |
SGD | Stochastic Gradient Descent |
CDF | Cumulative Distribution Function |
SVD | Singular Value Decomposition |
PSNR | Peak Signal-to-Noise Ratio |
MSE | Mean Square Error |
bpppb | bits per pixel per band |
ROC | Receiver Operating Characteristics |

## References

- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. **2013**, 1, 6–36.
- Sudmanns, M.; Tiede, D.; Lang, S.; Bergstedt, H.; Trost, G.; Augustin, H.; Baraldi, A.; Blaschke, T. Big Earth data: Disruptive changes in Earth observation data management and analysis? Int. J. Digit. Earth **2019**, 1–19.
- Huang, B. Satellite Data Compression; Springer Science & Business Media: Berlin, Germany, 2011.
- Zhou, S.; Deng, C.; Zhao, B.; Xia, Y.; Li, Q.; Chen, Z. Remote sensing image compression: A review. In Proceedings of the 2015 IEEE International Conference on Multimedia Big Data, Beijing, China, 20–22 April 2015; pp. 406–410.
- Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. **2011**, 66, 247–259.
- Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. **2016**, 114, 24–31.
- Stivaktakis, R.; Tsagkatakis, G.; Tsakalides, P. Deep Learning for Multilabel Land Cover Scene Categorization Using Data Augmentation. IEEE Geosci. Remote Sens. Lett. **2019**, 16, 1031–1035.
- Castelluccio, M.; Poggi, G.; Sansone, C.; Verdoliva, L. Land use classification in remote sensing images by convolutional neural networks. arXiv **2015**, arXiv:1508.00092.
- Long, Y.; Gong, Y.; Xiao, Z.; Liu, Q. Accurate object localization in remote sensing images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 2486–2498.
- Zhou, W.; Newsam, S.; Li, C.; Shao, Z. Learning low dimensional convolutional neural networks for high-resolution remote sensing image retrieval. Remote Sens. **2017**, 9, 489.
- Theodoridis, S.; Chatzis, S. A Tour to Deep Learning: From the Origins to Cutting Edge Research and Open Challenges; IEEE Signal Processing Society: Piscataway, NJ, USA, 2018.
- Zhu, Z.; Qi, G.; Chai, Y.; Li, P. A geometric dictionary learning based approach for fluorescence spectroscopy image fusion. Appl. Sci. **2017**, 7, 161.
- Cheng, G.; Han, J. A survey on object detection in optical remote sensing images. ISPRS J. Photogramm. Remote Sens. **2016**, 117, 11–28.
- Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. **2005**, 26, 217–222.
- Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. **2004**, 42, 1778–1790.
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. **2016**, 4, 22–40.
- Tsagkatakis, G.; Aidini, A.; Fotiadou, K.; Giannopoulos, M.; Pentari, A.; Tsakalides, P. Survey of Deep-Learning Approaches for Remote Sensing Observation Enhancement. Sensors **2019**, 19, 3929.
- Sharma, A.; Liu, X.; Yang, X.; Shi, D. A patch-based convolutional neural network for remote sensing image classification. Neural Netw. **2017**, 95, 19–28.
- Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. **2016**, 113, 155–165.
- Maggiori, E.; Tarabalka, Y.; Charpiat, G.; Alliez, P. Convolutional neural networks for large-scale remote-sensing image classification. IEEE Trans. Geosci. Remote Sens. **2016**, 55, 645–657.
- Marcos, D.; Volpi, M.; Kellenberger, B.; Tuia, D. Land cover mapping at very high resolution with rotation equivariant CNNs: Towards small yet accurate models. ISPRS J. Photogramm. Remote Sens. **2018**, 145, 96–107.
- Fotiadou, K.; Tsagkatakis, G.; Tsakalides, P. Deep convolutional neural networks for the classification of snapshot mosaic hyperspectral imagery. Electron. Imaging **2017**, 2017, 185–190.
- Hamida, A.B.; Benoit, A.; Lambert, P.; Amar, C.B. 3-D Deep learning approach for remote sensing image classification. IEEE Trans. Geosci. Remote Sens. **2018**, 56, 4420–4434.
- Lyu, H.; Lu, H.; Mou, L.; Li, W.; Wright, J.; Li, X.; Li, X.; Zhu, X.; Wang, J.; Yu, L.; et al. Long-term annual mapping of four cities on different continents by applying a deep information learning method to Landsat data. Remote Sens. **2018**, 10, 471.
- Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging **2016**, 35, 1285–1298.
- Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. **2019**, 152, 166–177.
- Davenport, M.A.; Plan, Y.; Van Den Berg, E.; Wootters, M. 1-bit matrix completion. Inf. Inference **2014**, 3, 189–223.
- Cai, T.; Zhou, W.X. A max-norm constrained minimization approach to 1-bit matrix completion. J. Mach. Learn. Res. **2013**, 14, 3619–3647.
- Bhaskar, S.A.; Javanmard, A. 1-bit matrix completion under exact low-rank constraint. In Proceedings of the 2015 49th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 18–20 March 2015; pp. 1–6.
- Lan, A.S.; Studer, C.; Baraniuk, R.G. Matrix recovery from quantized and corrupted measurements. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 4973–4977.
- Lan, A.S.; Studer, C.; Baraniuk, R.G. Quantized matrix completion for personalized learning. arXiv **2014**, arXiv:1412.5968.
- Lafond, J.; Klopp, O.; Moulines, E.; Salmon, J. Probabilistic low-rank matrix completion on finite alphabets. In Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 1727–1735.
- Bhaskar, S.A. Probabilistic low-rank matrix completion from quantized measurements. J. Mach. Learn. Res. **2016**, 17, 2131–2164.
- Signoretto, M.; Van de Plas, R.; De Moor, B.; Suykens, J.A. Tensor versus matrix completion: A comparison with application to spectral data. IEEE Signal Process. Lett. **2011**, 18, 403–406.
- Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. **2012**, 35, 208–220.
- Giannopoulos, M.; Savvaki, S.; Tsagkatakis, G.; Tsakalides, P. Application of Tensor and Matrix Completion on Environmental Sensing Data. In Proceedings of ESANN 2017, Bruges, Belgium, 26–28 April 2017.
- Giannopoulos, M.; Tsagkatakis, G.; Tsakalides, P. On the impact of Tensor Completion in the Classification of Undersampled Hyperspectral Imagery. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1975–1979.
- Aidini, A.; Tsagkatakis, G.; Tsakalides, P. 1-bit tensor completion. Electron. Imaging **2018**, 2018, 261-1–261-6.
- Li, B.; Zhang, X.; Li, X.; Lu, H. Tensor completion from one-bit observations. IEEE Trans. Image Process. **2018**, 28, 170–180.
- Ghadermarzy, N.; Plan, Y.; Yilmaz, O. Learning tensors from partial binary measurements. IEEE Trans. Signal Process. **2018**, 67, 29–40.
- Tsagkatakis, G.; Amoruso, L.; Sykas, D.; Abbattista, C.; Tsakalides, P. Lightweight Onboard Hyperspectral Compression and Recovery by Matrix Completion. In Proceedings of the 5th International Workshop on On-Board Payload Data Compression (OBPDC 2016), Frascati, Italy, 28–29 September 2016.
- Li, N.; Li, B. Tensor completion for on-board compression of hyperspectral images. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 517–520.
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X.; Du, B. Compression of hyperspectral remote sensing images by tensor approach. Neurocomputing **2015**, 147, 358–363.
- Fang, L.; He, N.; Lin, H. CP tensor-based compression of hyperspectral images. JOSA A **2017**, 34, 252–258.
- Marsetic, A.; Kokalj, Z.; Ostir, K. The effect of lossy image compression on object based image classification: WorldView-2 case study. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2012**, 3819, 187–192.
- Garcia-Vilchez, F.; Muñoz-Marí, J.; Zortea, M.; Blanes, I.; González-Ruiz, V.; Camps-Valls, G.; Plaza, A.; Serra-Sagristà, J. On the impact of lossy compression on hyperspectral image classification and unmixing. IEEE Geosci. Remote Sens. Lett. **2010**, 8, 253–257.
- Chen, Z.; Hu, Y.; Zhang, Y. Effects of Compression on Remote Sensing Image Classification Based on Fractal Analysis. IEEE Trans. Geosci. Remote Sens. **2019**, 57, 4577–4590.
- Gimona, A.; Poggio, L.; Aalders, I.; Aitkenhead, M. The effect of image compression on synthetic PROBA-V images. Int. J. Remote Sens. **2014**, 35, 2639–2653.
- Hagag, A.; Fan, X.; El-Samie, F.E.A. The effect of lossy compression on feature extraction applied to satellite Landsat ETM+ images. In Proceedings of the Eighth International Conference on Digital Image Processing (ICDIP 2016), Chengdu, China, 20–22 May 2016.
- Theodoridis, S.; Pikrakis, A.; Koutroumbas, K.; Cavouras, D. Introduction to Pattern Recognition: A Matlab Approach; Academic Press: Cambridge, MA, USA, 2010.
- Theodoridis, S. Machine Learning: A Bayesian and Optimization Perspective; Academic Press: Cambridge, MA, USA, 2015.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; NIPS: Lake Tahoe, NV, USA, 2012; pp. 1097–1105.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. **2015**, 115, 211–252.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
- Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. **2011**, 12, 2121–2159.
- Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. **2015**, 32, 145–163.
- Håstad, J. Tensor rank is NP-complete. J. Algorithms **1990**, 11, 644–654.
- Fazel, M.; Hindi, H.; Boyd, S.P. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, Arlington, VA, USA, 25–27 June 2001; Volume 6, pp. 4734–4739.
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. In Proceedings of the IGARSS 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 204–207.
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. arXiv **2017**, arXiv:1709.00029.
- Tran, D.; Bourdev, L.; Fergus, R.; Torresani, L.; Paluri, M. Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4489–4497.
- Giannopoulos, M.; Tsagkatakis, G.; Blasi, S.; Toutounchi, F.; Mouchtaris, A.; Tsakalides, P.; Mrak, M.; Izquierdo, E. Convolutional neural networks for video quality assessment. arXiv **2018**, arXiv:1809.10117.
- Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. J. Mach. Learn. Technol. **2011**, 2, 37–63.

**Figure 2.** Residual learning framework with skip connections proposed in [55]. Instead of learning an unreferenced mapping (e.g., $\mathcal{H}\left(x\right)$), the network learns a residual mapping (e.g., $\mathcal{F}\left(x\right)+x$, with $\mathcal{F}\left(x\right)=\mathcal{H}\left(x\right)-x$). In this simple case, shortcut connections perform identity mapping, and their output is subsequently added to the output of the stacked convolutional layers.

**Figure 3.** ResNet-50 layer graph. The employed network comprises several convolutional, activation, and batch-normalization layers, connected via shortcut connections.

**Figure 6.** Classification accuracy and respective computational time as a function of the number of training examples. The more training examples are used, the better the classification accuracy, at the cost of a more computationally demanding network.

**Figure 7.** Best CNN model’s classification accuracy and loss over the training epochs. As the number of epochs increases, the CNN model improves both in classification accuracy and in classification loss.

**Figure 8.** CNN model confusion matrix. For most classes in the test set, the trained CNN model correctly predicts the actual class with a success rate of over $90\%$.

**Figure 9.** Recovery error as a function of the number of quantization bits and for different classes, using the logistic model.

**Figure 10.** The second spectral band of an MS image from the Highway class, the corresponding images quantized to 2, 4, and 6 bits, and the recovered images for each case, using the logistic model.

**Figure 11.** Recovery error as a function of the number of quantization bits on different unfoldings of the tensor, using the logistic model and the MS images of the Annual Crop class, indicating the impact of the dynamic weights on the recovery.

**Figure 12.** Classification accuracy versus the number of training examples, for several quantization levels. The classification strength of the system clearly suffers even when quantization uses only 1 bit fewer than the nominal case.

**Figure 13.** Classification accuracy versus the number of training examples, for quantized images at various levels subsequently recovered using the proposed method. Our recovery approach clearly improves classification performance even when operating on images quantized with as few as 4 bits.

**Figure 14.** General comparison of the quantization and the recovery processes on the classification performance.

**Figure 15.** Recovery error for the four missing value scenarios across the ten classes, using the logistic model.

**Figure 16.** Classification accuracy as a function of the number of training examples for images with missing values and their counterparts completed via our technique. Our recovery algorithm leads to higher levels of classification accuracy for all sampling scenarios.

**Figure 17.** Recovery error for two quantization levels and the four missing value scenarios on classes Pasture (**left**) and Permanent Crop (**right**), using the logistic model.

**Figure 18.** The second spectral band of an MS image from the Industrial class, the corresponding quantized and subsampled image using 8 bits and sampling scenario 1, as well as the recovered image, using the logistic model.

**Figure 19.** Classification accuracy versus the number of training examples for indicative numbers of quantization bits applied to the test set images under each sampling scenario (**a**,**b**), and after recovery following the quantization process (**c**,**d**). The system’s performance is heavily affected by the various types of signal degradation, but increasing the number of quantization bits, as well as employing the proposed recovery scheme, clearly improves the obtained performance, even when the spatial size and the number of missing measurements are large.

**Figure 20.** General comparison of quantization with 8–11 bits, missing values, and the recovery processes on the classification performance.

**Figure 22.** Classification accuracy and respective computational time versus the number of training examples, for competing classification models. The “lighter” comparison-CNN model outperforms the pre-trained ResNet-50 model by an accuracy margin of up to 2.5% in the best case, while also being faster.

**Figure 23.** Precision and recall plots for each class for the pre-trained ResNet-50 (**top**) and comparison-CNN (**bottom**).

**Figure 24.** Comparison of the quantization and the recovery processes on the classification performance, when competing quantization schemes (JPEG, JPEG+PCA) are adopted.

**Figure 25.** General system comparison of the quantization-recovery-classification processes. The proposed system clearly outperforms both competing ones in every examined quantization-bit scenario.

**Table 1.** Dataset split among training-validation-test sets. Each class is split into $90\%$-$5\%$-$5\%$ training-validation-test subsets respectively, yielding a training set of $18,000$ samples and validation & test sets of 1000 samples each.

Class Name | Available Samples | Training Set Samples | Validation Set Samples | Test Set Samples |
---|---|---|---|---|
Annual Crop | 2000 | 1800 | 100 | 100 |
Forest | 2000 | 1800 | 100 | 100 |
Herbaceous Vegetation | 2000 | 1800 | 100 | 100 |
Highway | 2000 | 1800 | 100 | 100 |
Industrial | 2000 | 1800 | 100 | 100 |
Pasture | 2000 | 1800 | 100 | 100 |
Permanent Crop | 2000 | 1800 | 100 | 100 |
Residential | 2000 | 1800 | 100 | 100 |
River | 2000 | 1800 | 100 | 100 |
Sea Lake | 2000 | 1800 | 100 | 100 |
Total | 20,000 | 18,000 | 1000 | 1000 |

**Table 2.** Recovery error (PSNR, dB) for different compression ratios on the MS images of each class, using the logistic model.

PSNR (dB) | $\mathbf{\times}\mathbf{6.9}$ | $\mathbf{\times}\mathbf{3.4}$ | $\mathbf{\times}\mathbf{2.25}$ | $\mathbf{\times}\mathbf{1.67}$ |
---|---|---|---|---|
Annual Crop | 18.14 | 29.60 | 41.25 | 47.34 |
Forest | 17.59 | 29.68 | 41.34 | 49.09 |
Herbaceous Vegetation | 18.04 | 29.50 | 41.37 | 48.81 |
Highway | 17.76 | 29.48 | 41.02 | 47.82 |
Industrial | 17.56 | 29.40 | 40.68 | 47.11 |
Pasture | 18.09 | 29.75 | 41.22 | 47.73 |
Permanent Crop | 18.08 | 29.55 | 41.19 | 47.82 |
Residential | 17.87 | 29.46 | 40.90 | 48.04 |
River | 18.04 | 29.61 | 41.02 | 48.15 |
Sea Lake | 21.74 | 30.84 | 40.62 | 52.80 |
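PSNR figures such as those in Table 2 follow the standard definition $\mathrm{PSNR} = 10\log_{10}(\mathrm{peak}^2/\mathrm{MSE})$. Below is a minimal NumPy sketch; the choice of peak value is an assumption of this illustration, since the paper's exact normalization is not restated here.

```python
import numpy as np

def psnr(original, recovered, peak=None):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    original = np.asarray(original, dtype=np.float64)
    recovered = np.asarray(recovered, dtype=np.float64)
    if peak is None:
        peak = original.max()  # assumed peak; fixed bit-depth maxima also common
    mse = np.mean((original - recovered) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform perturbation of 0.01 on unit-range data.
x = np.linspace(0.0, 1.0, 100)
print(round(psnr(x, x + 0.01, peak=1.0), 2))  # 40.0
```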

**Table 3.** Number of trainable parameters of the competing DL models. A more complex network architecture clearly leads to a more computationally demanding model.

Network Name | Number of Trainable Parameters |
---|---|
ResNet-50 | 23,586,442 |
Comparison-CNN | 1,969,994 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Giannopoulos, M.; Aidini, A.; Pentari, A.; Fotiadou, K.; Tsakalides, P.
Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks. *J. Imaging* **2020**, *6*, 24.
https://doi.org/10.3390/jimaging6040024
