Journal of Imaging
  • Article
  • Open Access

28 May 2020

Spatially and Spectrally Concatenated Neural Networks for Efficient Lossless Compression of Hyperspectral Imagery

1 Department of Electrical and Computer Engineering, University of Alabama in Huntsville, Huntsville, AL 35899, USA
2 Chubbs Insurance Inc., New York, NY 10020, USA
* Author to whom correspondence should be addressed.
Note: This work is independent of Shen’s current affiliation.

Abstract

To achieve efficient lossless compression of hyperspectral images, we design a concatenated neural network that extracts both spatial and spectral correlations for accurate pixel value prediction. Unlike conventional neural-network-based methods in the literature, the proposed network functions as an adaptive filter, thereby eliminating the need for pre-training on decompressed data. To meet the demand for low-complexity onboard processing, we use a shallow network with only two hidden layers for efficient feature extraction and predictive filtering. Extensive simulations on commonly used hyperspectral datasets and the standard CCSDS test datasets show that the proposed approach attains significant improvements over several state-of-the-art methods, including the standard ESA, CCSDS-122, and CCSDS-123 compressors.
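To illustrate the idea summarized above, the following is a minimal sketch (not the authors' implementation) of a shallow two-hidden-layer network used as an adaptive filter: its weights are updated online, pixel by pixel, from a causal context that concatenates spatial neighbors in the current band with co-located pixels from previously coded bands, so no pre-training on decompressed data is required. The class name AdaptivePredictor, the layer widths, the tanh activations, the learning rate, and the causal_context helper are all illustrative assumptions, not values taken from the paper.

# Hypothetical sketch of a shallow two-hidden-layer predictor used as an
# adaptive filter for hyperspectral pixel prediction. All names and
# hyperparameters are assumptions for illustration only.
import numpy as np

class AdaptivePredictor:
    def __init__(self, context_size, hidden=16, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (hidden, context_size))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, hidden))
        self.b2 = np.zeros(hidden)
        self.w3 = rng.normal(0.0, 0.1, hidden)
        self.b3 = 0.0
        self.lr = lr

    def _forward(self, x):
        h1 = np.tanh(self.W1 @ x + self.b1)
        h2 = np.tanh(self.W2 @ h1 + self.b2)
        y = self.w3 @ h2 + self.b3
        return h1, h2, y

    def update(self, x, target):
        # One gradient step on the squared prediction error (online adaptation,
        # no pre-training); returns the prediction error for this pixel.
        h1, h2, y = self._forward(x)
        e = y - target
        dh2 = e * self.w3 * (1.0 - h2 ** 2)
        dh1 = (self.W2.T @ dh2) * (1.0 - h1 ** 2)
        self.w3 -= self.lr * e * h2
        self.b3 -= self.lr * e
        self.W2 -= self.lr * np.outer(dh2, h1)
        self.b2 -= self.lr * dh2
        self.W1 -= self.lr * np.outer(dh1, x)
        self.b1 -= self.lr * dh1
        return e

def causal_context(cube, b, r, c, n_bands=2):
    # Concatenate causal spatial neighbors in the current band with
    # co-located pixels from previously coded bands (spectral context).
    spatial = [cube[b, r, c - 1], cube[b, r - 1, c], cube[b, r - 1, c - 1]]
    spectral = [cube[b - k, r, c] for k in range(1, n_bands + 1)]
    return np.array(spatial + spectral, dtype=np.float64)

# Toy usage on a random 12-bit cube (bands x rows x cols); in a real codec
# the prediction residuals would be mapped to integers and entropy coded.
cube = np.random.default_rng(1).integers(0, 4096, (8, 32, 32)).astype(np.float64)
scale = 4096.0
pred = AdaptivePredictor(context_size=5)
for b in range(2, cube.shape[0]):
    for r in range(1, cube.shape[1]):
        for c in range(1, cube.shape[2]):
            x = causal_context(cube, b, r, c) / scale
            residual = pred.update(x, cube[b, r, c] / scale)

Because both the encoder and the decoder can rebuild the same causal context from already decoded pixels, the online weight updates stay synchronized on both sides, which is what allows this kind of adaptive predictor to avoid any pre-training stage.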
