Article

Hyperspectral Data: Efficient and Secure Transmission

by
Raffaele Pizzolante
* and
Bruno Carpentieri
Dipartimento di Informatica, Università di Salerno, 84084 Fisciano (SA), Italy
*
Author to whom correspondence should be addressed.
Algorithms 2017, 10(4), 132; https://doi.org/10.3390/a10040132
Submission received: 23 October 2017 / Revised: 25 November 2017 / Accepted: 28 November 2017 / Published: 30 November 2017

Abstract

Airborne and spaceborne hyperspectral sensors collect information derived from the electromagnetic spectrum of an observed area. Hyperspectral data are used in several studies and are an important aid in different real-life applications (e.g., mining and geology, ecology, surveillance, etc.). A hyperspectral image has a three-dimensional structure (a sort of datacube): it can be considered as a sequence of narrow and contiguous spectral channels (bands). The objective of this paper is to present a framework that permits the efficient storage/transmission of an input hyperspectral image, as well as its protection. The proposed framework relies on a reversible invisible watermarking scheme and an efficient lossless compression algorithm. The reversible watermarking scheme is used in conjunction with digital signature techniques in order to permit the verification of the integrity of a hyperspectral image by the receiver.

1. Introduction

Hyperspectral imaging sensors obtain information from the electromagnetic spectrum of an observed area. Spectral imaging techniques cover a significant portion of the electromagnetic spectrum in which the frequencies are in a range that spans from the ultraviolet to the infrared. A hyperspectral sensor subdivides the spectrum into different spectral channels (referred to as “bands”). For such reasons, a hyperspectral image can be considered as a sort of “datacube” [1], since such data can be structured in a three-dimensional manner.
Hyperspectral images are effectively used in a wide range of real-life and research applications (agriculture, mineralogy, physics, surveillance, etc.). They are often shared among different entities, sometimes with different purposes (for example, different research centres), in order to carry out joint tasks.
Several scenarios can be drawn in which these data are shared/stored/transmitted for sensitive objectives (e.g., military applications [2,3], counter-terrorism [4], forensic applications [5], etc.). Thus, an important concern might be to ensure data protection against tampering, which can occur even in very sensitive cases (e.g., target-detection applications, etc.). Since these images need to be efficiently transmitted to a base as soon as they are acquired, and they need to be efficiently stored, we propose a possible framework for efficient transmission and protection.
The proposed framework relies on two main components: a lossless compression algorithm and a reversible watermarking scheme. Digital watermarking techniques are generally used to ensure security, content authentication, and copyright protection (e.g., [6,7,8]). Through watermarking, the input data (i.e., the hyperspectral images) become a sort of “carrier” of hidden information, which can convey important data (e.g., [9,10]). We highlight the key aspects of the proposed framework in Section 2, and we outline further details for a possible effective implementation in Section 2.1. Regarding the implementation, we considered a reversible invisible watermarking scheme (Section 2.1.1), designed specifically for hyperspectral images, and the multiband lossless compression of hyperspectral images (LMBHI) algorithm [11] (reviewed in Section 2.1.2), a predictive-based lossless compression algorithm whose results are comparable with the state of the art.
Finally, we report and discuss the obtained experimental results in Section 3, and Section 4 draws our conclusions.

2. An Efficient and Secure Transmission Framework

Figure 1 shows the architecture of the proposed transmission framework. First, the digest h(HI) of the input hyperspectral image HI is computed by invoking a cryptographic hash function h(·) (e.g., SHA-3 Keccak [12]). Subsequently, the obtained digest h(HI) is used as a watermark string and is embedded into HI by means of a reversible invisible watermarking scheme. As shown in Figure 1, the secret key K is used for the computation of the digest as well as for the embedding of the digest h(HI) into HI.
After that, the watermarked hyperspectral image HIW (i.e., the output of the reversible invisible watermarking scheme) is compressed by using an efficient lossless compression algorithm. The output of this compression stage is denoted as HICW, and it can be efficiently transmitted.
In the following, we denote as HI′ the hyperspectral image reconstructed from HIW (where HIW is obtained as the output of the decompression of HICW), and as w′ the watermark string extracted from HIW.
The receiver can verify the integrity of HI in the following manner: if there are no alterations of HIW, then the value of h(HI′) will be equal to the value of h(HI). Consequently, w = h(HI) = h(HI′) = w′, and HI′ is exactly equal to HI.
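To make the data flow concrete, the following Python sketch outlines the sender and receiver sides under explicit assumptions: HMAC-SHA3-256 stands in for the keyed digest h(·) of Figure 1, and embed, extract_and_restore, compress, and decompress are hypothetical placeholders for the watermarking scheme of Section 2.1.1 and the LMBHI codec of Section 2.1.2. It is an illustration, not the authors' implementation.
```python
import hmac
import hashlib

def digest(hi_bytes: bytes, key: bytes) -> bytes:
    # Keyed digest of the hyperspectral image; HMAC-SHA3-256 is used here as a
    # stand-in for the keyed computation of h(HI) shown in Figure 1.
    return hmac.new(key, hi_bytes, hashlib.sha3_256).digest()

def sender(hi_bytes: bytes, key: bytes, embed, compress) -> bytes:
    w = digest(hi_bytes, key)        # w = h(HI), used as the watermark string
    hi_w = embed(hi_bytes, w, key)   # reversible invisible watermarking (Section 2.1.1)
    return compress(hi_w)            # lossless compression (Section 2.1.2) -> HI_CW

def receiver(hi_cw: bytes, key: bytes, decompress, extract_and_restore) -> bool:
    hi_w = decompress(hi_cw)                                # recover HI_W
    hi_rec, w_extracted = extract_and_restore(hi_w, key)    # HI' and w'
    # Integrity holds iff w' == h(HI'); compare_digest avoids timing side channels.
    return hmac.compare_digest(w_extracted, digest(hi_rec, key))
```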

2.1. An Effective Implementation of the Proposed Framework

For an effective implementation of the proposed framework, we considered the reversible watermarking scheme described in Section 2.1.1 and the multiband lossless compression of hyperspectral images (LMBHI) algorithm, addressed in Section 2.1.2. However, other approaches can also be used for an effective implementation of our proposal.

2.1.1. Reversible Invisible Watermarking Scheme for Hyperspectral Images

In [13], we proposed a preliminary version of a reversible invisible watermarking scheme for hyperspectral images. This scheme relies on the approaches outlined in [9,14] and belongs to the category of additive schemes. In an additive scheme, the watermark signal w (i.e., a watermark string) is directly added to the input signal, namely the pixels of the input hyperspectral image HI. In this way, the produced output (i.e., the watermarked hyperspectral image HIW) contains both signals (the one that represents HI and the one that represents the watermark w). A secret key, K, is used to perform the embedding phase.
It is important to note that this scheme is reversible. Therefore, it is possible to restore the original HI and to extract the watermark w. In addition, this scheme is fragile: a simple modification of HIW might cause the disappearance of the embedded watermark, w.
The basic objective of our scheme is to spread the bits of w among all the bands of HI. More precisely, each bit of w—referred to as bw—will be embedded into a set of four pixels SP = {x(0), x(1), x(2), x(3)}. These pixels are pseudo-randomly selected by means of a pseudo-random number generator (PRNG) seeded with the secret key K.
Since it is possible to have a set SP that cannot be used to carry bw due to the fact that the extraction algorithm might be unable to extract the hidden bit, it is necessary to classify the sets into two categories: “carrier sets” and “non-carrier sets”. A carrier set is a set in which a bit, bw, can be embedded, while a non-carrier set is a set in which a bit, bw, cannot be embedded.
When the algorithm identifies a carrier set SP, a bit bw can be embedded by means of the “embedWatermarkBit” procedure, reported in Algorithm 1, which returns as output the set SPw, i.e., the set SP in which bw is embedded. To classify a set SP, the relationship between SP and its estimation SPE is exploited. This estimation is computed by means of a linear combination of the pixels of SP, as explained in the “estimate” procedure (see Algorithm 2). By using the estimation, the extraction algorithm can classify a set SP in two steps. Furthermore, the extraction algorithm can restore the original pixel values of a carrier set by manipulating the watermarked carrier set. In this manner, the reversibility property is obtained.
Algorithm 1 The “embedWatermarkBit” procedure (pseudo-code from [13])
procedure embedWatermarkBit(SP, bw)
  if bw == 1 then
    x(0)w = x(0) + 1;
    x(1)w = x(1) − 1;
    x(2)w = x(2) − 1;
    x(3)w = x(3) + 1;
  else
    x(0)w = x(0) − 1;
    x(1)w = x(1) + 1;
    x(2)w = x(2) + 1;
    x(3)w = x(3) − 1;
  endif
  SPw = {x(0)w, x(1)w, x(2)w, x(3)w};
  return SPw;
end procedure
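A direct Python transcription of Algorithm 1 might look as follows (a minimal sketch, representing the set SP as a list of four integer pixel values).
```python
def embed_watermark_bit(sp, bw):
    # sp = [x0, x1, x2, x3]; returns the watermarked set SP_w (Algorithm 1).
    x0, x1, x2, x3 = sp
    if bw == 1:
        return [x0 + 1, x1 - 1, x2 - 1, x3 + 1]
    return [x0 - 1, x1 + 1, x2 + 1, x3 - 1]
```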
Algorithm 3 reports the pseudo-code of the “embed” procedure. This procedure embeds the watermark string w into the input hyperspectral image HI, using the secret key K in the embedding process.
In detail: the pseudo-random number generator (PRNG) G is initialized by using K as a seed. Subsequently, w is subdivided into M substrings (where M is the number of bands of HI). The ith substring, wi, will be embedded into the ith band of HI, denoted as HI(i). Therefore, each band will carry at least N/M bits, where N denotes the length of w.
The algorithm considers each substring wi and performs the following steps until all the bits composing wi are embedded into HI(i): four pixels (denoted as x(0), x(1), x(2), and x(3)) are pseudo-randomly selected by using the PRNG G to compose a set SP. Subsequently, the estimation SPE (composed of four estimated pixels, denoted as x(0)E, x(1)E, x(2)E, and x(3)E) of SP is calculated. This estimation is computed by a linear combination of the pixels of SP, as shown in the “estimate” procedure of Algorithm 2. In order to classify the set SP, the absolute difference D between x(0) and x(0)E is computed. If D is less than 1, then the set SP is classified as a carrier set and, therefore, the “embedWatermarkBit” procedure (Algorithm 1) is invoked in order to embed bw into SP. Thus, the processing of bit bw is complete. The coordinates of the pixels x(0), x(1), x(2), and x(3) will no longer be selectable, and the algorithm proceeds by embedding the next bit of wi.
Algorithm 2 The “estimate” procedure (pseudo-code from [13])
procedure estimate(SP)
  x(0)E = (2 × x(0) + x(1) + x(2)) / 4;
  x(1)E = (2 × x(1) + x(0) + x(3)) / 4;
  x(2)E = (2 × x(2) + x(0) + x(3)) / 4;
  x(3)E = (2 × x(3) + x(1) + x(2)) / 4;
  SPE = {x(0)E, x(1)E, x(2)E, x(3)E};
  return SPE;
end procedure
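The “estimate” procedure translates directly as well; note that each estimated pixel is a (possibly non-integer) linear combination of the pixels of SP.
```python
def estimate(sp):
    # sp = [x0, x1, x2, x3]; returns the estimated set SP_E (Algorithm 2).
    x0, x1, x2, x3 = sp
    return [(2 * x0 + x1 + x2) / 4,
            (2 * x1 + x0 + x3) / 4,
            (2 * x2 + x0 + x3) / 4,
            (2 * x3 + x1 + x2) / 4]
```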
If SP is classified, instead, as a non-carrier set, then the values of the pixels of SP are modified in order to increase the difference D. In this manner, the extraction algorithm is able to correctly recognize that SP is a non-carrier set. As a consequence, the bit bw cannot be embedded into the set SP, and four other pixels (different from the ones previously selected) will be selected to compose a new set SP, in order to try the embedding again.
Algorithm 3 The “embed” procedure (pseudo-code from [13])
procedure embed(HI, w, K)
  Let G be a PRNG;
  N = lengthOf(w);
  Subdivide w into {w1, w2, …, wM};
  for i = 1 to M do
    idx = 1;
    Ni = lengthOf(wi);
    repeat
      bw = wi[idx];
      By using G along with K, select x(0), x(1), x(2), x(3) from HI(i);
      SP = {x(0), x(1), x(2), x(3)};
      SPE = estimate(SP);
      D = |x(0) − x(0)E|;
      if D < 1 then
        SPw = embedWatermarkBit(SP, bw);
        idx++;
      else
        Modify SP, by using SP = embedWatermarkBit(SP, 0) or SP = embedWatermarkBit(SP, 1), in order to increase the value of D;
      endif
      Mark the coordinates of x(0), x(1), x(2), and x(3) as no longer selectable;
    until idx > Ni;
  end for
  Copy all modified and unmodified pixels to HIW;
  return HIW;
end procedure
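Using the two helpers sketched above, the embedding loop of Algorithm 3 can be illustrated in Python as follows. This is a simplified sketch under assumptions that go beyond the pseudo-code: the image is a NumPy array of shape (bands, rows, cols), the PRNG is seeded directly with the key, and a non-carrier set is modified by choosing whichever ±1 pattern pushes D further away from the carrier threshold.
```python
import numpy as np

def embed(hi, w_bits, key):
    # hi: integer array of shape (M, rows, cols); w_bits: sequence of 0/1 bits; key: PRNG seed.
    hi_w = hi.astype(np.int64).copy()
    M = hi_w.shape[0]
    rng = np.random.default_rng(key)                     # PRNG G, seeded with K
    substrings = np.array_split(np.asarray(w_bits), M)   # w -> w_1, ..., w_M
    for i, wi in enumerate(substrings):
        band = hi_w[i]
        coords = [(r, c) for r in range(band.shape[0]) for c in range(band.shape[1])]
        order = rng.permutation(len(coords))             # pseudo-random selection order
        pos, idx = 0, 0
        while idx < len(wi):
            if pos + 4 > len(coords):
                raise RuntimeError("band too small to embed the substring")  # cf. "/" in Table 4
            sel = [coords[j] for j in order[pos:pos + 4]]
            pos += 4                                     # selected pixels are never reused
            sp = [int(band[r, c]) for (r, c) in sel]
            d = abs(sp[0] - estimate(sp)[0])
            if d < 1:                                    # carrier set: embed the bit
                sp_new = embed_watermark_bit(sp, int(wi[idx]))
                idx += 1
            else:                                        # non-carrier set: only increase D
                cand0 = embed_watermark_bit(sp, 0)
                cand1 = embed_watermark_bit(sp, 1)
                sp_new = max((cand0, cand1),
                             key=lambda s: abs(s[0] - estimate(s)[0]))
            for (r, c), v in zip(sel, sp_new):
                band[r, c] = v
    return hi_w
```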

2.1.2. Multiband Lossless Compression of Hyperspectral Images

The predictive-based multiband lossless compression for hyperspectral images (LMBHI) [11] algorithm exploits the inter-band correlation (i.e., the correlation among the neighboring pixels of contiguous bands) as well as the intra-band correlations (i.e., the correlations among the neighboring pixels of the same band), by using a predictive coding model.
Each pixel of the input hyperspectral image HI is predicted by using one of the following predictive structures: the 2-D linearized median predictor (2-D LMP) [15] or the 3-D multiband linear predictor (3D-MBLP).
The 2-D LMP considers only the intra-band correlation and is used only for the pixels of the first band, for which there are no previous reference bands.
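For illustration only, the following is a minimal sketch of a median-style intra-band predictor in the spirit of the LOCO-I/JPEG-LS MED predictor, predicting a pixel from its causal neighbors; it is a stand-in for intra-band prediction and not necessarily the exact 2-D LMP defined in [15].
```python
def med_predict(left, above, upper_left):
    # Median-style intra-band prediction from the causal neighborhood.
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left
```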
The 3D-MBLP, instead, exploits both the intra-band and the inter-band correlations and is used to predict the pixels of all the bands except the first one.
Once the prediction step is performed, the prediction error e is modelled and coded. In particular, e is obtained by subtracting the prediction of the current pixel from its actual value.
The 3D-MBLP predictor uses a three-dimensional prediction context defined by two parameters, B and N, where B indicates the number of previous reference bands and N indicates the number of pixels, in the current band and in the previous B reference bands, that are included in the prediction context.
In order to permit an efficient and relative indexing of the pixels that form the prediction context of the 3D-MBLP, an enumeration E is defined. We denote with $I_{i,j}$ the ith pixel (according to the enumeration E) of the jth band. In addition, we suppose that the current band is the kth band. In this manner, $I_{0,j}$ refers to the pixel of the jth band that has the same spatial coordinates as the current pixel (denoted as $I_{0,k}$).
The 3D-MBLP predictor is based on the least squares optimization technique, and the prediction is computed as in Equation (1):
$$\hat{I}_{0,k} = \sum_{i=1}^{B} \alpha_i \cdot I_{0,k-i} \qquad (1)$$
It is important to point out that the coefficients $\alpha_0 = [\alpha_1, \ldots, \alpha_B]^T$ are chosen to minimize the energy of the prediction error, as in Equation (2):
$$P = \sum_{i=1}^{N} \left( I_{i,k} - \hat{I}_{i,k} \right)^2 \qquad (2)$$
Note that Equation (2) can be rewritten in matrix notation as
$$P = (C\alpha - X)^T (C\alpha - X), \quad \text{where} \quad C = \begin{bmatrix} I_{1,k-1} & \cdots & I_{1,k-B} \\ \vdots & \ddots & \vdots \\ I_{N,k-1} & \cdots & I_{N,k-B} \end{bmatrix} \quad \text{and} \quad X = [I_{1,k}, \ldots, I_{N,k}]^T.$$
Subsequently, by computing the derivative of P with respect to $\alpha$ and setting it to zero, the optimal coefficients can be obtained by solving Equation (3):
$$(C^T C)\,\alpha_0 = C^T X \qquad (3)$$
Once the coefficients $\alpha_0$, which solve the linear system of Equation (3), are computed, the prediction $\hat{I}_{0,k}$ of the current pixel $I_{0,k}$ can be calculated.
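Assuming the context pixels have already been gathered into the matrix C and the vector X of Equation (3), and that prev_pixels holds the pixels co-located with the current pixel in the B previous bands, the optimal coefficients and the prediction of Equations (1)–(3) can be computed with NumPy as in the following sketch (illustrative only; the enumeration E and the construction of the context are omitted).
```python
import numpy as np

def mblp_predict(C, X, prev_pixels):
    # C: (N, B) matrix whose entry (i, j) is I_{i, k-j}; X: (N,) vector of I_{i, k}.
    # prev_pixels: (B,) vector [I_{0, k-1}, ..., I_{0, k-B}].
    # Least squares solution of C @ alpha ~ X, equivalent to the normal
    # equations (C^T C) alpha = C^T X of Equation (3).
    alpha, *_ = np.linalg.lstsq(C, X, rcond=None)
    # Prediction of the current pixel, as in Equation (1).
    return float(prev_pixels @ alpha)
```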

3. Experimental Results

To validate the proposed framework, we experimentally tested the proposed algorithms on two datasets composed of airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral images. These data are acquired by the AVIRIS hyperspectral sensor (NASA Jet Propulsion Laboratory (JPL) [16]), which measures the spectrum at wavelengths ranging from 380 to 2500 nm (subdivided into 224 spectral bands).

3.1. Description of the Test Datasets

Dataset 1.
The first dataset we used in the testing phase is composed of five AVIRIS hyperspectral images provided by the JPL (Jet Propulsion Laboratory) of NASA. Each hyperspectral image is subdivided into sub-images denoted as scenes. In detail, a scene is composed of 614 columns, 512 lines (except for the last scene, which might have a lower number of lines), and 224 bands. Each pixel is stored as a 16-bit signed integer.
Table 1 reports, for each hyperspectral image (first column), the number of its scenes (second column).
Dataset 2.
The second dataset is referred to as the “CCSDS Dataset” and it is composed of five calibrated and seven uncalibrated AVIRIS hyperspectral images. This dataset is publicly available, and it is provided by the Consultative Committee for Space Data Systems (CCSDS) Multispectral and Hyperspectral Data Compression [17].
Table 2 briefly reports the key information describing Dataset 2, showing the number of scenes (second column) and the number of columns (third column) of the calibrated and uncalibrated images (first column). We remark that a pixel of the calibrated and uncalibrated images is stored by using 16 bits (16-bit signed integer for the calibrated images, 16-bit unsigned integer for the uncalibrated ones), except for the Hawaii and Maine hyperspectral images, whose pixels are stored by using 12 bits (unsigned) [17]. Each of the hyperspectral images in Dataset 2 is composed of 512 lines.

3.2. Simulation Results Achieved by the Reversible Invisible Watermarking Scheme

This section outlines the experimental results achieved by using our reversible invisible watermarking scheme on both Dataset 1 and Dataset 2. Analogously to [18], we considered the peak signal-to-noise ratio (PSNR) [19] to evaluate the distortion between the original image HI and the watermarked one (i.e., HIW). The PSNR metric is computed as in Equation (4):
$$PSNR(HI, HI_W) = \frac{1}{M} \sum_{i=1}^{M} 10 \log_{10} \left( \frac{(2^{15}-1)^2}{MSE(HI(i), HI(i)_W)} \right) \qquad (4)$$
The mean squared error (MSE) is instead defined in Equation (5), in which the notation HI(i)(x, y) refers to the pixel at coordinates (x, y) of the ith band:
$$MSE(HI(i), HI(i)_W) = \frac{1}{W \cdot H} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( HI(i)(x, y) - HI(i)_W(x, y) \right)^2 \qquad (5)$$
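A straightforward NumPy implementation of Equations (4) and (5) might be the following sketch, assuming the image is held as an array of shape (M, H, W) with a 16-bit dynamic range (for the 12-bit images of Dataset 2 the peak value would change accordingly).
```python
import numpy as np

def psnr_cube(hi, hi_w, peak=2**15 - 1):
    # hi, hi_w: arrays of shape (M, H, W); returns the band-averaged PSNR in dB.
    hi = hi.astype(np.float64)
    hi_w = hi_w.astype(np.float64)
    mse = ((hi - hi_w) ** 2).mean(axis=(1, 2))               # Equation (5), one value per band
    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))  # Equation (4)
```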
In all our experiments, we have considered two watermarks:
  • w1—composed of 1120 pseudo-randomly generated bits;
  • w2—composed of 2240 pseudo-randomly generated bits.

3.2.1. Simulation Results on Dataset 1

In Table 3 we report, in terms of the PSNR metric, the simulation results achieved by embedding the watermark w1 into Dataset 1. The PSNR value of the watermarked image with respect to the original image is reported for each scene (first column) of the Cuprite (second column), Jasper Ridge (third column), Low Altitude (fourth column), Lunar Lake (fifth column), and Moffett Field (sixth column) hyperspectral images. In Table 4, we report the simulation results achieved by embedding the watermark w2.
Figure 2 summarizes the average PSNR results achieved by embedding the watermark w1 (red columns) and the watermark w2 (blue columns) on Dataset 1.

3.2.2. Simulation Results on Dataset 2

In Table 5 and Table 6, the simulation results achieved by embedding the watermark w1 and the watermark w2, respectively, are reported in terms of the PSNR metric. The PSNR value is reported for each scene (first column) of the Yellowstone calibrated (second column) and uncalibrated (third column), Hawaii (fourth column), and Maine (fifth column) hyperspectral images.
Figure 3 reports the average PSNR results achieved by embedding the watermark w1 (red columns) and the watermark w2 (blue columns) on Dataset 2.

3.3. Simulation Results Achieved by the LMBHI Algorithm

In this section, we focus on the simulation results achieved by the LMBHI algorithm on Dataset 1 and Dataset 2, which are comparable with those of other state-of-the-art predictive-based approaches [11]. Moreover, it is important to note that the parameters of the LMBHI algorithm can be configured.
Table 7 reports the simulation results, in terms of bits per pixel (BPP), achieved by the LMBHI compression algorithm for each hyperspectral image of Dataset 1 (rows from the second to the sixth), considering the following parameter configurations: N = 8 and B = 1 (second column), N = 8 and B = 2 (third column), and N = 16 and B = 2 (fourth column). In Table 8, we report the experimental results achieved on Dataset 2, organized in the same manner as Table 7.

4. Conclusions and Future Works

Hyperspectral data are involved in real-life and sensitive applications (e.g., geoscience or military applications). In addition, the acquisition of such data is onerous and expensive. Considering these aspects, it is important to protect such data by allowing a receiver to verify that they have not been altered (since these data are often exchanged among several entities).
In this paper, we have focused on the protection and the efficient transmission of hyperspectral data by revisiting a framework for the secure and efficient transmission of hyperspectral images. This framework combines a reversible invisible watermarking scheme and the LMBHI algorithm.
In future work, we will consider the design of a hybrid approach that provides protection and compression at the same time, as well as the extension of the proposed framework to other typologies of 3-D data (e.g., 3-D medical images [15]).

Acknowledgments

The authors would like to thank their student Mario Saponara, who tested a preliminary version of the reversible invisible watermarking scheme.

Author Contributions

Raffaele Pizzolante and Bruno Carpentieri worked together and contributed equally.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rizzo, F.; Carpentieri, B.; Motta, G.; Storer, J.A. Low-complexity lossless compression of hyperspectral imagery via linear prediction. IEEE Signal Process. Lett. 2005, 12, 138–141. [Google Scholar] [CrossRef]
  2. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  3. Smetek, T.E.; Bauer, K.W., Jr. A comparison of multivariate outlier detection methods for finding hyperspectral anomalies. Effic. Employ. Non-React. Sens. 2008, 3. [Google Scholar] [CrossRef]
  4. Eismann, M.T. Strategies for hyperspectral target detection in complex background environments. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006. [Google Scholar]
  5. Silva, C.S.; Pimentel, M.F.; Honorato, R.S.; Pasquini, C.; Prats-Montalbán, J.M.; Ferrer, A. Near infrared hyperspectral imaging for forensic analysis of document forgery. Analyst 2014, 139, 5176–5184. [Google Scholar] [CrossRef] [PubMed]
  6. Albano, P.; Bruno, A.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F.; Pizzolante, R.; Yim, K.; You, I. Secure and distributed video surveillance via portable devices. J. Ambient Intell. Humaniz. Comput. 2014, 5, 205–213. [Google Scholar] [CrossRef]
  7. Albano, P.; Bruno, A.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F.; Pizzolante, R.; Yim, K.; You, I. A Secure Distributed Video Surveillance System Based on Portable Devices. CD-ARES 2012, 7465, 403–415. [Google Scholar]
  8. Pizzolante, R.; Carpentieri, B.; Castiglione, A.; De Maio, G. The AVQ Algorithm: Watermarking and Compression Performances. In Proceedings of the Third International Conference on Intelligent Networking and Collaborative Systems (INCoS), Fukuoka, Japan, 30 November–2 December 2011; pp. 698–702. [Google Scholar]
  9. Castiglione, A.; De Santis, A.; Pizzolante, R.; Castiglione, A.; Loia, V.; Palmieri, F. On the Protection of fMRI Images in Multi-Domain Environments. In Proceedings of the IEEE 29th International Conference on Advanced Information Networking and Applications (AINA 2015), Gwangju, Korea, 24–27 March 2015; pp. 476–481. [Google Scholar]
  10. Pizzolante, R.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F. Text Compression and Encryption through Smart Devices for Mobile Communication. In Proceedings of the Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), Taichung, Taiwan, 3–5 July 2013; pp. 672–677. [Google Scholar]
  11. Pizzolante, R.; Carpentieri, B. Multiband and Lossless Compression of Hyperspectral Images. Algorithms 2016, 9, 16. [Google Scholar] [CrossRef]
  12. Bertoni, G.; Daemen, J.; Peeters, M.; Assche, G.V. The Making of KECCAK. Cryptologia 2014, 38, 26–60. [Google Scholar] [CrossRef]
  13. Pizzolante, R.; Carpentieri, B. A Lossless Invisible Watermarking Scheme for Hyperspectral Images. In Proceedings of the 8th European computing Conference (ECC’15), in Recent Advances on Systems, Signals, Control, Communications and Computers, Dubai, United Arab Emirates, 22–24 February 2015; WSEAS Press: Athens, Greece, 2015; pp. 119–123. [Google Scholar]
  14. Coatrieux, G.; le Guillou, C.; Cauvin, J.-M.; Roux, C. Reversible watermarking for knowledge digest embedding and reliability control in medical images. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 158–165. [Google Scholar] [CrossRef] [PubMed]
  15. Pizzolante, R.; Carpentieri, B. Lossless, low-complexity, compression of three-dimensional volumetric medical images via linear prediction. In Proceedings of the 18th International Conference on Digital Signal Processing (DSP), Fira, Greece, 1–3 July 2013; pp. 1–6. [Google Scholar]
  16. NASA JPL. Available online: https://www.jpl.nasa.gov/ (accessed on 28 November 2017).
  17. Kiely, A.B.; Klimesh, M. Exploiting calibration-induced artifacts in lossless compression of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2672–2678. [Google Scholar] [CrossRef]
  18. Christophe, E.; Mailhes, C.; Duhamel, P. Hyperspectral Image Compression: Adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding. IEEE Trans. Image Process. 2008, 17, 2334–2346. [Google Scholar] [CrossRef] [PubMed]
  19. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on IEEE Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
Figure 1. The architecture of the proposed framework.
Figure 2. Histogram of the average PSNR achieved for the watermark w1 and the watermark w2 (Dataset 1).
Figure 3. Histogram of the average PSNR achieved for the watermark w1 and the watermark w2 (Dataset 2).
Table 1. Description of Dataset 1.
Hyperspectral Image    Number of Scenes
Cuprite                5
Jasper Ridge           6
Low Altitude           8
Lunar Lake             3
Moffett Field          4
Table 2. Description of Dataset 2.
Hyperspectral Image           Number of Scenes    Columns
Yellowstone (calibrated)      5                   677
Yellowstone (uncalibrated)    5                   680
Hawaii (uncalibrated)         1                   614
Maine (uncalibrated)          1                   680
Table 3. Achieved results in terms of PSNR, by embedding w1 (Dataset 1).
Scenes      Cuprite    Jasper Ridge    Low Altitude    Lunar Lake    Moffett Field
Scene 01    124.45     123.21          123.40          125.21        123.34
Scene 02    123.03     123.28          122.89          125.13        125.32
Scene 03    124.36     122.45          124.11          124.27        127.09
Scene 04    124.58     122.41          123.75          -             123.28
Scene 05    124.83     122.81          123.72          -             -
Scene 06    -          122.60          123.96          -             -
Scene 07    -          -               123.67          -             -
Scene 08    -          -               123.09          -             -
Average     124.25     122.79          123.57          124.87        124.76
Table 4. Achieved results in terms of PSNR, by embedding w2 (Dataset 1). The symbol “/” indicates that the algorithm failed the embedding process (due to the limited dimensions of the hyperspectral image).
Scenes      Cuprite    Jasper Ridge    Low Altitude    Lunar Lake    Moffett Field
Scene 01    121.18     119.88          119.96          121.85        120.12
Scene 02    119.91     120.37          119.72          121.77        122.18
Scene 03    121.23     119.06          120.60          121.26        123.93
Scene 04    121.44     119.09          120.63          -             120.24
Scene 05    121.76     119.54          120.50          -             -
Scene 06    -          /               120.75          -             -
Scene 07    -          -               120.14          -             -
Scene 08    -          -               119.77          -             -
Average     121.10     119.59          120.26          121.63        121.62
Table 5. Achieved results in terms of PSNR, by embedding w1 (Dataset 2).
Scenes      Yellowstone (Calibrated)    Yellowstone (Uncalibrated)    Hawaii (Uncalibrated)    Maine (Uncalibrated)
Scene 00    125.75                      119.42                        -                        -
Scene 03    128.84                      122.77                        130.45                   -
Scene 10    132.42                      126.63                        -                        131.92
Scene 11    127.19                      120.36                        -                        -
Scene 18    125.17                      118.55                        -                        -
Average     127.87                      121.55                        130.45                   131.92
Table 6. Achieved results in terms of PSNR, by embedding w2 (Dataset 2).
Scenes      Yellowstone (Calibrated)    Yellowstone (Uncalibrated)    Hawaii (Uncalibrated)    Maine (Uncalibrated)
Scene 00    122.55                      116.18                        -                        -
Scene 03    125.59                      119.45                        127.28                   -
Scene 10    129.44                      123.46                        -                        128.55
Scene 11    123.63                      117.30                        -                        -
Scene 18    122.11                      115.15                        -                        -
Average     124.66                      118.31                        127.28                   128.55
Table 7. Achieved results in terms of BPP (Dataset 1).
Hyperspectral Image    Average BPP (N = 8, B = 1)    Average BPP (N = 8, B = 2)    Average BPP (N = 16, B = 2)
Cuprite                5.0165                        4.9886                        4.8958
Jasper Ridge           5.0722                        5.0271                        4.9535
Low Altitude           5.3442                        5.2995                        5.2166
Lunar Lake             5.0207                        4.9796                        4.8850
Moffett Field          5.1012                        5.0313                        4.9594
Table 8. Achieved results in terms of BPP (Dataset 2).
Hyperspectral Image           Average BPP (N = 8, B = 1)    Average BPP (N = 8, B = 2)    Average BPP (N = 16, B = 2)
Yellowstone (calibrated)      3.9511                        3.8280                        3.9511
Yellowstone (uncalibrated)    6.5262                        6.2328                        6.5262
Hawaii (uncalibrated)         2.9533                        2.8748                        2.9533
Maine (uncalibrated)          3.0746                        2.9528                        3.0746
