Abstract
Airborne and spaceborne hyperspectral sensors collect information derived from the electromagnetic spectrum of an observed area. Hyperspectral data are used in several studies and are an important aid in different real-life applications (e.g., mining and geology, ecology, surveillance, etc.). A hyperspectral image has a three-dimensional structure (a sort of datacube): it can be considered as a sequence of narrow and contiguous spectral channels (bands). The objective of this paper is to present a framework that permits the efficient storage/transmission of an input hyperspectral image, as well as its protection. The proposed framework relies on a reversible invisible watermarking scheme and an efficient lossless compression algorithm. The reversible watermarking scheme is used in conjunction with digital signature techniques in order to permit the verification of the integrity of a hyperspectral image by the receiver.
1. Introduction
Hyperspectral imaging sensors obtain information from the electromagnetic spectrum of an observed area. Spectral imaging techniques cover a significant portion of the electromagnetic spectrum, in a range of frequencies that spans from the ultraviolet to the infrared. A hyperspectral sensor subdivides the spectrum into different spectral channels (referred to as “bands”). For these reasons, a hyperspectral image can be considered as a sort of “datacube” [], since such data can be structured in a three-dimensional manner.
Hyperspectral images are widely used in a range of real-life and research applications (agriculture, mineralogy, physics, surveillance, etc.). They are often shared among different entities, sometimes with different purposes (for example, among different research centres), in order to carry out joint tasks.
Several scenarios can be drawn in which these data are shared/stored/transmitted for sensitive purposes (e.g., military applications [,], counter-terrorism [], forensic applications [], etc.). Thus, an important concern is to ensure data protection against tampering, which can occur even in very sensitive cases (e.g., target-detection applications, etc.). Since these images need to be transmitted to a base as soon as they are acquired, and since they need to be efficiently stored, we propose a possible framework for their efficient transmission and protection.
The proposed framework relies on two main components: a lossless compression algorithm and a reversible watermarking scheme. Generally, digital watermarking techniques are used for ensuring security, content authentication, and copyright protection (e.g., [,,]). By using watermarking techniques, the input data (i.e., the hyperspectral images) become a sort of “carrier” of hidden information, which can convey important data (e.g., [,]). We highlight the key aspects of the proposed framework in Section 2, and we outline further details of a possible effective implementation in Section 2.1. Regarding the implementation, we considered a reversible invisible watermarking scheme (Section 2.1.1), designed specifically for hyperspectral images, and the multiband lossless compression of hyperspectral images (LMBHI) algorithm [] (reviewed in Section 2.1.2), which is a predictive-based lossless compression algorithm that achieves competitive results.
2. An Efficient and Secure Transmission Framework
Figure 1 shows the architecture of the proposed transmission framework. First, the digest of the input hyperspectral image HI is computed by invoking a cryptographic hash function h(.) (e.g., SHA-3 Keccak [], etc.). Subsequently, the obtained digest is used as the watermark string and is embedded into HI by using a reversible invisible watermarking scheme. As shown in Figure 1, the secret key K is used both for the computation of the digest and for the embedding of the obtained digest into HI.
Figure 1.
The architecture of the proposed framework.
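As an illustration, the keyed digest computation might look like the following minimal Python sketch. The function name keyed_digest is illustrative, and the sketch assumes that the secret key K is simply prepended to the serialized datacube before hashing; the concrete keyed construction used with h(.) is not specified in this paper.

```python
import hashlib
import numpy as np

def keyed_digest(hi: np.ndarray, key: bytes) -> bytes:
    """Compute a keyed SHA-3 digest of a hyperspectral datacube.

    Assumption: the secret key K is prepended to the raw pixel bytes;
    other keyed constructions (e.g., HMAC/KMAC) could be used instead.
    """
    h = hashlib.sha3_256()
    h.update(key)                         # secret key K
    h.update(repr(hi.shape).encode())     # bind the datacube dimensions
    h.update(hi.tobytes())                # raw pixel data, band by band
    return h.digest()

# Illustrative usage with a small synthetic 16-bit datacube (lines, columns, bands).
hi = np.zeros((32, 32, 8), dtype=np.int16)
w = keyed_digest(hi, key=b"secret-key-K")   # digest used as the watermark string w
```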
After that, the watermarked hyperspectral image HIW (i.e., the output of the reversible invisible watermarking scheme) is compressed by using an efficient lossless compression algorithm. The output of this compression stage is denoted as HICW and can be efficiently transmitted.
In the following, we denote as HI′ the hyperspectral image that is reconstructed from HIW (where HIW is obtained as the output of the decompression of HICW), and as w′ the watermark string that is extracted from HIW.
The receiver can verify the integrity of HI in the following manner: if there are no alterations of HIW, then the digest recomputed from HI′ will be equal to the extracted watermark w′. Consequently, w = w′ is satisfied and HI′ is exactly equal to HI.
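A receiver-side verification step, consistent with the description above, can be sketched as follows. The three callables are hypothetical placeholders for the lossless decompressor and the reversible watermark extractor/restorer of Section 2.1 (their real interfaces are not specified here), and keyed_digest is the illustrative sketch shown after Figure 1.

```python
from typing import Callable, Optional, Tuple
import numpy as np

def verify_integrity(
    hicw: bytes,
    key: bytes,
    decompress: Callable[[bytes], np.ndarray],
    extract_watermark: Callable[[np.ndarray, bytes], bytes],
    restore_image: Callable[[np.ndarray, bytes], np.ndarray],
) -> Tuple[bool, Optional[np.ndarray]]:
    """Receiver-side integrity check: returns (is_authentic, restored HI')."""
    hiw = decompress(hicw)                  # lossless stage: HIW is recovered exactly
    w_prime = extract_watermark(hiw, key)   # extracted watermark string w'
    hi_prime = restore_image(hiw, key)      # reversibility: restored image HI'
    w = keyed_digest(hi_prime, key)         # recomputed keyed digest of HI'
    if w == w_prime:                        # no alteration: w = w' and HI' = HI
        return True, hi_prime
    return False, None                      # HICW (or HIW) was tampered with
```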
2.1. An Effective Implementation of the Proposed Framework
For an effective implementation of the proposed framework, we considered the reversible watermarking scheme described in Section 2.1.1 and the multiband lossless compression of hyperspectral images (LMBHI) algorithm, which is addressed in Section 2.1.2. However, other approaches can also be used for an effective implementation of our proposal.
2.1.1. Reversible Invisible Watermarking Scheme for Hyperspectral Images
In [], we proposed a preliminary version of a reversible invisible watermarking scheme for hyperspectral images. This scheme relies on the approaches outlined in [,] and belongs to the category of additive schemes. In an additive scheme, the watermark signal w (i.e., a watermark string) is directly added to the input signal, namely the pixels of the input hyperspectral image HI. In this way, the produced output (i.e., the watermarked hyperspectral image HIW) contains both signals: the one that represents HI and the one that represents the watermark w. A secret key, K, is used to perform the embedding phase.
It is important to note that this scheme is reversible. Therefore, it is possible to restore the original HI and to extract the watermark w. In addition, this scheme is fragile: a simple modification of HIW might cause the disappearance of the embedded watermark, w.
The basic objective of our scheme is to spread the bits of w among all the bands of HI. More precisely, each bit of w—referred to as bw—will be embedded into a set of four pixels, denoted as SP. These pixels are pseudo-randomly selected by means of a pseudo-random number generator (PRNG) based on the secret key, K.
Since a set SP might not be usable to carry bw (because the extraction algorithm might be unable to extract the hidden bit from it), it is necessary to classify the sets into two categories: “carrier sets” and “non-carrier sets”. A carrier set is a set in which a bit, bw, can be embedded, while a non-carrier set is a set in which a bit, bw, cannot be embedded.
When the algorithm identifies a carrier set SP, a bit bw can be embedded by means of the “embedWatermarkBit” procedure, reported in Algorithm 1, which returns as output the set SP with bw embedded into it. To classify a set SP, the relationship between SP and its estimation is exploited. This estimation is computed by means of a linear combination of the pixels of SP, as explained in the “estimate” procedure (see Algorithm 2). By using the estimation, the extraction algorithm can classify a set SP in two steps. Furthermore, the extraction algorithm can restore the original pixel values of a carrier set from the corresponding watermarked carrier set. In this manner, the reversibility property is obtained.
Algorithm 1. The “embedWatermarkBit” procedure (pseudo-code from []).
Algorithm 3 reports the pseudo-code of the “embed” procedure. This procedure embeds the watermark string w into the input hyperspectral image HI, by using the secret key K in the embedding process.
In detail, the pseudo-random number generator (PRNG) G is initialized by using K as a seed. Subsequently, w is subdivided into M substrings (where M is the number of bands of HI). The ith substring, wi, will be embedded into the ith band of HI, denoted as HI(i). Therefore, each band will carry at least ⌊N/M⌋ bits, where N denotes the length of w.
The algorithm considers each substring wi and performs the following steps until all the bits composing wi are embedded into HI(i): four pixels are pseudo-randomly selected by using the PRNG G to compose a set SP. Subsequently, the estimation of SP (composed of four estimated pixels) is calculated. This estimation is computed through a linear combination of the pixels of SP, as shown in the “estimate” procedure of Algorithm 2. In order to classify the set SP, the difference D, in absolute value, between SP and its estimation is computed. If D is less than 1, then the set is classified as a carrier set and, therefore, the “embedWatermarkBit” procedure (Algorithm 1) is invoked in order to embed bw into SP. The processing of the bit bw is thus complete: the coordinates of the four selected pixels are no longer selectable, and the algorithm proceeds by embedding the next bit of wi.
Algorithm 2. The “estimate” procedure (pseudo-code from []).
If SP is instead classified as a non-carrier set, the values of its pixels are modified in order to increase the difference D. In this manner, the extraction algorithm is able to correctly recognize SP as a non-carrier set. As a consequence, the bit bw cannot be embedded into SP, and four other pixels (different from the ones previously selected) will be selected to compose a new set SP, on which the embedding is attempted again.
Algorithm 3. The “embed” procedure (pseudo-code from []).
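The following Python sketch mirrors only the overall structure of the “embed” procedure described above (per-band substrings, PRNG-driven selection of four-pixel sets, classification via an estimation, embedding of one bit per carrier set). The estimate and embed_watermark_bit bodies are simple stand-ins and do not reproduce Algorithms 1 and 2, whose actual linear combination and embedding rule are given in [].

```python
import random
import numpy as np

def estimate(sp: np.ndarray) -> np.ndarray:
    """Stand-in 'estimate': each pixel is estimated as the mean of the other
    three (a simple linear combination); the real coefficients differ."""
    return (np.sum(sp) - sp) / 3.0

def embed_watermark_bit(sp: np.ndarray, bit: int) -> np.ndarray:
    """Stand-in 'embedWatermarkBit': nudge the last pixel so that the parity
    of the set encodes the bit. Illustrative only, not the real rule."""
    sp = sp.copy()
    if int(np.sum(sp)) % 2 != bit:
        sp[-1] += 1
    return sp

def embed(hi: np.ndarray, w_bits: list, key: bytes) -> np.ndarray:
    """Structural sketch of 'embed': hi has shape (lines, columns, bands)."""
    lines, cols, bands = hi.shape
    g = random.Random(key)                           # PRNG G seeded with K
    hiw = hi.astype(np.int64)
    chunk = -(-len(w_bits) // bands)                 # ceil(N / M) bits per band
    for b in range(bands):                           # substring w_i -> band HI(i)
        used = set()
        for bit in w_bits[b * chunk:(b + 1) * chunk]:
            while True:
                # Pseudo-randomly select four distinct, not-yet-used pixels.
                coords = []
                while len(coords) < 4:
                    c = (g.randrange(lines), g.randrange(cols))
                    if c not in used and c not in coords:
                        coords.append(c)
                used.update(coords)
                sp = np.array([hiw[y, x, b] for (y, x) in coords])
                d = float(np.mean(np.abs(sp - estimate(sp))))   # difference D
                if d < 1:                                       # carrier set
                    for (y, x), v in zip(coords, embed_watermark_bit(sp, bit)):
                        hiw[y, x, b] = v
                    break
                # Non-carrier set: in the real scheme the pixel values are
                # modified so that D grows; here we simply pick another set.
    return hiw
```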
2.1.2. Multiband Lossless Compression of Hyperspectral Images
The predictive-based multiband lossless compression for hyperspectral images (LMBHI) [] algorithm exploits the inter-band correlation (i.e., the correlation among the neighboring pixels of contiguous bands) as well as the intra-band correlations (i.e., the correlations among the neighboring pixels of the same band), by using a predictive coding model.
Each pixel of the input hyperspectral image HI is predicted by using one of two predictive structures: the 2-D linearized median predictor (2-D LMP) [] or the 3-D multiband linear predictor (3D-MBLP).
2-D LMP considers only the intra-band correlation and is used only for the pixels of the first band, for which there are no previous reference bands.
3D-MBLP, instead, exploits both the intra-band and the inter-band correlations and is used to predict the pixels of all the bands except the first one.
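The exact formulation of 2-D LMP is given in []; purely as an illustration of intra-band prediction, the sketch below uses the classic 2-D median (MED) predictor known from JPEG-LS, which predicts a pixel from its left, upper, and upper-left neighbours. It is a related predictor, not the 2-D LMP itself.

```python
import numpy as np

def med_predict(band: np.ndarray, y: int, x: int) -> int:
    """Classic 2-D median (MED) predictor: an intra-band example only;
    the 2-D LMP used by LMBHI is a different (linearized) variant []."""
    a = int(band[y, x - 1]) if x > 0 else 0                    # left neighbour
    b = int(band[y - 1, x]) if y > 0 else 0                    # upper neighbour
    c = int(band[y - 1, x - 1]) if (x > 0 and y > 0) else 0    # upper-left neighbour
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```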
Once the prediction step is performed, the prediction error, e, is modelled and coded. In particular, e is obtained by subtracting the prediction of the current pixel from its actual value.
The 3D-MBLP predictor uses a three-dimensional prediction context, which is defined by two parameters, B and N, where B indicates the number of previous reference bands and N indicates the number of pixels included in the prediction context, taken from the current band and from the previous B reference bands.
In order to permit an efficient and relative indexing of the pixels that form the prediction context of the 3D-MBLP, an enumeration, E, is defined. We denote with x_i^(j) the ith pixel (according to the enumeration E) of the jth band, and we suppose that the current band is the zth band. In this manner, x_0^(j) refers to the pixel that has the same spatial coordinates as the current pixel (denoted as x_0^(z)) in the jth band.
The 3D-MBLP predictor is based on the least-squares optimization technique, and the prediction is computed as in Equation (1).
It is important to point out that the coefficients are chosen to minimize the energy of the prediction error, as in Equation (2).
It can be observed that Equation (2) can be rewritten in matrix notation. Subsequently, by computing the derivative of the resulting expression with respect to the coefficients and by setting it to zero, the optimal coefficients can be obtained as the solution of a linear system (Equation (3)).
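Under a standard least-squares formulation consistent with the description above (the exact notation of Equations (1)–(3) in [] may differ), the three equations can be written as follows, where x_i^(j) denotes the ith context pixel of the jth band, z is the index of the current band, and α_1, …, α_B are the prediction coefficients.

```latex
% Sketch of Equations (1)-(3), assuming the usual least-squares setup.
% (1) Prediction of the current pixel from its co-located pixels in the
%     B previous reference bands:
\hat{x}_0^{(z)} = \sum_{k=1}^{B} \alpha_k \, x_0^{(z-k)} \tag{1}

% (2) The coefficients minimize the energy of the prediction error over
%     the N pixels of the prediction context:
\boldsymbol{\alpha} = \arg\min_{\alpha_1,\dots,\alpha_B}
\sum_{i=1}^{N} \Bigl( x_i^{(z)} - \sum_{k=1}^{B} \alpha_k \, x_i^{(z-k)} \Bigr)^{2} \tag{2}

% (3) In matrix notation, with X = (x_1^{(z)}, ..., x_N^{(z)})^T and
%     C_{i,k} = x_i^{(z-k)}, setting the derivative to zero yields the
%     normal equations solved by the optimal coefficients:
C^{T} C \, \boldsymbol{\alpha} = C^{T} X \tag{3}
```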
Once the coefficients that solve the linear system of Equation (3) are computed, the prediction of the current pixel can be calculated.
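A minimal NumPy sketch of this per-pixel least-squares prediction is shown below. It assumes the N context pixels of the current band and the corresponding pixels of the B previous bands have already been gathered according to the enumeration E (how they are gathered is defined in []).

```python
import numpy as np

def mblp_predict(context_current: np.ndarray,
                 context_previous: np.ndarray,
                 colocated_previous: np.ndarray) -> float:
    """3D-MBLP-style least-squares prediction of a single pixel (sketch).

    context_current:    shape (N,)   -- the N context pixels x_i^(z) of the current band
    context_previous:   shape (N, B) -- the same N positions in the B previous bands
    colocated_previous: shape (B,)   -- the pixels x_0^(z-k) co-located with the
                                        current pixel in the B previous bands
    """
    # Least-squares solution of the normal equations (C^T C) a = C^T X.
    alpha, *_ = np.linalg.lstsq(context_previous.astype(np.float64),
                                context_current.astype(np.float64), rcond=None)
    # Equation (1): linear combination of the co-located previous-band pixels.
    return float(colocated_previous.astype(np.float64) @ alpha)

# Illustrative usage with random 12-bit data (N = 8 context pixels, B = 2 bands).
rng = np.random.default_rng(0)
N, B = 8, 2
x_hat = mblp_predict(rng.integers(0, 4096, size=N),
                     rng.integers(0, 4096, size=(N, B)),
                     rng.integers(0, 4096, size=B))
x0 = 1234                            # current pixel value x_0^(z)
e = x0 - int(round(x_hat))           # prediction error e, then modelled and coded
```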
3. Experimental Results
To validate the proposed framework, we have experimentally tested the proposed algorithms on two datasets composed of airborne visible/infrared imaging spectrometer (AVIRIS) hyperspectral images. Such data are acquired by the AVIRIS hyperspectral sensor (NASA Jet Propulsion Laboratory (JPL) []), which measures the spectrum at wavelengths ranging from 380 to 2500 nm, subdivided into 224 spectral bands.
3.1. Description of the Test Datasets
Dataset 1.
The first dataset we used in the testing phase is composed of five AVIRIS hyperspectral images provided by the JPL (Jet Propulsion Laboratory) of NASA. Each hyperspectral image is subdivided into sub-images, which are denoted as scenes. In detail, a scene is composed of 614 columns, 512 lines (except for the last scene, which might have a lower number of lines), and 224 bands. Each pixel is stored as a signed integer represented with 16 bits.
In Table 1, we report each hyperspectral image (rows) and the number of its scenes (second column).
Table 1.
Description of Dataset 1.
Dataset 2.
The second dataset is referred to as the “CCSDS Dataset” and it is composed of five calibrated and seven uncalibrated AVIRIS hyperspectral images. This dataset is publicly available, and it is provided by the Consultative Committee for Space Data Systems (CCSDS) Multispectral and Hyperspectral Data Compression [].
Table 2 briefly reports the key information describing Dataset 2, showing the number of scenes (second column) and the number of columns (third column) for the calibrated and the uncalibrated images (first column). We remark that a pixel of the calibrated and uncalibrated images is stored by using 16 bits (as a 16-bit signed integer for the calibrated images and a 16-bit unsigned integer for the uncalibrated ones), except for the Hawaii and Maine hyperspectral images, whose pixels are stored by using 12 bits (unsigned) []. Each of the hyperspectral images in Dataset 2 is composed of 512 lines.
Table 2.
Description of Dataset 2.
3.2. Simulation Results Achieved by the Reversible Invisible Watermarking Scheme
This section outlines the experimental results achieved by our reversible invisible watermarking scheme on both Dataset 1 and Dataset 2. Analogously to [], we have considered the peak signal-to-noise ratio (PSNR) [] to evaluate the distortion between the original image HI and the watermarked one (i.e., HIW). The PSNR metric is computed as in Equation (4).
The mean squared error (MSE), instead, is defined in Equation (5), in which the notation HI(i)(x, y) refers to the pixel at the coordinates (x, y) of the ith band.
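The standard definitions that these equations refer to can be written as follows, assuming M bands of X × Y pixels each and MAX denoting the peak pixel value (e.g., 2^16 − 1 for 16-bit data); the exact constants used in [] are not restated here.

```latex
% Standard PSNR/MSE definitions, consistent with the notation in the text.
\mathrm{PSNR} = 10 \, \log_{10} \frac{\mathrm{MAX}^{2}}{\mathrm{MSE}} \tag{4}

\mathrm{MSE} = \frac{1}{M \, X \, Y}
\sum_{i=1}^{M} \sum_{x=1}^{X} \sum_{y=1}^{Y}
\bigl( HI^{(i)}(x, y) - HI_W^{(i)}(x, y) \bigr)^{2} \tag{5}
```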
In all our experiments, we have considered two watermarks:
- w1—composed of 1120 bits (pseudo-randomly generated);
- w2—composed of 2240 bits (pseudo-randomly generated).
3.2.1. Simulation Results on Dataset 1
In Table 3 we report, in terms of the PSNR metric, the simulation results achieved by embedding the watermark w1 into the images of Dataset 1. The PSNR value of the watermarked image with respect to the original one is reported for each scene (first column) of the Cuprite (second column), Jasper Ridge (third column), Low Altitude (fourth column), Lunar Lake (fifth column), and Moffett Field (sixth column) hyperspectral images. In Table 4, we report the simulation results achieved by embedding the watermark w2.
Table 3.
Achieved results in terms of PSNR, by embedding w1 (Dataset 1).
Table 4.
Achieved results in terms of PSNR, by embedding w2 (Dataset 1). Notice that “/” indicates that the embedding process failed (due to the limited dimensions of the hyperspectral image).
Figure 2 summarizes the average PSNR results achieved by embedding the watermark w1 (columns in red) and the watermark w2 (columns in blue) into the images of Dataset 1.
Figure 2.
Histogram of the average PSNR achieved for the watermark w1 and the watermark w2 (Dataset 1).
3.2.2. Simulation Results on Dataset 2
In Table 5 and Table 6, the achieved simulation results are reported in terms of the PSNR metric, by embedding the watermark w1 and the watermark w2, respectively. The PSNR value is reported for each scene (first column) of the Yellowstone calibrated (second column) and uncalibrated (third column), Hawaii (fourth column), and Maine (sixth column) hyperspectral images.
Table 5.
Achieved results in terms of PSNR, by embedding w1 (Dataset 2).
Table 6.
Achieved results in terms of PSNR, by embedding w2 (Dataset 2).
Figure 3 reports the average PSNR results we have achieved by embedding the watermark w1 (columns in red) and the watermark w2 (columns in blue), by considering Dataset 2.
Figure 3.
Histogram of the average PSNR achieved for the watermark w1 and the watermark w2 (Dataset 2).
3.3. Simulation Results Achieved by the LMBHI Algorithm
In this section, we focus on the simulation results achieved by the LMBHI algorithm on Dataset 1 and Dataset 2, which are comparable with those of other state-of-the-art predictive-based approaches []. Moreover, it is important to note that the parameters of the LMBHI algorithm can be configured.
Table 7 reports the simulation results, in terms of bits per pixel (BPP), achieved by the LMBHI compression algorithm for each hyperspectral image of Dataset 1 (rows from the second to the sixth), by considering the following configurations of the parameters: N = 8 and B = 1 (second column), N = 8 and B = 2 (third column), and N = 16 and B = 2 (fourth column). In Table 8, we report the experimental results achieved on Dataset 2, presented in the same manner as those in Table 7.
Table 7.
Achieved results in terms of BPP (Dataset 1).
Table 8.
Achieved results in terms of BPP (Dataset 2).
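Assuming BPP is measured per sample of the datacube (the usual convention in hyperspectral compression, i.e., the size of the compressed stream in bits divided by lines × columns × bands), it can be computed with the following one-line helper.

```python
def bits_per_pixel(compressed_size_bytes: int, lines: int, columns: int, bands: int) -> float:
    """Bits per pixel (BPP): compressed size in bits over the number of samples."""
    return (8 * compressed_size_bytes) / (lines * columns * bands)

# Example: a 40 MB compressed scene of 512 x 614 x 224 samples -> about 4.76 BPP.
print(bits_per_pixel(40 * 1024 * 1024, 512, 614, 224))
```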
4. Conclusions and Future Work
Hyperspectral data are involved in real-life and sensitive applications (e.g., geoscience or military applications). In addition, the acquisition of such data is onerous and expensive. Considering these aspects, it is important to protect such data by allowing a receiver to verify that they have not been altered (since such data are often exchanged among several entities).
In this paper, we have focused on the protection and the efficient transmission of hyperspectral data by revisiting a framework for the secure and efficient transmission of hyperspectral images. This framework combines a reversible invisible watermarking scheme and the LMBHI algorithm.
In future work, we will consider the possible design of a hybrid approach that provides protection and compression at the same time, as well as the extension of the proposed framework to other types of 3-D data (e.g., 3-D medical images []).
Acknowledgments
The authors would like to thank their student Mario Saponara, who tested a preliminary version of the reversible invisible watermarking scheme.
Author Contributions
Raffaele Pizzolante and Bruno Carpentieri worked together and contributed equally.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Rizzo, F.; Carpentieri, B.; Motta, G.; Storer, J.A. Low-complexity lossless compression of hyperspectral imagery via linear prediction. IEEE Signal Process. Lett. 2005, 12, 138–141. [Google Scholar] [CrossRef]
- Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
- Smetek, T.E.; Bauer, K.W., Jr. A comparison of multivariate outlier detection methods for finding hyperspectral anomalies. Effic. Employ. Non-React. Sens. 2008, 3. [Google Scholar] [CrossRef]
- Eismann, M.T. Strategies for hyperspectral target detection in complex background environments. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 4–11 March 2006. [Google Scholar]
- Silva, C.S.; Pimentel, M.F.; Honorato, R.S.; Pasquini, C.; Prats-Montalbán, J.M.; Ferrer, A. Near infrared hyperspectral imaging for forensic analysis of document forgery. Analyst 2014, 139, 5176–5184. [Google Scholar] [CrossRef] [PubMed]
- Albano, P.; Bruno, A.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F.; Pizzolante, R.; Yim, K.; You, I. Secure and distributed video surveillance via portable devices. J. Ambient Intell. Humaniz. Comput. 2014, 5, 205–213. [Google Scholar] [CrossRef]
- Albano, P.; Bruno, A.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F.; Pizzolante, R.; Yim, K.; You, I. A Secure Distributed Video Surveillance System Based on Portable Devices. CD-ARES 2012, 7465, 403–415. [Google Scholar]
- Pizzolante, R.; Carpentieri, B.; Castiglione, A.; De Maio, G. The AVQ Algorithm: Watermarking and Compression Performances. In Proceedings of the Third International Conference on Intelligent Networking and Collaborative Systems (INCoS), Fukuoka, Japan, 30 November–2 December 2011; pp. 698–702. [Google Scholar]
- Castiglione, A.; De Santis, A.; Pizzolante, R.; Castiglione, A.; Loia, V.; Palmieri, F. On the Protection of fMRI Images in Multi-Domain Environments. In Proceedings of the IEEE 29th International Conference on Advanced Information Networking and Applications (AINA 2015), Gwangju, Korea, 24–27 March 2015; pp. 476–481. [Google Scholar]
- Pizzolante, R.; Carpentieri, B.; Castiglione, A.; Castiglione, A.; Palmieri, F. Text Compression and Encryption through Smart Devices for Mobile Communication. In Proceedings of the Seventh International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), Taichung, Taiwan, 3–5 July 2013; pp. 672–677. [Google Scholar]
- Pizzolante, R.; Carpentieri, B. Multiband and Lossless Compression of Hyperspectral Images. Algorithms 2016, 9, 16. [Google Scholar] [CrossRef]
- Bertoni, G.; Daemen, J.; Peeters, M.; Assche, G.V. The Making of KECCAK. Cryptologia 2014, 38, 26–60. [Google Scholar] [CrossRef]
- Pizzolante, R.; Carpentieri, B. A Lossless Invisible Watermarking Scheme for Hyperspectral Images. In Proceedings of the 8th European Computing Conference (ECC’15), in Recent Advances on Systems, Signals, Control, Communications and Computers, Dubai, United Arab Emirates, 22–24 February 2015; WSEAS Press: Athens, Greece, 2015; pp. 119–123. [Google Scholar]
- Coatrieux, G.; le Guillou, C.; Cauvin, J.-M.; Roux, C. Reversible watermarking for knowledge digest embedding and reliability control in medical images. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 158–165. [Google Scholar] [CrossRef] [PubMed]
- Pizzolante, R.; Carpentieri, B. Lossless, low-complexity, compression of three-dimensional volumetric medical images via linear prediction. In Proceedings of the 18th International Conference on Digital Signal Processing (DSP), Fira, Greece, 1–3 July 2013; pp. 1–6. [Google Scholar]
- NASA JPL. Available online: https://www.jpl.nasa.gov/ (accessed on 28 November 2017).
- Kiely, A.B.; Klimesh, M. Exploiting calibration-induced artifacts in lossless compression of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2672–2678. [Google Scholar] [CrossRef]
- Christophe, E.; Mailhes, C.; Duhamel, P. Hyperspectral Image Compression: Adapting SPIHT and EZW to Anisotropic 3-D Wavelet Coding. IEEE Trans. Image Process. 2008, 17, 2334–2346. [Google Scholar] [CrossRef] [PubMed]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 20th International Conference on IEEE Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).