Entropy 2014, 16(2), 990-1001; doi:10.3390/e16020990

Article
Prediction Method for Image Coding Quality Based on Differential Information Entropy
Xin Tian 1,*, Tao Li 2, Jin-Wen Tian 2 and Song Li 1

1 School of Electronic Information, Wuhan University, Wuhan 430072, China; E-Mail: ls@whu.edu.cn
2 The National Key Laboratory of Science & Technology on Multi-Spectral Information Processing, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China; E-Mails: litao.1254131@yahoo.com.cn (T.L.); jwtian@mail.hust.edu.cn (J.-W.T.)
* Author to whom correspondence should be addressed; E-Mail: xin.tian@whu.edu.cn.

Received: 27 October 2013; in revised form: 19 December 2013 / Accepted: 26 January 2014 / Published: 17 February 2014

Abstract

To meet the requirement of quality-based image coding, an approach for predicting image coding quality based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, with the fitting constraint of minimizing the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than image activity does. It can therefore be concluded that the proposed approach is capable of predicting image coding quality at low compression ratios with small errors, and, thanks to its simplicity, can be widely applied in a variety of real-time space image coding systems.
Keywords:
image compression; quality prediction; differential information entropy

1. Introduction

With the gradual depletion of the Earth’s resources, countries all over the world are becoming increasingly aware of the importance of exploiting space resources. The European Space Agency (ESA) has completed the Near-Infrared Spectrograph, its contribution to the international James Webb Space Telescope (a space observatory set for launch on an Ariane 5 rocket in 2018). NASA’s Voyager 1 spacecraft has ventured into interstellar space; the probe is now about 19 billion kilometers from our Sun. China is expected to launch its fifth lunar probe, Chang’e-5, in 2017 to send a moon rock sample back to Earth. The NO.1 High-Resolution satellite, launched in the first batch, is a component of the major China High-Resolution Earth Observation System (CHEOS) technical project, whose target is to provide accurate services for sectors such as land, environment and agriculture.

In all these projects, imaging is an efficient way to acquire space information. The images are sent back to the ground for observation, but the data channels for space communication are always limited. The goal of image coding is to resolve the contradiction between the large amounts of high-resolution image data and the limited data channels. The wavelet-based coding technique is an important research direction in the field of image coding [1–3]. One of the key steps in wavelet-based coding is rate control. For example, JPEG2000, a well-known and widely used coding algorithm, can control the bit rate accurately and conveniently through a post-coding rate-distortion optimization algorithm. A lot of research has been done on rate control algorithms in JPEG2000 [4–6]. These algorithms consume large amounts of memory and computing time because most of the coding process must be performed in order to estimate the image distortion. Therefore, predicting image quality (image distortion) before coding is very useful for the implementation of image coding algorithms, as it reduces both the memory cost and the computing time. Saha analyzed the relationship between coding quality and the image activity measure (IAM) at a fixed bit rate [7,8]. Li proposed a model to predict JPEG2000 image quality at high compression ratios (CR), which achieved desirable prediction accuracy but required a compression ratio larger than 10:1 [9]. In remote sensing, the compression ratio is usually kept low to avoid large image distortion; however, when the compression ratio is low, it is difficult to predict image coding quality accurately.

In this paper, differential information entropy is used to predict image coding quality at low compression ratios. The rest of the paper is organized as follows: in Section 2, a brief introduction to the background of the topic is given. In Section 3, differential information entropy is introduced; taking the JPEG2000 coding algorithm as an example, the relationship between differential information entropy and the objective assessment indicator (Peak Signal-to-Noise Ratio, PSNR) at fixed compression ratios is established through data fitting. In Section 4, the relationship among differential information entropy, compression ratio and PSNR is studied. Conclusions are drawn in Section 5.

2. Background

It is important to predict image coding quality before coding in order to reduce the memory cost and the computing time. The relationship between image coding performance and image activity at fixed compression ratios was established in previous research [7–9], and is given as follows:

$$\left.\mathrm{PSNR}\right|_{CR} = f(\mathrm{IAM}) = \alpha \ln(\mathrm{IAM}) + \beta \tag{1}$$

in which α and β are empirical parameters obtained by running different coding algorithms at different compression ratios. PSNR is commonly used to measure image coding quality, and is calculated as
$$\mathrm{PSNR} = 20 \lg \frac{255}{\sqrt{\mathrm{MSE}}}, \qquad \mathrm{MSE} = \frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I(i,j) - \tilde{I}(i,j) \right]^2 \tag{2}$$

in which I(i, j) and Ĩ(i, j) represent the original image and the decoded image, respectively. The image activities [8,9] are calculated as
$$\mathrm{IAMD1} = \frac{1}{(M-1) \times N} \sum_{i=0}^{M-2} \sum_{j=0}^{N-1} \left| x(i,j) - x(i+1,j) \right| + \frac{1}{M \times (N-1)} \sum_{i=0}^{M-1} \sum_{j=0}^{N-2} \left| x(i,j) - x(i,j+1) \right|$$
$$\mathrm{IAME1} = \frac{1}{(M-2) \times N} \sum_{i=1}^{M-2} \sum_{j=0}^{N-1} \left| x(i-1,j) - x(i+1,j) \right| + \frac{1}{M \times (N-2)} \sum_{i=0}^{M-1} \sum_{j=1}^{N-2} \left| x(i,j-1) - x(i,j+1) \right| \tag{3}$$

where IAMD1 and IAME1 represent the single-pixel-distance and double-pixel-distance image activities, respectively. A schematic diagram of the image activities is shown in Figure 1.
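As an illustrative sketch (assuming 8-bit grayscale images stored as NumPy arrays; `psnr`, `iamd1` and `iame1` are hypothetical helper names), the PSNR and image activity measures above can be computed as:

```python
import numpy as np

def psnr(original, decoded):
    """PSNR for 8-bit images: 20*lg(255 / sqrt(MSE))."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))

def iamd1(x):
    """Single-pixel-distance image activity: mean absolute difference
    of vertical neighbours plus that of horizontal neighbours."""
    x = x.astype(np.float64)
    return np.abs(x[:-1, :] - x[1:, :]).mean() + np.abs(x[:, :-1] - x[:, 1:]).mean()

def iame1(x):
    """Double-pixel-distance image activity: same, but over
    pixels two rows/columns apart."""
    x = x.astype(np.float64)
    return np.abs(x[:-2, :] - x[2:, :]).mean() + np.abs(x[:, :-2] - x[:, 2:]).mean()
```

Note that each `.mean()` already carries the normalization factors in the definitions, since the vertical difference array has exactly (M−1)×N entries and the horizontal one M×(N−1).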

Li established the relationship between image coding performance and image activity at dynamic compression ratios as follows [9]:

$$\mathrm{PSNR} = \alpha \ln \left[ \theta \cdot \mathrm{IAMD1} + (1-\theta) \cdot \mathrm{IAME1} \right] + \beta \tag{4}$$

in which α and β are functions of the compression ratio.

3. Differential Information Entropy

These studies clearly show that the prediction accuracy of image coding quality depends on the relationship between the image activity and PSNR. We find that for some images this relationship is not close. Three images with similar image activity are shown in Figure 2a–c. The logarithm of the image activity (lnIAMD1) is used to measure the activity of each image; the values are 2.12, 2.10 and 2.00, respectively.

From the lnIAMD1 values, it appears that the images in Figures 2a and 2b should have similar image coding performance. The JPEG2000 algorithm is then used to code the images at a compression ratio of 4, with coding performance measured by PSNR. The PSNR values of the three images are 51.12, 53.79 and 54.17, respectively. These results indicate that it is in fact the images in Figures 2b and 2c that are similar in coding performance, which differs from the prediction made by lnIAMD1. Therefore, the first purpose of this paper is to find an image coding quality descriptor that more accurately reflects image coding performance.

We have noticed that images with superior lossless coding performance also tend to have good lossy coding performance. Since lossless coding is always simpler than lossy coding, a descriptor capable of reflecting lossless coding performance is considered in the first step. A simple and common lossless coding algorithm can be implemented by applying a differential operation to adjacent pixels and then Huffman coding the differential values. The Huffman coding performance is close to the entropy of the differential values. Hence, the differential information entropy, defined as follows, can be used to describe the coding performance of this lossless coding algorithm:

$$y(i,j) = x(i,j) - x(i+1,j), \qquad \mathrm{D\_Entropy} = -\sum_{N} p_N \log_2 p_N \tag{5}$$

in which p_N represents the ratio of the number of pixels whose value is N to the total number of pixels in the differential image y(i, j). The differential information entropy (D_Entropy) can thus be seen as the entropy of the differential image.
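A minimal sketch of this definition (assuming a NumPy array image; `d_entropy` is a hypothetical helper name):

```python
import numpy as np

def d_entropy(x):
    """Differential information entropy: Shannon entropy of the
    vertical-difference image y(i,j) = x(i,j) - x(i+1,j)."""
    x = x.astype(np.int32)                      # avoid uint8 wrap-around
    y = x[:-1, :] - x[1:, :]                    # differential image
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()                   # p_N for each value N present in y
    return -np.sum(p * np.log2(p))
```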

In the second step, the differential information entropy is tested to see whether it can also measure the performance of lossy coding. The JPEG2000 algorithm is taken as an example, with PSNR as the measure of image coding performance. The wavelet base is Db 9/7 and the wavelet transform level is 3. A database of 8-bit high-resolution grey remote sensing images of size 1,024 × 512 is built, containing three kinds of images (urban, plain and farmland) and including both simple and complex scenes. The shared characteristics of these images stem from the imaging mode, namely remote sensing; in other words, the parameters obtained by data fitting in this paper may be most suitable for remote sensing applications, while the proposed quality prediction method can be applied to other kinds of images with suitably refitted parameters. These images are then coded at different compression ratios (varying from 4:1 to 12:1), and the PSNR is calculated for each coded image at each compression ratio. Two tests are conducted: (1) the relationship between D_Entropy and PSNR: after removing some images with similar compression characteristics, 23 images are selected from the database for data fitting, and the sum of squares due to error (SSE) and R-square are used to evaluate the fitting results; (2) the prediction method: the same 23 images are used to find the relationship among PSNR, D_Entropy and compression ratio, which is then used as an indicator to predict image coding quality, and six images randomly selected from the remaining images in the database are used to verify the prediction. In the first test, D_Entropy and lnIAMD1 are calculated for each image. The experimental results are shown in Table 1.

For convenience, D_Entropy is assumed to be linear with PSNR, that is, PSNR = a·D_Entropy + b. Data fitting is then used to find the parameters a and b, with the constraint of minimizing the mean error over all samples. For comparison, the relationship between PSNR and lnIAMD1, PSNR = α·lnIAMD1 + β, is also studied.
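As an illustrative sketch, such a linear fit can be obtained with ordinary least squares (a standard choice; the paper's own fitting minimizes the mean error). The five sample points below are the first five rows of Table 1 at a compression ratio of 4:

```python
import numpy as np

# (D_Entropy, PSNR) pairs: first five images of Table 1 at CR = 4
d_ent = np.array([6.40, 4.92, 6.25, 5.52, 4.27])
psnr_4 = np.array([37.36, 44.10, 37.49, 40.98, 53.53])

# Degree-1 polynomial fit gives slope a and intercept b of PSNR = a*D_Entropy + b
a, b = np.polyfit(d_ent, psnr_4, 1)
```

The fitted slope is negative, as expected: images with higher differential entropy are harder to code and yield lower PSNR at the same compression ratio.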

The assessment parameters include SSE and R-square. The former is the weighted sum of squared errors between the original data and the fitted data:

$$\mathrm{SSE} = \sum_{i=1}^{n} w_i (y_i - \hat{y}_i)^2 \tag{6}$$

where y_i and ŷ_i represent the original data and the fitted data, n is the number of data points, and w_i are the weights, which determine how much each value influences the final parameter estimates. Equation (6) indicates that good fits have small SSE values.

R-square assesses the fitting effect through the variation of the data:

$$R\text{-square} = 1 - \frac{\mathrm{SSE}}{\mathrm{SST}} \tag{7}$$

The total sum of squares (SST) is the sum of squared deviations of the original data y_i from their mean ȳ:

$$\mathrm{SST} = \sum_{i=1}^{n} w_i (y_i - \bar{y})^2 \tag{8}$$

R-square represents the square of the correlation between the real values and the predicted values. The larger the R-square, the better the fitting result. The experimental results are shown in Tables 2 and 3.

From Tables 2 and 3, it can be seen that D_Entropy yields a smaller SSE and a larger R-square than lnIAMD1 at all compression ratios. Hence, it can be concluded that D_Entropy has the better linear relationship with PSNR in the data fitting process.
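The fit-quality metrics defined above can be sketched as follows (uniform weights assumed; `sse` and `r_square` are hypothetical helper names):

```python
import numpy as np

def sse(y, y_hat, w=None):
    """Weighted sum of squares due to error; uniform weights by default."""
    w = np.ones_like(y, dtype=float) if w is None else w
    return np.sum(w * (y - y_hat) ** 2)

def r_square(y, y_hat, w=None):
    """R-square = 1 - SSE/SST, where SST is the weighted sum of
    squared deviations of y from its mean."""
    w = np.ones_like(y, dtype=float) if w is None else w
    sst = np.sum(w * (y - np.mean(y)) ** 2)
    return 1.0 - sse(y, y_hat, w) / sst
```

A perfect fit gives SSE = 0 and R-square = 1, while predicting the mean everywhere gives R-square = 0.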

4. The Relationship among PSNR, D_Entropy and Compression Ratio

The linear relationship between PSNR and D_Entropy at fixed compression ratios was verified in Section 3. In this section, the relationship among PSNR, D_Entropy and compression ratio is discussed. We have noticed that for most images, coding the high-frequency sub-bands consumes a large share of the bit rate. Therefore, at a low compression ratio, the loss in an image occurs mainly in the high-frequency sub-bands. Since the losses in the high-frequency sub-bands contribute similarly to the distortion of an image, the relationship between the coding performance (measured by PSNR) and the bit rate (the reciprocal of the compression ratio) can also be treated as linear. This is confirmed by our experiments, as shown in Figure 3. Then, in the second test, the following relationship is assumed: PSNR = a/CR + b·D_Entropy + c. Through data fitting of the samples in Table 1, we obtain a = 52.9466, b = −7.4096, c = 69.1329, with an average mean square error of 1.87.
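Because the model is linear in its parameters, such a fit reduces to a linear least-squares problem. The sketch below uses synthetic, noiseless data generated from the paper's fitted parameters (a = 52.9466, b = −7.4096, c = 69.1329) purely to show that the fit recovers them; it is not a re-run of the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
cr = rng.uniform(4, 12, 50)        # compression ratios
de = rng.uniform(3.7, 7.0, 50)     # D_Entropy values
psnr = 52.9466 / cr - 7.4096 * de + 69.1329   # synthetic noiseless PSNR

# Design matrix for PSNR = a*(1/CR) + b*D_Entropy + c
A = np.column_stack([1.0 / cr, de, np.ones_like(cr)])
(a, b, c), *_ = np.linalg.lstsq(A, psnr, rcond=None)
```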

The above relationship can then be used as a method to predict image coding quality. In order to validate the effectiveness of the proposed method, six 8-bit 1,024 × 512 grey remote sensing images are tested. The images and the experimental results are shown in Figure 4 and Table 4, respectively.

From the experimental results, it can be seen that for different images, the prediction error of the proposed method is smaller than that of Li [9] and does not exceed 2 dB. Meanwhile, the proposed method is simple. Therefore, it can be widely applied in a variety of real-time space image coding systems.

Finally, other coding algorithms (CCSDS [10], SPIHT [11] and EZW [12]) are implemented and tested. The same relationship is assumed: PSNR = a/CR + b·D_Entropy + c. By data fitting of the samples in Table 1, the results shown in Tables 5–8 are obtained. The coding quality of the images in Figure 4 is then predicted, with the prediction results shown in Figure 5; the actual and predicted coding quality are represented by the red and blue lines, respectively. The experimental results in Figure 5 show that the maximum prediction error is less than 2.5 dB and the average error is 1.12 dB. Therefore, it can be concluded that the proposed method is also efficient for other coding algorithms.
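Once the parameters for a given codec are fitted, prediction is a single formula evaluation. A minimal sketch using the CCSDS parameters from Table 8 (`predict_psnr` is a hypothetical helper name):

```python
def predict_psnr(cr, d_entropy, a=48.1600, b=-7.1272, c=66.6798):
    """Predict coding quality via PSNR = a/CR + b*D_Entropy + c.

    Defaults are the fitted CCSDS parameters from Table 8; pass the
    SPIHT or EZW parameters from the same table for those codecs.
    """
    return a / cr + b * d_entropy + c
```

This constant-time evaluation before coding is what makes the method attractive for real-time space image coding systems, compared with rate control schemes that must run most of the encoder to estimate distortion.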

5. Conclusions

In this study, differential information entropy is used to describe image coding performance. The relationship among differential information entropy, compression ratio and PSNR at various compression ratios is studied, and a prediction method based on this relationship is proposed to predict image coding quality. The experimental results demonstrate that differential information entropy has a good linear relationship with PSNR. It can be concluded that image coding performance can be predicted with small errors from the differential information entropy, and the method can be widely applied in a variety of real-time space image coding systems due to its low complexity. It is noteworthy that as the compression ratio increases, the relationship among differential information entropy, compression ratio and PSNR may no longer be a simple linear one, so the prediction error will grow. How to reduce the prediction error at high compression ratios is a question we need to address in the future.

Acknowledgments

This work was supported by the Natural Science Foundation of China under Grant 61102064, the Hubei Natural Science Foundation of China under Grant 2013CFB298, and the Chen-Guang Project of Wuhan City under Grant 2013072304010826.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, B.; Yang, R.; Jiang, H. Remote-sensing image compression using two-dimensional oriented wavelet transform. IEEE Trans. Geosci. Rem. Sens 2011, 49, 236–250, doi:10.1109/TGRS.2010.2056691.
  2. Garcia-Vilchez, F.; Serra-Sagrista, J. Extending the CCSDS recommendation for image data compression for remote sensing scenarios. IEEE Trans. Geosci. Rem. Sens 2009, 47, 3431–3445, doi:10.1109/TGRS.2009.2021067.
  3. Ma, J.; Plonka, G.; Chauris, H. A new sparse representation of seismic data using adaptive easy-path wavelet transform. IEEE Trans. Geosci. Rem. Sens. Lett 2010, 7, 540–544, doi:10.1109/LGRS.2010.2041185.
  4. Liu, Z.; Karam, L.J.; Watson, A.B. JPEG2000 encoding with perceptual distortion control. IEEE Trans. Image Process 2006, 15, 1763–1778, doi:10.1109/TIP.2006.873460.
  5. An, J.C.; Cai, Z.X. Efficient rate control for lossless mode of JPEG2000. IEEE Signal Process. Lett 2008, 15, 409–412, doi:10.1109/LSP.2008.922293.
  6. Singh, S.; Sharma, R.K.; Sharma, S.K. Efficient rate control approach for JPEG2000 image coding. J. Electron. Imaging 2012, 21, 033004, doi:10.1117/1.JEI.21.3.033004.
  7. Saha, S.; Vemuri, R. An analysis on the effect of image features on lossy coding performance. IEEE Signal Process. Lett 2000, 7, 104–107, doi:10.1109/97.841153.
  8. Saha, S.; Vemuri, R. How do image statistics impact lossy coding performance? Proceedings of International Conference on Information Technology: Coding and Computing, Las Vegas, NV, USA, 2000, 42–47.
  9. Li, L.; Wang, Z.S. Compression Quality Prediction Model for JPEG2000. IEEE Trans. Image Process 2010, 19, 384–398, doi:10.1109/TIP.2009.2034706.
  10. Image Data Compression: Report Concerning Space Data System Standards. Informational Report CCSDS 120.1-G-1; CCSDS: Washington, DC, USA, 2007.
  11. Said, A.; Pearlman, W.A. A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circuits Systems Video Technol 1996, 6, 243–250, doi:10.1109/76.499834.
  12. Shapiro, J.M. Embedded image coding using zerotrees of wavelet coefficients. IEEE Trans. Signal Process 1993, 41, 3445–3462, doi:10.1109/78.258085.
Figure 1. Some common image activities: (a) IAMD1; (b) IAME1.
Figure 2. Three images with similar image activity.
Figure 3. The relationship between D_Entropy and PSNR at different compression ratios.
Figure 4. Test images for the validation of the proposed method.
Figure 5. Prediction results of the coding quality for images in Figure 4.
Table 1. Experimental results of image compression.

Image   lnIAMD1   D_Entropy   PSNR at compression ratios 4 / 6 / 8 / 10 / 12
1       3.37      6.40        37.36 / 32.97 / 30.92 / 29.43 / 28.38
2       2.46      4.92        44.10 / 39.68 / 37.80 / 36.31 / 35.37
3       3.28      6.25        37.49 / 33.15 / 31.11 / 29.60 / 28.53
4       2.84      5.52        40.98 / 36.88 / 34.50 / 33.22 / 32.45
5       2.06      4.27        53.53 / 49.51 / 47.01 / 45.19 / 43.93
6       2.87      5.58        40.87 / 36.68 / 34.26 / 32.98 / 32.12
7       2.72      5.26        41.41 / 37.27 / 34.99 / 33.66 / 32.92
8       2.72      5.36        43.53 / 38.79 / 36.53 / 34.99 / 33.92
9       2.56      5.25        43.11 / 38.97 / 37.15 / 35.73 / 34.66
10      2.88      5.64        40.31 / 36.30 / 34.01 / 32.85 / 32.01
11      2.90      5.67        39.75 / 35.85 / 33.62 / 32.50 / 31.61
12      2.94      5.59        40.14 / 36.13 / 33.81 / 32.64 / 31.61
13      3.06      5.77        37.90 / 33.81 / 32.11 / 30.87 / 30.01
14      2.92      5.59        40.37 / 36.38 / 33.96 / 32.73 / 31.91
15      3.05      5.46        42.41 / 37.49 / 34.54 / 32.74 / 31.27
16      3.28      6.03        37.77 / 33.23 / 31.04 / 29.48 / 28.48
17      3.22      6.00        37.43 / 33.07 / 31.05 / 29.61 / 28.61
18      2.47      5.03        43.43 / 39.40 / 37.67 / 36.34 / 35.49
19      3.93      6.95        29.70 / 26.07 / 23.91 / 22.60 / 21.75
20      3.80      6.85        30.77 / 26.96 / 25.10 / 23.84 / 22.95
21      2.12      4.56        51.12 / 49.67 / 46.94 / 42.66 / 41.22
22      2.10      3.73        53.79 / 49.68 / 46.95 / 45.01 / 43.58
23      2.00      3.78        54.17 / 50.25 / 47.82 / 45.92 / 44.58
Table 2. The relationship between D_Entropy and PSNR.

Compression Ratio   a        b       SSE     R-square
4                   -7.564   83.07   40.78   0.9549
6                   -7.711   79.81   58.05   0.9392
8                   -7.535   76.62   52.02   0.9428
10                  -7.195   73.20   30.43   0.9625
12                  -7.043   71.35   27.69   0.9643
Table 3. The relationship between lnIAMD1 and PSNR.

Compression Ratio   α        β       SSE     R-square
4                   -12.03   76.08   64.32   0.9289
6                   -12.35   72.93   70.57   0.9261
8                   -12.12   70.05   56.59   0.9377
10                  -11.54   66.84   38.73   0.9523
12                  -11.33   65.22   31.07   0.9600
Table 4. Prediction results of the image coding quality for different methods.

Test Image   lnIAMD1   D_Entropy   Method          PSNR at compression ratios 4 / 6 / 8 / 10 / 12
(a)          3.0528    5.8051      Actual          39.56 / 35.41 / 33.18 / 31.72 / 30.64
                                   Proposed        39.36 / 34.94 / 32.74 / 31.41 / 30.53
                                   Reference [9]   -     / -     / -     / 32.54 / 31.32
(b)          2.9003    5.5442      Actual          41.99 / 37.37 / 34.72 / 33.37 / 32.17
                                   Proposed        41.29 / 36.88 / 34.67 / 33.35 / 32.46
                                   Reference [9]   -     / -     / -     / 34.01 / 32.84
(c)          2.6017    5.0605      Actual          45.65 / 41.24 / 38.66 / 36.95 / 35.56
                                   Proposed        44.87 / 40.46 / 38.25 / 36.93 / 36.05
                                   Reference [9]   -     / -     / -     / 36.90 / 35.80
(d)          2.4683    5.0203      Actual          45.02 / 40.93 / 38.72 / 37.19 / 36.11
                                   Proposed        45.17 / 40.76 / 38.55 / 37.23 / 36.35
                                   Reference [9]   -     / -     / -     / 38.19 / 37.12
(e)          1.9082    3.8752      Actual          54.88 / 50.86 / 48.53 / 46.59 / 45.21
                                   Proposed        53.66 / 49.24 / 47.04 / 45.71 / 44.83
                                   Reference [9]   -     / -     / -     / 43.61 / 42.68
(f)          2.1900    4.6100      Actual          47.72 / 43.53 / 41.15 / 39.71 / 38.64
                                   Proposed        48.21 / 43.80 / 41.59 / 40.27 / 39.39
                                   Reference [9]   -     / -     / -     / 40.88 / 39.88
Table 5. The relationship between D_Entropy and PSNR for the CCSDS algorithm (a = 0).

Compression Ratio   b        c       SSE     R-square
4                   -7.896   82.81   37.59   0.9616
6                   -7.279   75.68   31.84   0.9617
8                   -7.168   73.02   19.81   0.9751
10                  -6.713   69.20   23.52   0.9666
12                  -6.557   67.46   22.90   0.9660
Table 6. The relationship between D_Entropy and PSNR for the SPIHT algorithm (a = 0).

Compression Ratio   b        c       SSE     R-square
4                   -7.931   82.50   33.67   0.9658
6                   -7.232   75.16   34.49   0.9582
8                   -6.879   71.15   19.30   0.9737
10                  -6.585   68.22   23.80   0.9650
12                  -6.392   66.28   19.53   0.9693
Table 7. The relationship between D_Entropy and PSNR for the EZW algorithm (a = 0).

Compression Ratio   b        c       SSE     R-square
4                   -6.902   74.77   21.71   0.9707
6                   -6.662   70.06   17.93   0.9737
8                   -6.497   67.79   23.98   0.9638
10                  -5.884   63.01   19.65   0.9633
12                  -5.817   61.95   22.85   0.9572
Table 8. The relationship among D_Entropy, PSNR and compression ratio (varying from 4 to 12).

Algorithm   a         b         c         Average Mean Square Error
CCSDS       48.1600   -7.1272   66.6798   1.3376
SPIHT       46.8595   -7.0083   65.8933   1.3538
EZW         40.7013   -6.3407   61.6425   1.0696