Article

Hybrid Encoding Scheme for AMBTC Compressed Images Using Ternary Representation Technique

Tung-Shou Chen, Jie Wu, Kai Sheng Chen, Junying Yuan and Wien Hong
1 Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung 404, Taiwan
2 School of Electrical and Computer Engineering, Nanfang College of Sun Yat-Sen University, Guangzhou 510970, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(2), 619; https://doi.org/10.3390/app11020619
Submission received: 22 November 2020 / Revised: 5 January 2021 / Accepted: 7 January 2021 / Published: 10 January 2021
(This article belongs to the Special Issue Advances in Signal, Image and Video Processing)

Abstract

Absolute moment block truncation coding (AMBTC) is a lossy image compression technique aiming at low computational cost, and it has been widely studied. Previous studies have investigated improving the performance of AMBTC; however, they often over-describe the details of image blocks during encoding, causing an increase in bitrate. In this paper, we propose an efficient method to improve the compression performance by classifying image blocks into flat, smooth, and complex blocks according to their complexity. Flat blocks are encoded by their block means, while smooth blocks are encoded by a pair of adjusted quantized values and an index pointing to one of the k representative bitmaps. Complex blocks are encoded by three quantized values and a ternary map obtained by a clustering algorithm. Ternary indicators are used to specify the encoding cases. In our method, the details of most blocks can be retained without significantly increasing the bitrate. Experimental results show that, compared with prior works, the proposed method achieves higher image quality at a better compression ratio for all of the test images.

1. Introduction

With the rapid development of imaging technology, digital images are perhaps the most widely used media on the Internet. Because digital images contain significant amounts of spatial redundancy, efficient lossy image compression techniques are required for lower storage requirements and faster transmission. The Joint Photographic Experts Group (JPEG) [1,2], vector quantization (VQ) [3,4], and block truncation coding (BTC) [5,6] are well-known lossy compression methods and have been extensively investigated in the literature. Among these techniques, BTC requires significantly less computation than the others while offering acceptable image quality. BTC has been widely investigated in the disciplines of remote sensing and portable devices, in which computational resources are limited. BTC was first proposed by Delp and Mitchell [7]. This method partitions an image into blocks, and each block is represented by two quantized values and a bitmap. Inspired by [7], Lema and Mitchell [8] proposed a variant called absolute moment block truncation coding (AMBTC), which offers simpler computation than that of BTC.
The applications of AMBTC have been studied in video compression [9], image authentication [10,11,12], and image steganography [13,14]. Moreover, some recoverable authentication methods adopt AMBTC codes as the recovery information used to restore tampered regions. Because the recovery codes have to be embedded into the host image, a more efficient AMBTC coding is always desirable: the embedding burden can be reduced and the quality of the recovered regions can be enhanced. To improve the compression efficiency of the AMBTC method, several approaches, including bitmap omission [15], block classification [16,17], and quantized value adjustment [18], have been adopted to lower the bitrate while maintaining the image quality. For example, Hu [15] recognizes that if the difference between the two quantized values is smaller than a predefined threshold, the bitmap plays an insignificant role in the reconstructed image quality. Therefore, Hu employs the bitmap omission approach, which omits the recording of the bitmap if a block is considered flat and uses only the block mean to represent that block. Chen et al. [17] adopt quadtree partitioning and propose a variable-rate AMBTC compression method for color images. The basic idea of [17] is to partition the image into blocks of various sizes according to their complexities. The AMBTC and bitmap omission techniques are then employed to encode the image blocks. In some applications, such as data hiding or image authentication, bitmaps have to be altered to carry required information, causing a degradation in image quality. Hong [18] optimizes the quantized values so that the impact of bitmap alteration is reduced. Mathews and Nair [19] propose an adaptive AMBTC method based on edge quantization by considering human visual characteristics. This method separates image blocks into edge and non-edge blocks, and the quantized values are calculated based on the edge information. Because the edge characteristics are considered, their method provides better image quality than other AMBTC variants.
Xiang et al. [16] in 2019 proposed a dynamic multi-grouping scheme for AMBTC focusing on improving the reconstructed image quality and reducing the bitrate. Their method partitions an image into non-overlapping blocks. According to the block complexity, varied grouping techniques are designed. An indicator is employed to distinguish the grouping types. In addition, instead of recording the quantized values, the differences between them are recorded so as to reduce the bitrate. Xiang et al.’s method provides better compression performance than those of prior works.
In Xiang et al.’s method, the number of pixel groups of an image block directly affects the reconstructed image quality and bitrate. Their method divides pixels of complex blocks into three or four groups during encoding, which may improve the image quality insignificantly but requires more bits for encoding. In this paper, we propose a ternary representation technique, which uses two thresholds to classify image blocks into three types, namely flat, smooth, and complex. We use the bitmap omission technique [15] to code flat blocks. The adjusted quantized values and an index pointing to one of the representative bitmaps are used to encode the smooth blocks. The complex blocks are encoded using three quantized values and a ternary bitmap. Compared with the AMBTC and Xiang et al.’s work, the proposed method achieves a higher reconstructed image quality with a smaller bitrate.
The remainder of this paper is organized as follows: Section 2 introduces the AMBTC and Xiang et al.'s methods. Section 3 describes the algorithms of this paper in detail. Section 4 presents the experimental results of the proposed method, and concluding remarks are provided in the final section.

2. Related Works

In this section, we briefly introduce AMBTC and Xiang et al.’s methods, which are compared with the proposed method for evaluating the encoding performance.

2.1. The AMBTC Method

The AMBTC method [8] compresses each image block into two quantized values and a bitmap. The details are as follows. Let $I$ be the original image of size $w \times h$, and partition $I$ into non-overlapping blocks $\{I_i\}_{i=0}^{N-1}$ of size $n \times n$, where $N = (w/n) \times (h/n)$ is the total number of blocks. Let $I_{i,j}$ be the $j$-th pixel of the $i$-th block; therefore, $I_i = \{I_{i,j}\}_{j=0}^{n \times n - 1}$. For block $I_i$, the mean value $m_i$ can be calculated by:
$$m_i = \frac{1}{n \times n} \sum_{j=0}^{n \times n - 1} I_{i,j}.$$
The $j$-th bit of bitmap $B_i$, denoted by $B_{i,j}$, indicates the relationship between $I_{i,j}$ and $m_i$, and is obtained by:
$$B_{i,j} = \begin{cases} 0, & I_{i,j} < m_i; \\ 1, & I_{i,j} \ge m_i. \end{cases}$$
The lower quantized value $a_i$ and the higher quantized value $b_i$ are obtained by averaging the pixels in $I_i$ with values smaller than $m_i$ and with values larger than or equal to $m_i$, respectively. This can be implemented by sequentially visiting the pixels in $I_i$: $a_i$ is the average of the visited pixels with values smaller than $m_i$, and $b_i$ is the average of the remaining pixels. Therefore, the compressed code $\Phi_i$ of $I_i$ is $\{a_i, b_i, B_i\}$. Each block is processed in the same manner, and the AMBTC compressed codes $\{\Phi_i\}_{i=0}^{N-1} = \{a_i, b_i, B_i\}_{i=0}^{N-1}$ of image $I$ are obtained.
To decode $\{\Phi_i\}_{i=0}^{N-1} = \{a_i, b_i, B_i\}_{i=0}^{N-1}$, blocks $\{I'_i\}_{i=0}^{N-1}$ of size $n \times n$ are prepared, where $I'_i = \{I'_{i,j}\}_{j=0}^{n \times n - 1}$. The $j$-th pixel of $I'_i$ is decoded by:
$$I'_{i,j} = \begin{cases} a_i, & B_{i,j} = 0; \\ b_i, & B_{i,j} = 1. \end{cases}$$
After all of the image blocks are reconstructed, the decompressed image $I'$ is obtained.
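As an illustration only (not part of the original formulation), a minimal Python sketch of the per-block AMBTC encoder and decoder described above could look as follows; NumPy and 8-bit grayscale input are assumed, and the function names are ours.

```python
import numpy as np

def ambtc_encode_block(block: np.ndarray):
    """Encode one n-by-n block into (a, b, bitmap): a is the mean of the pixels
    below the block mean, b is the mean of the pixels >= the block mean, and the
    bitmap stores 0/1 according to that comparison."""
    m = block.mean()
    bitmap = (block >= m).astype(np.uint8)
    low, high = block[bitmap == 0], block[bitmap == 1]
    a = int(round(low.mean())) if low.size else int(round(m))  # uniform block: no pixel below the mean
    b = int(round(high.mean()))
    return a, b, bitmap

def ambtc_decode_block(a: int, b: int, bitmap: np.ndarray) -> np.ndarray:
    """Reconstruct the block: a where the bitmap is 0, b where it is 1."""
    return np.where(bitmap == 0, a, b).astype(np.uint8)
```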

2.2. Xiang et al.’s Method

AMBTC uses the same approach to compress all image blocks. However, the same approach may not be suitable for both flat and complex blocks. As a result, Xiang et al. proposed an improved scheme that efficiently encodes blocks according to their complexity, achieving better image quality than AMBTC at a satisfactory bitrate.
Let $\{\Phi_i\}_{i=0}^{N-1} = \{a_i, b_i, B_i\}_{i=0}^{N-1}$ be the AMBTC compressed codes of the original image $I = \{I_i\}_{i=0}^{N-1}$. To determine the complexity of block $I_i$, a threshold $\tau_0$ is set. If $b_i - a_i \le \tau_0$, the variation of pixel values in block $I_i$ is relatively small. Therefore, all the pixels in this block are categorized as one group. In this case, the block mean $m_i$ is calculated, and the block is encoded by $(m_i)_2$, which is the 8-bit binary representation of $m_i$.
If $b_i - a_i > \tau_0$, the variation of pixels in block $I_i$ is large and these pixels need to be regrouped to achieve a better reconstructed image quality. Let $G_i^0$ and $G_i^1$ be the groups of pixels with $B_{i,j} = 0$ and $B_{i,j} = 1$, respectively. The AMBTC method is applied to $G_i^0$ and $G_i^1$ to obtain codes $\Phi_i^0 = \{a_i^0, b_i^0, B_i^0\}$ and $\Phi_i^1 = \{a_i^1, b_i^1, B_i^1\}$. According to a given threshold $d_{\min}$, the following rules determine whether $G_i^0$ and $G_i^1$ should be regrouped:
Rule 1: $b_i^0 - a_i^0 > \tau_0$ and the total number of pixels in $G_i^0$ is greater than $d_{\min}$.
Rule 2: $b_i^1 - a_i^1 > \tau_0$ and the total number of pixels in $G_i^1$ is greater than $d_{\min}$.
If neither rule is met, block $I_i$ does not need to be further divided. Otherwise, block $I_i$ is subdivided into three or four groups according to the following rules:
(1) If only Rule 1 or only Rule 2 is met, group $G_i^0$ or $G_i^1$ is subdivided. The number of pixels to be subdivided is denoted by $P_i$, and a $P_i$-bit bitmap $B_i^0$ or $B_i^1$ is used to record the grouping of $G_i^0$ or $G_i^1$. In this case, block $I_i$ is eventually divided into three groups.
(2) If both Rule 1 and Rule 2 are met, both $G_i^0$ and $G_i^1$ are subdivided, and block $I_i$ is eventually divided into four groups. Bitmaps $B_i^0$ and $B_i^1$ have to be recorded to maintain the grouping information.
Xiang et al.'s method uses a 2-bit indicator $IND$ to record the grouping information of $I_i$. When block $I_i$ is divided into one, two, three, or four groups, the indicator $IND$ is set to $00_2$, $01_2$, $10_2$, or $11_2$, respectively. Moreover, if $I_i$ is divided into three groups, an extra indicator $J_i$ is required to show which group is subdivided: if $G_i^0$ is subdivided, then $J_i = 0$; on the contrary, if $G_i^1$ is subdivided, then $J_i = 1$.
To record the quantized values, Xiang et al.'s method records the smallest quantized value of a block using 8 bits, and utilizes a difference encoding scheme (DES) to encode the difference $d_i$ between two quantized values. In DES, if $d_i < \gamma$, where $\gamma$ is a predefined threshold, $d_i$ is recorded using $\log_2(\gamma)$ bits. Otherwise, $d_i$ is recorded using $\log_2(\sigma)$ bits, where $\sigma$ is the maximum difference between quantized values over all blocks. An extra indicator $Y_i$ is used to distinguish these two cases: if $d_i < \gamma$, $Y_i = 0$ is set; otherwise, $Y_i = 1$. The number of bits $R_i$ used to record the difference can be expressed as:
$$R_i = \begin{cases} \log_2(\gamma) + 1, & d_i < \gamma; \\ \log_2(\sigma) + 1, & d_i \ge \gamma. \end{cases}$$
We use the symbol $(x - y)_2$ to represent the $R_i$-bit encoded result of the difference between $x$ and $y$ using DES. For example, if $x = 40$, $y = 28$, and $\gamma = 64$, then $d_i = 12 < \gamma$. Therefore, $R_i = 7$ and the encoding result is $(40 - 28)_2 = 0_2 \| 001100_2$, where $\|$ is the concatenation operator.
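For illustration, a small Python helper (our own naming, not taken from [16]) that produces the DES code of a difference $d$ for given $\gamma$ and $\sigma$ could be written as follows.

```python
from math import log2

def des_encode(d: int, gamma: int, sigma: int) -> str:
    """Difference encoding scheme: a 1-bit flag Y followed by d written in
    log2(gamma) bits when d < gamma, or in log2(sigma) bits otherwise."""
    if d < gamma:
        return '0' + format(d, f'0{int(log2(gamma))}b')
    return '1' + format(d, f'0{int(log2(sigma))}b')

# The example from the text: x = 40, y = 28, gamma = 64 gives '0' || '001100'.
assert des_encode(40 - 28, gamma=64, sigma=128) == '0001100'
```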
The compressed codes and the number of bits required to record block $I_i$ in the different grouping cases are summarized in Table 1. Each block is compressed using the same procedure, and the final compressed code stream $CS_f$ of image $I$ is obtained.
To decode $CS_f$, the 2-bit indicator $IND$ is read first. According to the read bits, one of the four possible compressed codes shown in Table 1, each with a different length, is extracted. The image blocks are reconstructed from the compressed codes, and the decompressed image is obtained. The detailed decoding procedure can be found in [16].

3. Proposed Method

The traditional AMBTC compression method uses the same number of bits to compress each block. However, coding in this way requires more bits than necessary for flat blocks and neglects too much image detail for complex blocks. Xiang et al.'s method improves AMBTC, resulting in better compression for both flat and complex blocks. However, when processing complex blocks, Xiang et al.'s method reconstructs the gray values of the block from up to four quantized values. Although the quality of the reconstructed block is improved, this requires recording more quantized values and longer bitmaps. In addition, Xiang et al. adopt the traditional AMBTC method to compress the smooth blocks, which may increase the cost of recording bitmaps and quantized values.
In this paper, we propose a more effective solution by classifying image blocks into flat, smooth, and complex blocks based on two thresholds $\tau_0$ and $\tau_1$ ($\tau_0 < \tau_1$). Let $\Phi_i = \{a_i, b_i, B_i\}$ be the AMBTC code of $I_i$. If $b_i - a_i \le \tau_0$, $I_i$ is classified as a flat block. Because the pixel variation in a flat block is small, all pixels in a flat block can simply be reconstructed by their mean with satisfactory visual quality. If $\tau_0 < b_i - a_i < \tau_1$, $I_i$ is classified as a smooth block. For a smooth block, we use a clustering algorithm to obtain representative bitmaps, and the original bitmap is replaced by an index pointing to one of the obtained bitmaps. The two quantized values are also adjusted to reduce the error caused by the bitmap replacement. If $b_i - a_i \ge \tau_1$, $I_i$ is classified as a complex block. We use three quantized values and a ternary map to represent the complex block so as to maintain better texture details. The encoding algorithms of these three types of blocks will be presented in the following sections.
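A minimal sketch of this classification step (the function name is ours) given the AMBTC code of a block and the two thresholds:

```python
def classify_block(a: int, b: int, tau0: int, tau1: int) -> str:
    """Classify a block by the spread b - a of its AMBTC quantized values."""
    d = b - a
    if d <= tau0:
        return 'flat'     # Section 3.1: encoded by the 8-bit block mean only
    if d < tau1:
        return 'smooth'   # Section 3.2: adjusted (a, b) plus a codebook index
    return 'complex'      # Section 3.3: three quantized values plus a ternary map
```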

3.1. Encoding of Flat Blocks

The pixel values of a flat block $I_i$ (i.e., $b_i - a_i \le \tau_0$) are relatively close, and thus the bitmap plays an insignificant role in reconstructing the image block. Therefore, we omit the recording of the quantized values in addition to the bitmap, and use an 8-bit mean value $(m_i)_2$ to represent the flat block, where:
$$m_i = \mathrm{round}\left(\frac{b_i + a_i}{2}\right)$$
and $\mathrm{round}(x)$ is the function rounding $x$ to the nearest integer.

3.2. Encoding of Smooth Blocks

If $\tau_0 < b_i - a_i < \tau_1$, the fluctuation of pixel values in block $I_i$ is larger than that of a flat block. Therefore, we refer to $I_i$ as a smooth block. To reduce the bitrate, a codebook consisting of the $k$ most representative bitmaps (codewords) is constructed, and the bitmap of each smooth block is replaced by an index pointing to one of the codewords in the codebook. We use the k-means algorithm [20] to obtain the $k$ most representative bitmaps. Let $\{a_s, b_s, B_s\}_{s=0}^{N_s-1}$ be the set of AMBTC codes satisfying $\tau_0 < b_i - a_i < \tau_1$ for $0 \le i \le N-1$, where $N_s$ is the number of smooth blocks. Firstly, an initial codebook $\{C_\alpha^0\}_{\alpha=0}^{k-1}$ is constructed by randomly selecting $k$ bitmaps from $\{B_s\}_{s=0}^{N_s-1}$, where $k$ is much smaller than $N_s$. Secondly, the bitmaps $\{B_s\}_{s=0}^{N_s-1}$ are classified into $k$ clusters according to the similarities between $\{B_s\}_{s=0}^{N_s-1}$ and $\{C_\alpha^0\}_{\alpha=0}^{k-1}$; that is, if $B_s$ has more bits identical to $C_\alpha^0$ than to any other codeword, $B_s$ is classified into group $\alpha$, where $0 \le \alpha \le k-1$. Thirdly, the bitmaps of the same group are averaged and rounded to obtain the updated codebook $\{C_\alpha^1\}_{\alpha=0}^{k-1}$. The classification process is repeated $t$ times to obtain the final representative bitmaps $\{C_\alpha^t\}_{\alpha=0}^{k-1}$; normally, setting $t = 6$ already gives a satisfactory result. We denote the final representative bitmaps by $\{C_\alpha\}_{\alpha=0}^{k-1}$. Once the classification process is completed, the classification results $\{\alpha_s\}_{s=0}^{N_s-1}$ of the bitmaps $\{B_s\}_{s=0}^{N_s-1}$ are also obtained. Note that the codeword with index $\alpha_s$ has the smallest distance to $B_s$, that is:
$$\alpha_s = \arg\min_{\alpha} \left( \sum_{j=0}^{n \times n - 1} (B_{s,j} - C_{\alpha,j})^2 \right)^{1/2}$$
where $B_{s,j}$ and $C_{\alpha,j}$ represent the $j$-th element of $B_s$ and $C_\alpha$, respectively. Instead of recording $\{B_s\}_{s=0}^{N_s-1}$, the proposed method records the binary representation of $\{\alpha_s\}_{s=0}^{N_s-1}$ as the bitmap information. Therefore, the bits required to record the bitmap are reduced from $n \times n$ bits to $\log_2(k)$ bits. To successfully decode the bitmap, the decoder must have the cluster centers $\{C_\alpha\}_{\alpha=0}^{k-1}$ and the cluster indices $\{\alpha_s\}_{s=0}^{N_s-1}$. Therefore, $\{C_\alpha\}_{\alpha=0}^{k-1}$ must be included as part of the compressed codes.
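A simplified sketch of this clustering step is given below (our own code, assuming NumPy; the bitmaps of the smooth blocks are stacked into an $N_s \times (n \times n)$ array of 0/1 values). It follows the description above: random initialization, nearest-center assignment, averaging and rounding of the centers, and $t$ iterations.

```python
import numpy as np

def build_bitmap_codebook(bitmaps: np.ndarray, k: int, t: int = 6, seed: int = 0):
    """Return k rounded binary cluster centers and, for each bitmap, the index
    of its nearest center."""
    rng = np.random.default_rng(seed)
    centers = bitmaps[rng.choice(len(bitmaps), size=k, replace=False)].astype(float)
    for _ in range(t):
        # Squared Euclidean distance; for 0/1 data this equals the number of differing bits.
        dist = ((bitmaps[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for alpha in range(k):
            members = bitmaps[labels == alpha]
            if len(members):
                centers[alpha] = members.mean(axis=0).round()
    # Final assignment against the final (rounded) centers.
    dist = ((bitmaps[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return centers.astype(np.uint8), dist.argmin(axis=1)
```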
When decoding a smooth block, because the cluster center $C_{\alpha_s}$ is used in place of the original bitmap $B_s$, the quality of the reconstructed block is reduced. To minimize this quality loss, a quantized value adjustment (QA) technique [18] is employed. QA was originally used in a data hiding technique to reduce the distortion of a reconstructed AMBTC block when the original bitmap is replaced by secret data. Because bits in the bitmap are altered, distortion of the reconstructed block is inevitable. QA subtly adjusts the quantized values by counting the bit differences between the original bitmap and the secret data. In the proposed method, the original bitmap is replaced by a cluster center, which resembles the situation in which the bitmap is replaced by secret data. Therefore, the QA technique can be applied in the proposed method. To find the minimum distortion, the QA technique adjusts $a_s$ and $b_s$ to $\hat{a}_s$ and $\hat{b}_s$ by calculating:
$$\hat{a}_s = \frac{a_s \rho_{00} + b_s \rho_{10}}{\rho_{00} + \rho_{10}} \qquad (7)$$
and:
$$\hat{b}_s = \frac{a_s \rho_{01} + b_s \rho_{11}}{\rho_{01} + \rho_{11}}, \qquad (8)$$
respectively, where $\rho_{pq}$ is the number of positions with $B_{s,j} = p$ and $C_{\alpha_s,j} = q$, $(p, q) \in \{0, 1\}$. For example, $\rho_{01}$ is the number of positions with $B_{s,j} = 0$ and $C_{\alpha_s,j} = 1$. After the adjustment of the quantized values, the distortion due to the bitmap replacement is smaller than that without adjustment.
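A direct transcription of Equations (7) and (8) in Python (our own helper; rounding to the nearest integer is assumed, consistent with the worked example in Section 3.4):

```python
import numpy as np

def adjust_quantized_values(a: int, b: int, bitmap: np.ndarray, center: np.ndarray):
    """Quantized value adjustment (QA): recompute (a, b) so that the distortion
    caused by replacing the original bitmap with the cluster center is minimized."""
    rho00 = int(np.sum((bitmap == 0) & (center == 0)))
    rho01 = int(np.sum((bitmap == 0) & (center == 1)))
    rho10 = int(np.sum((bitmap == 1) & (center == 0)))
    rho11 = int(np.sum((bitmap == 1) & (center == 1)))
    a_hat = round((a * rho00 + b * rho10) / (rho00 + rho10)) if rho00 + rho10 else a
    b_hat = round((a * rho01 + b * rho11) / (rho01 + rho11)) if rho01 + rho11 else b
    return a_hat, b_hat
```

With the values of the example in Section 3.4 ($a_s = 28$, $b_s = 40$, $\rho_{00} = 5$, $\rho_{10} = 4$, $\rho_{01} = 1$, $\rho_{11} = 6$), this yields $\hat{a}_s = 33$ and $\hat{b}_s = 38$.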

3.3. Encoding of Complex Blocks

Blocks with $b_i - a_i \ge \tau_1$ are classified as complex blocks. Let $\{I_c\}_{c=0}^{N_c-1}$ be the set of $N_c$ complex blocks in $I$. For a given complex block $I_c = \{I_{c,j}\}_{j=0}^{n \times n - 1}$, the proposed method uses the k-means clustering algorithm to obtain the three most representative quantized values $\{q_c^0, q_c^1, q_c^2\}$ and a ternary map $T_c = \{T_{c,j}\}_{j=0}^{n \times n - 1}$, where $T_{c,j}$ is a ternary digit ranging from 0 to 2 indicating which quantized value should be used to reconstruct the $j$-th pixel of $I_c$. Because the values of $T_{c,j}$ are equally distributed over 0 to 2, we can simply encode the ternary digits $0_3$, $1_3$, and $2_3$ by $0_2$, $10_2$, and $11_2$, respectively. We denote the $L$-bit encoded result of $T_c$ by $T'_c$. Once the decoder has $\{T'_c\}_{c=0}^{N_c-1}$ and $\{q_c^0, q_c^1, q_c^2\}_{c=0}^{N_c-1}$, the blocks $\{I_c\}_{c=0}^{N_c-1}$ can be reconstructed.
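The paper does not spell out the k-means details for complex blocks; the following one-dimensional k-means sketch (our own code and initialization choice, assuming NumPy) is one way to obtain the three quantized values and the ternary map consistent with the description above.

```python
import numpy as np

def quantize_complex_block(block: np.ndarray, iters: int = 10):
    """Cluster the pixels of a complex block into three levels with 1-D k-means.
    Returns the sorted quantized values (q0, q1, q2) and the ternary map."""
    pixels = block.astype(np.float64).ravel()
    # Spread the initial centers over the block's dynamic range.
    q = np.array([pixels.min(), pixels.mean(), pixels.max()])
    for _ in range(iters):
        labels = np.abs(pixels[:, None] - q[None, :]).argmin(axis=1)
        for c in range(3):
            if np.any(labels == c):
                q[c] = pixels[labels == c].mean()
    q = np.sort(q)
    labels = np.abs(pixels[:, None] - q[None, :]).argmin(axis=1)
    return tuple(int(round(v)) for v in q), labels.reshape(block.shape)
```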
When encoding a $4 \times 4$ ternary map, the average number of bits required by the proposed method is:
$$1 \times \frac{16}{3} + 2 \times \frac{2 \times 16}{3} = 26.67 \text{ bits}.$$
Theoretically, recording 16 ternary digits requires $\lceil 16 \times \log_2 3 \rceil = 26$ bits, which is almost the same as in the proposed method. Therefore, the encoding of the ternary map used in the proposed method is efficient.
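The variable-length code for the ternary map ($0 \to 0$, $1 \to 10$, $2 \to 11$) can be sketched as follows (our own helper; the check uses the complex-block example given in Section 3.4 below).

```python
def encode_ternary_map(tmap) -> str:
    """Encode a sequence of ternary digits T_{c,j} into a bit string."""
    table = {0: '0', 1: '10', 2: '11'}
    return ''.join(table[t] for t in tmap)

# T_1 = 1111 2121 0210 0000 encodes to 10101010 11101110 011100 0000 (26 bits).
t1 = [1, 1, 1, 1, 2, 1, 2, 1, 0, 2, 1, 0, 0, 0, 0, 0]
assert encode_ternary_map(t1) == '10101010' + '11101110' + '011100' + '0000'
```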

3.4. Encoding Procedures

This section describes the procedures of the proposed method. To distinguish the encoding methods of the three types of image blocks, an indicator is prepended to the code stream of each encoded block. The indicators $0_2$, $10_2$, and $11_2$ indicate that a flat, smooth, or complex block is encoded, respectively. The detailed encoding procedure is as follows:
Input:
Original image $I$, block size $n \times n$, thresholds $\tau_0$ and $\tau_1$, parameter $\gamma$, and cluster size $k$.
Output:
Code stream $CS_f$.
Step 1:
Partition the original image $I$ into blocks $\{I_i\}_{i=0}^{N-1}$ of size $n \times n$. Encode $\{I_i\}_{i=0}^{N-1}$ using the AMBTC encoder to obtain codes $\{\Phi_i\}_{i=0}^{N-1} = \{a_i, b_i, B_i\}_{i=0}^{N-1}$, as described in Section 2.1.
Step 2:
Scan the codes $\{a_i, b_i, B_i\}_{i=0}^{N-1}$. Let $\{B_s\}_{s=0}^{N_s-1}$ be the bitmaps of the smooth blocks. Cluster $\{B_s\}_{s=0}^{N_s-1}$ into $k$ groups using the k-means clustering algorithm to obtain $k$ cluster centers $\{C_\alpha\}_{\alpha=0}^{k-1}$ and $N_s$ cluster indices $\{\alpha_s\}_{s=0}^{N_s-1}$. Concatenate the binary representations of $\{C_\alpha\}_{\alpha=0}^{k-1}$ to obtain the code stream $CS_A$. The $N_s$ pairs of adjusted quantized values $\{\hat{a}_s, \hat{b}_s\}_{s=0}^{N_s-1}$ of the smooth blocks are also obtained, as described in Section 3.2. Similarly, the quantized values $\{q_c^0, q_c^1, q_c^2\}_{c=0}^{N_c-1}$ and ternary maps $\{T'_c\}_{c=0}^{N_c-1}$ of the complex blocks are obtained, as described in Section 3.3.
Step 3:
Scan the codes $\{a_i, b_i, B_i\}_{i=0}^{N-1}$ again and perform the encoding according to the cases listed below:
Case 1:
If $b_i - a_i \le \tau_0$, a flat block is visited and the code stream of block $I_i$ is $CS_i = 0_2 \| (m_i)_2$.
Case 2:
If $\tau_0 < b_i - a_i < \tau_1$, a smooth block is visited. Extract $\hat{a}_s$, $\hat{b}_s$, and $\alpha_s$ from $\{\hat{a}_s, \hat{b}_s, \alpha_s\}_{s=0}^{N_s-1}$ obtained in Step 2, and encode block $I_i$ as $CS_i = 10_2 \| (\hat{a}_s)_2 \| (\hat{b}_s - \hat{a}_s)_2 \| (\alpha_s)_2$. Note that $(\hat{b}_s - \hat{a}_s)_2$ is encoded using the DES, as described in Section 2.2.
Case 3:
If $b_i - a_i \ge \tau_1$, block $I_i$ is a complex block. Extract $q_c^0$, $q_c^1$, $q_c^2$, and $T'_c$ from $\{q_c^0, q_c^1, q_c^2, T'_c\}_{c=0}^{N_c-1}$ obtained in Step 2, and encode block $I_i$ as $CS_i = 11_2 \| (q_c^0)_2 \| (q_c^1 - q_c^0)_2 \| (q_c^2 - q_c^1)_2 \| T'_c$. Note that $(q_c^1 - q_c^0)_2$ and $(q_c^2 - q_c^1)_2$ are encoded using the DES (see Section 2.2).
Step 4:
Repeat Step 3 until the code streams $\{CS_i\}_{i=0}^{N-1}$ of all blocks $\{I_i\}_{i=0}^{N-1}$ are obtained. Concatenating $\{CS_i\}_{i=0}^{N-1}$ gives the code stream $CS_B$.
Step 5:
Concatenate $CS_A$ and $CS_B$ to obtain the final code stream $CS_f$ of image $I$, i.e., $CS_f = CS_A \| CS_B$.
The encoding of a given image block and the number of bits required for each block type are shown in Figure 1.
We take a simple example to illustrate the encoding of smooth and complex blocks. Let $I_0$ be a $4 \times 4$ block to be encoded, as shown in Figure 2a. Suppose $\tau_0 = 4$, $\tau_1 = 16$, $\gamma = 64$, $\sigma = 128$, and $k = 128$ are used in this example. The AMBTC compressed code of $I_0$ is $\{a_0, b_0, B_0\} = \{28, 40, 1110\,1110\,1100\,1100_2\}$, and $B_0$ is depicted in Figure 2b. Because $\tau_0 < b_0 - a_0 < \tau_1$, $I_0$ is a smooth block. Assume $\alpha_0 = 43$ and $C_{43} = 1010\,0110\,0101\,0100_2$ (see Figure 2c). By comparing $B_0$ and $C_{43}$, we have $\rho_{00} = 5$, $\rho_{01} = 1$, $\rho_{10} = 4$, and $\rho_{11} = 6$. Using Equations (7) and (8), we obtain $\hat{a}_0 = 33$ and $\hat{b}_0 = 38$. Because $\hat{b}_0 - \hat{a}_0 = 5 < \gamma$, we have $Y = 0$. Because $(\hat{a}_0)_2 = 00100001_2$, $(\hat{b}_0 - \hat{a}_0)_2 = 0 \| 000101_2$, and $(\alpha_0)_2 = 0101011_2$, the code stream of $I_0$ is $CS_0 = 10 \| 00100001 \| 0 \| 000101 \| 0101011_2$.
Figure 2d shows another block $I_1$ to be encoded. For this block, the quantized values $a_1 = 25$ and $b_1 = 103$ of the AMBTC code are calculated. Because $b_1 - a_1 \ge \tau_1$, $I_1$ is regarded as a complex block. Suppose that, after applying the k-means clustering algorithm to $I_1$, we obtain the three quantized values $\{q_1^0, q_1^1, q_1^2\} = \{19, 85, 133\}$ and the ternary cluster indices of the pixels $T_1 = \{1111\,2121\,0210\,0000\}$, as shown in Figure 2e. The difference between the first two quantized values is $q_1^1 - q_1^0 = 66 > \gamma$. Therefore, the indicator $Y_{1,0} = 1$ is placed in front of the $\log_2(\sigma) = 7$-bit binary representation of 66 (i.e., $1 \| 1000010_2$). Similarly, because $q_1^2 - q_1^1 = 48 < \gamma$, the indicator $Y_{1,1} = 0$ is placed in front of the $\log_2(\gamma) = 6$-bit binary representation of 48 (i.e., $0 \| 110000_2$). Finally, the ternary cluster indices $T_1$ are encoded as $10101010\,11101110\,011100\,0000_2$, as illustrated in Figure 2f. Therefore, according to Case 3 of Step 3 in Section 3.4, the code stream of block $I_1$ is $CS_1 = 11 \| 00010011 \| 1 \| 1000010 \| 0 \| 110000 \| 10101010\,11101110\,011100\,0000_2$.

3.5. Decoding Procedures

In decoding, data bits are sequentially read from the code stream, and the image blocks are reconstructed from the decoded bits. The detailed decoding steps are as follows:
Input:
Code stream $CS_f$, block size $n \times n$, parameters $\gamma$ and $\sigma$, and cluster size $k$.
Output:
Decompressed image $I' = \{I'_i\}_{i=0}^{N-1}$.
Step 1:
Extract $CS_A$ from $CS_f$ and reconstruct the $k$ cluster centers $\{C_\alpha\}_{\alpha=0}^{k-1}$.
Step 2:
Extract one bit $b$ from $CS_f$. According to the extracted bit, one of the following decoding cases is performed:
Case 1:
If $b = 0_2$, the block to be reconstructed is a flat block. All pixel values of block $I'_i$ are set to the decimal value of the next 8 bits extracted from $CS_f$.
Case 2:
If $b = 1_2$ and the next extracted bit is $0_2$, the block to be reconstructed is a smooth block. Extract the next 8 bits and convert them to a decimal value to obtain the quantized value $\hat{a}_s$. Read the next bit from $CS_f$. If the read bit is $0_2$, $\hat{b}_s$ is reconstructed as the decimal value of the next $\log_2(\gamma)$ bits plus $\hat{a}_s$; otherwise, $\hat{b}_s$ is reconstructed as the decimal value of the next $\log_2(\sigma)$ bits plus $\hat{a}_s$. The cluster index $\alpha_s$ is the decimal value of the next $\log_2(k)$ bits, and the bitmap $C_{\alpha_s}$ is obtained from $\{C_\alpha\}_{\alpha=0}^{k-1}$. The image block is then reconstructed by applying the AMBTC decoder to $\{\hat{a}_s, \hat{b}_s, C_{\alpha_s}\}$.
Case 3:
If $b = 1_2$ and the next extracted bit is $1_2$, the block to be reconstructed is a complex block. Extract the next 8 bits and convert them to a decimal value to obtain the quantized value $q_c^0$. Read the next bit from $CS_f$. If the read bit is $0_2$, $q_c^1$ is reconstructed as $q_c^0$ plus the decimal value of the next $\log_2(\gamma)$ bits; otherwise, $q_c^1$ is reconstructed as $q_c^0$ plus the decimal value of the next $\log_2(\sigma)$ bits. In a similar manner, $q_c^2$ is reconstructed. To reconstruct the ternary map $\{T_{c,j}\}_{j=0}^{n \times n - 1}$, we start from $j = 0$ and repeat the following process up to $j = n \times n - 1$: read a bit $b_0$ from $CS_f$; if $b_0 = 0_2$, then $T_{c,j} = 0$; otherwise, read the next bit $b_1$ from $CS_f$, and $T_{c,j} = 1$ if $b_0 b_1 = 10_2$ or $T_{c,j} = 2$ if $b_0 b_1 = 11_2$. Once we have $\{q_c^0, q_c^1, q_c^2\}$ and $\{T_{c,j}\}_{j=0}^{n \times n - 1}$, the $j$-th pixel of the image block is reconstructed as $q_c^0$, $q_c^1$, or $q_c^2$ according to whether $T_{c,j} = 0$, 1, or 2, respectively.
Step 3:
Repeat Step 2 until all image blocks are reconstructed; the final decompressed image $I'$ is then obtained.
We continue the example given in Section 3.4 to illustrate the decoding process. The detailed process and the decoded result are depicted in Figure 3. To decode the code stream $CS_0 = 10 \| 00100001 \| 0\,000101 \| 0101011_2$, because the first bit is $1_2$ and the second bit is $0_2$, the block to be reconstructed is a smooth block. Extract the next 8 bits from $CS_0$ and convert them to decimal representation to obtain $\hat{a}_0 = 33$. The next extracted bit is 0; therefore, the difference $d_0 = 5$ is the decimal value of the next $\log_2(\gamma)$ bits, and we have $\hat{b}_0 = \hat{a}_0 + 5 = 38$. Finally, extract $\log_2(k)$ bits and convert them to a decimal value to obtain $\alpha_s = 43$, from which the bitmap $C_{43}$ is obtained. The image block can then be reconstructed by decoding $\{\hat{a}_0, \hat{b}_0, C_{43}\}$ using the AMBTC decompression technique.
To decode $CS_1 = 11 \| 00010011 \| 1\,1000010 \| 0\,110000 \| 10101010\,11101110\,011100\,0000_2$, because the first two extracted bits are $11_2$, the block to be decompressed is a complex block. Extract 8 bits; $q_1^0 = 19$ is the decimal value of these 8 bits. The next bit is $1_2$; therefore, $d_1^1 = 66$ is the decimal value of the next $\log_2(\sigma) = 7$ bits, and $q_1^1 = 19 + 66 = 85$ is obtained. Similarly, the next extracted bit is $0_2$; therefore, $d_1^2 = 48$ is obtained by converting the next $\log_2(\gamma) = 6$ bits to their decimal value, and $q_1^2 = 85 + 48 = 133$ is obtained. Finally, we reconstruct $\{T_{1,j}\}_{j=0}^{15}$ from the remaining bits. Because the next extracted bit is $1_2$, we extract one more bit, which is $0_2$; therefore, $T_{1,0} = 1$ is obtained. The remaining 15 ternary digits $\{T_{1,j}\}_{j=1}^{15}$ are decoded in the same manner. Once we have $\{q_1^0, q_1^1, q_1^2\}$ and $\{T_{1,j}\}_{j=0}^{15}$, block $I_1$ can be reconstructed. Figure 3b illustrates the decoding process of $CS_1$.
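Because no codeword (0, 10, 11) is a prefix of another, the decoder in Case 3 can read the remaining bits unambiguously. A small sketch of this step (helper name ours):

```python
def decode_ternary_map(bits: str, count: int):
    """Read `count` ternary digits from a bit string ('0' -> 0, '10' -> 1, '11' -> 2).
    Returns the digits and the number of bits consumed."""
    digits, pos = [], 0
    for _ in range(count):
        if bits[pos] == '0':
            digits.append(0)
            pos += 1
        else:
            digits.append(1 if bits[pos + 1] == '0' else 2)
            pos += 2
    return digits, pos

# Decoding the 26-bit tail of CS_1 recovers T_1 = 1111 2121 0210 0000.
digits, used = decode_ternary_map('10101010' + '11101110' + '011100' + '0000', 16)
assert digits == [1, 1, 1, 1, 2, 1, 2, 1, 0, 2, 1, 0, 0, 0, 0, 0] and used == 26
```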

4. Experimental Results

In this section, we conduct several experiments to show the effectiveness and applicability of the proposed scheme. We take eight grayscale images of size $512 \times 512$, namely Lena, Jet, Baboon, Tiffany, Boat, Stream, Peppers, and House, as the test images, as shown in Figure 4. These images can be obtained from the USC-SIPI image database [21]. We use the peak signal-to-noise ratio (PSNR) and the bitrate to measure the performance. The PSNR is calculated by:
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{n \times n \times N} \sum_{i=0}^{n \times n \times N - 1} (x_i - x'_i)^2},$$
where $x_i$ and $x'_i$ represent the pixel values of the original and decompressed images, respectively. The bitrate is measured as the number of bits required to record each pixel (i.e., bits per pixel, bpp).
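For reference, a NumPy implementation of the PSNR metric as defined above (our own code; 8-bit images assumed):

```python
import numpy as np

def psnr(original: np.ndarray, decompressed: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit grayscale images."""
    mse = np.mean((original.astype(np.float64) - decompressed.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else float(10 * np.log10(255.0 ** 2 / mse))
```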
In all of the experiments, we set $\tau_0 = 4$ because the flat blocks under this setting show no apparent block boundary artifacts.

4.1. The Performance of the Proposed Method

Because the number of cluster centers $k$ and the threshold $\tau_1$ greatly affect the coding efficiency when the quantized value adjustment (QA) technique [18] is applied, this section evaluates how the QA technique and these parameters influence the bitrate and image quality.

4.1.1. Coding Efficiency Comparisons

In the coding of smooth blocks, the original bitmaps of the smooth blocks are used to obtain the cluster centers, and the quantized values are adjusted using the QA technique to lower the distortion. Table 2 and Table 3 show how the QA technique improves the image quality when the block size is set to $4 \times 4$ and $8 \times 8$, respectively. In this experiment, $\tau_1 = 16$ and $\gamma = 32$ are set. As seen from the tables, the QA technique effectively enhances the quality of the reconstructed images for every $k$. For example, in Table 2 with $k = 128$, the averaged quality of the reconstructed images with and without the QA technique is 34.63 and 34.52 dB, respectively; the averaged quality is improved by $34.63 - 34.52 = 0.11$ dB. Similarly, in Table 3 with $k = 128$, the PSNR improvement is $31.86 - 31.76 = 0.10$ dB. Therefore, the QA technique indeed reduces the distortion caused by replacing the original bitmap with a cluster center.
Table 2 and Table 3 also reveal that an increase in the cluster size $k$ enhances the image quality. For example, in Table 3 with $k = 128$, the averaged PSNR of the eight test images is 31.86 dB. When $k = 256$ and 512, the PSNR increases by $31.91 - 31.86 = 0.05$ dB and $32.00 - 31.86 = 0.14$ dB, respectively. The reason is that a larger cluster size provides a greater chance of reducing the difference between the cluster centers and the original bitmaps.
To evaluate how the threshold $\tau_1$ affects the performance of the QA technique, we plot the gain in PSNR obtained by using QA for various $\tau_1$ with $k = 128$ and 512. The results are shown in Figure 5. Note that, in this experiment, a block size of $4 \times 4$ is used.
Figure 5a,b shows that the gain in PSNR increases as $\tau_1$ increases, mainly because the number of smooth blocks also increases with $\tau_1$. Because more blocks are classified as smooth for larger $\tau_1$, more blocks are processed using the QA technique. As a result, the gain in PSNR is higher when $\tau_1$ is larger. It can also be observed that, for each test image, the gain in PSNR is larger when $k = 128$ than when $k = 512$. The reason is that a smaller $k$ implies larger differences between the original bitmaps and the cluster centers. Because the QA technique is capable of reducing the distortion caused by these differences, a larger PSNR improvement can be achieved for smaller $k$.
It is interesting to note that the gain in PSNR of the Stream and Baboon images increases more than that of the other test images when varying $\tau_1$ from 10 to 50 for both $k = 128$ and $k = 512$. Because these two images are more complex than the others, the bitmaps of their smooth blocks are expected to differ more from the selected cluster centers used to replace them. As previously mentioned, the QA technique is effective in reducing the distortion caused by these differences, and the bitmaps of the Stream and Baboon images differ more from the cluster centers than those of the other images. Therefore, the improvement in PSNR after applying the QA technique is more significant than for the others.

4.1.2. Performance Comparison of Various $\tau_1$

The parameter $\tau_1$ controls the numbers of smooth and complex blocks; the number of complex blocks decreases as $\tau_1$ increases. To see the distribution of flat, smooth, and complex blocks in a test image, we take the Lena image as an example and illustrate the distribution obtained by varying $\tau_1$. Figure 6a–d shows the distributions of blocks when $\tau_1 = 8$, 16, 32, and 64 are set. The block size in these figures is $8 \times 8$ and $\tau_0 = 4$. In this figure, the blue squares, red dots, and black cross marks represent flat, smooth, and complex blocks, respectively.
Because the same $\tau_0$ is applied, the number of blue squares (flat blocks) is the same in Figure 6a–d. However, as $\tau_1$ increases, the red dots increase and the black cross marks decrease. The reason is that an increase in $\tau_1$ leads more blocks to be categorized as smooth. It can also be inferred that a better image quality can be achieved at a smaller $\tau_1$, but the bitrate will be higher because more blocks are deemed complex. Note that in the proposed method, more bits are required to represent a complex block than a smooth block.
Table 4 and Table 5 show the PSNR and bitrate for all of the test images under various $\tau_1$ with block sizes $4 \times 4$ and $8 \times 8$, respectively. In this experiment, $\tau_0 = 4$, $\gamma = 64$, and $k = 256$ are set. We also list the PSNR and bitrate of the standard AMBTC method for comparison. Note that the bitrates of AMBTC with block sizes $4 \times 4$ and $8 \times 8$ are 2.0 and 1.25 bpp, respectively.
As seen in Table 4 and Table 5, the PSNR for block size $8 \times 8$ is lower than that for block size $4 \times 4$. For example, when $\tau_1 = 16$, the PSNRs of the Lena image for block sizes $4 \times 4$ and $8 \times 8$ are 34.73 and 31.91 dB, respectively. However, the former requires $1.68 - 1.02 = 0.66$ more bits per pixel than the latter. In addition, the experiments also reveal that a large $\tau_1$ effectively reduces the bitrate at the expense of image quality. On the contrary, a small $\tau_1$ provides better image quality but requires a higher bitrate. This result is expected because a small $\tau_1$ increases the number of complex blocks and therefore the bitrate; however, the image quality also increases.
Figure 7 shows the bitrate–PSNR curves of each of the eight test images obtained by varying the threshold $\tau_1$ from 8 to 64. The figure shows that, for all of the test images, the PSNR increases as the bitrate increases. Moreover, the figure also reveals that smooth images, such as Tiffany or Jet, have better compression efficiency than complex images, such as Stream or Baboon. The reason is that a smooth block not only requires fewer bits to record its compressed code but also provides better reconstructed quality. Because smooth images naturally possess more smooth blocks than complex blocks, their bitrate–PSNR curves are higher than those of complex images.
It can also be seen from Figure 7 that the PSNR and bitrate vary as the threshold $\tau_1$ changes. A larger $\tau_1$ gives a lower bitrate with a lower PSNR. In contrast, a smaller $\tau_1$ offers higher image quality, but the bitrate is also higher. Therefore, the selection of the threshold $\tau_1$ depends on the application; for example, if an application requires higher image quality, a smaller $\tau_1$ should be used.
It is worth noting that, for most of the test images, the proposed method provides better performance than AMBTC, particularly for smooth images. For example, Lena, Jet, Tiffany, Boat, and Peppers are considered smooth. For these images, regardless of the value of $\tau_1$, the PSNR is always higher and the bitrate is always lower than those of the AMBTC method. In contrast, for complex images such as Baboon or Stream, few blocks are classified as flat, which require only 8 bits to record. Therefore, the reduction in bitrate is limited. Nevertheless, the proposed method provides either a better image quality or a lower bitrate than AMBTC.

4.2. Comparisons with Xiang et al.’s Work

Xiang et al.'s method [16] also improves the AMBTC method by dynamically splitting blocks into multiple groups, and achieves good performance. In this section, we compare the proposed method with that of Xiang et al. in terms of PSNR and bitrate. To make a fair comparison, $\tau_0 = 4$ and $\gamma = 64$ are set in both methods. The proposed method uses $\tau_1$ to control the numbers of smooth and complex blocks, whereas Xiang et al.'s method uses $d_{\min}$ to control the number of pixel groups. We select $\tau_1 = 8$, 16, and 24 in the proposed method and compare the results with those of Xiang et al. obtained by setting $d_{\min} = 6$, 7, and 8 for block size $4 \times 4$. The results are shown in Table 6. Table 7 shows the same experiments, except that the block size is $8 \times 8$ and $d_{\min} = 28$, 32, and 36. The settings of $d_{\min}$ in Xiang et al.'s method ensure that its best performance is achieved.
Table 6 shows that, in Xiang et al.'s method, the image quality and bitrate decrease as $d_{\min}$ increases. The reason is that a large $d_{\min}$ prevents more blocks from being split, leading to a decrease in bitrate and PSNR. Note that for most of the test images, the proposed method performs better than that of Xiang et al. Take the Lena image as an example: when $\tau_1 = 16$ and $d_{\min} = 7$ are set, the PSNRs of the proposed and Xiang et al.'s methods are 36.05 dB at 1.68 bpp and 35.12 dB at 2.20 bpp, respectively. The PSNR of the proposed method is $36.05 - 35.12 = 0.93$ dB higher and the bitrate is $2.20 - 1.68 = 0.52$ bpp lower than those of Xiang et al.'s method. Comparisons for the other images and the other parameter settings reveal similar results, with the exception of the Baboon image: when $\tau_1 = 8$ and $d_{\min} = 6$ are set, the PSNR of Xiang et al.'s method is $31.40 - 31.28 = 0.12$ dB higher than that of the proposed method. The reason is that, under these settings, more blocks are divided into four groups and thus a better image quality is achieved. However, their method requires $3.57 - 3.12 = 0.45$ bpp more than the proposed method.
In the performance comparison with block size $8 \times 8$, the proposed method shows better results for all test images. For example, as shown in Table 7, when $\tau_1 = 14$ and $d_{\min} = 28$, the PSNR of the Baboon image for the proposed method is 28.84 dB at 1.88 bpp. The PSNR is $28.84 - 28.14 = 0.70$ dB higher and the bitrate is $2.23 - 1.88 = 0.35$ bpp lower than those of Xiang et al.'s method.
Figure 8a–f shows the visual quality comparisons of the AMBTC, Xiang et al.'s, and the proposed methods. As seen from Figure 8a, when the block size is $4 \times 4$ and the AMBTC method is applied, apparent distortions can be seen along the image edges and noticeable block boundary artifacts are observed. Note that the PSNR of AMBTC is 33.27 dB at 2.0 bpp. Xiang et al. improve the AMBTC method by adding more details to complex blocks. As a result, the PSNR (36.31 dB) is significantly higher and blocks at the edges look more natural than those of AMBTC (Figure 8c, $d_{\min} = 6$). However, their method requires $2.35 - 2.0 = 0.35$ bpp more to achieve this effect. In contrast, the visual quality of the proposed method (Figure 8e, $\tau_1 = 8$) is comparable with that of Xiang et al.'s method, but the bitrate is $2.35 - 2.05 = 0.30$ bpp lower with a slightly higher PSNR.
When the block size is $8 \times 8$, the distortion of AMBTC is more apparent (Figure 8b) than that of $4 \times 4$, but the bitrate is reduced from 2.0 to 1.25 bpp. The visual quality of Xiang et al.'s method (Figure 8d, $d_{\min} = 28$) is significantly better than that of AMBTC and shows no noticeable block boundary artifacts. However, their method requires $1.64 - 1.25 = 0.39$ bpp more to improve the image quality. In addition, some edges on Lena's face, eyes, and shoulder exhibit apparent distortions because the pixel splitting operation may not be triggered due to the setting of $d_{\min}$. In contrast, the edges produced by the proposed method exhibit no apparent distortion (see Figure 8f, $\tau_1 = 14$). Moreover, the bitrate required by the proposed method is even lower than that of AMBTC, by $1.25 - 1.09 = 0.16$ bpp.

5. Conclusions

In this paper, we propose a hybrid encoding scheme for AMBTC compressed images using a ternary representation technique. Considering that the number of quantized values greatly affects the quality of the reconstructed image, the proposed method classifies image blocks into flat, smooth, and complex blocks. These three types of blocks are encoded using one, two, or three quantized values, respectively. Flat blocks require no bitmap, whereas smooth and complex blocks require binary and ternary maps, respectively, to indicate which quantized values should be used to reconstruct the corresponding pixels. A carefully designed indicator is prepended to the code stream of each block to signify the block type. The proposed method achieves better image quality than prior works at a smaller bitrate, as observed from the experimental results. Note that although the k-means algorithm used in the proposed method may require a slightly higher computational cost than discrete cosine transform (DCT) based methods, it is only applied to the smooth blocks in the encoding stage to obtain the representative bitmaps rather than to the whole image. Furthermore, the k-means algorithm does not need to be applied again during decoding. Therefore, the overall computational cost of the proposed method is smaller than that of DCT-based compression methods.

Author Contributions

W.H., J.W., and K.S.C. contributed to the conceptualization, methodology, and writing of this paper. J.Y. and T.-S.C. conceived the simulation setup and formal analysis and conducted the investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, J.; Tian, Y.-G.; Han, T.; Yang, C.-F.; Liu, W.-B. LSB steganographic payload location for JPEG-decompressed images. Digit. Signal Process. 2015, 38, 66–76. [Google Scholar] [CrossRef]
  2. Liu, J.; Tian, Y.; Han, T.; Wang, J.; Luo, X. Stego key searching for LSB steganography on JPEG decompressed image. Sci. China Inf. Sci. 2016, 59, 1–15. [Google Scholar] [CrossRef]
  3. Qin, C.; Chang, C.-C.; Chiu, Y.-P. A Novel Joint Data-Hiding and Compression Scheme Based on SMVQ and Image Inpainting. IEEE Trans. Image Process. 2014, 23, 969–978. [Google Scholar] [CrossRef]
  4. Qin, C.; Hu, Y.-C. Reversible data hiding in VQ index table with lossless coding and adaptive switching mechanism. Signal Process. 2016, 129, 48–55. [Google Scholar] [CrossRef]
  5. Tsou, C.-C.; Hu, Y.-C.; Chang, C.-C. Efficient optimal pixel grouping schemes for AMBTC. Imaging Sci. J. 2008, 56, 217–231. [Google Scholar] [CrossRef]
  6. Hu, Y.C.; Su, B.H.; Tsai, P.Y. Color image coding scheme using absolute moment block and prediction technique. Imaging Sci. J. 2008, 56, 254–270. [Google Scholar] [CrossRef]
  7. Delp, E.J.; Mitchell, O.R. Image coding using block truncation coding. IEEE Trans. Commun. 1979, 27, 1335–1342. [Google Scholar] [CrossRef]
  8. Lema, M.; Mitchell, O. Absolute Moment Block Truncation Coding and Its Application to Color Images. IEEE Trans. Commun. 1984, 32, 1148–1157. [Google Scholar] [CrossRef]
  9. Kumaravadivelan, A.; Nagaraja, P.; Sudhanesh, R. Video compression technique through block truncation coding. Int. J. Res. Anal. Rev. 2019, 6, 236–242. [Google Scholar]
  10. Hemida, O.; He, H. A self-recovery watermarking scheme based on block truncation coding and quantum chaos map. Multimed. Tools Appl. 2020, 79, 18695–18725. [Google Scholar] [CrossRef]
  11. Qin, C.; Ji, P.; Zhang, X.; Dong, J.; Wang, J. Fragile image watermarking with pixel-wise recovery based on overlapping embedding strategy. Signal Process. 2017, 138, 280–293. [Google Scholar] [CrossRef]
  12. Qin, C.; Ji, P.; Chang, C.-C.; Dong, J.; Sun, X. Non-uniform Watermark Sharing Based on Optimal Iterative BTC for Image Tampering Recovery. IEEE MultiMed. 2018, 25, 36–48. [Google Scholar] [CrossRef]
  13. Ma, Y.Y.; Luo, X.Y.; Li, X.L.; Bao, Z.; Zhang, Y. Selection of rich model steganalysis features based on decision rough set α-positive region reduction. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 336–350. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Qin, C.; Zhang, W.M.; Liu, F.L.; Luo, X.Y. On the fault-tolerant performance for a class of robust image steganography. Signal Process. 2018, 146, 99–111. [Google Scholar] [CrossRef]
  15. Hu, Y.-C. Low-complexity and low-bit-rate image compression scheme based on absolute moment block truncation coding. Opt. Eng. 2003, 42, 1964–1975. [Google Scholar] [CrossRef]
  16. Xiang, Z.; Hu, Y.-C.; Yao, H.; Qin, C. Adaptive and dynamic multi-grouping scheme for absolute moment block truncation coding. Multimed. Tools Appl. 2018, 78, 7895–7909. [Google Scholar] [CrossRef]
  17. Chen, W.-L.; Hu, Y.-C.; Liu, K.-Y.; Lo, C.-C.; Wen, C.-H. Variable-Rate Quadtree-segmented Block Truncation Coding for Color Image Compression. Int. J. Signal Process. Image Process. Pattern Recognit. 2014, 7, 65–76. [Google Scholar] [CrossRef]
  18. Hong, W. Efficient Data Hiding Based on Block Truncation Coding Using Pixel Pair Matching Technique. Symmetry 2018, 10, 36. [Google Scholar] [CrossRef] [Green Version]
  19. Mathews, J.; Nair, M.S. Adaptive block truncation coding technique using edge-based quantization approach. Comput. Electr. Eng. 2015, 43, 169–179. [Google Scholar] [CrossRef]
  20. Hartigan, J.A.; Wong, M.A. A K-means clustering algorithm. Appl. Stat. 1979, 28, 100–108. [Google Scholar] [CrossRef]
  21. The USC-SIPI Image Database. Available online: http://sipi.usc.edu/database/ (accessed on 1 November 2020).
Figure 1. Illustration of image block encoding.
Figure 2. An example of image encoding.
Figure 3. Illustration of the decoding procedures of image blocks.
Figure 4. Eight test images.
Figure 5. The gain of PSNR when the quantized value adjustment (QA) technique is applied.
Figure 6. Distribution of flat, smooth and complex blocks.
Figure 7. Bitrate–PSNR curves of each of the eight test images.
Figure 8. Visual quality comparisons of the proposed method and that of Xiang et al.
Table 1. Number of bits required to record a compressed block.
Number of Groups | Compressed Code of $I_i$ | Number of Bits
1 | $\{00_2, (m_i)_2\}$ | $2 + 8$
2 | $\{01_2, (a_i)_2, (b_i - a_i)_2, B_i\}$ | $2 + 8 + R_i + n \times n$
3 | $\{10_2, (a_i^0)_2, (b_i^0 - a_i^0)_2, (b_i - b_i^0)_2, B_i, J_i, B_i^0\}$ or $\{10_2, (a_i)_2, (a_i^1 - a_i)_2, (b_i^1 - a_i^1)_2, B_i, J_i, B_i^1\}$ | $2 + 8 + 2R_i + n \times n + 1 + P_i$
4 | $\{11_2, (a_i^0)_2, (b_i^0 - a_i^0)_2, (a_i^1 - b_i^0)_2, (b_i^1 - a_i^1)_2, B_i, B_i^0, B_i^1\}$ | $2 + 8 + 3R_i + 2 \times n \times n$
Table 2. Peak signal-to-noise ratio (PSNR) and bitrate of compressed images with $4 \times 4$ block size.
Images | k = 128: Bitrate, w/o QA, w/ QA | k = 256: Bitrate, w/o QA, w/ QA | k = 512: Bitrate, w/o QA, w/ QA
Lena | 1.60, 35.83, 35.96 | 1.63, 35.95, 36.05 | 1.68, 36.07, 36.15
Jet | 1.52, 35.85, 35.91 | 1.55, 35.92, 35.97 | 1.58, 35.99, 36.03
Baboon | 2.73, 30.80, 30.85 | 2.76, 30.85, 30.89 | 2.79, 30.89, 30.93
Tiffany | 1.47, 37.18, 37.38 | 1.50, 37.39, 37.55 | 1.55, 37.57, 37.69
Boat | 2.00, 33.97, 34.13 | 2.05, 34.15, 34.27 | 2.10, 34.27, 34.36
Stream | 2.68, 32.43, 32.49 | 2.70, 32.49, 32.53 | 2.73, 32.55, 32.57
Peppers | 1.68, 35.34, 35.57 | 1.73, 35.52, 35.71 | 1.78, 35.71, 35.85
House | 1.95, 34.73, 34.78 | 1.97, 34.79, 34.82 | 2.01, 34.85, 34.88
Average | 1.95, 34.52, 34.63 | 1.99, 34.63, 34.72 | 2.03, 34.74, 34.81
(Bitrate in bpp; PSNR in dB, without and with the QA technique.)
Table 3. PSNR and bitrate of compressed images with $8 \times 8$ block size.
Images | k = 128: Bitrate, w/o QA, w/ QA | k = 256: Bitrate, w/o QA, w/ QA | k = 512: Bitrate, w/o QA, w/ QA
Lena | 0.97, 32.96, 33.07 | 1.01, 33.03, 33.12 | 1.08, 33.13, 33.21
Jet | 1.00, 32.77, 32.82 | 1.04, 32.83, 32.86 | 1.11, 32.90, 32.92
Baboon | 1.79, 28.67, 28.72 | 1.83, 28.72, 28.75 | 1.89, 28.80, 28.82
Tiffany | 0.89, 34.63, 34.81 | 0.93, 34.79, 34.93 | 1.00, 34.99, 35.09
Boat | 1.30, 31.17, 31.32 | 1.34, 31.24, 31.37 | 1.41, 31.36, 31.46
Stream | 1.89, 29.79, 29.82 | 1.92, 29.84, 29.86 | 1.99, 29.93, 29.93
Peppers | 1.01, 32.59, 32.74 | 1.05, 32.66, 32.80 | 1.12, 32.79, 32.90
House | 1.36, 31.52, 31.57 | 1.39, 31.57, 31.61 | 1.46, 31.65, 31.67
Average | 1.28, 31.76, 31.86 | 1.31, 31.83, 31.91 | 1.38, 31.94, 32.00
(Bitrate in bpp; PSNR in dB, without and with the QA technique.)
Table 4. PSNR and bitrate of the proposed method for various $\tau_1$ (block size $4 \times 4$).
Images | AMBTC PSNR (bpp = 2.0) | PSNR (dB): $\tau_1$ = 16, 32, 64 | Bitrate (bpp): $\tau_1$ = 16, 32, 64
Lena | 33.24 | 36.05, 34.62, 32.85 | 1.68, 1.46, 1.33
Jet | 31.97 | 35.98, 34.79, 32.50 | 1.56, 1.39, 1.24
Baboon | 26.98 | 30.89, 29.55, 25.74 | 2.75, 2.34, 1.75
Tiffany | 35.77 | 37.55, 35.72, 34.26 | 1.54, 1.34, 1.25
Boat | 31.16 | 34.27, 32.87, 30.81 | 2.09, 1.77, 1.57
Stream | 28.59 | 32.53, 30.20, 27.27 | 2.73, 2.11, 1.66
Peppers | 33.42 | 35.71, 34.49, 32.80 | 1.77, 1.59, 1.49
House | 30.89 | 34.82, 32.00, 30.42 | 2.00, 1.65, 1.38
Average | 31.50 | 34.73, 33.03, 30.83 | 2.02, 1.71, 1.46
Table 5. PSNR and bitrate of the proposed method for various $\tau_1$ (block size $8 \times 8$).
Images | AMBTC PSNR (bpp = 1.25) | PSNR (dB): $\tau_1$ = 16, 32, 64 | Bitrate (bpp): $\tau_1$ = 16, 32, 64
Lena | 29.93 | 33.12, 31.80, 29.24 | 1.02, 0.76, 0.51
Jet | 28.84 | 32.86, 31.72, 29.40 | 1.04, 0.82, 0.62
Baboon | 25.18 | 28.75, 27.66, 23.06 | 1.81, 1.43, 0.71
Tiffany | 32.55 | 34.93, 32.76, 30.74 | 0.94, 0.62, 0.47
Boat | 28.07 | 31.37, 29.78, 27.44 | 1.35, 0.90, 0.61
Stream | 26.10 | 29.85, 27.99, 24.18 | 1.92, 1.34, 0.63
Peppers | 29.66 | 32.80, 31.49, 29.43 | 1.06, 0.78, 0.58
House | 27.68 | 31.60, 30.20, 26.92 | 1.39, 1.02, 0.62
Average | 28.50 | 31.91, 30.43, 27.55 | 1.32, 0.96, 0.59
Table 6. Comparisons of PSNR and bitrate with block size $4 \times 4$.
Images | Metric | Proposed $\tau_1$ = 8 | [16] $d_{\min}$ = 6 | Proposed $\tau_1$ = 16 | [16] $d_{\min}$ = 7 | Proposed $\tau_1$ = 24 | [16] $d_{\min}$ = 8
Lena | PSNR | 36.98 | 36.31 | 36.05 | 35.12 | 35.31 | 33.91
Lena | Bitrate | 2.05 | 2.35 | 1.68 | 2.20 | 1.54 | 1.95
Jet | PSNR | 36.53 | 34.96 | 35.97 | 33.74 | 35.36 | 32.69
Jet | Bitrate | 1.80 | 1.98 | 1.57 | 1.87 | 1.45 | 1.71
Baboon | PSNR | 31.28 | 31.40 | 30.89 | 29.59 | 30.32 | 28.11
Baboon | Bitrate | 3.12 | 3.57 | 2.75 | 3.24 | 2.52 | 2.75
Tiffany | PSNR | 38.98 | 38.42 | 37.55 | 37.29 | 36.46 | 36.41
Tiffany | Bitrate | 1.92 | 2.12 | 1.54 | 1.99 | 1.40 | 1.81
Boat | PSNR | 35.37 | 34.81 | 34.27 | 33.43 | 33.44 | 32.11
Boat | Bitrate | 2.65 | 3.02 | 2.09 | 2.79 | 1.87 | 2.43
Stream | PSNR | 32.98 | 32.69 | 32.53 | 31.14 | 31.46 | 29.69
Stream | Bitrate | 3.07 | 3.45 | 2.73 | 3.18 | 2.39 | 2.75
Peppers | PSNR | 37.11 | 36.38 | 35.71 | 35.23 | 35.03 | 34.16
Peppers | Bitrate | 2.28 | 2.64 | 1.77 | 2.45 | 1.65 | 2.20
House | PSNR | 35.32 | 35.06 | 34.82 | 33.52 | 33.99 | 31.86
House | Bitrate | 2.28 | 2.52 | 2.00 | 2.35 | 1.80 | 2.05
(PSNR in dB; Bitrate in bpp.)
Table 7. Comparisons of PSNR and bitrate with $8 \times 8$ block size.
Images | Metric | Proposed $\tau_1$ = 14 | [16] $d_{\min}$ = 28 | Proposed $\tau_1$ = 22 | [16] $d_{\min}$ = 32 | Proposed $\tau_1$ = 30 | [16] $d_{\min}$ = 36
Lena | PSNR | 33.32 | 31.88 | 32.66 | 30.85 | 32.00 | 30.27
Lena | Bitrate | 1.09 | 1.64 | 0.90 | 1.45 | 0.79 | 1.27
Jet | PSNR | 32.99 | 30.37 | 32.44 | 29.66 | 31.87 | 29.23
Jet | Bitrate | 1.09 | 1.36 | 0.94 | 1.24 | 0.84 | 1.13
Baboon | PSNR | 28.84 | 28.14 | 28.43 | 26.44 | 27.84 | 25.50
Baboon | Bitrate | 1.88 | 2.23 | 1.65 | 1.88 | 1.47 | 1.50
Tiffany | PSNR | 35.36 | 34.44 | 33.97 | 33.48 | 32.95 | 32.89
Tiffany | Bitrate | 1.04 | 1.51 | 0.77 | 1.33 | 0.64 | 1.16
Boat | PSNR | 31.62 | 30.12 | 30.68 | 29.11 | 29.95 | 28.54
Boat | Bitrate | 1.46 | 1.99 | 1.11 | 1.75 | 0.93 | 1.50
Stream | PSNR | 29.94 | 28.62 | 29.39 | 27.30 | 28.31 | 26.54
Stream | Bitrate | 1.99 | 2.20 | 1.71 | 1.90 | 1.41 | 1.58
Peppers | PSNR | 33.01 | 31.35 | 32.27 | 30.52 | 31.68 | 30.03
Peppers | Bitrate | 1.14 | 1.86 | 0.91 | 1.62 | 0.81 | 1.40
House | PSNR | 31.71 | 30.20 | 31.22 | 28.79 | 30.45 | 28.09
House | Bitrate | 1.45 | 1.70 | 1.24 | 1.47 | 1.06 | 1.26
(PSNR in dB; Bitrate in bpp.)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
