Symmetry
  • Article
  • Open Access

24 October 2024

Lossless Recompression of Vector Quantization Index Table for Texture Images Based on Adaptive Huffman Coding Through Multi-Type Processing

1 Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
2 Information and Communication Security Research Center, Feng Chia University, Taichung 40724, Taiwan
* Authors to whom correspondence should be addressed.
This article belongs to the Section Computer

Abstract

With the development of the information age, every aspect of life depends on the internet, and huge amounts of data are transmitted and stored online every day. To improve transmission efficiency and reduce storage requirements, compression technology is becoming increasingly important. Depending on the application scenario, compression is divided into lossless compression and lossy compression, which tolerates a certain degree of distortion. Vector quantization (VQ) is a widely used lossy compression technology. Building upon VQ, we propose a lossless compression scheme for the VQ index table. In other words, our work recompresses the output of VQ compression and restores the VQ-compressed carrier without loss. Notably, our method specifically targets texture images. By leveraging the spatial symmetry inherent in these images, our approach generates high-frequency symbols through difference calculations, which facilitates efficient compression with adaptive Huffman coding. Experimental results show that our scheme achieves better compression performance than other schemes.

1. Introduction

Ever since people realized the convenience that artificial intelligence (AI) brings, many AI-based tools and applications have been developed at an amazing speed to improve the quality of our lives. As the fourth industrial revolution approaches, many new AI-dependent developments are underway. Robots are built not only to assist in manufacturing, transportation, and healthcare but also to serve modern smart homes. With all these advanced technologies, future telecommunications traffic is heavier than ever. Traditional cellular networks cannot keep up with the required data rates, but this industrial revolution becomes a reality when Fifth-Generation (5G) communication technology comes into play. Handling huge amounts of data over the internet is a major milestone to achieve, and it increases the demand for digital cloud utilization.
The benefits of 5G infrastructure [,] are that its higher bandwidth transmits data faster and that it connects more devices, making internet-of-things (IoT) and machine-to-machine communication possible. Even though 5G can carry big data, it remains critical to improve data compression mechanisms to decrease the size of transmitted files: the smaller a file, the faster it is transmitted and the less cloud storage it requires.
Images, videos, and audio are common data formats transmitted through networks in our daily lives. Image compression can be categorized as either lossy or lossless, depending on whether the original images can be fully restored. Most of the time, people choose lossy compression for better web performance. Although lossy image compression cannot fully restore the original images, the recovered images are generally visually recognizable with only minor distortions. Vector quantization (VQ) [,] is a widely used and fundamental image compression technique in academia. To maintain high image quality, obtaining a well-trained codebook is essential. When an image has strong symmetry, the vectors in the codebook can be reused more frequently, improving compression efficiency. The Linde–Buzo–Gray (LBG) algorithm [], introduced in 1980, is a well-known method for training codebooks. It divides an image into blocks, converts the data sequences of these blocks into codewords, and replaces these codewords with the indices from a well-trained codebook to compress the image. Improving image compression techniques [,,,,,,,,,,,] is a key issue that many scholars have focused on. VQ compression technology [,,] has been widely used in various fields in recent years. Any improvements in VQ schemes could potentially be applied to other compression techniques in the future.
VQ compression can be categorized into memory VQ and memoryless VQ; the primary difference is whether correlations are exploited among adjacent blocks or among neighboring pixels within a block. Well-known memory VQ compression algorithms include finite-state VQ (FSVQ) [], side match VQ (SMVQ) [], and predictive VQ (PVQ) [,]. Memory VQ schemes generally require more computation than memoryless ones. Several popular memoryless VQ compression algorithms have been proposed, such as Predictive Mean Search (PMS) [], Search-Order Coding (SOC) [], and Index Compression VQ (ICVQ) []. Subsequently, some SOC-based algorithms [,] for VQ index tables were developed. In 2009, Chang et al. [] used a state codebook to recompress the VQ index table. In 2024, Lin et al. [] applied the concept of side match to recompress the VQ index table. When compression is based on the correlation of neighboring indices in an index table, the index values can be treated like neighboring pixel values: just as highly correlated adjacent pixels help image compression, the compression rate of an index table is higher when adjoining indices are close in value. To achieve this, the codebook is sorted so that neighboring indices become close in value. Since more and more applications require reversibility, it is necessary to retrieve the original index table after decompression so that the visual quality of the images remains at the VQ level. Previous schemes for recompressing the VQ index table perform poorly on texture images. To achieve a high compression rate for the index table, we propose a scheme that combines principal component analysis (PCA) [] and Huffman coding []. The real challenge in compressing texture images is finding the right balance between compression efficiency and preserved quality.
Specifically, it is about determining how much the VQ data can be reduced without noticeable quality loss. Most importantly, the goal is to ensure that any degradation in image quality is imperceptible to the human eye. The contributions of our proposed scheme are:
  • We propose a recompression scheme for a VQ index table of texture images, which achieves better compression effectiveness compared to other similar schemes.
  • Our proposed scheme enables lossless decompression after compression, allowing the restoration of the original VQ index table.
This paper is structured as follows: Section 2 describes the fundamental concepts of the methods related to our scheme; Section 3 details the flow of algorithms used in the proposed scheme; the experiments and results are analyzed in Section 4; and, at the end of the research, we conclude our thoughts in Section 5.

3. Proposed Scheme

In this section, we propose a novel recompression scheme based on Huffman coding of index changes between neighboring blocks. First, we obtain a well-ordered new codebook by applying the PCA algorithm to the pretrained VQ codebook. Due to the texture of the image, after sorting with the PCA algorithm, the codebook indices of neighboring blocks are often highly correlated with each other. Based on this characteristic, we calculate the differences or XOR values between neighboring blocks along specified paths, resulting in a processed index table. We observe that the processed index table contains some high-frequency symbols, allowing us to further compress it using Huffman coding.
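The paper does not spell out the PCA sort in code, so the following is a minimal Python sketch under the assumption that the codebook is an array with one codeword per row: each codeword is projected onto the first principal component and the codebook is reordered by that score, so that visually similar codewords receive nearby indices. The function name and toy codebook are illustrative only.

```python
import numpy as np

def sort_codebook_pca(codebook):
    """Sort codewords by their projection onto the first principal
    component so that similar codewords receive nearby indices.

    codebook : (K, d) array with one codeword per row.
    Returns (sorted_codebook, order), where order[i] is the old index
    of the codeword placed at new index i.
    """
    centered = codebook - codebook.mean(axis=0)
    # The first principal component is the top right-singular vector.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]
    order = np.argsort(scores)
    return codebook[order], order

# Toy 1-D "codewords": after sorting, indices follow codeword magnitude
# (the sign of a principal component is arbitrary, so the resulting
# order may be ascending or descending).
cb = np.array([[9.0], [1.0], [5.0], [3.0]])
sorted_cb, order = sort_codebook_pca(cb)
```

After this reordering, the index table is rewritten so that each block points at its codeword's new index, which is the table the encoder receives.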
Figure 4 shows the flow of our proposed scheme. In the preprocessor, we apply VQ compression to the original images, generating the VQ codebook and the VQ index table. We then sort the VQ codebook using PCA and obtain the sorted codebook and its updated VQ index table. The updated VQ index table is the input to the encoder. Through multi-type processing with different paths and methods, various types of processed index tables are obtained. Because numerous high-frequency symbols emerge after processing, the tables are compressed using adaptive Huffman coding to generate a compressed file for each VQ index table type. The type with the smallest file size, indicating the best compression result, is selected and recorded with an indicator. Upon receiving the compressed VQ index table file, the decoder first extracts the indicator and then performs the corresponding reverse process of lossless decompression to reconstruct the original VQ index table.
Figure 4. The flow of our proposed scheme.

3.1. Multi-Type Processing Phase

The core of our research is this multi-type processing, which aims to optimize the compression results. As shown in Table 1, there are eight processing types: four different serpentine or zigzag paths combined with either a difference calculation method (difference) or an XOR method (XOR). After a processed index table is obtained, it is compressed using adaptive Huffman coding. To achieve the best compression results, we compare the results of all processing types and select one, using a 3-bit indicator to denote the selected type. Note that a path in Table 1 is only a schematic representation of the scanning direction. Different block sizes for LBG training and different sizes of VQ-compressed images lead to variations in the size of the index table, which is not necessarily the 4 × 4 size illustrated under each path in Table 1.
Table 1. Processing types with corresponding paths and methods.
Firstly, we retain the index value at the first step of the path to ensure that the VQ index table can be reconstructed without any loss. Every other position is calculated from the index value at the previous position on the selected path. The processed index values are obtained using Equation (3): the index value at the starting position of the path retains its original value, while subsequent positions use either difference calculations or XOR operations, depending on the specified method. Here, PIT represents the processed index table, IT represents the original VQ index table, p_s represents the position at step s of the path, and M represents the processing method.
PIT(p_s) = IT(p_s), if p_s = 1;
PIT(p_s) = IT(p_s) − IT(p_(s−1)), if M is difference;
PIT(p_s) = IT(p_s) XOR IT(p_(s−1)), if M is XOR.  (3)
Figure 5 shows examples of two different types: (a) and (c) show the original indices of the VQ-compressed image for type 1 and type 2, respectively, while (b) and (d) show the corresponding processed index tables for type 1 and type 2 after difference calculations. We observe that the index value at the starting point, 220, remains unchanged, while the values at the other positions record changes relative to their respective previous positions. The results of the other types are similar to these examples. In other words, each VQ index table of an image corresponds to eight processed index tables, one per type.
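The per-path processing of Equation (3) can be sketched as follows. The serpentine path shape and the 2 × 2 toy index table are illustrative assumptions (the paper's tables are larger, and the exact path shapes are those of Table 1), but the difference step mirrors the Figure 5 example, where the starting value 220 is kept.

```python
def serpentine_path(rows, cols):
    """Row-wise serpentine scan: left-to-right on even rows and
    right-to-left on odd rows (one plausible path shape; the exact
    numbering of the eight types is not assumed here)."""
    path = []
    for r in range(rows):
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_order:
            path.append((r, c))
    return path

def process_index_table(table, path, method):
    """Equation (3): keep the first index on the path; replace every
    later index with its difference from (or XOR with) the index at
    the previous position on the path."""
    out = [row[:] for row in table]
    for s in range(1, len(path)):
        (r, c), (pr, pc) = path[s], path[s - 1]
        prev = table[pr][pc]
        if method == "difference":
            out[r][c] = table[r][c] - prev
        else:  # method == "XOR"
            out[r][c] = table[r][c] ^ prev
    return out

# Toy 2x2 index table; 220 at the start of the path is kept as-is.
it = [[220, 221], [223, 222]]
path = serpentine_path(2, 2)          # (0,0) -> (0,1) -> (1,1) -> (1,0)
pit = process_index_table(it, path, "difference")
```

Because neighboring indices of a PCA-sorted table are close in value, the processed table is dominated by small values such as 1, which is exactly what makes Huffman coding effective in the next phase.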
Figure 5. Examples of different processing paths. (a) The original indices of the VQ-compressed image for type 1. (b) The processed index table for type 1 using the difference method. (c) The original indices of the VQ-compressed image for type 2. (d) The processed index table for type 2 using the difference method.

3.2. Adaptive Huffman Coding Phase

Since the changes between adjacent indices in texture images are correlated, some changed values occur with high frequency. We can exploit this characteristic by compressing with Huffman coding. Finally, we choose the type that produces the shortest compression code and record it with a 3-bit indicator.
Because different images have different texture features, we construct an independent Huffman coding table for each image rather than using a shared one; that is, our scheme adaptively constructs Huffman coding tailored to each texture image. Algorithm 1 describes the adaptive Huffman coding. First, the processed index table PIT is input to the encoder. Then, the changed values in PIT are counted, i.e., the unique symbols and their counts are obtained to constitute the frequency table FT. Next, the frequencies are converted into probabilities and the Huffman tree is constructed; finally, the Huffman code HC representing the compressed PIT is generated from the constructed Huffman tree. In summary, we obtain the Huffman code HC and the frequency table FT at the end of the process. We store the frequency table instead of the Huffman tree because the frequency table occupies less storage space than the Huffman tree, which yields a better compression effect.
Algorithm 1. Adaptive Huffman Coding
Input: Processed index table PIT.
Output: Huffman codes HC, frequency table FT.
Step 1: Calculate frequencies:
    symbols = unique(PIT)
    //Get the unique symbols in the processed index table.
    counts = histcounts(PIT, [symbols, max(symbols) + 1])
    //Count the frequency of each symbol.
    FT = [symbols', counts']
    //Combine symbols and counts into a frequency table.
Step 2: Build the Huffman tree:
    prob = counts / sum(counts)
    //Normalize frequencies to probabilities.
    huffmanTree = huffmandict(symbols, prob)
    //Construct the Huffman tree from the symbols and their probabilities.
Step 3: Generate codes:
    HC = huffmanenco(PIT, huffmanTree)
    //Encode the symbols in the PIT using the Huffman tree.
Step 4: Output Huffman codes HC and frequency table FT.
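Algorithm 1 is written in MATLAB-style pseudocode around huffmandict and huffmanenco. As a language-neutral illustration, the same idea can be sketched in Python with the standard heapq module; all names below are illustrative, not from the paper.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Count symbol frequencies, then build a Huffman code table by
    repeatedly merging the two least-frequent subtrees."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}, freq
    # Heap entries: (subtree frequency, tie-breaker, {symbol: code so far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2], freq

# A flattened processed index table: the repeated change value 1
# dominates, so it receives the shortest code.
pit_stream = [220, 1, 1, 1, 1, -1, 1, 2]
codes, freq = huffman_codes(pit_stream)
encoded = "".join(codes[s] for s in pit_stream)
```

As in the paper's scheme, only the frequency table (freq) and the encoded bitstream need to be stored; the decoder can rebuild an identical code table from the frequencies.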
We perform adaptive Huffman coding on the processed index tables of all eight types, obtaining eight sets of Huffman codes HC and frequency tables FT. To obtain the best compression effect, we then choose the type whose HC and FT require the smallest storage space and represent it with a 3-bit indicator. For example, if type 2 has the best compression effect, "001" is recorded as the indicator. Finally, the compressed file of the VQ index table consists of the indicator, the frequency table, and the Huffman code.
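The type-selection step described above can be sketched as follows. The compressed sizes are hypothetical placeholders, not measured values; the 0-based indexing (type 2 → "001") follows the example in the text.

```python
# Hypothetical compressed sizes in bits for the eight processing types
# (frequency table + Huffman code); in the real encoder these come
# from the adaptive Huffman coding stage.
compressed_bits = [9120, 8704, 9300, 9050, 9980, 9400, 9210, 9605]

# Pick the type with the smallest compressed file (0-based, so the
# second entry corresponds to "type 2") and encode it in 3 bits.
best_type = min(range(len(compressed_bits)), key=compressed_bits.__getitem__)
indicator = format(best_type, "03b")
```

Three bits suffice because there are exactly eight candidate types; the indicator is prepended to the compressed file so the decoder knows which inverse process to run.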

3.3. Lossless Decompression Phase

In the lossless decompression phase, after receiving the compressed file of the VQ index table, the decoder first identifies the processing type from the indicator. It then uses the frequency table to build a Huffman tree in the same way as the encoder and decodes the Huffman code to retrieve the processed index table. Since the processing type is known, the VQ index table can be reconstructed using the corresponding path and method. Equation (4) gives the reconstruction formula. At the starting step of the path, the index value equals the processed index value. For every other position, if the method is difference, the processed index value at the current step is added to the already-reconstructed index value at the previous step; if the method is XOR, the processed index value at the current step is XORed with the already-reconstructed index value at the previous step. After all steps of the path are processed, the entire VQ index table is reconstructed.
IT(p_s) = PIT(p_s), if p_s = 1;
IT(p_s) = PIT(p_s) + IT(p_(s−1)), if M is difference;
IT(p_s) = PIT(p_s) XOR IT(p_(s−1)), if M is XOR.  (4)
Figure 6 is a specific example. First, assume that the indicator extracted by the decoder is "001", which corresponds to type 2. Figure 6a shows the processed index table with the type 2 path. Because the method of type 2 is difference, the reconstructed original VQ index table obtained through Equation (4) is shown in Figure 6b.
Figure 6. An example of reconstructing a VQ index table: (a) the processed index table of type 2; (b) the reconstructed VQ index table.
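A round trip through Equations (3) and (4) shows why the decompression is lossless; the 2 × 2 table and serpentine path below are illustrative assumptions, not the paper's actual path shapes.

```python
def serpentine_path(rows, cols):
    # Left-to-right on even rows, right-to-left on odd rows.
    return [(r, c) for r in range(rows)
            for c in (range(cols) if r % 2 == 0
                      else range(cols - 1, -1, -1))]

def process(table, path, method):
    # Equation (3): forward pass producing the processed index table.
    out = [row[:] for row in table]
    for s in range(1, len(path)):
        (r, c), (pr, pc) = path[s], path[s - 1]
        prev = table[pr][pc]
        out[r][c] = (table[r][c] - prev if method == "difference"
                     else table[r][c] ^ prev)
    return out

def reconstruct(pit, path, method):
    # Equation (4): inverse pass; each index is recovered from the
    # already-reconstructed previous index on the path.
    it = [row[:] for row in pit]
    for s in range(1, len(path)):
        (r, c), (pr, pc) = path[s], path[s - 1]
        prev = it[pr][pc]
        it[r][c] = (pit[r][c] + prev if method == "difference"
                    else pit[r][c] ^ prev)
    return it

it = [[220, 221], [223, 222]]
path = serpentine_path(2, 2)
recovered = {m: reconstruct(process(it, path, m), path, m)
             for m in ("difference", "XOR")}
```

For both methods, the recovered table is identical to the original, since subtraction is undone by addition and XOR is its own inverse.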

4. Experimental Results

To verify the performance of our proposed scheme on texture images, we compare it with other algorithms that also compress the VQ index table, namely the search order coding (SOC) algorithm [], the SOC-based state codebook (SOC+SC) algorithm [], and the SOC-based side match (SOC+SM) algorithm []. Our experimental environment consists of a Windows 11 laptop with a 3.20 GHz AMD Ryzen 7 CPU and 16 GB RAM. The software used is MATLAB R2024a.
We use the bit rate (BR), i.e., bits per pixel, as the metric for comparing compression performance with other schemes. Equation (5) shows how the bit rate is calculated, where total bits represents the size of the compressed file of a VQ index table and M × N represents the dimensions of the VQ image.
BR = total bits / (M × N)  (5)
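As a worked example of Equation (5): for the 512 × 512 Wood Grain image with the reported bit rate of 0.2188 bpp, the compressed index-table file would contain roughly 0.21875 × 512 × 512 = 57,344 bits. The sketch below simply evaluates the formula on these numbers.

```python
def bit_rate(total_bits, m, n):
    """Equation (5): bits per pixel of an M x N VQ image."""
    return total_bits / (m * n)

# 57,344 bits over 512 x 512 pixels gives 0.21875 bpp,
# which rounds to the 0.2188 reported in Section 4.
br = bit_rate(57344, 512, 512)
```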
To explore potential limitations of the proposed method, we designed experiments to investigate the impact of texture images on compression efficiency. Texture affects VQ compression and, consequently, the efficiency of recompression. We therefore conducted experiments on different images and with varying codebook sizes to obtain comprehensive results. Figure 7 shows six 512 × 512 monochrome texture images from the USC-SIPI image database [], which we use as test images. We divided each image into 4 × 4 blocks and then used the LBG algorithm [] to obtain a VQ codebook and its index table. Table 2, Table 3, Table 4 and Table 5 show the bit rate and improvement rate of our proposed scheme compared to other index table recompression schemes on the monochrome texture images for VQ codebook sizes of 64, 128, 256, and 512, respectively. The three schemes [,,] used in the comparisons are all based on the SOC algorithm. In this experiment, we set the number of bits to n = 2 for all three schemes and the matching range to r = 4 for the scheme proposed by Lin et al. []. The bit rates of our proposed scheme are lower than those of the other schemes for all VQ images across all codebook sizes, demonstrating better compression performance for texture images. Notably, with 64 codewords in the VQ codebook, the bit rate of the Wood Grain image is as low as 0.2188, an improvement of more than 22% over the other three schemes.
Figure 7. Monochrome texture images: (a) “Herringbone Weave”, (b) “Woolen Cloth”, (c) “Pressed Calf Leather”, (d) “Beach Sand”, (e) “Water”, and (f) “Wood Grain”.
Table 2. Comparison of bit rate and improvement rate with other schemes in monochrome texture images using a VQ codebook containing 64 codewords.
Table 3. Comparison of bit rate and improvement rate with other schemes in monochrome texture images using a VQ codebook containing 128 codewords.
Table 4. Comparison of bit rate and improvement rate with other schemes in monochrome texture images using a VQ codebook containing 256 codewords.
Table 5. Comparison of bit rate and improvement rate with other schemes in monochrome texture images using a VQ codebook containing 512 codewords.
We found that earth height map images also exhibit texture features. Therefore, we selected six height map images from the Earth Terrain, Height, and Segmentation Map Images dataset [] as additional test images, shown in Figure 8. Table 6, Table 7, Table 8 and Table 9 display the bit rate and improvement rate of our proposed scheme compared to other schemes on earth height map images with VQ codebooks of sizes 64, 128, 256, and 512, respectively. Our scheme demonstrates better compression effectiveness across all codebook sizes. In particular, as shown in Table 6, the bit rates are only 0.1556 for Height map 1 and 0.1503 for Height map 4, an improvement of more than 30% over the other schemes. High improvement rates are also observed for the other codebook sizes. This demonstrates that our scheme is particularly effective on texture images, including earth height map images.
Figure 8. Earth height map images: (a) “Height map 1”, (b) “Height map 2”, (c) “Height map 3”, (d) “Height map 4”, (e) “Height map 5”, and (f) “Height map 6”.
Table 6. Comparison of bit rate and improvement rate with other schemes in Earth Height Map images using a VQ codebook containing 64 codewords.
Table 7. Comparison of bit rate and improvement rate with other schemes in Earth Height Map images using a VQ codebook containing 128 codewords.
Table 8. Comparison of bit rate and improvement rate with other schemes in Earth Height Map images using a VQ codebook containing 256 codewords.
Table 9. Comparison of bit rate and improvement rate with other schemes in Earth Height Map images using a VQ codebook containing 512 codewords.
The reason our scheme outperforms traditional SOC-based schemes is that the blocks of texture images are very similar, leading to many similar VQ codewords; without a sorted codebook, visually similar blocks can still have very different adjacent index values. Our proposed scheme does not rely solely on index similarity; it also counts on adaptive coding to exploit the high frequencies of the changed symbols. Figure 9 shows histograms of the processed index frequencies for the different processing types of "Herringbone Weave" using a VQ codebook containing 64 codewords. The types using the difference method have more symbols concentrated in the high-frequency area near 0, with more pronounced frequency distribution trends. In contrast, the types using the XOR method have fewer symbols, and their high-frequency areas are spread around smaller values. Due to these differences in symbol counts and frequencies, each method has its own advantages when compressing various types of images. In summary, such frequency distributions are conducive to further compression using adaptive Huffman coding.
Figure 9. The processed index frequencies for different types of "Herringbone Weave" using a VQ codebook containing 64 codewords. (a) type 1. (b) type 2. (c) type 3. (d) type 4. (e) type 5. (f) type 6. (g) type 7. (h) type 8.
To evaluate the performance of our proposed scheme on different images beyond texture images, we used six common images shown in Figure 10. Figure 11 presents the results of our scheme and other schemes when using codebook sizes of 64, 128, 256, and 512. The experimental results show that, with a codebook of 64 codewords, the compression rates of our method are superior to other methods. For other codebook sizes, although our scheme does not achieve the best compression performance, it is only slightly behind. This demonstrates that our method performs well on various images, not just texture images.
Figure 10. Common images: (a) “Airplane”, (b) “Baboon”, (c) “Boat”, (d) “Goldhill”, (e) “Lake”, and (f) “Peppers”.
Figure 11. Comparison of bit rates with other schemes in common images.
Table 10 presents p-values comparing our proposed scheme with other methods. The p-value is a statistical measure that helps determine the significance of results; a smaller p-value indicates a more significant result. It can be observed that all p-values for the other methods are below 0.05 when compared to our method, demonstrating the statistical significance of our results. This finding provides strong support for our research conclusions.
Table 10. Comparison of p-values with other schemes.
Table 11 and Table 12, respectively, present the time complexity and execution time of the proposed scheme. The time complexity is O(n log n) for the encoder and O(n) for the decoder. Across the codebook sizes trained on the various test images, the fastest execution time occurs at a codebook size of 64, approximately 0.1 s, which demonstrates one of the advantages of our proposed scheme. We also observe that a larger codebook size results in a longer execution time, because more symbols must be encoded in the Huffman coding stage; conversely, a shorter execution time usually indicates fewer symbols.
Table 11. Time complexity of our proposed scheme.
Table 12. Execution time of our proposed scheme.

5. Conclusions

This paper presents an effective recompression scheme for the VQ index tables of texture images. The encoder performs multi-type processing on a VQ index table; the processing types are formed by different scanning paths and calculation methods, and each type is then evaluated by applying adaptive Huffman coding. After evaluation, we select the type with the best compression effect and record it with a 3-bit indicator. The decoder only needs to extract the indicator to determine the processing type and then performs the inverse operation of the encoder to reconstruct the original VQ index table. Compared with other schemes designed for compressing VQ index tables, our scheme demonstrates better compression performance on both datasets. Significant improvements across different codebook sizes and various VQ-compressed images further reflect the superior compression performance of our scheme. Our research positively impacts image compression technology in fields such as telemedicine, remote sensing, and multimedia storage and transmission. In the future, we will test different datasets and explore more effective compression algorithms.

Author Contributions

Conceptualization, Y.L., J.-C.L. and C.-C.C. (Chin-Chen Chang); methodology, Y.L., J.-C.L. and C.-C.C. (Chin-Chen Chang); software, Y.L.; validation, Y.L.; writing—original draft preparation, Y.L. and J.-C.L.; writing—review and editing, Y.L., J.-C.L., C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Palattella, M.R.; Dohler, M.; Grieco, A.; Rizzo, G.; Torsner, J.; Engel, T.; Ladid, L. Internet of things in the 5G era: Enablers, architecture, and business models. IEEE J. Sel. Areas Commun. 2016, 34, 510–527. [Google Scholar] [CrossRef]
  2. Akpakwu, G.A.; Silva, B.J.; Hancke, G.P.; Abu-Mahfouz, A.M. A survey on 5G networks for the Internet of Things: Communication technologies and challenges. IEEE Access 2017, 6, 3619–3647. [Google Scholar] [CrossRef]
  3. Gray, R. Vector quantization. IEEE Assp. Mag. 1984, 1, 4–29. [Google Scholar] [CrossRef]
  4. Nasrabadi, N.M.; King, R.A. Image coding using vector quantization: A review. IEEE Trans. Commun. 1988, 36, 957–971. [Google Scholar] [CrossRef]
  5. Linde, Y.; Buzo, A.; Gray, R. An algorithm for vector quantizer design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  6. Yeh, C.Y.; Huang, H.H. An Upgraded Version of the Binary Search Space-Structured VQ Search Algorithm for AMR-WB Codec. Symmetry 2019, 11, 283. [Google Scholar] [CrossRef]
  7. Peng, H.; Yang, S.; Liu, Q.; Peng, Q.; Li, Q. Dynamic Fuzzy Adjustment Algorithm for Web Information Acquisition and Data Transmission. Symmetry 2020, 12, 535. [Google Scholar] [CrossRef]
  8. Gao, K.; Chang, C.C.; Lin, C.C. Cryptanalysis of Reversible Data Hiding in Encrypted Images Based on the VQ Attack. Symmetry 2023, 15, 189. [Google Scholar] [CrossRef]
  9. Dunham, M.; Gray, R. An algorithm for the design of labeled-transition finite-state vector quantizers. IEEE Trans. Commun. 1985, 33, 83–89. [Google Scholar] [CrossRef]
  10. Kim, T. Side match and overlap match vector quantizers for images. IEEE Trans. Image Process. 1992, 1, 170–185. [Google Scholar] [CrossRef] [PubMed]
  11. Gray, R.; Linde, Y. Vector quantizers and predictive quantizers for Gauss-Markov sources. IEEE Trans. Commun. 1982, 30, 381–389. [Google Scholar] [CrossRef]
  12. Hang, H.M.; Woods, J. Predictive vector quantization of images. IEEE Trans. Commun. 1985, 33, 1208–1219. [Google Scholar] [CrossRef]
  13. Lo, K.T.; Feng, J. Predictive mean search algorithms for fast VQ encoding of images. IEEE Trans. Consum. Electron. 1995, 41, 327–331. [Google Scholar]
  14. Hsieh, C.H.; Tsai, J.C. Lossless compression of VQ index with search-order coding. IEEE Trans. Image Process. 1996, 5, 1579–1582. [Google Scholar] [CrossRef] [PubMed]
  15. Shanbehzadeh, J.; Ogunbona, P.O. Index-compressed vector quantisation based on index mapping. IEE Proc.-Vis. Image Signal Process. 1997, 144, 31–38. [Google Scholar] [CrossRef]
  16. Chang, C.C.; Chen, G.M.; Lin, C.C. Lossless Compression Schemes of Vector Quantization Indices Using State Codebook. J. Softw. 2009, 4, 274–282. [Google Scholar] [CrossRef]
  17. Lin, Y.; Liu, J.C.; Chang, C.C.; Chang, C.C. An Innovative Recompression Scheme for VQ Index Tables. Future Internet 2024, 16, 297. [Google Scholar] [CrossRef]
  18. Lee, R.C.T.; Chin, Y.H.; Chang, S.C. Application of principal component analysis to multikey searching. IEEE Trans. Softw. Eng. 1976, SE-2, 185–193. [Google Scholar] [CrossRef]
  19. Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
  20. Weber, A.G. The USC-SIPI Image Database: Version 5. 2006. Available online: http://sipi.usc.edu/database (accessed on 28 June 2024).
  21. Pappas, T. Earth Terrain, Height, and Segmentation Map Images. 2020. Available online: https://www.kaggle.com/datasets/tpapp157/earth-terrain-height-and-segmentationmap-images (accessed on 28 June 2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
