Search Results (98)

Search Parameters:
Keywords = image lossy compression

29 pages, 3108 KiB  
Article
Soft Classification in a Composite Source Model
by Yuefeng Cao, Jiakun Liu and Wenyi Zhang
Entropy 2025, 27(6), 620; https://doi.org/10.3390/e27060620 - 11 Jun 2025
Viewed by 352
Abstract
A composite source model consists of an intrinsic state and an extrinsic observation. The fundamental performance limit of reproducing the intrinsic state is characterized by the indirect rate–distortion function. In a remote classification application, a source encoder encodes the extrinsic observation (e.g., an image) into bits, and a source decoder plays the role of a classifier that reproduces the intrinsic state (e.g., the image's label). In this work, we characterize the general structure of the optimal transition probability distribution that achieves the indirect rate–distortion function. This optimal solution can be interpreted as a "soft classifier", which generalizes the conventionally adopted "classify-then-compress" scheme. We then apply soft classification to aid the lossy compression of the extrinsic observation of a composite source. This leads to a coding scheme that exploits the soft classifier to guide reproduction, outperforming existing coding schemes that use no classification or hard classification. Full article
(This article belongs to the Special Issue Semantic Information Theory)
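
For readers unfamiliar with the term, the indirect (remote) rate–distortion function referenced above is conventionally written as follows. This is the standard textbook formulation for a composite source with intrinsic state S and extrinsic observation X; the notation is generic and not taken from the paper itself.

```latex
% Indirect rate-distortion function of a composite source
% (S: intrinsic state, X: extrinsic observation, \hat{S}: reproduction).
% Standard formulation; notation is an assumption, not the authors'.
R_{S|X}(D) \;=\; \min_{P_{\hat{S}\mid X}\,:\;\mathbb{E}\left[d(S,\hat{S})\right]\le D} \; I(X;\hat{S})
```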

17 pages, 5607 KiB  
Article
Tampering Detection in Absolute Moment Block Truncation Coding (AMBTC) Compressed Code Using Matrix Coding
by Yijie Lin, Ching-Chun Chang and Chin-Chen Chang
Electronics 2025, 14(9), 1831; https://doi.org/10.3390/electronics14091831 - 29 Apr 2025
Viewed by 302
Abstract
With the increasing use of digital image compression technology, ensuring data integrity and security within the compression domain has become a crucial area of research. Absolute moment block truncation coding (AMBTC), an efficient lossy compression algorithm, is widely used for low-bitrate image storage and transmission. However, existing studies have primarily focused on tamper detection for AMBTC compressed images, often overlooking the integrity of the AMBTC compressed code itself. To address this gap, this paper introduces a novel anti-tampering scheme specifically designed for AMBTC compressed code. The proposed scheme utilizes shuffle pairing to establish a one-to-one relationship between image blocks. The hash value, calculated as verification data from the original data of each block, is then embedded into the bitmap of its corresponding block using the matrix coding algorithm. Additionally, a tampering localization mechanism is incorporated to enhance the security of the compressed code without introducing additional redundancy. The experimental results demonstrate that the proposed scheme effectively detects tampering with high accuracy, providing protection for AMBTC compressed code. Full article
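
For context, here is a minimal sketch of how one block is encoded and decoded under plain AMBTC, the codec whose compressed code the scheme above protects. The block size and variable names are illustrative, and the hash-based tamper-detection embedding itself is not shown.

```python
import numpy as np

def ambtc_encode_block(block: np.ndarray):
    """Encode one grayscale block with standard AMBTC.

    Returns (low, high, bitmap): the two reconstruction levels and the
    binary bitmap that AMBTC stores for the block."""
    mean = block.mean()
    bitmap = block >= mean                      # 1 where the pixel is at or above the block mean
    high = block[bitmap].mean()                 # level used for '1' pixels
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return low, high, bitmap

def ambtc_decode_block(low, high, bitmap):
    """Rebuild the block from its AMBTC triple."""
    return np.where(bitmap, high, low)

# Example on a random 4x4 block
block = np.random.randint(0, 256, (4, 4)).astype(float)
low, high, bm = ambtc_encode_block(block)
print(ambtc_decode_block(low, high, bm))
```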

32 pages, 15292 KiB  
Article
Compression Ratio as Picture-Wise Just Noticeable Difference Predictor
by Nenad Stojanović, Boban Bondžulić, Vladimir Lukin, Dimitrije Bujaković, Sergii Kryvenko and Oleg Ieremeiev
Mathematics 2025, 13(9), 1445; https://doi.org/10.3390/math13091445 - 28 Apr 2025
Viewed by 484
Abstract
This paper presents results on using the compression ratio (CR) to predict the boundary between visually lossless and visually lossy compression, which is of particular importance in perceptual image compression. The prediction is carried out through objective quality (peak signal-to-noise ratio, PSNR) and image representation in bits per pixel (bpp). In this analysis, the results of subjective tests from four publicly available databases are used as ground truth for comparison with the results obtained using the compression ratio as a predictor. Through a wide analysis of color and grayscale infrared images compressed with JPEG and Better Portable Graphics (BPG), values are proposed for the parameters that control these two types of compression and at which CR is calculated. It is shown that PSNR and bpp predictions can be significantly improved by using the CR calculated at these proposed values, regardless of the type of compression and whether color or infrared images are used. In this paper, CR is used for the first time to predict the boundary between visually lossless and visually lossy compression for images from the infrared part of the electromagnetic spectrum, as well as for BPG-compressed content. This paper indicates the great potential of CR, so that in future research it can be used in joint prediction based on several features or through the CR curve obtained for different values of the parameters controlling the compression. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
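
The quantities the prediction works with (CR, bpp, PSNR) can be computed for a JPEG-compressed image as in the following sketch. This is a generic illustration using Pillow, not the authors' predictor, and the file name in the usage comment is hypothetical.

```python
import io

import numpy as np
from PIL import Image

def jpeg_stats(image: Image.Image, quality: int):
    """Compress a grayscale image at a given JPEG quality factor and report
    compression ratio (CR), bits per pixel (bpp), and PSNR."""
    gray = image.convert("L")
    buf = io.BytesIO()
    gray.save(buf, format="JPEG", quality=quality)
    n_bytes = buf.getbuffer().nbytes

    buf.seek(0)
    orig = np.asarray(gray, dtype=np.float64)
    rec = np.asarray(Image.open(buf).convert("L"), dtype=np.float64)

    cr = orig.size / n_bytes             # uncompressed 8-bit bytes / compressed bytes
    bpp = 8.0 * n_bytes / orig.size      # bits spent per pixel
    mse = np.mean((orig - rec) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return cr, bpp, psnr

# Usage (hypothetical file name):
# cr, bpp, psnr = jpeg_stats(Image.open("scene.png"), quality=85)
```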

24 pages, 5039 KiB  
Article
EPIIC: Edge-Preserving Method Increasing Nuclei Clarity for Compression Artifacts Removal in Whole-Slide Histopathological Images
by Julia Merta and Michal Marczyk
Appl. Sci. 2025, 15(8), 4450; https://doi.org/10.3390/app15084450 - 17 Apr 2025
Viewed by 406
Abstract
Hematoxylin and eosin (HE) staining is widely used in medical diagnosis. Stained slides provide crucial information to diagnose or monitor the progress of many diseases. Due to the large size of scanned images of whole tissues, the JPEG algorithm is commonly used for compression. This lossy compression method introduces artifacts, visible as 8 × 8 pixel blocks, and reduces overall quality, which may negatively impact further analysis. We propose a fully unsupervised Edge-Preserving method Increasing nucleI Clarity (EPIIC) for removing compression artifacts from whole-slide HE-stained images. The method is introduced in two versions, EPIIC and EPIIC Sobel, composed of stain deconvolution, gradient-based edge map estimation, and weighted smoothing. The performance of the method was evaluated using two image quality measures, PSNR and SSIM, and various datasets, including BreCaHAD with HE-stained histopathological images and five other natural image datasets, and compared with other edge-preserving filtering methods and a deep learning-based solution. The impact of compression artifact removal on the nuclei segmentation task was tested using Hover-Net and STARDIST models. The proposed methods led to improved image quality in histopathological and natural images and better segmentation of cell nuclei compared to other edge-preserving filtering methods. The biggest improvement was observed for images compressed with a low compression quality factor. Compared to the method using neural networks, the developed algorithms have slightly worse performance in image enhancement, but they are superior in nuclei segmentation. EPIIC and EPIIC Sobel can efficiently remove compression artifacts, positively impacting the segmentation results of cell nuclei and overall image quality. Full article
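
To make the "gradient-based edge map estimation and weighted smoothing" step concrete, here is a rough, generic sketch of edge-weighted smoothing. It is not the published EPIIC/EPIIC Sobel algorithm (no stain deconvolution is shown), and the blending weights are arbitrary.

```python
import numpy as np
from scipy import ndimage

def edge_preserving_smooth(img: np.ndarray, sigma: float = 1.0):
    """Blend a Gaussian-smoothed image with the original, weighted by a
    Sobel edge map, so flat (blocky) regions are smoothed while strong
    edges such as nuclei boundaries are largely preserved."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    edges = np.hypot(gx, gy)
    w = edges / (edges.max() + 1e-12)        # ~0 in flat regions, ~1 on strong edges
    smoothed = ndimage.gaussian_filter(img, sigma)
    return w * img + (1.0 - w) * smoothed
```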

35 pages, 44299 KiB  
Article
Lossy Infrared Image Compression Based on Wavelet Coefficient Probability Modeling and Run-Length-Enhanced Huffman Coding
by Yaohua Zhu, Ya Liu, Yanghang Zhu, Mingsheng Huang, Jingyu Jiang and Yong Zhang
Sensors 2025, 25(8), 2491; https://doi.org/10.3390/s25082491 - 15 Apr 2025
Viewed by 365
Abstract
Infrared line-scanning images have high redundancy and large file sizes. In JPEG2000 compression, the MQ arithmetic encoder’s complexity slows down processing. Huffman coding can achieve O(1) complexity based on a code table, but its integer-bit encoding mechanism and its disregard of the continuity of the symbol distribution result in suboptimal compression performance. In particular, when encoding sparse quantized wavelet coefficients that contain a large number of consecutive zeros, the inaccuracy of the one-bit shortest code accumulates, reducing compression efficiency. To address this, the paper proposes Huf-RLC, a Huffman-based method enhanced with Run-Length Coding. By leveraging zero-run continuity, Huf-RLC optimizes the shortest-code encoding, reducing the average code length to below one bit for sparse distributions. Additionally, the paper proposes a wavelet coefficient probability model to avoid the complexity of calculating statistics for constructing Huffman code tables for different wavelet subbands. Furthermore, Differential Pulse Code Modulation (DPCM) is introduced to address the remaining spatial redundancy in the low-frequency wavelet subband. The experimental results indicate that the proposed method outperforms JPEG in terms of PSNR and SSIM, while maintaining minimal performance loss compared to JPEG2000. Particularly at low bitrates, the proposed method shows only a small gap relative to JPEG2000, while JPEG suffers from significant blocking artifacts. Additionally, the proposed method achieves compression speeds 3.155 times faster than JPEG2000 and 2.049 times faster than JPEG. Full article
(This article belongs to the Section Sensing and Imaging)
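
The core idea of pairing run-length coding of zero runs with Huffman-style entropy coding can be sketched as follows. The actual Huf-RLC code tables and the paper's wavelet-coefficient probability model are not reproduced here; this only illustrates why long zero runs become cheap to encode.

```python
from collections import Counter
from itertools import groupby

def zero_run_symbols(coeffs):
    """Map quantized wavelet coefficients to symbols: nonzero values pass
    through, while each run of zeros becomes a single ('Z', run_length)
    symbol instead of many separate shortest codes."""
    symbols = []
    for is_zero, group in groupby(coeffs, key=lambda c: c == 0):
        group = list(group)
        if is_zero:
            symbols.append(("Z", len(group)))
        else:
            symbols.extend(group)
    return symbols

coeffs = [5, 0, 0, 0, 0, 0, -2, 0, 0, 1]
syms = zero_run_symbols(coeffs)
print(syms)            # [5, ('Z', 5), -2, ('Z', 2), 1]
print(Counter(syms))   # symbol frequencies a Huffman code table would be built on
```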

21 pages, 1127 KiB  
Article
Efficient Compression of Red Blood Cell Image Dataset Using Joint Deep Learning-Based Pattern Classification and Data Compression
by Zerin Nusrat, Md Firoz Mahmud and W. David Pan
Electronics 2025, 14(8), 1556; https://doi.org/10.3390/electronics14081556 - 11 Apr 2025
Viewed by 454
Abstract
Millions of people across the globe are affected by the life-threatening disease of malaria. To achieve remote screening and diagnosis of the disease, the rapid transmission of large microscopic images is necessary, thereby demanding efficient data compression techniques. In this paper, we argue that well-classified images may lead to higher overall compression of the images in a dataset. To this end, we investigated a novel approach of joint pattern classification and compression of microscopic red blood cell images. Specifically, we used deep learning models, including a vision transformer and convolutional autoencoders, to classify red blood cell images into normal and malaria-infected patterns before compressing the images classified into the different patterns separately. We evaluated the impact of varying classification accuracy on overall image compression efficiency. The results highlight the importance of accurate classification of images in improving overall compression performance. We demonstrated that the proposed deep learning-based joint classification/compression method offers superior performance compared with traditional lossy compression approaches such as JPEG and JPEG 2000. Our study provides useful insights into how deep learning-based pattern classification could benefit data compression, which is advantageous in telemedicine, where large reductions in image size and high decoded image quality are desired. Full article
(This article belongs to the Special Issue Deep Learning-Based Image Restoration and Object Identification)
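
A minimal sketch of the joint classification/compression idea: route each image to a codec associated with its predicted class and compress the groups separately. Here `classify_fn` and `codecs` are placeholders for the paper's deep learning classifier and per-class compressors, which are not reproduced.

```python
def joint_classify_compress(images, classify_fn, codecs):
    """Group images by predicted class ('normal' vs. 'infected' in the study
    above), then compress each group with the codec assigned to that class.

    classify_fn(image) -> label and codecs[label](image) -> bytes are
    placeholders for the learned classifier and per-class compressors."""
    grouped = {}
    for image in images:
        grouped.setdefault(classify_fn(image), []).append(image)
    return {label: [codecs[label](image) for image in group]
            for label, group in grouped.items()}
```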

30 pages, 17575 KiB  
Article
Generative Diffusion Models for Compressed Sensing of Satellite LiDAR Data: Evaluating Image Quality Metrics in Forest Landscape Reconstruction
by Andres Ramirez-Jaime, Gonzalo R. Arce, Nestor Porras-Diaz, Oleg Ieremeiev, Andrii Rubel, Vladimir Lukin, Mateusz Kopytek, Piotr Lech, Jarosław Fastowicz and Krzysztof Okarma
Remote Sens. 2025, 17(7), 1215; https://doi.org/10.3390/rs17071215 - 29 Mar 2025
Viewed by 842
Abstract
Spaceborne LiDAR systems are crucial for Earth observation but face hardware constraints, thus limiting resolution and data processing. We propose integrating compressed sensing and diffusion generative models to reconstruct high-resolution satellite LiDAR data within the Hyperheight Data Cube (HHDC) framework. Using a randomized illumination pattern in the imaging model, we achieve efficient sampling and compression, reducing the onboard computational load and optimizing data transmission. Diffusion models then reconstruct detailed HHDCs from sparse samples on Earth. To ensure reliability despite lossy compression, we analyze distortion metrics for derived products like Digital Terrain and Canopy Height Models and evaluate the 3D reconstruction accuracy in waveform space. We identify image quality assessment metrics—ADD_GSIM, DSS, HaarPSI, PSIM, SSIM4, CVSSI, MCSD, and MDSI—that strongly correlate with subjective quality in reconstructed forest landscapes. This work advances high-resolution Earth observation by combining efficient data handling with insights into LiDAR imaging fidelity. Full article
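
The metric-selection step described above (finding objective quality metrics whose scores track subjective quality) can be illustrated with a simple Spearman-correlation ranking. The metric names come from the abstract, but the scores below are made-up placeholder numbers.

```python
from scipy.stats import spearmanr

def rank_metrics(metric_scores, subjective_mos):
    """Rank objective quality metrics by the absolute Spearman correlation
    between their scores and subjective mean opinion scores (MOS)."""
    rho = {name: abs(spearmanr(scores, subjective_mos).correlation)
           for name, scores in metric_scores.items()}
    return sorted(rho.items(), key=lambda kv: kv[1], reverse=True)

# Placeholder numbers for a handful of reconstructed forest landscapes
mos = [4.1, 3.2, 2.0, 4.5, 3.8]
scores = {"SSIM4": [0.97, 0.90, 0.78, 0.99, 0.95],  # similarity: higher = better
          "MDSI":  [0.10, 0.22, 0.41, 0.06, 0.14]}  # distortion: lower = better
print(rank_metrics(scores, mos))
```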

25 pages, 6330 KiB  
Article
Post-Filtering of Noisy Images Compressed by HEIF
by Sergii Kryvenko, Volodymyr Rebrov, Vladimir Lukin, Vladimir Golovko, Anatoliy Sachenko, Andrii Shelestov and Benoit Vozel
Appl. Sci. 2025, 15(6), 2939; https://doi.org/10.3390/app15062939 - 8 Mar 2025
Viewed by 770
Abstract
Modern imaging systems produce a great volume of image data. In many practical situations, these data must be compressed for faster transfer or more efficient storage. If the images are noisy, lossless compression is almost useless, and lossy compression is characterized by a specific noise-filtering effect that depends on the image, the noise, and the coder properties. Here, we considered a modern HEIF coder applied to grayscale (component) images of different complexity corrupted by additive white Gaussian noise. It has recently been shown that an optimal operation point (OOP) might exist in this case. Note that the OOP is the value of the quality factor at which the compressed image quality (according to the quality metric used) is closest to that of the corresponding noise-free image. The lossy compression of noisy images leads to both noise reduction and distortions introduced into the information component; thus, a compromise should be found between the compressed image quality and the compression ratio attained. The OOP is one possible compromise, if it exists, for a given noisy image. However, it has also recently been demonstrated that the compressed image quality can be significantly improved if post-filtering is applied, under the condition that the quality factor is slightly larger than the one corresponding to the OOP. Therefore, we considered the efficiency of post-filtering with a block-matching 3-dimensional (BM3D) filter. It was shown that the positive effect of such post-filtering could reach a few dB in terms of the PSNR and PSNR-HVS-M metrics. The largest benefits took place for simple-structure images and a high intensity of noise. It was also demonstrated that the filter parameters have to be adapted to the properties of the residual noise, which becomes more non-Gaussian as the compression ratio increases. Practical recommendations on the use of compression parameters and post-filtering are given. Full article
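
The optimal operation point mentioned above can be located empirically with a sweep over the quality factor, as in this codec-agnostic sketch. The `encode_decode` callable stands in for the HEIF round trip and is an assumption; the BM3D post-filtering step is not shown.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def find_oop(noisy, clean, encode_decode, quality_factors):
    """Return the quality factor whose compressed/decoded noisy image is
    closest (by PSNR) to the noise-free reference -- the OOP, if one exists
    for this image. encode_decode(noisy, q) -> decoded image is a placeholder
    for the HEIF compression round trip."""
    scores = {q: psnr(encode_decode(noisy, q), clean) for q in quality_factors}
    return max(scores, key=scores.get), scores
```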

24 pages, 7858 KiB  
Article
Data Compression with a Time Limit
by Bruno Carpentieri
Algorithms 2025, 18(3), 135; https://doi.org/10.3390/a18030135 - 3 Mar 2025
Cited by 1 | Viewed by 915
Abstract
In this paper, we explore a framework to identify an optimal choice of compression algorithms that enables the best allocation of computing resources in a large-scale data storage environment: our goal is to maximize the efficiency of data compression given a time limit that must be observed by the compression process. We tested this approach with lossless compression of one-dimensional data (text) and two-dimensional data (images) and the experimental results demonstrate its effectiveness. We also extended this technique to lossy compression and successfully applied it to the lossy compression of two-dimensional data. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
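
A much-simplified, per-file reading of the idea of compressing under a time limit, using standard-library codecs: benchmark the candidates and keep the smallest output among those that met the budget. This is not the paper's allocation framework, only an illustration of the ratio-versus-time trade-off.

```python
import bz2
import lzma
import time
import zlib

CODECS = {
    "zlib-9": lambda d: zlib.compress(d, 9),
    "bz2-9": lambda d: bz2.compress(d, 9),
    "lzma": lzma.compress,
}

def best_within_budget(data: bytes, time_limit_s: float):
    """Try each candidate codec and keep the smallest output among those
    that finished within the per-file time budget; fall back to storing
    the data uncompressed if none qualifies."""
    best_name, best_out = None, data
    for name, compress in CODECS.items():
        t0 = time.perf_counter()
        out = compress(data)
        elapsed = time.perf_counter() - t0
        if elapsed <= time_limit_s and len(out) < len(best_out):
            best_name, best_out = name, out
    return best_name, best_out
```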

23 pages, 3923 KiB  
Article
A Robust Semi-Blind Watermarking Technology for Resisting JPEG Compression Based on Deep Convolutional Generative Adversarial Networks
by Chin-Feng Lee, Zih-Cyuan Chao, Jau-Ji Shen and Anis Ur Rehman
Symmetry 2025, 17(1), 98; https://doi.org/10.3390/sym17010098 - 10 Jan 2025
Cited by 1 | Viewed by 1126
Abstract
In recent years, the internet has developed rapidly. With the popularity of social media, uploading and backing up digital images has become the norm. A huge number of digital images circulate on the internet daily, and issues related to information security follow. To protect intellectual property rights, digital watermarking is an indispensable technology. However, the lossy compression commonly applied during network transmission poses a serious problem for watermarking. This paper describes an innovative semi-blind watermarking method that uses deep convolutional generative adversarial networks (DCGANs) to hide and extract watermarks in JPEG-compressed images. The proposed method achieves an average peak signal-to-noise ratio (PSNR) of 49.99 dB, a structural similarity index (SSIM) of 0.95, and a bit error rate (BER) of 0.008 across varying JPEG quality factors. The process is based on an embedder, decoder, generator, and discriminator. It allows watermark embedding, decoding, and reconstruction to be symmetric, so that distortion is reduced and durability is improved. It constructs a specific generator for each image and watermark to be protected. Experimental results show that, across varying JPEG quality factors, the restored watermark achieves a remarkably low corruption rate, outstripping recent deep learning-based watermarking methods. Full article
(This article belongs to the Section Computer)

21 pages, 4458 KiB  
Article
Video-Wise Just-Noticeable Distortion Prediction Model for Video Compression with a Spatial–Temporal Network
by Huanhua Liu, Shengzong Liu, Jianyu Xiao, Dandan Xu and Xiaoping Fan
Electronics 2024, 13(24), 4977; https://doi.org/10.3390/electronics13244977 - 18 Dec 2024
Viewed by 1088
Abstract
Just-Noticeable Difference (JND) in an image/video refers to the maximum difference that the human visual system cannot perceive, and it has been widely applied in perception-guided image/video compression. In this work, we propose a Binary Decision-based Video-Wise Just-Noticeable Difference Prediction Method (BD-VW-JND-PM) with deep learning. Firstly, we model the VW-JND prediction problem as a binary decision process to reduce the inference complexity. Then, we propose a Perceptually Lossy/Lossless Predictor for Compressed Video (PLLP-CV) to identify whether the distortion can be perceived or not. In the PLLP-CV, a Spatial–Temporal Network-based Perceptually Lossy/Lossless predictor (ST-Network-PLLP) is proposed for key frames by learning the spatial and temporal distortion features, and a threshold-based integration strategy is proposed to obtain the final results. Experimental results evaluated on the VideoSet database show that the mean prediction accuracy of PLLP-CV is about 95.6%, and the mean JND prediction error is 1.46 in QP and 0.74 in Peak Signal-to-Noise Ratio (PSNR), improvements of 15% and 14.9%, respectively. Full article
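
The binary-decision formulation above can be pictured as a binary search for the largest QP that a lossy/lossless predictor still accepts. In this sketch, `is_perceptually_lossless` is a placeholder for the PLLP-CV predictor and is assumed to be monotone in QP.

```python
def find_vw_jnd_qp(is_perceptually_lossless, qp_min=0, qp_max=51):
    """Binary-search the largest QP whose compressed video is still judged
    perceptually lossless. is_perceptually_lossless(qp) -> bool stands in
    for the learned predictor and is assumed monotone in QP."""
    lo, hi = qp_min, qp_max
    answer = qp_min
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_perceptually_lossless(mid):
            answer = mid
            lo = mid + 1      # still lossless: push QP higher
        else:
            hi = mid - 1      # perceivable distortion: back off
    return answer
```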

23 pages, 4436 KiB  
Article
JSN: Design and Analysis of JPEG Steganography Network
by Po-Chyi Su, Yi-Han Cheng and Tien-Ying Kuo
Electronics 2024, 13(23), 4821; https://doi.org/10.3390/electronics13234821 - 6 Dec 2024
Cited by 1 | Viewed by 845
Abstract
Image steganography involves hiding a secret message within an image for covert communication, allowing only the intended recipient to extract the hidden message from the “stego” image. The secret message can also be an image itself to enable the transmission of more information, resulting in applications where one image is concealed within another. While existing techniques can embed a secret image of similar size into a cover image with minimal distortion, they often overlook the effects of lossy compression during transmission, such as when saving images in the commonly used JPEG format. This oversight can hinder the extraction of the hidden image. To address the challenges posed by JPEG compression in image steganography, we propose a JPEG Steganography Network (JSN) that leverages a reversible deep neural network as its backbone, integrated with the JPEG encoding process. We utilize 8×8 Discrete Cosine Transform (DCT) and consider the quantization step size specified by JPEG to create a JPEG-compliant stego image. We also discuss various design considerations and conduct extensive testing on JSN to validate its performance and practicality in real-world applications. Full article
(This article belongs to the Special Issue Image and Video Coding Technology)
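
For context, the JPEG building blocks that JSN is designed around (the 8×8 DCT followed by quantization with a step-size table) look like this. The flat quantization table below is illustrative only (real JPEG tables vary per frequency), and the steganographic network itself is not shown.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_style_quantize(block8x8: np.ndarray, q_table: np.ndarray):
    """Forward 8x8 DCT followed by quantization with a JPEG step-size
    table -- the lossy step a JPEG-compliant stego image must survive."""
    coeffs = dctn(block8x8 - 128.0, type=2, norm="ortho")
    return np.round(coeffs / q_table)

def jpeg_style_dequantize(q_coeffs: np.ndarray, q_table: np.ndarray):
    """Invert quantization and the DCT to get the decoded block."""
    return idctn(q_coeffs * q_table, type=2, norm="ortho") + 128.0

# Illustrative flat quantization table and a random block
q_table = np.full((8, 8), 16.0)
block = np.random.randint(0, 256, (8, 8)).astype(float)
decoded = jpeg_style_dequantize(jpeg_style_quantize(block, q_table), q_table)
```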

23 pages, 106560 KiB  
Article
RLUNet: Overexposure-Content-Recovery-Based Single HDR Image Reconstruction with the Imaging Pipeline Principle
by Yiru Zheng, Wei Wang, Xiao Wang and Xin Yuan
Appl. Sci. 2024, 14(23), 11289; https://doi.org/10.3390/app142311289 - 3 Dec 2024
Viewed by 1592
Abstract
With the popularity of High Dynamic Range (HDR) display technology, consumer demand for HDR images is increasing. Since HDR cameras are expensive, reconstructing HDR images from traditional Low Dynamic Range (LDR) images is crucial. However, existing HDR image reconstruction algorithms often fail to recover fine details and do not adequately address the fundamental principles of the LDR imaging pipeline. To overcome these limitations, the Reversing Lossy UNet (RLUNet) has been proposed, aiming to effectively balance dynamic range expansion and recover overexposed areas through a deeper understanding of LDR imaging pipeline principles. The RLUNet model comprises the Reverse Lossy Network, which is designed according to the LDR–HDR framework and focuses on reconstructing HDR images by recovering overexposed regions, dequantizing, linearizing the mapping, and suppressing compression artifacts. This framework, grounded in the principles of the LDR imaging pipeline, is designed to reverse the lossy operations of that pipeline. Furthermore, the integration of the Texture Filling Module (TFM) block with the Recovery of Overexposed Regions (ROR) module in the RLUNet model enhances the visual performance and detail texture of the overexposed areas in the reconstructed HDR image. The experiments demonstrate that the proposed RLUNet model outperforms various state-of-the-art methods on different test sets. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)

19 pages, 4431 KiB  
Article
Age of Information-Aware Networks for Low-Power IoT Sensor Applications
by Frederick M. Chache, Sean Maxon, Ram M. Narayanan and Ramesh Bharadwaj
IoT 2024, 5(4), 816-834; https://doi.org/10.3390/iot5040037 - 19 Nov 2024
Viewed by 1113
Abstract
The Internet of Things (IoT) is a fast-growing field that has found a variety of applications, such as smart agriculture and industrial processing. In these applications, it is important for nodes to maximize the amount of useful information transmitted over a limited channel. This work seeks to improve the performance of low-powered sensor networks by developing an architecture that leverages existing techniques such as lossy compression and different queuing strategies in order to minimize their drawbacks and meet the performance needs of backend applications. The Age of Information (AoI) provides a useful metric for quantifying Quality of Service (QoS) in low-powered sensor networks and provides a method for measuring the freshness of data in the network. In this paper, we investigate QoS requirements and the effects of lossy compression and queue strategies on AoI. Furthermore, two important use cases for low-powered IoT sensor networks are studied, namely, real-time feedback control and image classification. The results highlight the relative importance of QoS metrics for applications with different needs. To this end, we introduce a QoS-aware architecture to optimize network performance for the QoS requirements of the studied applications. The proposed network architecture was tested with a mixture of application traffic settings and was shown to greatly improve network QoS compared to commonly used transmission architectures such as Slotted ALOHA. Full article
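
Since Age of Information is the central metric above, here is a small sketch of how average AoI can be computed from update timestamps: integrate the sawtooth age process and divide by the observation time. The example timestamps are made up, and the sketch assumes the age is zero at time zero.

```python
def average_aoi(updates, t_end):
    """Average Age of Information over [0, t_end] given (generation_time,
    reception_time) pairs sorted by reception time. The age at time t is
    t minus the generation time of the freshest update received so far;
    integrating that sawtooth and dividing by t_end gives the average AoI.
    Assumes age zero at t = 0 (an initialization convention)."""
    area, t_prev, gen_prev = 0.0, 0.0, 0.0
    for gen, recv in updates:
        # age grows linearly from (t_prev - gen_prev) until this reception
        a0 = t_prev - gen_prev
        a1 = recv - gen_prev
        area += 0.5 * (a0 + a1) * (recv - t_prev)
        t_prev, gen_prev = recv, gen
    # tail segment up to the end of the observation window
    a0 = t_prev - gen_prev
    a1 = t_end - gen_prev
    area += 0.5 * (a0 + a1) * (t_end - t_prev)
    return area / t_end

print(average_aoi([(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)], t_end=8.0))  # 2.0
```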

17 pages, 8618 KiB  
Article
Lossless Recompression of Vector Quantization Index Table for Texture Images Based on Adaptive Huffman Coding Through Multi-Type Processing
by Yijie Lin, Jui-Chuan Liu, Ching-Chun Chang and Chin-Chen Chang
Symmetry 2024, 16(11), 1419; https://doi.org/10.3390/sym16111419 - 24 Oct 2024
Cited by 2 | Viewed by 985
Abstract
In the information age, all walks of life are inseparable from the internet, and huge amounts of data are transmitted and stored on it every day. Therefore, to improve transmission efficiency and reduce storage occupancy, compression technology is becoming increasingly important. Depending on the application scenario, it is divided into lossless data compression and lossy data compression, which tolerates a certain degree of information loss. Vector quantization (VQ) is a widely used lossy compression technology. Building upon VQ compression, we propose a lossless compression scheme for the VQ index table; in other words, our work recompresses the output of VQ compression and restores the VQ compression carrier without loss. It is worth noting that our method specifically targets texture images. By leveraging the spatial symmetry inherent in these images, our approach generates high-frequency symbols through difference calculations, which facilitates the use of adaptive Huffman coding for efficient compression. Experimental results show that our scheme has better compression performance than other schemes. Full article
(This article belongs to the Section Computer)
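
The differencing idea described above (exploiting spatial correlation in the VQ index table to create a skewed symbol distribution for adaptive Huffman coding) can be sketched as follows. The multi-type processing and the actual adaptive Huffman coder are not reproduced, and the example table is made up.

```python
from collections import Counter

import numpy as np

def difference_index_table(index_table: np.ndarray):
    """Replace each VQ index (except the first in each row) by its difference
    from the left neighbour. On texture images neighbouring blocks often share
    indices, so the differences concentrate around 0, which a (adaptive)
    Huffman coder can exploit."""
    diffs = index_table.astype(int).copy()
    diffs[:, 1:] = index_table[:, 1:] - index_table[:, :-1]
    return diffs

table = np.array([[12, 12, 13, 13],
                  [12, 12, 12, 14]])
d = difference_index_table(table)
print(d)                              # mostly zeros and small values
print(Counter(d.ravel().tolist()))    # skewed frequencies for entropy coding
```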
