Search Results (118)

Search Parameters:
Keywords = lossless image compression

29 pages, 1678 KB  
Article
Challenges in Algorithmic Implementation: The FLoCIC Algorithm as a Case Study in Technology-Enhanced Computer Science Education
by David Jesenko, Borut Žalik and Štefan Kohek
Appl. Sci. 2025, 15(18), 10118; https://doi.org/10.3390/app151810118 - 16 Sep 2025
Viewed by 216
Abstract
Learning and implementing algorithms is a fundamental but challenging aspect of Computer Science education. One of the key tools used in teaching algorithms is pseudocode, which serves as an abstract representation of the logic behind a given algorithm. This study explores the educational value of the FLoCIC (Few Lines of Code for Image Compression) algorithm, which is designed to teach lossless image compression through algorithmic implementation, particularly within the context of multimedia data. Image compression represents a typical multimedia task that combines algorithmic thinking with practical problem-solving. By analysing questionnaire responses (N = 121) from undergraduate and graduate students, this study identifies critical challenges in pseudocode-based learning, including understanding complex algorithmic components and debugging recursive functions. This paper highlights the influence of prior knowledge in areas such as data structures, compression, and algorithms in general on the success of students in completing the task, with graduate students demonstrating stronger results compared to undergraduates. The study analyses the role of external resources and online code repositories, further revealing their utility in supporting implementation efforts but highlighting the need for a fundamental understanding of the algorithm for successful implementation. The findings highlight the importance of promoting conceptual understanding and practical problem-solving skills to improve student learning in algorithmic tasks. Full article
(This article belongs to the Special Issue Challenges and Trends in Technology-Enhanced Learning)

23 pages, 4254 KB  
Article
A Strongly Robust Secret Image Sharing Algorithm Based on QR Codes
by Pengcheng Huang, Canyu Chen and Xinmeng Wan
Algorithms 2025, 18(9), 535; https://doi.org/10.3390/a18090535 - 22 Aug 2025
Viewed by 600
Abstract
Secret image sharing (SIS) is an image protection technique based on cryptography. However, traditional SIS schemes have limited noise resistance, making it difficult to ensure reconstructed image quality. To address this issue, this paper proposes a robust SIS scheme based on QR codes, which enables the efficient and lossless reconstruction of the secret image without pixel expansion. Moreover, the proposed scheme maintains high reconstruction quality under noisy conditions. In the sharing phase, the scheme compresses the length of shares by optimizing polynomial computation and improving the pixel allocation strategy. Reed–Solomon coding is then incorporated to enhance the anti-noise capability during the sharing process, while achieving meaningful secret sharing using QR codes as carriers. In the reconstruction phase, the scheme further improves the quality of the reconstructed secret image by combining image inpainting algorithms with the error-correction capability of Reed–Solomon codes. The experimental results show that the scheme can achieve lossless reconstruction when the salt-and-pepper noise density d is less than 0.02, and still maintains high-quality reconstruction when d ≤ 0.13. Compared with the existing schemes, the proposed method significantly improves noise robustness without pixel expansion, while preserving the visual meaning of the QR code carrier, and achieves a secret sharing strategy that combines robustness and practicality. Full article
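The abstract does not spell out the sharing polynomial; for orientation, here is a minimal sketch of classic Shamir-style (k, n) threshold sharing of a single pixel value over GF(251). It illustrates the general secret-sharing principle only, not the authors' QR-code scheme (which packs pixels to avoid expansion and adds Reed–Solomon protection), and all names in it are illustrative.

import random

P = 251  # prime modulus; pixel values 251..255 need special treatment in real SIS schemes

def share_pixel(secret, k, n):
    """Shamir (k, n) sharing of one pixel value: the pixel is the constant term."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner's rule mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def recover_pixel(shares):
    """Lagrange interpolation at x = 0 over GF(P) using any k of the shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = (num * -xm) % P
                den = (den * (xj - xm)) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = share_pixel(137, k=3, n=5)
print(recover_pixel(shares[:3]))  # -> 137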

18 pages, 7358 KB  
Article
A Tile-Based Multi-Core Hardware Architecture for Lossless Image Compression and Decompression
by Xufeng Li, Li Zhou and Yan Zhu
Appl. Sci. 2025, 15(11), 6017; https://doi.org/10.3390/app15116017 - 27 May 2025
Viewed by 591
Abstract
Lossless image compression plays a vital role in improving data storage and transmission efficiency without compromising data integrity. However, the throughput of current lossless compression and decompression systems remains limited and is unable to meet the growing demands of high-speed data transfer. To address this challenge, a previously proposed hybrid lossless compression and decompression algorithm has been implemented on an FPGA platform. This implementation significantly improves processing speed and efficiency. A multi-core system architecture is introduced, utilizing the processing system (PS) and programmable logic (PL) of a Xilinx Zynq-706 evaluation board. The PS handles coordination. The PL performs compression and decompression using multiple cores. Each core can process up to eight image tiles at the same time. The compression process is designed with a four-stage pipeline, and decompression is managed by a dynamic state machine to ensure optimized control. The parallel architecture and innovative algorithm design enable high-throughput operation, achieving compression and decompression rates of 480 Msubpixels/s and 372 Msubpixels/s, respectively. Through this work, a practical and high-performance solution for real-time lossless image compression is demonstrated. Full article
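The tile-level parallelism described above is a hardware design; purely as a software analogue, the sketch below splits an image into tiles and compresses them concurrently with a general-purpose lossless codec (zlib). It is not the paper's hybrid algorithm or FPGA pipeline, and the tile size and worker pool are illustrative.

import zlib
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def split_tiles(img, tile=64):
    """Yield the raw bytes of non-overlapping tile x tile blocks (edge tiles may be smaller)."""
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield img[y:y + tile, x:x + tile].tobytes()

def compress_tile(raw):
    return zlib.compress(raw, 6)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in test image
    tiles = list(split_tiles(img))
    with ProcessPoolExecutor() as pool:  # one worker per CPU core, tiles compressed in parallel
        compressed = list(pool.map(compress_tile, tiles))
    ratio = img.nbytes / sum(len(c) for c in compressed)
    print(f"tiles: {len(tiles)}, compression ratio: {ratio:.2f}")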

7 pages, 1414 KB  
Proceeding Paper
Improved Low Complexity Predictor for Block-Based Lossless Image Compression
by Huang-Chun Hsu, Jian-Jiun Ding and De-Yan Lu
Eng. Proc. 2025, 92(1), 38; https://doi.org/10.3390/engproc2025092038 - 30 Apr 2025
Viewed by 407
Abstract
Lossless image compression has been studied and widely applied, particularly in medicine, space exploration, aerial photography, and satellite communication. In this study, we proposed a low-complexity lossless compression for images (LOCO-I) predictor based on the Joint Photographic Experts Group lossless standard (JPEG-LS). We analyzed the nature of the LOCO-I predictor and offered possible solutions. The improved LOCO-I outperformed LOCO-I by a reduction of 2.26% in entropy for the full image size and reductions of 2.70, 2.81, and 2.89% for 32 × 32, 16 × 16, and 8 × 8 block-based compression, respectively. In addition, we suggested vertical/horizontal flipping for block-based compression, which requires extra bits to record but decreases the entropy. Compared with other state-of-the-art (SOTA) lossless image compression predictors, the proposed method has low computational complexity as it is multiplication- and division-free. The model is also better suited for hardware implementation. As the predictor exploits no inter-block relation, it enables parallel processing and random access if encoded by fixed-length coding (FLC). Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
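For orientation, the baseline LOCO-I median edge detector (MED) predictor from JPEG-LS, which the paper sets out to improve, can be sketched as follows; the improved predictor and the flip heuristic themselves are not reproduced here.

import numpy as np

def med_residuals(img):
    """Prediction residuals of the JPEG-LS / LOCO-I median edge detector for a grayscale image."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                 # left neighbour
            b = img[y - 1, x] if y > 0 else 0                 # upper neighbour
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # upper-left neighbour
            if c >= max(a, b):
                p = min(a, b)        # likely edge: pick the lower neighbour
            elif c <= min(a, b):
                p = max(a, b)        # likely edge in the other direction
            else:
                p = a + b - c        # smooth region: planar prediction
            pred[y, x] = p
    return img - pred                # residuals to be entropy coded

residuals = med_residuals(np.random.randint(0, 256, (16, 16), dtype=np.uint8))
print(residuals.std())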

32 pages, 15292 KB  
Article
Compression Ratio as Picture-Wise Just Noticeable Difference Predictor
by Nenad Stojanović, Boban Bondžulić, Vladimir Lukin, Dimitrije Bujaković, Sergii Kryvenko and Oleg Ieremeiev
Mathematics 2025, 13(9), 1445; https://doi.org/10.3390/math13091445 - 28 Apr 2025
Cited by 1 | Viewed by 685
Abstract
This paper presents the interesting results of applying compression ratio (CR) in the prediction of the boundary between visually lossless and visually lossy compression, which is of particular importance in perceptual image compression. The prediction is carried out through the objective quality (peak signal-to-noise ratio, PSNR) and image representation in bits per pixel (bpp). In this analysis, the results of subjective tests from four publicly available databases are used as ground truth for comparison with the results obtained using the compression ratio as a predictor. Through a wide analysis of color and grayscale infrared JPEG and Better Portable Graphics (BPG) compressed images, the values of parameters that control these two types of compression and for which CR is calculated are proposed. It is shown that PSNR and bpp predictions can be significantly improved by using CR calculated using these proposed values, regardless of the type of compression and whether color or infrared images are used. In this paper, CR is used for the first time in predicting the boundary between visually lossless and visually lossy compression for images from the infrared part of the electromagnetic spectrum, as well as in the prediction of BPG compressed content. This paper indicates the great potential of CR so that in future research, it can be used in joint prediction based on several features or through the CR curve obtained for different values of the parameters controlling the compression. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
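As a hedged illustration of the three quantities involved (CR, bpp, and PSNR), and not of the paper's prediction procedure, the sketch below compresses a grayscale image with JPEG at a fixed quality factor and reports all three; Pillow and NumPy are assumed to be available.

import io

import numpy as np
from PIL import Image

def cr_bpp_psnr(gray, quality=85):
    """Compress with JPEG at the given quality factor and return (CR, bpp, PSNR in dB)."""
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format="JPEG", quality=quality)
    nbytes = buf.getbuffer().nbytes
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.float64)
    mse = np.mean((gray.astype(np.float64) - decoded) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return gray.nbytes / nbytes, 8.0 * nbytes / gray.size, psnr

gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in test image
print(cr_bpp_psnr(gray, quality=85))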

23 pages, 2184 KB  
Article
Lossless Compression of Malaria-Infected Erythrocyte Images Using Vision Transformer and Deep Autoencoders
by Md Firoz Mahmud, Zerin Nusrat and W. David Pan
Computers 2025, 14(4), 127; https://doi.org/10.3390/computers14040127 - 1 Apr 2025
Viewed by 830
Abstract
Lossless compression of medical images allows for rapid image data exchange and faithful recovery of the compressed data for medical image assessment. There are many useful telemedicine applications, for example in diagnosing conditions such as malaria in resource-limited regions. This paper presents a novel machine learning-based approach where lossless compression of malaria-infected erythrocyte images is assisted by cutting-edge classifiers. To this end, we first use a Vision Transformer to classify images into two categories: those cells that are infected with malaria and those that are not. We then employ distinct deep autoencoders for each category, which not only reduces the dimensions of the image data but also preserves crucial diagnostic information. To ensure no loss in reconstructed image quality, we further compress the residuals produced by these autoencoders using the Huffman code. Simulation results show that the proposed method achieves lower overall bit rates and thus higher compression ratios than traditional compression schemes such as JPEG 2000, JPEG-LS, and CALIC. This strategy holds significant potential for effective telemedicine applications and can improve diagnostic capabilities in regions impacted by malaria. Full article
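The final entropy-coding stage can be illustrated generically: Huffman-code the residual between the original image and a reconstruction. The sketch below uses a dummy reconstruction in place of the deep autoencoder, so it shows only the residual-coding idea, not the authors' pipeline.

import heapq
from collections import Counter

import numpy as np

def huffman_code(symbols):
    """Build a Huffman code {symbol: bit string} from a sequence of symbols."""
    heap = [[freq, i, {s: ""}] for i, (s, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

# Dummy "original" and "reconstruction"; the residual is what gets Huffman coded losslessly.
original = np.random.randint(0, 256, (64, 64)).astype(np.int16)
reconstruction = np.clip(original + np.random.randint(-2, 3, original.shape), 0, 255).astype(np.int16)
residual = (original - reconstruction).ravel().tolist()

code = huffman_code(residual)
bits = sum(len(code[s]) for s in residual)
print(f"residual coding cost: {bits / original.size:.2f} bits/pixel")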

24 pages, 3068 KB  
Article
Enhanced Dual Reversible Data Hiding Using Combined Approaches
by Cheonshik Kim, Ching-Nung Yang and Lu Leng
Appl. Sci. 2025, 15(6), 3279; https://doi.org/10.3390/app15063279 - 17 Mar 2025
Viewed by 807
Abstract
This paper proposes a reversible data hiding technique based on two cover images. The proposed method enhances performance by utilizing Hamming coding (HC), arithmetic coding (AC), and an improved Exploiting Modification Direction (EMD) technique. Since AC provides lossless compression for binary data, it is widely used in image compression and helps maximize the efficiency of data transmission and storage. The EMD technique is recognized as an efficient data hiding method. However, it has a significant limitation: it does not allow for the restoration of the original cover image after data extraction. Additionally, EMD has a data hiding capacity limit of approximately 1.2 bpp. To address these limitations, an improved reversible data hiding technique is proposed. In this study, HC and AC are integrated with an improved EMD technique to enhance data hiding performance, achieving higher embedding capacity while ensuring the complete restoration of the original cover image. In the proposed method, Hamming coding is applied for data encoding and arithmetic coding is used for compression to increase efficiency. The compressed data are then embedded using the improved EMD technique, enabling the receiver to fully restore the original cover image. Experimental results demonstrate that the proposed method achieves an average PSNR of 66 dB and a data embedding capacity of 1.5 bpp, proving to be a promising approach for secure and efficient data hiding applications. Full article
(This article belongs to the Special Issue Multimedia Smart Security)
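For readers unfamiliar with EMD, a minimal sketch of the original Exploiting Modification Direction scheme (extraction function and ±1 embedding for a group of n pixels) is given below; it is not the improved, reversible variant proposed in the paper, and pixel saturation at 0 or 255 is not handled.

def emd_extract(group):
    """Extraction function f = (sum_i i * g_i) mod (2n + 1) of the original EMD scheme."""
    n = len(group)
    return sum((i + 1) * g for i, g in enumerate(group)) % (2 * n + 1)

def emd_embed(group, digit):
    """Embed one (2n + 1)-ary digit by changing at most one pixel by +/- 1."""
    n = len(group)
    group = list(group)
    s = (digit - emd_extract(group)) % (2 * n + 1)
    if s == 0:
        return group                   # the group already carries the digit
    if s <= n:
        group[s - 1] += 1              # increase pixel s by one
    else:
        group[2 * n - s] -= 1          # decrease pixel 2n + 1 - s by one
    return group

stego = emd_embed([120, 98, 77, 65], digit=3)  # n = 4, so digits are base 9
print(emd_extract(stego))                      # -> 3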

27 pages, 3984 KB  
Article
Enhanced Framework for Lossless Image Compression Using Image Segmentation and a Novel Dynamic Bit-Level Encoding Algorithm
by Erdal Erdal and Alperen Önal
Appl. Sci. 2025, 15(6), 2964; https://doi.org/10.3390/app15062964 - 10 Mar 2025
Viewed by 1648
Abstract
This study proposes a dynamic bit-level encoding algorithm (DEA) and introduces the S+DEA compression framework, which enhances compression efficiency by integrating the DEA with image segmentation as a preprocessing step. The novel approaches were validated on four different datasets, demonstrating strong performance and broad applicability. A dedicated data structure was developed to facilitate lossless storage and precise reconstruction of compressed data, ensuring data integrity throughout the process. The evaluation results showed that the DEA outperformed all benchmark encoding algorithms, achieving an improvement percentage (IP) value of 45.12, indicating its effectiveness as a highly efficient encoding method. Moreover, the S+DEA compression algorithm demonstrated significant improvements in compression efficiency. It consistently outperformed BPG, JPEG-LS, and JPEG2000 across three datasets. While it performed slightly worse than JPEG-LS in medical images, it remained competitive overall. A dataset-specific analysis revealed that in medical images, the S+DEA performed close to the DEA, suggesting that segmentation alone does not enhance compression in this domain. This emphasizes the importance of exploring alternative preprocessing techniques to enhance the DEA’s performance in medical imaging applications. The experimental results demonstrate that the DEA and S+DEA offer competitive encoding and compression capabilities, making them promising alternatives to existing frameworks. Full article
(This article belongs to the Special Issue Advanced Digital Signal Processing and Its Applications)

25 pages, 6330 KB  
Article
Post-Filtering of Noisy Images Compressed by HEIF
by Sergii Kryvenko, Volodymyr Rebrov, Vladimir Lukin, Vladimir Golovko, Anatoliy Sachenko, Andrii Shelestov and Benoit Vozel
Appl. Sci. 2025, 15(6), 2939; https://doi.org/10.3390/app15062939 - 8 Mar 2025
Viewed by 932
Abstract
Modern imaging systems produce a great volume of image data. In many practical situations, it is necessary to compress these data for faster transfer or more efficient storage. If images are noisy, lossless compression is almost useless, and lossy compression is characterized by a specific noise filtering effect that depends on the image, noise, and coder properties. Here, we considered a modern HEIF coder applied to grayscale (component) images of different complexity corrupted by additive white Gaussian noise. It has recently been shown that an optimal operation point (OOP) might exist in this case. The OOP is the quality-factor value at which the compressed image is closest (according to the quality metric used) to the corresponding noise-free image. The lossy compression of noisy images leads to both noise reduction and distortions introduced into the information component; thus, a compromise should be found between the compressed image quality and the compression ratio attained. The OOP is one possible compromise, if it exists, for a given noisy image. However, it has also recently been demonstrated that the compressed image quality can be significantly improved if post-filtering is applied under the condition that the quality factor is slightly larger than the one corresponding to the OOP. Therefore, we considered the efficiency of post-filtering using a block-matching 3-dimensional (BM3D) filter. It was shown that the positive effect of such post-filtering can reach a few dB in terms of the PSNR and PSNR-HVS-M metrics. The largest benefits occur for simple-structure images and high noise intensity. It was also demonstrated that the filter parameters have to be adapted to the properties of the residual noise, which becomes more non-Gaussian as the compression ratio increases. Practical recommendations on the choice of compression parameters and post-filtering are given. Full article
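A hedged sketch of the optimal-operation-point idea only: sweep the coder's quality factor, compare each decoded result against the noise-free reference, and keep the quality factor with the highest PSNR. JPEG via Pillow stands in for the HEIF coder here, and the BM3D post-filtering step is omitted.

import io

import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")

def find_oop(noisy, clean, qualities=range(10, 96, 5)):
    """Return the (quality factor, PSNR) pair whose compressed noisy image is closest to the clean one."""
    best = (None, -np.inf)
    for q in qualities:
        buf = io.BytesIO()
        Image.fromarray(noisy).save(buf, format="JPEG", quality=q)
        buf.seek(0)
        decoded = np.asarray(Image.open(buf))
        score = psnr(clean, decoded)
        if score > best[1]:
            best = (q, score)
    return best

clean = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))                    # simple test image
noisy = np.clip(clean + np.random.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)  # add Gaussian noise
print(find_oop(noisy, clean))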

21 pages, 18398 KB  
Article
A Low-Complexity Lossless Compression Method Based on a Code Table for Infrared Images
by Yaohua Zhu, Mingsheng Huang, Yanghang Zhu and Yong Zhang
Appl. Sci. 2025, 15(5), 2826; https://doi.org/10.3390/app15052826 - 5 Mar 2025
Cited by 1 | Viewed by 1100
Abstract
Traditional JPEG-series image compression algorithms have limitations in speed. To improve the storage and transmission of 14-bit/pixel images acquired by infrared line-scan detectors, a novel method is introduced for achieving high-speed and highly efficient compression of line-scan infrared images. The proposed method utilizes the features of infrared images to reduce image redundancy and employs improved Huffman coding for entropy coding. The improved Huffman coding addresses the long codes assigned to low-probability symbols in 14-bit images by truncating them, which results in low complexity and minimal loss in the compression ratio. Additionally, a method is proposed to obtain a Huffman code table that bypasses the pixel counting process required for entropy coding, thereby improving the compression speed. The final implementation is a low-complexity lossless image compression algorithm that achieves fast encoding through simple table lookup rules. The proposed method results in only a 10% loss in compression performance compared to JPEG 2000, while achieving a 20-fold speed improvement. Compared to dictionary-based methods, the proposed method can achieve high-speed compression while maintaining high compression efficiency, making it particularly suitable for the high-speed, high-efficiency lossless compression of line-scan panoramic infrared images. The compression achieved with the code table is 5% lower than the theoretical value. The algorithm can also be applied to images with higher bit depths. Full article
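One common way to bound code lengths, shown in the sketch below, is to reserve an escape symbol and send rarely occurring 14-bit values verbatim after it; this is offered only as an illustration of code truncation, not as the paper's code-table construction.

import heapq
import random
from collections import Counter

ESCAPE = "ESC"   # symbols with overly long codes are sent as ESC followed by 14 raw bits
MAX_LEN = 8      # longest Huffman code kept in the lookup table

def huffman_lengths(freqs):
    """Return {symbol: code length} for a Huffman code built from a frequency table."""
    heap = [[f, i, [s]] for i, (s, f) in enumerate(freqs.items())]
    lengths = {s: 0 for s in freqs}
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for s in lo[2] + hi[2]:
            lengths[s] += 1          # every merge adds one bit to all contained symbols
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], lo[2] + hi[2]])
    return lengths

def truncated_cost(samples):
    """Total bits when codes longer than MAX_LEN are replaced by ESC + 14 raw bits."""
    freqs = Counter(samples)
    freqs[ESCAPE] = 1                # make sure the escape symbol gets a code of its own
    lengths = huffman_lengths(freqs)
    return sum(lengths[s] if lengths[s] <= MAX_LEN else lengths[ESCAPE] + 14 for s in samples)

samples = [min(int(random.expovariate(0.01)), 16383) for _ in range(10000)]  # synthetic 14-bit residuals
print(truncated_cost(samples) / len(samples), "bits/sample")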

18 pages, 3903 KB  
Article
Lossless Hyperspectral Image Compression in Comet Interceptor and Hera Missions with Restricted Bandwidth
by Kasper Skog, Tomáš Kohout, Tomáš Kašpárek, Antti Penttilä, Monika Wolfmayr and Jaan Praks
Remote Sens. 2025, 17(5), 899; https://doi.org/10.3390/rs17050899 - 4 Mar 2025
Cited by 1 | Viewed by 1201
Abstract
Lossless image compression is vital for missions with limited data transmission bandwidth. Reducing file sizes enables faster transmission and increased scientific gains from transient events. This study compares two wavelet-based image compression algorithms, CCSDS 122.0 and JPEG 2000, used in the European Space Agency Comet Interceptor and Hera missions, respectively, in varying scenarios. The JPEG 2000 implementation is sourced from the JasPer library, whereas a custom implementation was written for CCSDS 122.0. The performance analysis for both algorithms consists of compressing simulated asteroid images in the visible and near-infrared spectral ranges. In addition, all test images were noise-filtered to study the effect of the amount of noise on both compression ratio and speed. The study finds that JPEG 2000 achieves consistently higher compression ratios and benefits from decreased noise more than CCSDS 122.0. However, CCSDS 122.0 produces comparable results faster than JPEG 2000 and is substantially less computationally complex. In contrast, JPEG 2000 allows dynamic (entropy-permitting) reduction in the bit depth of internal data structures to 8 bits, halving the memory allocation, while CCSDS 122.0 always works in 16-bit mode. These results contribute valuable knowledge about the behavioral characteristics of both algorithms and provide insight for teams planning to use either algorithm on board planetary missions. Full article
(This article belongs to the Section Remote Sensing Image Processing)

24 pages, 7858 KB  
Article
Data Compression with a Time Limit
by Bruno Carpentieri
Algorithms 2025, 18(3), 135; https://doi.org/10.3390/a18030135 - 3 Mar 2025
Cited by 2 | Viewed by 1177
Abstract
In this paper, we explore a framework to identify an optimal choice of compression algorithms that enables the best allocation of computing resources in a large-scale data storage environment: our goal is to maximize the efficiency of data compression given a time limit that must be observed by the compression process. We tested this approach with lossless compression of one-dimensional data (text) and two-dimensional data (images) and the experimental results demonstrate its effectiveness. We also extended this technique to lossy compression and successfully applied it to the lossy compression of two-dimensional data. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
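A minimal sketch of the selection idea under stated assumptions: profile each candidate compressor on a small sample of the input, project its running time to the full data, and pick the best compression ratio whose projected time fits the limit. The candidate set and profiling strategy are illustrative, not the paper's.

import bz2
import lzma
import time
import zlib

CANDIDATES = {
    "zlib-6": lambda d: zlib.compress(d, 6),
    "bz2-9": lambda d: bz2.compress(d, 9),
    "lzma-6": lambda d: lzma.compress(d, preset=6),
}

def pick_compressor(data, time_limit_s, sample_bytes=1 << 16):
    """Profile each codec on a prefix of the data; return the best ratio that fits the time budget."""
    sample = data[:sample_bytes]
    best_name, best_ratio = None, 0.0
    for name, compress in CANDIDATES.items():
        t0 = time.perf_counter()
        out = compress(sample)
        elapsed = time.perf_counter() - t0
        projected = elapsed * len(data) / max(len(sample), 1)   # crude linear extrapolation
        ratio = len(sample) / len(out)
        if projected <= time_limit_s and ratio > best_ratio:
            best_name, best_ratio = name, ratio
    return best_name, best_ratio

data = b"lossless image compression " * 100000   # stand-in for a large file
print(pick_compressor(data, time_limit_s=0.5))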

21 pages, 2061 KB  
Article
Hardware Acceleration of Division-Free Quadrature-Based Square Rooting Approach for Near-Lossless Compression of Hyperspectral Images
by Amal Altamimi and Belgacem Ben Youssef
Sensors 2025, 25(4), 1092; https://doi.org/10.3390/s25041092 - 12 Feb 2025
Cited by 2 | Viewed by 767
Abstract
Recent advancements in hyperspectral imaging have significantly increased the acquired data volume, creating a need for more efficient compression methods for handling the growing storage and transmission demands. These challenges are particularly critical for onboard satellite systems, where power and computational resources are limited, and real-time processing is essential. In this article, we present a novel FPGA-based hardware acceleration of a near-lossless compression technique for hyperspectral images by leveraging a division-free quadrature-based square rooting method. In this regard, the two division operations inherent in the original approach were replaced with pre-computed reciprocals, multiplications, and a geometric series expansion. Optimized for real-time applications, the synthesis results show that our approach achieves a high throughput of 1611.77 Mega Samples per second (MSps) and a low power requirement of 0.886 Watts on the economical Cyclone V FPGA. This results in an efficiency of 1819.15 MSps/Watt, which, to the best of our knowledge, surpasses recent state-of-the-art hardware implementations in the context of near-lossless compression of hyperspectral images. Full article
(This article belongs to the Special Issue Applications of Sensors Based on Embedded Systems)
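The square-rooting method itself is not reproduced here; the sketch below only illustrates the stated substitution of a division by a precomputed reciprocal plus a geometric-series correction, assuming the divisor has been normalized to the interval [1, 2).

import numpy as np

TABLE_BITS = 6
# Precomputed reciprocals of the interval midpoints: one seed per coarse divisor interval.
RECIP_TABLE = 1.0 / (np.arange(1.0, 2.0, 2.0 ** -TABLE_BITS) + 2.0 ** -(TABLE_BITS + 1))

def approx_reciprocal(d, terms=3):
    """Division-free 1/d for d in [1, 2): table lookup plus geometric-series refinement."""
    idx = int((d - 1.0) * (1 << TABLE_BITS))   # index of the precomputed seed reciprocal
    r = RECIP_TABLE[idx]
    e = 1.0 - d * r                            # seed error; |e| << 1 by construction
    correction = 1.0
    for _ in range(terms):                     # Horner form of 1 + e + e^2 + ... + e^terms
        correction = 1.0 + e * correction
    return r * correction

for d in (1.0, 1.3, 1.7, 1.99):
    print(d, approx_reciprocal(d), 1.0 / d)   # approximation vs. exact division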

20 pages, 6914 KB  
Article
Computationally Efficient Light Field Video Compression Using 5-D Approximate DCT
by Braveenan Sritharan, Chamira U. S. Edussooriya, Chamith Wijenayake, R. J. Cintra and Arjuna Madanayake
J. Low Power Electron. Appl. 2025, 15(1), 2; https://doi.org/10.3390/jlpea15010002 - 9 Jan 2025
Viewed by 1401
Abstract
Five-dimensional (5-D) light field videos (LFVs) capture spatial, angular, and temporal variations in light rays emanating from scenes. This leads to a significantly larger amount of data than conventional three-dimensional videos, which capture only spatial and temporal variations in light rays. In this paper, we propose an LFV compression technique using a low-complexity 5-D approximate discrete cosine transform (ADCT). To further reduce the computational complexity, our algorithm exploits the partial separability of LFV representations. It applies a two-dimensional (2-D) ADCT to the sub-aperture images of LFV frames in intra-view and inter-view configurations. Furthermore, we apply a one-dimensional ADCT to the temporal dimension. We evaluate the performance of the proposed LFV compression technique using several 5-D ADCT algorithms, as well as the exact 5-D discrete cosine transform (DCT). The experimental results obtained with LFVs confirm that the proposed LFV compression technique provides a more than 250-fold reduction in data size with near-lossless fidelity, achieving a peak signal-to-noise ratio greater than 40 dB and a structural similarity index greater than 0.9. Furthermore, compared to the exact DCT, our algorithm requires approximately 10 times lower computational complexity. Full article
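The approximate DCT kernels are not given in the abstract; as an illustrative stand-in, the sketch below applies the exact separable DCT (SciPy) to a synthetic 5-D light-field array, 2-D over the spatial axes of each sub-aperture image and then 1-D along time, mirroring the partial-separability idea. The synthetic data and the rule of keeping the largest 2% of coefficients are assumptions.

import numpy as np
from scipy.fft import dctn, idctn

# Synthetic 5-D light field video with shape (time, view_v, view_u, height, width).
lfv = np.fromfunction(
    lambda t, v, u, y, x: 0.5 + 0.5 * np.sin(0.2 * (x + u) + 0.1 * t) * np.cos(0.2 * (y + v)),
    (8, 5, 5, 32, 32),
)

coeffs = dctn(lfv, axes=(3, 4), norm="ortho")   # 2-D spatial transform of every sub-aperture image
coeffs = dctn(coeffs, axes=(0,), norm="ortho")  # 1-D transform along the temporal dimension

# Crude "compression": keep only the largest 2% of coefficients by magnitude.
threshold = np.quantile(np.abs(coeffs), 0.98)
coeffs[np.abs(coeffs) < threshold] = 0.0

restored = idctn(idctn(coeffs, axes=(0,), norm="ortho"), axes=(3, 4), norm="ortho")
mse = np.mean((lfv - restored) ** 2)
print(f"PSNR after keeping 2% of coefficients: {10 * np.log10(1.0 / max(mse, 1e-12)):.1f} dB")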

23 pages, 564 KB  
Article
Lossless Image Compression Using Context-Dependent Linear Prediction Based on Mean Absolute Error Minimization
by Grzegorz Ulacha and Mirosław Łazoryszczak
Entropy 2024, 26(12), 1115; https://doi.org/10.3390/e26121115 - 20 Dec 2024
Cited by 2 | Viewed by 1241
Abstract
This paper presents a method for lossless compression of images with fast decoding time and the option to select encoder parameters for individual image characteristics to increase compression efficiency. The data modeling stage was based on linear and nonlinear prediction, complemented by a simple block for removing the context-dependent constant component. The prediction was based on the Iterative Reweighted Least Squares (IRLS) method, which allows minimization of the mean absolute error. Two-stage coding was used to encode prediction errors: adaptive Golomb coding followed by binary arithmetic coding. High compression efficiency was achieved by using the authors' context-switching algorithm, which applies several prediction models tailored to the individual characteristics of each image area. In addition, an analysis of the impact of individual encoder parameters on efficiency and encoding time was conducted, and the efficiency of the proposed solution was compared against competing solutions, showing a 9.1% improvement in the average bit rate over the entire test set compared to JPEG-LS. Full article
(This article belongs to the Section Signal and Data Analysis)
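The context modelling and two-stage coding are beyond a short example, but the IRLS step can be sketched: repeatedly solving a weighted least-squares problem with weights 1/|residual| drives a causal linear predictor toward minimum mean absolute error. The neighbour layout and iteration count below are illustrative.

import numpy as np

def irls_predictor(img, iters=10, eps=1e-6):
    """Fit coefficients for the causal neighbours (W, N, NW) that minimize mean absolute error."""
    img = img.astype(np.float64)
    A = np.column_stack([
        img[1:, :-1].ravel(),    # left neighbour (W)
        img[:-1, 1:].ravel(),    # upper neighbour (N)
        img[:-1, :-1].ravel(),   # upper-left neighbour (NW)
    ])
    y = img[1:, 1:].ravel()      # pixels being predicted
    w = np.ones_like(y)
    coef = np.zeros(A.shape[1])
    for _ in range(iters):
        sw = np.sqrt(w)[:, None]
        coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)   # weighted least squares
        r = y - A @ coef
        w = 1.0 / np.maximum(np.abs(r), eps)    # reweighting pushes the fit toward the L1 solution
    return coef, np.mean(np.abs(y - A @ coef))

img = np.cumsum(np.random.randint(-3, 4, (64, 64)), axis=1) + 128   # smooth-ish synthetic image
print(irls_predictor(img))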
