Search Results (29)

Search Parameters:
Keywords = lossless reconstruction

22 pages, 2872 KiB  
Article
Wavelet-Guided Multi-Scale ConvNeXt for Unsupervised Medical Image Registration
by Xuejun Zhang, Aobo Xu, Ganxin Ouyang, Zhengrong Xu, Shaofei Shen, Wenkang Chen, Mingxian Liang, Guiqi Zhang, Jiashun Wei, Xiangrong Zhou and Dongbo Wu
Bioengineering 2025, 12(4), 406; https://doi.org/10.3390/bioengineering12040406 - 11 Apr 2025
Cited by 2 | Viewed by 986
Abstract
Medical image registration is essential in clinical practices such as surgical navigation and image-guided diagnosis. The Transformer architecture of TransMorph demonstrates better accuracy in non-rigid registration tasks. However, its weaker spatial locality priors necessitate large-scale training datasets and a large number of parameters, which conflict with the limited annotated data and real-time demands of clinical workflows. Moreover, traditional downsampling and upsampling degrade high-frequency anatomical features such as tissue boundaries or small lesions. We propose WaveMorph, a wavelet-guided multi-scale ConvNeXt method for unsupervised medical image registration. A novel multi-scale wavelet feature fusion downsampling module integrates the ConvNeXt architecture with lossless Haar wavelet decomposition to extract and fuse features from eight frequency sub-images using multi-scale convolution kernels. Additionally, a lightweight dynamic upsampling module is introduced in the decoder to reconstruct fine-grained anatomical structures. WaveMorph combines the inductive bias of CNNs with the advantages of Transformers, effectively mitigating topological distortions caused by spatial information loss while supporting real-time inference. In both atlas-to-patient (IXI) and inter-patient (OASIS) registration tasks, WaveMorph demonstrates state-of-the-art performance, achieving Dice scores of 0.779 ± 0.015 and 0.824 ± 0.021, respectively, and real-time inference (0.072 s/image), validating the effectiveness of our model in medical image registration.
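
The lossless Haar decomposition that drives WaveMorph's downsampling is a standard wavelet operation. As a minimal sketch (not the authors' code), the PyWavelets package shows how one level of 3D Haar decomposition yields the eight frequency sub-images mentioned above and can be inverted exactly; the volume shape is illustrative.

```python
# Minimal sketch: one level of 3D Haar DWT -> 8 frequency sub-volumes.
# Illustrates the lossless decomposition WaveMorph builds on; not the
# paper's implementation. Requires PyWavelets (pip install pywavelets).
import numpy as np
import pywt

volume = np.random.rand(32, 32, 32)      # toy 3D feature map

subbands = pywt.dwtn(volume, "haar")     # 2**3 = 8 sub-volumes
print(sorted(subbands))                  # ['aaa', 'aad', ..., 'ddd']

# The transform is invertible, so downsampling this way discards no
# spatial information -- the property the abstract highlights.
reconstructed = pywt.idwtn(subbands, "haar")
assert np.allclose(reconstructed, volume)
```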

23 pages, 2184 KiB  
Article
Lossless Compression of Malaria-Infected Erythrocyte Images Using Vision Transformer and Deep Autoencoders
by Md Firoz Mahmud, Zerin Nusrat and W. David Pan
Computers 2025, 14(4), 127; https://doi.org/10.3390/computers14040127 - 1 Apr 2025
Viewed by 680
Abstract
Lossless compression of medical images allows for rapid image data exchange and faithful recovery of the compressed data for medical image assessment. There are many useful telemedicine applications, for example in diagnosing conditions such as malaria in resource-limited regions. This paper presents a novel machine learning-based approach where lossless compression of malaria-infected erythrocyte images is assisted by cutting-edge classifiers. To this end, we first use a Vision Transformer to classify images into two categories: those cells that are infected with malaria and those that are not. We then employ distinct deep autoencoders for each category, which not only reduces the dimensions of the image data but also preserves crucial diagnostic information. To ensure no loss in reconstructed image quality, we further compress the residuals produced by these autoencoders using the Huffman code. Simulation results show that the proposed method achieves lower overall bit rates and thus higher compression ratios than traditional compression schemes such as JPEG 2000, JPEG-LS, and CALIC. This strategy holds significant potential for effective telemedicine applications and can improve diagnostic capabilities in regions impacted by malaria.
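
The step that makes such a pipeline lossless is entropy-coding the autoencoder residual. Below is a minimal sketch of that idea, with the autoencoder reconstruction mocked by additive noise; all data and names are illustrative, not the paper's implementation.

```python
# Minimal sketch: lossless = lossy reconstruction + Huffman-coded residual.
import heapq
from collections import Counter

import numpy as np

original = np.random.randint(0, 256, (64, 64))
# Stand-in for an autoencoder output: close to the original, but lossy.
reconstruction = np.clip(original + np.random.randint(-2, 3, original.shape), 0, 255)
residual = (original - reconstruction).ravel().tolist()   # low-entropy symbols

def huffman_book(symbols):
    """Return a Huffman code book: symbol -> bitstring."""
    heap = [[count, i, {s: ""}] for i, (s, count) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        lo[2] = {s: "0" + code for s, code in lo[2].items()}
        hi[2] = {s: "1" + code for s, code in hi[2].items()}
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], {**lo[2], **hi[2]}])
    return heap[0][2]

book = huffman_book(residual)
bits = sum(len(book[s]) for s in residual)
print(f"{bits / len(residual):.2f} bits/pixel for the residual vs. 8 bpp raw")
```

Storing the coded residual alongside the lossy latent lets the decoder reproduce the original exactly, since original = reconstruction + residual.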

27 pages, 3984 KiB  
Article
Enhanced Framework for Lossless Image Compression Using Image Segmentation and a Novel Dynamic Bit-Level Encoding Algorithm
by Erdal Erdal and Alperen Önal
Appl. Sci. 2025, 15(6), 2964; https://doi.org/10.3390/app15062964 - 10 Mar 2025
Viewed by 1206
Abstract
This study proposes a dynamic bit-level encoding algorithm (DEA) and introduces the S+DEA compression framework, which enhances compression efficiency by integrating the DEA with image segmentation as a preprocessing step. The novel approaches were validated on four different datasets, demonstrating strong performance and broad applicability. A dedicated data structure was developed to facilitate lossless storage and precise reconstruction of compressed data, ensuring data integrity throughout the process. The evaluation results showed that the DEA outperformed all benchmark encoding algorithms, achieving an improvement percentage (IP) value of 45.12, indicating its effectiveness as a highly efficient encoding method. Moreover, the S+DEA compression algorithm demonstrated significant improvements in compression efficiency. It consistently outperformed BPG, JPEG-LS, and JPEG2000 across three datasets. While it performed slightly worse than JPEG-LS in medical images, it remained competitive overall. A dataset-specific analysis revealed that in medical images, the S+DEA performed close to the DEA, suggesting that segmentation alone does not enhance compression in this domain. This emphasizes the importance of exploring alternative preprocessing techniques to enhance the DEA’s performance in medical imaging applications. The experimental results demonstrate that the DEA and S+DEA offer competitive encoding and compression capabilities, making them promising alternatives to existing frameworks.
(This article belongs to the Special Issue Advanced Digital Signal Processing and Its Applications)

21 pages, 3894 KiB  
Article
Bounded-Error LiDAR Compression for Bandwidth-Efficient Cloud-Edge In-Vehicle Data Transmission
by Ray-I Chang, Ting-Wei Hsu, Chih Yang and Yen-Ting Chen
Electronics 2025, 14(5), 908; https://doi.org/10.3390/electronics14050908 - 25 Feb 2025
Viewed by 717
Abstract
Recent advances in autonomous driving have led to an increased use of LiDAR (Light Detection and Ranging) sensors for high-frequency 3D perception, resulting in massive data volumes that challenge in-vehicle networks, storage systems, and cloud-edge communications. To address this issue, we propose a bounded-error LiDAR compression framework that enforces a user-defined maximum coordinate deviation (e.g., 2 cm) in real-world space. Our method combines multiple compression strategies, defined under either an axis-wise (Axis) or a Euclidean (L2) error metric (namely, Error-Bounded Huffman Coding (EB-HC), Error-Bounded 3D Compression (EB-3D), and the extended Error-Bounded Huffman Coding with 3D Integration (EB-HC-3D)), with a lossless Huffman coding baseline. By quantizing and grouping point coordinates based on a strict threshold (either axis-wise or Euclidean), our method significantly reduces data size while preserving geometric fidelity. Experiments on the KITTI dataset demonstrate that, under a 2 cm error bound, our single-bin compression reduces the data to 25–35% of their original size, while multi-bin processing can further compress the data to 15–25% of their original volume. An analysis of compression ratios, error metrics, and encoding/decoding speeds shows that our method achieves substantial data reduction while keeping reconstruction errors within the specified limit. Moreover, runtime profiling indicates that our method is well suited for deployment on in-vehicle edge devices, thereby enabling scalable cloud-edge cooperation.
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
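
An axis-wise error bound reduces to uniform coordinate quantization with a step of twice the tolerance. The sketch below illustrates that guarantee only; it is not the EB-HC/EB-3D/EB-HC-3D codecs themselves.

```python
# Minimal sketch: axis-wise bounded-error quantization of LiDAR points.
import numpy as np

def quantize(points, max_err=0.02):
    """Uniform quantization; round-to-nearest keeps each axis within max_err."""
    step = 2.0 * max_err
    return np.round(points / step).astype(np.int32), step

def dequantize(codes, step):
    return codes * step

points = np.random.uniform(-50.0, 50.0, (100_000, 3))   # toy point cloud (m)
codes, step = quantize(points, max_err=0.02)             # 2 cm bound
restored = dequantize(codes, step)
assert np.abs(restored - points).max() <= 0.02 + 1e-12   # the bound holds
```

The small integer codes are what an entropy coder (such as the Huffman baseline) would then compress.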

14 pages, 841 KiB  
Article
A Closed-Form Analytical Conversion between Zernike and Gatinel–Malet Basis Polynomials to Present Relevant Aberrations in Ophthalmology and Refractive Surgery
by Masoud Mehrjoo, Damien Gatinel, Jacques Malet and Samuel Arba Mosquera
Photonics 2024, 11(9), 883; https://doi.org/10.3390/photonics11090883 - 20 Sep 2024
Cited by 2 | Viewed by 1698
Abstract
The Zernike representation of wavefronts interlinks low- and high-order aberrations, which may result in imprecise clinical estimates. Recently, the Gatinel–Malet wavefront representation has been introduced to resolve this problem by deriving a new, unlinked basis originating from Zernike polynomials. This new basis preserves the classical low and high aberration subgroups’ structure, as well as the orthogonality within each subgroup, but not the orthogonality between low and high aberrations. This feature has led to conversions relying on separate wavefront reconstructions for each subgroup, which may increase the associated numerical errors. This study proposes a robust, minimised-error (lossless) analytical approach for conversion between the Zernike and Gatinel–Malet spaces. This method analytically reformulates the conversion as a nonhomogeneous system of linear equations and computationally solves it using matrix factorisation and decomposition techniques with high-level accuracy. This work fundamentally demonstrates the lossless expression of complex wavefronts in a format that is more clinically interpretable, with potential applications in various areas of ophthalmology, such as refractive surgery.
(This article belongs to the Special Issue Visual Optics)
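
The computational core described above is solving a nonhomogeneous linear system with a stable factorization. A minimal sketch with a random stand-in matrix follows; the actual Zernike-to-Gatinel–Malet transition matrix is derived in the paper.

```python
# Minimal sketch: basis conversion as a linear system T @ g = z,
# solved via LU factorization. T is a random stand-in here.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 21                              # e.g. number of polynomial terms
T = rng.standard_normal((n, n))     # stand-in transition matrix
z = rng.standard_normal(n)          # Zernike coefficient vector

g = lu_solve(lu_factor(T), z)       # Gatinel-Malet coefficients

# The conversion is exact up to floating-point precision ("lossless").
assert np.allclose(T @ g, z)
```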

19 pages, 17320 KiB  
Article
Three-Dimensional Double Random-Phase Encryption for Simultaneous Two-Primary Data
by Jae-Young Jang and Myungjin Cho
Electronics 2024, 13(5), 823; https://doi.org/10.3390/electronics13050823 - 20 Feb 2024
Cited by 3 | Viewed by 1375
Abstract
In this paper, we propose a three-dimensional (3D) optical encryption technique for simultaneous two-primary data using double random-phase encryption (DRPE). In conventional DRPE, the primary data are encrypted optically through two different random phase masks; thus, encryption proceeds at the speed of light. However, in this method, each primary dataset requires its own individual key data for decryption. For two simultaneous primary datasets, such as stereo images or multi-view images, a new encryption technique is therefore required. In this paper, we encrypt two different primary datasets simultaneously by DRPE. In our method, the first and second primary data are regarded as the amplitude and phase, respectively, with a single key for encryption. To verify the feasibility of our method, we run simulations and measure performance metrics such as the peak signal-to-noise ratio (PSNR) and the peak sidelobe ratio (PSR). The PSNR values of the two-dimensional decryption results for the first (“LENA” text) and second (Lena image) primary data with the correct and incorrect key data are 311.0139, 41.9609, 12.0166, and 7.4626, respectively, since the first primary data are lossless and the second primary data are lossy. For 3D reconstruction, the PSR values of the first and second primary data are 914.2644 and 774.1400, respectively.
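
Classical single-image DRPE, which the proposed two-primary scheme extends, can be sketched in a few lines; the amplitude/phase packing of two datasets is the paper's contribution and is not reproduced here. Mask seeds and sizes are illustrative.

```python
# Minimal sketch: classical double random-phase encryption of one image.
import numpy as np

rng = np.random.default_rng(42)
image = rng.random((128, 128))                        # toy primary data

mask1 = np.exp(2j * np.pi * rng.random(image.shape))  # input-plane key
mask2 = np.exp(2j * np.pi * rng.random(image.shape))  # Fourier-plane key

cipher = np.fft.ifft2(np.fft.fft2(image * mask1) * mask2)

# Decryption with the correct keys inverts both unit-modulus masks exactly.
plain = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(mask2)) * np.conj(mask1)
assert np.allclose(plain.real, image)
```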

13 pages, 310 KiB  
Article
Lossy Compression of Individual Sequences Revisited: Fundamental Limits of Finite-State Encoders
by Neri Merhav
Entropy 2024, 26(2), 116; https://doi.org/10.3390/e26020116 - 28 Jan 2024
Cited by 1 | Viewed by 1367
Abstract
We extend Ziv and Lempel’s model of finite-state encoders to the realm of lossy compression of individual sequences. In particular, the model of the encoder includes a finite-state reconstruction codebook followed by an information lossless finite-state encoder that compresses the reconstruction codeword with no additional distortion. We first derive two different lower bounds to the compression ratio, which depend on the number of states of the lossless encoder. Both bounds are asymptotically achievable by conceptually simple coding schemes. We then show that when the number of states of the lossless encoder is large enough in terms of the reconstruction block length, the performance can be improved, sometimes significantly so. In particular, the improved performance is achievable using a random-coding ensemble that is universal, not only in terms of the source sequence but also in terms of the distortion measure.
(This article belongs to the Collection Feature Papers in Information Theory)

28 pages, 12632 KiB  
Article
Novel Integer Shmaliy Transform and New Multiparametric Piecewise Linear Chaotic Map for Joint Lossless Compression and Encryption of Medical Images in IoMTs
by Achraf Daoui, Haokun Mao, Mohamed Yamni, Qiong Li, Osama Alfarraj and Ahmed A. Abd El-Latif
Mathematics 2023, 11(16), 3619; https://doi.org/10.3390/math11163619 - 21 Aug 2023
Cited by 12 | Viewed by 1686
Abstract
The discrete Shmaliy moment transform (DST) is a type of discrete orthogonal moment transform that is widely used in signal and image processing. However, the DST is not suitable for lossless image applications because it is not integer-reversible. To overcome this limitation, we introduce the integer discrete Shmaliy transform (IDST), which performs integer-to-integer encoding, leading to a perfect and unique reconstruction of the input image. Next, a new 1D chaotic system model, the 1D multiparametric piecewise linear chaotic map (M-PWLCM), is presented as an extension of the existing 1D PWLCM. The M-PWLCM includes eight control parameters defined over an unlimited interval. To demonstrate the relevance of the IDST and M-PWLCM in reversible image processing applications, they are used in a new scheme for lossless compression and encryption of medical images in the internet of medical things (IoMTs). On the one hand, the simulation results show that our scheme offers a good compression ratio and a higher level of security to resist differential attacks, brute-force attacks and statistical attacks. On the other hand, the comparative analysis carried out shows the overall superiority of our scheme over similar state-of-the-art ones, both in achieving a higher compression ratio and better security when communicating medical images over unsecured IoMTs.
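
For orientation, the classical single-parameter 1D PWLCM that the M-PWLCM generalizes (with eight parameters) looks as follows. The sketch shows keystream generation for encryption with illustrative seed values; it is not the paper's map.

```python
# Minimal sketch: the classical 1D piecewise linear chaotic map (PWLCM),
# the single-parameter baseline that M-PWLCM extends.
def pwlcm(x: float, p: float) -> float:
    """One iteration of the PWLCM, with control parameter 0 < p < 0.5."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)     # the map is symmetric about x = 0.5

x, p = 0.3141, 0.2718            # illustrative seed and parameter
keystream = []
for _ in range(16):
    x = pwlcm(x, p)
    keystream.append(int(x * 256) % 256)   # quantize the orbit to key bytes
print(keystream)
```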

17 pages, 7730 KiB  
Article
Two-Stage Robust Lossless DWI Watermarking Based on Transformer Networks in the Wavelet Domain
by Zhangyu Liu, Zhi Li, Long Zheng and Dandan Li
Appl. Sci. 2023, 13(12), 6886; https://doi.org/10.3390/app13126886 - 6 Jun 2023
Cited by 3 | Viewed by 1565
Abstract
For copyright protection of diffusion-weighted imaging (DWI) images, traditional robust watermarking techniques result in irreversible distortions, while reversible watermarking methods exhibit poor robustness. We propose a two-stage lossless watermarking algorithm based on a Transformer network to solve this problem. In the first stage, the robust watermarking network is trained to embed the watermark into the cover image in the wavelet domain, with a frequency information enhancement module designed to improve reconstruction quality. In the second stage, based on the pre-trained robust watermarking network, the difference image between the watermarked image and the cover image is reversibly embedded into the watermarked image as compensation information to losslessly recover the cover image. The difference image is compressed using DCT and Huffman coding to reduce the compensation information. Finally, the watermark extraction network is trained on the second embedding result to avoid weakening the robustness of the first stage caused by the reversible embedding. The experimental results demonstrate that the PSNR of the watermarked image reaches 60.18 dB. Under various types of image attacks, the watermark extraction BER is below 0.003, indicating excellent robustness. The cover image can be recovered losslessly under no attack.
(This article belongs to the Section Computing and Artificial Intelligence)
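
The stage-two idea, keeping the watermarked-minus-cover difference so the cover is recoverable bit-for-bit, can be sketched without the networks. Both images are mocked below, and the DCT/Huffman compression of the difference is elided.

```python
# Minimal sketch: difference-image compensation for lossless recovery.
import numpy as np

cover = np.random.randint(0, 256, (64, 64))
# Stand-in for the stage-one watermarked output: a small distortion.
watermarked = np.clip(cover + np.random.randint(-3, 4, cover.shape), 0, 255)

difference = watermarked - cover    # compensation info (compressed in paper)
recovered = watermarked - difference
assert np.array_equal(recovered, cover)   # cover restored losslessly
```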

19 pages, 1326 KiB  
Article
MIDOM—A DICOM-Based Medical Image Communication System
by Branimir Pervan, Sinisa Tomic, Hana Ivandic and Josip Knezovic
Appl. Sci. 2023, 13(10), 6075; https://doi.org/10.3390/app13106075 - 15 May 2023
Cited by 8 | Viewed by 2597
Abstract
Despite the existing medical infrastructure being limited in terms of interoperability, the amount of medical multimedia transferred over the network and shared through various channels is increasing rapidly. In search of consultations with colleagues, medical professionals, with the consent of their patients, usually exchange medical multimedia, mainly in the form of images, using standard instant messaging services that employ lossy compression algorithms. That consultation paradigm can easily introduce losses in image representation that can be misinterpreted and lead to the wrong diagnosis. This paper presents MIDOM—Medical Imaging and Diagnostics on the Move, a DICOM-based medical image communication system enhanced with several variants of our previously developed custom lossless Classification and Blending Predictor Coder (CBPC) compression method. The system builds on the observation that the end devices used by the general population and medical professionals alike are now sufficiently performant and energy-efficient to support custom, complex compression methods. The system has been implemented and integrated with Orthanc, a lightweight DICOM server, and a PACS server for storing medical images. We benchmarked the system thoroughly with five real-world anonymized medical image sets in terms of compression ratios and latency reduction, aiming to simulate scenarios in which the availability of medical services might be hardly reachable or otherwise limited. The results clearly show that our system, enhanced with the compression methods in question, pays off in nearly every testing scenario, lowering the network latency to no more than 60% of the latency required to send raw, uncompressed image sets, and to 25% in the best case, while maintaining the perfect reconstruction of medical images and, thus, providing a more suitable environment for healthcare applications.
(This article belongs to the Special Issue eHealth Innovative Approaches and Applications)

34 pages, 2116 KiB  
Article
Novel Block Sorting and Symbol Prediction Algorithm for PDE-Based Lossless Image Compression: A Comparative Study with JPEG and JPEG 2000
by Časlav Livada, Tomislav Horvat and Alfonzo Baumgartner
Appl. Sci. 2023, 13(5), 3152; https://doi.org/10.3390/app13053152 - 28 Feb 2023
Cited by 7 | Viewed by 1811
Abstract
In this paper, we present a novel compression method based on partial differential equations complemented by block sorting and symbol prediction. Block sorting is performed using the Burrows–Wheeler transform, while symbol prediction is performed using the context mixing method. With these transformations, the range coder is used as a lossless compression method. The objective and subjective quality evaluation of the reconstructed image illustrates the efficiency of this new compression method and is compared with the current standards, JPEG and JPEG 2000.
(This article belongs to the Special Issue Advances in Digital Image Processing)
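
The block-sorting step is the classic Burrows–Wheeler transform; a minimal sketch follows (the context mixing and range coder that follow it in the paper are omitted).

```python
# Minimal sketch: Burrows-Wheeler transform, the block-sorting step that
# groups similar symbols so a later entropy coder compresses them well.
def bwt(s: str, sentinel: str = "\x00") -> str:
    """Last column of the sorted matrix of all rotations of s + sentinel."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

print(repr(bwt("banana")))   # 'annb\x00aa' -- repeated symbols cluster
```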

26 pages, 11319 KiB  
Article
Reversible Data Hiding in Encrypted Images Based on an Adaptive Recognition Strategy for Blocks
by Zhi Pang, Han Li, Zhaolin Xiao and Liansheng Sui
Symmetry 2023, 15(2), 524; https://doi.org/10.3390/sym15020524 - 15 Feb 2023
Cited by 6 | Viewed by 2076
Abstract
As the rapid development of third-party storage and homomorphic encryption has profoundly stimulated the desire for secure communication, reversible data hiding in encrypted images has received widespread attention, since it allows lossless data conveying and perfect image recovery. In order to obtain secure reversible data hiding with high embedding capacity, a novel block embedding method is proposed, based on an adaptive recognition strategy for combined blocks in the binary image, with which adjacent identical blocks can be integrated into a combination to reserve more spare bits for data accommodation. Furthermore, a fully reversible data hiding method for grayscale images in the encryption domain is designed. The secret data is hidden in the lower bit-planes of the image, while the original bits of those embedded lower pixels are recorded in the vacated space of the higher bit-planes. The original image can be reconstructed flawlessly and the secret data extracted without errors. To reinforce security, the original image and the secret data are encrypted and scrambled based on sequences generated with a high-dimension chaotic system. Due to its high sensitivity to initial values, performance such as security and robustness is guaranteed. By comparing the PSNR value of the marked decrypted image and evaluating the quality of the extracted secret image, experimental results demonstrate that the proposed method can obtain higher embedding capacity, achieving a 0.2700–0.3924 bpp increase over state-of-the-art methods, recover the marked decrypted image with high visual symmetry/quality, and efficiently resist potential attacks, such as histogram analysis, differential, brute-force, and JPEG attacks.
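
The bit-plane bookkeeping described above can be sketched in isolation: secret bits displace a low bit-plane, and the displaced bits are retained as payload so recovery is exact. Block combination, encryption, and chaotic scrambling from the paper are omitted.

```python
# Minimal sketch: reversible LSB bit-plane embedding and exact recovery.
import numpy as np

image = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
secret = np.random.randint(0, 2, image.shape, dtype=np.uint8)

displaced = image & 1                 # original LSB plane, kept as payload
marked = (image & 0xFE) | secret      # secret now occupies the LSB plane

extracted = marked & 1                       # receiver reads the secret...
restored = (marked & 0xFE) | displaced       # ...and restores the image
assert np.array_equal(extracted, secret)
assert np.array_equal(restored, image)
```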

17 pages, 709 KiB  
Article
Lossless Reconstruction of Convolutional Neural Network for Channel-Based Network Pruning
by Donghyeon Lee, Eunho Lee and Youngbae Hwang
Sensors 2023, 23(4), 2102; https://doi.org/10.3390/s23042102 - 13 Feb 2023
Cited by 3 | Viewed by 2767
Abstract
Network pruning reduces the number of parameters and computational costs of convolutional neural networks while maintaining high performance. Although existing pruning methods have achieved excellent results, they do not consider reconstruction after pruning in order to apply the network to actual devices. This study proposes a reconstruction process for channel-based network pruning. For lossless reconstruction, we focus on three components of the network: the residual block, the skip connection, and the convolution layer. A union operation and index alignment are applied to the residual block and skip connection, respectively. Furthermore, we reconstruct a compressed convolution layer by considering batch normalization. We apply our method to existing channel-based pruning methods for downstream tasks such as image classification, object detection, and semantic segmentation. Experimental results show that a compressed large model achieves 1.93% higher accuracy in image classification, a 2.2 higher mean Intersection over Union (mIoU) in semantic segmentation, and a 0.054 higher mean Average Precision (mAP) in object detection than well-designed small models. Moreover, we demonstrate that our method can reduce the actual latency by 8.15× and 5.29× on a Raspberry Pi and Jetson Nano, respectively.
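
Lossless here means the physically smaller layers compute exactly what the kept channels of the original computed. A minimal PyTorch sketch for one Conv-BN pair follows; the paper's union operation and index alignment for residual blocks and skip connections are omitted, and layer sizes are illustrative.

```python
# Minimal sketch: materialize a channel-pruned Conv-BN pair whose output
# exactly matches the kept channels of the original (lossless reconstruction).
import torch
import torch.nn as nn

conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
keep = torch.tensor([0, 2, 5, 7])              # channels surviving pruning

small_conv = nn.Conv2d(3, len(keep), 3, padding=1)
small_bn = nn.BatchNorm2d(len(keep))
small_conv.weight.data = conv.weight.data[keep].clone()
small_conv.bias.data = conv.bias.data[keep].clone()
for name in ("weight", "bias", "running_mean", "running_var"):
    getattr(small_bn, name).data = getattr(bn, name).data[keep].clone()

bn.eval(); small_bn.eval()                     # use running statistics
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(small_bn(small_conv(x)), bn(conv(x))[:, keep], atol=1e-5)
```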

17 pages, 4913 KiB  
Article
IEF-CSNET: Information Enhancement and Fusion Network for Compressed Sensing Reconstruction
by Ziqun Zhou, Fengyin Liu and Haibin Shen
Sensors 2023, 23(4), 1886; https://doi.org/10.3390/s23041886 - 8 Feb 2023
Cited by 4 | Viewed by 2203
Abstract
The rapidly growing demand for data has put forward Compressed Sensing (CS) to realize low-ratio sampling and reconstruct complete signals. With the intensive development of Deep Neural Network (DNN) methods, performance in image reconstruction from CS measurements is constantly increasing. Currently, many network structures pay little attention to the relationship between before- and after-stage results and fail to make full use of relevant information in the compressed domain to achieve inter-block information fusion and a large receptive field. Additionally, multiple resamplings and repeated forced compressions of the information flow inevitably cause information loss and network structure redundancy. Therefore, an Information Enhancement and Fusion Network for CS reconstruction (IEF-CSNET) is proposed in this work, and a Compressed Information Extension (CIE) module is designed to fuse the compressed information in the compressed domain and greatly expand the receptive field. The Error Comprehensive Consideration Enhancement (ECCE) module enhances the error image by incorporating the previously recovered error so that the interlink among iterations can be utilized for better recovery. In addition, an Iterative Information Flow Enhancement (IIFE) module is further proposed to complete the progressive recovery with lossless information transmission during iteration. In summary, the proposed method achieves the best performance and high robustness, improving the peak signal-to-noise ratio (PSNR) by 0.59 dB on average over all test sets and sampling rates, with greatly improved speed compared with the best existing algorithm.
(This article belongs to the Special Issue Compressed Sensing and Imaging Processing)
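
For context, the measurement model such networks invert is block-wise random sampling at a low ratio; a minimal sketch follows, with block size and sampling rate illustrative rather than taken from the paper.

```python
# Minimal sketch: the compressed-sensing measurement y = A @ x that
# CS reconstruction networks learn to invert; not the paper's sampler.
import numpy as np

rng = np.random.default_rng(1)
block = rng.random(33 * 33)            # one flattened 33x33 image block
ratio = 0.10                           # 10% sampling rate
A = rng.standard_normal((int(ratio * block.size), block.size))
y = A @ block                          # compressed measurements
print(block.size, "->", y.size, "measurements")
```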

17 pages, 3790 KiB  
Article
Subjective Assessment of Objective Image Quality Metrics Range Guaranteeing Visually Lossless Compression
by Afnan, Faiz Ullah, Yaseen, Jinhee Lee, Sonain Jamil and Oh-Jin Kwon
Sensors 2023, 23(3), 1297; https://doi.org/10.3390/s23031297 - 23 Jan 2023
Cited by 10 | Viewed by 5875
Abstract
The usage of media such as images and videos has increased extensively in recent years. It has become impractical to store images and videos acquired by camera sensors in their raw form due to their huge storage size. Generally, image data is compressed with a compression algorithm and then stored or transmitted to another platform. Thus, image compression helps to reduce the storage size and transmission cost of images and videos. However, image compression might cause visual artifacts, depending on the compression level. In this regard, performance evaluation of compression algorithms is an essential task for reconstructing images with visually or near-visually lossless quality in the case of lossy compression. The performance of compression algorithms is assessed by both subjective and objective image quality assessment (IQA) methodologies. In this paper, subjective and objective IQA methods are integrated to evaluate the range of image quality metric (IQM) values that guarantees visually or near-visually lossless compression performed by the JPEG 1 standard (ISO/IEC 10918). A novel “Flicker Test Software” was developed for conducting the proposed subjective and objective evaluation study. In the flicker test, the selected test images are subjectively analyzed by subjects at different compression levels, and the IQMs are calculated at the last compression level at which the images remained visually lossless for each subject. The analysis shows that the objective IQMs whose values are most closely packed (i.e., with the least standard deviation) while guaranteeing visually lossless compression of images with JPEG 1 are the feature similarity index measure (FSIM), the multiscale structural similarity index measure (MS-SSIM), and the information content weighted SSIM (IW-SSIM), with average values of 0.9997, 0.9970, and 0.9970, respectively.
(This article belongs to the Topic Advances in Perceptual Quality Assessment of User Generated Contents)
(This article belongs to the Section Intelligent Sensors)
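
A minimal sketch of the objective half of such a study: compress an image with JPEG 1 and score the decoded result with an IQM. Plain SSIM via scikit-image is used below as a stand-in, since FSIM and IW-SSIM need third-party implementations; the test image is synthetic and illustrative.

```python
# Minimal sketch: JPEG-compress an image and compute an objective IQM.
import io

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

reference = np.array(Image.effect_noise((256, 256), 64))  # synthetic 8-bit image

buffer = io.BytesIO()
Image.fromarray(reference).save(buffer, format="JPEG", quality=85)
decoded = np.array(Image.open(buffer).convert("L"))

# Values near 1.0 (cf. the 0.997+ averages above) suggest the pair is
# close to visually lossless for this metric.
print("SSIM:", structural_similarity(reference, decoded))
```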