
Information Theory and Coding for Image/Video Processing

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 11776

Special Issue Editors


Prof. Dr. Ofer Hadar
Guest Editor
School of Electrical and Computer Engineering, Ben Gurion University of the Negev, Be’er-Sheva 84105001, Israel
Interests: image/video processing; video coding; multimedia communication; watermarking

Mr. Shevach Riabtsev
Guest Editor Assistant
School of Electrical and Computer Engineering, Ben Gurion University of the Negev, Be’er-Sheva 84105001, Israel
Interests: video and image compression; media streaming; computer vision; artificial intelligence; Python

Special Issue Information

Dear Colleagues,

This Special Issue aims to present the latest advancements in the field of information theory and coding techniques specifically tailored for image and video processing applications. We welcome original research papers, review articles, and case studies that explore the theory, methodologies, algorithms, and practical implementations of information theory and coding in the context of image and video processing.

Topics of interest include but are not limited to:

  1. Image and video compression techniques
  2. Error control coding for image and video transmission
  3. Joint source-channel coding for image and video communication
  4. Channel coding schemes for multimedia transmission
  5. Image and video watermarking
  6. Coding techniques for multimedia storage and retrieval
  7. Coding for multimedia streaming and adaptive streaming
  8. Information-theoretic analysis of image and video processing algorithms
  9. Source coding for virtual reality and augmented reality applications
  10. Coding for 3D imaging and depth perception
  11. Machine learning-based coding techniques for image and video processing
  12. Cross-layer design for information theory and coding in multimedia systems
  13. Security and privacy in image and video coding

Prof. Dr. Ofer Hadar
Mr. Shevach Riabtsev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

20 pages, 5238 KiB  
Article
A Novel Video Compression Approach Based on Two-Stage Learning
by Dan Shao, Ning Wang, Pu Chen, Yu Liu and Lin Lin
Entropy 2024, 26(12), 1110; https://doi.org/10.3390/e26121110 - 19 Dec 2024
Viewed by 537
Abstract
In recent years, the rapid growth of video data has posed challenges for storage and transmission, and video compression techniques provide a viable solution. In this study, we propose a bidirectional coding video compression model named DeepBiVC, based on two-stage learning. First, we preprocess the video data by segmenting the video stream into groups of five consecutive frames. In the first stage, an image compression module based on an invertible neural network (INN) compresses the first and last frames of each group; in the second stage, a video compression module compresses the intermediate frames using bidirectional optical flow estimation. Experimental results indicate that DeepBiVC outperforms other state-of-the-art video compression methods on the PSNR and MS-SSIM metrics. Specifically, on the VUG dataset at bpp = 0.3, DeepBiVC achieves a PSNR of 37.16 and an MS-SSIM of 0.98.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
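
To make the two-stage structure concrete, below is a minimal sketch of the grouping step the abstract describes: frames are split into groups of five, the boundary frames of each group go to an image codec, and the intermediate frames go to a flow-based module conditioned on the reconstructed boundaries. The codec callables are hypothetical placeholders, not the actual DeepBiVC modules.

```python
import numpy as np

def split_into_groups(frames: list, group_size: int = 5) -> list:
    """Partition a frame sequence into consecutive groups of group_size frames."""
    return [frames[i:i + group_size] for i in range(0, len(frames), group_size)]

def compress_group(group, image_codec, video_codec):
    """Two-stage coding of one group: image-code the first and last frames,
    then code each intermediate frame against the two boundary reconstructions
    (DeepBiVC does this with bidirectional optical flow estimation)."""
    first, last = image_codec(group[0]), image_codec(group[-1])
    middle = [video_codec(f, first, last) for f in group[1:-1]]
    return first, middle, last

# Toy usage with identity "codecs" just to exercise the control flow.
frames = [np.zeros((64, 64)) for _ in range(10)]
coded = [compress_group(g, lambda x: x, lambda f, a, b: f)
         for g in split_into_groups(frames)]
```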

23 pages, 1145 KiB  
Article
State-of-the-Art Trends in Data Compression: COMPROMISE Case Study
by David Podgorelec, Damjan Strnad, Ivana Kolingerová and Borut Žalik
Entropy 2024, 26(12), 1032; https://doi.org/10.3390/e26121032 - 29 Nov 2024
Viewed by 962
Abstract
After a boom that coincided with the advent of the internet, digital cameras, and digital video and audio storage and playback devices, research on data compression rested on its laurels for a quarter of a century. Domain-dependent lossy algorithms of the time, such as JPEG, AVC, MP3 and others, achieved remarkable compression ratios and encoding and decoding speeds with acceptable data quality, which has kept them in common use to this day. However, recent computing paradigms such as cloud computing, edge computing, the Internet of Things (IoT), and digital preservation have gradually posed new challenges, and, as a consequence, development trends in data compression are focusing on concepts that were not previously in the spotlight. In this article, we critically evaluate the most prominent of these trends and explore their parallels, complementarities, and differences. Digital data restoration mimics the human ability to avoid memorising information that can be satisfactorily retrieved from context. Feature-based data compression introduces a two-level data representation with higher-level semantic features and with residuals that correct the feature-restored (predicted) data. Integrating the advantages of individual domain-specific data compression methods into a general approach is also challenging, and, to the best of our knowledge, no method yet addresses all of these trends. Our methodology, COMPROMISE, has been developed precisely to make as many solutions to these challenges as possible interoperable. It incorporates features and digital restoration. Furthermore, it is largely domain-independent (general), asymmetric, and universal; the latter refers to the ability to compress data within a common framework in lossy, lossless, and near-lossless modes. COMPROMISE may also be considered an umbrella that links many existing domain-dependent and domain-independent methods, supports hybrid lossless–lossy techniques, and encourages the development of new data compression algorithms.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
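
As a toy illustration of the "universal" property described above — lossy, lossless, and near-lossless compression in one framework — the sketch below codes the residual between data and a feature-level prediction with a uniform quantiser whose step bounds the reconstruction error: step = 1 is lossless for integer data, while larger steps are near-lossless with a known error bound. This is a minimal stand-in of my own, not the COMPROMISE method itself.

```python
import numpy as np

def encode(data: np.ndarray, prediction: np.ndarray, step: int) -> np.ndarray:
    """Quantise the residual against the feature-restored (predicted) data.
    step == 1 is lossless for integers; step > 1 bounds |error| by step // 2."""
    residual = data.astype(np.int64) - prediction.astype(np.int64)
    return np.round(residual / step).astype(np.int64)

def decode(quantised: np.ndarray, prediction: np.ndarray, step: int) -> np.ndarray:
    """Restore the data as the prediction plus the dequantised residual."""
    return prediction.astype(np.int64) + quantised * step

data = np.array([10, 12, 15, 14, 13])
pred = np.array([10, 11, 14, 14, 14])   # e.g., restored from semantic features
for step in (1, 3):
    rec = decode(encode(data, pred, step), pred, step)
    print(step, np.max(np.abs(rec - data)))   # 0 when step == 1
```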

23 pages, 7106 KiB  
Article
A Convolutional Neural Network-Based Quantization Method for Block Compressed Sensing of Images
by Jiulu Gong, Qunlin Chen, Wei Zhu and Zepeng Wang
Entropy 2024, 26(6), 468; https://doi.org/10.3390/e26060468 - 29 May 2024
Viewed by 931
Abstract
Block compressed sensing (BCS) is a promising method for resource-constrained image/video coding applications. However, the quantization of BCS measurements has posed a challenge, leading to significant quantization errors and encoding redundancy. In this paper, we propose a quantization method for BCS measurements using convolutional neural networks (CNNs). The quantization process maps measurements to quantized data that follow a uniform distribution, based on the measurements' distribution, which aims to maximize the amount of information carried by the quantized data. The dequantization process restores the quantized data to data that conform to the measurements' distribution. The restored data are then refined using correlation information of the measurements drawn from the quantized data, with the goal of minimizing the quantization errors. The proposed method uses CNNs to construct the quantization and dequantization processes, and the networks are trained jointly. The distribution parameters of each block are used as side information, which is quantized with 1 bit by the same method. Extensive experiments on four public datasets showed that, compared with uniform quantization and entropy coding, the proposed method improves the PSNR by an average of 0.48 dB without entropy coding at a compression bit rate of 0.1 bpp.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
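
The distribution-matching idea above has a closed-form analogue that is easy to demonstrate: if the measurements are (assumed) Gaussian, passing them through the Gaussian CDF yields uniformly distributed values, so a plain uniform quantiser then produces maximum-entropy indices. The sketch below uses this analytic stand-in for the learned CNN mappings; the Gaussian assumption is mine, not the paper's.

```python
import numpy as np
from scipy.stats import norm

def quantize(y: np.ndarray, bits: int, mu: float, sigma: float) -> np.ndarray:
    """Map measurements to [0, 1) via their assumed CDF, then quantise uniformly."""
    levels = 2 ** bits
    u = norm.cdf(y, loc=mu, scale=sigma)
    return np.clip((u * levels).astype(int), 0, levels - 1)

def dequantize(idx: np.ndarray, bits: int, mu: float, sigma: float) -> np.ndarray:
    """Take each cell's midpoint in the uniform domain and map it back."""
    u = (idx + 0.5) / 2 ** bits
    return norm.ppf(u, loc=mu, scale=sigma)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 2.0, size=1000)            # stand-in BCS measurements
idx = quantize(y, bits=4, mu=0.0, sigma=2.0)
y_hat = dequantize(idx, bits=4, mu=0.0, sigma=2.0)
print(np.mean((y - y_hat) ** 2))               # quantisation MSE
```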

16 pages, 633 KiB  
Article
Influential Metrics Estimation and Dynamic Frequency Selection Based on Two-Dimensional Mapping for JPEG-Reversible Data Hiding
by Haiyong Wang and Chentao Lu
Entropy 2024, 26(4), 301; https://doi.org/10.3390/e26040301 - 29 Mar 2024
Viewed by 1022
Abstract
JPEG Reversible Data Hiding (RDH) is a method designed to extract hidden data from a marked image and perfectly restore the image to its original JPEG form. However, while existing RDH methods adaptively manage the visual distortion caused by embedded data, they often neglect the concurrent increase in file size. To rectify this oversight, we design a new JPEG RDH scheme that accounts for all influential metrics during the embedding phase and employs a dynamic frequency selection strategy whose frequency order is recoverable after data embedding. The process begins with a pre-processing phase for blocks and the subsequent selection of frequencies. Using a two-dimensional (2D) mapping strategy, we then compute the visual distortion and file size increment (FSI) for each image block by examining non-zero alternating current (AC) coefficient pairs (NZACPs) and their corresponding run lengths. Finally, we select appropriate block groups based on the influential metrics of each group and embed the data by 2D histogram shifting (HS). Extensive experiments demonstrate that our method efficiently and consistently outperforms existing techniques, achieving a superior peak signal-to-noise ratio (PSNR) and an optimized FSI.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
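
For readers unfamiliar with histogram shifting, here is a deliberately simplified one-dimensional sketch on non-negative coefficients (the paper's 2D variant operates on pairs of non-zero AC coefficients and additionally tracks the file size increment). Coefficients equal to 1 are expanded to carry one payload bit each, while larger values are shifted up by one so that extraction is perfectly reversible; the payload length must match the number of 1-valued coefficients.

```python
def hs_embed(coeffs, bits):
    """Embed one bit into each coefficient equal to 1; shift values >= 2 up."""
    out, it = [], iter(bits)
    for c in coeffs:
        if c == 1:
            out.append(1 + next(it))   # 1 -> 1 (bit 0) or 2 (bit 1)
        elif c >= 2:
            out.append(c + 1)          # shift so the mapping stays invertible
        else:
            out.append(c)              # zeros are untouched
    return out

def hs_extract(marked):
    """Recover the payload and restore the original coefficients exactly."""
    coeffs, bits = [], []
    for c in marked:
        if c in (1, 2):
            bits.append(c - 1)
            coeffs.append(1)
        elif c >= 3:
            coeffs.append(c - 1)
        else:
            coeffs.append(c)
    return coeffs, bits

original = [0, 1, 3, 1, 0, 2, 1]
restored, payload = hs_extract(hs_embed(original, [1, 0, 1]))
assert restored == original and payload == [1, 0, 1]
```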

13 pages, 27073 KiB  
Article
A New Transformation Technique for Reducing Information Entropy: A Case Study on Greyscale Raster Images
by Borut Žalik, Damjan Strnad, David Podgorelec, Ivana Kolingerová, Luka Lukač, Niko Lukač, Simon Kolmanič, Krista Rizman Žalik and Štefan Kohek
Entropy 2023, 25(12), 1591; https://doi.org/10.3390/e25121591 - 27 Nov 2023
Cited by 2 | Viewed by 1264
Abstract
This paper proposes a new string transformation technique called Move with Interleaving (MwI). Four ways of rearranging 2D raster images into 1D sequences of values are applied, namely scan-line, left-right, strip-based, and Hilbert arrangements. Experiments on 32 benchmark greyscale raster images of various resolutions demonstrated that the proposed transformation reduces information entropy to a similar extent as the Burrows–Wheeler transform followed by Move-To-Front or Inversion Frequencies. MwI yields the best result among all the considered transformations when the Hilbert arrangement is applied.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
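
The evaluation criterion above — zeroth-order information entropy before and after a transformation — is straightforward to reproduce. The sketch below measures it and applies a classic Move-To-Front pass (one of the comparators named in the abstract; MwI itself is not reproduced here) to show how such transformations lower the entropy of data with long runs.

```python
import math
from collections import Counter

def entropy(seq) -> float:
    """Zeroth-order entropy of a sequence, in bits per symbol."""
    counts, n = Counter(seq), len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def move_to_front(seq, alphabet_size=256):
    """Replace each symbol by its index in a self-organising symbol table."""
    table, out = list(range(alphabet_size)), []
    for s in seq:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

data = [5] * 30 + [9] * 30 + [5] * 30      # long runs, as after a BWT
print(entropy(data), entropy(move_to_front(data)))   # MTF output has lower entropy
```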

24 pages, 13345 KiB  
Article
An Improved Image Compression Algorithm Using 2D DWT and PCA with Canonical Huffman Encoding
by Rajiv Ranjan and Prabhat Kumar
Entropy 2023, 25(10), 1382; https://doi.org/10.3390/e25101382 - 25 Sep 2023
Cited by 7 | Viewed by 2770
Abstract
Image compression has lately become crucial due to the rising need for faster encoding and decoding. To achieve this objective, the present study proposes the use of canonical Huffman coding (CHC) as an entropy coder, which entails a lower decoding time than binary Huffman codes. For image compression, discrete wavelet transform (DWT) and CHC were combined with principal component analysis (PCA). PCA introduces the lossy step, followed by DWT and CHC, to enhance compression efficiency; using DWT and CHC instead of PCA alone gives the reconstructed images a better peak signal-to-noise ratio (PSNR). In this study, we also developed a hybrid compression model combining the advantages of DWT, CHC and PCA. With the increasing use of image data, better image compression techniques are necessary for the efficient use of storage space. The proposed technique achieved up to 60% compression while maintaining high visual quality, and it outperformed currently available techniques in terms of both PSNR (in dB) and bits per pixel (bpp). The approach was tested on various color images, including Peppers 512 × 512 × 3 and Couple 256 × 256 × 3, showing improvements of 17 dB and 22 dB, respectively, while reducing the bpp by 0.56 and 0.10, respectively. For the grayscale images Lena 512 × 512 and Boat 256 × 256, the proposed method showed improvements of 5 dB and 8 dB, respectively, with a decrease of 0.02 bpp in both cases.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
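
The decoding-speed advantage of canonical Huffman coding mentioned above comes from assigning codewords in a fixed order determined by code lengths alone, so a decoder can be driven from the lengths without storing the tree. A minimal sketch of that canonical assignment, independent of the paper's DWT/PCA pipeline:

```python
def canonical_codes(code_lengths: dict) -> dict:
    """Assign canonical Huffman codewords from a symbol -> code-length map.
    Symbols are visited in (length, symbol) order; each codeword is the
    previous one plus one, left-shifted whenever the length grows."""
    code, prev_len, codes = 0, 0, {}
    for sym, length in sorted(code_lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= length - prev_len
        codes[sym] = format(code, f"0{length}b")
        code, prev_len = code + 1, length
    return codes

# Lengths as an ordinary Huffman construction might produce them.
print(canonical_codes({"a": 1, "b": 3, "c": 3, "d": 2}))
# -> {'a': '0', 'd': '10', 'b': '110', 'c': '111'}
```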

Review

35 pages, 10230 KiB  
Review
Unveiling the Future of Human and Machine Coding: A Survey of End-to-End Learned Image Compression
by Chen-Hsiu Huang and Ja-Ling Wu
Entropy 2024, 26(5), 357; https://doi.org/10.3390/e26050357 - 24 Apr 2024
Cited by 1 | Viewed by 2737
Abstract
End-to-end learned image compression codecs have notably emerged in recent years. These codecs have demonstrated superiority over conventional methods, showcasing remarkable flexibility and adaptability across diverse data domains while supporting new distortion losses. Despite challenges such as computational complexity, learned image compression methods inherently align with learning-based data processing and analytic pipelines due to their well-suited internal representations. The concept of Video Coding for Machines has garnered significant attention from both academic researchers and industry practitioners. This concept reflects the growing need to integrate data compression with computer vision applications. In light of these developments, we present a comprehensive survey and review of lossy image compression methods. Additionally, we provide a concise overview of two prominent international standards, MPEG Video Coding for Machines and JPEG AI. These standards are designed to bridge the gap between data compression and computer vision, catering to practical industry use cases.
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
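
The "new distortion losses" the survey refers to slot into the rate–distortion Lagrangian, L = R + λ·D, that end-to-end learned codecs typically minimise: R estimates the bits spent on the latents and D is any differentiable distortion. A schematic sketch with mean-squared-error distortion (the likelihoods here are toy values, not a real entropy model):

```python
import numpy as np

def rd_loss(x: np.ndarray, x_hat: np.ndarray,
            latent_likelihoods: np.ndarray, lam: float) -> float:
    """Rate-distortion Lagrangian L = R + lam * D.
    R: estimated bits per pixel, -sum(log2 p(latent)) / num_pixels.
    D: per-pixel MSE here; an MS-SSIM-based term could be substituted."""
    rate = -np.sum(np.log2(latent_likelihoods)) / x.size
    distortion = np.mean((x - x_hat) ** 2)
    return rate + lam * distortion

rng = np.random.default_rng(0)
x = rng.random((64, 64))
x_hat = x + 0.01 * rng.standard_normal((64, 64))
p = np.full(256, 0.9)                     # toy likelihoods for 256 latents
print(rd_loss(x, x_hat, p, lam=100.0))
```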
