Special Issue "Multimedia Information Compression and Coding"

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information and Communications Technology".

Deadline for manuscript submissions: closed (30 April 2016)

Special Issue Editor

Guest Editor
Prof. Dr. Khalid Sayood

Department of Electrical Engineering, University of Nebraska-Lincoln, 209N Scott Engineering Center, P.O. Box 880511, Lincoln, NE 68588-0511, USA
Interests: data compression; joint source-channel coding; bioinformatics; teaching; information theory

Special Issue Information

Dear Colleagues,

Compression can be viewed as a mature field. Huffman codes were introduced in the 1950s, while arithmetic coding and dictionary coding made their appearance in the 1970s. On the lossy side, predictive compression traces its history to the 1950s, transform coding to the 1960s, and wavelet-based compression to the 1990s. While the basic techniques have been around for a while, recent years have seen the appearance of new modalities and new platforms for compression. The ubiquity of compression has extended its use to data types that did not exist twenty years ago, and the same ubiquity has made security and privacy matters of concern. This Special Issue focuses on all of these aspects of multimedia information compression and coding.
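For readers coming to the field fresh, the oldest of these techniques is also the easiest to demonstrate. The following minimal Python sketch (purely illustrative, not drawn from any paper in this issue) builds a Huffman code table from the symbol frequencies of its input:

    import heapq
    from collections import Counter

    def huffman_codes(data):
        # One (weight, tiebreaker, tree) heap entry per distinct symbol.
        # The tiebreaker keeps tuple comparison away from the trees,
        # which are not mutually comparable.
        heap = [(w, i, sym) for i, (sym, w) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        if len(heap) == 1:
            return {heap[0][2]: "0"}   # degenerate single-symbol input
        count = len(heap)
        while len(heap) > 1:
            # Repeatedly merge the two lightest trees.
            w1, _, t1 = heapq.heappop(heap)
            w2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
            count += 1
        codes = {}
        def walk(tree, prefix):
            if isinstance(tree, tuple):        # internal node
                walk(tree[0], prefix + "0")
                walk(tree[1], prefix + "1")
            else:                              # leaf: a symbol
                codes[tree] = prefix
        walk(heap[0][2], "")
        return codes

    print(huffman_codes("abracadabra"))  # frequent symbols get short codes

Arithmetic and dictionary coders replace the fixed per-symbol codewords produced here with fractional-bit intervals and with references into previously seen data, respectively.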

Prospective authors are invited to submit previously unpublished works in these areas. Topics of interest include but are not restricted to:

  • Video compression
  • High Efficiency Video Coding
  • Network compression
  • Genomic compression
  • Hyperspectral compression
  • Quantum compression
  • Compression and cryptography
  • Compression of biological signals
  • Compression over sensor networks
  • Compression and Big Data
  • Medical image compression

Prof. Dr. Khalid Sayood
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 350 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (5 papers)


Research

Jump to: Review

Open Access Article: Lazy Management for Frequency Table on Hardware-Based Stream Lossless Data Compression
Information 2016, 7(4), 63; doi:10.3390/info7040063
Received: 29 April 2016 / Revised: 21 September 2016 / Accepted: 29 September 2016 / Published: 31 October 2016
Abstract
The demand for communicating large amounts of data in real time has raised new challenges in implementing high-speed communication paths for high-definition video and sensory data. Meeting this demand requires data paths implemented in hardware, and the implementation difficulties have to be addressed by applying new techniques based on data-oriented algorithms. This paper focuses on a solution to this problem that applies a lossless data compression mechanism on the communication data path. The new mechanism, called LCA-DLT, provides dynamic histogram management for the symbol lookup tables used in the compression and decompression operations. When the histogram memory is fully used, the management algorithm needs to find the least-used entries and invalidate them; these invalidation operations block the compression and decompression data stream. This paper proposes novel techniques that eliminate the blocking by introducing a dynamic invalidation mechanism, which allows high-throughput data compression to be achieved.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
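The paper above targets a hardware pipeline, but the underlying bookkeeping problem can be sketched in a few lines of software. The Python sketch below is our own caricature of the general idea, not the authors' LCA-DLT design (the class name and the eager eviction policy are our inventions): a fixed-capacity frequency table that invalidates its least-used entry when full, which is exactly the search step whose cost the paper's lazy scheme is designed to hide.

    class FrequencyTable:
        """Fixed-capacity symbol table that evicts its least-used entry
        when full -- an illustration of the histogram-management problem,
        not the authors' LCA-DLT design."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.counts = {}                      # symbol -> use count

        def lookup(self, symbol):
            # Return a small integer code for `symbol`, inserting it
            # first if it is not yet in the table.
            if symbol not in self.counts:
                if len(self.counts) >= self.capacity:
                    # Invalidate the least-used entry. In a hardware
                    # pipeline, this search is the stream-blocking step
                    # that lazy management works around.
                    victim = min(self.counts, key=self.counts.get)
                    del self.counts[victim]
                self.counts[symbol] = 0
            self.counts[symbol] += 1
            return list(self.counts).index(symbol)   # stand-in for a real code

    table = FrequencyTable(capacity=4)
    codes = [table.lookup(s) for s in "abcabcdeab"]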

Open Access Article: A Survey on Data Compression Methods for Biological Sequences
Information 2016, 7(4), 56; doi:10.3390/info7040056
Received: 27 June 2016 / Revised: 23 September 2016 / Accepted: 29 September 2016 / Published: 14 October 2016
Abstract
The ever-increasing growth in the production of high-throughput sequencing data poses a serious challenge to the storage, processing, and transmission of these data. As frequently stated, it is a data deluge. Compression is essential to address this challenge: it reduces storage space and processing costs, and speeds up data transmission. In this paper, we provide a comprehensive survey of existing compression approaches that are specialized for biological data, including protein and DNA sequences. We also devote an important part of the paper to approaches proposed for the compression of different file formats, such as FASTA, FASTQ, and SAM/BAM, which contain quality scores and metadata in addition to the biological sequences. We then compare the performance of several methods in terms of compression ratio, memory usage, and compression/decompression time. Finally, we present some suggestions for future research on biological data compression.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
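A useful mental baseline for the specialized compressors surveyed above is the naive encoding they all must beat: packing the four-letter DNA alphabet into two bits per base. A minimal sketch of ours (not a method from the survey) follows.

    # Pack an A/C/G/T string into 2 bits per base -- the trivial
    # baseline that specialized DNA compressors must improve on.
    BASE2BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
    BITS2BASE = {v: k for k, v in BASE2BITS.items()}

    def pack(seq):
        value = 0
        for base in seq:
            value = (value << 2) | BASE2BITS[base]
        return value, len(seq)

    def unpack(value, n):
        bases = []
        for _ in range(n):
            bases.append(BITS2BASE[value & 0b11])
            value >>= 2
        return "".join(reversed(bases))

    packed, n = pack("ACGTACGT")
    assert unpack(packed, n) == "ACGTACGT"   # 16 bits instead of 64 ASCII bits

Real DNA compressors go well below two bits per base by exploiting repeats and statistical structure, and formats such as FASTQ add the harder problem of compressing quality scores.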

Open Access Article: Efficient Software HEVC to AVS2 Transcoding
Information 2016, 7(3), 53; doi:10.3390/info7030053
Received: 29 July 2016 / Revised: 8 September 2016 / Accepted: 14 September 2016 / Published: 19 September 2016
Abstract
The second generation of the Audio and Video coding Standard (AVS) was developed by the IEEE 1857 Working Group under project 1857.4 and was standardized in 2016 by the AVS Working Group of China as the new broadcasting standard AVS2. High Efficiency Video Coding (HEVC) is the newest global video coding standard, announced in 2013, and more and more applications are migrating from H.264/AVC to HEVC because of its higher compression performance. In this paper, we propose an efficient HEVC to AVS2 transcoding algorithm, which applies a multi-stage decoding information utilization framework to maximize the usage of the decoding information in the transcoding process. The proposed algorithm achieves 11×–17× speed gains over the AVS2 reference software RD 14.0 with a modest BD-rate loss of 9.6%–16.6%.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
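The BD-rate figures quoted in the abstract come from the standard Bjontegaard metric, which fits cubic curves of log-bitrate against PSNR for two codecs and integrates the gap over their shared quality range. A compact sketch follows; the rate-distortion points are made up for illustration.

    import numpy as np

    def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
        # Fit cubic polynomials of log-rate as a function of PSNR for
        # the reference and test codecs, integrate both over the
        # overlapping PSNR range, and convert the mean log-rate gap
        # to a percentage.
        p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
        p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
        lo = max(min(psnr_ref), min(psnr_test))
        hi = min(max(psnr_ref), max(psnr_test))
        int_ref = np.diff(np.polyval(np.polyint(p_ref), [lo, hi]))[0]
        int_test = np.diff(np.polyval(np.polyint(p_test), [lo, hi]))[0]
        avg_log_gap = (int_test - int_ref) / (hi - lo)
        return (np.exp(avg_log_gap) - 1) * 100   # positive: test needs more bits

    # Made-up rate-distortion points (kbps, dB); the test codec spends
    # 10% more bits at every quality level, so BD-rate is about +10%.
    print(bd_rate([400, 800, 1600, 3200], [33, 36, 39, 42],
                  [440, 880, 1760, 3520], [33, 36, 39, 42]))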

Open Access Article: Visually Lossless JPEG 2000 for Remote Image Browsing
Information 2016, 7(3), 45; doi:10.3390/info7030045
Received: 17 May 2016 / Revised: 22 June 2016 / Accepted: 1 July 2016 / Published: 15 July 2016
Abstract
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
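At the heart of the method above is the observation that a coarser display resolution tolerates a larger quantization step while remaining visually lossless. The sketch below pairs a hypothetical table of per-resolution step sizes (the numbers are made up; measuring the actual visibility thresholds is what the paper does) with the dead-zone scalar quantizer used in JPEG 2000.

    import numpy as np

    # Hypothetical visibility thresholds: the coarser the display
    # resolution, the larger the step that stays visually lossless.
    STEP_BY_RESOLUTION = {0: 2.0, 1: 4.0, 2: 8.0}   # 0 = full resolution

    def deadzone_quantize(coeffs, display_level):
        # Dead-zone scalar quantization of wavelet coefficients, with
        # the step size chosen for the target display resolution.
        step = STEP_BY_RESOLUTION[display_level]
        indices = np.sign(coeffs) * np.floor(np.abs(coeffs) / step)
        return indices.astype(int), step

    def dequantize(indices, step, gamma=0.5):
        # Reconstruct at a point (gamma) within each quantizer bin.
        return np.where(indices == 0, 0.0,
                        np.sign(indices) * (np.abs(indices) + gamma) * step)

    coeffs = np.array([-9.7, -3.1, 0.4, 5.2, 12.8])
    idx, step = deadzone_quantize(coeffs, display_level=1)
    print(idx, dequantize(idx, step))

The paper's contribution is packing codestream segments quantized for several such step sizes into one Part 1 compliant codestream, so a JPIP client fetches only what the current display resolution needs.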

Review

Jump to: Research

Open Access Review: Speech Compression
Information 2016, 7(2), 32; doi:10.3390/info7020032
Received: 22 April 2016 / Revised: 24 May 2016 / Accepted: 30 May 2016 / Published: 3 June 2016
Abstract
Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
(This article belongs to the Special Issue Multimedia Information Compression and Coding)
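The linear prediction model at the core of this review reduces, in its simplest form, to the autocorrelation method solved by the Levinson-Durbin recursion. A self-contained sketch of ours (not taken from the review):

    import numpy as np

    def lpc(frame, order):
        # Autocorrelation of the analysis frame at lags 0..order.
        n = len(frame)
        r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
        # Levinson-Durbin recursion: solve the Toeplitz normal equations.
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err                       # reflection coefficient
            a[1:i] = a[1:i] + k * a[i - 1:0:-1]
            a[i] = k
            err *= 1.0 - k * k                   # remaining prediction error
        return a, err

    # Toy usage: a 10th-order predictor for a synthetic vowel-like
    # frame (30 ms at 8 kHz).
    t = np.arange(240) / 8000.0
    frame = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
    coeffs, residual_energy = lpc(frame, order=10)

Standards from LPC-10 through CELP-based cellular codecs differ mainly in how they quantize these coefficients and encode the prediction residual.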

Planned Papers

The list below represents planned manuscripts only. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Visually Lossless JPEG2000 for Remote Image Browsing
Authors: Michael W. Marcellin and Ali Bilgin
Affiliation: The University of Arizona
