Search Results (2)

Search Parameters:
Keywords = non-uniform scalar quantizers

20 pages, 934 KB  
Article
Non-Uniform Entropy-Constrained L∞ Quantization for Sparse and Irregular Sources
by Alin-Adrian Alecu, Mohammad Ali Tahouri, Adrian Munteanu and Bujor Păvăloiu
Entropy 2025, 27(11), 1126; https://doi.org/10.3390/e27111126 - 31 Oct 2025
Viewed by 509
Abstract
Near-lossless coding schemes traditionally rely on uniform quantization to control the maximum absolute error (L∞ norm) of residual signals, often assuming a parametric model for the source distribution. This paper introduces a novel design framework for non-uniform, entropy-aware L∞-oriented scalar quantizers that leverages a tight and differentiable approximation of the L∞ distortion metric and does not require any parametric density function formulations. The framework is evaluated on both synthetic parametric sources and real-world medical depth map video datasets. For smoothly decaying distributions, such as the continuous Laplacian or discrete two-sided geometric distributions, the proposed method naturally converges to near-uniform quantizers, consistent with theoretical expectations. In contrast, for sparse or irregular sources, the algorithm produces highly non-uniform bin allocations that adapt to the local distribution structure and improve rate-distortion efficiency. When embedded in a residual-based near-lossless compression scheme, the resulting codec consistently outperforms versions equipped with uniform or piecewise-uniform quantizers, as well as state-of-the-art near-lossless schemes such as JPEG-LS and CALIC.
(This article belongs to the Special Issue Information Theory and Data Compression)
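
The abstract's key ingredient, a differentiable stand-in for the worst-case error, can be illustrated with a log-sum-exp surrogate for max|x − q(x)|. The sketch below is not the paper's algorithm: the surrogate choice, the Lagrangian weight lam, the midpoint decision thresholds, and the derivative-free Nelder-Mead optimizer (used here because the empirical entropy term is piecewise constant in the levels) are all assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_linf(err, beta=200.0):
    """Log-sum-exp surrogate for max|err|: smooth, and it overshoots
    the true L-infinity error by at most log(n)/beta."""
    a = np.abs(err)
    m = a.max()  # shift for numerical stability
    return m + np.log(np.exp(beta * (a - m)).sum()) / beta

def quantize(x, edges, levels):
    """Non-uniform scalar quantizer: map each sample to the
    reconstruction level of the bin it falls into."""
    return levels[np.clip(np.digitize(x, edges), 0, len(levels) - 1)]

def bin_entropy(x, edges):
    """Empirical first-order entropy (bits/sample) of the bin indices."""
    p = np.bincount(np.digitize(x, edges), minlength=len(edges) + 1) / len(x)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Sparse, irregular toy source: a spike at zero plus two offset lobes.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(6000),
                    rng.laplace(-3.0, 0.3, 2000),
                    rng.laplace(4.0, 0.5, 2000)])

K, lam = 8, 0.05  # number of bins, rate-distortion trade-off (assumed)

def cost(levels):
    levels = np.sort(levels)
    edges = 0.5 * (levels[:-1] + levels[1:])  # midpoint decision thresholds
    err = x - quantize(x, edges, levels)
    return smooth_linf(err) + lam * bin_entropy(x, edges)

init = np.quantile(x, np.linspace(0.02, 0.98, K))  # density-aware start
res = minimize(cost, init, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-4})
print("optimized levels:", np.round(np.sort(res.x), 3))
```

On a source like this one would expect the optimized levels to cluster near the spike and the lobes rather than spread uniformly, in the spirit of the non-uniform bin allocations the abstract describes.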

18 pages, 5028 KB  
Article
Design and Analysis of Binary Scalar Quantizer of Laplacian Source with Applications
by Zoran Peric, Bojan Denic, Milan Savic and Vladimir Despotovic
Information 2020, 11(11), 501; https://doi.org/10.3390/info11110501 - 27 Oct 2020
Cited by 15 | Viewed by 3687
Abstract
A compression method based on non-uniform binary scalar quantization, designed for the memoryless Laplacian source with zero mean and unit variance, is analyzed in this paper. Two quantizer design approaches are presented that investigate the effect of clipping with the aim of reducing the quantization noise, where the minimal mean-squared error distortion is used to determine the optimal clipping factor. A detailed comparison of both models is provided, and their performance is evaluated over a wide dynamic range of input data variances. The observed binary scalar quantization models are applied to standard signal processing tasks, such as speech and image quantization, as well as to the quantization of neural network parameters. The motivation behind the binary quantization of neural network weights is model compression by a factor of 32, which is crucial for deployment on mobile or embedded devices with limited memory and processing power. The experimental results agree well with the theoretical models, confirming their applicability in real-world applications.
(This article belongs to the Special Issue Signal Processing and Machine Learning)
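
For the symmetric two-level case described in the abstract, the MSE-optimal representation level has a closed form: D(y) = E[x²] − 2yE[|x|] + y² is minimized at y* = E[|x|], which equals 1/√2 for a zero-mean, unit-variance Laplacian. The sketch below checks that identity empirically and illustrates the 32× weight-compression argument; it omits the paper's clipping analysis, and the helper binary_quantize and all parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def binary_quantize(x, y=None):
    """Symmetric binary (1-bit) scalar quantizer: x_hat = y * sign(x).
    Under the MSE criterion, D(y) = E[x^2] - 2*y*E[|x|] + y^2 is
    minimized at y* = E[|x|], so y defaults to the sample mean |x|."""
    y = np.mean(np.abs(x)) if y is None else y
    return y * np.sign(x), y

rng = np.random.default_rng(1)

# Unit-variance Laplacian: scale b = 1/sqrt(2), hence E|x| = b ~ 0.7071.
x = rng.laplace(0.0, 1.0 / np.sqrt(2), 100_000)
x_hat, y = binary_quantize(x)
print(f"representation level y = {y:.4f} (theory: {1/np.sqrt(2):.4f})")
print(f"MSE distortion         = {np.mean((x - x_hat)**2):.4f}")  # ~0.5

# Same idea applied to neural-network weights: storing sign bits plus
# one scale per tensor replaces 32-bit floats, giving ~32x compression.
w = rng.normal(0.0, 0.05, (256, 256)).astype(np.float32)
w_hat, scale = binary_quantize(w)
bits_per_weight = 1 + 32 / w.size  # sign bit + amortized float32 scale
print(f"bits/weight ~= {bits_per_weight:.3f} (vs 32 for float32)")
```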
