Article

Non-Uniform Entropy-Constrained $L_\infty$ Quantization for Sparse and Irregular Sources

by Alin-Adrian Alecu 1,*, Mohammad Ali Tahouri 2, Adrian Munteanu 2 and Bujor Păvăloiu 1

1 Faculty of Engineering in Foreign Languages (FILS), Universitatea Nationala de Stiinta si Tehnologie Politehnica Bucuresti, Splaiul Independentei 313, 060042 Bucharest, Romania
2 Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
* Author to whom correspondence should be addressed.
Entropy 2025, 27(11), 1126; https://doi.org/10.3390/e27111126
Submission received: 27 September 2025 / Revised: 28 October 2025 / Accepted: 29 October 2025 / Published: 31 October 2025
(This article belongs to the Special Issue Information Theory and Data Compression)

Abstract

Near-lossless coding schemes traditionally rely on uniform quantization to control the maximum absolute error ($L_\infty$ norm) of residual signals, often assuming a parametric model for the source distribution. This paper introduces a novel design framework for non-uniform, entropy-aware $L_\infty$-oriented scalar quantizers that leverages a tight and differentiable approximation of the $L_\infty$ distortion metric and does not require any parametric density function formulations. The framework is evaluated on both synthetic parametric sources and real-world medical depth map video datasets. For smoothly decaying distributions, such as the continuous Laplacian or discrete two-sided geometric distributions, the proposed method naturally converges to near-uniform quantizers, consistent with theoretical expectations. In contrast, for sparse or irregular sources, the algorithm produces highly non-uniform bin allocations that adapt to the local distribution structure and improve rate-distortion efficiency. When embedded in a residual-based near-lossless compression scheme, the resulting codec consistently outperforms versions equipped with uniform or piecewise-uniform quantizers, as well as state-of-the-art near-lossless schemes such as JPEG-LS and CALIC.

1. Introduction

Advances in data acquisition technologies have led to an ever-increasing variety of digital media that require efficient storage and transmission, ranging from natural, medical, and satellite images to sensor data such as depth maps or point clouds. For such data, near-lossless compression has emerged as an attractive paradigm, offering a controlled trade-off between bitrate reduction and fidelity preservation. Lossless compression reproduces the original data exactly but achieves limited compression efficiency, whereas lossy compression attains higher rates by allowing average distortions, typically measured under the $L_2$ norm. Near-lossless compression lies between these two extremes, ensuring that each reconstructed sample $\hat{x}_i$ deviates from the original $x_i$ by at most a specified maximum absolute error $D_{\max}$, i.e., $|x_i - \hat{x}_i| \le D_{\max}$ for all $i$, which corresponds to a bounded $L_\infty$ distortion. This feature is particularly important for applications where local accuracy is critical, such as medical imaging or 3D reconstruction from depth maps.
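To make the guarantee concrete, the $L_\infty$ constraint can be verified directly on reconstructed data; the minimal sketch below (function name is ours, not from the paper) checks the per-sample bound.

```python
import numpy as np

def is_near_lossless(x, x_hat, d_max):
    """Check the near-lossless guarantee: every reconstructed sample
    deviates from the original by at most d_max (bounded L-infinity error)."""
    return np.max(np.abs(np.asarray(x) - np.asarray(x_hat))) <= d_max
```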
Existing near-lossless coding schemes have been developed using a variety of approaches, including spatial-domain coding, predictive coding, transform coding, progressive coding, and, more recently, deep learning-based solutions. Despite their effectiveness, most methods employ uniform quantizers designed under specific source distribution assumptions. Residual distributions are commonly modeled using continuous Gaussian-like distributions or discrete families such as the two-sided geometric distribution (TSGD), which justifies the use of uniform quantizers that are theoretically $L_\infty$-optimal in these cases. Overall, although quantizer design is well understood for classical average-error criteria such as the $L_2$ metric, extending these designs to entropy-constrained $L_\infty$ quantizers remains largely distribution-dependent. In practice, near-lossless schemes continue to rely on uniform quantizers, whose $L_\infty$ optimality is generally limited to symmetric, unimodal, and monotone residual distributions. Indeed, classical quantization theory shows that uniform scalar quantizers are near-optimal primarily for smooth, symmetric, and unimodal distributions that decrease monotonically from their mode [1,2]. For asymmetric or multimodal sources, the equal spacing of decision thresholds causes certain regions to be over- or under-represented, leading to inefficient bit allocation and unbalanced reconstruction errors. This observation emphasizes the need for quantization schemes capable of efficiently handling sparse or otherwise non-standard sources.
To overcome this limitation, we introduce a general framework for entropy-constrained scalar quantizer design under the $L_\infty$ distortion metric, capable of adapting to arbitrary, possibly sparse or non-Gaussian source distributions. The framework relies on a differentiable approximation of the $L_\infty$ norm and operates directly on input signal samples, without requiring an explicit analytical form of the source probability density function $f_X(x)$. It produces as output an optimized set of decision boundaries and reconstruction levels obtained by minimizing a rate–distortion Lagrangian that can be adjusted to satisfy either a target distortion bound $D_{\max}$ or a target rate $R_{\max}$, providing flexible control of compression behavior without parametric modeling or prior assumptions. We show that for Gaussian-like continuous distributions and for the TSGD case, our algorithm converges to uniform quantizers, thereby confirming the $L_\infty$-optimality assumptions widely used in the literature. In contrast, for discontinuous or sparse distributions, the scheme produces non-uniform quantizers that adapt to the structure of the source and clearly outperform uniform quantizers in the $L_\infty$ rate–distortion sense.
The proposed framework is suited for applications that require strict local fidelity and involve sparse or irregular data, such as 3D sensing, robotics, remote sensing, and medical imaging. By adapting the quantization intervals to source statistics while maintaining a guaranteed maximum error bound, it enables efficient near-lossless compression across these domains.
To demonstrate the practical benefits, we integrate the proposed quantizers into a residual-based near-lossless encoder for depth video sequences acquired in medical and assistive monitoring contexts. Experimental results show that this codec outperforms both its uniform-quantizer counterparts and state-of-the-art near-lossless compression schemes such as JPEG-LS and CALIC.
The main contributions of this paper are summarized as follows:
  • We propose an iterative scheme for entropy-constrained scalar $L_\infty$-oriented non-uniform quantizers that is applicable to sparse (discontinuous) input distributions.
  • We demonstrate that the algorithm converges to uniform designs for smooth symmetric sources commonly used to model residuals and yields non-uniform quantizers for sparse or irregular distributions.
  • We embed the proposed quantization scheme into a residual-based near-lossless depth video codec and show that it consistently outperforms state-of-the-art methods such as JPEG-LS and CALIC.
The remainder of this paper is organized as follows. Section 2 reviews related work on quantizer design under the $L_\infty$ metric and on near-lossless compression. Section 3 introduces the proposed entropy-constrained $L_\infty$-oriented quantizer design algorithm. Section 4 presents experimental results, including both synthetic distribution tests and depth video coding. Section 5 provides a discussion of the findings, and Section 6 concludes this paper.

2. Related Work

This section reviews prior work in three areas: quantizer design under general $L_p$ metrics, with emphasis on the $L_\infty$ norm; near-lossless compression of natural images and video; and near-lossless compression of depth maps, highlighting methods that provide $L_\infty$ guarantees.

2.1. Optimal Quantizer Design for $L_p$ and $L_\infty$ Distortion

The problem of optimal quantizer design has been extensively studied in the context of various distortion measures. The early work of Lloyd and Max [3,4] introduced an iterative design of non-uniform PDF-optimized scalar quantizers for fixed-length coding using the $L_2$ distortion metric, which was later extended to vector quantization by Linde, Buzo, and Gray as the generalized Lloyd algorithm [5]. In variable-length coding, Wood [6] introduced the first numerical descent algorithm for entropy-constrained scalar quantizers, while Berger [7] proposed a Lagrangian formulation of the optimization problem and derived optimality conditions for entropy-constrained scalar quantizers operating under the $L_2$ distortion metric. Farvardin and Modestino [8] further extended these results by adapting the framework to a broader class of distortion measures, while Chou et al. [9] generalized the work of [7,8] to entropy-constrained vector quantization and derived optimal conditions for minimizing the Lagrangian function. Zamir and Feder later showed that dithered uniform and lattice quantizers achieve near-optimal rate–distortion performance, defining a universal framework for entropy-constrained design [10]. General references on quantization theory are given by Gersho and Gray [11] and Gray and Neuhoff [2].
When discussing optimal quantizer design for a given distortion measure, several works [5,8] consider a general class of non-negative distortion measures, including specific instances $p$ of the broader Hölder distortion $L_p$, and show that the optimization framework applies across these measures. While many studies provide explicit solutions for both fixed-rate and variable-rate quantization under the $L_2$ distortion metric, designing quantizers for the $L_\infty$ measure is more challenging due to the non-differentiability of the maximum-error criterion. Mathews and Hahn [12] address this by considering the limiting case $p \to \infty$ of the $L_p$ norm and propose an iterative algorithm for fixed-rate vector quantization under the $L_\infty$ metric; however, extending their approach to entropy-constrained quantization is not straightforward. Linder and Zamir further showed that optimality conditions remain valid for general distortion measures beyond $L_2$, including bounded-error criteria [13].
More recently, Ling and Li proposed a rejection-sampled universal quantizer that minimizes the maximum reconstruction error under an entropy constraint [14].
At the distributional level, Chang et al. [15] modeled depth residuals using a TSGD and showed that uniform scalar quantizers are $L_\infty$-optimal for this distribution, providing a distribution-dependent solution specific to TSGD sources. Schiopu and Tabus observed similar bounded-error efficiency for Laplacian and exponential residuals in near-lossless depth coding [16]. More generally, uniform quantizers are considered $L_\infty$-optimal for symmetric, unimodal, piecewise monotone distributions, such as Gaussian and Laplacian residuals [11]. These approaches, however, remain tied to specific distributional assumptions and do not generalize to arbitrary sparse or discontinuous source distributions.

2.2. Near-Lossless $L_\infty$-Oriented Compression Schemes

Near-lossless image and video compression has been explored through a variety of paradigms, each leveraging different strategies to bound the reconstruction error while maintaining coding efficiency.
Spatial domain coding techniques operate directly on pixel values, often exploiting local correlations. The early work of Chen and Ramabadran [17] proposed differential pulse-code modulation (DPCM) coding combined with uniform quantization to ensure that the $L_\infty$ distortion does not exceed a predefined bound of 1. This approach was later generalized by Ke and Marcellin [18] to support arbitrary discrete distortion constraints. A modern implementation of near-lossless spatial coding is found in the WebP standard [19], which preprocesses pixel values to reduce local differences, effectively enforcing an $L_\infty$ error bound while maintaining compression efficiency suitable for web applications.
Predictive coding relies on estimating pixel values from previously encoded pixels and then encoding only the residual error. Avcıbaş and Memon [20] proposed a progressive predictive scheme with uniform quantization of residuals, enabling near-lossless $L_\infty$ compression with scalable refinement. Wu and Bao [21] extended the CALIC lossless compression scheme [22] to the near-lossless setting, using uniform quantization to enforce an $L_\infty$ error constraint. JPEG-LS [23], a widely adopted standard, combines predictive modeling with uniform quantization of the prediction residuals, allowing for effective $L_\infty$ control. More recently, Tahouri et al. [24] proposed a lightweight codec for depth video that operates in both lossless and near-lossless mode, using uniform quantization to set bounds on the $L_\infty$ error. A sketch of this classical uniform residual quantization rule is given below.
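For reference, the uniform residual quantization rule used by JPEG-LS-style near-lossless predictive coders maps a prediction residual into bins of width $2D_{\max}+1$, which guarantees the per-sample error bound. A minimal sketch of our own, assuming integer residuals:

```python
import numpy as np

def quantize_residual_uniform(e, d_max):
    """Uniform near-lossless residual quantization: with bin width
    2*d_max + 1, the reconstruction error satisfies |e - e_hat| <= d_max."""
    step = 2 * d_max + 1
    q = np.sign(e) * ((np.abs(e) + d_max) // step)  # quantization index
    e_hat = q * step                                 # dequantized residual
    return q, e_hat

e = np.arange(-25, 26)                  # integer prediction residuals
q, e_hat = quantize_residual_uniform(e, d_max=3)
assert np.max(np.abs(e - e_hat)) <= 3   # L-infinity bound holds
```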
Transform coding approaches first decorrelate image data using a transform, such as a filterbank or wavelet, and then quantize the transform coefficients. Karray et al. [25] formulated $L_\infty$-constrained coding in a probabilistic framework using filterbanks to manage coefficient quantization errors. Ansari et al. [26] proposed a hybrid method that combines predictive and transform coding, enhancing coding efficiency under near-lossless constraints. Alecu et al. [27] introduced a scalable $L_\infty$ coding scheme based on the lifting wavelet transform, enabling multi-resolution compression with per-pixel error bounds.
Progressive or embedded coding allows for embedded bitstreams that can be truncated for lossy-to-lossless decoding. Pinho and Neves [28] introduced a progressive lossless compression method optimized for $L_\infty$-constrained decoding.
Deep learning-based approaches have also been explored for near-lossless compression. Zhang and Wu [29] applied convolutional neural networks (CNNs) within a generative adversarial framework to improve reconstruction quality while respecting $L_\infty$ constraints. Zhang et al. [30] developed a CNN-based near-lossless codec using uniform quantization of latent features. Bai et al. [31,32] proposed variational autoencoder architectures for joint lossy and residual coding, achieving near-lossless performance by combining learned representations with uniform quantization of residuals to satisfy a prescribed $L_\infty$ error bound.

2.3. Near-Lossless $L_\infty$ Compression of Depth Maps

While the previous subsection focused on near-lossless compression of natural images and video, depth map data presents unique challenges due to its sparsity and wide dynamic range. Several codecs have been proposed for near-lossless depth compression. Mehrotra et al. [33] introduced a low-complexity inverse coding scheme that combines prediction and adaptive Run-Length Golomb–Rice coding. Choi and Ho [34] enhanced HEVC’s near-lossless mode for depth sequences by performing statistical analysis of residual data. Shahriyar et al. [35] presented a depth sequence coder using hierarchical partitioning and spatial-domain uniform quantization. Von Bülow et al. [36] applied depth-of-field segmentation to achieve near-lossless compression, effectively prioritizing perceptually important regions of depth data.
Recent approaches include Siekkinen [37], who proposed neural network-assisted packing of depth maps into video frames; the method reduces compression errors and can support near-lossless operation. Wu and Gao [38] presented an end-to-end lossless compression method for high-precision depth maps, which can be adapted for near-lossless use by allowing small controlled errors.
While all these approaches are near-lossless in nature, only a few explicitly enforce $L_\infty$-bounded coding of depth maps. Notably, Chang et al. [15] introduced an $L_\infty$-predictive coding scheme for depth sequences, while Tahouri et al. [24] developed a lossless and near-lossless codec tailored to depth video data with strict $L_\infty$ error guarantees.

3. Materials and Methods

In this section, we propose a quantizer design scheme for entropy-constrained $L_\infty$-oriented scalar quantizers. A key challenge is that the standard definition $\|x\|_\infty = \max_i |x_i|$ is piecewise and non-smooth. Its gradient is sparse (depending only on the component attaining the maximum) and becomes undefined at points where the maximum switches between components $x_i$. To enable stable optimization, we introduce a tight differentiable approximation of the $L_\infty$ quantization error, which we then use to formulate the Lagrangian optimization problem for entropy-constrained scalar quantizers. Solving this problem yields optimality conditions that form the basis of our iterative $L_\infty$-oriented quantizer design algorithm.

3.1. A Differentiable Approximation of the $L_\infty$ Quantization Error

We model a discrete input source as a random field of $N$ i.i.d. random variables $X_1, \ldots, X_N$ with a common density function $f_X(x)$, or, equivalently, as a single random variable $X$ with density $f_X(x)$. This formulation adopts the i.i.d. source model as a standard simplifying assumption in quantization and entropy–distortion theory [2,11], allowing the analysis to focus on the marginal distribution of the source rather than on its joint dependencies. Although real-world data may exhibit statistical correlations, the derivation of scalar quantizer properties and entropy-constrained bounds relies only on the marginal behavior of $X$, which sufficiently characterizes local amplitude statistics under the memoryless model. Given a quantization operation $Q(\cdot)$, we denote by $\{b_i\}$ and $\{y_i\}$ the decision boundaries and reconstruction levels associated with its quantization intervals. The following theorem provides a differentiable approximation of the $L_\infty$ quantization error.
Theorem 1.
Let $X$ be a random variable with probability density function $f_X(x)$, and consider a scalar quantizer $Q(\cdot)$ defined by decision boundaries $\{b_i\}_{i=0}^{M+1}$ and reconstruction levels $\{y_i\}_{i=0}^{M}$, such that $Q(x) = y_i$ for any $b_i \le x < b_{i+1}$. Then, the expected $L_\infty$ quantization error $d(X, Q(X)) = \max_i |x_i - Q(x_i)|$ can be approximated as follows, where $\tau > 0$ is a parameter that controls the tightness of the approximation:

$$\mathbb{E}\big[d(X, Q(X))\big] \approx \frac{1}{\tau} \log \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx. \qquad (1)$$
A concise proof is provided below, while the full derivation is given in Appendix A.
Proof of Theorem 1.
Let us define $U = \tau \max_{j=1,\ldots,N} |Z_j|$ with some parameter $\tau > 0$ and where $Z_j = X_j - Q(X_j)$. By Jensen's inequality $\varphi(\mathbb{E}[U]) \le \mathbb{E}[\varphi(U)]$ for a convex function $\varphi$, it follows that:

$$e^{\tau \mathbb{E}[\max_j |Z_j|]} \le \mathbb{E}\big[e^{\tau \max_j |Z_j|}\big] \le \mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] \le N\,\mathbb{E}\big[e^{\tau \max_j |Z_j|}\big]. \qquad (2)$$

Taking the logarithm in (2) and using Jensen again, we obtain:

$$\mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] \le \mathbb{E}\big[\max_j |Z_j|\big] + \frac{\log N}{\tau}. \qquad (3)$$

The middle term can be written in terms of a single random variable as $\mathbb{E}\big[\sum_{j=1}^{N} e^{\tau |Z_j|}\big] = N\,\mathbb{E}\big[e^{\tau |Z|}\big]$, such that (3) leads to:

$$\frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] \le \mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] + \frac{\log N}{\tau}. \qquad (4)$$

Equation (4) gives a tight approximation $\frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big]$ for $\mathbb{E}\big[\max_j |Z_j|\big]$. Since $\mathbb{E}\big[d(X, Q(X))\big] = \int d(x, Q(x))\, f_X(x)\,dx$ can be written as $\sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} d(x, y_i)\, f_X(x)\,dx$, Equation (1) easily follows, which concludes the proof. □
Finally, it is interesting to note that Equation (1) is derived using the bounds (3) of the LogSumExp function $\log \sum_{j=1}^{N} e^{\tau |Z_j|}$, which is a well-known smooth approximation of the $\max(\cdot)$ function. Furthermore, with the exception of discontinuities introduced by the $|\cdot|$ operator, the $L_\infty$ quantization error (1) is now differentiable. The short sketch after this paragraph illustrates these LogSumExp bounds numerically.
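As a quick sanity check of the bounds in (3), the following sketch (our own illustration, not code from the paper) compares the smooth LogSumExp surrogate with the true sample maximum; the overshoot is at most $\log N / \tau$:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.laplace(scale=10.0, size=50_000)  # surrogate quantization errors Z_j
tau = np.log(z.size) / 0.2                # tightness parameter (cf. Section 4.1)

# Numerically stable LogSumExp: (1/tau) * log(sum_j exp(tau * |z_j|))
a = tau * np.abs(z)
m = a.max()
smooth_max = (m + np.log(np.sum(np.exp(a - m)))) / tau
true_max = np.abs(z).max()

# The smooth surrogate overshoots the true maximum by at most log(N)/tau
assert true_max <= smooth_max <= true_max + np.log(z.size) / tau
print(f"max = {true_max:.4f}, LogSumExp = {smooth_max:.4f}")
```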
To clarify the practical meaning of the above result, we present a simple numerical example comparing the theoretical bound of Equation (1) with its empirical evaluation.
Numerical Illustration of the Theorem. Let $\tau = 0.2$. We consider a scalar quantizer defined by decision boundaries $\{b_i\}$ and reconstruction points $\{y_i\}$ derived from a Laplacian dataset with $\sigma = 10$. The quantizer is fixed, and multiple independent datasets $\{x_j^{(k)}\}_{j=1}^{N}$, $k = 1, \ldots, 10^4$, are generated from the same distribution. For each dataset, the maximum absolute quantization error $D^{(k)} = \max_{j=1,\ldots,N} |x_j^{(k)} - Q(x_j^{(k)})|$ is computed, yielding an empirical distribution of maxima. We calculate the empirical mean $\mathbb{E}[D^{(k)}]$ that corresponds to the left-hand side of (1), the theoretical value of the right-hand side integral for the Laplacian density function and the given quantizer, and the analytical upper bound obtained by adding the term $\frac{\log N}{\tau}$.
Figure 1 shows the histogram of empirical maxima, with vertical lines marking the empirical mean, theoretical value, and analytical upper bound. As each $D^{(k)}$ represents the maximum quantization error over many samples, its distribution follows extreme value statistics and approaches a Gumbel form as $N$ increases. The theoretical value slightly underestimates the empirical mean but remains remarkably close to it, confirming the accuracy of the approximation. Both values lie well below the analytical upper bound, which provides a conservative margin consistent with the theoretical inequality. A sketch of this experiment follows.
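A Monte Carlo version of this experiment can be sketched as follows; the quantizer here is a simple mid-tread uniform one of our own choosing, and the right-hand side of (1) is estimated from samples rather than evaluated analytically:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, tau, sigma = 1000, 10_000, 0.2, 10.0

# A fixed mid-tread uniform quantizer standing in for {b_i}, {y_i}
step = 7.0
Q = lambda x: step * np.round(x / step)

# Empirical mean of the per-dataset maxima D^(k) (left-hand side of (1))
x = rng.laplace(scale=sigma, size=(K, N))
err = np.abs(x - Q(x))
lhs = err.max(axis=1).mean()

# Sample estimate of the right-hand side of (1): (1/tau) * log E[exp(tau*|Z|)]
rhs = np.log(np.mean(np.exp(tau * err))) / tau
upper = rhs + np.log(N) / tau  # analytical upper bound from (4)

print(f"empirical mean of maxima: {lhs:.3f}")
print(f"smooth approximation:     {rhs:.3f}  (upper bound {upper:.3f})")
```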

3.2. An Iterative Quantizer Design Algorithm

The design of optimal fixed-rate quantizers can be formulated as finding a quantizer $Q(\cdot)$ that minimizes the distortion $D(Q)$ subject to a rate constraint $R(Q) \le R_{\max}$, or, in the dual approach, as minimizing the rate $R(Q)$ subject to a maximum allowable distortion $D(Q) \le D_{\max}$. In what follows, we focus on the classical first formulation, noting that the second case follows in a similar manner.
Using the distortion expression of (1), a Lagrangian formulation for $L_\infty$-oriented entropy-constrained scalar quantization can be written as:

$$J = \frac{1}{\tau} \log \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx + \lambda \sum_{i=0}^{M} l_i \int_{b_i}^{b_{i+1}} f_X(x)\,dx, \qquad (5)$$

where $\lambda$ is a Lagrangian multiplier and $l_i$ are codeword lengths.
To simplify expressions, let us first denote:

$$\Phi = \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx. \qquad (6)$$
The optimal reconstruction levels $\{y_i\}_{i=0}^{M}$ and decision levels $\{b_i\}_{i=0}^{M+1}$ that minimize (5) can be found by computing derivatives. For readability, we present only the main steps; full intermediate derivations for this subsection are provided in Appendix A. For high rates, we assume that the density function $f_X(x)$ is constant over each quantization interval $[b_i, b_{i+1})$, i.e., has the value $f_X(y_i)$, such that $\frac{dJ}{dy_i}$ becomes:

$$\frac{dJ}{dy_i} = \frac{f_X(y_i)}{\tau \Phi}\,\frac{d}{dy_i}\left[\int_{y_i}^{b_{i+1}} e^{\tau (x - y_i)}\,dx + \int_{b_i}^{y_i} e^{\tau (y_i - x)}\,dx\right]. \qquad (7)$$

Further computing (7) and setting it to zero gives us:

$$e^{\tau (y_i - b_i)} - e^{\tau (b_{i+1} - y_i)} = 0. \qquad (8)$$

From (8), we now obtain the reconstruction level condition:

$$y_i = \frac{b_i + b_{i+1}}{2}. \qquad (9)$$

We now compute $\frac{dJ}{db_i}$ by applying the Leibniz integral rule to (5), leading to the following expression:

$$\frac{dJ}{db_i} = f_X(b_i)\left[\frac{e^{\tau (b_i - y_{i-1})} - e^{\tau (y_i - b_i)}}{\tau \Phi} - \lambda\,(l_i - l_{i-1})\right]. \qquad (10)$$

Setting (10) to zero and substituting the expression from (6) would, in principle, allow the calculation of the optimal values of $b_i$. However, this approach does not yield analytical solutions for $b_i$, though they can be approximated using numerical methods. Consequently, we propose an alternative method that provides a near-optimal solution.
Specifically, from (5) and (6), we have $D = \frac{1}{\tau} \log \Phi$, where $D$ is the $L_\infty$ distortion. It is evident upon inspection that most bin decision levels $b_i$ have a negligible effect on $D$ (and hence on $\Phi$), except for the few bins responsible for the maximum quantization error. This behavior is analogous to the LogSumExp function, where exponential terms ensure that only the largest contributions dominate. Based on this observation, we assume that $\Phi$ is effectively independent of $b_i$ for the majority of bins. Under this assumption, setting (10) to zero leads to the following equation for $e^{\tau b_i}$:

$$e^{2\tau b_i} - \tau \Phi \lambda\,(l_i - l_{i-1})\,e^{\tau y_{i-1}}\,e^{\tau b_i} - e^{\tau (y_{i-1} + y_i)} = 0. \qquad (11)$$

Expression (11) can be solved as a second-order equation for $e^{\tau b_i}$, with two solutions. However, one is unacceptable, as it would lead to a negative value for $e^{\tau b_i}$. The positive solution of (11) then leads us to the decision level condition:

$$b_i = \frac{1}{\tau} \log \left[\frac{\tau \Phi \lambda\,(l_i - l_{i-1})\,e^{\tau y_{i-1}}}{2} + \frac{1}{2}\sqrt{4\,e^{\tau (y_{i-1} + y_i)} + \tau^2 \Phi^2 \lambda^2 (l_i - l_{i-1})^2\,e^{2\tau y_{i-1}}}\right]. \qquad (12)$$

Finally, for codeword lengths, we have the well-known entropy condition:

$$l_i = -\log_2 \int_{b_i}^{b_{i+1}} f_X(x)\,dx. \qquad (13)$$
Conditions (9), (12), and (13) are now sufficient to fully define $L_\infty$-oriented entropy-constrained scalar quantizers, the iterative design scheme being illustrated in Figure 2. The decision boundaries are initialized randomly around a fine uniform grid corresponding to a quantizer with $D_{\max} = 1$, providing a dense set of closely spaced levels that ensure high initial resolution. For a fixed Lagrange multiplier $\lambda$, the optimization minimizes $J = D + \lambda R$, producing a single operating point on the rate–distortion curve. By iteratively adjusting $\lambda$, the algorithm converges to the quantizer that satisfies a specified rate constraint (or, in the dual problem, a distortion constraint).
Unlike uniform quantizers with fixed spacing and midpoint reconstruction levels, the proposed method updates boundaries and reconstruction values independently, producing nonuniform bins and non-midpoint reconstruction levels.
The computational complexity of this iterative scheme is governed by the sample–bin assignment and update operations. For $N$ input samples, $M$ quantization levels, and $t_{\max}$ iterations, the total cost scales as $O(t_{\max}(N + M))$, comparable to classical Lloyd–Max and entropy-constrained quantizer designs under a linear sweep implementation. The additional entropy and differentiable $L_\infty$ terms contribute only constant-factor overhead without affecting the asymptotic order.
It should be noted that (9), derived under the high-rate assumption, provides optimal reconstruction levels for continuous source distributions. However, for sparse source distributions, or at low rates, the variation of $f_X(x)$ within a bin can be significant, and the midpoint may no longer minimize the $L_\infty$ distortion. Consequently, we generalize (9) by defining the reconstruction level as the midpoint of the data points within each bin:

$$y_i = \frac{\max(x) + \min(x)}{2}, \quad b_i \le x < b_{i+1}. \qquad (14)$$
It is obvious that for high-rate and continuous distributions, (14) reduces to the reconstruction levels given by (9).
Finally, since $D = \frac{1}{\tau} \log \Phi$, in practice, we compute $\Phi$ from $D$ rather than using (6). At each iteration $t$, $\Phi^{(t)}$ is evaluated using $y_i^{(t-1)}$ and $b_i^{(t-1)}$ from the previous iteration, i.e., based on $D^{(t-1)}$. This procedure yields near-optimal solutions for $\{b_i\}_{i=0}^{M+1}$ and $\{y_i\}_{i=0}^{M}$, with the algorithm experimentally found to converge typically within 10–20 iterations. A compact sketch of the resulting design loop is given below.
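Putting conditions (12), (13), and (14) together, the design loop can be sketched as follows. This is a minimal empirical version of our own: it works directly on samples, uses a fixed number of bins without the pruning and random initialization details, and evaluates $\Phi$ from the current distortion as described above; a production implementation would also guard against overflow by working in the log domain.

```python
import numpy as np

def design_linf_quantizer(x, lam, tau, m=32, t_max=20):
    """Sketch of the iterative entropy-constrained L-infinity quantizer design.
    Returns decision boundaries b (m+1 values) and reconstruction levels y (m values)."""
    b = np.linspace(x.min(), x.max(), m + 1)  # initial boundaries on a fine grid
    for _ in range(t_max):
        idx = np.digitize(x, b[1:-1])         # bin index of every sample, in 0..m-1
        # (14): reconstruction level = midpoint of the data points in each bin
        y = np.array([0.5 * (x[idx == i].min() + x[idx == i].max())
                      if np.any(idx == i) else 0.5 * (b[i] + b[i + 1])
                      for i in range(m)])
        # (13): codeword lengths from empirical bin probabilities
        p = np.maximum(np.bincount(idx, minlength=m) / x.size, 1e-12)
        l = -np.log2(p)
        # Phi computed from the current distortion D = max|x - Q(x)| (Section 3.2)
        D = np.abs(x - y[idx]).max()
        phi = np.exp(tau * D)
        # (12): update the interior decision boundaries
        for i in range(1, m):
            c = tau * phi * lam * (l[i] - l[i - 1]) * np.exp(tau * y[i - 1])
            b[i] = np.log(0.5 * c + 0.5 * np.sqrt(
                4.0 * np.exp(tau * (y[i - 1] + y[i])) + c * c)) / tau
        b[1:-1].sort()                        # keep boundaries monotone
    return b, y
```

For a fixed lam this produces one operating point; sweeping lam over a range of values traces the R–D convex hull, as described for Figure 2.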

4. Experimental Results

This section evaluates the rate-distortion performance of the proposed scalar $L_\infty$ quantizers under two representative scenarios: (i) memoryless sources following parametric distributions commonly used to model coding residuals or non-Gaussian stochastic processes, and (ii) training data with sparse or discontinuous distributions. In both cases, we benchmark the proposed approach against families of uniform quantizers.
In addition, we present a near-lossless $L_\infty$-bounded compression scheme for depth map video coding, wherein we equip the codec of [24] with the designed non-uniform $L_\infty$ quantizers. We compare its performance both with its uniform-quantizer baseline and with other state-of-the-art near-lossless coding methods.

4.1. Continuous and Discrete Parametric Distributions

We consider memoryless sources whose probability density functions belong to distribution families known to effectively model coding residuals or capture non-Gaussian stochastic behavior. In particular, we examine the TSGD, Laplacian, and Exponential distributions. For each distribution, $N = 50{,}000$ samples are generated using $\sigma = 10$ (Laplacian), $\theta = 0.9$ (TSGD), and $\lambda = 10$ (Exponential); a sketch of this sample generation is given after this paragraph.
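The three sources can be sampled as follows. This is a sketch under stated assumptions: we take $\sigma$ and $\lambda$ to denote the Laplacian and Exponential scale parameters, and we use the fact that the difference of two i.i.d. geometric variables is two-sided geometric with $\theta = 1 - p$.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000

# Laplacian source (assuming sigma = 10 is the scale parameter)
laplacian = rng.laplace(loc=0.0, scale=10.0, size=N)

# TSGD(theta = 0.9): difference of two i.i.d. geometric variables,
# giving P(k) proportional to theta^|k| on the integers
p = 1.0 - 0.9
tsgd = (rng.geometric(p, size=N) - 1) - (rng.geometric(p, size=N) - 1)

# Exponential source (assuming lambda = 10 is the scale, i.e., the mean)
exponential = rng.exponential(scale=10.0, size=N)
```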
The rate–distortion (R–D) curves for these distributions are shown in Figure 3, Figure 4 and Figure 5. In each case, we compare the convex hull of the proposed $L_\infty$ quantizer with that of a mid-tread deadzone uniform quantizer, where the rate is measured as entropy and the distortion corresponds to the empirical $L_\infty$ error, excluding any transmission overhead. The $L_2$ R–D curve for the same operating points is also shown for reference. The results indicate that the $L_\infty$ quantizers achieve essentially the same performance as their uniform counterparts for symmetric distributions (Laplacian and TSGD), while outperforming them for the asymmetric Exponential case.
The parameter settings for the R–D points are listed in Table 1, including the rate $R$, distortion $D$, initial and final number of levels $(M_i, M_f)$, and the corresponding Lagrange multiplier $\lambda$. Each convex hull comprises 50 optimization runs with different $\lambda$ values, using $\tau = \frac{\log N}{0.2}$. Larger $\tau$ values led to numerical instability, whereas reducing $\tau$ below $\frac{\log N}{4}$ caused minor deviations between theoretical and empirical results.
An example of the resulting quantization intervals for a Laplacian source under an $L_\infty$ distortion bound of $D_{\max} = 5$ is shown in Figure 6. Apart from minor deviations at the edges (which can be adjusted without affecting the error), the resulting intervals coincide with those of a uniform quantizer. For the Exponential distribution, the intervals are nearly uniform but incorporate a shifted anchor point that accounts for the distribution's asymmetry. This outcome is expected: the central reconstruction level of the proposed quantizer naturally aligns with an R–D optimal point, whereas for asymmetric distributions without a natural deadzone, uniform quantizers default to being anchored at zero unless prior knowledge of the source statistics is used. A uniform quantizer could, in principle, achieve comparable performance, but only if its anchor point were chosen according to such prior knowledge (e.g., mean, mode, or another relevant measure).

4.2. Sparse Source Distributions

We next evaluate the performance of non-uniform L quantizers on a set of medical surveillance depth map video datasets. The data was captured with an Orbbec Persee depth camera in hospital room environments and consists of 16-bit depth frames and associated 8-bit segmentation maps. Part of this dataset has also been reported in prior work [24]. Representative frames from the sequences are shown in Figure 7, with datasets S2–S5 corresponding to Rooms 2–5 as used in [24].
In the following, we propose a modified encoder for the near-lossless $L_\infty$-bounded mode of the residual-based codec described in [24], which extends the original design by introducing non-uniform scalar $L_\infty$-optimized quantizers in place of the piecewise-uniform ones used previously.
For completeness, we summarize the architecture of this codec, which forms the foundation of our system. The codec performs intra-frame compression of depth video data using two inputs for each frame: the 16-bit depth frame itself and a semantic segmentation map, which is derived using machine-learning-based classifiers and associated with the same frame. In the first stage, each frame is separated into foreground and background regions based on the segmentation map. A reference background is then constructed from a set of static frames captured at the beginning of the encoding process and is updated whenever the camera position changes, specifically when movement is detected through variations in segmentation labels within the reference background. Each incoming frame is subsequently subtracted from the reference background to obtain a residual image, while the foreground regions are preserved in lossless form. The residual is next processed by an $L_\infty$-oriented quantization block that maps residual values into quantization bins with a guaranteed per-pixel error bound. Finally, the quantized residual and foreground data are losslessly and independently encoded using JPEG-LS, while the reconstruction levels of the quantizer and other frame-level metadata are compressed using the Zlib library [39], producing compact frame packets that contain both the encoded data and all associated header information. A structural sketch of this per-frame pipeline follows.
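The flow just described can be summarized in code. The sketch below is structural only and is our own: `jpegls_encode` is a hypothetical stand-in for a real JPEG-LS encoder, `design_linf_quantizer` refers to the design routine sketched in Section 3.2, and details such as background updating and header packing are omitted.

```python
import zlib
import numpy as np

def encode_frame(depth, seg, ref_bg, lam, tau, jpegls_encode, design_linf_quantizer):
    """Structural sketch of the per-frame near-lossless encoder.
    depth: 16-bit depth frame; seg: segmentation map; ref_bg: reference background."""
    fg_mask = seg > 0                                    # foreground per segmentation
    residual = depth.astype(np.int32) - ref_bg.astype(np.int32)

    # Per-frame non-uniform L-infinity quantizer designed on background residuals
    b, y = design_linf_quantizer(residual[~fg_mask].ravel(), lam=lam, tau=tau)
    idx = np.digitize(residual, b[1:-1])                 # quantization bin indices

    return {
        "residual": jpegls_encode(idx.astype(np.uint16)),          # lossless
        "foreground": jpegls_encode(np.where(fg_mask, depth, 0)),  # lossless
        "levels": zlib.compress(y.astype(np.float32).tobytes()),   # header metadata
    }
```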
It should be noted that a separate quantizer is designed for each frame to adapt to temporal variations in the residual statistics. While sharing a single quantizer across multiple frames could slightly reduce signaling overhead and latency, the per-frame design offers superior adaptability with negligible additional header cost.
In order to compare quantizer performance, we first report coding results for the quantized residuals from these datasets. The corresponding reconstruction levels are included within the bitstream as compact header metadata, whose size is negligible compared to the frame payload. Table 2 therefore reports only the rate of the quantized residual data, isolating the impact of the quantizer design itself: it summarizes the average quantization rate (in bpp) for a given guaranteed $L_\infty$ distortion $D_{\max}$. Results are shown for the codec equipped with our proposed non-uniform $L_\infty$-oriented quantizers, the piecewise uniform quantizers introduced in [24], and standard uniform quantizers. Across nearly all datasets and distortion levels, the proposed non-uniform quantizers achieve lower rates than both the uniform and piecewise-uniform approaches, demonstrating their superior efficiency, with only minor exceptions at extreme settings.
Table 3 further presents coding results for the complete video streams, showing the average bitrate per frame (in bpp), the average PSNR per frame (in dB), and the guaranteed $L_\infty$ distortion $D_{\max}$. We compare the codec using non-uniform, piecewise-uniform, and standard uniform quantizers and also include results for JPEG-LS and CALIC. Overall, the codec equipped with our proposed $L_\infty$ non-uniform quantizers achieves the lowest bitrate across all datasets for nearly any given $D_{\max}$. In terms of PSNR, it remains competitive and surpasses all designs at very high rates, while at low to medium rates, the uniform and piecewise-uniform quantizers yield superior PSNR values.
Finally, to better understand how the proposed non-uniform quantizers adapt to sparse source distributions and to compare their behavior with the near-uniform quantizers obtained for the continuous Laplacian distribution in Figure 6, we visualize one example of the resulting quantization intervals. Figure 8 shows the intervals for a depth map residual at a distortion level of $D_{\max} = 20$. The residual distribution is extremely sparse, with a sharp peak and long tails. To reveal details in the tails, we plot the logarithm of the density rather than the density itself. The figure highlights that the resulting quantizers are highly non-uniform, in contrast to the near-uniform quantizers of the Laplacian case, demonstrating the ability of our design algorithm to adapt to sparse source distributions.

5. Discussion

The experimental results demonstrate several key aspects of the proposed $L_\infty$ quantizers. For parametric distributions such as Laplacian, TSGD, and Exponential sources, the design algorithm converges to nearly uniform quantizers. This is consistent with theory: when the distribution is continuous, unimodal, monotone, and without heavy sparsity, uniform binning is deemed $L_\infty$-optimal. Small deviations at the edges, as seen for the Laplacian case, have negligible impact on rate–distortion performance but confirm the algorithm's ability to adapt to local distribution features. For asymmetric sources like the Exponential, the anchor point is shifted automatically, improving efficiency without altering the overall near-uniform structure.
In contrast, for sparse or discontinuous sources such as residuals in medical depth map sequences, the proposed non-uniform quantizers exhibit a highly non-uniform structure, reflecting an optimal allocation of quantization levels to minimize the $L_\infty$ error. This adaptive behavior directly translates into improved compression efficiency, as evidenced by the lower rates reported in Table 2 and Table 3.
The coding experiments further confirm that these benefits carry over to practical video compression. Integrated into a residual-based near-lossless $L_\infty$-constrained coding scheme, the proposed quantizers consistently yield lower bitrates than both uniform and piecewise-uniform designs, as well as standard near-lossless codecs like JPEG-LS and CALIC. On average, we observe gains of 43.9% over JPEG-LS, 19.5% over CALIC, 9.4% over standard uniform quantizers, and 2.9% over piecewise-uniform quantizers. In terms of PSNR, uniform quantizers perform better at low to medium rates, while the proposed $L_\infty$ designs match or surpass them at high rates (small $D_{\max}$) once sufficient levels ensure both maximum-error control and average fidelity. Overall, these results demonstrate that $L_\infty$ non-uniform quantization provides a powerful tool for applications requiring strict error guarantees alongside efficient coding of sparse or structured residual data.

6. Conclusions

In this paper, we presented a design framework for non-uniform $L_\infty$-oriented quantizers and evaluated their performance on both synthetic parametric sources and medical depth map datasets. We showed that while continuous, smoothly decaying distributions yield near-uniform quantizers, sparse or irregular sources benefit greatly from non-uniform bin allocation. Experimental results confirm that this adaptability not only improves rate–distortion performance over uniform and piecewise-uniform designs but also translates into significant bitrate savings when integrated into a residual-based near-lossless compression pipeline, surpassing state-of-the-art near-lossless schemes.
Overall, our results establish non-uniform $L_\infty$-oriented quantization as an effective approach for combining strict error control with improved compression efficiency, particularly for sparse or irregular data sources.

Author Contributions

Conceptualization, A.-A.A.; methodology, A.-A.A.; software, A.-A.A. and M.A.T.; validation, A.-A.A. and M.A.T.; formal analysis, A.-A.A.; investigation, A.-A.A.; resources, A.M. and B.P.; data curation, M.A.T.; writing—original draft preparation, A.-A.A.; writing—review and editing, A.-A.A., M.A.T., A.M., and B.P.; visualization, A.-A.A.; supervision, B.P. and A.M.; project administration, A.M.; funding acquisition, A.M. and A.-A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by Innoviris Brussels, Belgium, in the research project MUSCLES, and by the National Program for Research of the National Association of Technical Universities, Romania, GNAC ARUT 2023. Furthermore, Mintt S.A., Belgium, has provided the dataset used in this research.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of this data. Data was obtained from Mintt S.A., Belgium and is available from the authors with the permission of Mintt S.A., Belgium.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof and Derivation Details

In the following, we give the detailed proof of Theorem 1, followed by the derivations of Equations (5) through (13).
Proof of Theorem 1.
Let us define $U = \tau \max_{j=1,\ldots,N} |Z_j|$ with some parameter $\tau > 0$ and where $Z_j = X_j - Q(X_j)$. By Jensen's inequality $\varphi(\mathbb{E}[U]) \le \mathbb{E}[\varphi(U)]$ for a convex function $\varphi$, it follows that:

$$e^{\tau \mathbb{E}[\max_j |Z_j|]} \le \mathbb{E}\big[e^{\tau \max_j |Z_j|}\big] \le \mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] \le N\,\mathbb{E}\big[e^{\tau \max_j |Z_j|}\big]. \qquad (A1)$$

Taking the logarithm in (A1) and using Jensen again, we obtain:

$$\log e^{\tau \mathbb{E}[\max_j |Z_j|]} = \tau\,\mathbb{E}\big[\max_j |Z_j|\big] \le \log \mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] \le \log N + \log \mathbb{E}\big[e^{\tau \max_j |Z_j|}\big] \le \log N + \mathbb{E}\big[\log e^{\tau \max_j |Z_j|}\big] = \log N + \tau\,\mathbb{E}\big[\max_j |Z_j|\big]. \qquad (A2)$$

Further dividing by $\tau$ leads to:

$$\mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] \le \mathbb{E}\big[\max_j |Z_j|\big] + \frac{\log N}{\tau}. \qquad (A3)$$

It is interesting to observe that Equation (A3) describes in fact the well-known bounds of the LogSumExp function $\log \sum_{j=1}^{N} e^{\tau |Z_j|}$. Furthermore, we can write the middle term in terms of a single random variable as:

$$\mathbb{E}\Big[\sum_{j=1}^{N} e^{\tau |Z_j|}\Big] = N\,\mathbb{E}\big[e^{\tau |Z|}\big], \qquad (A4)$$

such that (A3) leads to:

$$\frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] \le \mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] + \frac{\log N}{\tau}. \qquad (A5)$$

Indeed, using (A4), Equation (A3) becomes:

$$\mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \Big(N\,\mathbb{E}\big[e^{\tau |Z|}\big]\Big) = \frac{\log N}{\tau} + \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] \le \frac{\log N}{\tau} + \mathbb{E}\big[\max_j |Z_j|\big], \qquad (A6)$$

and re-arranging the terms, we obtain:

$$\frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] \le \mathbb{E}\big[\max_j |Z_j|\big] \le \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] + \frac{\log N}{\tau}. \qquad (A7)$$

Next, for the quantization error (distortion), we have the well-known generalized relation:

$$D(Q) = \mathbb{E}\big[d(X, Q(X))\big] = \int d(x, Q(x))\, f_X(x)\,dx = \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} d(x, y_i)\, f_X(x)\,dx. \qquad (A8)$$

Replacing the $L_\infty$ quantization error approximation in Equation (A8) leads to:

$$\mathbb{E}\big[d(X, Q(X))\big] \approx \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |Z|}\big] = \frac{1}{\tau} \log \mathbb{E}\big[e^{\tau |X - Q(X)|}\big] = \frac{1}{\tau} \log \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx, \qquad (A9)$$

which gives us Equation (1) and concludes the proof. □
Let us start from the Lagrangian formulation for $L_\infty$-oriented quantization as given by Equation (5):

$$J = \frac{1}{\tau} \log \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx + \lambda \sum_{i=0}^{M} l_i \int_{b_i}^{b_{i+1}} f_X(x)\,dx. \qquad (A10)$$

We wish to find the optimal reconstruction levels $\{y_i\}_{i=0}^{M}$ and decision levels $\{b_i\}_{i=0}^{M+1}$ that minimize (A10). For high rates, we assume that the density function $f_X(x)$ is constant over each quantization interval $[b_i, b_{i+1})$, i.e., for each $y_i \in [b_i, b_{i+1})$ it has some constant value $f_X(x) = f_X(y_i) = f_X(b_i)$. Also, we define $\Phi$ as:

$$\Phi = \sum_{i=0}^{M} \int_{b_i}^{b_{i+1}} e^{\tau |x - y_i|} f_X(x)\,dx. \qquad (A11)$$

In a first instance, we compute $\frac{dJ}{dy_i}$. Recall that for any function $u$, the derivative of the logarithm is given by $(\log u)' = \frac{u'}{u}$. Since the second term of Equation (A10) does not depend on $y_i$, and knowing that $b_i \le y_i < b_{i+1}$, the derivative can then be written as:

$$\frac{dJ}{dy_i} = \frac{f_X(y_i)}{\tau \Phi}\,\frac{d}{dy_i}\left[\int_{y_i}^{b_{i+1}} e^{\tau (x - y_i)}\,dx + \int_{b_i}^{y_i} e^{\tau (y_i - x)}\,dx\right] = \frac{f_X(y_i)}{\tau \Phi}\,\frac{d}{dy_i}\left[\frac{e^{\tau (b_{i+1} - y_i)} - 1}{\tau} + \frac{e^{\tau (y_i - b_i)} - 1}{\tau}\right] = \frac{f_X(y_i)}{\tau \Phi}\left[e^{\tau (y_i - b_i)} - e^{\tau (b_{i+1} - y_i)}\right]. \qquad (A12)$$

Setting the above expression to zero leads us to the following equation:

$$e^{\tau (y_i - b_i)} - e^{\tau (b_{i+1} - y_i)} = 0. \qquad (A13)$$

From (A13), it is now trivial to compute the reconstruction level condition:

$$y_i = \frac{b_i + b_{i+1}}{2}. \qquad (A14)$$
We now wish to compute $\frac{dJ}{db_i}$. First, recall the Leibniz integral rule for some function $g(x, t)$ and some integration limits $a(x)$, $c(x)$:

$$\frac{d}{dx} \int_{a(x)}^{c(x)} g(x, t)\,dt = g(x, c(x))\,\frac{dc(x)}{dx} - g(x, a(x))\,\frac{da(x)}{dx} + \int_{a(x)}^{c(x)} \frac{\partial g(x, t)}{\partial x}\,dt, \qquad (A15)$$

which for the special case $a(x) = a$ and $c(x) = x$ can be written as:

$$\frac{d}{dx} \int_{a}^{x} g(x, t)\,dt = g(x, x) + \int_{a}^{x} \frac{\partial g(x, t)}{\partial x}\,dt. \qquad (A16)$$

Starting from Equation (A10), we can now compute $\frac{dJ}{db_i}$ by identifying the terms that depend on $b_i$ and using the fact that $(\log u)' = \frac{u'}{u}$ for any function $u$. We thus obtain the following equation, where we have also flipped the integral limits (hence the negative signs) in order to match the special case expression of the Leibniz rule:

$$\frac{dJ}{db_i} = \frac{1}{\tau \Phi}\,\frac{d}{db_i}\left[-\int_{b_{i+1}}^{b_i} e^{\tau |x - y_i|} f_X(x)\,dx + \int_{b_{i-1}}^{b_i} e^{\tau |x - y_{i-1}|} f_X(x)\,dx\right] + \lambda\,\frac{d}{db_i}\left[-l_i \int_{b_{i+1}}^{b_i} f_X(x)\,dx + l_{i-1} \int_{b_{i-1}}^{b_i} f_X(x)\,dx\right]. \qquad (A17)$$

Applying the Leibniz special case rule to the above expression gives us the following:

$$\frac{dJ}{db_i} = \frac{1}{\tau \Phi}\left[-e^{\tau |b_i - y_i|} f_X(b_i) + e^{\tau |b_i - y_{i-1}|} f_X(b_i)\right] + \lambda\left[-l_i\, f_X(b_i) + l_{i-1}\, f_X(b_i)\right], \qquad (A18)$$

where the second terms $\int_{a}^{x} \frac{\partial g(x, t)}{\partial x}\,dt$ of the Leibniz rule are always zero since, in our case, $g(x, t)$ does not depend on $b_i$. Given that $y_{i-1} < b_i \le y_i$, the above expression now simplifies to:

$$\frac{dJ}{db_i} = f_X(b_i)\left[\frac{e^{\tau (b_i - y_{i-1})} - e^{\tau (y_i - b_i)}}{\tau \Phi} - \lambda\,(l_i - l_{i-1})\right]. \qquad (A19)$$
Setting Equation (A19) to zero results in the following equation for $e^{\tau b_i}$:

$$e^{2\tau b_i} - \tau \Phi \lambda\,(l_i - l_{i-1})\,e^{\tau y_{i-1}}\,e^{\tau b_i} - e^{\tau (y_{i-1} + y_i)} = 0. \qquad (A20)$$

Equation (A20) can be solved as a second-order equation in $e^{\tau b_i}$, with the following solutions:

$$e^{\tau b_i} = \frac{\tau \Phi \lambda\,(l_i - l_{i-1})\,e^{\tau y_{i-1}} \pm \sqrt{\tau^2 \Phi^2 \lambda^2 (l_i - l_{i-1})^2\,e^{2\tau y_{i-1}} + 4\,e^{\tau (y_{i-1} + y_i)}}}{2}. \qquad (A21)$$

Since $4\,e^{\tau (y_{i-1} + y_i)} > 0$, it can be seen that one solution will be negative regardless of the sign of $(l_i - l_{i-1})$, which is unacceptable, as we always have $e^{\tau b_i} > 0$. Taking the positive solution and applying the logarithm leads us to the decision level condition:

$$b_i = \frac{1}{\tau} \log \left[\frac{\tau \Phi \lambda\,(l_i - l_{i-1})\,e^{\tau y_{i-1}}}{2} + \frac{1}{2}\sqrt{4\,e^{\tau (y_{i-1} + y_i)} + \tau^2 \Phi^2 \lambda^2 (l_i - l_{i-1})^2\,e^{2\tau y_{i-1}}}\right]. \qquad (A22)$$

Finally, for codeword lengths, we have the well-known entropy condition:

$$l_i = -\log_2 \int_{b_i}^{b_{i+1}} f_X(x)\,dx. \qquad (A23)$$

Conditions (A14), (A22), and (A23) now fully describe $L_\infty$-oriented entropy-constrained scalar quantizers, which concludes the derivation.

References

  1. Panter, P.F.; Dite, W. Quantization Effects in Pulse-Code Modulation. Proc. IRE 1951, 39, 44–48. [Google Scholar] [CrossRef]
  2. Gray, R.M.; Neuhoff, D.L. Quantization. IEEE Trans. Inf. Theory 1998, 44, 2325–2383. [Google Scholar] [CrossRef]
  3. Lloyd, S.P. Least Squares Quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137. [Google Scholar] [CrossRef]
  4. Max, J. Quantizing for Minimum Distortion. IEEE Trans. Inf. Theory 1960, 6, 7–12. [Google Scholar] [CrossRef]
  5. Linde, Y.; Buzo, A.; Gray, R.M. An Algorithm for Vector Quantizer Design. IEEE Trans. Commun. 1980, 28, 84–95. [Google Scholar] [CrossRef]
  6. Wood, R. On Optimum Quantization. IEEE Trans. Inf. Theory 1969, 15, 248–252. [Google Scholar] [CrossRef]
  7. Berger, T. Optimum Quantizers and Permutation Codes. IEEE Trans. Inf. Theory 1972, 18, 759–765. [Google Scholar] [CrossRef]
  8. Farvardin, N.; Modestino, J. Optimum Quantizer Performance for a Class of Non-Gaussian Memoryless Sources. IEEE Trans. Inf. Theory 1984, 30, 485–497. [Google Scholar] [CrossRef]
  9. Chou, P.A.; Lookabaugh, T.; Gray, R.M. Entropy-Constrained Vector Quantization. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 31–42. [Google Scholar] [CrossRef]
  10. Zamir, R.; Feder, M. On Universal Quantization by Randomized Uniform/Lattice Quantizers. IEEE Trans. Inf. Theory 1992, 38, 428–436. [Google Scholar] [CrossRef]
  11. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Boston, MA, USA, 1992. [Google Scholar]
  12. Mathews, V.J.; Hahn, P.J. Vector Quantization Using the L-Infinite Distortion Measure. IEEE Signal Process. Lett. 1997, 4, 33–35. [Google Scholar] [CrossRef]
  13. Linder, T.; Zamir, R. High-Resolution Source Coding for Non-Difference Distortion Measures: The Rate–Distortion Function. IEEE Trans. Inf. Theory 1999, 45, 533–547. [Google Scholar] [CrossRef]
  14. Ling, C.W.; Li, C.T. Rejection-Sampled Universal Quantization for Smaller Quantization Errors. arXiv 2024, arXiv:2402.03030. [Google Scholar] [CrossRef]
  15. Chang, W.; Schiopu, I.; Munteanu, A. L-Infinite Predictive Coding of Depth. In Proceedings of the Advanced Concepts for Intelligent Vision Systems, ACIVS 2018, Poitiers, France, 24–27 September 2018; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2018; Volume 11157, pp. 127–139. [Google Scholar]
  16. Schiopu, I.; Tabus, I. Lossy and near-lossless compression of depth images using segmentation into constrained regions. In Proceedings of the 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 27–31 August 2012; pp. 1099–1103. [Google Scholar]
  17. Chen, K.; Ramabadran, T.V. Near-Lossless Compression of Medical Images Through Entropy-Coded DPCM. IEEE Trans. Med. Imaging 1994, 13, 538–548. [Google Scholar] [CrossRef] [PubMed]
  18. Ke, L.; Marcellin, M. Near-lossless image compression: Minimum-entropy, constrained-error DPCM. IEEE Trans. Image Process. 1998, 7, 225–228. [Google Scholar]
  19. Zern, J.; Massimino, P.; Alakuijala, J. WebP Image Format. RFC 9649. 2024. Available online: https://www.rfc-editor.org/info/rfc9649 (accessed on 28 October 2025).
  20. Avcibas, I.; Memon, N.; Sankur, B.; Sayood, K. A Progressive Lossless/Near-Lossless Image Compression Algorithm. IEEE Signal Process. Lett. 2002, 9, 312–314. [Google Scholar] [CrossRef]
  21. Wu, X.; Bao, P. $L_\infty$-constrained high-fidelity image compression via adaptive context modeling. IEEE Trans. Image Process. 2000, 9, 536–542. [Google Scholar]
  22. Wu, X.; Memon, N. Context-Based, Adaptive, Lossless Image Coding. IEEE Trans. Commun. 1997, 45, 437–444. [Google Scholar] [CrossRef]
  23. Weinberger, M.; Seroussi, G.; Sapiro, G. The LOCO-I lossless image compression algorithm: Principles and standardization into JPEG-LS. IEEE Trans. Image Process. 2000, 9, 1309–1324. [Google Scholar] [CrossRef]
  24. Tahouri, M.A.; Alecu, A.A.; Denis, L.; Munteanu, A. Lossless and Near-Lossless L-Infinite Compression of Depth Video Data. Sensors 2025, 25, 1403. [Google Scholar] [CrossRef] [PubMed]
  25. Karray, L.; Duhamel, P.; Rioul, O. Image Coding with an L-Infinite Norm and Confidence Interval Criteria. IEEE Trans. Image Process. 1998, 7, 621–631. [Google Scholar]
  26. Ansari, R.; Memon, N.; Ceran, E. Near-Lossless Image Compression Techniques. J. Electron. Imaging 1998, 7, 486–494. [Google Scholar] [CrossRef]
  27. Alecu, A.; Munteanu, A.; Cornelis, J.P.H.; Schelkens, P. Wavelet-Based Scalable L-Infinity-Oriented Compression. IEEE Trans. Image Process. 2006, 15, 2499–2512. [Google Scholar] [CrossRef] [PubMed]
  28. Pinho, A.J.; Neves, A.J.R. Progressive lossless compression of medical images. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 409–412. [Google Scholar] [CrossRef]
  29. Zhang, X.; Wu, X. Near-Lossless L-infinite-Constrained Image Decompression via Deep Neural Network. In Proceedings of the 2019 Data Compression Conference (DCC), Snowbird, UT, USA, 26–29 March 2019; pp. 33–42. [Google Scholar]
  30. Zhang, X.; Wu, X. Ultra High Fidelity Deep Image Decompression With L-infinite-Constrained Compression. IEEE Trans. Image Process. 2021, 30, 963–975. [Google Scholar] [PubMed]
  31. Bai, Y.; Liu, X.; Zuo, W.; Wang, Y.; Ji, X. Learning Scalable L-infinite-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 11941–11950. [Google Scholar]
  32. Bai, Y.; Liu, X.; Wang, K.; Ji, X.; Wu, X.; Gao, W. Deep Lossy Plus Residual Coding for Lossless and Near-Lossless Image Compression. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 3577–3594. [Google Scholar] [CrossRef]
  33. Mehrotra, S.; Zhang, Z.; Cai, Q.; Zhang, C.; Chou, P.A. Low-complexity, near-lossless coding of depth maps from kinect-like depth cameras. In Proceedings of the 2011 IEEE 13th International Workshop on Multimedia Signal Processing, Hangzhou, China, 17–19 October 2011; pp. 1–6. [Google Scholar]
  34. Choi, J.A.; Ho, Y.S. Improved near-lossless HEVC codec for depth map based on statistical analysis of residual data. In Proceedings of the 2012 IEEE International Symposium on Circuits and Systems (ISCAS), Seoul, Republic of Korea, 20–23 May 2012; pp. 894–897. [Google Scholar]
  35. Shahriyar, S.; Murshed, M.; Ali, M.; Paul, M. Depth Sequence Coding With Hierarchical Partitioning and Spatial-Domain Quantization. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 835–849. [Google Scholar]
  36. von Buelow, M.; Tausch, R.; Schurig, M.; Knauthe, V.; Wirth, T.; Guthe, S.; Santos, P.; Fellner, D.W. Depth-of-Field Segmentation for Near-lossless Image Compression and 3D Reconstruction. J. Comput. Cult. Herit. 2022, 15, 1–16. [Google Scholar] [CrossRef]
  37. Siekkinen, M.; Kämäräinen, T. Neural Network Assisted Depth Map Packing for Compression Using Standard Hardware Video Codecs. arXiv 2022, arXiv:2206.15183. [Google Scholar] [CrossRef]
  38. Wu, Y.; Gao, W. End-to-End Lossless Compression of High Precision Depth Maps Guided by Pseudo-Residual. arXiv 2022, arXiv:2201.03195. [Google Scholar]
  39. Deutsch, L.; Gailly, J. RFC 1950: ZLIB Compressed Data Format Specification Version 3.3. May 1996. Status: INFORMATIONAL. Available online: https://www.rfc-editor.org/info/rfc1950 (accessed on 28 October 2025).
Figure 1. Empirical and theoretical $L_\infty$ quantization errors for a Laplacian source.
Figure 2. An iterative algorithm for entropy-constrained scalar $L_\infty$-oriented quantization.
Figure 3. For a Laplacian distribution. (a) The R–D convex hull computed using the $L_\infty$ distortion metric. (b) The R–D curve showing the corresponding $L_2$ distortion values evaluated at the same operating points.
Figure 4. For a TSGD distribution. (a) The R–D convex hull computed using the $L_\infty$ distortion metric. (b) The R–D curve showing the corresponding $L_2$ distortion values evaluated at the same operating points.
Figure 5. For an Exponential distribution. (a) The R–D convex hull computed using the $L_\infty$ distortion metric. (b) The R–D curve showing the corresponding $L_2$ distortion values evaluated at the same operating points.
Figure 6. Quantization intervals for a Laplacian signal at distortion $D_{\max} = 5$.
Figure 7. A sample depth map frame for dataset (a) S1, (b) S2, (c) S3, (d) S4, (e) S5, (f) S6.
Figure 8. Quantization intervals for a depth map residual at distortion $D_{\max} = 20$.
Table 1. Summary of parameter configurations corresponding to points on the R–D convex hulls of the Laplacian, TSGD, and Exponential models. For each point, the rate $R$, distortion $D$, the initial number of quantization levels $M_i$, the final number of levels $M_f$ after convergence, and the associated Lagrangian multipliers $\lambda$ are reported.

        Laplacian                      TSGD                           Exponential
R     D      Mi  Mf  λ        R     D      Mi  Mf  λ        R     D      Mi  Mf  λ
0.29  32.72  99   4  35.1     0.11  41.00  91   3  45.9     0.17  19.00  49   3  19.0
0.30  31.84  99   5  34.3     0.12  40.00  91   3  43.9     0.18  18.19  49   3  18.2
0.46  26.19  99   6  27.8     0.14  39.00  91   3  42.9     0.22  17.04  49   4  17.1
0.69  20.97  99   7  21.2     0.34  28.00  91   5  30.6     0.25  16.17  49   4  15.9
0.91  17.24  99   9  17.1     0.56  22.00  91   6  23.5     0.30  15.00  49   4  14.7
1.01  15.98  99   8  15.5     0.70  19.00  91   7  20.4     0.35  14.11  49   4  13.6
1.25  13.19  99  10  12.2     1.35  11.00  91  10  11.2     0.48  12.01  49   5  11.2
1.58  10.20  99  12   9.0     1.44  10.00  91  10  10.2     0.68  10.11  49   6   8.9
1.78   8.64  99  15   7.3     1.57   9.00  91  10   9.2     0.79   9.16  49   7   7.8
2.04   7.16  99  18   5.7     1.87   7.00  91  14   7.1     0.89   8.54  49   7   7.0
2.36   5.69  99  22   4.1     2.36   5.00  91  21   4.1     0.94   8.21  49   7   6.6
2.81   4.21  99  31   2.4     3.08   3.00  91  34   2.0     1.25   6.31  49  10   4.7
3.65   2.55  99  50   0.8     4.68   1.00  91  86   1.0     1.50   5.14  49  12   3.5
4.77   1.14  99  88   0.5                                   2.19   3.34  49  20   1.6
                                                            2.67   2.51  49  25   0.8
                                                            3.77   1.07  49  46   0.5
Table 2. Comparison of near-lossless compression results for residual frames, for different schemes. Rates (in bpp) are shown, together with guaranteed maximum distortion values $D_{\max}$.

Data  D_max   Codec w/ Quantizer
              Non-Unif.  pw-Unif.  Unif.
S1      1     2.819      2.835     3.685
       10     2.287      2.334     2.639
       20     1.965      2.062     2.216
       30     1.610      1.665     1.769
S2      1     2.335      2.386     2.968
       10     1.792      1.794     2.043
       20     1.505      1.632     1.807
       30     1.278      1.370     1.491
S3      1     2.084      2.093     2.608
       10     1.611      1.634     1.818
       20     1.265      1.419     1.538
       30     1.053      1.092     1.152
S4      1     2.584      2.616     3.311
       10     2.018      2.023     2.280
       20     1.608      1.796     1.963
       30     1.318      1.257     1.343
S5      1     2.614      2.635     3.248
       10     1.989      2.023     2.241
       20     1.648      1.688     1.801
       30     1.316      1.374     1.446
S6      1     2.663      2.688     3.330
       10     2.039      2.093     2.322
       20     1.707      1.800     1.937
       30     1.422      1.408     1.506
Note. Bold values indicate the best rate obtained for each dataset and $D_{\max}$.
Table 3. Comparison of near-lossless compression results for different schemes. Rates (in bpp) and PSNR (dB) values are shown, together with guaranteed maximum distortion values $D_{\max}$.

Data  D_max  JPEG-LS        CALIC          Codec w/Non-Unif.  Codec w/pw-Unif.  Codec w/Unif.
             Rate   PSNR    Rate   PSNR    Rate   PSNR        Rate   PSNR       Rate   PSNR
S1      1    7.119  99.99   4.712  100.27  3.395  120.43      3.471  119.62     4.135  119.62
       10    4.826  65.05   3.142   65.94  2.833   61.49      2.877   82.20     3.088   82.20
       20    3.990  53.80   2.593   54.93  2.497   57.26      2.591   70.43     2.666   70.43
       30    3.459  47.26   3.332   48.42  2.128   49.10      2.182   61.55     2.219   61.55
S2      1    6.302  99.97   4.611  100.16  2.875  120.47      2.975  119.11     3.389  119.11
       10    3.994  65.02   3.089   65.84  2.300   71.42      2.297   84.77     2.464   84.77
       20    3.215  53.13   2.529   54.44  2.001   53.13      2.120   73.89     2.228   73.89
       30    2.774  46.76   2.240   48.00  1.764   47.16      1.849   62.85     1.913   62.85
S3      1    5.540  100.11  3.919  101.19  2.291  122.33      2.356  120.55     2.709  120.55
       10    3.653  65.41   2.564   66.73  1.794   59.94      1.815   84.47     1.920   84.47
       20    2.800  53.80   2.091   56.00  1.435   51.55      1.587   73.82     1.640   73.82
       30    2.301  47.19   1.855   49.12  1.212   49.85      1.248   61.80     1.254   61.80
S4      1    6.320  99.96   4.192  100.05  2.975  119.91      3.070  119.17     3.587  119.17
       10    3.978  65.00   2.791   65.50  2.377   64.82      2.381   82.71     2.556   82.71
       20    3.300  53.65   2.270   54.29  1.954   55.34      2.140   68.75     2.238   68.75
       30    2.876  47.05   2.026   47.79  1.653   49.20      1.588   58.93     1.618   58.93
S5      1    6.722  100.05  4.265  100.79  2.782  120.57      2.860  119.30     3.298  119.30
       10    4.655  65.38   2.872   66.06  2.130   64.61      2.160   86.31     2.291   86.31
       20    3.892  53.78   2.364   55.00  1.775   57.87      1.811   71.12     1.851   71.12
       30    3.360  47.08   2.111   48.59  1.430   52.23      1.484   61.45     1.496   61.45
S6      1    5.841  100.05  3.932  100.33  3.308  118.97      3.425  117.52     3.832  117.52
       10    3.764  65.30   2.592   65.94  2.631   60.59      2.687   82.59     2.825   82.59
       20    2.978  53.57   2.089   54.37  2.281   56.26      2.374   68.23     2.439   68.23
       30    2.500  46.91   1.860   48.09  1.983   49.08      1.967   59.78     2.008   59.78
Note. Bold values indicate the best rate obtained for each dataset and $D_{\max}$.