Article

Asymptotically Optimal Status Update Compression in Multi-Source System: Age–Distortion Tradeoff

Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(7), 664; https://doi.org/10.3390/e27070664
Submission received: 8 May 2025 / Revised: 14 June 2025 / Accepted: 19 June 2025 / Published: 20 June 2025
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

We consider a compression problem in a multi-source status-updating system through a representative two-source scenario. The status updates are generated by two independent sources following heterogeneous Poisson processes. These updates are then compressed into binary strings and sent to the receiver via a shared, error-free channel with a unit rate. We propose two compression schemes—a multi-quantizer compression scheme, where a dedicated quantizer–encoder pair is assigned to each source for compression, and a single-quantizer compression scheme, employing a unified quantizer–encoder pair shared across both sources. For each scheme, we formulate an optimization problem to jointly design the quantizer–encoder pairs, with the objective of minimizing the sum of the average ages subject to a distortion constraint on the symbols. The following three theoretical results are established: (1) The combination of two uniform quantizers with different parameters, along with their corresponding AoI-optimal encoders, provides an asymptotically optimal solution for the multi-quantizer compression scheme. (2) The combination of a piecewise uniform w-quantizer with an AoI-optimal encoder provides an asymptotically optimal solution for the single-quantizer compression scheme. (3) For both schemes, the optimal sum of the average ages is asymptotically linear with respect to the log distortion, with the same slope determined by the sources’ arrival rates.

1. Introduction

The age of information (AoI) [1]—a metric for data freshness—has attracted significant research interest in diverse areas, including channel coding [2,3,4,5,6], network scheduling [7,8], remote estimation [9,10,11,12], and other related fields. This growing interest stems from the surge of applications demanding timely status updates, such as the Internet of Things (IoT), vehicular ad hoc networks (VANETs), and surveillance networks [13,14,15]. In practice, many systems deploy multiple nodes to support complex monitoring tasks. Taking an industrial scenario as an example, a sensor network composed of distributed nodes can monitor key indicators such as temperature and pressure in real time, then transmit the data to the central monitoring system through the network in a timely manner. Based on these real-time data streams, factories can further optimize production processes to improve efficiency. However, for analog (i.e., continuous-valued) sources, high-precision encoding typically requires longer transmission times, inevitably causing information staleness at the receiver. Therefore, it is crucial to design an efficient compression scheme to ensure both timely and accurate recovery.
In this paper, we consider a compression problem in a multi-source (multi-stream) status-updating system through a representative two-source scenario, as illustrated in Figure 1. The system consists of two independent continuous-time analog sources, each generating independent and identically distributed (i.i.d.) symbols with possibly different probability density functions (pdfs). Status updates from the sources arrive according to heterogeneous Poisson processes. These symbols are compressed into binary strings and sent through a shared error-free unit-rate link. When the channel is busy, newly arrived updates are discarded. The system is to be designed to optimize the acquisition of fresh and accurate data at the receiver.
References relevant to this work can be broadly categorized into three research directions.
The first research direction focuses on analyzing the average age in multi-source (multi-stream) status-updating systems. The exact expression of the average age for the multi-source M/M/1 non-preemptive queue was derived in [16]. Furthermore, the average age of queues with more general service processes has been analyzed. Specifically, Najm and Telatar [17] investigated the average age for multi-source M/G/1/1 preemptive systems, while Chen et al. [18] derived the average age for multi-source M/G/1/1 non-preemptive systems, a result particularly relevant to our work. While these works provide fundamental queueing-theoretic insights, they do not incorporate the coding aspects.
The second research direction corresponds to the timely lossless source coding problem under different queueing-theoretic considerations. Therein, transmitting one bit requires one unit of time, so the transmission time of a symbol is equal to the assigned codeword length. Unlike conventional queueing systems with fixed service times, these studies treat codeword lengths as design variables to maximize information freshness, given the symbol arrival processes and probability mass functions (pmfs). Existing work has studied various coding schemes under strict lossless reconstruction requirements for the entire data stream—including fixed-to-variable [19], variable-to-variable [20], and variable-to-fixed lossless source coding schemes [21]—as well as more flexible systems permitting symbol skipping when the channel is busy [22,23]. Specifically, Mayekar et al. [22] derived the optimal real-valued codeword lengths under a zero-wait policy, where a new update is generated immediately upon successful delivery of the previous one. In [23], a selective encoding policy was proposed, which discards updates during busy periods and only encodes the most probable k realizations; the corresponding optimal real-valued codeword lengths have also been derived. However, these works have only considered single-source scenarios, leaving the multi-source source coding problem unexplored. Analog sources have also not been taken into account, so distortion plays no role there.
The third research direction explores the age–distortion tradeoff, where distortion is defined in different ways across various studies. In [24], distortion was modeled as a monotonically decreasing function of symbol processing time, and the optimal update policies were derived under distortion constraints. In [25], distortion was measured by the mean-squared error (MSE), and the age–distortion tradeoff was studied in a sensing system where an energy-harvesting sensor node monitors and transmits status updates to a remote monitor. In [26], distortion was modeled as the importance of data, and the age–distortion tradeoff was studied using dynamic programming methods. Another work [27] proposed a cross-layer framework to jointly optimize AoI and compression distortion for real-time monitoring over fading channels. However, these works did not consider the design of variable-rate quantizers.
An important case of discrete sources arises from the output of a quantizer. Therefore, we focus on the natural combination of timely lossless source coding and quantization. In our earlier work [28], a joint sampling and compression problem involving the age–distortion tradeoff for a single-source system was investigated, where the arrival process was controlled by the sampler. While [28] established a series of results for single-source systems, multi-source scenarios introduce new issues in the following four aspects: (1) The deployment of multiple quantizers and encoders increases the number of optimization parameters, which are often interdependent. (2) Age evolution in multi-source systems exhibits intrinsic interdependencies distinct from the single-source case. The average age of any individual source may be a nonlinear multivariate function of service times for all sources, which cannot be decoupled, thereby increasing system design complexity. (3) System performance metrics inherently depend on collective behavior across all sources. (4) Significant heterogeneity among sources—in terms of probability distribution characteristics, arrival rates, and other parameters—further complicates the design task. The age–distortion tradeoff visualization is illustrated in Figure 2.
To accommodate heterogeneous requirements on accuracy, we introduce weights α and 1 − α, and use the weighted sum of mean-squared errors (WSMSE) as the system distortion measure. We propose two compression schemes—a multi-quantizer compression scheme with dedicated quantizer–encoder pairs for each source, and a single-quantizer compression scheme, employing a unified quantizer–encoder pair shared across both sources, as illustrated in Figure 3 and Figure 4, respectively.
For each compression scheme, we formulate a joint optimization problem to design quantizers and encoders, minimizing the sum of the average ages under a given distortion constraint on the symbols. For the multi-quantizer compression scheme, the combination of two uniform quantizers with different parameters, along with the corresponding AoI-optimal encoders, provides an asymptotically optimal solution. For the single-quantizer compression scheme, the combination of a piecewise uniform w-quantizer with the corresponding AoI-optimal encoder provides an asymptotically optimal solution. Our analysis reveals that the optimal sum of average ages follows an asymptotically linear relationship with log WSMSE, with the same slope determined by the arrival rates of both sources. In comparison, the optimal average age versus log MSE is asymptotically linear with a slope of 3/4, as established by [28] for the single-source case. A classical result in high-resolution quantization theory states that entropy versus log MSE is asymptotically linear with a slope of 1/2 [29].
The remainder of this paper is organized as follows: In Section 2, we describe the system model and propose two compression schemes. In Section 3, we study the multi-quantizer compression scheme and develop its asymptotically optimal solution. In Section 4, we turn to the single-quantizer compression scheme and develop its corresponding results. The influence of different parameters on system performance is studied in Section 5. Numerical results are provided in Section 6. Finally, we conclude the paper in Section 7.

2. System Model

We consider a continuous-time status-updating system with two independent analog sources, as shown in Figure 1. For each source i (i = 1, 2), update symbols X_i are generated as i.i.d. random variables with known pdf f_i(x), and they arrive according to a Poisson process with rate λ_i. We assume that the pdfs satisfy the following conditions—an assumption that is typically employed in high-resolution quantization theory [30]:
(1) Each pdf f_i(x) is continuous and sufficiently smooth, and both pdfs share the same bounded support interval U = [a, b]. (2) The quantization cells are sufficiently small such that each pdf can be considered approximately constant within each cell. (3) Each reconstruction point X̂ is positioned at the centroid of its corresponding quantization cell. (4) The quantization rate R is sufficiently high (i.e., R → ∞).
We propose two compression schemes—a multi-quantizer compression scheme, where a dedicated quantizer–encoder pair is assigned to each source for compression, and a single-quantizer compression scheme, employing a unified quantizer–encoder pair shared across both sources, as shown in Figure 3 and Figure 4, respectively. The difference between the two schemes lies in their required number of quantizer–encoder pairs. The multi-quantizer compression scheme offers greater design flexibility, as it can be designed separately for each source. In contrast, the single-quantizer compression scheme requires only one quantizer–encoder pair, whose design is dependent on the characteristics of both sources simultaneously, potentially limiting its performance.
During idle channel periods, arriving symbols are quantized and then assigned binary prefix-free codewords by an encoder. These compressed symbols are then sent to the receiver through a shared noiseless channel at a rate of one bit per unit time, while any new arrivals generated during channel busy periods are discarded. Therefore, the transmission time of a symbol is equal to its assigned codeword length, and our system can be modeled as a multi-source (multi-stream) M/G/1/1 non-preemptive system [18]. We assume that the receiver can identify the corresponding source of the symbols received.
We use AoI to quantify information freshness. For source i, the AoI at time t is defined as Δ^(i)(t) = t − τ^(i)(t), where τ^(i)(t) denotes the generation time of the most recently received update at time t, and the average age is given by the following:
Δ^(i) = lim_{T→∞} (1/T) ∫_0^T Δ^(i)(t) dt.
For the multi-source M/G/1/1 non-preemptive system, the average age of source i (i = 1, 2) is given by the following [18]:
Δ^(i) = (λ1 E[S1] + λ2 E[S2] + 1)/λ_i + (λ1 E[S1²] + λ2 E[S2²]) / (2(λ1 E[S1] + λ2 E[S2] + 1)),
where S 1 and S 2 denote the transmission times of symbols generated by sources 1 and 2, respectively.
The sum of the average ages—which we refer to simply as the AoI in what follows—is expressed as follows:
Δ = (λ1 E[S1] + λ2 E[S2] + 1)/λ1 + (λ1 E[S1] + λ2 E[S2] + 1)/λ2 + (λ1 E[S1²] + λ2 E[S2²]) / (λ1 E[S1] + λ2 E[S2] + 1).
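As a concrete sanity check, the expression above can be evaluated numerically. The sketch below computes Δ from the arrival rates and the first and second moments of the service times; the example rates and deterministic service times are illustrative values, not taken from the paper.

```python
# Numerical sketch of the sum of the average ages for the two-source
# M/G/1/1 non-preemptive system [18]. Example parameters are illustrative.

def sum_average_ages(lam1, lam2, ES1, ES2, ES1sq, ES2sq):
    """Sum of the average ages from the first and second service-time moments."""
    rho = lam1 * ES1 + lam2 * ES2 + 1.0  # common term lam1*E[S1] + lam2*E[S2] + 1
    return rho / lam1 + rho / lam2 + (lam1 * ES1sq + lam2 * ES2sq) / rho

# Example: lam1 = 1, lam2 = 2 with deterministic service times S1 = 3 and
# S2 = 4, so E[S1^2] = 9 and E[S2^2] = 16.
print(sum_average_ages(1.0, 2.0, 3.0, 4.0, 9.0, 16.0))  # 12/1 + 12/2 + 41/12
```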

2.1. Multi-Quantizer Compression Scheme

In the multi-quantizer compression scheme, the symbols from the two sources are compressed by two separate quantizer–encoder pairs, respectively. For source i, given a quantizer Q_i, (a_{j_i−1}, a_{j_i}] denotes the j_i-th quantization cell. Each cell is represented by a reproduction point c_{j_i} with occurrence probability p_{j_i}. Let 𝒬 denote the set of quantizers. The MSE for source i is expressed as follows:
D(Q_i(X_i)) = ∑_{j_i} ∫_{a_{j_i−1}}^{a_{j_i}} (x − c_{j_i})² f_i(x) dx.
To accommodate heterogeneous requirements on accuracy, we introduce weights α and 1 − α. For the multi-quantizer compression scheme, we define the WSMSE D_m(X1, X2) as the system distortion measure, as expressed in Equation (5). In this paper, we use the subscript “m” to denote variables under the multi-quantizer compression scheme, and “s” for variables under the single-quantizer compression scheme.
D_m(X1, X2) = α D(Q1(X1)) + (1 − α) D(Q2(X2)).
For a given pmf, we assign binary prefix-free codewords to the quantization cells, where the codeword length for the j_i-th cell of source i is denoted by l_{j_i}. The random variable L_i represents the codeword length for the quantized symbols of source i, and the set of all prefix-free codeword length assignments is denoted by ℒ. As established in information theory, the codeword lengths of any prefix-free code must satisfy the Kraft inequality, i.e., ∑_{j_i} 2^(−l_{j_i}) ≤ 1 ([29], p. 24). In our analysis, we focus on the high-resolution regime, where the average codeword lengths are sufficiently large. In the subsequent analysis, we ignore the integer constraint and consider real-valued length assignments. The transmission time of a symbol is equal to its assigned codeword length; the AoI is given by the following:
Δm = (λ1 E[L1] + λ2 E[L2] + 1)/λ1 + (λ1 E[L1] + λ2 E[L2] + 1)/λ2 + (λ1 E[L1²] + λ2 E[L2²]) / (λ1 E[L1] + λ2 E[L2] + 1).
The AoI Δ m is a nonlinear function of E [ L i ] and E [ L i 2 ] , which is difficult to decouple directly. Furthermore, each quantizer Q i determines the output pmf P i ( Q i ( X i ) ) . Therefore, the AoI minimization problem can be formulated as a joint codeword length assignment problem, which is a complex nonlinear fractional problem that involves the joint design of two sets of codeword lengths, as follows:
min_{L1, L2 ∈ ℒ} Δm   s.t. ∑_{j1} 2^(−l_{j1}) ≤ 1, ∑_{j2} 2^(−l_{j2}) ≤ 1, l_{j1}, l_{j2} ∈ ℝ+.
Given the arrival processes for both sources, the AoI is governed by the assigned codeword lengths, which in turn are determined by the pmfs of the quantizer outputs. The WSMSE distortion metric, characterized by the MSEs for both sources, is directly influenced by the design of both quantizers. When the analog sources are considered, this framework naturally gives rise to an inherent tradeoff between AoI and distortion performance, which necessitates the joint optimization of both quantizer–encoder pairs, leading to a generalized formulation of Problem (7) as follows:
min_{Q1, Q2 ∈ 𝒬, L1, L2 ∈ ℒ} Δm   s.t. ∑_{j1} 2^(−l_{j1}) ≤ 1, ∑_{j2} 2^(−l_{j2}) ≤ 1, D_m(X1, X2) ≤ D, l_{j1}, l_{j2} ∈ ℝ+.

2.2. Single-Quantizer Compression Scheme

In the single-quantizer compression scheme, the symbols generated by both sources are compressed by a shared quantizer–encoder pair. Therefore, the design of this pair depends simultaneously on both sources.
For a given quantizer Q, we denote the j-th quantization cell by (a_{j−1}, a_j], represented by a reproduction point c_j. The quantizer maps the continuous inputs X1 and X2 to discrete outputs Q(X1) and Q(X2), with pmfs P1(Q(X1)) = {p1, p2, …, p_j, …} and P2(Q(X2)) = {q1, q2, …, q_j, …}, defined as follows:
p_j = ∫_{a_{j−1}}^{a_j} f1(x) dx,   q_j = ∫_{a_{j−1}}^{a_j} f2(x) dx,
representing the occurrence probabilities for sources 1 and 2, respectively. The MSE for source i is given by the following:
D(Q(X_i)) = ∑_j ∫_{a_{j−1}}^{a_j} (x − c_j)² f_i(x) dx.
For the single-quantizer compression scheme, we similarly adopt the WSMSE D_s(X1, X2) as the system distortion metric, as follows:
D_s(X1, X2) = α D(Q(X1)) + (1 − α) D(Q(X2)).
We assign binary prefix-free codewords to the quantization cells, where the codeword length assigned to the j-th cell is denoted by l_j. Since both sources share the same quantizer–encoder pair, the quantization regions, reproduction points, and codeword length assignments remain identical. However, the pmfs of the codeword length assignments are different, leading to distinct statistical properties of the transmission times. Although the same symbol has identical transmission time values for both sources, their distributions differ. To explicitly capture the differences in distribution, we denote the first and second moments of the codeword lengths as E_{P_i}[L] and E_{P_i}[L²] for each source i, respectively. The AoI is given by
Δs = (λ1 E_{P1}[L] + λ2 E_{P2}[L] + 1)/λ1 + (λ1 E_{P1}[L] + λ2 E_{P2}[L] + 1)/λ2 + (λ1 E_{P1}[L²] + λ2 E_{P2}[L²]) / (λ1 E_{P1}[L] + λ2 E_{P2}[L] + 1).
The AoI Δ s is a nonlinear function of E P i [ L ] and E P i [ L 2 ] , which is difficult to decouple directly. Furthermore, the quantizer Q determines the output pmfs P i ( Q ( X i ) ) for source i, thereby necessitating the design of codeword lengths for both pmfs. This leads to the following AoI minimization problem:
min_{L ∈ ℒ} Δs   s.t. ∑_j 2^(−l_j) ≤ 1, l_j ∈ ℝ+.
When the analog sources are considered, this framework naturally gives rise to an inherent tradeoff between AoI and distortion performance. Specifically, for both sources, the quantizer design and codeword length assignment are intrinsically coupled. The quantizer determines both the distortion characteristics and the output probability distributions, which subsequently govern the optimal codeword length assignment and, consequently, the achievable AoI performance. We study a joint quantization and encoding problem to optimize AoI under a distortion constraint, as follows:
min_{Q ∈ 𝒬, L ∈ ℒ} Δs   s.t. ∑_j 2^(−l_j) ≤ 1, D_s(X1, X2) ≤ D, l_j ∈ ℝ+.
The encoders are different between the two compression schemes. The multi-quantizer scheme employs two distinct sets of codeword lengths (one for each source), whereas the single-quantizer scheme employs a unified set of codeword lengths for both sources. To maintain notational simplicity, for both schemes, we refer to encoders with codeword lengths satisfying optimization problems (7) and (13) collectively as the AoI-optimal encoder, denoted by F*. In addition, the Shannon encoder F_s is defined as the encoder with lengths l_j = −log2 p_j, where p_j represents the probability of the j-th realization.
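For intuition, the Shannon encoder F_s can be sketched in a few lines. The pmf below is an arbitrary illustrative example; the real-valued lengths −log2(p_j) used here (the integer constraint is ignored, as in the text) meet the Kraft inequality with equality.

```python
import math

# Sketch of the Shannon encoder F_s: the cell with probability p_j is
# assigned the real-valued codeword length l_j = -log2(p_j).

def shannon_lengths(pmf):
    return [-math.log2(p) for p in pmf]

pmf = [0.5, 0.25, 0.125, 0.125]   # arbitrary example pmf
lengths = shannon_lengths(pmf)
print(lengths)  # [1.0, 2.0, 3.0, 3.0]

# Real-valued Shannon lengths satisfy the Kraft inequality with equality,
# and the expected length equals the output entropy H(Q(X)).
print(sum(2.0 ** (-l) for l in lengths))         # 1.0
print(sum(p * l for p, l in zip(pmf, lengths)))  # 1.75 bits
```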

2.3. Preliminaries

In this section, we present some definitions that will be used throughout the rest of the text.
Definition 1. 
For the multi-quantizer compression scheme, given a distortion threshold D > 0 , the optimal AoI is defined as follows:
Δm*(D) = inf_{Q1, Q2, F : D_m(X1, X2) ≤ D} Δm(Q1, Q2, F),
where the infimum is taken over all quantizers Q1, Q2 satisfying the distortion constraint and all codeword length assignments L1, L2.
A quintuple (Q1, Q2, F, D1, D2) is asymptotically optimal under the distortion threshold D if:
lim_{D→0} [Δm(Q1, Q2, F, D1, D2) − Δm*(D)] = 0.
For any distortion thresholds D1 and D2 satisfying αD1 + (1 − α)D2 ≤ D, we define the constrained optimal AoI under the distortion thresholds D1 and D2 as follows:
Δm*(D1, D2) = inf_{Q1, Q2, F : D(Q1(X1)) ≤ D1, D(Q2(X2)) ≤ D2} Δm(Q1, Q2, F).
Similarly, a triplet (Q1, Q2, F) is asymptotically optimal under the distortion thresholds D1 and D2 if:
lim_{D1→0, D2→0} [Δm(Q1, Q2, F) − Δm*(D1, D2)] = 0.
Remark 1. 
Since the system distortion metric is the WSMSE, in the multi-quantizer compression scheme and under a distortion threshold D > 0 , the design problem not only involves the design of the quantizer and corresponding encoders but also needs to allocate the appropriate distortion for each source. Consequently, the complete solution can be formally represented as a quintuple ( Q 1 , Q 2 , F , D 1 , D 2 ) .
Definition 2. 
For the single-quantizer compression scheme, given a distortion threshold D > 0 , the optimal AoI is given by the following:
Δs*(D) = inf_{Q, F : D_s(X1, X2) ≤ D} Δs(Q, F).
A pair (Q, F) is asymptotically optimal under the distortion threshold D if:
lim_{D→0} [Δs(Q, F) − Δs*(D)] = 0.
Subsequently, we present some definitions and known results from high-resolution quantization theory. Given a quantizer Q, the output entropy is denoted by H(Q(X)). A quantizer that achieves the minimum entropy for a given distortion threshold D is called the optimal quantizer, denoted by Q*. The uniform quantizer is denoted by Q_uni. An asymptotically optimal quantizer is defined as follows:
Definition 3. 
A quantizer Q is asymptotically optimal if
lim_{D→0} [H(Q(X)) − H(Q*(X))] = 0.
For a pdf f(x) and a weight function w(x), the weighted mean-squared error (WMSE) distortion is defined by the following:
D_w(Q(X)) = ∑_j ∫_{a_{j−1}}^{a_j} w(x)(x − c_j)² f(x) dx.
According to the results of [30], for a continuous and sufficiently smooth weight function w(x) and the corresponding WMSE distortion, a piecewise uniform quantizer—which we call the w-quantizer, denoted by Q_w—can be constructed. The construction proceeds as follows: First, the support interval U is partitioned into intervals {I1, …, I_j, …} of equal step size δ. For sufficiently small δ, w(x) is approximately constant, equal to w(x_j), within each I_j. Then, each interval I_j is subdivided into cells of length δ/√(w(x_j)), and the midpoint of each cell is the reproduction point. The results are recapitulated as follows:
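The construction above can be sketched programmatically. The snippet below is a minimal illustration, assuming the cell length δ/√(w(x_j)) described above; the weight function w(x) = 1 + x and the support [0, 1] are example choices, not from the paper.

```python
import math

# Sketch of the piecewise uniform w-quantizer of [30]: partition [a, b]
# into intervals of step delta, then subdivide interval I_j into cells
# of length delta / sqrt(w(x_j)), with midpoints as reproduction points.

def w_quantizer_boundaries(a, b, delta, w):
    """Cell boundaries of the piecewise uniform w-quantizer on [a, b]."""
    bounds = [a]
    x = a
    while x < b - 1e-12:
        x_next = min(x + delta, b)
        cell = delta / math.sqrt(w(0.5 * (x + x_next)))  # w ~ constant on I_j
        y = x
        while y < x_next - 1e-12:
            y = min(y + cell, x_next)  # last cell in I_j is truncated
            bounds.append(y)
        x = x_next
    return bounds

bounds = w_quantizer_boundaries(0.0, 1.0, 0.1, lambda x: 1.0 + x)
print(len(bounds) - 1)  # 20: two cells per interval for this weight
```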
Lemma 1. 
([30]). Let w(x) be a continuous, sufficiently smooth weight function with the bounded support interval U = [a, b]. Under the WMSE distortion, the w-quantizer is asymptotically optimal, as follows:
lim_{D→0} [H(Q_w(X)) − H(Q*(X))] = 0.
Furthermore, the asymptotic behavior of the distortion satisfies the following:
lim_{δ→0} D/δ² = 1/12,
and
lim_{D→0} [H(Q_w(X)) + log2 √(12D)] = h(X) + (1/2) E[log2(w(X))],
where
h(X) = −∫_U f(x) log2 f(x) dx
is the differential entropy of the random variable X.
Remark 2. 
A well-known result in high-resolution quantization theory is that the uniform quantizer is asymptotically optimal for entropy-constrained quantization [31], which corresponds to the special case w(x) ≡ 1. The key properties follow directly from Lemma 1 with w(x) ≡ 1, as follows:
lim_{D→0} [H(Q_uni(X)) + log2 √(12D)] = h(X).
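Remark 2 admits a simple numerical check. For X uniform on [0, 1] we have h(X) = 0, a uniform quantizer with n cells has output entropy log2(n), and its MSE is exactly δ²/12 with centroid reproduction points, so the quantity H(Q_uni(X)) + log2 √(12D) vanishes for every n. This toy verification (not part of the paper) makes the identity concrete:

```python
import math

# Toy check of Remark 2 for X ~ Uniform[0, 1], where h(X) = 0: with n
# equiprobable cells of size delta = 1/n, H = log2(n) and D = delta^2/12,
# so H + log2(sqrt(12 * D)) should vanish for every n.

for n in (4, 16, 64):
    delta = 1.0 / n
    D = delta ** 2 / 12.0       # exact MSE for the uniform pdf
    H = math.log2(n)            # all n cells are equiprobable
    print(n, H + math.log2(math.sqrt(12.0 * D)))  # ~0.0 in every case
```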

3. Asymptotically Optimal Solution for Multi-Quantizer Compression Scheme

For the multi-quantizer compression scheme, we develop an asymptotically optimal solution. For notational simplicity, we define the following for use throughout the paper:
a := λ1/λ2 + λ1/(λ1 + λ2) + 1,
b := λ2/λ1 + λ2/(λ1 + λ2) + 1,
c := (λ1² + λ1λ2 + λ2²) / (λ1λ2(λ1 + λ2)).
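For reference, the constants a, b, and c are simple functions of the arrival rates. The helper below evaluates them and the resulting asymptotic slope (a + b)/2; the symmetric example λ1 = λ2 = 1 is an illustrative choice.

```python
# Evaluate the constants a, b, c defined above from the arrival rates.

def age_constants(lam1, lam2):
    a = lam1 / lam2 + lam1 / (lam1 + lam2) + 1.0
    b = lam2 / lam1 + lam2 / (lam1 + lam2) + 1.0
    c = (lam1**2 + lam1 * lam2 + lam2**2) / (lam1 * lam2 * (lam1 + lam2))
    return a, b, c

a, b, c = age_constants(1.0, 1.0)  # symmetric sources
print(a, b, c)        # 2.5 2.5 1.5
print((a + b) / 2.0)  # 2.5: asymptotic slope of the optimal AoI vs. log(1/D)
```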
The results are then presented as follows:
Theorem 1. 
The uniform quantizer Q_uni^(1), with cell size δ1 = √(12aD/(α(a + b))) and distortion D1* = aD/(α(a + b)) for source 1, along with the uniform quantizer Q_uni^(2), with cell size δ2 = √(12bD/((1 − α)(a + b))) and distortion D2* = bD/((1 − α)(a + b)) for source 2, as well as the AoI-optimal encoder, together provide an asymptotically optimal solution to Problem (8), in the sense that, as the distortion D → 0, this solution asymptotically achieves the optimal AoI:
lim_{D→0} [Δm(Q_uni^(1), Q_uni^(2), F*, D1*, D2*) − Δm*(D)] = 0.
Under this solution, the asymptotic behavior of the optimal AoI satisfies the following:
lim_{D→0} [Δm*(D) + ((a + b)/2) log2(12D)] = −(a/2) log2(a/(α(a + b))) − (b/2) log2(b/((1 − α)(a + b))) + a h(X1) + b h(X2) + c
and
lim_{D→0} Δm*(D)/log2(1/D) = (a + b)/2.
Remark 3. 
Theorem 1 reveals that the performance of the multi-quantizer compression scheme exhibits a strong dependence on the weight α and the ratio a/(a + b), since the allocation of distortion to each source is directly determined by these parameters. The ratio a/(a + b) quantifies the relative contribution of source 1 to the sum of the average ages, as implied in the proof below. Intuitively, a larger ratio a/(a + b) necessitates a smaller average age for source 1 to minimize the sum of the average ages, and a smaller value of α indicates that the source can tolerate a larger distortion. This is consistent with the optimal distortion allocation D1* = aD/(α(a + b)). The analysis for D2* follows analogously.
Remark 4. 
Let
δ = √(α(a + b)/a) δ1 = √((1 − α)(a + b)/b) δ2.
As δ1 → 0 and δ2 → 0, D1* ≈ δ1²/12 and D2* ≈ δ2²/12. Then,
α δ1²/12 + (1 − α) δ2²/12 = α (a/(α(a + b))) δ²/12 + (1 − α) (b/((1 − α)(a + b))) δ²/12 = δ²/12,
which yields
D = α D1* + (1 − α) D2* ≈ δ²/12.
Thus, we define δ as the step size for the multi-quantizer compression scheme.
Remark 5. 
A classical result in high-resolution quantization theory is that entropy versus log MSE is asymptotically linear with a slope of 1/2 [29]. Similarly, our multi-quantizer scheme reveals an analogous asymptotically linear relationship—the performance curve of the optimal AoI versus log WSMSE is asymptotically linear with a slope of (a + b)/2, which depends explicitly on the source arrival rates.
The proof of Theorem 1 proceeds as follows: First, a lower bound on the AoI Δm is constructed, which decouples the design of the codeword length assignments for the two sources. Then, we obtain the upper bound and the asymptotic lower bound of the optimal AoI Δm*(D) by (i) allocating an appropriate distortion to each source and (ii) designing the corresponding quantizer–encoder pairs. We further prove that the two bounds are asymptotically tight. During this process, to avoid directly solving Problem (8), the performance of the Shannon encoder is used to approximate the solution of (8) in the high-resolution regime. Finally, we prove the asymptotic optimality of the solution and analyze its performance. The proof flowchart of Theorem 1 is shown in Figure 5.
Proof. 
We divide the proof of Theorem 1 into four steps.
  • Step 1. Derive the lower bound of Δ m , as follows:
Lemma 2. 
Δ m is lower bounded by the following:
Δm ≥ a E[L1] + b E[L2] + c =: Δ̲m.
Proof. 
See Appendix A. □
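Lemma 2 can also be spot-checked numerically: for randomly drawn arrival rates and two-point codeword-length distributions, the AoI expression Δm should never fall below a E[L1] + b E[L2] + c. The sketch below (an illustrative sanity check under arbitrary parameter ranges, not a substitute for the proof in Appendix A) performs this check.

```python
import random

# Spot check of Lemma 2: Delta_m >= a*E[L1] + b*E[L2] + c for random
# arrival rates and random two-point length pmfs (illustrative ranges).

def delta_m(lam1, lam2, EL1, EL2, EL1sq, EL2sq):
    """AoI of the two-source system from the codeword-length moments."""
    rho = lam1 * EL1 + lam2 * EL2 + 1.0
    return rho / lam1 + rho / lam2 + (lam1 * EL1sq + lam2 * EL2sq) / rho

def lower_bound(lam1, lam2, EL1, EL2):
    """The linear lower bound a*E[L1] + b*E[L2] + c of Lemma 2."""
    a = lam1 / lam2 + lam1 / (lam1 + lam2) + 1.0
    b = lam2 / lam1 + lam2 / (lam1 + lam2) + 1.0
    c = (lam1**2 + lam1 * lam2 + lam2**2) / (lam1 * lam2 * (lam1 + lam2))
    return a * EL1 + b * EL2 + c

def two_point_moments(rng):
    """First and second moments of a random two-point length pmf."""
    x, y, p = rng.uniform(0.1, 10.0), rng.uniform(0.1, 10.0), rng.random()
    return p * x + (1.0 - p) * y, p * x**2 + (1.0 - p) * y**2

rng = random.Random(0)
violations = 0
for _ in range(10000):
    lam1, lam2 = rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)
    EL1, EL1sq = two_point_moments(rng)
    EL2, EL2sq = two_point_moments(rng)
    if delta_m(lam1, lam2, EL1, EL2, EL1sq, EL2sq) < lower_bound(lam1, lam2, EL1, EL2) - 1e-9:
        violations += 1
print(violations)  # 0: the bound held in every trial
```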
  • Step 2. Derive the upper bound and the asymptotic lower bound of the optimal AoI Δm*(D).
For notational simplicity, we define the following:
Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1, D2) := a H(Q_uni^(1)(X1), D1) + b H(Q_uni^(2)(X2), D2) + c,
where H(Q_uni^(i)(X_i), D_i) denotes the output entropy of the uniform quantizer for source i (i = 1, 2) with allocated distortion D_i.
In the following lemma, an asymptotic lower bound of Δm*(D) is given:
Lemma 3. 
For any ϵ > 0, there exists a distortion D0 > 0 such that for all 0 < D < D0, the following inequality holds:
Δm*(D) + ϵ ≥ Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*),
where D1* = aD/(α(a + b)) and D2* = bD/((1 − α)(a + b)).
Proof. 
See Appendix B. □
Given the optimal distortion allocation (D1*, D2*), the AoI achieved by the corresponding uniform quantizers with the Shannon encoders, Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*), is clearly an upper bound on the optimal AoI, expressed as follows:
Δm*(D) ≤ Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*).
  • Step 3. We proceed to prove that the upper bound and the asymptotic lower bound of the optimal AoI coincide asymptotically.
Lemma 4. 
If the uniform quantizers and the Shannon encoders for sources 1 and 2 are given, then
lim_{D→0} [Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) − Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*)] = 0,
where D1* = aD/(α(a + b)) and D2* = bD/((1 − α)(a + b)).
Proof. 
See Appendix C. □
  • Step 4. Derive the asymptotically optimal solution and analyze its performance.
From Lemma 4, we can obtain that, for any ϵ > 0, there exists some D0′ > 0 such that 0 < D < D0′ implies the following:
|Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) − Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*)| < ϵ/2.
Letting D̃0 = min{D0, D0′}, with D0 the threshold in Lemma 3, for any D satisfying 0 < D < D̃0, the following inequality holds:
Δm*(D) + (3/2)ϵ > Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) + ϵ/2 > Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*).
Since ϵ can be arbitrarily small, for sufficiently small D, the following holds:
Δm*(D) ≈ Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) ≥ Δm(Q_uni^(1), Q_uni^(2), F*, D1*, D2*).
Thus, the quintuple ( Q uni ( 1 ) , Q uni ( 2 ) , F * , D 1 * , D 2 * ) is the asymptotically optimal solution.
Next, we prove (32) and (33). We have the following:
|Δm*(D) + ((a + b)/2) log2(12D) + (a/2) log2(a/(α(a + b))) + (b/2) log2(b/((1 − α)(a + b))) − a h(X1) − b h(X2) − c|
≤ |Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) − Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*)|
+ |Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) + ((a + b)/2) log2(12D) + (a/2) log2(a/(α(a + b))) + (b/2) log2(b/((1 − α)(a + b))) − a h(X1) − b h(X2) − c|.
Then, we derive the following:
Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) + ((a + b)/2) log2(12D) + (a/2) log2(a/(α(a + b))) + (b/2) log2(b/((1 − α)(a + b))) − a h(X1) − b h(X2) − c
= a H(Q_uni^(1)(X1), D1*) + b H(Q_uni^(2)(X2), D2*) + c + ((a + b)/2) log2(12D) + (a/2) log2(a/(α(a + b))) + (b/2) log2(b/((1 − α)(a + b))) − a h(X1) − b h(X2) − c
= [a H(Q_uni^(1)(X1), D1*) + (a/2) log2(12aD/(α(a + b))) − a h(X1)] + [b H(Q_uni^(2)(X2), D2*) + (b/2) log2(12bD/((1 − α)(a + b))) − b h(X2)]
= [a H(Q_uni^(1)(X1), D1*) + a log2 √(12D1*) − a h(X1)] + [b H(Q_uni^(2)(X2), D2*) + b log2 √(12D2*) − b h(X2)].
From the results of high-resolution theory, we obtain the following:
lim_{D1*→0} [ a H(Q_uni^(1)(X1), D1*) + (a/2) log2(12 D1*) ] = a h(X1)
lim_{D2*→0} [ b H(Q_uni^(2)(X2), D2*) + (b/2) log2(12 D2*) ] = b h(X2).
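The two limits above instantiate the standard high-resolution result H(Q_uni(X), D) → h(X) − (1/2) log2(12D). As a purely illustrative check (not part of the proof), the following Python sketch evaluates this convergence for a truncated Exp(1) source on [0, 6], the same truncation used later in the numerical section; the step sizes are arbitrary choices:

```python
import math

# Truncated Exp(1) source on [0, 6]: f(x) = C * exp(-x), C = 1/(1 - e^{-6}).
C = 1.0 / (1.0 - math.exp(-6.0))

def quantizer_entropy(delta):
    """Output entropy (bits) of a uniform quantizer with cell size delta."""
    H, x = 0.0, 0.0
    while x < 6.0:
        p = C * (math.exp(-x) - math.exp(-min(x + delta, 6.0)))  # exact cell mass
        if p > 0.0:
            H -= p * math.log2(p)
        x += delta
    return H

# Differential entropy of this density: h(X) = (E[X] - ln C) / ln 2.
EX = C * (1.0 - 7.0 * math.exp(-6.0))        # E[X] of the truncated density
h = (EX - math.log(C)) / math.log(2.0)

# High-resolution claim: H(Q_uni) + log2(delta) -> h(X); since D = delta^2 / 12,
# this is the same as H(Q_uni) + (1/2) log2(12 D) -> h(X).
for delta in (0.5, 0.1, 0.01):
    print(delta, quantizer_entropy(delta) + math.log2(delta) - h)
```

The printed differences shrink with the cell size, matching the limits above.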
Equations (46)–(48) yield the following:
lim_{D→0} [ Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) + ((a+b)/2) log2(12D) ] = −(a/2) log2( a/(α(a+b)) ) − (b/2) log2( b/((1−α)(a+b)) ) + a h(X1) + b h(X2) + c.
Then, (45), (49), and Lemma 4 imply the following:
lim_{D→0} [ Δm*(D) + ((a+b)/2) log2(12D) ] = −(a/2) log2( a/(α(a+b)) ) − (b/2) log2( b/((1−α)(a+b)) ) + a h(X1) + b h(X2) + c.
Then, we can directly obtain the following result:
lim_{D→0} Δm*(D) / log2 D = −(a+b)/2.
This completes the proof. □

4. Asymptotically Optimal Solution for Single-Quantizer Compression Scheme

For the single-quantizer compression scheme, we develop an asymptotically optimal solution. Let β = a/(a+b) and introduce two random variables X̄α and X̄β, with the pdfs f̄α(x) = α f1(x) + (1−α) f2(x) and f̄β(x) = β f1(x) + (1−β) f2(x), respectively.
Theorem 2. 
The pair ( Q w , F * ) , consisting of the w-quantizer and the AoI-optimal encoder, forms an asymptotically optimal solution to the problem (14), as follows:
lim_{D→0} [ Δs(Q_w, F*) − Δs*(D) ] = 0.
Under this solution, the asymptotic behavior of the optimal AoI satisfies the following:
lim_{D→0} [ Δs*(D) + ((a+b)/2) log2(12D) ] = (a+b) h(X̄β) − ((a+b)/2) D(f̄β || f̄α) + c,
where D(f̄β || f̄α) = ∫_U f̄β(x) log2( f̄β(x)/f̄α(x) ) dx is the relative entropy between f̄β(x) and f̄α(x). Furthermore, we have the following:
lim_{D→0} Δs*(D) / log2 D = −(a+b)/2.
Remark 6. 
Theorem 2 reveals that the performance of the single-quantizer compression scheme also exhibits a strong dependence on the weight α and the ratio β = a/(a+b). This scheme can essentially be viewed as a single-source compression problem, as implied in the proof below. The relative entropy D(f̄β || f̄α) characterizes the "mismatch" between the equivalent probability distribution in the objective function and that in the distortion constraint.
The proof of Theorem 2 proceeds as follows: First, for the single-quantizer compression scheme, we derive the upper and asymptotically lower bounds of the optimal AoI Δ s * ( D ) . Then, we prove that the two bounds asymptotically coincide. Crucially, this is achieved by leveraging the Shannon encoder’s performance to approximate the solution of (14) in the high-resolution regime, thereby circumventing the need to solve the original nonlinear fractional optimization problem directly. Finally, we derive the asymptotically optimal pair and analyze its performance. The proof flowchart of Theorem 2 is shown in Figure 6.
Proof. 
We divide the proof into the following three steps:
  • Step 1. Derive the upper and the asymptotically lower bounds of the optimal AoI Δ s * ( D ) .
Let w(x) := f̄α(x)/f̄β(x). Then, the original WSMSE distortion metric in (11) is transformed into the following WMSE metric:
D_s(X1, X2) = α D(Q(X1)) + (1−α) D(Q(X2)) = Σ_j ∫_{a_{j−1}}^{a_j} (x − c_j)² [ α f1(x) + (1−α) f2(x) ] dx
= Σ_j ∫_{a_{j−1}}^{a_j} (x − c_j)² f̄α(x) dx = Σ_j ∫_{a_{j−1}}^{a_j} (x − c_j)² w(x) f̄β(x) dx =: D_w(Q(X̄β)).
By treating w ( x ) as a weight function, the original optimization problem (14) can be reformulated as follows:
min_{Q ∈ Q, L ∈ L} Δs   s.t.   Σ_j 2^(−l_j) ≤ 1,  D_w(Q(X̄β)) ≤ D,  l_j ∈ ℝ+.
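The reformulation rests on the pointwise identity w(x) f̄β(x) = f̄α(x), so the weighted distortion of the single equivalent source X̄β reproduces the original WSMSE. A minimal numerical sketch illustrates this (the triangular densities f1, f2 are hypothetical stand-ins for the true sources, and the quantizer uses midpoint reconstruction levels):

```python
# Hypothetical densities on U = [0, 1], chosen only for illustration.
alpha, beta = 0.6, 0.75
def f1(x): return 2.0 * x
def f2(x): return 2.0 * (1.0 - x)
def f_alpha(x): return alpha * f1(x) + (1.0 - alpha) * f2(x)
def f_beta(x):  return beta  * f1(x) + (1.0 - beta)  * f2(x)
def w(x): return f_alpha(x) / f_beta(x)   # weight function w = f_alpha / f_beta

def quant_mse(f, delta=0.125, n=2000):
    """Sum over cells of (x - c_j)^2 weighted by f, midpoint rule on [0, 1]."""
    dx, total = 1.0 / n, 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        c = (int(x / delta) + 0.5) * delta   # reconstruction level: cell midpoint
        total += (x - c) ** 2 * f(x) * dx
    return total

# WSMSE of the two sources vs. WMSE of the equivalent source Xbar_beta:
lhs = alpha * quant_mse(f1) + (1.0 - alpha) * quant_mse(f2)
rhs = quant_mse(lambda x: w(x) * f_beta(x))
print(lhs, rhs)  # equal, since w(x) * f_beta(x) = f_alpha(x)
```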
Then, we present an asymptotically lower bound of the optimal AoI Δ s * ( D ) , as follows:
Lemma 5. 
For any ϵ > 0, there exists a distortion threshold D′ > 0 such that, for all 0 < D < D′, the following inequality holds:
Δs*(D) + ϵ/2 > Δ̲s(Q_w, F_s),
where
Δ ̲ s ( Q w , F s ) : = ( a + b ) H ( Q w ( X ¯ β ) ) + c .
Proof. 
By introducing the pmf Pβ = { β p1 + (1−β) q1, …, β p_j + (1−β) q_j, … }, we obtain the following:
Δs ≥(a) a E_P1[L] + b E_P2[L] + c = a Σ_j p_j l_j + b Σ_j q_j l_j + c = (a+b) Σ_j [ a p_j/(a+b) + b q_j/(a+b) ] l_j + c = (a+b) Σ_j [ β p_j + (1−β) q_j ] l_j + c = (a+b) E_Pβ[L] + c =: Δ̲s,
where (a) follows from Lemma 2.
For any ϵ > 0, from the definition of the infimum, there exists a pair (Q, F) satisfying Δs*(D) + ϵ/4 > Δs(Q, F). In addition, from Lemma 1, we know that the w-quantizer is asymptotically optimal under the WMSE distortion. Therefore, there exists some D′ > 0 such that 0 < D < D′ implies | H(Q_w(X̄β)) − H(Q*(X̄β)) | < ϵ/( 4(a+b) ). Then
Δs*(D) + ϵ/2 > Δs(Q, F) + ϵ/4 ≥ (a+b) E_Pβ[L] + c + ϵ/4 ≥(a) (a+b) H(Q(X̄β)) + c + ϵ/4 ≥ (a+b) H(Q*(X̄β)) + c + ϵ/4 > (a+b) H(Q_w(X̄β)) + c =: Δ̲s(Q_w, F_s),
where (a) uses the fact that E_Pβ[L] ≥ H(Q(X̄β)) for a prefix-free code. This completes the proof. □
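The fact used in step (a), together with the Shannon encoder's guarantee E[L] < H + 1, can be illustrated on a toy pmf (the pmf below is hypothetical, and Shannon lengths are taken as the integers ⌈−log2 p_j⌉, which always satisfy the Kraft inequality):

```python
import math

# Hypothetical pmf standing in for P_beta = { beta*p_j + (1 - beta)*q_j }.
pmf = [0.4, 0.3, 0.15, 0.1, 0.05]

H = -sum(p * math.log2(p) for p in pmf)            # entropy (bits)
lengths = [math.ceil(-math.log2(p)) for p in pmf]  # Shannon code lengths
kraft = sum(2.0 ** -l for l in lengths)            # Kraft sum of the lengths
EL = sum(p * l for p, l in zip(pmf, lengths))      # mean codeword length

print(H, EL, kraft)  # H <= EL < H + 1 and kraft <= 1
```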
The AoI under the w-quantizer and the Shannon encoder provides an upper bound of the optimal AoI Δ s * ( D ) . Then
Δ̲s(Q_w, F_s) < Δs*(D) + ϵ/2 ≤ Δs(Q_w, F_s) + ϵ/2.
Step 2. We prove that the two bounds asymptotically coincide.
Lemma 6. 
If the w-quantizer and the Shannon encoder are given, then we have the following:
lim_{D→0} [ Δs(Q_w, F_s) − Δ̲s(Q_w, F_s) ] = 0.
Proof. 
See Appendix D. □
  • Step 3. Derive the asymptotically optimal pair and analyze its performance.
For any ϵ > 0, there exists some D″ > 0 such that 0 < D < D″ implies | Δs(Q_w, F_s) − Δ̲s(Q_w, F_s) | < ϵ/2. Thus, we have the following:
Δ̲s(Q_w, F_s) + ϵ/2 > Δs(Q_w, F_s) ≥ Δs(Q_w, F*).
Letting D_0 = min{ D′, D″ }, for any D satisfying 0 < D < D_0, we have the following:
Δs*(D) + ϵ > Δ̲s(Q_w, F_s) + ϵ/2 > Δs(Q_w, F*).
Since ϵ can be arbitrarily small, for sufficiently small D, the following results can be obtained:
Δs*(D) ≳ Δs(Q_w, F*).
Thus, the w-quantizer and the AoI-optimal encoder are asymptotically optimal.
Next, we prove (53) and (54). We have the following:
| Δs*(D) + ((a+b)/2) log2(12D) − (a+b) h(X̄β) − ((a+b)/2) E[log2 w(X̄β)] − c | ≤ | Δs(Q_w, F_s) − Δ̲s(Q_w, F_s) | + | Δ̲s(Q_w, F_s) + ((a+b)/2) log2(12D) − (a+b) h(X̄β) − ((a+b)/2) E[log2 w(X̄β)] − c |.
By using Lemma 1, we obtain the following:
lim_{D→0} [ Δ̲s(Q_w, F_s) + ((a+b)/2) log2(12D) ] = lim_{D→0} [ (a+b) H(Q_w(X̄β)) + c + ((a+b)/2) log2(12D) ] = (a+b) h(X̄β) + ((a+b)/2) E[log2 w(X̄β)] + c = (a+b) h(X̄β) + ((a+b)/2) ∫_U f̄β(x) log2( f̄α(x)/f̄β(x) ) dx + c = (a+b) h(X̄β) − ((a+b)/2) D(f̄β || f̄α) + c.
Then, (67), (68) and Lemma 6 imply the following:
lim_{D→0} [ Δs*(D) + ((a+b)/2) log2(12D) ] = (a+b) h(X̄β) − ((a+b)/2) D(f̄β || f̄α) + c.
From (69), the following result can be directly obtained:
lim_{D→0} Δs*(D) / log2 D = −(a+b)/2.
This completes the proof. □

5. The Impact of System Parameters on AoI Performance

This section investigates how key parameters, such as arrival rates and weights of both sources, affect AoI performance in the high-resolution regime.

5.1. The Impact of Arrival Rates on AoI Performance

We first investigate how to optimally allocate arrival rates for two sources under a fixed total arrival rate λ = λ 1 + λ 2 to minimize the system’s AoI performance. The main result is presented below.
Proposition 1. 
In the high-resolution regime, when the total arrival rate λ is fixed, there exists a unique optimal rate allocation strategy ( λ 1 * , λ λ 1 * ) that minimizes the system’s AoI performance for both the multi-quantizer compression scheme and the single-quantizer compression scheme.
Proof. 
Similar to Lemma 2, the lower bound of AoI Δ is expressed as follows:
Δ̲ = [ 1 + λ1/(λ−λ1) + λ1/λ ] E[L1] + [ 1 + (λ−λ1)/λ1 + (λ−λ1)/λ ] E[L2] + [ λ1² + λ1(λ−λ1) + (λ−λ1)² ] / ( λ1(λ−λ1)λ ).
Taking the derivative of (71) with respect to λ 1 yields the following:
dΔ̲/dλ1 = [ 1/λ + λ/(λ−λ1)² ] E[L1] − [ (λ² + λ1²)/(λ1²λ) ] E[L2] − (λ² − 2λλ1)/( λ1²(λ−λ1)² ) = { [ λ1²(λ−λ1)² + λ²λ1² ] E[L1] − (λ² + λ1²)(λ−λ1)² E[L2] − λ³ + 2λ²λ1 } / ( λλ1²(λ−λ1)² ).
Define
G(λ1) = [ λ1²(λ−λ1)² + λ²λ1² ] E[L1] − (λ² + λ1²)(λ−λ1)² E[L2] − λ³ + 2λ²λ1 = λ1²(λ−λ1)²( E[L1] − E[L2] ) + λ²( λ1² E[L1] − (λ−λ1)² E[L2] ) − λ³ + 2λ²λ1.
Taking the derivative of (73) with respect to λ 1 results in the following:
dG(λ1)/dλ1 = [ 2λ1(λ−λ1)² − 2(λ−λ1)λ1² ]( E[L1] − E[L2] ) + 2λ²[ λ1( E[L1] − E[L2] ) + λ E[L2] ] + 2λ².
For the multi-quantizer scheme, when distortion D → 0, we have
E[L1(Q_uni^(1), F_s)] − E[L2(Q_uni^(2), F_s)] → h(X1) − h(X2) − log2 √( ((1−α)a)/(αb) )
and
E[L1(Q_uni^(1), F_s)] → ∞,  E[L2(Q_uni^(2), F_s)] → ∞.
Thus, for sufficiently small D, we have
dG(λ1)/dλ1 > 0.
In the high-resolution regime, G ( λ 1 ) monotonically increases with λ 1 . Since G ( λ ) > 0 and G ( 0 ) < 0 , the equation G ( λ 1 ) = 0 has a unique solution, denoted by λ 1 * . Therefore, for the multi-quantizer scheme, there exists a unique optimal rate allocation strategy ( λ 1 * , λ λ 1 * ) that minimizes the system’s AoI performance.
For the single-quantizer scheme, when distortion D → 0, we have
E_P1[L(Q_w, F_s)] − E_P2[L(Q_w, F_s)] → −∫_U f1(x) log2( f̄β(x)/w(x) ) dx + ∫_U f2(x) log2( f̄β(x)/w(x) ) dx
and
E_P1[L(Q_w, F_s)] → ∞,  E_P2[L(Q_w, F_s)] → ∞.
For the single-quantizer scheme, by using a similar analysis, we also conclude that there exists a unique optimal rate allocation strategy that minimizes the system’s AoI performance in the high-resolution regime. This completes the proof. □
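In practice, the unique root λ1* of G can be located numerically once the mean codeword lengths are known. The following sketch uses bisection; the values of E[L1] and E[L2] below are assumed illustrative constants, not quantities derived from the paper's sources:

```python
# Assumed illustrative values: total rate and mean codeword lengths.
lam, EL1, EL2 = 4.0, 12.0, 10.0

def G(l1):
    # G(lambda_1) in the second form from the proof of Proposition 1.
    return (l1**2 * (lam - l1)**2 * (EL1 - EL2)
            + lam**2 * (l1**2 * EL1 - (lam - l1)**2 * EL2)
            - lam**3 + 2 * lam**2 * l1)

# G(0) < 0 < G(lam) and G is increasing in the high-resolution regime,
# so bisection on (0, lam) converges to the unique root lambda_1^*.
lo, hi = 1e-6, lam - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if G(mid) < 0.0:
        lo = mid
    else:
        hi = mid
lam1_star = 0.5 * (lo + hi)
print(lam1_star, lam - lam1_star)  # optimal rate allocation (lam1*, lam - lam1*)
```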

5.2. The Impact of Weights on AoI Performance

We now investigate the impact of both sources’ weights on AoI performance in the high-resolution regime for both compression schemes.
Proposition 2. 
For the multi-quantizer scheme, in the high-resolution regime, the optimal AoI performance is a concave function of weight α, uniquely achieving its maximum at α = β .
Proof. 
Define
F1(α) = −(a/2) log2( a/(α(a+b)) ) − (b/2) log2( b/((1−α)(a+b)) ) + a h(X1) + b h(X2) + c.
Taking the derivative of (80) with respect to α yields the following:
dF1(α)/dα = ( a/(2 ln 2) )( 1/α ) − ( b/(2 ln 2) )( 1/(1−α) ).
Setting dF1(α)/dα = 0 results in the following:
α = β .
Taking the second-order derivative of (80) with respect to α yields the following:
d²F1(α)/dα² = −( a/(2 ln 2) )( 1/α² ) − ( b/(2 ln 2) )( 1/(1−α)² ) < 0,
establishing concavity with the unique maximum achieved at α = β . This completes the proof. □
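A numerical check of Proposition 2 (a sketch, not part of the proof): using the values a = 19/4 and b = 19/12 from the numerical section and dropping the α-independent terms of F1, the maximizer on a grid sits at α = β:

```python
import math

a, b = 19 / 4, 19 / 12      # values from the numerical section
beta = a / (a + b)          # = 3/4

def F1(alpha):
    # Alpha-dependent part of F1(alpha); the constant terms are dropped.
    return (-(a / 2) * math.log2(a / (alpha * (a + b)))
            - (b / 2) * math.log2(b / ((1 - alpha) * (a + b))))

grid = [i / 1000 for i in range(1, 1000)]
alpha_max = max(grid, key=F1)
print(alpha_max, beta)  # argmax on the grid coincides with beta = 0.75
```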
Proposition 3. 
For the single-quantizer scheme, in the high-resolution regime, the optimal AoI performance is a concave function of weight α, uniquely achieving its maximum at α = β .
Proof. 
Define
F2(α) = (a+b) h(X̄β) − ((a+b)/2) D(f̄β || f̄α) + c = (a+b) h(X̄β) + ((a+b)/2) ∫_U f̄β(x) log2( f̄α(x)/f̄β(x) ) dx + c.
Taking the derivative of (84) with respect to α yields the following:
dF2(α)/dα = ( (a+b)/(2 ln 2) ) ∫_U f̄β(x) ( f1(x) − f2(x) )/f̄α(x) dx.
Setting dF2(α)/dα = 0 results in the following:
α = β .
Taking the second-order derivative of (84) with respect to α yields the following:
d²F2(α)/dα² = −( (a+b)/(2 ln 2) ) ∫_U f̄β(x) ( f1(x) − f2(x) )²/f̄α²(x) dx < 0,
establishing concavity with the unique maximum achieved at α = β . This completes the proof. □

6. Numerical Results

In this section, we present numerical results to evaluate the performance of the proposed solution. For sources 1 and 2, to satisfy the assumptions on the pdfs, we truncate the pdfs f1(x) ∼ N(3, 1) and f2(x) ∼ Exp(1) to the interval [0, 6], respectively. We use "Upper bound1" and "Lower bound1" to represent the upper bound and asymptotically lower bound of the optimal performance for the multi-quantizer compression scheme, respectively. Moreover, we use "Upper bound2" and "Lower bound2" to represent the upper bound and asymptotically lower bound of the optimal performance for the single-quantizer compression scheme, respectively. In addition, we use "Fixed-length1" and "Fixed-length2" to represent the fixed-length encoding for the multi-quantizer compression scheme and the single-quantizer compression scheme, respectively.
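A minimal sketch of this setup (truncating and renormalizing the two densities on [0, 6]; the midpoint-rule integration check is for illustration only):

```python
import math

def norm_cdf(x, mu=3.0, sigma=1.0):
    # Normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

Z1 = norm_cdf(6.0) - norm_cdf(0.0)   # N(3, 1) mass on [0, 6]
Z2 = 1.0 - math.exp(-6.0)            # Exp(1) mass on [0, 6]

def f1(x):
    # Truncated N(3, 1) density on [0, 6].
    return math.exp(-0.5 * (x - 3.0) ** 2) / (math.sqrt(2.0 * math.pi) * Z1)

def f2(x):
    # Truncated Exp(1) density on [0, 6].
    return math.exp(-x) / Z2

# Sanity check: both truncated densities integrate to 1 over [0, 6].
n = 100000
dx = 6.0 / n
I1 = sum(f1((i + 0.5) * dx) for i in range(n)) * dx
I2 = sum(f2((i + 0.5) * dx) for i in range(n)) * dx
print(I1, I2)
```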
Figure 7 illustrates the upper bound and the asymptotically lower bound of the optimal AoI versus log distortion for both compression schemes. The arrival rates for sources 1 and 2 are set to λ1 = 3 and λ2 = 1, respectively. The weight is set to α = 0.6. Through calculation, we derive the parameters a = 19/4 and b = 19/12. As the step size δ decreases from 1.5 to 0.1, the performance curve moves from the lower right to the upper left. For the multi-quantizer compression scheme, we implement uniform quantizers with cell sizes δ1 = √( a/(α(a+b)) ) δ for source 1 and δ2 = √( b/((1−α)(a+b)) ) δ for source 2, each paired with their corresponding Shannon codes. The resulting AoI—plotted as the black curve—serves as the upper bound of the optimal AoI, while the asymptotically lower bound of the optimal AoI is plotted as the blue curve.
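The stated parameters follow from the coefficient formulas of the AoI lower bound (see (71)) with λ1 = 3 and λ2 = 1, so λ = 4; a quick arithmetic check:

```python
lam1, lam2 = 3.0, 1.0
lam = lam1 + lam2

a = 1 + lam1 / lam2 + lam1 / lam   # coefficient of E[L1] in the lower bound
b = 1 + lam2 / lam1 + lam2 / lam   # coefficient of E[L2] in the lower bound
c = (lam1**2 + lam1 * lam2 + lam2**2) / (lam1 * lam2 * lam)  # constant term

print(a, b, c, (a + b) / 2)  # = 19/4, 19/12, 13/12, and slope parameter 19/6
```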
For the single-quantizer compression scheme, we employ the w-quantizer with its corresponding Shannon code. The resulting AoI—plotted as the green curve—is the upper bound of the optimal AoI, while the asymptotically lower bound is plotted as the red curve. We observe that the gaps between the upper and lower bounds for both schemes are remarkably small. As the quantization step size decreases, the gaps asymptotically approach zero, confirming Lemmas 4 and 6. In addition, the four curves exhibit asymptotically linear behavior with the same slope magnitude (a+b)/2 = 19/6, confirming Theorems 1 and 2. Furthermore, while maintaining the previously described quantizer structures, we evaluate the performance of fixed-length encoding for the multi-quantizer compression scheme (light blue curve) and the single-quantizer compression scheme (magenta curve). As we can observe, significant performance gaps exist between fixed-length encoding and the theoretical asymptotic optimum for both schemes. Additionally, the single-quantizer scheme (magenta curve) exhibits jitter at low bit rates. This jitter phenomenon stems from the design principle of the w-quantizer, which inserts about w(x_i) quantization points within each interval on the basis of the uniform quantizer with cell size δ. At low bit rates (where δ is large), w(x_i) cannot be treated as a constant within each interval, leading to poor approximation performance. As δ decreases, this jitter effect gradually diminishes and eventually disappears when the quantization becomes sufficiently fine.
Figure 8 illustrates the impact of source 1's arrival rate λ1 on the optimal AoI for both compression schemes in the high-resolution regime, with a fixed total arrival rate λ = 4, weight α = 0.6, and step size δ = 0.1. We vary λ1 from 0.5 to 3.5 in steps of 0.1 and plot the corresponding upper bounds and asymptotically lower bounds for both compression schemes. Notably, the gap between the upper and lower bound remains small for each scheme. In addition, the bounds for both compression schemes exhibit an initial monotonic decrease followed by a subsequent increase with respect to λ1, with a unique minimum, confirming Proposition 1. This behavior can be explained through information freshness dynamics. As the arrival rate of either source becomes very small, the corresponding source experiences severely diminished update frequency, resulting in substantial age accumulation that dominates the system's overall AoI performance. Specifically, when λ1 is too small, source 1's infrequent updates create an age bottleneck; conversely, when λ1 approaches λ (making λ2 small), source 2 becomes the freshness-limiting factor. This dependency creates the observed profile, with the optimal point occurring at a balanced rate allocation that avoids either extreme.
Figure 9 analyzes the effect of the weight α on the optimal AoI performance for both schemes in the high-resolution regime. The arrival rates for sources 1 and 2 are set to λ1 = 3 and λ2 = 1, respectively. The step size is set to δ = 0.1. We vary α from 0.1 to 0.9 in steps of 0.05 and plot the upper and the asymptotically lower bounds. Our results reveal that the bounds exhibit concave behavior with respect to α, with α = a/(a+b) = 0.75 as the unique maximum, confirming Propositions 2 and 3.
Numerical simulations demonstrate that the multi-quantizer compression scheme, with its additional design flexibility, achieves better performance compared to the single-quantizer compression scheme. The multi-quantizer compression scheme can customize quantization cells for each source, while the single-quantizer scheme treats multiple sources as a single equivalent source, limiting its adaptability.

7. Conclusions

In this work, we consider a compression problem in a multi-source system to characterize the age–distortion tradeoff. We propose a multi-quantizer compression scheme and a single-quantizer compression scheme. For each scheme, we derive the asymptotically optimal solution and prove that the optimal AoI is asymptotically linear with respect to the log WSMSE, with the same slope determined by the sources’ arrival rates. Numerical simulations demonstrate that the multi-quantizer compression scheme, with its additional design flexibility, achieves better performance compared to the single-quantizer compression scheme. Furthermore, the proof of our results is based on an approximation technique to bypass a complex codeword length assignment problem. This method not only streamlines the theoretical analysis but also exhibits strong extensibility to other similar problems.

Author Contributions

Conceptualization, J.L.; Methodology, J.L.; Software, J.L.; Investigation, J.L.; Writing—original draft, J.L.; Writing—review & editing, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62231022.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

During the preparation of this manuscript/study, the authors used GPT 3.5 for the purposes of minor text polishing. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Lemma 2

We analyze the following difference:
Δm − Δ̲m = 1/λ1 + 1/λ2 − (λ1² + λ1λ2 + λ2²)/( λ1λ2(λ1+λ2) ) + ( λ1 E[L1²] + λ2 E[L2²] )/( λ1 E[L1] + λ2 E[L2] + 1 ) − ( λ1/(λ1+λ2) ) E[L1] − ( λ2/(λ1+λ2) ) E[L2] = [ λ1²( E[L1²] − E²[L1] ) + λ2²( E[L2²] − E²[L2] ) + 1 ]/( ( λ1 E[L1] + λ2 E[L2] + 1 )( λ1+λ2 ) ) + [ λ1λ2 E[L1²] − 2 λ1λ2 E[L1] E[L2] + λ1λ2 E[L2²] ]/( ( λ1 E[L1] + λ2 E[L2] + 1 )( λ1+λ2 ) ) ≥(a) [ λ1²( E[L1²] − E²[L1] ) + λ2²( E[L2²] − E²[L2] ) + λ1λ2( E[L1] − E[L2] )² + 1 ]/( ( λ1 E[L1] + λ2 E[L2] + 1 )( λ1+λ2 ) ) ≥ 0,
where ( a ) uses Jensen’s inequality. This completes the proof.

Appendix B. Proof of Lemma 3

The distortion constraint specified in (8) gives rise to the following two cases:
Case 1: Equality Constraint. Consider the case where the given distortions D1 and D2 for sources 1 and 2 satisfy the equality α D1 + (1−α) D2 = D. From the definition of the infimum, we know that, for any ϵ > 0, there exists a triplet (Q1, Q2, F) satisfying Δm*(D1, D2) + ϵ/2 > Δm(Q1, Q2, F, D1, D2). Furthermore, due to the asymptotic optimality of the uniform quantizer for entropy-constrained quantization, we obtain that, for any ϵ > 0, there exist some D3 > 0 and D4 > 0, such that 0 < D1 < D3 and 0 < D2 < D4 implies
| H(Q_uni^(1)(X1), D1) − H(Q1*(X1), D1) | < ϵ/(4a),  | H(Q_uni^(2)(X2), D2) − H(Q2*(X2), D2) | < ϵ/(4b).
At the same time, there exists some D 5 > 0 and D 6 > 0 , such that 0 < D 1 < D 5 and 0 < D 2 < D 6 implies the following:
| H(Q_uni^(1)(X1), D1) + (1/2) log2(12 D1) − h(X1) | < ϵ/(4a),  | H(Q_uni^(2)(X2), D2) + (1/2) log2(12 D2) − h(X2) | < ϵ/(4b).
Letting D 0 = min { D 3 , D 4 , D 5 , D 6 } , for any D 1 , D 2 satisfying 0 < D 1 < D 0 and 0 < D 2 < D 0 , we obtain the following:
Δm(Q1, Q2, F, D1, D2) + ϵ ≥ Δm(Q1, Q2, F*, D1, D2) + ϵ ≥(a) a E[L1] + b E[L2] + c + ϵ ≥ a H(Q1(X1), D1) + b H(Q2(X2), D2) + c + ϵ ≥(b) a H(Q1*(X1), D1) + b H(Q2*(X2), D2) + c + ϵ > a H(Q_uni^(1)(X1), D1) + b H(Q_uni^(2)(X2), D2) + c + ϵ/2 > a( h(X1) − (1/2) log2(12 D1) ) + b( h(X2) − (1/2) log2(12 D2) ) + c = a h(X1) + b h(X2) + c − (a/2) log2(12 D1) − (b/2) log2(12 D2),
where ( a ) follows from Lemma 2, and ( b ) is due to the definition of the optimal quantizer. Let
F(D1) = −(a/2) log2(12 D1) − (b/2) log2( 12(D − α D1)/(1−α) ) + a h(X1) + b h(X2) + c.
Taking the derivative of F ( D 1 ) with respect to D 1 yields the following:
F′(D1) = −a/( (2 ln 2) D1 ) + α b/( (2 ln 2)(D − α D1) ).
Letting F′(D1) = 0 results in the following:
D1* = aD/( α(a+b) ).
Since F″(D1) > 0, the solution D1* indeed represents the global minimum. Consequently, the optimal distortion allocation strategy between the two sources is given by (D1*, D2*), where D1* = aD/( α(a+b) ) and D2* = bD/( (1−α)(a+b) ). Then
Δ m * ( D 1 , D 2 ) + ϵ > a H ( Q uni ( 1 ) ( X 1 ) , D 1 * ) + b H ( Q uni ( 2 ) ( X 2 ) , D 2 * ) + c .
Due to the arbitrary selection of distortions D1 and D2 satisfying the equality constraint α D1 + (1−α) D2 = D, Δm*(D) is asymptotically lower bounded by the following:
Δ m * ( D ) + ϵ > a H ( Q uni ( 1 ) ( X 1 ) , D 1 * ) + b H ( Q uni ( 2 ) ( X 2 ) , D 2 * ) + c = Δ ̲ m ( Q uni ( 1 ) , Q uni ( 2 ) , F s , D 1 * , D 2 * ) .
Case 2: Inequality Constraint. Consider the case where the given distortions D1 and D2 satisfy the inequality α D1 + (1−α) D2 < D. Then, there exists a slack variable γ > 0 such that α D1 + (1−α) D2 + γ = D. In the high-resolution regime, we obtain the following:
a H(Q_uni^(1)(X1), D1) + b H(Q_uni^(2)(X2), D2) ≥ a H(Q_uni^(1)(X1), D1) + b H(Q_uni^(2)(X2), D2 + γ/(1−α)).
Thus, for the case of α D1 + (1−α) D2 < D, there always exist distortions D1 and D2 + γ/(1−α) which achieve a smaller value. Therefore, the optimal AoI can only be achieved when the equality constraint is satisfied. This completes the proof.

Appendix C. Proof of Lemma 4

Define
Φm := [ λ1²( E[L1²] − E²[L1] ) + λ2²( E[L2²] − E²[L2] ) + λ1λ2( E[L1] − E[L2] )² + 1 ] / [ ( λ1 E[L1] + λ2 E[L2] + 1 )( λ1 + λ2 ) ]
and
Ψm := Δm − Δ̲m − Φm.
For sufficiently small distortion D, the optimal distortion allocation strategy ( D 1 * , D 2 * ) is as follows:
D1* = aD/( α(a+b) ),  D2* = bD/( (1−α)(a+b) ).
Since the uniform quantizer is asymptotically optimal, the cell sizes of the two quantizers are δ1 = √( 12aD/(α(a+b)) ) and δ2 = √( 12bD/((1−α)(a+b)) ) for sources 1 and 2, respectively. Then
δ1 = √( ((1−α)a)/(αb) ) δ2.
When D → 0, then δ1 → 0 and δ2 → 0. For sufficiently small δ1 and δ2, the pdfs f1(x) and f2(x) can be approximated as constants within each cell. Under the uniform quantizer and the Shannon encoder, the second moments of the codeword lengths for the two sources are given by the following:
E[L1²(Q_uni^(1), F_s)] = Σ_{j1} f1(x_{j1}) δ1 ( log2( f1(x_{j1}) δ1 ) )² = Σ_{j1} f1(x_{j1}) δ1 [ log2² f1(x_{j1}) + 2 log2 f1(x_{j1}) log2 δ1 + log2² δ1 ]
E[L2²(Q_uni^(2), F_s)] = Σ_{j2} f2(x_{j2}) δ2 ( log2( f2(x_{j2}) δ2 ) )² = Σ_{j2} f2(x_{j2}) δ2 [ log2² f2(x_{j2}) + 2 log2 f2(x_{j2}) log2 δ2 + log2² δ2 ].
The first moments of the codeword lengths for the two sources are as follows:
E[L1(Q_uni^(1), F_s)] = −Σ_{j1} f1(x_{j1}) δ1 log2( f1(x_{j1}) δ1 ) = −Σ_{j1} f1(x_{j1}) δ1 log2 f1(x_{j1}) − log2 δ1
E[L2(Q_uni^(2), F_s)] = −Σ_{j2} f2(x_{j2}) δ2 log2( f2(x_{j2}) δ2 ) = −Σ_{j2} f2(x_{j2}) δ2 log2 f2(x_{j2}) − log2 δ2.
Then, we have
E[L1²(Q_uni^(1), F_s)] − E²[L1(Q_uni^(1), F_s)] = Σ_{j1} f1(x_{j1}) δ1 log2² f1(x_{j1}) − ( Σ_{j1} f1(x_{j1}) δ1 log2 f1(x_{j1}) )²
and
E[L2²(Q_uni^(2), F_s)] − E²[L2(Q_uni^(2), F_s)] = Σ_{j2} f2(x_{j2}) δ2 log2² f2(x_{j2}) − ( Σ_{j2} f2(x_{j2}) δ2 log2 f2(x_{j2}) )².
The difference between the first moments of the codeword lengths for the two sources is expressed as follows:
E[L1(Q_uni^(1), F_s)] − E[L2(Q_uni^(2), F_s)] = −Σ_{j1} f1(x_{j1}) δ1 log2 f1(x_{j1}) − log2 δ1 + Σ_{j2} f2(x_{j2}) δ2 log2 f2(x_{j2}) + log2 δ2 = −Σ_{j1} f1(x_{j1}) δ1 log2 f1(x_{j1}) − log2 √( ((1−α)a)/(αb) ) + Σ_{j2} f2(x_{j2}) δ2 log2 f2(x_{j2}).
When δ1 → 0 and δ2 → 0, we have the following:
−Σ_{j1} f1(x_{j1}) δ1 log2 f1(x_{j1}) → h(X1)
Σ_{j1} f1(x_{j1}) δ1 log2² f1(x_{j1}) → ∫_U f1(x) log2² f1(x) dx
−Σ_{j2} f2(x_{j2}) δ2 log2 f2(x_{j2}) → h(X2)
Σ_{j2} f2(x_{j2}) δ2 log2² f2(x_{j2}) → ∫_U f2(x) log2² f2(x) dx.
Thus, we obtain the following:
E[L1(Q_uni^(1), F_s)] − E[L2(Q_uni^(2), F_s)] → h(X1) − h(X2) − log2 √( ((1−α)a)/(αb) ).
We can further derive the following:
lim_{D→0} Φm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) = lim_{D→0} [ λ1²( E[L1²(Q_uni^(1), F_s)] − E²[L1(Q_uni^(1), F_s)] ) + λ2²( E[L2²(Q_uni^(2), F_s)] − E²[L2(Q_uni^(2), F_s)] ) + λ1λ2( E[L1(Q_uni^(1), F_s)] − E[L2(Q_uni^(2), F_s)] )² + 1 ] / [ ( λ1 E[L1(Q_uni^(1), F_s)] + λ2 E[L2(Q_uni^(2), F_s)] + 1 )( λ1 + λ2 ) ] = 0
and
lim_{D→0} Ψm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) = lim_{D→0} λ1λ2 [ ( E[L1²(Q_uni^(1), F_s)] − E²[L1(Q_uni^(1), F_s)] ) + ( E[L2²(Q_uni^(2), F_s)] − E²[L2(Q_uni^(2), F_s)] ) ] / [ ( λ1 E[L1(Q_uni^(1), F_s)] + λ2 E[L2(Q_uni^(2), F_s)] + 1 )( λ1 + λ2 ) ] = 0.
Equations (A27) and (A28) yield the following:
lim_{D→0} [ Δm(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) − Δ̲m(Q_uni^(1), Q_uni^(2), F_s, D1*, D2*) ] = 0.
This completes the proof.

Appendix D. Proof of Lemma 6

For the symbol X̄β, according to the construction of the w-quantizer, the occurrence probability of the kth cell in the jth interval is given by β p_{jk} + (1−β) q_{jk}, with the weight w(x_{jk}). For a sufficiently small step size δ, both the pdf f̄β(x) and the weight function w(x) can be approximated as constants f̄β(x_j) and w(x_j) within each interval. We use the Shannon encoder, and the codeword length assigned to the kth cell in the jth interval is as follows:
l_{jk} = −log2( β p_{jk} + (1−β) q_{jk} ) = −log2( f̄β(x_{jk}) δ / w(x_{jk}) ).
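This length assignment can be illustrated with a toy example (a hypothetical uniform density f̄β on [0, 1] and weight w(x) = 1 + x, chosen only so that each interval splits into one or two cells); the integer Shannon lengths ⌈−log2 p⌉ remain prefix-feasible:

```python
import math

# Base grid of size delta; interval j is split into about w(x_j) cells,
# so each cell carries probability ~ f_beta(x_j) * delta / w(x_j).
# Here f_beta = 1 on [0, 1] (hypothetical), hence each cell has mass delta/k.
delta = 1.0 / 64
probs = []
x = 0.0
while x < 1.0 - 1e-12:
    k = max(1, round(1.0 + (x + delta / 2)))   # ~ w(x_j) cells in interval j
    probs.extend([delta / k] * k)
    x += delta

lengths = [math.ceil(-math.log2(p)) for p in probs]  # integer Shannon lengths
kraft = sum(2.0 ** -l for l in lengths)              # Kraft sum
H = -sum(p * math.log2(p) for p in probs)            # entropy of the cell pmf
EL = sum(p * l for p, l in zip(probs, lengths))      # mean codeword length

print(kraft <= 1.0, H <= EL < H + 1)
```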
The second moment of the codeword lengths for source 1 is as follows:
E_P1[L²(Q_w, F_s)] = Σ_j Σ_k ( f1(x_{jk}) δ / w(x_{jk}) ) log2²( f̄β(x_{jk}) δ / w(x_{jk}) ) = Σ_j f1(x_j) δ log2²( f̄β(x_j) δ / w(x_j) ) = Σ_j f1(x_j) δ log2²( f̄β(x_j)/w(x_j) ) + 2 Σ_j f1(x_j) δ log2( f̄β(x_j)/w(x_j) ) log2 δ + log2² δ.
The first moment of the codeword lengths for source 1 is as follows:
E_P1[L(Q_w, F_s)] = −Σ_j Σ_k ( f1(x_{jk}) δ / w(x_{jk}) ) log2( f̄β(x_{jk}) δ / w(x_{jk}) ) = −Σ_j f1(x_j) δ log2( f̄β(x_j) δ / w(x_j) ) = −Σ_j f1(x_j) δ log2( f̄β(x_j)/w(x_j) ) − log2 δ.
When δ → 0, we have the following:
Σ_j f1(x_j) δ log2( f̄β(x_j)/w(x_j) ) → ∫_U f1(x) log2( f̄β(x)/w(x) ) dx
Σ_j f1(x_j) δ log2²( f̄β(x_j)/w(x_j) ) → ∫_U f1(x) log2²( f̄β(x)/w(x) ) dx.
Then, we can obtain the following:
E_P1[L²(Q_w, F_s)] − E_P1²[L(Q_w, F_s)] = Σ_j f1(x_j) δ log2²( f̄β(x_j)/w(x_j) ) − ( Σ_j f1(x_j) δ log2( f̄β(x_j)/w(x_j) ) )².
When δ → 0, we derive the following:
E_P1[L²(Q_w, F_s)] − E_P1²[L(Q_w, F_s)] → ∫_U f1(x) log2²( f̄β(x)/w(x) ) dx − ( ∫_U f1(x) log2( f̄β(x)/w(x) ) dx )².
An analogous result holds for source 2. When δ → 0, we have
E_P2[L²(Q_w, F_s)] − E_P2²[L(Q_w, F_s)] → ∫_U f2(x) log2²( f̄β(x)/w(x) ) dx − ( ∫_U f2(x) log2( f̄β(x)/w(x) ) dx )².
The difference in the first moments of the codeword lengths between the two sources is expressed as follows:
E_P1[L(Q_w, F_s)] − E_P2[L(Q_w, F_s)] = −Σ_j f1(x_j) δ log2( f̄β(x_j) δ / w(x_j) ) + Σ_j f2(x_j) δ log2( f̄β(x_j) δ / w(x_j) ) = −Σ_j f1(x_j) δ log2( f̄β(x_j)/w(x_j) ) + Σ_j f2(x_j) δ log2( f̄β(x_j)/w(x_j) ).
When δ → 0, we have the following:
E_P1[L(Q_w, F_s)] − E_P2[L(Q_w, F_s)] → −∫_U f1(x) log2( f̄β(x)/w(x) ) dx + ∫_U f2(x) log2( f̄β(x)/w(x) ) dx.
Let
Φs := [ λ1²( E_P1[L²] − E_P1²[L] ) + λ2²( E_P2[L²] − E_P2²[L] ) + λ1λ2( E_P1[L] − E_P2[L] )² + 1 ] / [ ( λ1 E_P1[L] + λ2 E_P2[L] + 1 )( λ1 + λ2 ) ]
and
Ψs := Δs − Δ̲s − Φs.
When D → 0, we have δ → 0. Then, we obtain the following:
lim_{D→0} Φs(Q_w, F_s) = lim_{D→0} [ λ1²( E_P1[L²(Q_w, F_s)] − E_P1²[L(Q_w, F_s)] ) + λ2²( E_P2[L²(Q_w, F_s)] − E_P2²[L(Q_w, F_s)] ) + λ1λ2( E_P1[L(Q_w, F_s)] − E_P2[L(Q_w, F_s)] )² + 1 ] / [ ( λ1 E_P1[L(Q_w, F_s)] + λ2 E_P2[L(Q_w, F_s)] + 1 )( λ1 + λ2 ) ] = 0
and
lim_{D→0} Ψs(Q_w, F_s) = lim_{D→0} λ1λ2 [ ( E_P1[L²(Q_w, F_s)] − E_P1²[L(Q_w, F_s)] ) + ( E_P2[L²(Q_w, F_s)] − E_P2²[L(Q_w, F_s)] ) ] / [ ( λ1 E_P1[L(Q_w, F_s)] + λ2 E_P2[L(Q_w, F_s)] + 1 )( λ1 + λ2 ) ] = 0.
Thus, we have
lim_{D→0} [ Δs(Q_w, F_s) − Δ̲s(Q_w, F_s) ] = 0.
This completes the proof.

Figure 1. System model.
Figure 2. Age–distortion tradeoff schematic diagram in the multi-source system. Blocks connected by green (red) arrows represent opposite (same) trends of quantity growth or decrease.
Figure 3. Multi-quantizer compression scheme.
Figure 4. Single-quantizer compression scheme.
Figure 5. Proof flowchart of Theorem 1.
Figure 6. Proof flowchart of Theorem 2.
Figure 7. Performance comparison between the upper bound, the asymptotic lower bound, and the corresponding fixed-length encoding scheme of optimal AoI versus log distortion for both schemes.
Figure 8. Performance comparison between the upper bound and the asymptotic lower bound with respect to source 1's arrival rate $\lambda_1$ for both schemes.
Figure 9. Performance comparison between the upper bound and the asymptotic lower bound of optimal AoI with respect to weight $\alpha$ for both schemes.