Article

Compressed Adaptive-Sampling-Rate Image Sensing Based on Overcomplete Dictionary

School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
*
Author to whom correspondence should be addressed.
Entropy 2025, 27(7), 709; https://doi.org/10.3390/e27070709
Submission received: 11 May 2025 / Revised: 21 June 2025 / Accepted: 24 June 2025 / Published: 30 June 2025
(This article belongs to the Section Signal and Data Analysis)

Abstract

In this paper, a compressed adaptive image-sensing method based on an overcomplete ridgelet dictionary is proposed. Some low-complexity operations are designed to distinguish between smooth blocks and texture blocks in the compressed domain, and adaptive sampling is performed by assigning different sampling rates to different types of blocks. The efficient, sparse representation of images is achieved by using an overcomplete ridgelet dictionary; at the same time, a reasonable dictionary-partitioning method is designed, which effectively reduces the number of candidate dictionary atoms and greatly improves the speed of classification. Unlike existing methods, the proposed method does not rely on the original signal, and computation is simple, making it particularly suitable for scenarios where a device’s computing power is limited. At the same time, the proposed method can accurately identify smooth image blocks and more reasonably allocate sampling rates to obtain a reconstructed image with better quality. The experimental results show that our method’s image reconstruction quality is superior to that of existing ARCS methods and still maintains low computational complexity.

1. Introduction

1.1. Motivation

In recent years, wireless image sensors have shown potential for application in monitoring, tracking, and other fields owing to their low cost and convenience. However, in these resource-limited sensors, traditional image sampling and encoding methods are too complex, so the compressed sensing (CS) method is considered more suitable. First, the CS sampling process is a linear dimension-reducing process; the sampling and compression operations are completed simultaneously, and the calculations are simple. Second, CS can sample at a rate much lower than the Nyquist sampling rate by utilizing the sparsity of the signal, which further reduces the performance requirements for sampling equipment. Finally, the complex calculation is moved to the reconstruction end, which is also consistent with the structural characteristics of a distributed system. These advantages have attracted the attention of researchers, and a series of methods based on CS have been proposed [1,2].
However, there are still some problems that require further research when applying the CS method to distributed systems, one of which is how to implement an adaptive-rate compressed sensing (ARCS) method [3]. The CS method reduces the dimensions of the original signal in the sampling process and obtains the CS measurement directly. In terms of reducing operational complexity, this is an advantage; however, it also means that the original signal is hidden in the CS measurement and is unknown to the sampling device, which gives rise to new problems, including the ARCS problem.
Different parts of a natural image have different complexities. Traditional image encoding techniques divide an image into multiple blocks and allocate different encoding rates based on the complexity of the blocks. However, in CS, since the original signal is unknown, it is difficult to estimate the complexity of a block and assign an appropriate sampling rate for it. This is the ARCS problem.

1.2. Related Works

Many researchers have conducted extensive studies to achieve ARCS. In [4,5,6], the authors assumed that the original signal was fully measured; then, an appropriate sampling rate was set, and the reconstruction quality was improved. However, full measurement increases the sampling pressure on the sensor, resulting in increased costs and energy consumption, which does not fully utilize the advantages of CS. Yu et al. [7] proposed a solution that obtains low-resolution images through an auxiliary sensor and extracts salient features from the low-resolution image. However, although this method reduces the sampling pressure to some extent, it does not fundamentally solve the problem.
To achieve ARCS without relying on the original image, Li et al. [8] proposed a method based on sensed entropy. The sampling rate of each image block is allocated based on the sensed entropy, which can be obtained from the CS measurements. A saliency-based [9] ARCS method is proposed in [10]. In this method, saliency is used to characterize the differences between blocks, and these differences are used as the basis for sampling-rate allocation. However, misclassification sometimes occurs with the above two methods, leading to block effects in the reconstructed images. To address the block effect problem, a method based on empirical mode decomposition (EMD) is proposed in [11]. EMD is used to obtain an energy distribution map of the high-frequency components, and adaptive sampling rate allocation is realized based on this map. The method improves reconstruction quality; however, the computational complexity of the EMD process is relatively high. In [12], an ARCS method based on statistical characteristic estimation and signal prior probability is proposed. This method is computationally simple but requires knowledge of the prior probability distribution. In [13,14], the authors proposed two adaptive-rate video CS methods based on motion estimation. Adaptive rate allocation was achieved by estimating the motion of the foreground objects in the video. However, these methods assume a strong correlation between blocks and are therefore not applicable to image signals.
Although the previous studies have proposed diverse adaptive sampling methods, these approaches still suffer from several limitations, including dependence on the original image, high computational complexity, and low reconstruction quality. To provide a comprehensive understanding, Table 1 compares the strengths and weaknesses of these methods.

1.3. Proposed Work

To address the limitations of the previous methods and improve the performance of distributed systems based on a CS scheme, a new adaptive-rate block compressed sensing method based on an overcomplete ridgelet dictionary (ABCS-RDET) is proposed. This method pays special attention to the following aspects during design:
First, the proposed method depends only on the compressed domain signal, and the original signal can be unknown.
Second, it uses an overcomplete ridgelet dictionary to sparsely represent the signal. Due to its ability to accurately capture the ridge features in natural images, especially the edge parts, the overcomplete ridgelet dictionary has good sparsity performance on natural images. In this method, the characteristics of the dictionary are utilized to achieve an efficient sparse representation of the signals.
Third, when using an overcomplete dictionary, the excessive number of atoms in the dictionary can have a negative impact on the speed of atomic matching. A dictionary partitioning method was designed to solve this problem. By dividing the overcomplete dictionary into smooth parts and texture parts, the number of candidate atoms in the matching process is reduced, and the matching speed is improved. Thus, atomic matching calculation can be used on a resource-limited device, and the matching results can be used to distinguish between smooth blocks and texture blocks.
Finally, based on the proportion of smooth blocks in an image, the overall complexity of the image can be estimated. Depending on this complexity estimate, different rate allocation strategies can be used to allocate appropriate sampling rates to each block, and adaptive rate sampling can be achieved.
Based on the above ideas, the following features are ultimately achieved: (a) the method does not rely on the original signal, (b) its computational complexity is very low, and (c) it improves the quality of the reconstructed image. Together, these features make the method well suited to practical applications.
Finally, from a technical perspective, the two specific contributions of this method are as follows:
  • An adaptive residual energy hard-threshold classification method based on an overcomplete dictionary is proposed. This method is independent of the original signal as well as its prior statistics and has a simple implementation process and a fast calculation speed.
  • We developed a dictionary partition method. By partitioning the dictionary, the method significantly boosts both the speed and the accuracy of image block classification, thereby enhancing the image reconstruction quality.
The rest of this article is organized as follows: Section 2 briefly introduces the methodological background; Section 3 provides a specific introduction to the proposed method; Section 4 provides the experimental results and the corresponding analysis; Section 5 is the summary.

2. Methodological Background

2.1. Block Compressed Sensing

The sampling process of CS can be expressed as
$$y = \Phi x, \quad (1)$$
where $x \in \mathbb{R}^n$ is the original signal, $\Phi \in \mathbb{R}^{m \times n}$ ($m < n$) is the measurement matrix, and $y \in \mathbb{R}^m$ is the CS measurement. For signal $x$, the CS method requires it to be "sparse"; that is, most of its elements are zero, and only a small number of elements have significant non-zero values. If the number of non-zero elements is $k$, signal $x$ is said to be "$k$-sparse". Natural image signals can be represented as sparse coefficients by using certain sparse dictionaries, so natural images can also be sampled and reconstructed using CS.
However, since the total number of pixels $n$ is very large for a whole image, the memory consumption of the corresponding $\Phi$ becomes unacceptable. To solve this problem, block compressed sensing (BCS) [15] was proposed. Assuming that the image collected by the current sensor is $X$, $X$ can be decomposed into $I$ non-overlapping sub-blocks, each of size $B \times B$. Each sub-block can then be vectorized as $x_i \in \mathbb{R}^n$, where $n = B^2$ and $i \in \{1, 2, 3, \dots, I\}$ is the image block index. Each $x_i$ can be separately sampled using a much smaller matrix $\Phi_b \in \mathbb{R}^{m \times n}$, and the block measurement $y_i$ can be obtained from
$$y_i = \Phi_b x_i = \Phi_b \Psi a_i, \quad (2)$$
where $\Psi$ is the sparse basis, and $a_i$ is the sparse coefficient of $x_i$.
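As a concrete illustration, the blockwise sampling described above can be sketched in NumPy. This is our own toy sketch: the helper name and the synthetic image are ours, and a Gaussian measurement matrix is assumed, as in the paper.

```python
import numpy as np

def block_compressed_sensing(X, B, m, seed=0):
    """Split image X into non-overlapping B x B blocks, vectorize each
    block to length n = B^2, and sample every block with one shared
    Gaussian measurement matrix Phi_b of shape (m, n)."""
    rng = np.random.default_rng(seed)
    H, W = X.shape
    n = B * B
    Phi_b = rng.standard_normal((m, n)) / np.sqrt(m)
    blocks = [X[r:r + B, c:c + B].reshape(n)
              for r in range(0, H, B) for c in range(0, W, B)]
    Y = np.stack([Phi_b @ x for x in blocks], axis=1)  # one column per block
    return Phi_b, Y

X = np.arange(32 * 32, dtype=float).reshape(32, 32)   # toy 32 x 32 "image"
Phi_b, Y = block_compressed_sensing(X, B=16, m=64)
print(Y.shape)  # (64, 4): m measurements for each of the I = 4 blocks
```

Each block shares the same small $\Phi_b$, which is what keeps the memory cost acceptable compared with one full-image $\Phi$.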

2.2. Compressed Sensing Based on Overcomplete Dictionary

In the CS method, the sparse representation of signals is the most important prerequisite, and signals should be represented as sparsely as possible. For the effective representation of images, the images must be localized as well as oriented and have a suitable bandpass [16,17]. The overcomplete dictionary is one of the representations that has these properties [18]. Compared with orthogonal sparse bases, overcomplete dictionaries tend to have stronger sparse representation capabilities [19]. Therefore, the CS method based on an overcomplete dictionary has been of interest to researchers.
An overcomplete dictionary can be represented as a two-dimensional matrix $D \in \mathbb{R}^{n \times C}$, where each column vector is called a dictionary atom and is vectorized from a two-dimensional dictionary atom matrix. The number of atoms $C$ is often much greater than $n$. With an overcomplete dictionary, the signal can be expressed as
$$x_i = D a_i = \sum_{j=1}^{k} a_j d_j, \quad (3)$$
where $a_i$ is a sparse vector, $k$ is the number of non-zero elements in $a_i$, $a_j$ is the value of the $j$-th non-zero element, and $d_j$ is the atom in $D$ corresponding to the position of $a_j$. Now signal $x_i$ is represented as a $k$-sparse signal under dictionary $D$, and
$$y_i = \Phi_b x_i = \Phi_b D a_i = A_b a_i, \quad (4)$$
where $A_b = \Phi_b D$ is a compressed dictionary.
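The fact that measuring $x_i$ is equivalent to applying the compressed dictionary $A_b = \Phi_b D$ directly to the sparse vector $a_i$ can be checked numerically. This toy sketch is ours; the dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, C, m = 64, 256, 24                      # C >> n: overcomplete dictionary
D = rng.standard_normal((n, C))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
Phi_b = rng.standard_normal((m, n)) / np.sqrt(m)
A_b = Phi_b @ D                            # compressed dictionary

a = np.zeros(C)                            # 3-sparse coefficient vector
a[[5, 40, 200]] = [1.0, -0.5, 2.0]
x = D @ a                                  # signal represented under D
y = Phi_b @ x                              # CS measurement of the signal
print(np.allclose(y, A_b @ a))             # True: measuring x equals A_b a
```

This identity is what allows the reconstruction end to work entirely with $A_b$ and $y_i$, without ever seeing $x_i$.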
If $x_i$ is to be accurately reconstructed from $y_i$, the measurement matrix $\Phi_b$ must satisfy the D-RIP [20,21] condition,
$$(1 - \delta_k)\|v\|_2^2 \le \|\Phi_b v\|_2^2 \le (1 + \delta_k)\|v\|_2^2, \quad (5)$$
where $\delta_k \in (0, 1)$ is a constant, and $v$ is any vector concentrated in the subspace formed by all $k$-column subsets of $D$.
E. J. Candès [21] proved that both the Gaussian random matrix [22] and the Bernoulli random matrix [23] can satisfy the requirements of D-RIP. In this paper, a random Gaussian matrix is used as the measurement matrix. In [21], it is also proven that the measurement number m and the sparsity k have the following relationship:
$$m \gtrsim k \log(C/k). \quad (6)$$
Equation (6) shows that the sparsity k determines the number of measurements m . Therefore, setting the measurement number according to the sparseness can effectively save resources.
An approximation $\tilde{a}_i$ of the solution $a_i$ can be obtained by solving the following model [20,21]:
$$\tilde{a}_i = \arg\min_{\tilde{a}_i} \|\tilde{a}_i\|_1 \ \ \text{such that} \ \ \|A \tilde{a}_i - y_i\|_2 \le \epsilon, \quad (7)$$
where $\epsilon$ is the upper bound of the signal noise. Many recovery algorithms can be used to solve (7); popular choices include BP [24] and GPSR [25].

2.3. Discrete Overcomplete Ridgelet Dictionary

An overcomplete ridgelet dictionary is a commonly used overcomplete dictionary. It can efficiently capture the texture structure and edge information of an image and has high-quality sparse representation ability [26]. The specific discrete ridgelet dictionary [27] described below is used in the proposed method.
The basis in the dictionary is two-dimensional, denoted $d_c(z) \in \mathbb{R}^{B \times B}$, and is generated by the parameter space $\Gamma_\gamma = \{(\theta, s, t) \mid \theta \in [0, \pi), s \in [0, 3], t \in \Gamma_t\}$, where $\theta$ is the direction parameter, and $s$ is the scale parameter. The value range of parameter $t$ depends on the direction parameter $\theta$:
$$\Gamma_t = \begin{cases} [0,\ B(\sin\theta + \cos\theta)], & \theta \in [0, \pi/2] \\ [B\cos\theta,\ B\sin\theta], & \text{otherwise}. \end{cases} \quad (8)$$
$d_c(z)$ is generated from the above parameters as
$$d_c(z) = \frac{1}{w}\left( e^{-\frac{(s_c r_c z^T - t_c)^2}{2}} - \frac{1}{2}\, e^{-\frac{(s_c r_c z^T - t_c)^2}{8}} \right), \quad (9)$$
where $z = (z_x, z_y) \in \{0, 1, 2, \dots, B-1\}^2$ is an atomic position variable, $r_c = (\cos\theta_c, \sin\theta_c)$ is a rotation vector determined by parameter $\theta$, and $w$ is the weight coefficient used to normalize the atoms.
To obtain dictionary atoms, the above continuous function should be discretized. In this method, the direction parameter $\theta$ is sampled at intervals of $\pi/36$; the scale parameter takes the values $s = 0.2 k_s$, $k_s = 0, 1, 2, \dots, 15$; and the sampling interval of the shift parameter $t$ is taken as $2 - 0.1 k_s$. It is also necessary to set an atom energy threshold $\tau_D$ to filter out low-energy atoms; it is set to $\tau_D = 2.5$ here, and the resulting ridgelet dictionary has a total atom number $C = 2806$.
To sparsely represent the vectorized signal, the two-dimensional atom $d_c(z)$ is vectorized as $d_c \in \mathbb{R}^n$, $c = 1, 2, 3, \dots, C$, which finally generates the dictionary $D = [d_1, d_2, d_3, \dots, d_C]$.
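Under our reading of the atom formula above (a difference of two Gaussian ridge profiles evaluated along the projection $s_c r_c z^T$, with $w$ taken as the $\ell_2$ norm of the unnormalized atom), one discrete atom can be generated as follows. The function name is ours:

```python
import numpy as np

def ridgelet_atom(B, theta, s, t):
    """Generate one B x B ridgelet atom: a difference of two Gaussian
    ridges perpendicular to direction theta, normalized to unit energy."""
    zx, zy = np.meshgrid(np.arange(B), np.arange(B), indexing="ij")
    proj = s * (np.cos(theta) * zx + np.sin(theta) * zy)   # s * (r_c . z)
    g = np.exp(-(proj - t) ** 2 / 2) - 0.5 * np.exp(-(proj - t) ** 2 / 8)
    return g / np.linalg.norm(g)                           # w normalizes energy

atom = ridgelet_atom(B=16, theta=np.pi / 4, s=0.4, t=4.0)
print(atom.shape, round(float(np.linalg.norm(atom)), 6))  # (16, 16) 1.0
```

A small scale $s$ yields a wide ridge (useful for smooth blocks), while a large $s$ yields a narrow ridge (useful for edges and texture).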

3. Adaptive Rate Method

3.1. Identification of Smooth Blocks and Texture Blocks

Equation (3) shows that signal $x_i$ can be represented by dictionary $D$ and sparse coefficients $a_i$, and $a_i$ can be obtained by solving the following model [19]:
$$a_i = \arg\min_{a_i} \|a_i\|_1 \ \ \text{such that} \ \ x_i = D a_i. \quad (10)$$
A suitable dictionary $D$ can often make most elements in $a_i$ approach 0, while only $k$ elements have significantly larger values. In this case, $a_i$ is a compressible signal, and the original signal $x_i$ can be represented by the $k$ large values in $a_i$ [28]:
$$x_i = \sum_{j=1}^{k} a_j d_j + \varepsilon_i, \quad (11)$$
where $\varepsilon_i$ is the residual.
The similarity between a dictionary atom and an image block is a crucial factor in atom matching: the higher the similarity, the more likely the atom is to be selected. Smooth image blocks are more similar to wide-ridge atoms, so wide-ridge atoms are more likely to be selected to match smooth blocks. Further analysis shows that for a smooth image block, since the energy of the high-frequency components is very small, a linear combination of $k$ wide-ridge atoms can limit the error energy $\|\varepsilon_i\|_2^2$ to a very small range, whereas for a texture block, more atoms with narrower ridges are needed to limit $\|\varepsilon_i\|_2^2$ to a small range. An example is shown in Figure 1.
Since the approximate representation of a smooth block requires only $k$ wide-ridge atoms, the ridgelet dictionary $D \in \mathbb{R}^{n \times C}$ can be decomposed into a smooth dictionary and a texture dictionary according to the ridge scale. The smooth dictionary is denoted $D_s \in \mathbb{R}^{n \times C_1}$ and consists of 319 wide-ridge atoms whose scale parameter $s \in [0, 0.4]$,
$$D_s \triangleq \{d_{s,c_1} \mid 0 \le s \le 0.4,\ d_{s,c_1} \in D\}. \quad (12)$$
At the same time, the complement of the smooth dictionary with respect to $D$ is called the texture dictionary, denoted $D_t \in \mathbb{R}^{n \times C_2}$, which consists of 2487 fine-ridge atoms whose scale parameter $s \in (0.4, 3]$,
$$D_t \triangleq \{d_{t,c_2} \mid 0.4 < s \le 3,\ d_{t,c_2} \in D\}. \quad (13)$$
$C_1$ and $C_2$ are their atom numbers, with $C_1 + C_2 = C$. Examples of atoms are shown in Figure 2.
By decomposing the dictionary $D$ into $D_s$ and $D_t$, smooth blocks and texture blocks can be quickly distinguished: for an image block $x_i$ and a threshold $\tau$, if there are $k$ atoms in $D_s$ that keep $\|\varepsilon_i\|_2^2 \in [0, \tau]$, $x_i$ is considered a smooth block; otherwise, it is a texture block. Since $D_s$ is much smaller than $D$, the distinguishing process using only $D_s$ is much faster.
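The smooth/texture split described above amounts to an index selection over the atoms' scale parameters. A minimal sketch follows; the helper name and the toy dictionary are ours, while the 0.4 split point follows the paper:

```python
import numpy as np

def partition_dictionary(D, scales, s_split=0.4):
    """Split dictionary D (one atom per column) into a smooth part
    (ridge scale s <= s_split) and a texture part (s > s_split)."""
    smooth = np.asarray(scales) <= s_split
    return D[:, smooth], D[:, ~smooth]

# toy dictionary: 5 atoms whose scale parameters are listed below
D = np.random.default_rng(0).standard_normal((8, 5))
D_s, D_t = partition_dictionary(D, scales=[0.0, 0.2, 0.4, 0.6, 3.0])
print(D_s.shape[1], D_t.shape[1])  # 3 smooth atoms, 2 texture atoms
```

Because the scale parameter is fixed at dictionary-generation time, this split is computed once offline and costs nothing at sampling time.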

3.2. Block Classification in Compressed Domain

The block identification method proposed in Section 3.1 can easily be applied in the compressed domain. Using (11), $y_i$ can be expressed as
$$y_i = \Phi_b x_i = \Phi_b \left( \sum_{j=1}^{k} a_j d_j + \varepsilon_i \right). \quad (14)$$
Denoting $\omega_i = \Phi_b \varepsilon_i$, $D_{sub} = [d_1, d_2, \dots, d_k]$, and $\alpha_i = [a_1, a_2, \dots, a_k]^T$, (14) can be rewritten as
$$\omega_i = y_i - A_{sub}\, \alpha_i, \quad (15)$$
where $A_{sub} = \Phi_b D_{sub}$, and $\omega_i$ is the residual.
The compressed dictionary $A_b = \Phi_b D$ can also be divided into two parts: the smooth compressed dictionary $A_s$ and the texture compressed dictionary $A_t$,
$$A_s = \Phi_b D_s = [a_{s,1}, a_{s,2}, \dots, a_{s,C_1}], \quad (16)$$
$$A_t = \Phi_b D_t = [a_{t,1}, a_{t,2}, \dots, a_{t,C_2}]. \quad (17)$$
If there are $k$ atoms in $A_s$ that keep $e_i = \|\omega_i\|_2^2$ within a certain range, then the corresponding $x_i$ can be considered a smooth block.
Given $y_i$, $A_s$, and the allowed atom number $k = K_s$, the orthogonal matching pursuit (OMP) algorithm [29] can be used to determine $A_{sub}$ and $\alpha_i$.
First, the residual is initialized as $\omega_i^0 = y_i$, the selected atomic index set is initialized as $\Lambda_i^0 = \emptyset$, and the selected atomic set is initialized as $\tilde{A}_i^0 = \emptyset$. In each iteration, the algorithm matches and selects the atom from $A_s$ that is most similar to the current residual. That is, in the $t$-th iteration, it finds the index $\tilde{c}_{1,i}^t$ corresponding to
$$\tilde{c}_{1,i}^t = \arg\max_{c_1 = 1, \dots, C_1} \left| \langle \omega_i^{t-1}, a_{s,c_1} \rangle \right|, \quad (18)$$
where $|\langle \cdot, \cdot \rangle|$ represents the absolute value of the inner product, and $a_{s,c_1}$ is an atom of $A_s$. Next, $\Lambda_i^t$ and $\tilde{A}_i^t$ can be updated:
$$\Lambda_i^t = \Lambda_i^{t-1} \cup \{\tilde{c}_{1,i}^t\}, \quad (19)$$
$$\tilde{A}_i^t = [\tilde{A}_i^{t-1},\ a_{s,\tilde{c}_{1,i}^t}]. \quad (20)$$
Then, the linear representation coefficient $\tilde{\alpha}_i^t$ can be solved using the least-squares method:
$$\tilde{\alpha}_i^t = \left( (\tilde{A}_i^t)^T \tilde{A}_i^t \right)^{-1} (\tilde{A}_i^t)^T y_i. \quad (21)$$
Finally, the residual $\omega_i^t$ can be updated:
$$\omega_i^t = y_i - \tilde{A}_i^t \tilde{\alpha}_i^t. \quad (22)$$
When $t = K_s$, the iteration stops, and $e_i = \|\omega_i^{K_s}\|_2^2$ is output.
The algorithm details are shown in Algorithm 1 and Figure 3.
Algorithm 1 Classification algorithm
Input:
  $A_s$: smooth compressed dictionary
  $y_i$: measurement of image block $x_i$
  $K_s$: number of iterations of the algorithm
Initialization:
  $\omega_i^0 = y_i$, $\tilde{A}_i^0 = \emptyset$, $\Lambda_i^0 = \emptyset$, $t = 1$
while $t \le K_s$ do
  $\tilde{c}_{1,i}^t = \arg\max_{c_1 = 1, \dots, C_1} |\langle \omega_i^{t-1}, a_{s,c_1} \rangle|$;
  $\Lambda_i^t = \Lambda_i^{t-1} \cup \{\tilde{c}_{1,i}^t\}$; $\tilde{A}_i^t = [\tilde{A}_i^{t-1},\ a_{s,\tilde{c}_{1,i}^t}]$;
  $\tilde{\alpha}_i^t = ((\tilde{A}_i^t)^T \tilde{A}_i^t)^{-1} (\tilde{A}_i^t)^T y_i$;
  $\omega_i^t = y_i - \tilde{A}_i^t \tilde{\alpha}_i^t$; $t = t + 1$
end
$e_i = \|\omega_i^{K_s}\|_2^2$
Output:
  $e_i$: residual energy of the $i$-th image block
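A compact NumPy sketch of Algorithm 1 follows. The function name is ours, `np.linalg.lstsq` stands in for the explicit normal-equation solve, and the toy dictionary uses orthonormal columns so the expected behavior is unambiguous:

```python
import numpy as np

def residual_energy(y_i, A_s, K_s):
    """Run K_s OMP iterations against the smooth compressed dictionary
    A_s and return the residual energy e_i = ||omega_i||_2^2.
    A small e_i suggests the corresponding block is smooth."""
    omega = y_i.copy()
    idx = []
    for _ in range(K_s):
        # select the atom most correlated with the current residual
        c = int(np.argmax(np.abs(A_s.T @ omega)))
        if c not in idx:
            idx.append(c)
        A_sel = A_s[:, idx]
        # least-squares coefficients over the selected atoms
        alpha, *_ = np.linalg.lstsq(A_sel, y_i, rcond=None)
        omega = y_i - A_sel @ alpha        # update the residual
    return float(omega @ omega)

rng = np.random.default_rng(1)
# toy smooth compressed dictionary with orthonormal atoms
A_s = np.linalg.qr(rng.standard_normal((16, 10)))[0]
y_smooth = 2.0 * A_s[:, 3]                 # lies exactly on one smooth atom
print(residual_energy(y_smooth, A_s, K_s=1) < 1e-10)  # True: one atom suffices
```

A measurement that lies on a single smooth atom is driven to near-zero residual energy after one iteration, which is exactly the signature used to flag smooth blocks.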
The CS measurements of the entire image are stored in a matrix $Y = [y_1, y_2, \dots, y_I]$. Using the above method, the residual energy vector $e = [e_1, e_2, \dots, e_I]$ corresponding to all blocks can be obtained. Then $e$ is normalized as $\bar{e}$ using the Min–Max method [30],
$$\bar{e} = \frac{e - \min(e)}{\max(e) - \min(e)}, \quad (23)$$
where $\min(e)$ and $\max(e)$ represent the minimum and maximum element values of vector $e$, respectively. Finally, a fixed threshold $\tau_s$ can be set to classify the smooth and texture blocks.
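A minimal sketch of this normalization-and-threshold step; the helper name and toy values are ours:

```python
import numpy as np

def classify_blocks(e, tau_s):
    """Min-Max normalize the residual-energy vector e, then mark a block
    as smooth when its normalized energy falls below the threshold tau_s."""
    e = np.asarray(e, dtype=float)
    e_bar = (e - e.min()) / (e.max() - e.min())
    return e_bar, e_bar < tau_s

e_bar, is_smooth = classify_blocks([0.1, 0.9, 0.2, 0.5], tau_s=0.3)
print(is_smooth.tolist())  # [True, False, True, False]
```

Normalizing per image makes the fixed threshold $\tau_s$ comparable across images with very different overall energy levels.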
Dictionary $A_b$ is decomposed into $A_s$ and $A_t$, and the matching operation uses only $A_s$, whose size is much smaller than that of $A_b$; thus, the classification operation is simple and fast. However, we should also point out that, if necessary, the classified texture blocks can be further divided into simple texture blocks and complex texture blocks using a similar method. The differences are that $k$ should be set to a larger value and the candidate dictionary becomes $A_t$, which leads to a relatively complex computation.

3.3. Adaptive Rate Allocation

We assume that the total sampling rate r of an image is fixed, and the main purpose of the proposed method is to reasonably allocate the total number of samples S to each block, where S = r · n · I .
First, before starting the adaptive rate allocation, a fixed low-rate sampling matrix $\Phi^l \in \mathbb{R}^{m_l \times n}$ should be used for a quick measurement to obtain the initial CS measurements, and the corresponding $y_i^l \in \mathbb{R}^{m_l}$ can be obtained, where the measurement number $m_l$ is small.
Second, using $y_i^l$ and the method proposed in Section 3.2, the image blocks can be divided into smooth blocks (denoted as class $C_s$) and texture blocks (denoted as class $C_t$).
Third, in order to utilize the limited sampling number more effectively, a dynamic rate allocation strategy based on the complexity of the entire image is proposed. The proportion of smooth blocks in the entire image can be used to characterize its overall complexity. Denote the number of $C_s$ blocks as $N_s$; when $N_s \ge I/2$, the entire image is considered simple; otherwise, it is considered complex. For a simple image, the total sampling number can reasonably be considered sufficient, since smooth blocks require few sampling resources, leaving enough remaining resources for complex blocks. For complex images, the situation is the opposite, and a more accurate allocation method is required.
For a complex image, its texture blocks should be further divided into simple texture blocks (denoted as class $C_{t1}$) and complex texture blocks (denoted as class $C_{t2}$). The specific dividing method is described in the last part of Section 3.2, where $A_t$ is selected as the candidate atom set, $k = K_t$, and a new threshold $\tau_t$ needs to be set.
For simpler images, whose blocks are divided into two categories, $C_s$ and $C_t$, a fixed initial sampling number $m_l$ is assigned to $C_s$ blocks; that is, $m_{C_s} = m_l$, where $m_{C_s}$ is the sampling number of $C_s$ blocks. Then the sampling number of the $C_t$ blocks can be decided,
$$m_{C_t} = \mathrm{round}\!\left( \frac{S - m_{C_s} \cdot N_s}{N_t} \right), \quad (24)$$
where $\mathrm{round}(\cdot)$ represents the rounding function, and $N_t$ is the total number of $C_t$ blocks. It is then necessary to check whether $m_{C_t}$ is greater than a given maximum number $m_{C_t}^{ul}$; if so, $m_{C_t}$ is set as $m_{C_t} = m_{C_t}^{ul}$, the excess samples are evenly allocated to the $C_s$ blocks, and $m_{C_s}$ is updated as
$$m_{C_s} = \mathrm{round}\!\left( \frac{S - N_t \cdot m_{C_t}^{ul}}{N_s} \right). \quad (25)$$
For complex images, there are three categories: $C_s$, $C_{t1}$, and $C_{t2}$. A fixed initial sampling number is assigned to $C_s$, $m_{C_s} = m_l$, and the remaining sampling number is evenly allocated to $C_{t1}$ and $C_{t2}$. Then the sampling number of the $C_{t1}$ blocks is reduced, and the saved samples are given to the $C_{t2}$ blocks,
$$m_{C_{t1}} = \mathrm{round}\!\left( \frac{S - m_l \cdot N_s}{N_{t1} + N_{t2}} - \beta \right), \quad (26)$$
and
$$m_{C_{t2}} = \mathrm{round}\!\left( \frac{S - m_l \cdot N_s}{N_{t1} + N_{t2}} + \beta \frac{N_{t1}}{N_{t2}} \right), \quad (27)$$
where $m_{C_{t1}}$ and $m_{C_{t2}}$ are the sampling numbers of the $C_{t1}$ and $C_{t2}$ blocks, $N_{t1}$ and $N_{t2}$ are the numbers of $C_{t1}$ and $C_{t2}$ blocks, and $\beta$ is the parameter that controls the reduction. It is also necessary to check that $m_{C_{t1}}$ is not too small and that $m_{C_{t2}}$ is not too large. In this paper, $m_{C_{t1}}$ is limited to no less than 1.04 times $m_{C_s}$, and $m_{C_{t2}}$ is limited to no greater than $m_{C_t}^{ul}$. This can be guaranteed by adjusting the value of $\beta$.
Finally, when the final sampling number $m$ for each block is decided, the supplementary sampling operation is executed. For each block, if $m > m_l$, a new supplementary matrix $\Phi^s \in \mathbb{R}^{m_s \times n}$ is generated, and the supplementary measurement $y_i^s \in \mathbb{R}^{m_s}$ can be obtained, where $m_s = m - m_l$. Then, the final CS measurement $y_i$ can be obtained by concatenating $y_i^l$ and $y_i^s$.
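For the two-class (simple-image) case, the allocation and its cap check described above reduce to plain arithmetic. The function name and the toy numbers in this sketch are ours:

```python
def allocate_simple(S, m_l, N_s, N_t, m_ul):
    """Allocate S total samples for a 'simple' image: smooth blocks get
    the initial number m_l; texture blocks share the rest, capped at
    m_ul, with any excess handed back to the smooth blocks."""
    m_cs = m_l
    m_ct = round((S - m_cs * N_s) / N_t)
    if m_ct > m_ul:                      # cap texture blocks, redistribute
        m_ct = m_ul
        m_cs = round((S - N_t * m_ul) / N_s)
    return m_cs, m_ct

# 8 smooth + 4 texture blocks, 2000 samples in total, cap at 300
print(allocate_simple(S=2000, m_l=50, N_s=8, N_t=4, m_ul=300))  # (100, 300)
```

With the cap at 300, the texture blocks hit the upper limit and the leftover samples flow back to the smooth blocks, raising their budget from 50 to 100.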

3.4. Reconstruction

This work mainly focuses on the adaptive sampling rate method. When the sampling rate is reasonably allocated at the sampling end, commonly used reconstruction methods can obtain good results. A classic reconstruction method, SPGL1 [31,32], was selected to reconstruct the signal. Using the sensing dictionary $A$, measurement $y_i$, and measurement matrix $\Phi_b$, the approximate sparse solution $\tilde{a}_i$ can be solved:
$$\tilde{a}_i = \arg\min_{\tilde{a}_i \in \mathbb{R}^N} \|\tilde{a}_i\|_1 \ \ \text{such that} \ \ \|A \tilde{a}_i - y_i\|_2 \le \epsilon. \quad (28)$$
The approximate solution $\tilde{x}_i$ of an image block is
$$\tilde{x}_i = D \tilde{a}_i, \quad i \in \{1, 2, 3, \dots, I\}. \quad (29)$$

4. Experiments

4.1. Parameter Settings

The proposed method was validated on a set of standard test images, which are accessible via the link https://github.com/eclipsetb/academic/blob/main/testImages.zip (accessed on 23 June 2025).
To verify the method's stability, it was tested on both 512 × 512 high-resolution images and 256 × 256 low-resolution images. Each image was partitioned into 16 × 16 blocks, and individual tests were performed on each image with total sampling rates of 0.1, 0.2, 0.3, 0.4, and 0.5.
The parameters used in the block classification operation are listed in Table 2. Since $K_s$ controls the number of iterations of Algorithm 1, the smaller it is, the faster the program runs. The actual tests showed that $K_s$ can be set to 1 to distinguish smooth blocks while obtaining the fastest running speed. $\tau_s$ is a normalized threshold used to distinguish the smooth blocks. $K_t$ and $\tau_t$ are similar parameters; however, they are used to distinguish between the $C_{t1}$ blocks and the $C_{t2}$ blocks.
The sampling rate allocation parameters are shown in Table 3. These parameters determine the final sampling number, and they are not related to the block classification method.

4.2. Simulation Results

In this section, the proposed method is compared with the ABCS-SD [7], ABCS-MC [10], ABCS-Entropy [8], and BCS [15] methods. Among them, ABCS-SD, ABCS-MC, and ABCS-Entropy are adaptive sampling rate methods, and BCS is a traditional fixed-rate method. The ABCS-SD and BCS methods were not optimized for reconstruction and were reconstructed using the SPGL1 algorithm, consistent with the proposed method. For the other methods, which have optimized reconstruction processes, their optimized processes were used in the experiments.
The PSNRs (peak signal-to-noise ratios) of the reconstructed high-resolution images for different sampling rates are shown in Table 4. The reconstruction quality plots are illustrated in Figure 4. As shown in Table 4 and Figure 4, the method proposed in this article achieved the best PSNRs for all test images. We considered that this was mainly due to the proposed method being able to accurately identify smooth and texture blocks as well as the better sparse representation ability of the overcomplete dictionary. We also would like to mention that Figure 4 shows that when the sampling rate increases from the lowest values, such as increasing from 0.1 to 0.2 or 0.3, the reconstruction quality of the proposed method improves faster than that of the other methods. When the sampling rate is sufficient, such as increasing from 0.4 to 0.5, the quality improvement provided by the proposed method slows. It can be considered that when the sampling rate is insufficient, the proposed method quickly allocates the increased sampling resources to the complex blocks that need these resources. It also shows that this method relatively accurately identifies texture blocks.
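For reference, the PSNR reported in Table 4 can be computed as follows. This is a standard definition (not code from the paper); the 8-bit peak value of 255 is assumed:

```python
import numpy as np

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB between an image and its
    reconstruction; higher values mean better reconstruction quality."""
    mse = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

# a constant error of 10 gray levels gives a PSNR of about 28.13 dB
print(round(psnr(np.full((8, 8), 100.0), np.full((8, 8), 110.0)), 2))
```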
Figure 5 shows the visual effects of the different methods in reconstructing high-resolution images when the total sampling rate r = 0.3 . Overall, the proposed method has better visual effects with all test images. For more complex parts of these images, the proposed method often achieves better reconstruction quality than the other methods. Simultaneously, a relatively consistent reconstruction quality can be achieved for both the texture blocks and the smooth blocks, and the blocking effect is not obvious. This can be illustrated from another aspect: the proposed method can identify smooth blocks and texture blocks well and assign them reasonable sampling rates.
The method was also tested on some low-resolution images, and the PSNR results are shown in Table 5. It can be seen that the proposed method also achieves the best PSNRs on these low-resolution images.
Considering that the proposed method was designed for distributed applications, the running time was also tested on an embedded device to evaluate the complexity of the sampling operations. A Raspberry Pi 5 was used as the testing device; its CPU was an Arm Cortex-A76 (2.4 GHz), and it had 4 GB of RAM. Its energy consumption was less than 20 W, and its current price was less than USD 60.
Each method was tested 10 times on the device, and the average values are shown in Figure 6.
It can be seen from Figure 6 that, when the test image is simple, our method outperforms all other methods in speed, and when the test image is more complex, the proposed method is slightly slower than the ABCS-SD method, but the gap is not large. The experimental results show that the proposed method is a low-computation-complexity method.

5. Summary

This paper proposes an innovative CS method based on an overcomplete ridgelet dictionary. The method can estimate the sparsity of image blocks based on CS measurements, without depending on the original image. At the same time, the method is simple as well as accurate, and it adaptively allocates reasonable sampling rates to blocks, achieving better reconstruction quality at a given sampling rate. These characteristics make it particularly suitable for resource-constrained systems, such as monitoring systems, sensor network systems, or satellite systems. The experimental results show the effectiveness of the proposed method: compared with the existing methods, images with a higher PSNR and better visual effects can be reconstructed with a similar or faster running time.
In our future work, we will explore two research directions. One is to further study the specific performance of this method in practical application scenarios, involving testing and optimizing its performance on actual scene images rather than standard test images. Second, noting that the core mechanism of this method involves classifying specific signals and considering the demonstrated ability of deep learning methods on classification tasks, we plan to explore applications of deep learning methods in ARCS.

Author Contributions

Conceptualization, J.W.; methodology, D.L.; writing—original draft, D.L.; writing—review and editing, J.W., D.L. and Q.Y.; supervision, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Yunnan Fundamental Research Projects, grant number 202401AT070415, “Research on Adaptive Rate Compressive Sensing Method”; the National Natural Science Foundation of China (NSFC) Project, grant number 62461030, “Research on Optimization Methods and Fusion Mechanisms for Intelligent Reflecting Surface-Assisted Multi-Cell CoMP-NOMA Cooperative Transmission”; and the Yunnan Fundamental Research Projects, grant number 202401AS070105, “Research on 6G UAV Emergency Communication Link Technologies for Complex Mountainous Areas”.

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Masoum, A.; Meratnia, N.; Havinga, P.J.M. Compressive Sensing Based Data Collection in Wireless Sensor Networks. In Proceedings of the 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Daegu, Republic of Korea, 16–18 November 2017; pp. 442–447.
  2. Hsieh, S.-H.; Liang, W.-J.; Lu, C.-S.; Pei, S.-C. Distributed Compressive Sensing: Performance Analysis with Diverse Signal Ensembles. IEEE Trans. Signal Process. 2020, 68, 3500–3514.
  3. Lin, W.; Dong, L. Adaptive Downsampling to Improve Image Compression at Low Bit Rates. IEEE Trans. Image Process. 2006, 15, 2513–2521.
  4. Zhu, S.; Zeng, B.; Gabbouj, M. Adaptive Reweighted Compressed Sensing for Image Compression. In Proceedings of the 2014 IEEE International Symposium on Circuits and Systems (ISCAS), Melbourne, VIC, Australia, 1–5 June 2014; pp. 1–4.
  5. Zhang, J.; Xiang, Q.; Yin, Y.; Chen, C.; Luo, X. Adaptive Compressed Sensing for Wireless Image Sensor Networks. Multimed. Tools Appl. 2017, 76, 4227–4242.
  6. Monika, R.; Dhanalakshmi, S. An Optimal Adaptive Reweighted Sampling-Based Adaptive Block Compressed Sensing for Underwater Image Compression. Vis. Comput. 2024, 40, 4071–4084.
  7. Yu, Y.; Wang, B.; Zhang, L. Saliency-Based Compressive Sampling for Image Signals. IEEE Signal Process. Lett. 2010, 17, 973–976.
  8. Li, R.; Duan, X.; He, W.; You, L. Entropy-Assisted Adaptive Compressive Sensing for Energy-Efficient Visual Sensors. Multimed. Tools Appl. 2020, 79, 20821–20843.
  9. Itti, L.; Koch, C.; Niebur, E. A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259.
  10. Li, R.; He, W.; Liu, Z.; Li, Y.; Fu, Z. Saliency-Based Adaptive Compressive Sampling of Images Using Measurement Contrast. Multimed. Tools Appl. 2018, 77, 12139–12156.
  11. Wang, W.; Chen, J.; Zhang, Y.; Xia, J.; Zeng, X. Adaptive Compressed Sampling Based on EMD for Wireless Sensor Networks. IEEE Sens. J. 2023, 23, 2577–2591.
  12. Wang, W.; Jin, X.; Quan, D.; Zhu, M.; Wang, X.; Zheng, M.; Li, J.; Chen, J. Rate Adaptive Compressed Sampling Based on Region Division for Wireless Sensor Networks. Sci. Rep. 2024, 14, 29666.
  13. Song, Z.; Chen, J. Adaptive Rate Compression for Distributed Video Sensing in Wireless Visual Sensor Networks. Vis. Comput. 2025.
  14. Unde, A.S.; Pattathil, D.P. Adaptive Compressive Video Coding for Embedded Camera Sensors: Compressed Domain Motion and Measurements Estimation. IEEE Trans. Mob. Comput. 2020, 19, 2250–2263.
  15. Gan, L. Block Compressed Sensing of Natural Images. In Proceedings of the 2007 15th International Conference on Digital Signal Processing, Cardiff, UK, 1–4 July 2007; pp. 403–406.
  16. Olshausen, B.A.; Field, D.J. Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1? Vision Res. 1997, 37, 3311–3325.
  17. Olshausen, B.A.; Field, D.J. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature 1996, 381, 607–609.
  18. Figueras i Ventura, R.M.; Vandergheynst, P.; Frossard, P. Low-Rate and Flexible Image Coding with Redundant Representations. IEEE Trans. Image Process. 2006, 15, 726–739.
  19. Donoho, D.L.; Elad, M. Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via ℓ1 Minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202.
  20. Randall, P.A. Sparse Recovery via Convex Optimization; California Institute of Technology: Pasadena, CA, USA, 2009.
  21. Candès, E.J.; Eldar, Y.C.; Needell, D.; Randall, P. Compressed Sensing with Coherent and Redundant Dictionaries. Appl. Comput. Harmon. Anal. 2011, 31, 59–73.
  22. Jin, S.; Sun, W.; Huang, L. Joint Optimization Methods for Gaussian Random Measurement Matrix Based on Column Coherence in Compressed Sensing. Signal Process. 2023, 207, 108941.
  23. Yang, C.; Pan, P.; Ding, Q. Image Encryption Scheme Based on Mixed Chaotic Bernoulli Measurement Matrix Block Compressive Sensing. Entropy 2022, 24, 273.
  24. Tausiesakul, B. Basis Pursuit and Linear Programming Equivalence: A Performance Comparison in Sparse Signal Recovery. In Proceedings of the 2022 7th International Conference on Smart and Sustainable Technologies (SpliTech), Split/Bol, Croatia, 5–8 July 2022; pp. 1–6.
  25. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
  26. Lin, L.; Liu, F.; Jiao, L.; Yang, S.; Hao, H. The Overcomplete Dictionary-Based Directional Estimation Model and Nonconvex Reconstruction Methods. IEEE Trans. Cybern. 2018, 48, 1042–1053.
  27. Lin, L.; Liu, F.; Jiao, L. Compressed Sensing by Collaborative Reconstruction on Overcomplete Dictionary. Signal Process. 2014, 103, 92–102.
  28. Begovic, B. Dictionary Learning for Scalable Sparse Image Representation. Adv. Signal Process. 2014, 2, 55–74.
  29. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44.
  30. Singh, D.; Singh, B. Investigating the Impact of Data Normalization on Classification Performance. Appl. Soft Comput. 2020, 97, 105524.
  31. Van Den Berg, E.; Friedlander, M.P. Probing the Pareto Frontier for Basis Pursuit Solutions. SIAM J. Sci. Comput. 2009, 31, 890–912.
  32. Van Den Berg, E.; Friedlander, M.P. Sparse Optimization with Least-Squares Constraints. SIAM J. Optim. 2011, 21, 1201–1229.
Figure 1. An example of atom matching.
Figure 2. (a) Examples of the atoms in the smooth dictionary. (b) Examples of the atoms in the texture dictionary.
Figure 3. Flowchart of algorithm.
Figure 4. PSNR comparison of various reconstruction algorithms for different sampling rates with 512 × 512 images. (a) Lena; (b) Barbara; (c) Peppers; (d) Goldhill; (e) Cameraman; (f) Pirate; (g) Luna; (h) Heron.
Figure 5. Visual quality comparison of various reconstruction algorithms on 512 × 512 images at a sampling rate r = 0.3. The images in each row, from left to right, are the original image and those reconstructed with the BCS, ABCS-SD, ABCS-MC, ABCS-Entropy, and proposed methods.
Figure 6. Computation times for various adaptive algorithms at different sampling rates, measured on a Raspberry Pi 5. The large red rectangle shows an enlarged view of the region in the small red rectangle. The tested images were (a) Cameraman (simple image); (b) Barbara (complex image); (c) Peppers (complex image); (d) Lena (complex image).
Table 1. Comparison of existing methods.
Method             Original Image Dependence   Computational Cost   Reconstruction Quality
ABCS-RW [4]        Yes                         Medium               High
STD-BCS-SPL [5]    Yes                         Low                  Low
ABCS-ARS [6]       Yes                         Medium               Medium
ABCS-SD [7]        Yes                         Low                  Medium
ABCS-Entropy [8]   No                          High                 Medium
ABCS-MC [10]       No                          Medium               Medium
Zigzag EMD [11]    No                          High                 Medium
ABCS-IRD [12]      No                          Medium               Medium
Table 2. Block classification parameters.
K_s    τ_s    K_t    τ_t
1      0.1    4      0.01
Table 3. Rate allocation parameters.
m_l    m_C    t_u    l    β
2      32     40     1    5
Table 4. Comparison of PSNRs (dB) at different sampling rates for 512 × 512 images.
Images      Methods        0.1     0.2     0.3     0.4     0.5
Lena        BCS            23.00   26.40   28.75   30.93   32.90
            ABCS-SD        21.43   26.42   29.10   31.27   33.75
            ABCS-MC        24.93   28.23   30.58   32.59   34.16
            ABCS-Entropy   26.04   29.03   31.16   32.93   34.45
            Proposed       27.88   32.12   34.49   36.02   37.30
Barbara     BCS            21.16   23.95   26.14   28.18   30.31
            ABCS-SD        19.70   23.88   26.02   28.29   30.72
            ABCS-MC        21.78   24.28   26.19   28.23   29.99
            ABCS-Entropy   22.39   24.33   26.56   28.57   30.64
            Proposed       22.43   25.14   27.60   30.12   32.78
Peppers     BCS            22.13   25.89   28.47   30.51   32.32
            ABCS-SD        20.04   25.99   28.59   30.59   32.48
            ABCS-MC        23.84   27.18   29.00   30.62   32.24
            ABCS-Entropy   23.87   26.55   28.00   29.56   30.87
            Proposed       27.40   31.40   33.01   34.07   34.93
Goldhill    BCS            23.07   25.61   27.56   29.25   30.94
            ABCS-SD        21.64   25.74   27.79   29.56   31.50
            ABCS-MC        23.10   25.96   27.54   29.14   30.52
            ABCS-Entropy   24.56   26.18   27.64   29.17   30.73
            Proposed       26.38   28.95   30.62   32.23   33.73
Cameraman   BCS            21.55   25.83   29.07   32.37   35.82
            ABCS-SD        20.19   25.68   29.07   32.44   36.04
            ABCS-MC        24.60   28.99   32.40   35.99   38.71
            ABCS-Entropy   24.12   27.76   29.88   31.74   33.12
            Proposed       27.82   35.59   40.23   42.21   44.52
Pirate      BCS            21.53   24.28   26.14   27.87   29.56
            ABCS-SD        20.43   24.33   26.29   28.07   29.96
            ABCS-MC        22.54   25.30   27.15   28.75   30.35
            ABCS-Entropy   23.93   26.16   27.93   29.52   31.17
            Proposed       25.45   28.48   30.42   32.03   33.53
Luna        BCS            25.42   29.33   31.98   34.35   36.75
            ABCS-SD        23.42   29.28   31.97   34.56   37.30
            ABCS-MC        27.37   31.07   33.63   35.87   37.80
            ABCS-Entropy   27.19   29.71   31.57   33.28   34.95
            Proposed       31.10   36.47   39.34   41.06   42.14
Heron       BCS            24.55   27.13   29.02   30.79   32.44
            ABCS-SD        23.27   27.18   28.97   30.86   32.57
            ABCS-MC        25.88   28.56   30.66   32.41   34.09
            ABCS-Entropy   26.42   28.66   30.23   31.74   33.20
            Proposed       28.05   30.63   32.22   33.39   34.47
Table 5. Comparison of PSNRs (dB) at different sampling rates on 256 × 256 images.
Images      Methods        0.1     0.2     0.3     0.4     0.5
Lena        BCS            20.19   23.58   25.89   27.83   29.90
            ABCS-SD        18.92   23.19   25.85   28.12   30.80
            ABCS-MC        22.32   25.73   28.13   30.25   32.21
            ABCS-Entropy   23.55   26.25   28.40   30.30   32.23
            Proposed       25.01   28.63   31.00   33.61   35.41
Heron       BCS            23.20   25.45   27.42   29.30   31.17
            ABCS-SD        22.30   25.54   27.43   29.35   31.28
            ABCS-MC        24.31   27.17   29.25   31.33   33.38
            ABCS-Entropy   25.08   27.01   28.63   30.12   31.49
            Proposed       26.68   29.60   31.79   33.55   35.31
Goldhill    BCS            21.26   24.00   26.18   28.24   30.14
            ABCS-SD        20.29   24.51   26.42   28.47   30.66
            ABCS-MC        22.18   25.06   27.02   28.68   30.07
            ABCS-Entropy   23.38   24.92   26.24   27.69   29.21
            Proposed       25.24   28.07   30.19   32.17   34.06
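For reference, the PSNR values reported in Tables 4 and 5 follow the standard definition for 8-bit images. A generic computation (not the authors' code) is:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    grayscale images with the given peak intensity value."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform error of 16 gray levels on a 512 x 512 image.
a = np.zeros((512, 512))
b = a + 16.0
print(round(psnr(a, b), 2))  # -> 24.05
```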