Review

Compressive Sensing in Power Engineering: A Comprehensive Survey of Theory and Applications, and a Case Study

by Lekshmi R. Chandran 1,*, Ilango Karuppasamy 2,*, Manjula G. Nair 1,*, Hongjian Sun 3 and Parvathy Krishnan Krishnakumari 4
1 Department of Electrical and Electronics Engineering, Amrita Vishwa Vidyapeetham, Amritapuri 690525, India
2 Department of Electrical and Electronics Engineering, Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, Ettimadai 641112, India
3 Department of Engineering, Durham University, Durham DH1 3LE, UK
4 Amsterdam Business School, University of Amsterdam, Plantage Muidergracht 12, 1018 TV Amsterdam, The Netherlands
* Authors to whom correspondence should be addressed.
J. Sens. Actuator Netw. 2025, 14(2), 28; https://doi.org/10.3390/jsan14020028
Submission received: 15 January 2025 / Revised: 22 February 2025 / Accepted: 25 February 2025 / Published: 7 March 2025
(This article belongs to the Section Wireless Control Networks)

Abstract: Compressive Sensing (CS) is a transformative signal processing framework that enables sparse signal acquisition at rates below the Nyquist limit, offering substantial advantages in data efficiency and reconstruction accuracy. This survey explores the theoretical foundations of CS, including sensing matrices, sparse bases, and recovery algorithms, with a focus on its applications in power engineering. CS has demonstrated significant potential in enhancing key areas such as state estimation (SE), fault detection, fault localization, outage identification, harmonic source identification (HSI), Power Quality Detection (PQD), condition monitoring, and more. Furthermore, CS addresses challenges in data compression, real-time grid monitoring, and efficient resource utilization. A case study on smart meter data recovery demonstrates the practical application of CS in real-world power systems. By bridging CS theory and its application, this survey underscores its potential to drive innovation, efficiency, and sustainability in power engineering and beyond.

1. Introduction

In the era of big data and the Internet of Things (IoT), the ability to efficiently acquire, process, and analyze vast amounts of information has become increasingly critical. This need is especially pronounced in power engineering, where modern electrical grids are characterized by bidirectional flows of electricity and information. With the proliferation of smart meters, phasor measurement units (PMUs), and distributed renewable energy systems, power grids generate massive amounts of data every second. For instance, Advanced Metering Infrastructure (AMI) alone generates petabytes of data annually, with a single smart meter producing between 0.25 TB and 250 TB per year [1]. Similarly, PMUs deployed in Wide-Area Measurement Systems (WAMSs) continuously transmit synchrophasor data at rates of 10–120 samples per second, creating significant data transmission and storage demands [2,3]. Managing such high-dimensional data efficiently for applications like real-time state estimation, fault detection, energy management, and load forecasting remains a key challenge due to bandwidth limitations, communication overhead, and computational constraints. Several key challenges must be addressed for efficient sensor data handling and communication, particularly in large-scale smart grid applications [1,2,3]:
  • High-Volume Data Transmission: Traditional data acquisition and transmission techniques require full Nyquist-rate sampling, leading to excessive bandwidth usage, high storage requirements, and communication congestion in smart grids.
  • Bandwidth and Latency Constraints: Many power system applications, such as fault detection, real-time state estimation, and condition monitoring, require low-latency and high-fidelity data transmission. However, conventional compression methods introduce computational delays, making them unsuitable for real-time processing.
  • Energy-Efficient Data Processing: In large-scale sensor networks, such as PMUs and IoT-based smart grid sensors, the energy cost of continuous data transmission is high. Efficient data acquisition strategies are needed to reduce transmission overhead while ensuring robust monitoring capabilities.
  • Scalability and Resource Constraints: As smart grids expand, the increasing number of sensors and IoT devices exacerbates the problem of real-time data management, requiring lightweight, scalable solutions for sensor data acquisition.
Compressive Sensing as a Solution
One promising solution to these challenges is Compressive Sensing (CS), a revolutionary signal processing paradigm that enables the reconstruction of sparse signals using far fewer measurements than traditionally required. By exploiting sparsity, CS sidesteps the limitations of the Nyquist–Shannon theorem, making it possible to acquire and process data at sub-Nyquist rates [4,5,6,7]. Unlike conventional methods that first sample data exhaustively and then compress it, CS integrates data acquisition and compression, enabling efficient signal reconstruction with lower resource requirements. Not all sparse representations involve CS, however: unlike generic sparse coding methods used in machine learning for feature extraction, CS extends sparsity by enabling signal reconstruction from limited or incomplete measurements, making it well suited for fault-tolerant acquisition systems [8].
Advancements and Practical Benefits of CS in Power Engineering
Recent studies have validated the benefits of CS-based compression and reconstruction across multiple power engineering domains. For example, low-power CS architectures have demonstrated up to 6× improvements in energy efficiency compared to traditional Nyquist-rate analog-to-digital converters (ADCs) [9]. CS-based Analog Information Conversion (AIC) systems have achieved a Figure of Merit (FOM) of 10.2 fJ/conversion-step, highlighting their suitability for energy-efficient wideband signal acquisition [10]. In Advanced Metering Infrastructure (AMI), CS enables low-latency smart meter data transmission while minimizing bandwidth and storage overhead [11,12,13]. Similarly, CS-driven PMU data compression has been proposed to address the scalability limitations of WAMSs, reducing transmission latency and enhancing grid observability [2,3,14]. In state estimation and topology identification (SE & TI), CS techniques have been applied to optimize measurement redundancy and improve real-time monitoring accuracy. Furthermore, CS-based frameworks for fault detection, outage identification, harmonic source identification, and condition monitoring have demonstrated superior accuracy in reconstructing grid disturbances while reducing sensor data transmission costs [15,16,17,18,19].
These advantages make CS indispensable for modern power grids, addressing data-intensive power system applications by reducing communication bottlenecks, enhancing storage efficiency, and enabling real-time signal acquisition.
Comparing CS with Traditional Compression Methods
Compressive Sensing (CS) is fundamentally different from traditional compression methods, which are generally categorized into lossy and lossless techniques [1]:
  • Lossy Compression: Techniques like Singular Value Decomposition (SVD), Principal Component Analysis (PCA), and Symbolic Aggregate Approximation (SAX) reduce data size by discarding less significant information. These are suitable for applications where a trade-off between size and quality is acceptable, such as image or video compression.
  • Lossless Compression: Methods like Huffman coding and LZ algorithms preserve all original data, ensuring perfect reconstruction but often requiring extensive computational resources.
CS integrates data acquisition and compression at the hardware level by directly capturing the most critical information through random projections. Unlike traditional methods that operate post-acquisition, CS relies on the following:
  • Sparsity: Signals with many near-zero coefficients in a transform domain (e.g., wavelet or Fourier domain).
  • Random Projections: Encoding sparse signals through measurement matrices that satisfy the Restricted Isometry Property (RIP).
  • Efficient Recovery Algorithms: Reconstruction of the original signal using techniques like ℓ1-minimization.
While CS and traditional compression methods are distinct, they are not mutually exclusive. Hybrid approaches are emerging where traditional compression serves as a pre-processing step for CS by reducing dimensionality. CS recovery outputs are further optimized using traditional compression for applications like storage reduction in cloud systems.

2. Motivation and Contributions

The rapid transformation of power systems, driven by smart grids, renewable energy integration, and the shift toward digitalized infrastructure, has created an unprecedented demand for handling large-scale, complex datasets. Conventional data management and analysis methods often struggle with the sheer scale and inherent sparsity of these data, particularly in critical applications such as advanced metering, fault detection, and wide-area monitoring. In power systems, data compression has traditionally depended on well-established sparsity techniques [1]. Compressive Sensing (CS) presents a promising solution by facilitating efficient data acquisition, transmission, and recovery using minimal samples. While numerous reviews of CS focus on fields like medical imaging, communications, and general sparse signal recovery [20,21,22,23,24,25,26,27,28,29,30,31], a gap remains in the literature connecting CS theory with power engineering applications. This study aims to bridge that gap, addressing how CS helps resolve challenges such as high-dimensional data, resource-constrained environments, and noisy, sparse, or incomplete measurements in power engineering applications. The main contributions of this work are as follows:
(a)
A Comprehensive Theoretical Overview: We present a robust foundation of CS principles, covering key aspects like sensing matrices, measurement bases, and recovery algorithms. This theoretical grounding aids in understanding how CS can be strategically applied to real-world grid applications.
(b)
Applications in Power Engineering: We examine major applications of CS across power engineering scenarios, including Advanced Metering Infrastructure, state estimation, fault detection, fault localization, outage identification, harmonic sources identification, power quality detection, condition monitoring, and IoT-based smart grid monitoring. By detailing these use cases, we highlight how CS addresses specific challenges such as data sparsity, transmission efficiency, and communication constraints, ultimately offering new pathways for efficient grid operation.
(c)
A Case Study: We evaluate the effectiveness of various sparse bases and measurement matrices for smart meter data recovery under different compression ratios and noise conditions. This study systematically examines the impact of compression ratios on reconstruction accuracy in both noise-free and noisy environments, providing practical insights into designing robust CS-based compression techniques for power grid data. The findings contribute to optimizing data acquisition and transmission strategies, enhancing efficiency in power system monitoring and operation.

3. Compressive Sensing Paradigm

Candes et al.’s groundbreaking work [4,5,6] revolutionized signal processing by introducing Compressive Sensing. This approach challenges the standard Nyquist–Shannon requirement (N samples) by using fewer measurements (M), opening new possibilities for signal acquisition and reconstruction [7].
Figure 1 outlines the general framework of CS, encompassing the processes of data acquisition and reconstruction. Given a signal x ∈ ℝ^N, the conventional sensing paradigm requires the number of measurements M to be at least equal to N to ensure accurate reconstruction. However, CS enables accurate or approximate reconstruction with significantly fewer measurements (M < N), provided the signal is sparse or compressible in its original domain or a transformed domain. In CS, fewer measurements are obtained by linearly projecting the high-dimensional signal x onto a lower-dimensional space using a carefully designed sensing matrix Φ ∈ ℝ^(M×N), resulting in a measurement vector y ∈ ℝ^M. Mathematically, this is expressed as in Equation (1) [4,5,6,7]:
y = Φx   (1)
Here, Φ is the measurement matrix/sensing matrix.
The measurement matrices and sparse matrices play pivotal roles in the CS framework:
  • Measurement Matrix/Sensing Matrix (Φ):
In CS, the measurement matrix is designed to preserve the essential information of the sparse or compressible signal, ensuring that it can be reconstructed using nonlinear optimization techniques. The matrix Φ is applied to the high-dimensional signal to obtain a set of compressed measurements, commonly referred to as “compressive measurements” or “observations.” These measurements are a linear combination of the original signal’s elements. This approach drastically reduces the number of measurements needed compared to traditional sampling, making CS highly efficient for applications where data acquisition or storage is resource-constrained.
  • Sparse Basis/Dictionary Matrix (Ψ):
Sometimes, x may not be sparse by itself. To address this, a transformation matrix Ψ, known as the sparse basis or dictionary matrix, is applied to represent the signal in a domain where it is sparse or compressible. For example, if the signal is sparse in the frequency domain, Ψ could be a Fourier transform matrix. Mathematically, the signal in the sparse domain is represented as in Equation (2):
x = Ψs   (2)
where s ∈ ℝ^N is a K-sparse vector.
Using this transformed representation, compressed measurement is given as in Equation (3):
y = ΦΨs   (3)
The sparse basis matrix plays a pivotal role in transforming the signal into a representation where a majority of coefficients are zero or near zero, making it sparse. This transformed representation is crucial for efficient signal recovery.
  • Reconstruction Matrix (Θ = ΦΨ):
The reconstruction matrix Θ combines the measurement matrix (Φ) and the sparse basis (Ψ) to represent the overall linear transformation from the sparse representation of the signal to its compressed measurements.
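To make the acquisition model concrete, the following minimal Python sketch builds a K-sparse signal, compresses it as y = Φx, and forms Θ = ΦΨ as in Equations (1)–(3). It is an illustration only: the signal length, measurement count, sparsity level, DCT sparse basis, and Gaussian sensing matrix are assumptions, not taken from any cited work.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                 # illustrative sizes: samples, measurements, sparsity

# Hypothetical K-sparse coefficient vector s in the DCT domain
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Sparse basis Psi (inverse-DCT synthesis matrix), so that x = Psi @ s
Psi = idct(np.eye(N), norm="ortho", axis=0)
x = Psi @ s

# Random Gaussian sensing matrix Phi, rows scaled toward near-isometry
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Compressive measurements: y = Phi x = Phi Psi s = Theta s
y = Phi @ x
Theta = Phi @ Psi
assert np.allclose(y, Theta @ s)
print(x.shape, y.shape)              # (256,) (64,): only M < N values are acquired
```

Only the M compressed values in y (together with the seed or structure of Φ) need to be transmitted or stored; recovering x from y is the reconstruction problem discussed next.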
The challenge in CS is to recover the original signal x. This is an underdetermined system because M < N. A nonlinear reconstruction algorithm, often simplified to a linear form, is employed to rebuild the initial signal. This algorithm operates on the principle that it must be aware of a specific representation basis, either the original or a transformed one, in which the signal exhibits sparsity for precise recovery or compressibility for an approximate one. The compressed sensing reconstruction algorithm yields an estimated sparse representation of the signal, denoted as ŝ. From this, an estimate of the original signal, represented as x̂, can be derived by inversely transforming ŝ (or closely approximating that inverse). Reconstruction algorithms in compressive sensing address the problem of reconstructing a sparse signal from the underdetermined measurement model of Equation (1). These algorithms exploit sparsity by solving optimization problems involving l0, l1, or l2 norms. The different norm minimization approaches and problem formulations are as follows:
(i) 
l0 Norm Minimization:
The objective is to minimize the number of non-zero coefficients in the reconstructed signal ‘x’, as shown in Equations (4) and (5)
for a case without noise
min ‖x‖₀ subject to y = Φx   (4)
and for a case with noise:
min ‖x‖₀ subject to ‖y − Φx‖₂² ≤ e   (5)
‖x‖₀ = Σᵢ 1(xᵢ ≠ 0)   (6)
Here, e is a small tolerance parameter accounting for measurement noise. Equation (6) shows the l0 norm of ‘x’.
(ii) 
l1 Norm Minimization:
The objective is to minimize the sum of absolute values of coefficients in the reconstructed signal ‘x’, as shown in Equations (7) and (9)
for a case without noise,
min ‖x‖₁ subject to y = Φx   (7)
‖x‖₁ = Σᵢ |xᵢ|   (8)
and for a case with noise,
min ‖x‖₁ + λ‖y − Φx‖₂²   (9)
Here, Equation (8) shows the l1 norm of ‘x’, and λ is the regularization parameter balancing sparsity and data fidelity: the relative weighting of the l1 term against the residual term controls how strongly sparsity is enforced in the reconstructed signal. A minimal numerical sketch of solving this regularized formulation is given after the l2 case below.
(iii) 
l2 Norm Minimization:
The objective is to minimize the magnitude of coefficients in the reconstructed signal ‘x’, as shown in Equations (10) and (12)
for a case without noise,
min ‖x‖₂ subject to y = Φx   (10)
‖x‖₂ = √(Σᵢ xᵢ²)   (11)
and for a case with noise,
min ‖x‖₂ + λ‖y − Φx‖₂²   (12)
Here, Equation (11) shows the l2 norm of ‘x’.
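As an illustrative sketch of the regularized l1 formulation of Equation (9), the following Python routine implements the classical Iterative Shrinkage/Thresholding Algorithm (ISTA) for the equivalent weighted form min ½‖y − Φx‖₂² + λ‖x‖₁ (λ rescaled relative to Equation (9)). The step size, λ, and iteration count are illustrative assumptions, not prescriptions from the cited works.

```python
import numpy as np

def ista(y, Phi, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1,
    a standard weighted form of the l1-regularised problem above."""
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2)      # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                # gradient of the data-fidelity term
        z = x - step * grad                         # gradient descent step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft-thresholding
    return x
```

When the signal is sparse only in a transform domain, the same routine is applied to Θ = ΦΨ to estimate the coefficients s, after which x̂ = Ψŝ.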
The following conditions must be met for perfect sparse signal reconstruction [4,5,6,7,21,22,23,24,25,26,27,28,29,30,31].
(i)
Sparsity: For CS techniques to be effective, signals need to be sparse or nearly sparse. Sparsity refers to having few non-zero coefficients, while near sparsity means that the coefficients are close to zero. A signal x is said to be k-sparse in the Ψ domain if it can be represented with only k non-zero coefficients when transformed by Ψ.
(ii)
Incoherence: This broadens the time–frequency relationship, suggesting that objects with a sparse representation in one domain, symbolized by Ψ, are distributed over the domain of acquisition, just as a singular pulse or spike in the time domain disperses across the frequency domain [4]. Incoherence is a measure of the dissimilarity between the measurement basis ϕ and the sparsity basis ψ. For precise reconstruction in CS, these bases must be incoherent with each other. The mutual coherence μ is a statistic that quantifies the maximum correlation between the elements of these two matrices and is given by Equation (13) [20]:
μ(Φ, Ψ) = √N · max_{1 ≤ i, j ≤ N} |⟨φᵢ, ψⱼ⟩|   (13)
The scalar product (inner product) of two vectors ϕ and ψ is given by Equation (14).
⟨φ, ψ⟩ = Σᵢ₌₁ᴺ φᵢ ψᵢ   (14)
The range of coherence is [1, √N]. A lower value of μ is desirable as it implies a higher degree of incoherence between the bases, facilitating accurate signal reconstruction with fewer measurements. The measurement requirement for different sensing matrices is shown in Table 1. A small numerical sketch of this coherence computation is given after the RIP condition below.
(iii)
Restricted Isometry Property (RIP): The reconstruction matrix Θ must satisfy the RIP condition to ensure the preservation of the geometric properties of a sparse signal during transformation and measurement. RIP maintains the distances (Euclidean or l2 norm) between sparse signals, preventing them from being too closely mapped, which facilitates accurate reconstruction. Formally, a matrix obeys the RIP of order ‘k’ if the restricted isometry constant δₖ satisfies Equation (15) [22],
(1 − δₖ) ‖x‖₂² ≤ ‖Θx‖₂² ≤ (1 + δₖ) ‖x‖₂²   (15)
for all k-sparse vectors ‘x’. The RIP ensures that all subsets of ‘k’ columns taken from the matrix are nearly orthogonal. RIP enables compressive sensing algorithms to embed sufficient information within a reduced number of samples, allowing for accurate reconstruction and robustness against noise. It provides a deterministic guarantee for the accurate reconstruction of sparse signals, even in the presence of noise interference.
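As anticipated above, the mutual coherence of Equation (13) can be evaluated numerically. The sketch below is illustrative only; the dimensions, the Gaussian sensing matrix, and the canonical (spike) sparse basis are assumptions chosen for demonstration.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Mutual coherence of Equation (13): sqrt(N) times the largest absolute
    inner product between unit-normalised rows of Phi and columns of Psi."""
    N = Psi.shape[0]
    P = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm sensing rows
    B = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # unit-norm basis columns
    return np.sqrt(N) * np.max(np.abs(P @ B))

rng = np.random.default_rng(1)
N = 128
Phi = rng.standard_normal((32, N))      # assumed random Gaussian sensing matrix
Psi = np.eye(N)                         # canonical (identity/spike) basis
print(mutual_coherence(Phi, Psi))       # lies in [1, sqrt(N)]; lower is better
```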
l0 norm minimization can exactly recover sparse signals when the sparsity, incoherence, and RIP conditions are met, but it is computationally expensive and NP-hard. Greedy algorithms are commonly used to approximate l0 minimization. In practice, l1-minimization is often used due to its convex nature, robustness to noise, and computational tractability. l2 minimization is commonly used in applications with well-behaved, Gaussian noise; l2-regularized least squares or ridge regression is commonly used for l2 minimization. lp norm minimization becomes non-convex for 0 < p < 1, which can lead to multiple local minima. Values of p between 0 and 1 are less common but can be used to enforce stronger sparsity.

4. Measurement Matrix and Sparse Basis Matrix

4.1. Measurement Matrices

Measurement matrices are crucial for ensuring efficient sampling and signal reconstruction. They must satisfy the Restricted Isometry Property (RIP) to guarantee accurate reconstruction. Various types of measurement matrices have been studied in the literature, classified as in Figure 2, focusing on hardware compatibility, computational efficiency, and suitability for large-scale, real-time applications.

4.1.1. Random Matrices

Random Gaussian Matrices (RGMs): Universally incoherent with most sparse bases, RGMs satisfy the RIP criteria but pose challenges in terms of storage and reproducibility [23].
Sparse Binary Matrices (SBMs): These offer energy-efficient solutions compared to conventional data compression techniques, though their reconstruction performance may be slightly inferior to that of RGMs [23].

4.1.2. Deterministic and Structured Matrices

Toeplitz, Circulant, and Quasi-Cyclic Array Code (QCAC)-Based Binary Matrices: Being computationally efficient, they reduce memory requirements and are suitable for real-time applications, such as power grid monitoring and fault detection [21,32].
The move toward deterministic matrices stems from the need for low complexity, fast computation, and real-time compatibility, making them ideal for power engineering applications [23]. Performance analysis of deterministic and random matrices has highlighted their practical applications in domains like grid state estimation, harmonic analysis, and fault detection.

4.1.3. Chaotic Matrix

Chaotic matrices are derived from chaotic systems such as logistic maps and balance deterministic and random properties. They are noise-resilient, satisfy the RIP under specific conditions, and are suitable for robust applications [23].
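For illustration, the following sketch generates the three families of sensing matrices discussed above: random Gaussian, sparse binary, and a chaotic matrix driven by a logistic map. The column weight d, the normalizations, and the logistic-map parameters x0 and r are illustrative assumptions rather than values prescribed in the cited works.

```python
import numpy as np

def gaussian_matrix(M, N, seed=0):
    """Random Gaussian sensing matrix (Section 4.1.1), scaled toward near-isometry."""
    return np.random.default_rng(seed).standard_normal((M, N)) / np.sqrt(M)

def sparse_binary_matrix(M, N, d=4, seed=0):
    """Sparse binary matrix: d non-zeros per column, an energy-efficient alternative."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((M, N))
    for j in range(N):
        Phi[rng.choice(M, size=d, replace=False), j] = 1.0 / np.sqrt(d)
    return Phi

def chaotic_matrix(M, N, x0=0.3, r=3.99):
    """Chaotic matrix (Section 4.1.3) filled from a logistic map
    x_{k+1} = r * x_k * (1 - x_k): deterministic given x0, yet noise-like."""
    seq, x = np.empty(M * N), x0
    for k in range(M * N):
        x = r * x * (1.0 - x)
        seq[k] = x
    return (seq.reshape(M, N) - 0.5) / np.sqrt(M)   # centre and scale the entries
```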

4.2. Sparse Basis Matrices

Sparse basis matrices are essential for transforming signals into sparse representations, enabling efficient reconstruction. The choice of the basis depends on the signal’s characteristics and application requirements. Sparse bases can be broadly classified into fixed dictionaries, over-complete dictionaries, and data-driven dictionaries, each catering to specific scenarios and computational needs.

4.2.1. Fixed Sparse Basis Matrices

Fixed dictionaries are predefined mathematical constructs that are widely used in CS applications. These dictionaries are effective when their mathematical properties align well with the characteristics of the data.
Fourier transform (FT) is widely used for stationary signals, providing an efficient representation in the frequency domain. However, it is unsuitable for non-stationary signals due to its inability to offer time-frequency resolution [33].
Short-Time Fourier Transform (STFT) partitions the signal into segments using a fixed-size window, enabling localized time–frequency analysis. While it resolves some limitations of FT, its performance depends heavily on the chosen window size, leading to trade-offs between time and frequency resolution [34,35].
Wavelet Transform (WT) provides superior time–frequency resolution compared to FT and STFT by employing variable window sizes. It is particularly effective for identifying transient signals, fundamental frequencies, and harmonics. WT has been extensively used in CS for power system applications due to its ability to represent signals sparsely in localized time–frequency domains [34,36,37,38]. Discrete Wavelet Transform (DWT) requires fewer resources compared to Continuous Wavelet Transform (CWT) and is ideal for large-scale applications [35]. Wavelet Packet Transform (WPT) extends DWT by decomposing both approximation and detail components, allowing for a more refined sparse representation [37]. Wavelet Multi-Resolution (WMR) employs a combination of high-pass and low-pass filters to process high- and low-frequency components, respectively, making it effective for detecting transients in power systems [11,38].
Discrete Cosine Transform (DCT), Hilbert Transform (HT), Gabor Transform (GT), Wigner Distribution Function (WDF), S-Transform (ST), Gabor–Wigner Transform (GWT), Hilbert–Huang Transform (HHT), and other hybrid transform methods are also used [33,39]. Discrete Sine Transform (DST) [40] and Lapped Transform (LT) [41] are the other built-in (predefined dictionaries) transforms.
WT [11,38], DCT [12], FT [13], ST, and STFT [35], among others, can be used for 1D signals. For 2D signals like images, options such as 2D Wavelets [34,36,37], Gabor [42], Curvelets [43], Contourlets [44,45], Ridgelet Transform [46], and Shearlet Transform [47] can be used.
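As a simple illustration of why fixed bases such as the DCT suit power waveforms, the sketch below synthesizes a voltage signal containing a 50 Hz fundamental plus 3rd and 5th harmonics (the sampling rate and harmonic amplitudes are arbitrary assumptions) and counts how many DCT coefficients carry 99% of the signal energy.

```python
import numpy as np
from scipy.fft import dct

fs, f0 = 6400, 50                        # assumed sampling rate and fundamental (Hz)
t = np.arange(0, 0.2, 1.0 / fs)          # ten cycles of the 50 Hz fundamental

# Synthetic voltage waveform: fundamental plus 3rd and 5th harmonics
x = (np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.10 * np.sin(2 * np.pi * 5 * f0 * t))

c = dct(x, norm="ortho")                 # coefficients in the fixed DCT basis
energy = np.cumsum(np.sort(np.abs(c))[::-1] ** 2) / np.sum(c ** 2)
k99 = int(np.searchsorted(energy, 0.99)) + 1
print(f"{k99} of {len(x)} DCT coefficients carry 99% of the signal energy")
```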

4.2.2. Over-Complete Dictionaries

Over-complete dictionaries combine multiple deterministic bases to create a richer representation. These dictionaries are highly redundant, allowing them to capture diverse signal features, but at the cost of higher computational complexity. Combinations of Haar, DCT, Toeplitz, and Hankel matrices have been successfully applied in imaging and power system applications, enabling better sparse representation and feature extraction [34,42,43,44,48].

4.2.3. Data-Driven Dictionaries

Data-driven dictionaries adaptively learn the sparse basis from the dataset, enabling superior performance for real-world signals. Adaptively learned dictionaries excel in non-stationary signals and applications requiring precise feature extraction, such as power system diagnostics and medical imaging [49,50,51]. Algorithms like K-Singular Value Decomposition (K-SVD) [52], Non-Negative Matrix Factorization (NMF) [53], and Deep Learning models (e.g., CNNs, RNNs, Autoencoders) [49,50,51] train the dictionary to capture unique data features.
For complex and high-dimensional signals, over-complete dictionaries and adaptive methods are preferred. These approaches provide enhanced flexibility and adaptability, particularly in applications where signals exhibit intricate structures or non-stationary characteristics.

5. Signal Reconstruction Algorithms

Signal reconstruction algorithms are pivotal in the Compressive Sensing (CS) framework, as they enable the recovery of sparse signals from compressed measurements. Table 2 provides a comprehensive classification of CS signal recovery algorithms, highlighting the features, advantages, and trade-offs of various approaches. These algorithms can be broadly categorized into several distinct classes as shown in Figure 3, each characterized by its unique approach and inherent trade-offs.
i.
Convex optimization methods: These are foundational approaches for solving l1-minimization problems, offering robust solutions in noise-free scenarios but often struggling with computational intensity and sensitivity to noise [54,55,56,57,58] (e.g., Basis Pursuit).
ii.
Nonconvex Methods: These target sparsity more aggressively than convex approaches do but face challenges like computational intensity and potential instability in noisy environments [59,60] (e.g., FOCUSS, IRLS).
iii.
Iterative/Thresholding Algorithms: These methods iteratively refine the solution through thresholding to promote sparsity and are computationally efficient and suitable for large-scale problems, but their performance depends on parameter selection and preconditioning [37,61,62,63] (e.g., ISTA, FISTA).
iv.
Greedy Algorithms: These methods iteratively build the sparse solution by selecting the best atom (column of the dictionary) at each step, but are sometimes less effective with highly correlated dictionaries [22,64,65,66,67,68,69,70] (e.g., OMP, CoSaMP).
v.
Probabilistic Models: These leverage prior information for robust recovery in noisy or uncertain conditions, though they require careful parameter selection and may be computationally demanding [71,72,73,74] (e.g., Bayesian Compressive Sensing).
vi.
Combinatorial and Sublinear Methods: These focus on discrete and combinatorial optimization (e.g., HSS) [22,57,75].
vii.
Deep Learning Approaches: These represent the latest advancements in CS reconstruction, offering unparalleled speed and accuracy by learning data-driven features [50,76,77,78,79,80,81] (e.g., ISTA-Net, LDAMP).
CS algorithm selection hinges on balancing sample complexity, computational demands, resilience to noise, and uncertainties. For noise-free scenarios, convex optimization methods such as Basis Pursuit are highly effective, offering precise solutions by leveraging ℓ1-minimization techniques [54]. In contrast, for noisy measurements, methods like Basis Pursuit Denoising (BPDN) and Bayesian Compressive Sensing (BCS) provide robust recovery by incorporating noise tolerance and leveraging prior information [54,72]. For real-time applications, iterative thresholding algorithms like FISTA and greedy approaches such as OMP and CoSaMP strike a balance between speed and accuracy, making them suitable for dynamic and resource-constrained environments [61,77]. When dealing with complex data structures, deep learning-based methods, including ISTA-Net and ReconNet, excel by learning intricate data-driven features, delivering superior performance, particularly in applications such as imaging and video reconstruction [50,80]. Probabilistic models and Bayesian approaches can handle uncertainties in dynamic environments, such as time-series data from power grids, but require optimization for large-scale applications [72]. Research efforts continue to refine algorithms, striving for excellence in these critical dimensions [21,22].
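As a concrete illustration of the greedy class (item iv above), the following minimal Orthogonal Matching Pursuit routine is offered as a sketch only; practical implementations add stopping criteria, batched updates, and numerical safeguards.

```python
import numpy as np

def omp(y, Theta, k):
    """Minimal Orthogonal Matching Pursuit: repeatedly pick the column of Theta
    most correlated with the residual, then re-estimate the coefficients on the
    selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Theta.T @ residual)))       # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s_hat = np.zeros(Theta.shape[1])
    s_hat[support] = coef
    return s_hat
```

With the Θ, y, and sparsity level K from the acquisition sketch in Section 3, omp(y, Theta, K) followed by x̂ = Ψŝ typically recovers the original signal exactly in the noise-free setting.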

6. Performance Metrics for Evaluation

Many evaluation metrics have been proposed in the literature [23,34,83] to assess the performance of CS. The most commonly used metrics are as follows:
The coherence metric, defined in Equation (13), assesses the measurement matrix’s effectiveness and the likelihood of successful reconstruction. It measures the highest correlation between two normalized columns of the measurement matrix. A low coherence level means fewer measurements are needed for the original signal’s reconstruction. Essentially, the lower the coherence, the more efficiently the reconstruction algorithm operates. The other metrics are as follows:
(a)
Sparsity: For a signal x with N samples, if it is k-sparse in a sparse basis, then k represents the count of non-zero coefficients, which is significantly less than N. This means N−k coefficients can be discarded with minimal impact on the signal’s critical information. The percentage sparsity (fraction of non-zero coefficients) is given as in Equation (16):
% Sparsity = (k / N) × 100   (16)
(b)
Compression Ratio (CR)
CR is determined by dividing the number of measurements M by the number of samples in the original input signals N, as given in Equation (17):
CR = M / N   (17)
(c)
Error Metrics: RE, MSE, RMSE, NMSE, MAE, INAE
  • Reconstruction error (RE), also known as recovery error, is the ratio of the norm of the difference between the original signal and the reconstructed signal x̂ to the norm of the original signal. RE is given in Equation (18):
RE = ‖x − x̂‖ / ‖x‖   (18)
  • Mean square error (MSE) measures the average magnitude of the squared difference between the original signal and the recovered signal. MSE given as in Equation (19) is a widely used metric to assess the quality of reconstruction:
MSE = (1/N) Σₙ [x(n) − x̂(n)]²   (19)
  • Root Mean Square (RMSE) measures the square root of the MSE and is given as in Equation (20):
RMSE = √MSE   (20)
  • Normalized Mean Squared Error (NMSE) is given as in Equation (21):
NMSE = Σₙ [x(n) − x̂(n)]² / Σₙ [x(n) − x̄]²   (21)
  • Mean Absolute Error (MAE) measures the average absolute difference between the original signal and the reconstructed signal and is given as in Equation (22):
MAE = (1/N) Σᵢ₌₁ᴺ |xᵢ − x̂ᵢ|   (22)
  • Integrated Normalized Absolute Error (INAE) evaluates the normalized cumulative reconstruction error over all elements of the signal and is given as in Equation (23):
INAE = Σᵢ₌₁ᴺ |xᵢ − x̂ᵢ| / Σᵢ₌₁ᴺ |xᵢ|   (23)
(d)
Signal-to-Noise Ratio (SNR)
SNR measures the ratio of the signal power to the noise power as given in Equation (24). It is often used in CS to quantify the quality of reconstruction in the presence of noise.
SNR = 10 log₁₀ ( Σₙ [x(n)]² / Σₙ [x(n) − x̂(n)]² )   (24)
Peak Signal-to-Noise Ratio (PSNR) is a measure of the fidelity of the reconstructed signal, as given in Equation (25). It is often used in image compression applications. The maximum possible signal value of x (max_x), in the case of an image, is the maximum valid value of a pixel.
PSNR = 10 log₁₀ ( max_x² / MSE )   (25)
(e)
Computation Time (CT)
Computation time encompasses all the computational steps involved in CS, including measurement acquisition, data processing, solving optimization problems, and any other algorithmic tasks.
(f)
Recovery Time (RT)
This is a subset of computation time and focuses solely on the reconstruction phase of CS. Recovery time specifically measures the time taken to solve the optimization problem and recover the original signal once the compressed measurements are acquired. Ultimately, it depends on the complexity of the reconstruction algorithm.
(g)
Reconstruction/Recovery Success Rate (RSR) and Failure Rate
RSR measures the percentage of successfully reconstructed signals as given in Equation (26). It is often used in scenarios where the exact reconstruction of every signal is not necessary.
RSR = (Number of successfully reconstructed signals / Total number of signals) × 100%   (26)
A successful recovery is typically defined as one in which the recovered signal is highly similar (e.g., 90% similarity) to the original signal, across different values of the sparsity level, number of samples, and number of measurements. The Failure Rate, FR, is the complement of the RSR (FR = 1 − RSR) and represents how often the recovery algorithm fails to reconstruct the original signal.
(h)
Complexity
Complexity measures the computational resources required to perform signal reconstruction from compressed measurements. It quantifies the computational burden of CS algorithms and is crucial for assessing their practical feasibility, especially in real-time applications or resource-constrained environments. Complexity reflects how efficiently an algorithm performs with a large amount of data and can be measured in computational time or hardware resources. It is important to note that, in CS, the degree of complexity depends upon the sparsity, the number of samples, and the number of measurements.
Percentage bandwidth saving (PBWS) is measured using Equation (27):
PBWS = [n·m − (p·k + n·p)] / (n·m) × 100   (27)
n is the number of features, m is the number of samples taken from each feature, and p is the number of principal components.
(i)
Correlation
Correlation measures the similarities between the recovered signal and the original signal. The correlation coefficient, c, is given as in Equation (28):
c = Σₙ (x(n) − x̄)(x̂(n) − x̂̄) / √( Σₙ (x(n) − x̄)² · Σₙ (x̂(n) − x̂̄)² )   (28)
where x̄ and x̂̄ are the averages of the actual and reconstructed signals, respectively.
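For convenience, several of the metrics above can be computed together. The following sketch is illustrative (equation references are noted as comments) and assumes two equal-length 1D arrays holding the original and reconstructed signals.

```python
import numpy as np

def cs_metrics(x, x_hat):
    """Error and similarity metrics of this section for an original signal x
    and its reconstruction x_hat (both 1-D arrays of equal length)."""
    err = x - x_hat
    mse = np.mean(err ** 2)
    return {
        "RE":    np.linalg.norm(err) / np.linalg.norm(x),              # Eq. (18)
        "MSE":   mse,                                                   # Eq. (19)
        "RMSE":  np.sqrt(mse),                                          # Eq. (20)
        "NMSE":  np.sum(err ** 2) / np.sum((x - np.mean(x)) ** 2),      # Eq. (21)
        "MAE":   np.mean(np.abs(err)),                                  # Eq. (22)
        "INAE":  np.sum(np.abs(err)) / np.sum(np.abs(x)),               # Eq. (23)
        "SNR_dB": 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2)),     # Eq. (24)
        "Correlation": float(np.corrcoef(x, x_hat)[0, 1]),              # Eq. (28)
    }
```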

7. Compressive Sensing in Power Engineering

In power engineering, Compressive Sensing (CS) has become a pivotal technology for enhancing the efficiency and reliability of Smart Grid communication infrastructure. Its implementation spans several critical areas: Advanced Metering Infrastructure (AMI) and Wide-Area Measurement Systems (WAMSs), where CS aids in managing the massive data influx from smart meters and synchrophasor data transmission, respectively; state estimation (SE) and Topology Identification (TI), which benefit from CS in accurate grid state analysis and in understanding network topology amidst the complexities introduced by renewable energy integration; fault detection (FD), fault localization (FL), and outage identification (OI) in power grids, where CS’s sparse data processing capability is crucial for pinpointing faults and outages efficiently; harmonic source identification (HSI) and Power Quality Detection (PQD), where CS assists in identifying harmonic sources to maintain power quality in decentralized grids; and condition monitoring (CM) of machinery, where CS significantly reduces data volume and enhances real-time monitoring effectiveness. Across these domains, CS stands out for its ability to handle large datasets and sparse scenarios, positioning it as a transformative tool in the evolving landscape of power engineering.
Smart Grid (SG) communication infrastructure integrates technologies for efficient electricity distribution monitoring, control, and management. Key components include Advanced Metering Infrastructure (AMI), phasor measurement units (PMUs), control centers, communication networks, data management systems, and grid sensors [84]. Compressive Sensing (CS) optimizes data handling and enhances SG communication infrastructure efficiency.

7.1. Advanced Metering Infrastructure (AMI)

Compressive Sensing (CS) plays a crucial role in AMI, particularly with smart meters, aiding efficient data transmission and management. The widespread adoption of Advanced Metering Infrastructure (AMI), highlighted by India’s initiative to replace 250 million traditional meters with smart ones [85], has led to a massive increase in data generation. Smart meters produce between 0.25 and 250 TB of data yearly, with a collective output of 2920 TB from a hundred million meters, as reported in [1]. This growth in data volume brings bandwidth and storage challenges, spurring research into efficient data compression and storage reduction strategies. Studies like [11,12,13,85,86,87,88,89,90] emphasize the potential of CS in AMI, with applications in compression and authentication [12], low-voltage customer data reconstruction [87], and deep blind compressive sensing for appliance monitoring [88]. Table 3 shows CS applications in the AMI domain.
The work in [38] proposes a dynamic framework that combines temporal compression (wavelet-based) at the meter level with spatial compression at the local data center. This method adapts compression ratios using a novel sparsity measure, the Coefficient of Variation (CV), ensuring 99% data variance is preserved while reducing communication traffic to central control centers. Principal Component Analysis (PCA) is employed to achieve efficient spatial compression by capturing the most significant data components. The framework efficiently balances compression performance and reconstruction accuracy by exploiting spatial correlations among neighboring nodes. Addressing the limitations of static schemes, this framework dynamically adjusts compression ratios, reducing reconstruction errors and optimizing data compression for large-scale applications.
The study in [12] addresses the need for efficient, low-cost authentication in Advanced Metering Infrastructure (AMI) systems, where smart meters continuously transmit power consumption data to a Data Concentrator Unit (DCU). Traditional cryptographic methods often incur high computational costs, making them impractical for low-cost smart meters. A CS-based physical layer authentication scheme is introduced, which simultaneously compresses and authenticates power reading signals. The shared measurement matrix between the DCU and a legitimate meter (LM) acts as a secret key for both compression and authentication. This matrix is generated using Linear Feedback Shift Registers (LFSRs), creating a pseudo-random sequence known only to the DCU and LM. The various steps are as follows:
  • Step 1: The initial vector required for generating the measurement matrix is securely transmitted via a physical layer security scheme based on channel reciprocity in a time–division duplex (TDD) mode.
  • Step 2: Upon receiving the compressed signal, the DCU reconstructs it using CS and evaluates the residual error.
  • Step 3: The residual error is used as a test statistic in hypothesis testing to distinguish legitimate signals from intrusion attempts.
By integrating authentication directly into the compression process, this approach offers a lightweight solution ideal for large-scale AMI networks. It provides a robust defense against impersonation attacks, paving the way for future research in efficient and secure data management in smart grids.
This study [86] examines data compression for smart grid systems, focusing on power consumption data from a network of 1000 smart meters. The data are transmitted to a utility station in compressed form to minimize delay and communication overhead. After processing with a Daubechies wavelet, data sparsity is high, with only 70 out of 1000 elements being non-zero. The data are compressed at access points using a Gaussian measurement matrix, reducing the number of observations transmitted (“y”). Reconstruction is achieved using the Two-Step Iterative Shrinkage/Thresholding (TwIST) algorithm, ensuring precision through iterative convergence thresholds. Higher compression rates result in increased reconstruction errors, underlining the need to balance compression efficiency and reconstruction accuracy.
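The compression-versus-accuracy trade-off reported in [86] can be illustrated with a simple sweep over the number of measurements M. The sketch below is not a reproduction of that study: it substitutes a DCT sparse basis for the Daubechies wavelet, scikit-learn's Orthogonal Matching Pursuit for the TwIST solver, and a synthetic K-sparse profile for real smart meter data; all sizes are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, K = 512, 12                              # illustrative signal length and sparsity

# Synthetic profile that is K-sparse in the DCT domain (stand-in for meter data)
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Psi = idct(np.eye(N), norm="ortho", axis=0)
x = Psi @ s

for M in (32, 64, 128, 256):                # sweep the number of measurements
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    y = Phi @ x
    solver = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
    solver.fit(Phi @ Psi, y)                # recover the sparse DCT coefficients
    x_hat = Psi @ solver.coef_
    err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
    print(f"CR = M/N = {M/N:.3f}  ->  reconstruction error {err:.2e}")
```

In line with the observation in [86], lowering M (i.e., compressing more aggressively) generally raises the reconstruction error, so the compression ratio must be chosen against the accuracy requirement of the application.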
CS facilitates energy-efficient data gathering by combining model-based prediction and adaptive compression, reducing sampling rates and transmission frequency. While solutions like compressive data-gathering (CDG) improve energy distribution and communication costs, their scalability in dynamic networks remains limited. Methods such as joint sparse signal recovery minimize energy expense but may not meet application-specific data accuracy requirements.

7.2. Wide-Area Measurement Systems (WAMSs)

Wide-Area Measurement Systems (WAMSs) rely on Phasor Measurement Units (PMUs) for monitoring power system dynamics. Table 4 shows CS applications in the WAMS domain. Centralized approaches in SG networks face overhead challenges [14,91,92,93]. Integrating multiple antennas with CS in home area networks improves performance and reduces delays. A CS and 802.15.4-based Medium Access Control (MAC) protocol for SGs with renewable energy enhances data transmission and minimizes delay [14]. While PMU installations are crucial for real-time monitoring, state estimation, and fault detection, they face challenges in efficiently transmitting synchrophasor data due to high data volumes and noise [91,92,93,94]. CS introduces non-uniform sampling rates, requiring adaptations in protection algorithms for efficient implementation [94].
This study [95] proposes a CS-based data compression strategy for PMU data, leveraging clustering analysis and multiscale PCA (MSPCA) to address high data volumes and noise in WAMS. In the proposed method, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is applied to the PMU data for preconditioning. DBSCAN automatically identifies clusters of correlated PMU data, excluding outliers or bad data, thus enhancing compression accuracy and avoiding data distortions. The clustered data are then subjected to MSPCA, which decomposes the signals into frequency sub-bands using wavelet transformation. High-frequency components are compressed through PCA, a technique effective for spatially sparse data. This combined approach leverages both spatial and temporal sparsity, efficiently compressing PMU data in ambient (normal) and event (disturbance) states. The strategy offers potential for future applications in enhancing WAMS efficiency and resilience, especially in large-scale power grids with complex data requirements, thereby supporting improved grid stability and monitoring capabilities [95].
Distributed Compressive Sensing (DCS) offers an innovative approach to data gathering in sensor networks by leveraging spatial-temporal correlations. The distributed compressive sensing (DCS) approach presented in [96] enhances data gathering in sensor networks by leveraging spatial-temporal correlations to improve energy efficiency and data reconstruction accuracy. Initially, a spatial correlation-based coalition formation algorithm groups sensor nodes into coalitions based on the sparsity distribution of their signals. This grouping helps localize data collection and defines a utility function that minimizes the number of active sensor nodes, significantly reducing energy consumption. Within each coalition, a spatial–temporal compressive sensing technique is applied. This technique employs a block diagonal measurement matrix to generate linear combinations of sensor node readings. The matrix is carefully structured to balance computational and communication loads across the coalitions, optimizing network performance. The compressed sensor readings are then transmitted to a central base station. At the base station, a joint sparse signal recovery mechanism is executed in two stages. First, a common sparsity profile is identified across all coalitions. Next, the recovery process within each coalition ensures a consistent sparsity profile among its sensor nodes. This dual-stage recovery enhances the accuracy of data reconstruction while reducing the number of measurements required. By efficiently utilizing spatial–temporal correlations, the DCS approach achieves improved energy efficiency and scalability, making it a robust solution for large-scale sensor networks.
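The block-diagonal measurement structure used in the DCS scheme of [96] can be sketched as follows; the coalition sizes and per-coalition measurement counts are illustrative assumptions. Each coalition compresses only its own nodes' readings, keeping sensing local while the base station receives a single stacked measurement vector.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)

# Hypothetical coalitions: (number of nodes, number of compressed measurements)
coalitions = [(40, 10), (60, 15), (30, 8)]

# One random sensing block per coalition; stacking them block-diagonally keeps
# compression local to each coalition, mirroring the structure described above
blocks = [rng.standard_normal((m, n)) / np.sqrt(m) for n, m in coalitions]
Phi = block_diag(*blocks)

readings = rng.standard_normal(sum(n for n, _ in coalitions))   # stacked node readings
y = Phi @ readings
print(Phi.shape, y.shape)   # (33, 130) (33,): 33 compressed values describe 130 nodes
```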
Table 4. CS applications for Wide-Area Measurement Systems.
| Ref. | Sensing/Measurement Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments |
| [92] | Random | Modified Subspace Pursuit | Partial Fourier/DCT | CS-based PMU data reconstruction. |
| [93] | Random | Subspace Pursuit | Fourier Transform | CS-based PMU data reconstruction. |
| [96] | Random | - | Wavelet Transform | Adaptive compression combining clustering analysis with multiscale Principal Component Analysis (MSPCA); leverages both spatial and temporal sparsity. |

7.3. State Estimation (SE) and Topology Identification (TI)

State Estimation (SE) and Topology Identification (TI) are fundamental to modern power system operations, enabling real-time monitoring, situational awareness, and grid reliability. SE determines the grid’s operational states, such as voltage magnitudes and angles, by processing data from smart meters, Remote Terminal Units (RTUs), and Phasor Measurement Units (PMUs) [15,16,97,98,99,100,101]. Table 5 shows CS applications in the SE and TI domains. TI identifies the physical structure and connectivity of the grid. Topology identification in power grids has a sparse nature due to the structure of power networks, where each node (bus) is typically connected to only a few other nodes rather than to all other nodes. This sparse connectivity results in a nodal admittance (or Laplacian) matrix that has mostly zero entries, reflecting the limited direct connections between nodes. Due to this inherent sparsity, many techniques in topology identification can leverage Compressed Sensing (CS). However, integrating renewable energy sources and distributed generation poses challenges, such as nonlinearities, increased data volume, and dynamic variations. Traditional SE methods struggle to address these complexities, leading to a growing interest in advanced approaches like Compressive Sensing (CS).
The various challenges in SE and TI are as follows:
  • Complexity of Distribution Networks: SE in distribution networks is less studied compared to transmission systems, primarily due to its radial structure with multiple feeders and branches, unbalanced loads, and limited measurements.
  • Nonlinear Relationships: Power flow relationships between voltage states and other grid variables are highly nonlinear, complicating traditional SE approaches.
  • Impact of Renewable Integration: The variability introduced by distributed generation (DG) creates correlated data patterns, necessitating adaptive estimation techniques.
  • High Computational Costs: Traditional model-based methods rely on physical parameters, such as Distribution Factors (DFs) and Injection Shift Factors (ISFs), but face high computational costs and uncertainties in real-time applications [16]. Methods for calculating DFs include model-based, data-driven non-sparse, and data-driven sparse estimation. Model-based methods face uncertainties and high computational costs, while data-driven models adapt better to changing conditions [97]. However, non-sparse methods can contribute to the curse of dimensionality.
Compressive Sensing (CS) techniques address many of these challenges by reducing the number of measurements required for accurate state estimation and topology mapping [15]. The work in [15] addresses challenges in state estimation for power distribution systems, especially as Distributed Generation (DG) from renewable sources creates highly correlated power data across both space and time. Traditional state estimation methods require large amounts of power measurements, demanding extensive communication bandwidth and reliability. The increase in data volume and the nonlinearity of power systems exacerbate this issue, making efficient aggregation and processing of measurements challenging. By leveraging spatial and temporal correlations, CS eliminates redundant data, enabling efficient information aggregation and enhancing grid security [15]. Data-driven sparse DF estimation methods are emerging to address these issues, focusing on dominant DFs while promoting result sparsity. Compressive Sensing (CS) aids in selecting and transmitting critical information, discarding redundant data, thus enhancing situational awareness and grid security. Two methods for SE are described in [15]: indirect state estimation, applying the Newton–Raphson method post-reconstruction, and direct state estimation, integrating compressed power measurements directly into Newton–Raphson iterations. Laplacian sparsity is a common technique in SE. Both methods achieve accurate voltage state estimation with as few as 50% of the original measurements.
TI is modeled as a sparse recovery problem using CS and graph theory [16]. Algorithms like Clustered Orthogonal Matching Pursuit (COMP) address clustered sparsity in Laplacian matrices and Band-Excluded Locally Optimized COMP (BLOMCOMP) prevents the loss of non-zero neighbor elements to improve SE [16]. In SGs, where interconnected nodes often exhibit correlated measurements, OMP can fail to identify the correct support, resulting in incomplete topology recovery. COMP extends OMP by expanding support selection to include neighboring indices, thereby handling clustered sparsity where related nodes appear in clusters. This is particularly useful for SG topology because interconnected nodes naturally form clusters. However, COMP still struggles with high coherence in the data, as it lacks a mechanism to prevent the selection of correlated columns. The BLOMCOMP algorithm improves on both OMP and COMP by integrating a “band-exclusion” approach, which defines a coherence band around each selected index, thus preventing adjacent highly correlated elements from being included in the support. Simulations on IEEE test systems (30-bus, 118-bus, and 2383-bus networks) demonstrate its effectiveness, with measurement requirements determined by signal sparsity rather than network size. BLOMCOMP introduces band exclusion and local optimization, addressing high coherence in correlated measurements, and outperforms other methods in accuracy and robustness. OMP is straightforward, selecting the most correlated columns iteratively to build the sparse solution, yet it suffers in high-coherence conditions.
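The band-exclusion idea can be sketched schematically as below. This is not the published BLOOMP/BLOMCOMP algorithm: it approximates the coherence band by simple index adjacency with a fixed half-width and omits the local optimization and clustered-support steps; it only illustrates how excluding neighbors of already selected atoms avoids picking highly correlated columns twice.

```python
import numpy as np

def band_excluded_omp(y, Theta, k, band=1):
    """Schematic band-exclusion variant of OMP: indices within `band` of an
    already selected atom are barred from later selection. The fixed half-width
    `band` (index adjacency as a stand-in for the coherence band) is an
    illustrative assumption."""
    N = Theta.shape[1]
    residual, support, excluded = y.copy(), [], set()
    for _ in range(k):
        corr = np.abs(Theta.T @ residual)
        corr[list(excluded)] = -np.inf                 # band exclusion
        j = int(np.argmax(corr))
        support.append(j)
        excluded.update(range(max(0, j - band), min(N, j + band + 1)))
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s_hat = np.zeros(N)
    s_hat[support] = coef
    return s_hat
```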
Data-driven dictionaries [97], derived from smart meter data, outperform traditional deterministic bases such as Haar, Hankel, and Toeplitz by achieving superior reconstruction accuracy and higher compression ratios, especially in dynamic grid conditions. These tailored dictionaries are particularly important in the context of state estimation (SE) and topology identification (TI) because they adapt to grids with high renewable energy penetration and dynamic scenarios, addressing the challenges posed by nonlinear and time-varying grid conditions. By enabling accurate reconstruction of critical states and connectivity patterns, data-driven methods provide a robust and adaptable framework, ensuring effective situational awareness, operational stability, and efficient grid management in modern power systems.
Sparse basis and sensing matrix for Distribution System State Estimation (DSSE) and Fault Location (FL) depend on meter distribution and a mapping matrix linked to the physical meter distribution [98].
CS estimates the three-phase current injection vector efficiently [100], with techniques like Coherence-Based Compressive Sampling Matching Pursuit improving on greedy algorithm limitations and enhancing convergence efficiency.
An Alternating Direction Method of Multipliers (ADMM)-based DSSE and its robustness against cyber-attacks, like false data injection (FDI), replay, and neighborhood attacks, ensuring stable grid operation, are analyzed in [101]. By minimizing computational and communication overhead, these approaches provide scalable and secure solutions for real-time grid monitoring. Distributed CS achieves similar accuracy to centralized methods while reducing computation time and communication overhead. For example, it reduces simulation time significantly (e.g., 20.48 s to 7.28 s for the IEEE 123-bus system).

7.4. Fault Detection (FD), Fault Localization (FL) and Outage Identification (OI)

Line outages significantly impact smart grids (SGs), leading to potential cascade failures. Accurate fault pinpointing using intelligent algorithms is vital for grid operators to quickly isolate faults and restore power. Generally, except for faulty buses, current injections remain unchanged from normal to fault conditions [17,102]. However, fault location in expansive distribution networks is challenging due to the limited measurement devices, necessitating effective monitoring to prevent incidents like blackouts. In power systems, faults typically affect only a small subset of nodes or lines, resulting in a sparse fault vector. While this sparsity poses challenges for traditional methods, which require dense measurement infrastructures to achieve accurate detection, it also serves as an opportunity for Compressive Sensing (CS). CS explicitly exploits this inherent sparsity to recover fault locations using limited data, reducing the need for extensive sensor deployment and enabling efficient fault localization even in large-scale systems [17,18,103,104,105,106,107,108,109,110,111,112,113]. The various challenges in fault detection and localization include the following:
  • Limited Measurement Devices: Traditional fault detection methods require dense measurement infrastructures, which are costly and impractical for large-scale networks [17,18].
  • High Coherence in Sensing Matrices: The sensing matrix derived from nodal admittance matrices can exhibit high pairwise correlation, reducing the accuracy of sparse recovery algorithms [108,109,110].
  • Noise and Perturbations: Real-world measurement data are often noisy, which can distort sparse recovery and impact fault localization accuracy [104,112].
  • Dynamic Range and Clustered Sparsity: Variations in fault signal magnitudes and clustered outage patterns complicate recovery, requiring advanced algorithms to handle these structured sparsity challenges [110].
Table 6 shows CS applications in the fault detection, fault localization, and outage identification domains. CS models for fault localization use pre- and during-fault voltage measurements [17,18,103,104,105,106,107,108,109,110,111,112,113]. The CS methods in the works [17,18,106] focus on detecting grid node faults by observing current injection changes but struggle with branch faults. Block-wise compressive sensing (BW-CS) improves multiple-line outage detection [106], offering better fault detection, robustness, and reduced complexity. Algorithms like Modified Block–Sparse Bayesian Learning (BSBL) and Bayesian CS maintain high accuracy even in noisy conditions, ensuring reliable fault localization [104,112]. Advanced solvers like Band-Exclusion Locally Optimized OMP (BLOOMP) and BLOMCOMP (clustered version of BLOOMP) mitigate high-coherence issues and exploit clustered sparsity patterns for accurate recovery [110]. Event-triggered mechanisms and adaptive stopping criteria reduce computational overhead, making CS approaches suitable for real-time applications in large grids [112]. Combining CS with machine learning techniques like fuzzy clustering and CNNs enhances performance in fault diagnosis and localization [101,114].
CS applications span fault detection, fault location, and power network localization, utilizing system-specific frameworks and edge devices [114]. A CS-CNN-based method converts 1D PV inverter fault signals into 2D feature maps for edge computing [114]. CS simplifies on-site hardware by transferring computational tasks to central monitoring stations, reducing power demands. Other applications include fault classification [114,115], power swing detection [116], leakage current identification [117], partial discharge detection [118], and fault localization [119]. CS reduces the number of samples required for three-phase voltage signal analysis, lowering runtime [115], and has been benchmarked across BP, MP, and OMP solvers for fault signal restoration. It also prevents distance relay maloperation in power swing scenarios [116].

7.5. Harmonic Source Identification (HSI) and Power Quality Detection (PQD)

In increasingly decentralized distribution grids, maintaining power quality (PQ) requires accurately identifying harmonic sources. The harmonic behavior of these systems remains poorly characterized because field measurements are sparse, concentrated mainly at HV-to-MV substations, and grid-connected PQ meters are few; as a result, a substantial portion of the grid remains unmonitored for harmonic pollution, underscoring the need for more advanced monitoring in the near future. Notably, harmonic-polluting loads typically constitute only a small fraction of the total load, so harmonic source identification is inherently sparse. Compressive Sensing (CS) effectively addresses this sparsity challenge. Table 7 shows the CS application in the harmonic source identification and power quality detection domain. Through its measurement matrix, CS discerns the relationship between measurements and source parameters, and its sparse basis matrix captures the unique patterns of harmonic sources. Thus, CS emerges as a pivotal tool for efficient and precise harmonic source identification in grids, especially as monitoring systems are upgraded. CS aids in single [120] and multiple harmonic source identification [121,122,123], enhancing grid stability and reducing the impact of harmonic pollution. The identification and recovery of harmonic signals from a single source using CS is reported in [124,125,126,127,128]. CS-based power quality classifiers are presented in [129,130,131]. Overcomplete dictionaries offer flexibility but increase computational complexity, risk overfitting, and demand more storage. A training-free high-dimensional convex hull approximation combined with a CS framework to reduce the time cost is proposed in [129].
The framework proposed in [128] leverages IoT-enabled edge nodes and dynamic CS for real-time PQ monitoring. Key features include the following:
  • Continuous Sampling and Compression: Signals are compressed using sparse random matrices to reduce data volume.
  • Dynamic Signal Recovery: Homotopy Optimization with Fundamental Filter (HO-FF) iteratively updates sparse solutions without re-solving the entire problem, enhancing computational efficiency.
  • Harmonic Spectrum Correction: Single-peak spectral interpolation mitigates spectral leakage and phase errors, ensuring accurate recovery.
  • Feedback Mechanism: Dynamically adjusts the compressed sampling ratio to adapt to fluctuating harmonic conditions.
This IoT-CS framework provides an efficient and scalable solution for real-time PQ monitoring, enabling grid operators to tackle the increasing complexity of modern power systems with distributed energy resources and electric vehicle integration.
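As a rough illustration of this kind of compressed harmonic acquisition (not the HO-FF solver or the spectrum-correction step of [128]), the sketch below measures a waveform containing a fundamental plus a few higher-order components with a sparse random matrix and recovers its DCT-domain spectrum using iterative hard thresholding. It assumes NumPy, and the harmonic bins, amplitudes, and matrix density are illustrative choices.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II synthesis basis (columns are DCT atoms)."""
    k = np.arange(n)[:, None]          # frequency index
    t = np.arange(n)[None, :]          # time index
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    D[0, :] = np.sqrt(1.0 / n)
    return D.T                          # x = Psi @ s, with s the DCT coefficients

def iht(Phi, y, k, n_iter=500):
    """Iterative hard thresholding for y = Phi @ s with a k-sparse s."""
    s = np.zeros(Phi.shape[1])
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2       # step size from the spectral norm
    for _ in range(n_iter):
        s = s + mu * Phi.T @ (y - Phi @ s)        # gradient step
        s[np.argsort(np.abs(s))[:-k]] = 0.0       # keep only the k largest entries
    return s

N, m = 256, 80
rng = np.random.default_rng(2)
Psi = dct_matrix(N)
# Waveform: fundamental plus three higher-order components -> 4-sparse in the DCT domain.
s_true = np.zeros(N)
s_true[[5, 15, 25, 35]] = [1.0, 0.6, 0.4, 0.3]
x = Psi @ s_true
# Sparse random (Bernoulli-style) measurement matrix, in the spirit of [128].
M = rng.choice([-1.0, 0.0, 1.0], size=(m, N), p=[0.1, 0.8, 0.1])
y = M @ x
s_hat = iht(M @ Psi, y, k=4)
print("relative spectrum error:", np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
```

The recovered coefficient vector directly exposes which harmonic bins are active, which is the information a PQ monitor needs from each compressed block.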

7.6. Condition Monitoring (CM) of Machines

CS is applied in condition monitoring systems to address challenges of data volume, data loss, measurement noise, and multichannel data recovery. Table 8 shows the CS application in the condition monitoring domain. For example, in the remote condition monitoring of wind turbines, a CS-based missing-data-tolerant fault detection method is used [19]. The CS-based fault detection framework for remote wind turbine monitoring includes four modules: signal conditioning, CS-based sampling, signal reconstruction, and fault detection. Using a Wireless Sensor Network (WSN), vibration and generator current signals are collected by a V-Link-LXRS sensor node at a sampling rate of 1000 Hz, recording 15,000 samples over 15 s. These nonstationary signals, affected by noise and low sparsity due to fluctuating wind conditions, are processed to enhance sparsity through thresholding techniques. The conditioned signals are compressed via CS-based sampling and transmitted wirelessly to a WSDA-1500-LXRS gateway, and then uploaded to a Sensor Cloud™ server. A remote lab computer retrieves and reconstructs the compressed data using CS-based algorithms to recover signal envelopes, which are analyzed for fault detection. This framework efficiently handles missing data and nonstationary signals, making it robust for monitoring wind turbine health in harsh and variable conditions while reducing transmission and storage requirements. The reconstruction error remained below 0.3 with data loss rates up to 95% [19].
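The following sketch illustrates, in a generic way rather than reproducing the pipeline of [19], why random data loss can be treated as compressive sampling: the surviving samples act as measurements of a signal that is sparse in a transform domain, so a sparse solver can reconstruct the lost portion. It assumes NumPy, SciPy, and scikit-learn, and uses a synthetic multi-tone stand-in for a vibration record.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
N = 512
Psi = idct(np.eye(N), axis=0, norm='ortho')       # DCT synthesis basis (columns = atoms)

# Vibration-like test signal: a handful of dominant spectral lines.
s_true = np.zeros(N)
s_true[[12, 37, 90, 151, 230]] = [1.0, 0.8, 0.6, 0.5, 0.4]
x = Psi @ s_true

# Simulate 90% data loss: only 10% of the samples survive transmission.
keep = np.sort(rng.choice(N, size=N // 10, replace=False))
Phi = Psi[keep, :]                                 # surviving rows act as the sensing matrix
y = x[keep]

# Recover the sparse spectrum from the surviving samples, then resynthesize the signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(Phi, y)
x_hat = Psi @ omp.coef_
print("NMSE:", np.linalg.norm(x_hat - x) ** 2 / np.linalg.norm(x) ** 2)
```

In the real framework the lost samples arise from wireless transmission and the signal must first be conditioned (thresholded) to make it compressible; the sketch only captures the recovery principle.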
In the case of power transformers, vibration signals are traditionally collected at high sampling frequencies, leading to significant data volume [133]. To address this challenge and ensure data interoperability and real-time capabilities for the Ubiquitous Electric Internet of Things (UEIoT), the K-SVD algorithm is employed to construct dictionaries of vibration signals, reducing data volume while preserving vital information [133].
To address efficiency and data storage challenges in bearing fault diagnosis, one framework combines Compressive Sensing (CS) with a stacked multi-granularity convolution denoising auto-encoder (SMGCDAE) to reduce data storage requirements [134]. The work in [135] introduces CS with correlated principal and discriminant components (CS-CPDCs), a hybrid method combining CS, PCA, Linear Discriminant Analysis (LDA), and Canonical Correlation Analysis (CCA) for efficient bearing fault diagnosis with reduced storage and processing requirements. Another study uses CS, the Laplacian Score (LS), and a Multi-Class Support Vector Machine (MSVM) for bearing fault classification, evaluating its efficiency with experimental vibration data [136]. A sensing matrix derived from the Walsh–Hadamard ensemble leads to a low-dimensional feature dictionary based on the Fourier dictionary [137]. For sparse signal recovery, especially in sizable or structured datasets, the L1 norm minimization problem is efficiently tackled using ADMM, which is prized for its scalability and speed, especially with structured sensing matrices. Traditional fault diagnosis methods have limitations in efficiency, feature extraction, and sensitivity to sparse signals. To address these, a method integrating CS with a Deep Kernel Extreme Learning Machine (DKELM) was introduced [139]. This method offers two key benefits: firstly, being a classic machine learning algorithm, it has reduced model and computational complexities compared to deep learning approaches, making it apt for industrial embedded systems; secondly, it is optimized for sparse signals post-compressed-sampling, ensuring quicker diagnostics while retaining high accuracy. CS- and deep learning-based condition-based monitoring (CBM) schemes are proposed in [140,141]. The Weighted Distributed Compressed Sensing–Synchronous Orthogonal Matching Pursuit (WDCS-SOMP) approach for fault feature extraction in gear transmission systems effectively extracts fault features from multi-channel signals at ultra-low compression rates, achieving a compression ratio as low as 10% [143]. This method employs a fault prominence index to identify a reference channel and utilizes a sliding-window inner product strategy to align signals with a shift-invariant dictionary. By leveraging correlations between multi-channel signals, the framework achieves better reconstruction accuracy than single-channel methods, demonstrating resilience to noise and low compression rates. The feasibility of CS-based image reconstruction for thermal imaging in equipment fault identification is discussed in [142].

7.7. Compressive Sensing for IoT-Based Smartgrid Monitoring

The work in [145] presents a three-tier IoT-based smart grid network leveraging Compressive Sensing (CS) and Fog Computing to optimize data acquisition, transmission, security, and recovery while reducing communication and storage costs. The architecture consists of IoT-based smart meters (sensing layer), fog devices (edge layer), and cloud servers (processing layer), designed to address sensing bottlenecks, high transmission overhead, and security challenges in large-scale smart grid applications. CS-based data compression is applied at the smart meter level, where sampled data are compressed and encrypted before transmission to fog nodes. Fog devices aggregate and validate the compressed data, using XOR-based authentication and encrypted key mechanisms, before forwarding them to the cloud. The cloud executes data extraction, reconstruction, and verification, ensuring accurate recovery with reduced data overhead. Performance evaluations confirm that the proposed mechanism reduces communication costs by nearly 50%, minimizes storage requirements by up to 50% compared to existing methods, and optimizes transmission efficiency (0.713 transmission ratio for 65 IoT devices) [145].

Figure 4 presents a block diagram for a possible scalable and generalized framework for real-time compressive sensing-based monitoring for smart grids, developed based on insights from the existing literature and incorporating compressive sensing integrated with IoT-enabled platforms. The framework is divided into three distinct layers, ensuring the seamless acquisition, processing, and utilization of data for monitoring and decision-making. The IoT layer collects data from various sensors (voltage, current, and camera-based) and can leverage dynamic CS with adjustable sampling rates and weighted sampling techniques (which assign different measurement weights to different sensor types) to optimize data acquisition based on signal sparsity and system conditions [24,40,90,128]. The Edge Layer is conceptualized to process compressed data using sparse representation and dynamic CS recovery algorithms, ensuring accurate signal reconstruction with minimal bandwidth and energy usage. It minimizes latency by reducing the need to transmit data to distant cloud servers. Edge devices include microcontrollers, embedded systems, FPGAs, local PCs, and mobile devices that process raw data before sending them to fog or cloud servers. Fog nodes aggregate CS-compressed data from multiple meters, ensuring efficient bandwidth utilization and reduced transmission costs. Fog-assisted encryption mechanisms can be used to protect grid data privacy [145]. The application layer aims to provide real-time monitoring dashboards, predictive analytics, and maintenance alerts for actionable insights.

A hybrid cloud-edge processing approach can be employed to optimize computational efficiency, wherein non-critical tasks such as historical data analysis, load forecasting, and long-term trend identification can be offloaded to the cloud. Meanwhile, time-sensitive operations like fault detection, power fluctuations, and real-time grid stability monitoring can be processed within the edge-fog layers to reduce latency and ensure faster response times. To optimize data storage and retrieval, CS-based data compression in cloud storage can be utilized.
Instead of storing vast amounts of raw sensor measurements, the cloud maintains feature-extracted CS data, significantly reducing storage requirements while preserving the critical information necessary for grid analysis, event detection, and decision-making. This approach enhances the efficiency of querying, retrieving, and processing data, making large-scale power system monitoring more practical and scalable.
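One practical detail of such architectures is that the measurement matrix never needs to be transmitted: the meter and the cloud can regenerate it from a shared seed, which also lets the matrix double as a lightweight key. The sketch below is illustrative only (it assumes NumPy, and the seed handling and compression ratio are placeholders, not the authentication or encryption scheme of [145]); it shows meter-side compression and cloud-side regeneration of the same matrix.

```python
import numpy as np

def sensing_matrix(seed, m, n):
    """Regenerate an identical Bernoulli +/-1 measurement matrix from a shared seed."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

SEED, N, M = 20240115, 256, 64           # the shared seed acts as a lightweight key

# Smart meter (sensing layer): compress a 256-sample block before transmission.
x = np.abs(np.random.default_rng(7).standard_normal(N)).cumsum()   # stand-in load profile
y = sensing_matrix(SEED, M, N) @ x       # only the 64 compressed values leave the meter

# Cloud (processing layer): rebuild exactly the same matrix from the seed alone.
Phi_cloud = sensing_matrix(SEED, M, N)
assert np.array_equal(Phi_cloud, sensing_matrix(SEED, M, N))        # deterministic regeneration
# Sparse recovery of x from y then proceeds with any solver (e.g., the OMP or ADMM
# sketches earlier in this section), using a sparse basis suited to load profiles.
print(f"compressed {N} samples to {M} measurements (ratio {M / N:.2f})")
```

Keeping the seed secret is also the basis of the sensing-matrix-as-key idea revisited in Section 9.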

8. Case Study: Performance Analysis of Compressive Sensing in Data Recovery

This study explores the application of Compressive Sensing (CS) techniques for energy monitoring and signal reconstruction in power engineering, with a focus on data aggregation in smart grids through data compression to optimize grid resource utilization. Using the UK Domestic Appliance-Level Electricity (UK-DALE-2017) dataset [146], a public benchmark for energy monitoring and disaggregation research, the study evaluates the effectiveness of various sparse bases, measurement matrices, and compression ratios (CRs) in compressing and reconstructing active power signals sampled at a 6 s interval. A random signal segment from one day's power data was selected for analysis, with evaluations conducted across compression ratios ranging from 20% to 90% under both noise-free and noisy conditions (Gaussian noise at an SNR of 20 dB). The study examines reconstruction quality using key metrics such as Mean Absolute Error (MAE) and Integral Normalized Absolute Error (INAE). To ensure uniform analysis and manageable data processing, raw power data were segmented into 256-sample non-overlapping windows, enabling efficient and systematic analysis of the dataset's rich temporal information.

Sparse bases such as Wavelet, DCT, Hadamard, Hankel, and Toeplitz were employed to compress and reconstruct the signals, leveraging the sparsity inherent in power data for efficient representation. Measurement matrices such as Gaussian and Bernoulli random matrices project the sparse signals onto a lower-dimensional space. Orthogonal Matching Pursuit (OMP) was employed for signal reconstruction due to its lower computational overhead (O(kMN)) and suitability for real-time applications.

All analyses were performed using MATLAB 2024 on a PC with 16 GB RAM running a 64-bit Windows OS; the OMP algorithm executed efficiently on this hardware without significant processing delays, making it a feasible choice for power signal reconstruction. Table 9 and Table 10 compare the performance of various sparse transformation bases—Hadamard, Hankel, Toeplitz, DCT, and Wavelet—under Gaussian and Bernoulli measurement matrices. Table 9 presents the results for random data segments under varying compression ratios (CRs), whereas Table 10 provides the averaged performance over a month. Both tables consider scenarios with and without noise, evaluating MAE and INAE as metrics. The CS methodology was extended to the entire one-month dataset with CRs of 40–70% to evaluate its generalizability. Figure 5 shows the INAE for one month's dataset at CR = 50%.
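For readers who wish to prototype a comparable pipeline, the sketch below is given in Python rather than the MATLAB used in the study, with a synthetic 256-sample window standing in for UK-DALE data, an assumed sparsity level of 20 DCT coefficients, CR interpreted here as the fraction of measurements retained, and one common definition of INAE; none of these choices are claimed to match the study's exact settings. It compresses one window with a Gaussian matrix and reconstructs it with OMP, reporting MAE and INAE.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct_window(x, cr, sparsity=20, seed=0):
    """Compress one window with a Gaussian matrix and recover it via OMP."""
    n = x.size
    m = max(int(round(cr * n)), sparsity + 1)        # retained measurements
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
    Psi = idct(np.eye(n), axis=0, norm='ortho')      # DCT sparse basis
    y = Phi @ x                                      # compressed measurements
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(Phi @ Psi, y)
    return Psi @ omp.coef_

def mae(x, x_hat):
    return np.mean(np.abs(x - x_hat))

def inae(x, x_hat):
    # One common definition: integral of absolute error normalized by the signal's integral.
    return np.sum(np.abs(x - x_hat)) / np.sum(np.abs(x))

# Synthetic stand-in for a 256-sample window of 6-s active-power readings.
rng = np.random.default_rng(42)
t = np.arange(256)
x = 200 + 150 * (np.sin(2 * np.pi * t / 256) > 0.3) + 20 * rng.standard_normal(256)

for cr in (0.4, 0.5, 0.6):
    x_hat = reconstruct_window(x, cr)
    print(f"CR={cr:.0%}  MAE={mae(x, x_hat):7.2f}  INAE={inae(x, x_hat):.3f}")
```

Swapping the DCT basis for a Wavelet, Hadamard, Hankel, or Toeplitz basis, or the Gaussian matrix for a Bernoulli one, requires changing only the two matrix constructions, which is how the comparisons in Table 9 and Table 10 can be reproduced in principle.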
Analysis of Table 9 and Table 10, together with Figure 5 and Figure 6, reveals significant insights into the impact of compression ratio, measurement matrix, and sparse transformation basis on reconstruction quality, specifically in terms of MAE and INAE, under both noise-free and noisy conditions.

8.1. Effect of Compression Ratio (CR)

Figure 6 shows the MAE versus compression ratio plot for different sparse bases and measurement matrices. Under noise-free conditions, as CR increases (data retained decreases), MAE and INAE improve consistently across most sparse bases. This trend is more evident in Table 10, where results aggregated over a month showcase it. For instance, at CR = 40%, the Toeplitz basis achieves Gaussian INAE = 1.00 and MAE = 14.33, reflecting its ability to preserve data integrity at higher compression levels. At higher CRs (e.g., 80%), Wavelet and Toeplitz maintain low error values under noise-free conditions. Wavelet records Gaussian INAE = 1.31 and MAE = 14.67 even at lower CRs (e.g., 60%), highlighting its consistent performance.
In noisy conditions, at higher CRs (e.g., 80%), bases like Hankel exhibit significant error inflation, as observed in Table 9 (Gaussian INAE = 146.89, MAE = 195.20). However, Wavelet and Toeplitz show remarkable resilience under noisy conditions. For instance, at CR = 50% in Table 9, Wavelet achieves Gaussian INAE = 2.11 and MAE = 2.77, demonstrating its robustness to noise, even under high compression. Compression ratios between 40% and 60% offer the best trade-off between data compression and reconstruction accuracy.

8.2. Sparse Basis Performance

Toeplitz consistently outperforms other bases across all scenarios in terms of both MAE and INAE. Its stability under noisy conditions is evident in Table 10, where it achieves Gaussian INAE = 1.00 and MAE = 14.33 at CR = 40%. This reliability makes it ideal for applications demanding high compression and noise resilience.
Across both tables, Wavelet emerges as the most robust transformation basis, particularly in noise-free environments. In Table 10, it achieves low Gaussian INAE and MAE values across multiple CRs, such as INAE = 1.31 and MAE = 14.67 at CR = 60%. This underscores its adaptability to both high compression and noisy environments.
Hadamard shows good performance only in noise-free scenarios at higher CRs, such as CR = 80% in Table 10, where Gaussian INAE = 22.88. However, its sensitivity to noise becomes evident at lower CRs, with errors increasing significantly in Table 9, especially under noisy conditions.
Hankel performs moderately in noise-free conditions but is highly vulnerable to noise, as highlighted in Table 10. At CR = 60%, it records Gaussian INAE = 170.81 under noisy conditions, making it less suitable for robust applications.
DCT strikes a balance between robustness and performance across all CRs and conditions. For instance, in Table 10 at CR = 70%, it achieves Gaussian INAE = 2.73 and MAE = 26.80 under noisy conditions, making it a reliable choice for mixed environments.

8.3. Choice of Measurement Matrices

The choice of measurement matrix has a noticeable impact on performance.
Gaussian matrices consistently outperform Bernoulli matrices in noisy environments across both tables. In Table 9, at CR = 50%, Gaussian matrices paired with Wavelet achieve INAE = 2.11 and MAE = 2.77, whereas Bernoulli matrices result in slightly higher values, with INAE = 2.23 and MAE = 2.78. This trend highlights the superior noise-suppression capability of Gaussian matrices.
In noise-free conditions, the differences between Gaussian and Bernoulli matrices are less significant. For example, in Table 9, at CR = 40%, both matrices exhibit comparable trends across sparse bases like Toeplitz and Wavelet.

9. Conclusion and Emerging Research Opportunities in Compressive Sensing

Compressive Sensing (CS) has gained significant attention in power engineering for its ability to efficiently acquire and reconstruct signals using fewer measurements, thereby minimizing data transmission overhead and reducing storage requirements while preserving critical information. However, existing research is often fragmented across different applications, making it challenging to identify the full scope of CS’s impact. This review consolidates recent advancements, providing a structured overview of CS methodologies and their applications in power systems. By critically analyzing measurement matrices, sparse bases, and recovery algorithms, this paper highlights the key benefits and challenges of CS, making it a valuable resource for researchers and practitioners in the field.
In power engineering, CS has shown effectiveness in applications like Advanced Metering Infrastructure (AMI), Wide-Area Measurement Systems (WAMSs), state estimation (SE), fault detection (FD), Harmonic Source Identification (HSI), power quality detection (PQD) and condition monitoring (CM), where it addresses issues of data sparsity, real-time constraints, and resource limitations.
The effectiveness of measurement matrices and sparse bases for data recovery was evaluated using the UK-DALE dataset. Results indicate that for robust recovery in noisy environments, Gaussian matrices paired with transformation bases like Wavelet or Toeplitz perform well. Compression ratios between 40% and 60% provide the best balance even in noisy conditions, achieving significant data compression while maintaining low errors, making this approach suitable for temporal data aggregation and compression in smart grids to optimize resource utilization. The Toeplitz and Wavelet bases demonstrate superior performance, maintaining low error rates across both noise-free and noisy conditions, making them suitable for high-compression, high-accuracy applications.
Recent advancements in CS research have primarily focused on enhancing measurement matrix design, improving sparse recovery algorithms, and integrating CS with emerging technologies such as deep learning and cloud computing. Studies are increasingly exploring hybrid CS models that combine adaptive sensing with AI-driven reconstruction techniques to enhance accuracy and efficiency in large-scale power systems. Additionally, energy-efficient CS frameworks optimized for edge devices and IoT networks are becoming a focal point in smart grid applications. However, despite these achievements, real-world deployment of CS remains challenging due to issues such as signal correlation, scalability constraints, and high computational demands for large datasets. These challenges highlight the need for further advancements in both theoretical models and practical implementations to fully realize the potential of CS in power engineering.
Future research and implementation of CS can focus on the following specific applications and advancements, leveraging its capabilities across diverse domains:
i. Measurement Matrix Design and Optimization: Adaptive and weighted measurement strategies can improve acquisition efficiency by concentrating measurements on the most informative aspects of the signal. Utilizing machine learning techniques, such as genetic algorithms, to design measurement matrices offers adaptive solutions tailored to dynamic signal behaviors [147,148].
ii. Adaptive and Optimal Basis Selection: Developing algorithms that dynamically select the optimal sparsity basis is crucial for adapting to fluctuating system conditions. Data-driven and tensor-based methods can tailor the sparsity basis by analyzing inherent system characteristics, ensuring efficient signal representation and reconstruction. Incorporating advanced preprocessing techniques, such as noise filtering and decorrelation, into CS workflows can significantly enhance signal quality without compromising essential features required for accurate analysis and reconstruction [15,149]. The combination of CS with techniques like the Discrete Cosine Transform (DCT) and Amended Intrinsic Chirp Separation (AIChirS) to precisely reconstruct overlapping non-stationary signals can be explored [150].
iii. Recovery Algorithms: Research efforts should continue to refine recovery algorithms, striving for improvements in speed, efficiency, robustness, and the handling of structured and non-sparse signals.
iv. Scalability and Energy Efficiency in Large-Scale Systems: As the number of connected devices increases, scalability becomes paramount [145,148]. CS can enhance data storage efficiency in large-scale frameworks like China's UPIoT and the emerging energy internet [150]. By effectively compressing data, CS reduces storage requirements and facilitates seamless data management. Shifting computational demands from resource-constrained IoT devices to robust gateways can lead to significant energy savings. In smart grids, joint sparse recovery techniques mitigate communication network burdens by simultaneously recovering multiple sparse vectors, thereby optimizing energy consumption [151,152]. Designing lightweight CS solutions optimized for resource-constrained devices, such as IoT nodes and smart sensors, is essential [153,154,155]. Employing fixed-point arithmetic on FPGAs and optimizing GPU kernels can achieve a balance between performance and power consumption, ensuring efficient CS operations on edge devices [156]. In asset monitoring and vegetation management, employing CS-based image processing with fewer UAV sensors minimizes energy usage and extends the operational lifespan of deployed devices, contributing to sustainable and cost-effective monitoring solutions [157]. Block Compressed Sensing (BCS), which segments large datasets into smaller blocks, enhances processing speed and system efficiency, making it feasible for large-scale power systems.
v. Spatio-Temporal Models: Developing hierarchical CS frameworks by integrating Distributed Compressive Sensing (DCS) and Dynamic Distributed Compressive Sensing (DDCS) can improve data handling from complex sources such as multi-bus grids, UAV networks, and smart cities [149,158]. These models enhance data reconstruction accuracy and efficiency in large-scale power systems. Investigating spatio-temporal CS techniques can enhance large-scale monitoring systems by exploiting spatial correlations to reduce data redundancy while maintaining high reconstruction accuracy in geographically distributed networks [149].
vi. CS Integration with Cloud and Edge Computing: Integrating compressive data gathering with link scheduling can further reduce energy consumption and network traffic in applications like Advanced Metering Infrastructure (AMI) and smart grids by focusing on data reduction and security. Hybrid cloud-edge architectures enhance the scalability and responsiveness of CS applications in power engineering, balancing computational loads between local devices and centralized cloud resources.
vii. CS Fusion with Deep Neural Networks: Combining CS with deep learning can create adaptive and intelligent systems capable of simultaneous classification, forecasting, and reconstruction [87,88]. Such systems hold significant promise for applications like fault detection (FD) and Power Quality Detection (PQD), leveraging the strengths of both CS and deep learning for more robust and accurate monitoring solutions.
viii. Security and Privacy Enhancements: Advancing CS-based encryption methods, where sensing matrices also serve as encryption keys, can enhance data security in critical applications such as AMI and sensitive fields like medical data systems [159,160]. This dual-purpose use of sensing matrices offers a novel approach to securing transmitted data without additional encryption overhead. Expanding federated CS frameworks to process sensitive data locally while incorporating robust security protocols, such as Quantum Key Distribution (QKD), can safeguard distributed systems against sophisticated cyber threats [161].
ix. Quantum Computing Integration: Employing quantum algorithms, such as the Quantum Fourier Transform (QFT) and the Harrow–Hassidim–Lloyd (HHL) algorithm, can significantly accelerate sparse recovery and matrix operations [162]. This is particularly promising for real-time grid monitoring and renewable energy forecasting in resource-intensive applications. Exploring parallel computing, distributed algorithms, and hardware acceleration can address the computational demands of CS-based state estimations in expansive grids.

Author Contributions

Conceptualization: L.R.C.; Methodology: L.R.C., I.K. and H.S.; Software: L.R.C.; Validation: L.R.C., I.K. and M.G.N.; Formal Analysis: L.R.C., I.K. and H.S.; Investigation: I.K. and M.G.N.; Resources: H.S. and P.K.K.; Writing—Original Draft Preparation: L.R.C.; Writing—Review & Editing: M.G.N., H.S. and P.K.K.; Supervision: M.G.N. and I.K.; Project Administration: M.G.N. and I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability

This study used a publicly available dataset, as referenced in [146].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wen, L.; Zhou, K.; Yang, S.; Li, L. Compression of Smart Meter Big Data: A Survey. Renew. Sustain. Energy Rev. 2018, 91, 59–69. [Google Scholar] [CrossRef]
  2. Li, H.; Mao, R.; Lai, L.; Qiu, R.C. Compressed Meter Reading for Delay-Sensitive and Secure Load Report in Smart Grid. In Proceedings of the First IEEE International Conference on Smart Grid Communications, Gaithersburg, MD, USA, 4–6 October 2010; pp. 114–119. [Google Scholar] [CrossRef]
  3. Louie, R.H.Y.; Hardjawana, W.; Li, Y.; Vucetic, B. Distributed Multiple-Access for Smart Grid Home Area Networks: Compressed Sensing with Multiple Antennas. IEEE Trans. Smart Grid 2014, 5, 2938–2946. [Google Scholar] [CrossRef]
  4. Candès, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  5. Donoho, D.L. Compressed Sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  6. Candès, E.J. The Restricted Isometry Property and Its Implications for Compressed Sensing. Compt. Rendus Math. 2008, 346, 589–592. [Google Scholar] [CrossRef]
  7. Arie, R.; Brand, A.; Engelberg, S. Compressive Sensing and Sub-Nyquist Sampling. IEEE Instrum. Meas. Mag. 2020, 23, 94–101. [Google Scholar] [CrossRef]
  8. Machidon, A.L.; Pejović, V. Deep Learning for Compressive Sensing: A Ubiquitous Systems Perspective. Artif. Intell. Rev. 2023, 56, 3619–3658. [Google Scholar] [CrossRef]
  9. Chen, X.; Zhang, J.; Wang, X.; Han, G.; Xie, J. A Sub-Nyquist Rate Compressive Sensing Data Acquisition Front-End. IEEE J. Emerg. Sel. Top. Circuits Syst. 2012, 2, 542–551. [Google Scholar] [CrossRef]
  10. Trakimas, M.; Hancock, T.; Sonkusale, S. A Compressed Sensing Analog-to-Information Converter with Edge-Triggered SAR ADC Core. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS), Seoul, Republic of Korea, 20–23 May 2012; pp. 3162–3165. [Google Scholar] [CrossRef]
  11. Lee, Y.; Hwang, E.; Choi, J. Compressive Sensing-Based Power Signal Compression in Advanced Metering Infrastructure. In Proceedings of the 23rd Asia-Pacific Conference on Communications (APCC), Perth, WA, Australia, 11–13 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
  12. Lee, Y.; Hwang, E.; Choi, J. A Unified Approach for Compression and Authentication of Smart Meter Reading in AMI. IEEE Access 2019, 7, 34383–34394. [Google Scholar] [CrossRef]
  13. Chowdhury, M.R.; Tripathi, S.; De, S. Adaptive Multivariate Data Compression in Smart Metering Internet of Things. IEEE Trans. Ind. Inform. 2021, 17, 1287–1297. [Google Scholar] [CrossRef]
  14. Lan, L.T.; Le, L.B. Joint Data Compression and MAC Protocol Design for Smart Grids with Renewable Energy. Wirel. Commun. Mob. Comput. 2016, 16, 2590–2604. [Google Scholar]
  15. Alam, S.M.S.; Natarajan, B.; Pahwa, A. Distribution Grid State Estimation from Compressed Measurements. IEEE Trans. Smart Grid 2014, 5, 1631–1642. [Google Scholar] [CrossRef]
  16. Babakmehr, M.; Simões, M.G.; Wakin, M.B.; Harirchi, F. Compressive Sensing-Based Topology Identification for Smart Grids. IEEE Trans. Ind. Inform. 2016, 12, 532–543. [Google Scholar] [CrossRef]
  17. Majidi, M.; Etezadi-Amoli, M.; Fadali, M.S. A Novel Method for Single and Simultaneous Fault Location in Distribution Networks. IEEE Trans. Power Syst. 2015, 30, 3368–3376. [Google Scholar] [CrossRef]
  18. Majidi, M.; Arabali, A.; Etezadi-Amoli, M. Fault Location in Distribution Networks by Compressive Sensing. IEEE Trans. Power Deliv. 2015, 30, 1761–1769. [Google Scholar] [CrossRef]
  19. Peng, Y.; Qiao, W.; Qu, L. Compressive Sensing-Based Missing-Data-Tolerant Fault Detection for Remote Condition Monitoring of Wind Turbines. IEEE Trans. Ind. Electron. 2022, 69, 1937–1947. [Google Scholar] [CrossRef]
  20. Zhang, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A Survey of Sparse Representation: Algorithms and Applications. IEEE Access 2015, 3, 490–530. [Google Scholar] [CrossRef]
  21. Rani, M.; Dhok, S.B.; Deshmukh, R.B. A Systematic Review of Compressive Sensing: Concepts, Implementations, and Applications. IEEE Access 2018, 6, 4875–4894. [Google Scholar] [CrossRef]
  22. Qaisar, S.; Bilal, R.M.; Iqbal, W.; Naureen, M.; Lee, S. Compressive Sensing: From Theory to Applications, a Survey. J. Commun. Netw. 2013, 15, 443–456. [Google Scholar] [CrossRef]
  23. Lal, B.; Gravina, R.; Spagnolo, F.; Corsonello, P. Compressed Sensing Approach for Physiological Signals: A Review. IEEE Sens. J. 2023, 23, 5513–5534. [Google Scholar] [CrossRef]
  24. Djelouat, H.; Amira, A.; Bensaali, F. Compressive Sensing-Based IoT Applications: A Review. J. Sens. Actuator Netw. 2018, 7, 45. [Google Scholar] [CrossRef]
  25. Hosny, S.; El-Kharashi, M.W.; Abdel-Hamid, A.T. Survey on Compressed Sensing over the Past Two Decades. Memories–Mater. Devices Circuits Syst. 2023, 4, 100060. [Google Scholar] [CrossRef]
  26. Orović, I.; Papić, V.; Ioana, C.; Li, X.; Stanković, S. Compressive Sensing in Signal Processing: Algorithms and Transform Domain Formulations. Math. Probl. Eng. 2016, 2016, 7616393. [Google Scholar] [CrossRef]
  27. Erkoc, M.E.; Karaboga, N. A Comparative Study of Multi-Objective Optimization Algorithms for Sparse Signal Reconstruction. Artif. Intell. Rev. 2022, 55, 3153–3181. [Google Scholar] [CrossRef]
  28. Crespo Marques, E.; Maciel, N.; Naviner, L.; Cai, H.; Yang, J. A Review of Sparse Recovery Algorithms. IEEE Access 2019, 7, 1300–1322. [Google Scholar] [CrossRef]
  29. Sharma, S.K.; Lagunas, E.; Chatzinotas, S.; Ottersten, B. Application of Compressive Sensing in Cognitive Radio Communications: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 1838–1860. [Google Scholar] [CrossRef]
  30. Draganic, A.; Orovic, I.; Stankovic, S. On Some Common Compressive Sensing Recovery Algorithms and Applications—Review Paper. arXiv 2017, arXiv:1705.05216. [Google Scholar]
  31. Zhang, Y.; Xiang, Y.; Zhang, L.Y.; Rong, Y.; Guo, S. Secure Wireless Communications Based on Compressive Sensing: A Survey. IEEE Commun. Surv. Tutor. 2019, 21, 1093–1111. [Google Scholar] [CrossRef]
  32. Nguyen, T.L.N.; Shin, Y. Deterministic Sensing Matrices in Compressive Sensing: A Survey. Sci. World J. 2013, 2013, 192795. [Google Scholar] [CrossRef]
  33. Yang, J.; Wang, X.; Yin, W.; Zhang, Y.; Sun, Q. Video Compressive Sensing Using Gaussian Mixture Models. IEEE Trans. Image Process. 2014, 23, 4863–4878. [Google Scholar] [CrossRef]
  34. Starck, J.L.; Fadili, J.; Murtagh, F. The Undecimated Wavelet Decomposition and Its Reconstruction. IEEE Trans. Image Process. 2007, 16, 297–309. [Google Scholar] [CrossRef] [PubMed]
  35. Sejdić, E.; Orović, I.; Stanković, S. Compressive Sensing Meets Time–Frequency: An Overview of Recent Advances in Time–Frequency Processing of Sparse Signals. Digit. Signal Process. 2018, 77, 22–35. [Google Scholar] [CrossRef] [PubMed]
  36. Starck, J.; Elad, M.; Donoho, D. Redundant Multiscale Transforms and Their Application for Morphological Component Separation. Adv. Imaging Electron Phys. 2003, 132, 287–348. [Google Scholar] [CrossRef]
  37. Namburu, S.S.G.; Vasudevan, N.; Karthik, V.M.S.; Madhu, M.N.; Hareesh, V. Compressive Sensing and Orthogonal Matching Pursuit-Based Approach for Image Compression and Reconstruction. In Intelligent Computing and Optimization, ICO 2023; Vasant, P., Ed.; Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2023; Volume 729, pp. 73–82. [Google Scholar] [CrossRef]
  38. Joshi, A.; Das, L.; Natarajan, B.; Srinivasan, B. A Framework for Efficient Information Aggregation in Smart Grid. IEEE Trans. Ind. Inform. 2019, 15, 2233–2243. [Google Scholar] [CrossRef]
  39. Ruiz, M.; Montalvo, I. Electrical Faults Signals Restoring Based on Compressed Sensing Techniques. Energies 2020, 13, 2121. [Google Scholar] [CrossRef]
  40. Gupta, A.; Rao, K.R. A Fast Recursive Algorithm for the Discrete Sine Transform. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 553–557. [Google Scholar] [CrossRef]
  41. Davies, M.E.; Daudet, L. Sparse Audio Representations Using the MCLT. Signal Process. 2006, 86, 457–470. [Google Scholar] [CrossRef]
  42. Feichtinger, H.G.; Strohmer, T. (Eds.) Gabor Analysis and Algorithms: Theory and Applications; Birkhäuser Boston, Inc.: Boston, MA, USA, 1998. [Google Scholar]
  43. Abhishek, S.; Veni, S.; Narayanankutty, K.A. Biorthogonal Wavelet Filters for Compressed Sensing ECG Reconstruction. Biomed. Signal Process. Control 2019, 47, 183–195. [Google Scholar] [CrossRef]
  44. Ramya, K.; Bolisetti, V.; Nandan, D.; Kumar, S. Compressive Sensing and Contourlet Transform Applications in Speech Signal. In ICCCE 2020; Kumar, A., Mozar, S., Eds.; Lecture Notes in Electrical Engineering; Springer: Singapore, 2021; Volume 698. [Google Scholar] [CrossRef]
  45. Eslahi, N.; Aghagolzadeh, A. Compressive Sensing Image Restoration Using Adaptive Curvelet Thresholding and Nonlocal Sparse Regularization. IEEE Trans. Image Process. 2016, 25, 3126–3140. [Google Scholar] [CrossRef]
  46. Joshi, M.S.; Manthalhkar, R.R.; Joshi, Y.V. Color Image Compression Using Wavelet and Ridgelet Transform. In Proceedings of the Seventh International Conference on Information Technology: New Generations (ITNG), Las Vegas, NV, USA, 12–14 April 2010; pp. 1318–1321. [Google Scholar] [CrossRef]
  47. Ma, J.; März, M.; Funk, S.; Schulz-Menger, J.; Kutyniok, G.; Schaeffter, T.; Kolbitsch, C. Shearlet-Based Compressed Sensing for Fast 3D Cardiac MR Imaging Using Iterative Reweighting. Phys. Med. Biol. 2018, 63, 235004. [Google Scholar] [CrossRef]
  48. Kerdjidj, O.; Ghanem, K.; Amira, A.; Harizi, F.; Chouireb, F. Concatenation of Dictionaries for Recovery of ECG Signals Using Compressed Sensing Techniques. In Proceedings of the 26th International Conference on Microelectronics (ICM), Doha, Qatar, 14–17 December 2014; pp. 112–115. [Google Scholar] [CrossRef]
  49. Saideni, W.; Helbert, D.; Courreges, F.; Cances, J.-P. An Overview on Deep Learning Techniques for Video Compressive Sensing. Appl. Sci. 2022, 12, 2734. [Google Scholar] [CrossRef]
  50. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1828–1837. [Google Scholar]
  51. Inga-Ortega, J.; Inga-Ortega, E.; Gómez, C.; Hincapié, R. Electrical Load Curve Reconstruction Required for Demand Response Using Compressed Sensing Techniques. In Proceedings of the IEEE PES Innovative Smart Grid Technologies Conference–Latin America (ISGT Latin America), Quito, Ecuador, 20–22 September 2017; pp. 1–6. [Google Scholar] [CrossRef]
  52. Mairal, J.; Sapiro, G.; Elad, M. Learning Multiscale Sparse Representations for Image and Video Restoration. Multiscale Model. Simul. 2008, 7, 214–241. [Google Scholar] [CrossRef]
  53. Xu, G.; Zhang, B.; Yu, H.; Chen, J.; Xing, M.; Hong, W. Sparse Synthetic Aperture Radar Imaging from Compressed Sensing and Machine Learning: Theories, Applications, and Trends. IEEE Geosci. Remote Sens. Mag. 2022, 10, 32–69. [Google Scholar] [CrossRef]
  54. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic Decomposition by Basis Pursuit. SIAM J. Sci. Comput. 1999, 20, 33–61. [Google Scholar] [CrossRef]
  55. Candès, E.J.; Tao, T. The Dantzig Selector: Statistical Estimation When p is Much Larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar]
  56. Donoho, D.L.; Tsaig, Y. Fast Solution of ℓ1-Norm Minimization Problems When the Solution May Be Sparse. IEEE Trans. Inf. Theory 2008, 54, 4789–4812. [Google Scholar] [CrossRef]
  57. Gilbert, A.; Strauss, M.; Tropp, J.; Vershynin, R. Algorithmic Linear Dimension Reduction in the ℓ1 Norm for Sparse Vectors. arXiv 2006, arXiv:cs/0608079. [Google Scholar]
  58. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least Angle Regression. Ann. Stat. 2004, 32, 407–499. [Google Scholar] [CrossRef]
  59. Chartrand, R.; Staneva, V. Restricted Isometry Properties and Nonconvex Compressive Sensing. Inverse Probl. 2008, 24, 35020. [Google Scholar] [CrossRef]
  60. Cai, J.-F.; Osher, S.; Shen, Z. Linearized Bregman Iterations for Compressed Sensing. Math. Comput. 2009, 78, 1515–1536. [Google Scholar] [CrossRef]
  61. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  62. Blumensath, T.; Davies, M.E. Iterative Hard Thresholding for Compressed Sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  63. Donoho, D.L.; Maleki, A.; Montanari, A. Message-Passing Algorithms for Compressed Sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919. [Google Scholar] [CrossRef]
  64. Mallat, S.G.; Zhang, Z. Matching Pursuits with Time-Frequency Dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  65. Blumensath, T.; Davies, M.E. Gradient Pursuits. IEEE Trans. Signal Process. 2008, 56, 2370–2382. [Google Scholar] [CrossRef]
  66. Needell, D.; Vershynin, R. Uniform Uncertainty Principle and Signal Recovery via Regularized Orthogonal Matching Pursuit. Found. Comput. Math. 2009, 9, 317–334. [Google Scholar] [CrossRef]
  67. Needell, D.; Tropp, J.A. CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef]
  68. Dai, W.; Milenkovic, O. Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  69. Indyk, P.; Ruzic, M. Near-Optimal Sparse Recovery in the ℓ1 Norm. In Proceedings of the 49th Annual IEEE Symposium on Foundations of Computer Science (FOCS), Philadelphia, PA, USA, 25–28 October 2008; pp. 199–207. [Google Scholar] [CrossRef]
  70. Berinde, R.; Indyk, P.; Ruzic, M. Practical Near-Optimal Sparse Recovery in the ℓ1 Norm. In Proceedings of the Forty-Sixth Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 23–26 September 2008; pp. 198–205. [Google Scholar]
  71. Godsill, S.; Cemgil, A.; Févotte, C.; Wolfe, P. Bayesian Computational Methods for Sparse Audio and Music Processing. In Proceedings of the Fifteenth European Signal Processing Conference (EUSIPCO), Poznan, Poland, 3–7 September 2007. [Google Scholar]
  72. Huang, Y.; Beck, J.L.; Wu, S.; Li, H. Bayesian Compressive Sensing for Approximately Sparse Signals and Application to Structural Health Monitoring Signals for Data Loss Recovery. Probab. Eng. Mech. 2016, 46, 62–79. [Google Scholar] [CrossRef]
  73. Wipf, D.P.; Rao, B.D. Sparse Bayesian Learning for Basis Selection. IEEE Trans. Signal Process. 2004, 52, 2153–2164. [Google Scholar] [CrossRef]
  74. Ghosh, A.K.; Chakraborty, A. Compressive Sampling Using EM Algorithm. arXiv 2014, arXiv:1405.5311. [Google Scholar]
  75. Cormode, G. Sketch Techniques for Approximate Query Processing. In Foundations and Trends in Databases; Now Publishers: Breda, The Netherlands, 2011. [Google Scholar]
  76. Sukumaran, A.N.; Sankararajan, R.; Rajendiran, K. Video Compressed Sensing Framework for Wireless Multimedia Sensor Networks Using a Combination of Multiple Matrices. Comput. Electr. Eng. 2015, 44, 51–66. [Google Scholar] [CrossRef]
  77. Gregor, K.; LeCun, Y. Learning Fast Approximations of Sparse Coding. In Proceedings of the 27th International Conference on Machine Learning (ICML), Haifa, Israel, 21–24 June 2010; pp. 399–406. [Google Scholar]
  78. Vitaladevuni, S.N.; Natarajan, P.; Prasad, R. Efficient Orthogonal Matching Pursuit Using Sparse Random Projections for Scene and Video Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2312–2319. [Google Scholar]
  79. Ito, D.; Takabe, S.; Wadayama, T. Trainable ISTA for Sparse Signal Recovery. IEEE Trans. Signal Process. 2019, 67, 3113–3125. [Google Scholar] [CrossRef]
  80. Metzler, C.; Mousavi, A.; Baraniuk, R. Learned D-AMP: Principled Neural Network-Based Compressive Image Recovery. Adv. Neural Inf. Process. Syst. 2017, 30, 1772–1783. [Google Scholar]
  81. Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 449–458. [Google Scholar]
  82. Yang, Y.; Sun, J.; Li, H.; Xu, Z. ADMM-CSNet: A Deep Learning Approach for Image Compressive Sensing. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 521–538. [Google Scholar] [CrossRef]
  83. Salahdine, F.; Ghribi, E.; Kaabouch, N. Metrics for Evaluating the Efficiency of Compressive Sensing Techniques. In Proceedings of the International Conference on Information Networking (ICOIN), Barcelona, Spain, 7–10 January 2020; pp. 562–567. [Google Scholar]
  84. Seema, P.N.; Nair, M.G. The Key Modules Involved in the Evolution of an Effective Instrumentation and Communication Network in Smart Grids: A Review. Smart Sci. 2023, 11, 519–537. [Google Scholar] [CrossRef]
  85. Energy Efficiency Services Limited (EESL). Newsletter-Edition 37. February 2022. Available online: https://eeslindia.org/wp-content/uploads/2022/03/Newsletter-Feb-2022_Main.pdf (accessed on 14 January 2025).
  86. Sun, Y.; Cui, C.; Lu, J.; Wang, Q. Data Compression and Reconstruction of Smart Grid Customers Based on Compressed Sensing Theory. Int. J. Electr. Power Energy Syst. 2016, 83, 21–25. [Google Scholar] [CrossRef]
  87. Singh, S.; Majumdar, A. Multi-Label Deep Blind Compressed Sensing for Low-Frequency Non-Intrusive Load Monitoring. IEEE Trans. Smart Grid 2022, 13, 4–7. [Google Scholar] [CrossRef]
  88. Tascikaraoglu, A.; Sanandaji, B.M. Short-Term Residential Electric Load Forecasting: A Compressive Spatio-Temporal Approach. Energy Build. 2016, 111, 380–392. [Google Scholar] [CrossRef]
  89. Sanandaji, B.M.; Tascikaraoglu, A.; Poolla, K.; Varaiya, P. Low-Dimensional Models in Spatio-Temporal Wind Speed Forecasting. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; pp. 4485–4490. [Google Scholar] [CrossRef]
  90. Karimi, H.S.; Natarajan, B. Recursive Dynamic Compressive Sensing in Smart Distribution Systems. In Proceedings of the 2020 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 17–20 February 2020; pp. 1–5. [Google Scholar] [CrossRef]
  91. Das, S.; Sidhu, T. Application of Compressive Sampling in Computer-Based Monitoring of Power Systems. Adv. Comput. Eng. 2014, 2014, 524740. [Google Scholar] [CrossRef]
  92. Das, S.; Sidhu, T.S. Reconstruction of Phasor Dynamics at Higher Sampling Rates Using Synchrophasors Reported at Sub-Nyquist Rate. In Proceedings of the 2013 IEEE PES Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 24–27 February 2013; pp. 1–6. [Google Scholar] [CrossRef]
  93. Das, S.; Singh Sidhu, T. Application of Compressive Sampling in Synchrophasor Data Communication in WAMS. IEEE Trans. Ind. Inform. 2014, 10, 450–460. [Google Scholar] [CrossRef]
  94. Das, S. Sub-Nyquist Rate ADC Sampling in Digital Relays and PMUs: Advantages and Challenges. In Proceedings of the 2016 IEEE 6th International Conference on Power Systems (ICPS), New Delhi, India, 4–6 March 2016; pp. 1–6. [Google Scholar] [CrossRef]
  95. Lee, G.; Kim, D.-I.; Kim, S.H.; Shin, Y.-J. Multiscale PMU Data Compression via Density-Based WAMS Clustering Analysis. Energies 2019, 12, 617. [Google Scholar] [CrossRef]
  96. Masoum, A.; Meratnia, N.; Havinga, P.J.M. Coalition Formation-Based Compressive Sensing in Wireless Sensor Networks. Sensors 2018, 18, 2331. [Google Scholar] [CrossRef]
  97. Madbhavi, R.; Srinivasan, B. Enhancing Performance of Compressive Sensing-Based State Estimators Using Dictionary Learning. In Proceedings of the 2022 IEEE International Conference on Power Systems Technology (POWERCON), Kuala Lumpur, Malaysia, 12–14 September 2022; pp. 1–6. [Google Scholar] [CrossRef]
  98. Babakmehr, M.; Majidi, M.; Simões, M.G. Compressive Sensing for Power System Data Analysis. In Big Data Application in Power Systems; Arghandeh, R., Zhou, Y., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 159–178. [Google Scholar] [CrossRef]
  99. Majidi, M.; Etezadi-Amoli, M.; Livani, H. Distribution System State Estimation Using Compressive Sensing. Int. J. Electr. Power Energy Syst. 2017, 88, 175–186. [Google Scholar] [CrossRef]
  100. Li, P.; Su, H.; Wang, C.; Liu, Z.; Wu, J. PMU-Based Estimation of Voltage to Power Sensitivity for Distribution Networks Considering the Sparsity of Jacobian Matrix. IEEE Access 2018, 6, 31307–31316. [Google Scholar] [CrossRef]
  101. Rout, B.; Natarajan, B. Impact of Cyber Attacks on Distributed Compressive Sensing-Based State Estimation in Power Distribution Grids. Int. J. Electr. Power Energy Syst. 2022, 142, 108295. [Google Scholar] [CrossRef]
  102. Gokul Krishna, N.; Raj, J.; Chandran, L.R. Transmission Line Monitoring and Protection with ANN-Aided Fault Detection, Classification, and Location. In Proceedings of the 2021 2nd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 7–9 October 2021; pp. 883–889. [Google Scholar] [CrossRef]
  103. Rozenberg, I.; Beck, Y.; Eldar, Y.C.; Levron, Y. Sparse Estimation of Faults by Compressed Sensing with Structural Constraints. IEEE Trans. Power Syst. 2018, 33, 5935–5944. [Google Scholar] [CrossRef]
  104. Jiang, K.; Wang, H.; Shahidehpour, M.; He, B. Block-Sparse Bayesian Learning Method for Fault Location in Active Distribution Networks with Limited Synchronized Measurements. IEEE Trans. Power Syst. 2021, 36, 3189–3203. [Google Scholar] [CrossRef]
  105. Jia, K.; Yang, B.; Bi, T.; Zheng, L. An Improved Sparse-Measurement-Based Fault Location Technology for Distribution Networks. IEEE Trans. Ind. Inform. 2021, 17, 1712–1720. [Google Scholar] [CrossRef]
  106. Yang, F.; Tan, J.; Song, J.; Han, Z. Block-Wise Compressive Sensing-Based Multiple Line Outage Detection for Smart Grid. IEEE Access 2018, 6, 50984–50993. [Google Scholar] [CrossRef]
  107. Wang, H.; Huang, C.; Yu, H.; Zhang, J.; Wei, F. Method for Fault Location in a Low-Resistance Grounded Distribution Network Based on Multi-Source Information Fusion. Int. J. Electr. Power Energy Syst. 2021, 125, 106384. [Google Scholar] [CrossRef]
  108. Babakmehr, M.; Harirchi, F.; Al-Durra, A.; Muyeen, S.M.; Simões, M.G. Exploiting Compressive System Identification for Multiple Line Outage Detection in Smart Grids. In Proceedings of the 2018 IEEE Industry Applications Society Annual Meeting (IAS), Portland, OR, USA, 23–27 September 2018; pp. 1–8. [Google Scholar] [CrossRef]
  109. Babakmehr, M.; Simões, M.G.; Al-Durra, A.; Harirchi, F.; Han, Q. Application of Compressive Sensing for Distributed and Structured Power Line Outage Detection in Smart Grids. In Proceedings of the 2015 American Control Conference (ACC), Chicago, IL, USA, 1–3 July 2015; pp. 3682–3689. [Google Scholar] [CrossRef]
  110. Babakmehr, M.; Harirchi, F.; Al-Durra, A.; Muyeen, S.M.; Simões, M.G. Compressive System Identification for Multiple Line Outage Detection in Smart Grids. IEEE Trans. Ind. Appl. 2019, 55, 4462–4473. [Google Scholar] [CrossRef]
  111. Huang, K.; Xiang, Z.; Deng, W.; Tan, X.; Yang, C. Reweighted Compressed Sensing-Based Smart Grids Topology Reconstruction with Application to Identification of Power Line Outage. IEEE Syst. J. 2020, 14, 4329–4339. [Google Scholar] [CrossRef]
  112. Ding, L.; Nie, S.; Li, W.; Hu, P.; Liu, F. Multiple Line Outage Detection in Power Systems by Sparse Recovery Using Transient Data. IEEE Trans. Smart Grid 2021, 12, 3448–3457. [Google Scholar] [CrossRef]
  113. Li, W.; Liu, Z.-W.; Yao, W.; Yu, Y. Multiple Line Outage Detection for Power Systems Based on Binary Matching Pursuit. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 2999–3003. [Google Scholar] [CrossRef]
  114. Wang, X.; Yang, B.; Wang, Z.; Liu, Q.; Chen, C.; Guan, X. A Compressed Sensing and CNN-Based Method for Fault Diagnosis of Photovoltaic Inverters in Edge Computing Scenarios. IET Renew. Power Gener. 2022, 16, 1434–1444. [Google Scholar] [CrossRef]
  115. Cheng, L.; Wu, Z.; Duan, R.; Dong, K. Adaptive Compressive Sensing and Machine Learning for Power System Fault Classification. In Proceedings of the 2020 SoutheastCon, Raleigh, NC, USA, 28–29 March 2020; pp. 1–7. [Google Scholar] [CrossRef]
  116. Taheri, B.; Sedighizadeh, M. Detection of Power Swing and Prevention of Mal-Operation of Distance Relay Using Compressed Sensing Theory. IET Gener. Transm. Distrib. 2020, 14, 5558–5570. [Google Scholar] [CrossRef]
  117. Ghosh, R.; Chatterjee, B.; Chakravor, S. A Low-Complexity Method Based on Compressed Sensing for Long-Term Field Measurement of Insulator Leakage Current. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 596–604. [Google Scholar] [CrossRef]
  118. Yang, F.; Sheng, G.; Xu, Y.; Hou, H.; Qian, Y.; Jiang, X. Partial Discharge Pattern Recognition of XLPE Cables at DC Voltage Based on the Compressed Sensing Theory. IEEE Trans. Dielectr. Electr. Insul. 2017, 24, 2977–2985. [Google Scholar] [CrossRef]
  119. Li, Z.; Luo, L.; Liu, Y.; Sheng, G.; Jiang, X. UHF Partial Discharge Localization Algorithm Based on Compressed Sensing. IEEE Trans. Dielectr. Electr. Insul. 2018, 25, 21–29. [Google Scholar] [CrossRef]
  120. Carta, D.; Muscas, C.; Pegoraro, P.A.; Sulis, S. Harmonics Detector in Distribution Systems Based on Compressive Sensing. In Proceedings of the 2017 IEEE International Workshop on Applied Measurements for Power Systems (AMPS), Liverpool, UK, 20–22 September 2017; pp. 1–5. [Google Scholar] [CrossRef]
  121. Carta, D.; Muscas, C.; Pegoraro, P.A.; Sulis, S. Identification and Estimation of Harmonic Sources Based on Compressive Sensing. IEEE Trans. Instrum. Meas. 2019, 68, 95–104. [Google Scholar] [CrossRef]
  122. Carta, D.; Muscas, C.; Pegoraro, P.A.; Solinas, A.V.; Sulis, S. Impact of Measurement Uncertainties on Compressive Sensing-Based Harmonic Source Estimation Algorithms. In Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, 25–28 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  123. Carta, D.; Muscas, C.; Pegoraro, P.A.; Solinas, A.V.; Sulis, S. Compressive Sensing-Based Harmonic Sources Identification in Smart Grids. IEEE Trans. Instrum. Meas. 2021, 70, 1–10. [Google Scholar] [CrossRef]
  124. Amaya, L.; Inga, E. Compressed Sensing Technique for the Localization of Harmonic Distortions in Electrical Power Systems. Sensors 2022, 22, 6434. [Google Scholar] [CrossRef] [PubMed]
  125. Huang, S.; Sun, H.; Yu, L.; Zhang, H. A Class of Deterministic Sensing Matrices and Their Application in Harmonic Detection. Circuits Syst. Signal Process. 2016, 35, 4183–4194. [Google Scholar] [CrossRef]
  126. Palczynska, B.; Masnicki, R.; Mindykowski, J. Compressive Sensing Approach to Harmonics Detection in the Ship Electrical Network. Sensors 2020, 20, 2744. [Google Scholar] [CrossRef] [PubMed]
  127. Yang, T.; Pen, H.; Wang, D.; Wang, Z. Harmonic Analysis in Integrated Energy System Based on Compressed Sensing. Appl. Energy 2016, 165, 583–591. [Google Scholar] [CrossRef]
  128. Niu, Y.; Yang, T.; Yang, F.; Feng, X.; Zhang, P.; Li, W. Harmonic Analysis in Distributed Power System Based on IoT and Dynamic Compressed Sensing. Energy Rep. 2022, 8, 2363–2375. [Google Scholar] [CrossRef]
  129. Babakmehr, M.; Sartipizadeh, H.; Simões, M.G. Compressive Informative Sparse Representation-Based Power Quality Events Classification. IEEE Trans. Ind. Inform. 2020, 16, 909–921. [Google Scholar] [CrossRef]
  130. Cheng, L.; Wu, Z.; Xuanyuan, S.; Chang, H. Power Quality Disturbance Classification Based on Adaptive Compressed Sensing and Machine Learning. In Proceedings of the 2020 IEEE Green Technologies Conference (GreenTech), Oklahoma City, OK, USA, 1–3 April 2020; pp. 65–70. [Google Scholar] [CrossRef]
  131. Wang, J.; Xu, Z.; Che, Y. Power Quality Disturbance Classification Based on Compressed Sensing and Deep Convolution Neural Networks. IEEE Access 2019, 7, 78336–78346. [Google Scholar] [CrossRef]
  132. Anjali, V.; Panikker, P.P.K. Investigation of the Effect of Diverse Dictionaries and Sparse Decomposition Techniques for Power Quality Disturbances. Energies 2024, 17, 6152. [Google Scholar] [CrossRef]
  133. Dang, X.J.; Wang, F.H.; Zhou, D.X. Compressive Sensing of Vibration Signals of Power Transformer. In Proceedings of the 2020 IEEE International Conference on High Voltage Engineering and Application (ICHVE), Beijing, China, 6–10 September 2020; pp. 1–4. [Google Scholar] [CrossRef]
  134. Liang, C.; Chen, C.; Liu, Y.; Jia, X. A Novel Intelligent Fault Diagnosis Method for Rolling Bearings Based on Compressed Sensing and Stacked Multi-Granularity Convolution Denoising Auto-Encoder. IEEE Access 2021, 9, 154777–154787. [Google Scholar] [CrossRef]
  135. Ahmed, H.O.A.; Nandi, A.K. Three-Stage Hybrid Fault Diagnosis for Rolling Bearings with Compressively Sampled Data and Subspace Learning Techniques. IEEE Trans. Ind. Electron. 2019, 66, 5516–5524. [Google Scholar] [CrossRef]
  136. Hu, Z.X.; Wang, Y.; Ge, M.F.; Liu, J. Data-Driven Fault Diagnosis Method Based on Compressed Sensing and Improved Multiscale Network. IEEE Trans. Ind. Electron. 2020, 67, 3216–3225. [Google Scholar] [CrossRef]
  137. Du, Z.; Chen, X.; Zhang, H.; Miao, H.; Guo, Y.; Yang, B. Feature Identification with Compressive Measurements for Machine Fault Diagnosis. IEEE Trans. Instrum. Meas. 2016, 65, 977–987. [Google Scholar] [CrossRef]
  138. Wang, H.; Ke, Y.; Luo, G.; Li, L.; Tang, G. A Two-Stage Compression Method for the Fault Detection of Roller Bearings. Shock Vib. 2016, 2016, 2971749. [Google Scholar] [CrossRef]
  139. Shan, N.; Xu, X.; Bao, X.; Qiu, S. Fast Fault Diagnosis in Industrial Embedded Systems Based on Compressed Sensing and Deep Kernel Extreme Learning Machines. Sensors 2022, 22, 3997. [Google Scholar] [CrossRef]
  140. Ma, Y.; Jia, X.; Bai, H.; Liu, G.; Wang, G.; Guo, C.; Wang, S. A New Fault Diagnosis Method Based on Convolutional Neural Network and Compressive Sensing. J. Mech. Sci. Technol. 2019, 33, 5177–5188. [Google Scholar] [CrossRef]
  141. Tang, X.; Xu, Y.; Sun, X.; Liu, Y.; Jia, Y.; Gu, F.; Ball, A.D. Intelligent Fault Diagnosis of Helical Gearboxes with Compressive Sensing-Based Non-Contact Measurements. ISA Trans. 2023, 133, 559–574. [Google Scholar] [CrossRef]
  142. Wang, Y.; Zhang, J.; Wang, L. Compressed Sensing Super-Resolution Method for Improving the Accuracy of Infrared Diagnosis of Power Equipment. Appl. Sci. 2022, 12, 4046. [Google Scholar] [CrossRef]
  143. Liu, Z.; Kuang, Y.; Jiang, F.; Zhang, Y.; Lin, H.; Ding, K. Weighted Distributed Compressed Sensing: An Efficient Gear Transmission System Fault Feature Extraction Approach for Ultra-Low Compression Signals. Adv. Eng. Inform. 2024, 62, 102833. [Google Scholar] [CrossRef]
  144. Jian, T.; Cao, J.; Liu, W.; Xu, G.; Zhong, J. A Novel Wind Turbine Fault Diagnosis Method Based on Compressive Sensing and Lightweight SqueezeNet Model. Expert Syst. Appl. 2025, 260, 125440. [Google Scholar] [CrossRef]
  145. Rani, S.; Shabaz, M.; Dutta, A.K.; Ahmed, E.A. Enhancing Privacy and Security in IoT-Based Smart Grid System Using Encryption-Based Fog Computing. Alex. Eng. J. 2024, 102, 66–74. [Google Scholar] [CrossRef]
  146. Kelly, J. UK Domestic Appliance Level Electricity (UK-DALE)-Disaggregated Appliance/Aggregated House Power [Data Set]; Imperial College: London, UK, 2012. [Google Scholar] [CrossRef]
  147. Ahmed, I.; Khan, A. Genetic Algorithm-Based Framework for Optimized Sensing Matrix Design in Compressed Sensing. Multimed. Tools Appl. 2022, 81, 39077–39102. [Google Scholar] [CrossRef]
  148. Ahmed, I.; Khan, A. Learning-Based Speech Compressive Subsampling. Multimed. Tools Appl. 2023, 82, 15327–15343. [Google Scholar] [CrossRef]
  149. Gambheer, R.; Bhat, M.S. Optimized Compressed Sensing for IoT: Advanced Algorithms for Efficient Sparse Signal Reconstruction in Edge Devices. IEEE Access 2024, 12, 63610–63617. [Google Scholar] [CrossRef]
  150. Shareef, S.M.; Rao, M.V.G. Separation of Overlapping Non-Stationary Signals and Compressive Sensing-Based Reconstruction Using Instantaneous Frequency Estimation. Digit. Signal Process. 2024, 155, 104737. [Google Scholar] [CrossRef]
  151. Chen, H.; Wang, X.; Li, Z.; Chen, W.; Cai, Y. Distributed Sensing and Cooperative Estimation/Detection of Ubiquitous Power Internet of Things. Prot. Control Mod. Power Syst. 2019, 4, 13. [Google Scholar] [CrossRef]
  152. Hsieh, S.H.; Hung, T.H.; Lu, C.S.; Chen, Y.C.; Pei, S.C. A Secure Compressive Sensing-Based Data Gathering System via Cloud Assistance. IEEE Access 2018, 6, 31840–31853. [Google Scholar] [CrossRef]
  153. Kong, L.; Zhang, D.; He, Z.; Xiang, Q.; Wan, J.; Tao, M. Embracing Big Data with Compressive Sensing: A Green Approach in Industrial Wireless Networks. IEEE Commun. Mag. 2016, 54, 53–59. [Google Scholar] [CrossRef]
  154. Liu, J.; Cheng, H.-Y.; Liao, C.-C.; Wu, A.Y.A. Scalable Compressive Sensing-Based Multi-User Detection Scheme for Internet-of-Things Applications. In Proceedings of the IEEE Workshop on Signal Processing Systems (SiPS), Hangzhou, China, 14–16 October 2015; pp. 1–6. [Google Scholar]
  155. Tekin, N.; Gungor, V.C. Analysis of Compressive Sensing and Energy Harvesting for Wireless Multimedia Sensor Networks. Ad Hoc Netw. 2020, 103, 102164. [Google Scholar] [CrossRef]
  156. Kulkarni, A.; Mohsenin, T. Accelerating Compressive Sensing Reconstruction OMP Algorithm with CPU, GPU, FPGA, and Domain-Specific Many-Core. In Proceedings of the 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, Portugal, 24–27 May 2015; pp. 970–973. [Google Scholar] [CrossRef]
  157. Wang, R.; Qin, Y.; Wang, Z.; Zheng, H. Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization. Electronics 2022, 11, 182. [Google Scholar] [CrossRef]
  158. Prabha, M.; Darly, S.S.; Rabi, B.J. A Novel Approach of Hierarchical Compressive Sensing in Wireless Sensor Network Using Block Tri-Diagonal Matrix Clustering. Comput. Commun. 2021, 168, 54–64. [Google Scholar] [CrossRef]
  159. Xue, W.; Luo, C.; Lan, G.; Rana, R.; Hu, W.; Seneviratne, A. Kryptein: A Compressive-Sensing-Based Encryption Scheme for the Internet of Things. In Proceedings of the ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), Pittsburgh, PA, USA, 18–21 April 2017. [Google Scholar]
  160. Hu, G.; Xiao, D.; Xiang, T.; Bai, S.; Zhang, Y. A Compressive Sensing-Based Privacy Preserving Outsourcing of Image Storage and Identity Authentication Service in Cloud. Inf. Sci. 2017, 387, 132–145. [Google Scholar] [CrossRef]
  161. Xue, W.; Luo, C.; Shen, Y.; Rana, R.; Lan, G.; Jha, S.; Seneviratne, A.; Hu, W. Towards a Compressive-Sensing-Based Lightweight Encryption Scheme for the Internet of Things. IEEE Trans. Mobile Comput. 2021, 20, 3049–3065. [Google Scholar] [CrossRef]
  162. Sherbert, K.M.; Naimipour, N.; Safavi, H.; Shaw, H.C.; Soltanalian, M. Quantum Compressive Sensing: Mathematical Machinery, Quantum Algorithms, and Quantum Circuitry. Appl. Sci. 2022, 12, 7525. [Google Scholar] [CrossRef]
Figure 1. General framework of CS: (a) data acquisition model and (b) reconstruction model.
Figure 2. Sensing/measurement matrix classification.
Figure 3. CS-based recovery algorithms.
Figure 4. Generalized block diagram for Compressive Sensing for IoT-based smart grid monitoring.
Figure 5. INAE for one month’s dataset with CR = 50% for different sparse bases and measurement matrices: (a) Gaussian—no noise; (b) Gaussian—with noise; (c) Bernoulli—no noise; (d) Bernoulli—with noise.
Figure 6. MAE versus compression ratio for different sparse bases and measurement matrices: Gaussian—no noise; Gaussian—noisy; Bernoulli—no noise; and Bernoulli—noisy.
Table 1. Sensing matrices and compressive measurement requirements.
Sensing Matrix | Number of Measurements
Bernoulli or Gaussian | M ≥ k log(N/k)
Partial Fourier | M ≥ μk (log N)⁴
Random (any other) | M = O(k log N)
Deterministic | M = O(k² log N)
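To put these scaling laws in perspective, the short sketch below evaluates them for an illustrative signal length N = 1024 and sparsity k = 20. The proportionality constants and the coherence term μ are set to 1 purely as an assumption, since Table 1 states orders of magnitude rather than exact sample counts.

```python
import math

# Illustrative only: all proportionality constants (and the coherence term mu
# for partial Fourier) are taken as 1, which is an assumption; Table 1 gives
# order-of-magnitude requirements, not exact sample counts.
N, k = 1024, 20

m_gaussian = k * math.log(N / k)          # Bernoulli/Gaussian: M >= k log(N/k)
m_fourier = k * math.log(N) ** 4          # Partial Fourier: M >= mu k (log N)^4
m_random = k * math.log(N)                # Generic random: M = O(k log N)
m_deterministic = k ** 2 * math.log(N)    # Deterministic: M = O(k^2 log N)

for name, m in [("Gaussian/Bernoulli", m_gaussian), ("Partial Fourier", m_fourier),
                ("Random (other)", m_random), ("Deterministic", m_deterministic)]:
    print(f"{name:>20s}: roughly {m:.0f} measurements")
```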
Table 2. CS-based algorithms highlighting the features.
Approach | Ref. | Algorithms | Features, Pros (+) and Cons (−)
Convex optimization[54]Basis Pursuit (BP)Solves the ℓ1-minimization.
Complexity: O(N3), minimum measurement: O (k log N)
+ Utilizes simplex or interior point methods for solving.
+ Effective when measurements are noise-free.
− Sensitive to noise, may not recover accurately in noisy conditions.
[54]Basis Pursuit De-Noising (BPDN)Seeks a solution with minimum ℓ1-norm while relaxing constraint conditions.
+ Useful when dealing with noise.
+ Incorporates quadratic inequality constraints.
[55]Dantzig Selector (DS)Uses the ℓ1 and ℓ∞ norms to find a sparse solution.
+ Provides a robust sparse solution.
[56]Least Absolute Shrinkage and Selection Operator (LASSO)Employs ℓ1 regularization for simultaneous variable selection and regularization
+ Handles variable selection and regularization in one step
− Can introduce bias in high-dimensional data.
[57]Total variation (TV) denoisingIs suitable for piecewise constant signals, denoising, and image reconstruction as a measurement technique.
+ Preserves edges and fine details.
+ Effective in minimizing total variation while considering signal statistics.
− Can lead to blocky reconstructions.
[58]Least angle regression (LARS) + Identifies a subset of relevant features.
Non-Convex[59]FOCal Underdetermined System Solver (FOCUSS)Performs dictionary learning through gradient descent and directly targets sparsity.
+ Emphasizes sparsity.
− NP-hard, computationally intensive.
− Used for limited data scenarios.
[60]Iterative Reweighted least Squares (IRLS)+ Adapts weights in each iteration for better sparsity.
− Convergence can be slow.
[45,60]Bregman Iterative Type (BIT)Solves by transforming a constrained (ℓ1-minimization) problem into a series of unconstrained problems.
+ Gives a faster and more stable solution.
Iterative/Thresholding[61]Iterative Soft Thresholding (IST)Performs element-wise soft thresholding, which is a smooth approximation to the ℓ0-norm.
+ Smooth approximation to ℓ0-norm encourages sparsity.
− Introduces bias.
[62]Iterative hard Thresholding (IHT)Belongs to a class of low computational complexity algorithms and uses a nonlinear thresholding operator.
+ Less complex.
− Sensitive to noise.
[37]Iterative Shrinkage/Thresholding Algorithm (ISTA) Variant of IST that involves linearization or preconditioning.
− Performance depends on the choice of parameters and preconditioning.
[61]Fast iterative soft thresholding (FISTA) + Variant of IST designed to obtain global convergence and accelerate convergence.
− Complexity may be higher due to the additional linear combinations of previous points.
[63]Approximate Message Passing Algorithm (AMP)Iterative algorithm known for performing well with deterministic and highly structured measurement matrices (e.g., partial Fourier, Toeplitz, circulant matrices).
+ Demonstrates regular structure, fast convergence, and low storage requirements.
+ Hardware-friendly.
Greedy[64]Matching Pursuit (MP) Greedily selects, at each iteration, the dictionary atom most correlated with the current residual and subtracts its contribution.
(+) Fast and simple implementation.
(−) May not be optimal for highly correlated dictionaries.
[65]Gradient Pursuit (GP)Relaxation algorithm that uses the ℓ2 norm to smooth the ℓ0 norm.
(+) Offers relaxation for the ℓ0 norm, which can be beneficial.
[22,66]Orthogonal Matching Pursuit (OMP) Orthogonally projects the residuals and selects columns of the sensing matrix.
Complexity: O(kMN); minimum measurement: O (k log N).
(+) Orthogonalizes the residuals.
+ Efficient for sparse signal recovery.
(−) Computationally intensive for large dictionaries.
[22,66]Regularized OMP (ROMP) Extension of OMP that selects multiple vectors at each iteration.
Complexity: O(kMN); minimum measurement: O (k log2 N)
(+) Suitable for recovering sparse signals based on the Restricted Isometry Property (RIP).
[22,67]Compressive Sampling Matching Pursuit (CoSaMP) Combines RIP and a pruning technique
Complexity: O(MN); minimum measurement: O (k log N)
+ Effective for noisy samples.
[22,32]Stagewise orthogonal matching pursuit (StOMP) Combines thresholding, selecting, and projection
Complexity: O (N log N); minimum measurement: O(N log N)
[22,68]Subspace Pursuit (SP)SP samples signal to satisfy the constraints of the RIP with a constant parameter.
Complexity: O (k MN); minimum measurement: O (k log N/k)
[22,69]Expander Matching Pursuit (EMP)Based on sparse random (or pseudo-random) matrices.
Complexity: O (n log n/k); minimum measurement: O (k log N/k)
+ Efficient for large-scale problems.
+ Resilient to noise.
− Need more measurement than LP-based sparse recovery algorithms.
[22,70]Sparse Matching Pursuit (SMP)Variant of EMP.
Complexity: O ((N log N/k) log R); minimum measurement: O (k log N/k)
+ Efficient in terms of measurement count compared to EMP.
− Run time higher than that of EMP.
Probabilistic[71]Markov Chain Monte Carlo (MCMC)Relies on stochastic sampling techniques.
Generates a Markov chain of samples from the posterior distribution and leverages these samples to compute expectations and make inferences.
(+) Can handle large-scale problems effectively.
(−) Requires multiple random samples, which can be computationally expensive.
[72]Bayesian Compressive Sensing (BCS)+ Incorporates prior information into the recovery process.
+ Considers the time correlation of signals, which can be valuable for time-series data.
(−) Requires careful choice of prior distributions, which may be challenging.
[73]Sparse Bayesian Learning Algorithms (SBLA)Uses Bayesian methods to handle sparse signals.
+ Incorporates prior information.
+ Considers the time correlation of signals
− Requires careful choice of priors.
[74]Expectation Maximization (EM)Assumes a statistical distribution for the sparse signal and the measurement process.
(+) Can be used when there is prior knowledge about the signal distribution.
(−) May require a good initial guess for model parameters.
[75]Gaussian Mixture Models (GMM)Is used to model the statistical distribution of signals and measurements.
Represents the signal as a mixture of Gaussian components and use the EM algorithm for parameter estimation.
(+) Suitable for modeling complex and multimodal signal distributions.
Can capture dependencies between signal components.
(−) Requires careful parameter estimation and may not work well for highly non-Gaussian data.
Combinatorial/Sublinear[22,57]Chaining Pursuit (CP)+Efficient for large dictionaries.
Complexity: O (k log2 N log2 k); minimum measurement: O (k log2 N).
− Might miss some sparse components.
− Can result in suboptimal solutions
[22,76]Heavy Hitters on Steroids (HHS)+ Fast detection of significant coefficients/heavy hitters.
Complexity: O (k poly log N); minimum measurement: O(poly(k, log N))
− Requires careful parameter tuning.
Deep Learning [77,78]Learned ISTA (LISTA)Mimics ISTA for sparse coding.
+ Uses a deep encoder architecture, trained using stochastic gradient descent; has faster execution.
−Only finds the sparse representation of a given signal in a given dictionary
[50]Iterative shrinkage-thresholding algorithm based deep-network (ISTA-Net) Mimics ISTA for CS reconstruction.
+ Reduces the reconstruction complexity by more than 100 times compared to traditional ISTA.
[79]TISTASparse signal recovery algorithm inspired by ISTA.
+ Uses an error variance estimator which improves the speed of convergence.
[80]Learned D-AMP (LDAMP)Deep unfolded D-AMP (Approximate Message Passing) implementation.
+ Designed as CNNs; eliminates block-like artifacts in image reconstruction.
[81]ReconNet Employs a CNN for compressive sensing.
+ Superior reconstruction quality, faster than traditional algorithms for image application.
− Uses a blocky measurement matrix.
[82]ADMM-CSNet+ Is a reconstruction approach that does not mimic a known iterative algorithm.
+ Has the highest recovery accuracy in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
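As a concrete companion to the greedy entries in Table 2 (OMP in particular), the following minimal NumPy sketch recovers a synthetic k-sparse vector from Gaussian measurements. It is an illustrative implementation written for this survey context, not the exact code of any cited work, and the fixed-sparsity stopping rule is an assumption; practical versions usually stop on a residual-norm criterion instead.

```python
import numpy as np

def omp(A, y, k):
    """Minimal Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.

    A: (M, N) sensing matrix; y: (M,) measurements; k: assumed sparsity level.
    """
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

# Toy check: N = 256, k = 5 sparse signal, M = 64 Gaussian measurements.
rng = np.random.default_rng(0)
N, M, k = 256, 64, 5
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = A @ x
print("max recovery error:", np.max(np.abs(omp(A, y, k) - x)))
```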
Table 3. CS applications for Advanced Metering Infrastructure.
Advanced Metering Infrastructure (AMI)
Ref. | Sensing Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments
[11] | Gaussian | Orthogonal Matching Pursuit (OMP) | Wavelet Transform (WT) | CS-based compression of the aggregated power signal for narrow-bandwidth conditions in AMI.
[12] | Gaussian | | Discrete Cosine Transform (DCT) | A CS-based physical layer authentication method is proposed. A measurement matrix shared between the DCU and a legitimate meter (LM) acts as a secret key for both compression and authentication.
[38] | Gaussian | ℓ1 Minimization | Wavelet Transform (WT) | Focuses on dynamic temporal and spatial compression rather than spatial compression alone.
[86] | Random | Two-step Iterative Shrinkage/Thresholding (TwIST) | Wavelet Transform (WT) | Focuses on the study of CS to minimize delay and communication overhead.
[87] | Binary random | Deep Blind Compressive Sensing | Multilayer adaptively learned sparsifying matrix | CS-based smart meter data transmission for non-intrusive load monitoring applications.
[88] | Toeplitz | Block Orthogonal Matching Pursuit (BOMP) | Block sparse basis | CS-based short-term load forecasting.
[89] | | Block Orthogonal Matching Pursuit (BOMP) | Block sparse basis | CS-based spatio-temporal wind speed forecasting.
[90] | Random | Weighted Basis Pursuit Denoising (BPDN) | | Recursive dynamic CS approaches, addressing changing sparsity patterns.
Table 5. CS applications for state estimation and topology identification.
State Estimation and Topology Identification
Ref. | Sensing Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments
[15]Gaussianℓ1 minimization problemWavelet—Spatio-Temporal Indirect method: Reconstructs power values from compressed measurements before state estimation; provides better accuracy but is computationally expensive.
Direct method: Uses compressed measurements directly within the Newton–Raphson iteration; avoids full reconstruction and is potentially faster, but requires solving underdetermined systems.
Even with only 50% compressed measurements, both methods allow for accurate estimation of voltage states.
[16]GaussianLASSO Clustered OMP (COMP), Band-Excluded Locally Optimized MCOMP (BLOMCOMP), LASSO Laplacian sparsity BLOMCOMP outperforms others due to the following: (1) band exclusion for handling high coherence, (2) local optimization for support refinement, (3) effective exploitation of clustered sparsity, (4) robustness across IEEE test systems, and (5) reduced measurement requirements for accurate recovery.
[97]RandomDirect and Indirect State EstimationData-Driven Dictionaries, Deterministic Dictionaries (Hankel, Toeplitz)Data-driven dictionaries outperform deterministic bases (Haar, Hankel, DCT, etc.) in reconstruction accuracy and state estimation. Hankel and Toeplitz perform best among the deterministic dictionaries but are outperformed by learned dictionaries.
[99]Impedance Matrix of the Systemℓ1-Norm Minimization, Regularized Least SquaresSparse Injection Current VectorThe proposed DSSE algorithm minimizes the number of μPMUs required for accurate state estimation; performs well compared to conventional WLS; requires fewer measurements but achieves comparable accuracy in voltage phasor estimation; is suitable for low-cost DSSE implementation in large-scale distribution networks with limited observability.
[100]Normalized Jacobian MatrixCohCoSaMP (Coherence-Based CoSaMP), OMP, ROMP, CoSaMPSparse Voltage-to-Power Sensitivity Matrix- The proposed CohCoSaMP ensures accurate Jacobian matrix estimation by addressing sensing matrix correlation; it outperforms OMP, ROMP, and CoSaMP in convergence and accuracy.
CohCoSaMP fully estimates the Jacobian matrix with as few as 40 measurements, in contrast to LSE, which needs more than 64 measurements.
- The proposed method achieves lower computation times and fewer iterations compared to other algorithms, making it suitable for online applications.
Is effective for sparse recovery under noisy PMU measurement conditions and is suitable for networks with correlated phase angle and voltage variations.
[101]Gaussian RandomAlternating Direction Method of Multipliers (ADMM) Sparse Nodal Voltage and Current Phasors- Proposes a distributed CS-based DSSE for power distribution grids divided into sub-networks using ADMM for global convergence.
- Robust to cyber-attacks, loss of measurements, FDI, replay, and neighborhood attacks.
- Outperforms centralized CS in computation time and communication overhead (e.g., 3.85 s vs. 1.03 s for IEEE 37-bus system). Distributed CS achieves similar accuracy to centralized CS while reducing simulation time significantly (e.g., 20.48 s vs. 7.28 s for an IEEE 123-bus system).
Table 6. Fault detection, fault localization and outage identification.
Fault Detection (FD), Fault Localization (FL) and Outage Identification (OI)
Ref. | Sensing Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments/Limitations
[17]Reduced Impedance Matrix from ΔVPrimal–dual linear programming (PDIP) Fault Current VectorRobust to noise and capable of locating single, double, and triple faults with minimal measurement infrastructure. Effective in noisy environments (using ℓ1s for stability). Less accurate for triple faults compared to double faults.
[18]Positive-sequence impedance matrix derived from measured voltage sagsPrimal–dual linear programming (PDIP) and Log Barrier Algorithm (LBA) Fault current vectorRobust to noise, fault types, and fault resistances. Does not require load data updates, unlike other methods. Works with limited smart meters. Handles single-, double-, and three-phase faults effectively. Computationally efficient.
[103]Impedance matrix and PMU measurements for positive sequence dataStructured Matching Pursuit (StructMP) with alternating minimizationFault current vectors subjected to structural constraintsEffective for single and simultaneous faults. Utilizes non-convex constraints for improved fault location. Requires fewer PMUs but is sensitive to sensor placement. Computationally efficient and robust at higher SNRs. Handles various fault types including line-to-ground, disconnected lines, and line-to-line faults.
[104]Derived from the Kron reduction in the admittance matrix, capturing the block structure for balanced and unbalanced systemsModified Block-Sparse Bayesian Learning (BSBL) algorithm using bound optimizationBlock-sparse fault injection currents at adjacent nodesProvides accurate fault location in ADNs with limited μPMUs. Considers DG integration and intra-block amplitude correlation for improved performance. Satisfactory results in noisy conditions with success rates > 86% at 1% noise. Sensitive to noise and block structure consistency but robust against fault resistance variations.
[105]Positive-sequence impedance matrix modified based on meter allocation and network parametersBayesian Compressive Sensing (BCS) algorithmSparse voltage magnitude differencesBCS algorithm improves sparse fault current solution accuracy compared to other algorithms. Limited accuracy in noisy conditions and bipower supply mode. Performance drops with DGs access but remains acceptable.
[106]Modified reactance equations with block-wise sparsityBlock-Wise Compressive Sensing (BW-CS)Block-sparse structure of line outagesBW-CS method outperforms QR decomposition and conventional OMP in detecting multiple line outages with high recovery accuracy and computational efficiency. Extended to three-phase systems for better spatial correlation utilization. Robust to noise. Assumes no islanding due to outages.
[107]Positive sequence impedance matrixBayesian Compressive Sensing (BCS) + Dempster–Shafer Evidence Theory Integrates multiple data sources for fault location using CS for signal reconstruction, Bayesian networks for switching fault analysis, and DS evidence theory for fusion. Handles low-resistance grounded networks.
[108]Constructed using the inverse of the nodal-admittance matrix and incidence matrix.- OMP
- Binary POD-SRP (BPOD-SRP)
- BLOOMP (Band-exclusion Locally Optimized Orthogonal Matching Pursuit)
- BLOMCOMP (Clustered version of BLOOMP).
Sparse Outage Vector (SOV)- Efficient for large-scale, multiple outages.
- Binary POD-SRP resolves dynamic range issues, improving recovery.
- High coherence in sensing matrices requires techniques like BLOOMP/BLOMCOMP.
- Recovery is sensitive to perturbations in power and noise.
[109]Constructed using the inverse nodal-admittance matrix and incidence matrix.- OMP
- Modified COMP (MCOMP) for structured outages.
- LASSO (Least Absolute Shrinkage and Selection Operator).
Sparse Outage Vector (SOV)High coherence in sensing matrices affects recovery performance.
- QR decomposition reduces average coherence but may not always lower coherence.
- MCOMP outperforms traditional OMP in structured sparse cases.
Performance declines with higher noise or sparsity levels.
[110]Constructed using the inverse nodal-admittance matrix and incidence matrix.- OMP
- Band-exclusion Locally Optimized OMP (BLOOMP)
- Modified Clustered OMP (MCOMP)
- LASSO for structured outages.
Sparse Outage Vector (SOV): Represents power line outages.
Clustered Sparse Outage Vector (C-SOV): Models structured outages with cluster-like sparsity patterns.
- High coherence and signal dynamic range issues in sensing matrices affect recovery performance.
- Binary POD-SRP formulation addresses the dynamic range issue effectively.
- BLOOMP outperforms OMP in handling high coherence for large-scale outages.
- BPOD-SRP and BLOOMP combination is efficient for multiple large-scale outages.
- Performance declines with increased perturbation or noise levels.
- Structured outage scenarios require additional modifications like MCOMP.
[111]Laplacian matrix-Symmetric Reweighting of Modified Clustered OMP (SRwMCOMP)
- Orthogonal Matching Pursuit (OMP)
- LASSO method for comparison.
Sparse outage vector, Sparse structural matrix- Integrates SG-specific features (symmetry, diagonal, cluster) to improve topology reconstruction.
- QR decomposition reduces coherence, enhancing power line outage identification.
- Superior performance compared to state-of-the-art methods like LASSO and MCOMP.
- Time-consuming for large-scale networks.
- Assumes transient stable state post-outage.
[112]Constructed using transient dynamic model with DC and AC approximations.- Adaptive Stopping Criterion OMP (ASOMP).
- Orthogonal Matching Pursuit (OMP), LASSO method for comparison.
Sparse outage vector- Utilizes transient data for real-time line outage detection.
- Adaptive threshold improves performance under varying noise intensities.
- Effective for single-, double-, and triple-line outages.
- Event-triggered mechanism reduces computation overhead.
- Performance degrades with violent phase angle fluctuations and non-smooth data.
- Requires full PMU observability for dynamic data.
- Limited accuracy under DC model for multiple outages.
[113]Formulated from transient data with QRP decomposition to reduce coherence.- Improved Binary Matching Pursuit (IBMPDC) with dice coefficient.
- Binary Matching Pursuit (BMP), Orthogonal Matching Pursuit (OMP) for comparison.
Binary outage vector- The IBMPDC algorithm improves atom selection accuracy and avoids repeated atom selection.
- Utilizes binary constraints for faster computations and higher efficiency.
- Is resilient to noise and less sensitive to sample size.
- QRP decomposition enhances sensing matrix orthogonality, improving detection accuracy.
- Is effective for single-, double-, and triple-line outages.
- Accuracy degrades with high Gaussian noise or insufficient sampling.
- Has a slightly higher execution time than BMP but significantly better accuracy.
[115]RandomAlternating Direction Optimization Method (ADOM)Sparse coefficient vector with non-zero entries corresponding to fault type.Incorporates correlation and sparsity properties for higher accuracy.
Table 7. CS applications in harmonic source identification and power quality detection.
Harmonic Source Identification (HSI) and Power Quality Detection
Ref. | Sensing Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments/Limitations
[120] Block Orthogonal Matching Pursuit (BOMP) Sparse harmonic current injections- Achieves reliable harmonic detection in loads L3 and L5 with higher accuracy for loads with direct current measurements.
- Sensitive to noise and measurement uncertainty in lower accuracy classes.
[121] Local Block Orthogonal Matching Pursuit (LBOMP) Harmonic current sources, block-sparse, grouped by load.- Identifies and estimates primary harmonic sources efficiently with sparse phasor measurements.
- Outperforms WLS and single-harmonic BOMP methods in detection and estimation accuracy.
- Requires synchronized, high-quality harmonic phasor measurements for accurate results.
- Sensitive to network model inaccuracies and measurement uncertainties, though detection robustness is retained.
[122] Block Orthogonal Matching Pursuit (BOMP), ℓ1-minimizationHarmonic current sources, block-sparse, grouped by load.- BOMP: Sensitive to phase angle measurement errors, decreasing accuracy significantly at higher errors (e.g., 62% detection in challenging cases).
- ℓ1: More robust, achieving ≥85% detection in noisy scenarios.
- Both methods require accurate uncertainty modeling and weighting.
[123] - ℓ1-minimization with quadratic constraint (P2)
- Traditional ℓ1-minimization (P1)
- Weighted Least Squares (WLS)
Harmonic current sources, modeled as sparse/compressible vectors.- P2 outperforms P1 and WLS due to error energy modeling and better uncertainty handling.
- Incorporates a novel whitening matrix for recovering error distributions, improving bounds.
[124]RandomBasis Pursuit (BP)Discrete Cosine Transform (DCT) - Random sampling introduces variability in error
- Performance is sensitive to dictionary selection.
[125]DeterministicOrthogonal Matching Pursuit (OMP) Fast Fourier Transform (FFT) - Deterministic sampling: Overcomes hardware limitations of random sampling in traditional CS.
- Fewer samples required: Demonstrates feasibility with prime-number constraints, reducing Nyquist rate dependency.
- Limitations: Recovery success probability decreases with higher sparsity, especially when structural sparsity is unexploited.
[126]Random Bernoulli Expectation Maximization (EM) Radon Transform (RT), Discrete Radon Transform (DRT) Reconstruction accuracy decreases with high amplitude disparities or noise.
[127]Binary Sparse Random SPG-FF Algorithm: Combines Spectral Projected Gradient with Fundamental Filter to enhance reconstruction precision.Discrete Fourier Transform (DFT) Basis: Better sparsity compared to DCT and DWT.Reduces data storage and sampling complexity by leveraging the binary sparse matrix.
The method requires filtering fundamental components to achieve optimal sparsity.
Double-spectral-line interpolation mitigates leakage effects but adds computational steps.
[128]Binary Sparse Homotopy Optimization with Fundamental Filter (HO-FF)Short-Time Fourier Transform (STFT) with Hanning Window) Performance is sensitive to rapid changes in harmonics; computational load increases with more data frames.
HO-FF iteratively solves along the homotopy path, avoiding repeated recovery and enhancing real-time performance.
[129]GaussianOrthogonal Matching Pursuit (OMP) Low-Dimensional Subspace via SVD and Feature Selection- Training-free, fast, and adaptable to changes.
- Handles single and combined PQ events effectively.
- May require convex hull approximation for high-dimensional feature space, which is computationally intensive.
- Performance may degrade with fewer informative samples for complex events.
[130]RandomOrthogonal Matching Pursuit (OMP), Soft-thresholding Sparse coefficients derived from training samples, representing PQD signals in a low-dimensional subspace.- Handles both single and combined PQDs effectively.
[131]DCT-based Observation MatrixOrthogonal Matching Pursuit (OMP), Sparsity Adaptive Matching Pursuit (SAMP)DCT Sparse Basis:- Combines compressed sensing (CS) with 1D-DCNN for direct PQD classification.
[132] Orthogonal Matching Pursuit (OMP)DCT (Discrete Cosine Transform), DST (Discrete Sine Transform), and Impulse DictionaryDCT and DST: Perform well for low sparsity, with lower MSE and better reconstruction accuracy.
Impulse Dictionary: Excels for extremely low sparsity, providing close-to-original signal reconstruction.
Combinations (overcomplete hybrid dictionaries): Adding the Impulse dictionary to a combination dominates the sparse representation, rendering the contribution of other dictionaries negligible.
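To illustrate the DFT-domain sparsity that several of the harmonic-analysis works in Table 7 exploit, the toy sketch below forms a 50 Hz waveform with 5th and 7th harmonics, keeps a random 25% of its time samples, and recovers the sparse spectrum with a small greedy (OMP-style) loop. All parameters (sampling rate, harmonic orders, number of atoms) are illustrative assumptions rather than values taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(1)
N, f0, fs = 512, 50.0, 6400.0            # samples, fundamental (Hz), sampling rate (Hz)
t = np.arange(N) / fs

# Waveform with a fundamental plus 5th and 7th harmonics (illustrative amplitudes).
s = (1.0 * np.sin(2 * np.pi * f0 * t)
     + 0.20 * np.sin(2 * np.pi * 5 * f0 * t)
     + 0.14 * np.sin(2 * np.pi * 7 * f0 * t))

# Sparse basis: inverse DFT matrix (columns are complex exponentials), so s = Psi @ c.
Psi = np.fft.ifft(np.eye(N), axis=0)

# Sensing: keep a random 25% of the time samples (random row selection).
M = N // 4
rows = np.sort(rng.choice(N, size=M, replace=False))
y = s[rows].astype(complex)
A = Psi[rows, :]

# Greedy recovery of the sparse spectrum (6 atoms: 3 tones and their conjugate bins).
residual, support = y.copy(), []
for _ in range(6):
    idx = int(np.argmax(np.abs(A.conj().T @ residual)))
    if idx not in support:
        support.append(idx)
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

freqs = np.fft.fftfreq(N, d=1 / fs)
print("identified harmonic orders:",
      sorted({int(round(abs(freqs[i]) / f0)) for i in support}))
```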
Table 8. CS applications for condition monitoring.
Condition Monitoring
Ref. | Sensing Matrix | Recovery Algorithm | Sparse Basis | Inferences/Comments/Limitations
[19] | Gaussian Random | OMP | STFT | Relies on signal sparsity achieved through signal conditioning, including synchronous resampling and demodulation. Reconstruction error increases significantly beyond 95% data loss.
[133] | Sparse Random | OMP | K-SVD Trained Dictionary (adaptive, based on the Discrete Cosine Transform (DCT)) | Achieved a compression ratio of 1/8 with an average reconstruction error of 0.06%.
[134] | Random Gaussian | Compressed Sensing Reconstruction + Stacked Multi-Granularity Convolution Denoising Auto-Encoder (SMGCDAE) | | Combines CS with deep learning for fault diagnosis in rolling bearings.
[135] | Random Gaussian or Bernoulli Matrix | CoSaMP | FFT | Compressed data are directly used for feature extraction without full signal recovery; the focus is on dimensionality reduction and classification; feature learning via PCA, LDA, and CCA.
[136] | Random Gaussian, Bernoulli, Unit Sphere | | Image data created from 1D signal | Only single-channel vibration signals are considered.
[137] | Random Sensing Matrices (e.g., Walsh–Hadamard, Uniform Spherical Ensemble) | Convex Optimization | Fourier dictionary | Extracts fault features directly from compressive measurements, avoiding full signal recovery.
[138] | Stochastic Sampling | OMP | Variational Mode Decomposition (VMD), frequency spectrum signals | Retains critical fault features in low-dimensional space via transfer learning.
[139] | Gaussian Random | Particle Swarm Optimization (PSO) with Deep Kernel Extreme Learning Machine (DKELM) | DCT | Maintains 99% accuracy with CR ≤ 80%, balancing efficiency and fault classification precision.
[140] | Gaussian Random | | DCT | Directly uses compressed signals for fault classification.
[141] | Random | | Thermal image is sparse | CS transforms the high-dimensional sparse data (thermal and modulation signal bispectrum images) into lower-dimensional compressed data. Compression achieves a compression ratio (CR) of 324, reducing the image size from 1080 × 1440 pixels to 60 × 80 pixels.
[142] | Random | ADMM, Soft-thresholding | Wavelet, Gradient Norm Ratio | Accurate blur kernel estimation with GNR; improves infrared image quality and diagnosis accuracy; computationally intensive.
[143] | Gaussian | Weighted Distributed Compressed Sensing-Synchronized Orthogonal Matching Pursuit (WDCS-SOMP) | Shift-Invariant Dictionary | Efficiently reconstructs fault features from multi-channel compressed signals at ultra-low compression rates (10%); leverages correlations across channels for improved accuracy.
[144] | Gaussian and Bernoulli | CoSaMP | DCT | Gaussian matrix ensures RIP compliance; Bernoulli matrix adds randomness, simplifying implementation and storage.
Table 9. MAE and INAE for random data segment for different transformation bases.
CR (%) | Data Retained (%) | Sparse Basis | Gaussian, No Noise—MAE | Gaussian, No Noise—INAE | Bernoulli, No Noise—MAE | Bernoulli, No Noise—INAE | Gaussian, with Noise—MAE | Gaussian, with Noise—INAE | Bernoulli, with Noise—MAE | Bernoulli, with Noise—INAE
80 | 20 | Hadamard | 30.416 | 22.8888 | 35.1873 | 26.4793 | 34.3322 | 25.8358 | 36.5633 | 27.5148
 | | Hankel | 8.1615 | 6.1417 | 8.1282 | 6.1167 | 195.204 | 146.8959 | 197.3912 | 148.5418
 | | Toeplitz | 1.3407 | 1.0089 | 1.3333 | 1.0033 | 7.1333 | 5.368 | 6.779 | 5.1014
 | | DCT | 2.0953 | 1.5767 | 2.7271 | 2.0522 | 4.2666 | 3.2107 | 5.3506 | 4.0264
 | | Wavelet | 1.7153 | 1.2908 | 1.7416 | 1.3106 | 2.6157 | 1.9684 | 2.9959 | 2.2545
70 | 30 | Hadamard | 34.2501 | 25.7741 | 36.5804 | 27.5276 | 38.3564 | 28.8641 | 44.032 | 33.1352
 | | Hankel | 7.6092 | 5.7261 | 4.4709 | 3.3645 | 139.5385 | 105.0062 | 161.6916 | 121.677
 | | Toeplitz | 1.3295 | 1.0005 | 1.3309 | 1.0016 | 2.7952 | 2.1034 | 1.8024 | 1.3563
 | | DCT | 2.6746 | 2.0127 | 2.4513 | 1.8446 | 3.6306 | 2.7321 | 4.6323 | 3.4859
 | | Wavelet | 1.2489 | 0.9399 | 1.2861 | 0.9679 | 2.7445 | 2.0653 | 2.3165 | 1.7433
60 | 40 | Hadamard | 27.5924 | 20.764 | 32.0597 | 24.1257 | 26.8761 | 20.225 | 32.2582 | 24.2751
 | | Hankel | 8.1214 | 6.1116 | 5.5709 | 4.1923 | 226.9955 | 170.8198 | 118.757 | 89.3676
 | | Toeplitz | 1.33 | 1.0009 | 1.3293 | 1.0003 | 3.0443 | 2.2909 | 8.0426 | 6.0522
 | | DCT | 2.2639 | 1.7036 | 2.5299 | 1.9038 | 3.6603 | 2.7545 | 5.0455 | 3.7969
 | | Wavelet | 1.1717 | 0.8818 | 0.992 | 0.7465 | 2.5834 | 1.944 | 2.491 | 1.8746
50 | 50 | Hadamard | 31.1382 | 23.4323 | 29.3679 | 22.1001 | 26.8215 | 20.1839 | 32.6096 | 24.5395
 | | Hankel | 6.2747 | 4.7219 | 5.1071 | 3.8432 | 123.7921 | 93.1567 | 139.8015 | 105.2041
 | | Toeplitz | 1.3291 | 1.0002 | 1.3304 | 1.0011 | 1.5795 | 1.1886 | 10.7515 | 8.0908
 | | DCT | 2.4238 | 1.824 | 2.3074 | 1.7363 | 4.7006 | 3.5373 | 3.7953 | 2.856
 | | Wavelet | 0.9271 | 0.6976 | 1.0616 | 0.7989 | 2.8074 | 2.1127 | 2.7778 | 2.0904
40 | 60 | Hadamard | 35.3691 | 26.6161 | 29.895 | 22.4967 | 35.4208 | 26.655 | 28.8486 | 21.7093
 | | Hankel | 5.1359 | 3.8649 | 4.0463 | 3.0449 | 111.0571 | 83.5733 | 147.2666 | 110.8218
 | | Toeplitz | 1.3278 | 0.9992 | 1.3328 | 1.0029 | 2.8989 | 2.1815 | 2.6751 | 2.0131
 | | DCT | 2.3167 | 1.7434 | 2.1247 | 1.5989 | 4.16 | 3.1305 | 3.5728 | 2.6886
 | | Wavelet | 1.0256 | 0.7718 | 1.0697 | 0.805 | 2.5337 | 1.9067 | 2.2672 | 1.7061
30 | 70 | Hadamard | 25.8871 | 19.4807 | 26.464 | 19.9148 | 26.4131 | 19.8765 | 25.8141 | 19.4257
 | | Hankel | 4.2955 | 3.2325 | 3.8284 | 2.881 | 135.0848 | 101.6547 | 75.1401 | 56.5448
 | | Toeplitz | 1.3274 | 0.9989 | 1.3293 | 1.0003 | 4.4159 | 3.3231 | 2.8291 | 2.129
 | | DCT | 1.8114 | 1.3631 | 2.0055 | 1.5092 | 3.2414 | 2.4392 | 3.3822 | 2.5452
 | | Wavelet | 0.977 | 0.7352 | 0.9379 | 0.7058 | 2.1448 | 1.614 | 1.8547 | 1.3957
20 | 80 | Hadamard | 23.7002 | 17.835 | 28.4695 | 21.424 | 23.3644 | 17.5823 | 27.6016 | 20.7709
 | | Hankel | 4.1713 | 3.139 | 4.1768 | 3.1431 | 121.8134 | 91.6676 | 142.2333 | 107.0341
 | | Toeplitz | 1.3296 | 1.0006 | 1.3347 | 1.0044 | 4.0669 | 3.0605 | 3.2064 | 2.4129
 | | DCT | 1.9242 | 1.448 | 1.8334 | 1.3797 | 2.4519 | 1.8451 | 3.064 | 2.3057
 | | Wavelet | 0.9799 | 0.7374 | 0.9792 | 0.7369 | 1.7518 | 1.3183 | 1.9356 | 1.4566
10 | 90 | Hadamard | 26.5948 | 20.0132 | 28.9333 | 21.773 | 26.5829 | 20.0043 | 28.9887 | 21.8147
 | | Hankel | 3.5974 | 2.7071 | 5.1319 | 3.8619 | 101.241 | 76.1864 | 87.9752 | 66.2035
 | | Toeplitz | 1.3311 | 1.0017 | 1.3289 | 1 | 3.5383 | 2.6627 | 2.6663 | 2.0065
 | | DCT | 1.747 | 1.3147 | 1.8797 | 1.4145 | 3.0726 | 2.3122 | 3.0399 | 2.2876
 | | Wavelet | 0.9501 | 0.715 | 0.8936 | 0.6724 | 2.071 | 1.5585 | 1.7319 | 1.3033
Table 10. Averaged MAE and INAE for one month.
CR | Noise | Measurement Matrix | INAE—Hadamard | INAE—DCT | INAE—Wavelet | INAE—Hankel | INAE—Toeplitz | MAE—Hadamard | MAE—DCT | MAE—Wavelet | MAE—Hankel | MAE—Toeplitz
40% | No | Gaussian | 16.699 | 1.2799 | 1.1203 | 1.8468 | 1.0003 | 243.02 | 18.19 | 14.659 | 16.566 | 14.335
40% | No | Bernoulli | 16.938 | 1.2955 | 1.1217 | 1.8728 | 1.0003 | 245.19 | 18.599 | 14.665 | 16.643 | 14.334
40% | Yes | Gaussian | 16.735 | 2.0563 | 1.6047 | 74.501 | 2.1387 | 244.71 | 29.925 | 22.619 | 1076.8 | 31.634
40% | Yes | Bernoulli | 17.127 | 2.1071 | 1.6189 | 74.074 | 2.0507 | 246.63 | 30.641 | 22.944 | 1148.8 | 30.721
50% | No | Gaussian | 17.531 | 1.3393 | 1.1217 | 1.9152 | 1.0004 | 251.98 | 19.1 | 14.662 | 16.74 | 14.334
50% | No | Bernoulli | 17.955 | 1.3647 | 1.1234 | 1.9254 | 1.0003 | 260.7 | 18.782 | 14.665 | 16.806 | 14.335
50% | Yes | Gaussian | 17.7 | 2.2342 | 1.6649 | 79.647 | 2.1485 | 252.26 | 31.37 | 23.403 | 1250.7 | 29.364
50% | Yes | Bernoulli | 17.998 | 2.2657 | 1.7016 | 80.967 | 2.2299 | 259.22 | 32.879 | 23.801 | 1096.9 | 29.897
60% | No | Gaussian | 18.615 | 1.4264 | 1.1249 | 2.0532 | 1.0005 | 266.55 | 20.893 | 14.667 | 17.137 | 14.335
60% | No | Bernoulli | 19.15 | 1.4396 | 1.1254 | 2.0752 | 1.0004 | 277.67 | 20.701 | 14.67 | 17.211 | 14.334
60% | Yes | Gaussian | 18.604 | 2.3731 | 1.7433 | 86.062 | 2.3814 | 266.47 | 33.557 | 24.484 | 1259.4 | 38.581
60% | Yes | Bernoulli | 19.062 | 2.4589 | 1.7647 | 90.469 | 2.4349 | 271.21 | 35.155 | 24.795 | 1309 | 36.613
70% | No | Gaussian | 20.465 | 1.5501 | 1.1333 | 2.297 | 1.0008 | 307.31 | 22.01 | 14.688 | 17.849 | 14.335
70% | No | Bernoulli | 21.428 | 1.5785 | 1.1323 | 2.3457 | 1.0004 | 307.81 | 23.644 | 14.693 | 17.951 | 14.334
70% | Yes | Gaussian | 20.442 | 2.7422 | 1.8455 | 99.634 | 2.6698 | 297.69 | 39.901 | 26.806 | 1402.7 | 38.751
70% | Yes | Bernoulli | 21.692 | 2.7072 | 1.8817 | 100.21 | 2.7588 | 319.04 | 39.518 | 26.652 | 1487.4 | 38.694
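To show how results of the kind reported in Tables 9 and 10 can be generated, the condensed sketch below compresses a synthetic load-profile segment with a Gaussian measurement matrix at CR = 50%, reconstructs it via OMP in a DCT sparse basis, and computes the MAE. The synthetic profile, the fixed number of OMP atoms, and the INAE normalisation used here (absolute error relative to the mean absolute value of the original signal, in percent) are assumptions for illustration and may differ from the exact definitions used in the case study.

```python
import numpy as np
from scipy.fftpack import idct   # inverse DCT used to form the sparse basis

rng = np.random.default_rng(2)

# Synthetic load-profile segment (an illustrative stand-in for a smart meter trace).
N = 256
t = np.arange(N)
x = (1.0 + 0.6 * np.sin(2 * np.pi * t / N)
     + 0.2 * np.sin(2 * np.pi * 4 * t / N)
     + 0.05 * rng.normal(size=N))

# Compression: M = N/2 Gaussian random projections (CR = 50%).
M = N // 2
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ x

# Sparse basis: x = Psi @ c with Psi the orthonormal inverse-DCT matrix.
Psi = idct(np.eye(N), norm='ortho', axis=0)
A = Phi @ Psi

# OMP recovery of the DCT coefficients (the fixed atom budget is an assumption).
residual, support = y.copy(), []
for _ in range(25):
    idx = int(np.argmax(np.abs(A.T @ residual)))
    if idx not in support:
        support.append(idx)
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs
c_hat = np.zeros(N)
c_hat[support] = coeffs
x_hat = Psi @ c_hat

mae = np.mean(np.abs(x_hat - x))
inae = 100 * mae / np.mean(np.abs(x))   # assumed normalisation, see lead-in text
print(f"MAE = {mae:.4f}, INAE = {inae:.2f}%")
```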
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
