Article

Compressive-Sensing-Based Fast Acquisition Algorithm Using Gram-Matrix Optimization via Direct Projection

1 School of Aerospace Engineering, Geely University of China, Chengdu 641423, China
2 School of Earth and Space Science and Technology, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Electronics 2026, 15(1), 171; https://doi.org/10.3390/electronics15010171
Submission received: 25 November 2025 / Revised: 24 December 2025 / Accepted: 25 December 2025 / Published: 30 December 2025
(This article belongs to the Section Microwave and Wireless Communications)

Abstract

This paper proposes a compressive-sensing (CS) acquisition algorithm for low-power, high-dynamic GNSS receivers based on low-dimensional time-domain measurements, a non-iterative compressive-domain direct-projection peak-search pipeline, and a coherence-optimized sensing-matrix design. Unlike most existing GNSS-CS acquisition approaches that rely on explicit sparse-recovery formulations (e.g., OMP/BP/LS-type iterative reconstruction) to identify the delay–Doppler support—often incurring substantial computational burden and acquisition latency—the proposed method performs peak detection directly in the compressive measurement domain and is supported by unified Gram-matrix optimization and perturbation/detection analyses. Specifically, the measurement Gram matrix is optimized on the symmetric positive-definite (SPD) manifold to obtain a diagonally dominant and well-conditioned structure with reduced inter-column correlation, thereby bounding reconstruction-induced perturbations and preserving the main correlation peak. Simulation results show that the proposed scheme retains the low online complexity characteristic of direct-projection baselines while achieving a 2–3 dB acquisition sensitivity gain, and it requires substantially fewer operations than iterative OMP-based CS acquisition schemes whose cost scales approximately linearly with the sparsity level K. The proposed framework enables robust, low-latency acquisition suitable for resource-constrained GNSS receivers in high-dynamic environments.

1. Introduction

Global Navigation Satellite Systems (GNSS) underpin modern positioning services across vehicle navigation, precision agriculture, surveying, and mobile devices [1,2,3]. Acquiring GNSS signals, especially under weak-signal or high-dynamic conditions, remains computationally intensive: traditional FFT-based parallel frequency search and time-domain sliding correlation must scan a two-dimensional code-phase/carrier-frequency grid, demanding large resources and long dwell times [4,5,6].
In recent years, compressed sensing (CS) [7] has significantly advanced GNSS signal acquisition. By exploiting the inherent sparsity of GNSS signals in the code-phase and Doppler dimensions, the traditional 2D search problem can be recast as sparse recovery, offering a powerful way to address the classic trilemma of low SNR, high dynamics, and limited computational resources.
Ou et al. [8] first proposed CS-based acquisition methods that relied on Gaussian-random measurement matrices and simple orthogonal matching pursuit (OMP) recovery. Although computationally lightweight, they suffered noticeable carrier-to-noise-density ratio ($C/N_0$) loss. To improve detection sensitivity, Elango et al. [9] proposed a Kronecker-product-structured measurement matrix that better satisfies the restricted isometry property (RIP), yielding higher sensitivity at the expense of increased storage and matrix-multiplication overhead. To reduce this overhead, Albu-Rghaif and Lami [10] introduced a two-stage compression strategy that separately sparsifies the code-phase and Doppler dictionaries, substantially lowering memory and computational requirements. Zhou et al. [11] proposed a GNSS compressive acquisition algorithm that optimizes the sensing matrix by minimizing the Frobenius norm between its Gram matrix and an approximate ETF matrix, using a modified conjugate gradient method to reduce mutual coherence. Alternatively, Deng et al. [12] exploited singular value decomposition to design a measurement matrix with optimized incoherence properties; this method directly targets the enhancement of signal acquisition probability for compressed-sensing receivers in low-SNR regimes. Zhang et al. [13] further improved Gaussian matrices by applying singular value decomposition (SVD) to enhance mutual incoherence and combined them with partial matched filter–FFT (PMF–FFT) preprocessing, achieving near-conventional performance with far fewer operations.
On the reconstruction side, Yang et al. [14] adapted the Alternating Direction Method of Multipliers (ADMM) to GNSS CS acquisition. By decomposing the problem into parallel subproblems with closed-form updates, their approach dramatically reduces runtime while maintaining robustness under high dynamics. Ma et al. [15] took a different route, precomputing offline compression matrices for code phase and frequency bins and combining denoising back-projection with non-iterative shrinkage–thresholding, which effectively suppresses measurement noise and accelerates recovery.
More recently, researchers have started exploring quantum acceleration: preliminary studies propose embedding the Quantum Approximate Optimization Algorithm (QAOA) into the support-detection stage of CS recovery, potentially enabling quantum speedups for future large-scale, high-dimensional GNSS acquisition problems on NISQ or fault-tolerant quantum hardware [16].
Despite these advances, iterative sparse solvers remain too slow for real-time embedded implementations. We instead optimize the measurement matrix offline to yield a Gram matrix close to identity in the signal subspace and then reconstruct via simple direct projection to avoid OMP entirely. Eigendecomposition-based projection on the symmetric positive-definite manifold drives mutual coherence close to the Welch bound. Unlike greedy pursuit methods, whose online computational cost grows with both dictionary size and signal sparsity, the resulting deterministic measurement matrix keeps the online complexity linear in the number of measurements and dictionary atoms.

2. Compressive Sensing Theory

Compressive sensing (CS) captures sparse signals using far fewer measurements than the Nyquist rate [7]. When a signal is sparse in some transform basis, a small number of incoherent linear projections suffice to reconstruct the original waveform by solving a sparsity-promoting optimization problem.

2.1. Sparse Representation and Compressed Measurements

Let the original signal $\mathbf{x} \in \mathbb{R}^N$ be an $N$-dimensional real-valued vector that admits a sparse representation in some transform domain [17]. Given a transform basis matrix $\boldsymbol{\Psi} \in \mathbb{R}^{N \times N}$, the signal $\mathbf{x}$ can be written as
$$\mathbf{x} = \boldsymbol{\Psi}\mathbf{s},$$
where $\mathbf{s} \in \mathbb{R}^N$ is a $K$-sparse coefficient vector; that is, only $K$ entries of $\mathbf{s}$ are non-zero, and $K$ denotes the sparsity level of the signal.
A measurement matrix $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$, which is incoherent with the basis $\boldsymbol{\Psi}$, is then used to project the signal onto a lower-dimensional subspace and obtain the measurement vector $\mathbf{y} \in \mathbb{C}^M$ with $M \ll N$ as
$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{x} = \boldsymbol{\Phi}\boldsymbol{\Psi}\mathbf{s} = \boldsymbol{\Theta}\mathbf{s},$$
where $\boldsymbol{\Theta} = \boldsymbol{\Phi}\boldsymbol{\Psi} \in \mathbb{C}^{M \times N}$ denotes the equivalent sensing matrix.
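For concreteness, the measurement model above can be exercised numerically; the following is a minimal sketch (Python/NumPy, with illustrative dimensions and a random orthonormal basis standing in for $\boldsymbol{\Psi}$, not the configuration used later in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 96, 4                        # illustrative dimensions (not the paper's 1024/768)

# K-sparse coefficient vector s
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Orthonormal transform basis Psi (a random orthonormal basis, for illustration only)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
x = Psi @ s                                 # x = Psi s, sparse in the Psi domain

# Gaussian measurement matrix Phi with unit-norm columns, M << N
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)

Theta = Phi @ Psi                           # equivalent sensing matrix Theta = Phi Psi
y = Phi @ x                                 # compressed measurements y = Phi x = Theta s
print(y.shape, np.allclose(y, Theta @ s))   # (96,) True
```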

2.2. Signal Reconstruction and Optimization Problem

Compressive sensing recovers the sparse vector $\mathbf{s}$ from $\mathbf{y}$ by minimizing the $\ell_1$ norm subject to an $\ell_2$ data-fidelity constraint as
$$\hat{\mathbf{s}} = \arg\min_{\mathbf{s}} \|\mathbf{s}\|_1 \quad \text{subject to} \quad \|\mathbf{y} - \boldsymbol{\Theta}\mathbf{s}\|_2 \le \varepsilon,$$
where $\hat{\mathbf{s}}$ denotes the estimated sparse coefficient vector, $\varepsilon$ is the noise-tolerance parameter, $\|\cdot\|_1$ is the $\ell_1$ norm, and $\|\cdot\|_2$ is the $\ell_2$ norm. Once this optimization problem is solved, the reconstructed signal is obtained as
$$\hat{\mathbf{x}} = \boldsymbol{\Psi}\hat{\mathbf{s}}.$$
Unfortunately, iterative solvers such as Orthogonal Matching Pursuit (OMP) [18] and Basis Pursuit (BP) [18] are too slow for low-power GNSS receivers operating under strict real-time constraints.

2.3. Direct Projection Method for GNSS Signal Acquisition

Let the received baseband signal be $\mathbf{r} \in \mathbb{C}^N$ and the local pseudorandom noise (PRN) code sequence be $\mathbf{c} \in \mathbb{C}^N$. GNSS acquisition locates the correlation peak via cyclic correlation [19,20] as
$$R_{rc}(\tau) = (\mathbf{r} \circledast \mathbf{c}^{*})(\tau),$$
where $\circledast$ denotes circular convolution, $\tau$ is the code-phase offset, and $(\cdot)^{*}$ denotes complex conjugation. In practice, a two-dimensional search over code phase $\tau$ and Doppler frequency $f_d$ identifies the signal parameters [21].
Unlike generic sparse reconstruction, GNSS acquisition does not require precise amplitude recovery; only the peak location must be preserved [22]. Acquisition succeeds as long as
$$\arg\max_{\tau} \left| (\mathbf{r} \circledast \mathbf{c}^{*})(\tau) \right| = \arg\max_{\tau} \left| (\hat{\mathbf{r}} \circledast \mathbf{c}^{*})(\tau) \right|.$$
This observation enables a far simpler reconstruction scheme.
Let the measurement matrix $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$ with $M \ll N$ perform compressed sampling on the baseband signal $\mathbf{r}$ as
$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{r} + \mathbf{n},$$
where $\mathbf{y} \in \mathbb{C}^M$ is the measurement vector and $\mathbf{n}$ denotes measurement noise. Instead of solving an iterative sparse-recovery problem, direct projection reconstructs via the adjoint as follows:
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}^{H}\mathbf{y}.$$
Defining the Gram matrix as
$$\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi},$$
we have
$$\hat{\mathbf{r}} = \mathbf{G}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n}.$$
For an underdetermined system with $M < N$, $\operatorname{rank}(\mathbf{G}) \le M < N$, so $\mathbf{G}$ cannot equal $\mathbf{I}_N$. We write
$$\mathbf{G} = \mathbf{I}_N + \mathbf{E},$$
where $\mathbf{E}$ is a perturbation matrix, giving
$$\hat{\mathbf{r}} = \mathbf{r} + \mathbf{E}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n}.$$
GNSS acquisition does not demand exact waveform recovery; it suffices to preserve the correlation-peak location. This holds if $\mathbf{G}$ is approximately diagonally dominant: diagonal entries near one and off-diagonal entries small. More generally, within the principal signal subspace, $\mathbf{G}$ can approximate a scaled projection as follows:
$$\mathbf{G} \approx \alpha\mathbf{P} + \mathbf{E},$$
where $\alpha > 0$ is a scaling factor, $\mathbf{P}$ is a rank-$r$ projection with $r \ge K$ and $K < M < N$ ($K$ denotes the signal sparsity), and $\mathbf{E}$ captures the residual correlation outside the main support. The reconstructed signal in the effective subspace becomes
$$\hat{\mathbf{r}} \approx \alpha\mathbf{r} + \mathbf{E}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n},$$
while components orthogonal to the signal subspace remain irrecoverable. As long as $\mathbf{E}\mathbf{r}$ and $\boldsymbol{\Phi}^{H}\mathbf{n}$ are small compared with the main-lobe energy, the peak position is preserved. Section 3 develops a Gram-matrix optimization strategy that drives $\mathbf{G}$ toward diagonal dominance with minimal off-diagonal entries, enabling effective direct projection for GNSS acquisition.
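The direct-projection reconstruction and its perturbation decomposition can be checked numerically. The sketch below is an assumed NumPy rendering with a synthetic ±1 code standing in for a PRN sequence; dimensions and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 512, 384

# Synthetic +/-1 "PRN-like" code and a noisy, circularly shifted received signal
c = rng.choice([-1.0, 1.0], size=N)
true_shift = 123
r = np.roll(c, true_shift) + 0.3 * rng.standard_normal(N)

# Column-normalized Gaussian measurement matrix and compressed measurements
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)
y = Phi @ r

# Direct projection: r_hat = Phi^H y = G r, with G = Phi^H Phi = I + E
r_hat = Phi.conj().T @ y
G = Phi.conj().T @ Phi
E = G - np.eye(N)
print("largest off-diagonal |E_ij|:", np.abs(E - np.diag(np.diag(E))).max())

# The peak location is preserved as long as E r stays small relative to the main lobe
def circ_corr(a, b):
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))

peak_true = np.argmax(np.abs(circ_corr(r, c)))
peak_rec = np.argmax(np.abs(circ_corr(r_hat, c)))
print("true peak:", peak_true, "reconstructed peak:", peak_rec)  # both should report 123
```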

3. Compressive-Sensing-Based Fast Acquisition Algorithm Using Direct Projection

3.1. Frequency-Domain Fast Cyclic Correlation

Cyclic correlation is computed in the frequency domain for efficiency [23]. Let $\hat{\mathbf{r}}$ denote the reconstructed baseband signal (from direct projection) and $\mathbf{c}$ denote the local PRN code. Their discrete Fourier transforms are
$$R(f) = \operatorname{FFT}(\hat{\mathbf{r}}), \qquad C(f) = \operatorname{FFT}(\mathbf{c}),$$
where $f$ indexes frequency. The cyclic cross-correlation between $\hat{\mathbf{r}}$ and $\mathbf{c}$ follows from element-wise multiplication with complex conjugation, followed by the inverse FFT:
$$R_{rc}(\tau) = \operatorname{IFFT}\!\left(R(f) \odot C^{*}(f)\right),$$
where $\odot$ denotes element-wise multiplication, $(\cdot)^{*}$ denotes complex conjugation, and $R_{rc}(\tau)$ denotes the correlation at code phase $\tau$.
This frequency-domain implementation costs $O(N\log N)$ per correlation versus $O(N^2)$ in the time domain [7]. Here, $\hat{\mathbf{r}}$ is a back-projection into the $N$-dimensional space, not a subsampled version of the original signal, so the FFT/IFFT serve purely as algorithmic acceleration. The frequency-domain formulation is mathematically equivalent to time-domain cyclic correlation; as long as $\mathbf{G}$ is approximately diagonally dominant and the perturbation terms in $\hat{\mathbf{r}}$ are small, the peak location is preserved.
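A minimal sketch of the FFT-based cyclic correlation, checked against the direct $O(N^2)$ time-domain computation (assumed NumPy implementation; the array sizes are illustrative):

```python
import numpy as np

def cyclic_corr_fft(r_hat, c):
    """R_rc(tau) = IFFT(FFT(r_hat) * conj(FFT(c))), cost O(N log N)."""
    return np.fft.ifft(np.fft.fft(r_hat) * np.conj(np.fft.fft(c)))

def cyclic_corr_direct(r_hat, c):
    """Time-domain cyclic correlation, cost O(N^2); used only as a reference."""
    N = len(r_hat)
    return np.array([np.sum(r_hat * np.conj(np.roll(c, tau))) for tau in range(N)])

rng = np.random.default_rng(2)
N = 256
c = rng.choice([-1.0, 1.0], size=N)              # illustrative PRN-like code
r_hat = np.roll(c, 37) + 0.2 * rng.standard_normal(N)

R_fft = cyclic_corr_fft(r_hat, c)
R_ref = cyclic_corr_direct(r_hat, c)
print(np.allclose(R_fft, R_ref))                 # True: the two formulations agree
print(np.argmax(np.abs(R_fft)))                  # 37: peak at the inserted code phase
```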

3.2. Peak Detection and Acquisition Decision

After correlating over all Doppler bins, the acquisition result is a two-dimensional correlation matrix $R(d, \tau)$, where $d = 0, 1, \ldots, D-1$ indexes the Doppler frequency and $\tau = 0, 1, \ldots, L-1$ indexes the code phase. The correlation peak is
$$ (d^{*}, \tau^{*}) = \arg\max_{d,\tau} |R(d, \tau)|. $$
A detection threshold $\gamma$ determines signal presence. If $|R(d^{*}, \tau^{*})| > \gamma$, acquisition succeeds; otherwise, no satellite is detected in the current search window.
Once acquisition is declared successful, the code-phase estimate $\hat{\tau}$ and Doppler estimate $\hat{f}_d$ are obtained by mapping the peak indices to physical parameters as
$$\hat{\tau} = \tau^{*} \cdot \Delta\tau, \qquad \hat{f}_d = f_{\min} + d^{*} \cdot \Delta f,$$
where $\Delta\tau$ and $\Delta f$ are the code-phase and Doppler step sizes and $f_{\min}$ is the minimum Doppler frequency in the search range. These estimates initialize the tracking loop.
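The peak search and index-to-parameter mapping described above amount to a few lines of array processing; a hedged sketch follows (assumed NumPy implementation with an illustrative Doppler grid and threshold):

```python
import numpy as np

def acquire(R, gamma, delta_tau, delta_f, f_min):
    """Search the D x L correlation surface R(d, tau), apply the threshold gamma,
    and map the winning indices to a code-phase / Doppler estimate."""
    d_star, tau_star = np.unravel_index(np.argmax(np.abs(R)), R.shape)
    if np.abs(R[d_star, tau_star]) <= gamma:
        return None                                 # no satellite declared in this window
    tau_hat = int(tau_star) * delta_tau             # code-phase estimate
    f_d_hat = f_min + int(d_star) * delta_f         # Doppler estimate
    return tau_hat, f_d_hat

# Illustrative example: D = 49 Doppler bins over +/-12 kHz, L = 1023 code phases
D, L = 49, 1023
rng = np.random.default_rng(3)
R = rng.standard_normal((D, L)) + 1j * rng.standard_normal((D, L))
R[20, 400] = 30.0                                   # synthetic correlation peak
print(acquire(R, gamma=10.0, delta_tau=0.5, delta_f=500.0, f_min=-12000.0))
# -> (200.0, -2000.0)
```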

3.3. Measurement Matrix Design Based on Gram-Matrix Optimization

Direct projection works only if the measurement matrix $\boldsymbol{\Phi}$ yields a Gram matrix $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$ that is approximately diagonally dominant: diagonal entries near one and off-diagonal entries small. This requirement is closely tied to the column (mutual) coherence [18], defined as
$$\mu(\boldsymbol{\Phi}) = \max_{i \ne j} \frac{|\langle \boldsymbol{\phi}_i, \boldsymbol{\phi}_j \rangle|}{\|\boldsymbol{\phi}_i\|_2 \, \|\boldsymbol{\phi}_j\|_2},$$
where $\boldsymbol{\phi}_i$ and $\boldsymbol{\phi}_j$ are the $i$-th and $j$-th columns of $\boldsymbol{\Phi}$. High coherence produces large off-diagonal entries in $\mathbf{G}$, severely distorting the reconstructed signal.
Random Gaussian matrices satisfy the restricted isometry property (RIP) with high probability when the number of measurements is large [18], but in the low-measurement regime considered here, a purely random $\boldsymbol{\Phi}$ typically yields a $\mathbf{G}$ far from the identity with substantial off-diagonal terms. Directly minimizing $\mu(\boldsymbol{\Phi})$ in the high-dimensional measurement-matrix space is a hard non-convex problem.
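Mutual coherence can be evaluated directly from the normalized Gram matrix. The following assumed NumPy sketch computes the maximum and mean coherence of a Gaussian baseline matrix at the $(M, N) = (768, 1024)$ size used in Section 5 and compares them with the Welch bound; the printed values can be compared against the Gaussian-baseline figures reported there.

```python
import numpy as np

def coherence_stats(Phi):
    """Maximum and mean absolute inner product between distinct normalized columns."""
    Phi_n = Phi / np.linalg.norm(Phi, axis=0)            # unit-norm columns
    G = np.abs(Phi_n.conj().T @ Phi_n)                   # magnitude of the Gram matrix
    off = G[~np.eye(G.shape[1], dtype=bool)]             # off-diagonal entries only
    return off.max(), off.mean()

rng = np.random.default_rng(4)
M, N = 768, 1024
mu_max, mu_mean = coherence_stats(rng.standard_normal((M, N)))
mu_welch = np.sqrt((N - M) / (M * (N - 1)))              # Welch lower bound, ~0.0181 here
print(f"Gaussian baseline: max {mu_max:.3f}, mean {mu_mean:.4f}, Welch bound {mu_welch:.4f}")
```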
We adopt an indirect strategy: introduce a transformation $\mathbf{T}$ so that $\boldsymbol{\Phi} = \mathbf{T}\boldsymbol{\Phi}_0$, where $\boldsymbol{\Phi}_0$ is an initial Gaussian matrix, and optimize the Gram matrix
$$\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi} = \boldsymbol{\Phi}_0^{H}\mathbf{T}^{H}\mathbf{T}\boldsymbol{\Phi}_0$$
to be diagonally dominant and well conditioned, while remaining symmetric positive definite. Working on the manifold of symmetric positive-definite (SPD) matrices exploits the geometric structure to design stable update rules (eigendecomposition-based projections and manifold-aware gradient steps) that systematically suppress off-diagonal energy. This converts the coherence-reduction problem for $\boldsymbol{\Phi}$ into a Gram-matrix optimization problem for $\mathbf{G}$, which is more amenable to analysis and numerical implementation.

3.3.1. Objective Function Under a Bi-Level Optimization Framework

We construct a bi-level framework that indirectly reduces coherence by optimizing the Gram matrix via manifold projection. In the outer layer, a linear transformation adjusts the measurement-matrix column relationships so that the Gram matrix approaches a diagonally dominant target. In the inner layer, the Gram matrix is projected onto the symmetric positive-definite (SPD) manifold, preserving positive definiteness and numerical stability throughout [24,25].
Let the initial measurement matrix $\boldsymbol{\Phi}_0 \in \mathbb{C}^{M \times N}$ be a column-normalized Gaussian-random matrix satisfying the underdetermined condition $M < N$. An optimized measurement matrix $\boldsymbol{\Phi}$ is obtained by applying a transformation $\mathbf{T} \in \mathbb{C}^{M \times M}$:
$$\boldsymbol{\Phi} = \mathbf{T}\boldsymbol{\Phi}_0.$$
The corresponding Gram matrix is
$$\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi} = \boldsymbol{\Phi}_0^{H}\mathbf{T}^{H}\mathbf{T}\boldsymbol{\Phi}_0.$$
In the remainder of this subsection, the transformation and the corresponding Gram matrix at the $k$-th iteration are denoted explicitly by $\mathbf{T}_k$ and $\mathbf{G}_k$.
Ideally, $\mathbf{G} = \mathbf{I}_N$, so that the columns of $\boldsymbol{\Phi}$ are orthonormal. However, under the underdetermined condition $M < N$, $\operatorname{rank}(\mathbf{G}) \le M < N$, so $\mathbf{G}$ cannot equal $\mathbf{I}_N$ over the entire space. In practice, it suffices to make $\mathbf{G}$ approximately identity-like on the effective signal subspace and diagonally dominant overall. As in (11), we write $\mathbf{G} = \mathbf{I}_N + \mathbf{E}$, where $\mathbf{E}$ is a perturbation matrix whose off-diagonal entries are to be minimized. The goal is to suppress the off-diagonal entries of $\mathbf{G}$ while keeping the diagonal entries close to one.
Let $\mathbf{1} \in \mathbb{R}^N$ denote the all-ones vector, and define the operator that extracts the off-diagonal part of a matrix as
$$\operatorname{Off}(\mathbf{G}) = \mathbf{G} - \operatorname{diag}(\operatorname{diag}(\mathbf{G})),$$
where $\operatorname{diag}(\cdot)$ constructs a diagonal matrix from its vector argument. A natural measure of the off-diagonal energy is then
$$J_{\mathrm{off}}(\mathbf{G}) = \|\operatorname{Off}(\mathbf{G})\|_F^2,$$
where $\|\cdot\|_F$ denotes the Frobenius norm. Following coherence-based dictionary and measurement-matrix design, this objective directly penalizes inter-column correlations.
To improve numerical stability and explicitly enforce symmetry, we work with the symmetrized Gram matrix $\mathbf{G}_s = \tfrac{1}{2}(\mathbf{G} + \mathbf{G}^{H})$ and perform the optimization in terms of $\mathbf{G}_s$.
To facilitate iterative optimization, we introduce an auxiliary Gram matrix $\tilde{\mathbf{G}}$ and handle the diagonal constraint in a separate normalization step. The resulting least-squares-type objective for the transformation matrix $\mathbf{T}$ can be written as
$$\min_{\mathbf{T}} \; J(\mathbf{T}) = \underbrace{\|\operatorname{Off}(\mathbf{G})\|_F^2}_{\text{off-diagonal suppression}} \; + \; \underbrace{\alpha_{\mathrm{tr}} \operatorname{tr}(\mathbf{T}^{H}\mathbf{T})}_{\text{trace regularization}} \; + \; \underbrace{\lambda_d \, \|\operatorname{diag}(\mathbf{G}) - \mathbf{1}\|_2^2}_{\text{diagonal constraint}},$$
where $\alpha_{\mathrm{tr}} > 0$ is a trace-regularization coefficient that prevents the norm of $\mathbf{T}$ from growing excessively, $\lambda_d > 0$ weights the deviation of the diagonal entries from one, and $\|\cdot\|_2$ is the Euclidean norm.
In the $k$-th iteration, the current Gram matrix is constructed from $\mathbf{T}_k$ and the initial measurement matrix $\boldsymbol{\Phi}_0$ as
$$\mathbf{G}_k = (\mathbf{T}_k\boldsymbol{\Phi}_0)^{H}(\mathbf{T}_k\boldsymbol{\Phi}_0) = \boldsymbol{\Phi}_0^{H}\mathbf{T}_k^{H}\mathbf{T}_k\boldsymbol{\Phi}_0.$$
A small diagonal loading is then added to obtain a regularized Gram matrix as
$$\tilde{\mathbf{G}}_k = \mathbf{G}_k + \rho\mathbf{I}_N,$$
where $\rho > 0$ is a diagonal-loading parameter that improves conditioning and promotes strict positive definiteness. Based on $\tilde{\mathbf{G}}_k$, we apply diagonal enhancement, adaptive thresholding, and manifold projection to iteratively drive the Gram matrix toward a diagonally dominant SPD structure with an improved condition number.
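As a concrete illustration of the objective, the sketch below evaluates $J(\mathbf{T})$ for candidate transformations. This is an assumed NumPy rendering; the weights $\alpha_{\mathrm{tr}}$ and $\lambda_d$ are illustrative placeholders rather than the values used by the authors.

```python
import numpy as np

def off_diagonal(G):
    """Off(G): zero out the diagonal."""
    return G - np.diag(np.diag(G))

def objective_J(T, Phi0, alpha_tr=1e-3, lambda_d=1.0):
    """J(T) = ||Off(G)||_F^2 + alpha_tr*tr(T^H T) + lambda_d*||diag(G) - 1||_2^2,
    with G = Phi0^H T^H T Phi0 (the Gram matrix of T @ Phi0)."""
    Phi = T @ Phi0
    G = Phi.conj().T @ Phi
    off_term = np.linalg.norm(off_diagonal(G), 'fro') ** 2
    trace_term = alpha_tr * np.real(np.trace(T.conj().T @ T))
    diag_term = lambda_d * np.linalg.norm(np.diag(G) - 1.0) ** 2
    return float(np.real(off_term + trace_term + diag_term))

rng = np.random.default_rng(5)
M, N = 96, 128
Phi0 = rng.standard_normal((M, N))
Phi0 /= np.linalg.norm(Phi0, axis=0)           # column-normalized initial Gaussian matrix
print("J at T = I:", objective_J(np.eye(M), Phi0))
print("J after a random diagonal rescaling:",
      objective_J(np.diag(rng.uniform(0.5, 1.5, M)), Phi0))
```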

3.3.2. Adaptive Threshold Setting

The adaptive threshold adopts a two-stage decay scheme balancing convergence speed and optimization accuracy. Early exponential decay rapidly reduces overall coherence; late linear decay refines the Gram-matrix structure for local optimality.
In the early exponential-decay stage, for iterations $k = 0, 1, \ldots, I_e$, the threshold is
$$\theta_k = \theta_0 \exp(-\beta k),$$
where $\beta > 0$ is the decay coefficient and $\theta_0$ is the initial threshold defined as
$$\theta_0 = \max_{i \ne j} |g_{ij}^{(0)}|,$$
which is the maximum absolute off-diagonal element of the initial Gram matrix $\mathbf{G}^{(0)}$. This stage achieves fast suppression of large off-diagonal entries and quick global coherence reduction.
In the late linear-decay stage, for iterations $k = I_e + 1, \ldots, I_{\max}$, the threshold is
$$\theta_k = \theta_{\min} + \frac{I_{\max} - k}{I_{\max} - I_e}\,(\theta_{I_e} - \theta_{\min}),$$
where $\theta_{\min}$ prevents excessive decay that would degrade conditioning, and $I_e$ and $I_{\max}$ specify the switching point and the total number of iterations, respectively. This stage slowly eliminates small residual coherence without over-shrinkage.
Combining exponential decay early with linear refinement later achieves both rapid global coherence reduction and fine-grained local optimization.
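The two-stage schedule reduces to a small helper function; a sketch under assumed, illustrative parameter values ($\theta_0$, $\beta$, $I_e$, $I_{\max}$, $\theta_{\min}$ below are placeholders, not the authors' settings):

```python
import numpy as np

def adaptive_threshold(k, theta_0, theta_Ie, theta_min, beta, I_e, I_max):
    """Two-stage schedule: exponential decay for k <= I_e, linear decay afterwards."""
    if k <= I_e:
        return theta_0 * np.exp(-beta * k)                                # fast global suppression
    return theta_min + (I_max - k) / (I_max - I_e) * (theta_Ie - theta_min)  # slow refinement

theta_0, beta, I_e, I_max, theta_min = 0.13, 0.1, 20, 60, 0.01            # illustrative values
theta_Ie = theta_0 * np.exp(-beta * I_e)                                  # threshold at the switch point
schedule = [adaptive_threshold(k, theta_0, theta_Ie, theta_min, beta, I_e, I_max)
            for k in range(I_max + 1)]
print(round(schedule[0], 4), round(schedule[I_e], 4), round(schedule[I_max], 4))
# 0.13 at the start, ~0.0176 at the switch point, 0.01 at the final iteration
```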

3.3.3. Manifold Projection Based on Eigendecomposition

Directly minimizing the objective in Section 3.3.1 is a non-convex problem without a closed-form solution. Since the Gram matrix is Hermitian positive semidefinite, its eigendecomposition coincides with its SVD up to zero singular values. We therefore use the eigendecomposition to perform manifold projection on the (regularized) SPD manifold [26,27]. In the underdetermined case $M < N$,
$$\operatorname{rank}(\mathbf{G}) \le M < N,$$
so $\mathbf{G}$ cannot exactly equal $\mathbf{I}_N$. Instead, we iteratively drive $\mathbf{G}$ toward a diagonally dominant, well-conditioned symmetric positive-definite (SPD) matrix that approximately preserves the correlation-peak structure.
At iteration $k$, the current Gram matrix is
$$\mathbf{G}_k = (\mathbf{T}_k\boldsymbol{\Phi}_0)^{H}(\mathbf{T}_k\boldsymbol{\Phi}_0) = \boldsymbol{\Phi}_0^{H}\mathbf{T}_k^{H}\mathbf{T}_k\boldsymbol{\Phi}_0.$$
We use the regularized Gram matrix $\tilde{\mathbf{G}}_k$ as defined in (27).
Using the adaptive threshold $\theta_k$ from Section 3.3.2, we apply a shrinkage operator to the off-diagonal entries of $\tilde{\mathbf{G}}_k$. For an off-diagonal element $g_{ij}^{(k)}$ ($i \ne j$), the complex shrinkage operator is
$$\mathcal{S}_{\theta_k}\!\left(g_{ij}^{(k)}\right) = \begin{cases} \left(|g_{ij}^{(k)}| - \beta_s\theta_k\right)\dfrac{g_{ij}^{(k)}}{|g_{ij}^{(k)}|}, & |g_{ij}^{(k)}| > \theta_k, \\[6pt] g_{ij}^{(k)}, & |g_{ij}^{(k)}| \le \theta_k, \end{cases}$$
where $\beta_s > 0$ controls the suppression level. This reduces large off-diagonal entries while preserving their complex phase. The diagonal entries are fixed at one to reflect column normalization. The thresholded Gram matrix is
$$\tilde{\mathbf{G}}_k \leftarrow \operatorname{diag}(\mathbf{1}) + \operatorname{Off}\!\left(\mathcal{S}_{\theta_k}(\tilde{\mathbf{G}}_k)\right),$$
where $\mathbf{1}$ is the all-ones vector and $\operatorname{Off}(\cdot)$ extracts the off-diagonal entries. The shrinkage step is analogous to the soft-thresholding used in sparse regularization and wavelet denoising [18].
Because thresholding and finite-precision effects may destroy positive definiteness, $\tilde{\mathbf{G}}_k$ is projected back onto the SPD manifold via eigendecomposition. Symmetrization is first applied as
$$\mathbf{G}_{s,k} = \tfrac{1}{2}\left(\tilde{\mathbf{G}}_k + \tilde{\mathbf{G}}_k^{H}\right),$$
followed by the eigendecomposition
$$\mathbf{G}_{s,k} = \mathbf{U}_k\boldsymbol{\Lambda}_k\mathbf{U}_k^{H},$$
where $\mathbf{U}_k$ is a unitary matrix of eigenvectors and $\boldsymbol{\Lambda}_k = \operatorname{diag}(\lambda_{k,1}, \ldots, \lambda_{k,N})$ contains the eigenvalues. To enforce strict positive definiteness, the eigenvalues are clipped from below as
$$\tilde{\boldsymbol{\Lambda}}_k = \operatorname{diag}(\tilde{\lambda}_{k,1}, \ldots, \tilde{\lambda}_{k,N}), \qquad \tilde{\lambda}_{k,i} = \max(\lambda_{k,i}, \delta),$$
where $\delta > 0$ is a small constant. The projected SPD Gram matrix is then
$$\mathbf{G}_k^{\mathrm{SPD}} = \mathbf{U}_k\tilde{\boldsymbol{\Lambda}}_k\mathbf{U}_k^{H}.$$
Balancing stability and convergence speed, a smoothing update is performed as
$$\mathbf{G}_{k+1} = (1 - \eta_k)\,\mathbf{G}_k + \eta_k\,\mathbf{G}_k^{\mathrm{SPD}},$$
where $\eta_k \in (0, 1)$ is a smoothing coefficient. An adaptive schedule is defined as
$$\eta_k = \eta_0 + (\eta_{\max} - \eta_0)\,\frac{k}{I_{\max}},$$
which allows the step size to increase gradually from $\eta_0$ to $\eta_{\max}$ as the iteration index $k$ grows, so that early iterations emphasize stability while later ones accelerate convergence.
Given the updated Gram matrix $\mathbf{G}_{k+1}$, the corresponding transformation $\mathbf{T}_{k+1}$ is obtained by solving a regularized least-squares problem as
$$\mathbf{T}_{k+1} = \arg\min_{\mathbf{T}} \left\|\boldsymbol{\Phi}_0^{H}\mathbf{T}^{H}\mathbf{T}\boldsymbol{\Phi}_0 - \mathbf{G}_{k+1}\right\|_F^2 + \alpha_{\mathrm{tr}}\operatorname{tr}(\mathbf{T}^{H}\mathbf{T}),$$
where $\alpha_{\mathrm{tr}} > 0$ is the same trace-regularization coefficient as in (25) [18]. This problem can be solved numerically using gradient-based or alternating minimization methods. The measurement matrix is then updated as
$$\boldsymbol{\Phi}_{k+1} = \mathbf{T}_{k+1}\boldsymbol{\Phi}_0.$$
After a finite number of iterations, or once a convergence criterion is met, the final optimized measurement matrix is $\boldsymbol{\Phi}_{\mathrm{opt}} = \mathbf{T}_{\mathrm{final}}\boldsymbol{\Phi}_0$.
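For concreteness, one outer iteration of the Gram-matrix update (off-diagonal shrinkage, unit diagonal, symmetrization, eigenvalue clipping, smoothing) is condensed below. This is an assumed NumPy rendering of the steps in this subsection, not the authors' reference implementation, and it deliberately omits the inner least-squares update that recovers $\mathbf{T}_{k+1}$.

```python
import numpy as np

def gram_spd_step(G_k, theta_k, beta_s=0.5, delta=1e-6, eta_k=0.3):
    """One CGM-style update: complex shrinkage of off-diagonal entries, unit diagonal,
    eigendecomposition-based projection onto the SPD cone, then smoothing."""
    G = G_k.copy()
    mask = ~np.eye(G.shape[0], dtype=bool)
    off = G[mask]
    large = np.abs(off) > theta_k
    # Shrink only large off-diagonal entries, keeping their complex phase
    off[large] = (np.abs(off[large]) - beta_s * theta_k) * off[large] / np.abs(off[large])
    G[mask] = off
    np.fill_diagonal(G, 1.0)                      # fix diagonal entries at one
    G_s = 0.5 * (G + G.conj().T)                  # symmetrization
    lam, U = np.linalg.eigh(G_s)                  # eigendecomposition of the Hermitian matrix
    lam = np.maximum(lam, delta)                  # clip eigenvalues from below
    G_spd = (U * lam) @ U.conj().T                # projected SPD Gram matrix
    return (1.0 - eta_k) * G_k + eta_k * G_spd    # smoothing update G_{k+1}

rng = np.random.default_rng(6)
M, N = 96, 128
Phi0 = rng.standard_normal((M, N))
Phi0 /= np.linalg.norm(Phi0, axis=0)
G = Phi0.T @ Phi0 + 1e-3 * np.eye(N)              # regularized initial Gram matrix
for k in range(10):
    theta_k = 0.3 * np.exp(-0.2 * k)              # illustrative decaying threshold
    G = gram_spd_step(G, theta_k)
off = np.abs(G - np.diag(np.diag(G)))
print("largest off-diagonal entry after 10 steps:", off.max())
```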

3.4. Design Principle of Fast Acquisition

The proposed scheme exploits the structural properties of GNSS acquisition to reduce the online computational burden of CS-based methods. As established in Section 2.3, exact waveform reconstruction is unnecessary; preserving the correlation-peak location is sufficient for reliable acquisition. The method therefore replaces iterative sparse-recovery algorithms with a direct-projection strategy governed by the Gram matrix $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$.
From Section 2.3, the compressive measurement model can be written as
$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{r} + \mathbf{n},$$
where $\mathbf{r} \in \mathbb{C}^N$ is the Nyquist-rate-sampled baseband signal, $\mathbf{y} \in \mathbb{C}^M$ is the compressed measurement vector, $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$ is the sensing matrix with $M < N$, and $\mathbf{n}$ denotes additive noise. The proposed reconstruction is obtained by direct projection as
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}^{H}\mathbf{y} = \mathbf{G}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n},$$
where $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi} \in \mathbb{C}^{N \times N}$ is the Gram matrix.
We decompose $\mathbf{G}$ as
$$\mathbf{G} = \mathbf{I}_N + \mathbf{E},$$
where $\mathbf{I}_N$ is the $N \times N$ identity matrix and $\mathbf{E}$ denotes the perturbation matrix. A diagonally dominant $\mathbf{G}$ with small off-diagonal entries ensures that $\mathbf{E}\mathbf{r}$ remains negligible, thereby preserving the principal correlation peak of $\mathbf{r}$ after reconstruction.
The core design principle is to construct, offline, a measurement matrix $\boldsymbol{\Phi}$ whose Gram matrix approximates the identity within the effective signal subspace. Section 3.3 realizes this through Gram-matrix optimization on the SPD manifold. Specifically, a transformation $\mathbf{T} \in \mathbb{C}^{M \times M}$ is introduced such that
$$\boldsymbol{\Phi} = \mathbf{T}\boldsymbol{\Phi}_0,$$
where $\boldsymbol{\Phi}_0 \in \mathbb{C}^{M \times N}$ is an initial Gaussian sensing matrix. The Gram matrix
$$\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi} = \boldsymbol{\Phi}_0^{H}\mathbf{T}^{H}\mathbf{T}\boldsymbol{\Phi}_0$$
is iteratively refined toward diagonal dominance via eigendecomposition-based manifold projection and adaptive thresholding. This optimization is executed offline and reused across all subsequent acquisitions.
Once $\boldsymbol{\Phi}_{\mathrm{opt}}$ is obtained, online acquisition proceeds efficiently via
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}_{\mathrm{opt}}^{H}\mathbf{y},$$
which yields an $N \times 1$ reconstructed signal. This is followed by FFT/IFFT-based cyclic correlation with the local PRN code of length $N$. Peak detection on the code-phase/Doppler surface then yields the acquisition decision. All computationally intensive steps are confined to the offline stage.

3.5. Computational Complexity Analysis

The computational complexity of the proposed compressive-sensing-based fast acquisition algorithm using Gram-matrix optimization via direct projection comprises an offline optimization stage and an online acquisition stage. Here, the term “online complexity” refers to the asymptotic number of arithmetic operations that must be executed per coherent integration interval (and per Doppler bin) in real-time receiver operation, excluding the one-time offline optimization of the sensing matrix.
In the offline stage, the sensing matrix $\boldsymbol{\Phi}_{\mathrm{opt}} \in \mathbb{C}^{M \times N}$ is designed through repeated eigendecomposition of the $N \times N$ Gram matrix $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$. With a maximum iteration count $I_{\max}$, this stage incurs a complexity of $O(I_{\max} N^3)$. The procedure is executed once during system design, and the resulting matrix is reused thereafter.
In the online stage, the compressed measurements $\mathbf{y} \in \mathbb{C}^M$ are first projected via
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}_{\mathrm{opt}}^{H}\mathbf{y},$$
where $\boldsymbol{\Phi}_{\mathrm{opt}}^{H} \in \mathbb{C}^{N \times M}$, so the projection requires $O(MN)$ operations and yields $\hat{\mathbf{r}} \in \mathbb{C}^N$. The reconstructed signal $\hat{\mathbf{r}}$ is then correlated with the local PRN code (also of length $N$) using FFT/IFFT-based circular correlation at a cost of $O(N\log N)$. The overall online complexity is therefore $O(MN + N\log N)$ [18].
By contrast, classical OMP-based CS acquisition operates on the same sensing matrix $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$ and must iteratively recover a $K$-sparse coefficient vector; its online complexity is approximately $O(MN + KMN)$ per acquisition for $K$-sparse recovery [18]. For large $K$ and large dictionary sizes, the proposed non-iterative approach achieves substantially lower online complexity.
It is important to distinguish this from classical least-squares (LS) methods, which recover $\mathbf{r}$ via
$$\hat{\mathbf{r}}_{\mathrm{LS}} = (\boldsymbol{\Phi}^{H}\boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^{H}\mathbf{y},$$
explicitly forming and inverting the $N \times N$ Gram matrix $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$. This formulation is ill posed when $M < N$ and prohibitively expensive for real-time acquisition. The proposed method instead optimizes $\boldsymbol{\Phi}$ offline such that $\mathbf{G}$ approximates a scaled identity. Online reconstruction then reduces to the direct projection $\hat{\mathbf{r}} = \boldsymbol{\Phi}^{H}\mathbf{y}$, which preserves the correlation-peak location without any matrix inversion.
To give a concrete sense of the computational load, consider a representative GNSS configuration with signal length $N = 1024$, number of measurements $M = 768$, and sparsity level $K = 1$. For the direct-projection schemes (Gaussian-Random, ELAD-Optimized, and the proposed CGM-Optimized), the online cost per coherent integration interval and Doppler bin is $MN + N\log_2 N = 768 \times 1024 + 1024\log_2 1024 \approx 7.86 \times 10^5 + 1.02 \times 10^4 \approx 8.0 \times 10^5$ complex multiplications. By contrast, the CCM-Optimized and SVD-Optimized OMP-based CS acquisition schemes require approximately $(1 + K)MN = 2 \times 768 \times 1024 \approx 1.57 \times 10^6$ operations for the projection-related term (i.e., about twice the direct-projection cost when $K = 1$), while the PMF-FFT-SVD scheme entails $MN + NM\log_2 M \approx 7.86 \times 10^5 + 7.54 \times 10^6 \approx 8.33 \times 10^6$ operations due to the additional FFT/SVD-related processing. Since the proposed direct-projection algorithm does not perform any iterative sparse recovery, its online complexity does not depend on the sparsity level $K$, whereas the OMP-based schemes scale linearly with $K$ and become increasingly expensive as $K$ increases.
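The operation counts quoted above can be reproduced with a few lines; the expressions mirror the complexity formulas in Table 2 for the same $N = 1024$, $M = 768$, $K = 1$ configuration (a sketch counting only the dominant terms):

```python
import numpy as np

N, M, K = 1024, 768, 1

direct_projection = M * N + N * np.log2(N)        # proposed / GR / ELAD direct projection
omp_based = (1 + K) * M * N                       # CCM- and SVD-optimized OMP projection term
pmf_fft_svd = M * N + N * M * np.log2(M)          # PMF-FFT-SVD preprocessing term

print(f"direct projection : {direct_projection:.3e}")  # ~8.0e5 complex multiplications
print(f"OMP-based (K = 1) : {omp_based:.3e}")          # ~1.57e6
print(f"PMF-FFT-SVD       : {pmf_fft_svd:.3e}")        # ~8.3e6
```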

3.6. Complete Algorithm Workflow

The complete compressive-sensing-based fast acquisition procedure using Gram-matrix optimization via direct projection is summarized in Algorithm 1.
Algorithm 1: Compressive-Sensing-Based Fast Acquisition Algorithm Using Gram-Matrix Optimization via Direct Projection (CGM)
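Since Algorithm 1 is reproduced as a figure, a condensed outline of the online portion of the workflow is sketched below. It is an assumed NumPy rendering of Sections 3.1–3.4: the offline CGM optimization is abstracted away (a Gaussian matrix stands in for $\boldsymbol{\Phi}_{\mathrm{opt}}$), and the carrier model is simplified to a single complex exponential.

```python
import numpy as np

def online_acquisition(y, Phi_opt, c, doppler_bins, fs, gamma):
    """Direct projection, per-bin carrier wipeoff, FFT cyclic correlation, peak test."""
    N = Phi_opt.shape[1]
    r_hat = Phi_opt.conj().T @ y                              # direct projection, O(MN)
    n = np.arange(N)
    C = np.conj(np.fft.fft(c))
    best = (0.0, None, None)
    for f_d in doppler_bins:
        r_wiped = r_hat * np.exp(-2j * np.pi * f_d * n / fs)  # remove candidate Doppler
        corr = np.fft.ifft(np.fft.fft(r_wiped) * C)           # cyclic correlation, O(N log N)
        tau = int(np.argmax(np.abs(corr)))
        if np.abs(corr[tau]) > best[0]:
            best = (np.abs(corr[tau]), float(f_d), tau)
    peak, f_d_hat, tau_hat = best
    return (tau_hat, f_d_hat) if peak > gamma else None

# Illustrative use with a Gaussian sensing matrix standing in for Phi_opt
rng = np.random.default_rng(9)
N, M, fs = 1024, 768, 2.046e6
Phi_opt = rng.standard_normal((M, N))
Phi_opt /= np.linalg.norm(Phi_opt, axis=0)
c = rng.choice([-1.0, 1.0], size=N)
true_tau, true_fd = 400, 3000.0
n = np.arange(N)
r = np.roll(c, true_tau) * np.exp(2j * np.pi * true_fd * n / fs)
y = Phi_opt @ r
bins = np.arange(-12000.0, 12001.0, 500.0)
print(online_acquisition(y, Phi_opt, c, bins, fs, gamma=0.3 * N))   # expected: (400, 3000.0)
```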

4. Algorithm Performance Analysis

4.1. Perturbation Analysis of the Direct Projection Method

The received baseband signal is modeled as
$$\mathbf{r} = a\,\mathbf{c}(\tau_0, f_d) + \mathbf{w},$$
where $\mathbf{c}(\tau_0, f_d)$ denotes the spread-spectrum sequence corresponding to code delay $\tau_0$ and Doppler frequency $f_d$, $a$ is the signal amplitude, $\mathbf{w}$ is complex Gaussian noise with zero mean and variance $\sigma_w^2$, $\tau_0$ is the true code phase, and $f_d$ is the true Doppler shift. Let $t$ denote the time index and $T_s$ the sampling interval; the discrete-time vector $\mathbf{r}$ then collects the samples over one coherent integration interval [28].
To reduce the processing dimension, a compressive-sensing measurement matrix $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$ with $M \ll N$ is introduced, and the compressed measurements are
$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{r} + \mathbf{n},$$
where $\mathbf{y} \in \mathbb{C}^M$ is the compressed measurement vector and $\mathbf{n}$ denotes measurement noise modeled as complex Gaussian with zero mean and covariance $\sigma_n^2\mathbf{I}_M$.
The proposed algorithm reconstructs via direct projection as
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}^{H}\mathbf{y} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n},$$
where $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$ is the Gram matrix. As in (11), we write $\mathbf{G} = \mathbf{I}_N + \mathbf{E}$, with $\mathbf{E}$ being the perturbation matrix, so the reconstructed signal becomes
$$\hat{\mathbf{r}} = \mathbf{r} + \mathbf{E}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n}.$$
Let $R_0$ denote the ideal correlation peak value and $\hat{R}$ the corresponding peak value obtained after reconstruction at the same code phase. The condition for successful detection can be simplified to
$$|\hat{R}| \ge \Gamma,$$
where $\Gamma$ is the decision threshold determined by the minimum separable difference between the main lobe and the largest interference (side-lobe) or noise peak.
From the reconstruction model above, the reconstruction error satisfies
$$\hat{\mathbf{r}} - \mathbf{r} = \mathbf{E}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n}.$$
Denote by $R_{rc}(\tau)$ the correlation between the original signal and the local code, and by $\hat{R}_{rc}(\tau)$ the correlation between the reconstructed signal and the local code. Then,
$$\hat{R}_{rc}(\tau) = \langle \hat{\mathbf{r}}, \mathbf{c}(\tau) \rangle = \langle \mathbf{r}, \mathbf{c}(\tau) \rangle + \langle \mathbf{E}\mathbf{r}, \mathbf{c}(\tau) \rangle + \langle \boldsymbol{\Phi}^{H}\mathbf{n}, \mathbf{c}(\tau) \rangle.$$
To ensure that the correlation-peak location is preserved, the perturbation term must not generate a spurious peak that exceeds the true main lobe. In other words, the maximum perturbation at incorrect code phases must be smaller than the minimum separable margin between the main peak and the side lobes. For GNSS peak detection, this margin is governed by the autocorrelation and cross-correlation properties of the spreading code. For Gold codes, the off-peak cross-correlation magnitude is on the order of $O(\sqrt{N})$ [29], so the main lobe is well separated from the side lobes in the ideal (unperturbed) case.
Using the cyclic-convolution property of the correlation and the quasi-orthogonality of the spreading codes, the correlation at the correct code phase $\tau = \tau_0$ can be approximated as
$$\hat{R}_{rc}(\tau_0) \approx R_{rc}(\tau_0) + \Delta R_{\mathrm{sig}} + \Delta R_{\mathrm{noise}},$$
where $R_{rc}(\tau_0)$ is the ideal peak, $\Delta R_{\mathrm{sig}}$ is the attenuation induced by the perturbation $\mathbf{E}\mathbf{r}$, and $\Delta R_{\mathrm{noise}}$ is the noise contribution after reconstruction. At incorrect code phases, owing to the orthogonality properties of the PRN codes, the main contribution to the correlation comes from the perturbation term:
$$\hat{R}_{rc}(\tau) \approx \Delta R_{\mathrm{int}}(\tau), \qquad \tau \ne \tau_0,$$
where $\Delta R_{\mathrm{int}}(\tau)$ is dominated by $\mathbf{E}\mathbf{r}$ and noise.
To keep the peak position unchanged, a conservative sufficient condition is that the maximum magnitude of the perturbation term over all code phases be smaller than the smallest distinguishable peak interval:
$$\max_{\tau} \left| \langle \mathbf{E}\mathbf{r}, \mathbf{c}(\tau) \rangle \right| < \Delta_{\min},$$
where $\Delta_{\min}$ denotes the minimum required separation between the true main lobe and the largest side lobe, and $\tau$ denotes the code phase. For normalized signals, using the Gershgorin disk theorem and a conservative upper bound on the off-diagonal entries of $\mathbf{G}$, the perturbation energy can be bounded in terms of the matrix norm of $\mathbf{E}$ [30]. This yields a sufficient condition of the form
$$\|\mathbf{E}\| \le \varepsilon_{\mathrm{peak}},$$
where $\varepsilon_{\mathrm{peak}}$ is a design parameter related to $\Delta_{\min}$ and the code correlation properties. In practice, by designing the measurement matrix so that the Gram matrix is diagonally dominant with small off-diagonal entries, the perturbation energy of $\mathbf{E}\mathbf{r}$ is kept small and the correlation-peak position is stabilized.
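To make the role of the Gershgorin-type argument explicit, a short derivation sketch is given below. This is our own rendering of the bound under the stated assumptions (unit-norm columns, Hermitian $\mathbf{E}$), and the resulting constant is a loose worst-case value rather than the tight margin used in the paper.

```latex
% Sketch: bounding the perturbation term via Cauchy--Schwarz and Gershgorin.
% Assumes unit-norm columns of \Phi, so the diagonal of E = G - I_N is (near) zero
% and every off-diagonal entry satisfies |e_{ij}| = |g_{ij}| \le \mu(\Phi).
\begin{align}
  \bigl|\langle \mathbf{E}\mathbf{r}, \mathbf{c}(\tau)\rangle\bigr|
    &\le \|\mathbf{E}\mathbf{r}\|_2 \,\|\mathbf{c}(\tau)\|_2
     \le \|\mathbf{E}\|_2 \,\|\mathbf{r}\|_2 \,\|\mathbf{c}(\tau)\|_2, \\
  \|\mathbf{E}\|_2
    &\le \max_i \Bigl( |e_{ii}| + \sum_{j \ne i} |e_{ij}| \Bigr)
     \le (N-1)\,\mu(\boldsymbol{\Phi})
  \quad \text{(Gershgorin disks, Hermitian } \mathbf{E}\text{)}.
\end{align}
% Hence the sufficient condition max_tau |<E r, c(tau)>| < Delta_min holds whenever
% (N-1) mu(Phi) ||r||_2 ||c||_2 < Delta_min, a conservative worst-case requirement.
```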
In summary, under the underdetermined condition $\operatorname{rank}(\mathbf{G}) \le M < N$, it is impossible to enforce $\mathbf{G} = \mathbf{I}_N$ over the entire space. Instead, the perturbation analysis shows that it is sufficient to make $\mathbf{G}$ diagonally dominant with controlled off-diagonal magnitude. This ensures that the reconstruction error $\mathbf{E}\mathbf{r}$ remains bounded, the main correlation peak remains separated from the side lobes and noise, and the direct-projection method preserves the peak location with high probability.

4.2. Detection Probability Analysis

In this subsection, we analyze the detection performance of the proposed CGM-based direct-projection acquisition algorithm and clarify how the Gram-matrix coherence influences the detection probability.
As in Section 4.1, the received baseband signal over one coherent integration interval is modeled, following ref. [31], as
$$\mathbf{r} = a\,\mathbf{c}_0 + \mathbf{w},$$
where $a$ is the signal amplitude, $\mathbf{c}_0 \in \mathbb{C}^N$ is the spreading code at the correct code phase, and $\mathbf{w} \in \mathbb{C}^N$ is complex Gaussian noise. The proposed algorithm first forms the compressed measurements
$$\mathbf{y} = \boldsymbol{\Phi}\mathbf{r} + \mathbf{n},$$
and then reconstructs the signal via the direct-projection step as
$$\hat{\mathbf{r}} = \boldsymbol{\Phi}^{H}\mathbf{y} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n} = \mathbf{G}\mathbf{r} + \boldsymbol{\Phi}^{H}\mathbf{n},$$
where $\boldsymbol{\Phi} \in \mathbb{C}^{M \times N}$ is the measurement matrix, $\mathbf{y}, \mathbf{n} \in \mathbb{C}^M$, and $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi} \in \mathbb{C}^{N \times N}$ is the Gram matrix of the CGM-optimized measurement matrix. We write
$$\mathbf{G} = \mathbf{I}_N + \mathbf{E},$$
with $\mathbf{E}$ denoting the perturbation, whose off-diagonal entries are bounded in terms of the mutual coherence $\mu(\boldsymbol{\Phi})$.
The acquisition statistic used in this algorithm is the cyclic correlation between $\hat{\mathbf{r}}$ and the local PRN code [32], which is defined as
$$z(\tau) = \langle \hat{\mathbf{r}}, \mathbf{c}(\tau) \rangle,$$
where $\mathbf{c}(\tau) \in \mathbb{C}^N$ denotes the locally generated code replica at code phase $\tau$. A successful acquisition occurs when the magnitude at the true code phase $\tau_0$ exceeds a given threshold and all side lobes. Substituting (64) into (66) yields
$$z(\tau) = \langle \mathbf{r}, \mathbf{c}(\tau) \rangle + \langle \mathbf{E}\mathbf{r}, \mathbf{c}(\tau) \rangle + \langle \boldsymbol{\Phi}^{H}\mathbf{n}, \mathbf{c}(\tau) \rangle.$$
At $\tau = \tau_0$, the first term provides the useful signal component, while the perturbation $\langle \mathbf{E}\mathbf{r}, \mathbf{c}(\tau_0) \rangle$ and the effective noise $\langle \boldsymbol{\Phi}^{H}\mathbf{n}, \mathbf{c}(\tau_0) \rangle$ slightly reduce the peak amplitude and raise the noise floor. At incorrect code phases, the ideal correlation $\langle \mathbf{r}, \mathbf{c}(\tau) \rangle$ is small and the perturbation term dominates; its magnitude can be bounded by a function of $\mu(\boldsymbol{\Phi})$.
To summarize these effects, we define an effective post-reconstruction SNR as
$$\gamma_{\mathrm{eff}} = \frac{P_{\mathrm{sig}}(\mu(\boldsymbol{\Phi}))}{P_{\mathrm{I+N}}(\mu(\boldsymbol{\Phi}))},$$
where $P_{\mathrm{sig}}(\mu(\boldsymbol{\Phi}))$ is the signal power of $z(\tau_0)$ and $P_{\mathrm{I+N}}(\mu(\boldsymbol{\Phi}))$ is the combined interference-and-noise power. Both the attenuation of the main peak and the growth of the side lobes are controlled by the coherence $\mu(\boldsymbol{\Phi})$, so $\gamma_{\mathrm{eff}}$ is a decreasing function of $\mu(\boldsymbol{\Phi})$. In the ideal case $\mu(\boldsymbol{\Phi}) \to 0$, the proposed scheme approaches the performance of conventional matched-filter acquisition [33].
Since, for a fixed threshold, the detection probability is a monotonically increasing function of γ eff , reducing the Gram-matrix coherence through CGM optimization directly improves the detection probability of the proposed direct-projection acquisition algorithm, especially at low SNRs.
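The monotone link between coherence, the effective SNR, and the detection probability can also be probed empirically. The following Monte Carlo sketch is an assumed NumPy implementation that estimates the peak-location probability for a given sensing matrix; it uses a simplified model (single Doppler bin, no post-compression noise, synthetic ±1 code) and is not the simulation setup of Section 5.

```python
import numpy as np

def detection_probability(Phi, snr_db, trials=200, seed=7):
    """Empirical probability that direct projection + cyclic correlation recovers
    the correct code phase of a +/-1 code at the given per-sample SNR."""
    rng = np.random.default_rng(seed)
    M, N = Phi.shape
    c = rng.choice([-1.0, 1.0], size=N)
    sigma = 10 ** (-snr_db / 20.0)              # unit-amplitude code, noise std from SNR
    hits = 0
    for _ in range(trials):
        shift = rng.integers(N)
        r = np.roll(c, shift) + sigma * rng.standard_normal(N)
        r_hat = Phi.conj().T @ (Phi @ r)        # compress, then direct projection
        corr = np.fft.ifft(np.fft.fft(r_hat) * np.conj(np.fft.fft(c)))
        hits += int(np.argmax(np.abs(corr)) == shift)
    return hits / trials

rng = np.random.default_rng(8)
M, N = 768, 1024
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)
for snr_db in (-20, -15, -10):
    print(snr_db, "dB ->", detection_probability(Phi, snr_db))
```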

5. Simulation Validation

The simulation parameters are as follows. We consider the GPS L1 signal with code rate $R_c = 1.023$ MHz and sampling rate $f_s = 2R_c$. The code phase is set to $\tau = 400$, and the signal length and number of compressive measurements are $N = 1024$ and $M = 768$, respectively. The Doppler frequency deviation range is $\pm 12$ kHz. The 2D correlation surface $R(d, \tau)$ is normalized by the estimated noise standard deviation $\hat{\sigma}_n$ obtained from off-peak cells, and the decision statistic is defined as $\tilde{R}(d, \tau) = |R(d, \tau)| / \hat{\sigma}_n$. The detection threshold is set to $V_t = 3$ on this normalized statistic, which corresponds to a $3\sigma$ level under the AWGN assumption. All algorithms were implemented in MATLAB R2024b, and the corresponding results are shown in Figures 1–9. As summarized in Table 1, the simulations were executed on a desktop computer equipped with an AMD Ryzen 9 9950X CPU @ 4.3 GHz and 64 GB of RAM, running Windows 11. The CPU times reported in Table 2 were obtained on this platform and are averaged over 500 Monte Carlo trials under the same simulation settings as in Figures 5–9.
As shown in Figure 1, the CGM optimizer exhibits rapid initial convergence: the maximum coherence dropped from 0.1287 to 0.0834 within the first 10 iterations, a 35.2% reduction, demonstrating the strong shrinkage capability of the Gram-space formulation in its early phase. The two-stage nature of the algorithm is clearly visible in the Gram-error and gradient trajectories: Stage 1 aggressively suppressed coherence through continuous Gram-matrix reshaping, while Stage 2 stabilized the solution at the numerical precision limit. The Gram matrix steadily departed from the identity ($\|\mathbf{G} - \mathbf{I}\|_F$ rose from 0.388 to 5.109), confirming that the optimizer effectively exploits the available degrees of freedom on the manifold to minimize off-diagonal energy without sacrificing unit column norms. The coordinated parameter schedule reliably drove the optimizer to a high-quality local minimum by iteration 10, establishing an excellent starting point for the subsequent refinement stage and demonstrating full control over the entire optimization trajectory.
As shown in Figure 2, the CGM-optimized measurement matrix achieved the largest reduction in column coherence $\mu(\boldsymbol{\Phi})$ relative to the Gaussian-random baseline. In terms of maximum column coherence, CGM suppressed the value to 27.6% of the Gaussian case, corresponding to a 72.4% improvement, whereas the Elad and Toeplitz designs provided only 26.4% and 13.41% reduction, respectively. Bernoulli and sparse random matrices even led to negative "improvements," slightly increasing the maximum coherence. A similar trend was observed for the average coherence, where CGM reduced the mean value by 40.34%. Consistent with the effective-SNR analysis in Section 4.2, these coherence levels remain well below the regime in which Gram-matrix perturbations $\mathbf{E}$ cause noticeable losses in the effective SNR, indicating that CGM preserves the detection SNR $\gamma_{\mathrm{eff}}$ almost optimally.
Figure 3 summarizes the coherence statistics of six measurement matrices in terms of the maximum, mean, and median column inner products for the $(M, N) = (768, 1024)$ configuration. The CGM-optimized matrix attained the smallest values in all three metrics, with maximum coherence $\mu(\boldsymbol{\Phi}) = 0.045$ and mean coherence 0.017, whereas the Gaussian and sparse random matrices reached maxima of 0.163 and 0.180, respectively. For reference, the Welch bound provides a benchmark lower bound on the achievable coherence, $\mu_{\mathrm{Welch}} = \sqrt{\frac{N - M}{M(N - 1)}}$, which yields $\mu_{\mathrm{Welch}} \approx 0.0181$ for $(M, N) = (768, 1024)$. Although the worst-case (mutual) coherence $\mu(\boldsymbol{\Phi})$ lies above this bound (here, $\mu(\boldsymbol{\Phi}) \approx 2.5\,\mu_{\mathrm{Welch}}$), the small mean coherence of 0.017 indicates that the typical inter-column correlation remains in the Welch-limit regime and that strong correlations are confined to only a small fraction of column pairs. According to the Donoho–Tanner critical-measurement-number theory, $M_{\mathrm{crit}} \approx c_{\mathrm{DT}} K_{\mathrm{sparse}} \log(N / K_{\mathrm{sparse}})$, such reduced coherence supports reliable peak detection with fewer measurements $M$ while keeping the perturbation term $\mathbf{E}$ in the direct-projection model $\mathbf{G} = \mathbf{I}_N + \mathbf{E}$ small, thereby improving reconstruction accuracy.
The distribution of absolute column inner products in Figure 4 provides a more detailed view of how CGM reshapes Gram-matrix coherence. The maximum coherence of the CGM matrix is 0.045, only about 25% of the sparse random (0.180) and 28% of the Gaussian random (0.163) values, and also substantially below the Bernoulli (0.167), Toeplitz (0.141), and Elad (0.120) designs. The mean value is 0.017, roughly 40% lower than the 0.028–0.029 range of the competing matrices, and the median (0.019) also lies below their 0.023–0.025 range. In addition, the standard deviation is only 0.009, less than half that of the Gaussian and Elad matrices (0.022), indicating that coherence is not only small in magnitude but also tightly clustered. This uniform low-coherence distribution is characteristic of a near equiangular tight frame (ETF), with the manifold-based Gram-matrix projection effectively preventing any pair of columns from exhibiting abnormally high correlation.
Figure 5 compares acquisition probability versus SNR for six measurement matrices under the CS algorithm at signal dimension $N = 1024$ and measurement ratio $M/N = 0.75$. The CGM matrix, with maximum coherence $\mu(\boldsymbol{\Phi}) \approx 0.044$, consistently outperformed the Gaussian, Elad, sparse random, Bernoulli, and Toeplitz designs. At SNR = $-20$ dB, CGM achieved a detection probability of 32.8%, whereas the Gaussian and Elad matrices reached only 8.2% and 8.6%, respectively. At SNR = $-16$ dB, the corresponding probabilities were 95.2%, 60.2%, and 59.6%. The SNR required to reach 90% detection is $-14$ dB for CGM, representing a 2 dB gain over the $-12$ dB threshold of the Gaussian and Elad matrices. These gains are consistent with the peak-to-interference ratio (PIR) analysis in Section 4: by reducing the off-diagonal energy of the Gram matrix, CGM reduces the perturbation term $\mathbf{E}\mathbf{r}$, thereby enhancing the accuracy of direct-projection reconstruction and effectively preserving the main correlation peak.
The impact of signal length $N$ on acquisition performance at a fixed measurement ratio $M/N = 0.75$ is examined in Figure 6. When $N$ increased from 256 ($M = 192$) to 1024 ($M = 768$), the acquisition probability of the CGM matrix at SNR = $-20$ dB improved from 4.6% to 32.6%, and the 90%-detection threshold shifted from $-11$ dB to $-14$ dB. At $N = 1024$ and SNR = $-19$ dB, CGM achieved 53.2% detection probability, whereas the Gaussian and Elad matrices attained only about 14.0% and 11.2%, respectively. From the viewpoint of the critical-measurement-number condition $M_{\mathrm{crit}} \approx c_{\mathrm{DT}} K_{\mathrm{sparse}} \log(N / K_{\mathrm{sparse}})$, increasing $N$ moves CGM further above $M_{\mathrm{crit}}$ while maintaining low coherence $\mu(\boldsymbol{\Phi}) \approx 0.045$, significantly reducing the norm $\|\mathbf{E}\|$ of the perturbation matrix and improving the robustness of direct projection.
Figure 7 investigates how the number of measurements $M$ affects acquisition performance at fixed dimension $N = 1024$. As $M$ increased from 410 to 922, the 90%-detection threshold improved from $-14$ dB to $-17$ dB. At $M = 922$ and SNR = $-20$ dB, the CGM matrix achieved a detection probability of 46.2%, compared with 6.6% at $M = 410$; at SNR = $-16$ dB and $M = 922$, CGM attained 98.8% detection, while the Gaussian and Elad matrices reached only about 65%. For the low-coherence CGM design, $M = 922$ substantially exceeds the critical measurement number $M_{\mathrm{crit}}$, driving the Gram matrix $\mathbf{G} = \boldsymbol{\Phi}^{H}\boldsymbol{\Phi}$ closer to the identity and further shrinking the perturbation term $\mathbf{E}$. This behavior is consistent with the theoretical results in Section 4.2: a larger $M$ at fixed coherence $\mu(\boldsymbol{\Phi})$ provides more measurement diversity and stronger noise averaging, thereby enhancing acquisition sensitivity in low-SNR regimes.
Figure 8 compares the acquisition performance of the six GPS acquisition algorithms. PMF-FFT-SVD achieved the best 90% acquisition threshold at $-18$ dB and CCM at $-17$ dB, but both relied on iterative OMP-based recovery and incurred high online complexity and latency. The CGM-optimized scheme reached a 90% threshold of $-16$ dB and achieved 95.6% and 99.2% acquisition probability at $-16$ dB and $-15$ dB, respectively, while requiring only a single matrix–vector multiplication and one FFT with complexity $O(MN + N\log N)$. Compared with a Gaussian-random sensing matrix, CGM provides about a 3 dB sensitivity gain at very low online complexity, offering a favorable performance–complexity trade-off.
Figure 9 shows the impact of the measurement length $M$ on the six acquisition methods. PMF-FFT-SVD and SVD-CS achieved the best 90% acquisition thresholds at $-18$ dB, and CCM at $-17$ dB, but these schemes incurred high computational overhead and were largely insensitive to the compression ratio, offering little flexibility to trade complexity for performance. In contrast, the CGM-optimized scheme attained a 90% threshold of $-16$ dB at $M = 922$ and maintained good scalability at lower compression levels ($M = 410$, i.e., a 40% measurement ratio), where it still outperformed the Gaussian-random matrices. CGM requires only a single matrix–vector multiplication and one FFT with complexity $O(MN + N\log N)$, so reducing $M$ directly lowers the online cost; across all tested compression ratios, it provided about a 2–3 dB sensitivity gain over Gaussian sensing while preserving very low complexity.

6. Conclusions

This paper has proposed a compressive-sensing-based fast acquisition method for high-dynamic GNSS receivers based on low-dimensional time-domain measurements, direct-projection reconstruction, and a coherence-optimized measurement matrix. By performing Gram-matrix optimization on the symmetric positive-definite manifold, the proposed coherence-based Gram-matrix (CGM) scheme reshapes the measurement Gram matrix into a diagonally dominant, well-conditioned form, reducing inter-column correlations so that the mean coherence is on the order of the Welch bound while keeping the maximum coherence within a small constant factor of that limit and effectively bounding reconstruction-induced perturbations. The perturbation and detection-probability analyses further clarify how Gram-matrix coherence influences correlation distortion and the effective post-reconstruction SNR: lower-coherence CGM designs suppress off-peak interference, stabilize the main correlation peak, and thereby mitigate SNR loss. By replacing iterative sparse recovery with a direct-projection step, the proposed algorithm achieves substantially lower online complexity than iterative OMP-based CS acquisition schemes whose cost scales linearly with the sparsity level K while retaining excellent detection sensitivity. At the same time, compared with other direct-projection baselines, the CGM design delivers a 2–3 dB gain in acquisition sensitivity at essentially the same online complexity, making it attractive for resource-constrained GNSS receivers in high-dynamic environments.

Author Contributions

Conceptualization, F.Z. proposed the main idea and finished the draft manuscript; F.Z. conceived of the experiments and drew the figures and tables; Methodology, F.Z., W.W. and Y.X. analyzed the data; F.Z. wrote the paper; W.W., Y.X. and C.Z. reviewed the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the University-level Scientific Research Project of Geely University, grant number 2024XZKZD004.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GNSS: Global Navigation Satellite System
FFT: Fast Fourier Transform
CGM-Optimized: proposed CGM-based direct-projection acquisition
CCM-Optimized: CCM-based CS acquisition
ELAD-Optimized: ELAD-optimized direct-projection acquisition
Gaussian-Random: Gaussian-random direct-projection acquisition
PMF-FFT-SVD: CS-SVD-PMF-FFT acquisition
SNR: Signal-to-Noise Ratio
ETF: Equiangular Tight Frame

References

1. Carvalho, G.S.; Silva, F.O.; Pacheco, M.V.O.; Campos, G.A.O. Performance Analysis of Relative GPS Positioning for Low-Cost Receiver-Equipped Agricultural Rovers. Sensors 2023, 23, 8835.
2. Kowalczyk, W.Z.; Hadas, T. A comparative analysis of the performance of various GNSS positioning concepts dedicated to precision agriculture. Rep. Geod. Geoinformatics 2024, 117, 11–20.
3. Kubo, N. Global Navigation Satellite System Precise Positioning Technology. IEICE Trans. Commun. 2024, 11, 691–705.
4. Hegarty, C.J. The Global Positioning System (GPS). In Springer Handbook of Global Navigation Satellite Systems; Springer: Berlin/Heidelberg, Germany, 2017; pp. 197–218.
5. Hofmann-Wellenhof, B.; Lichtenegger, H.; Wasle, E. GNSS—Global Navigation Satellite Systems: GPS, GLONASS, Galileo, and More; Springer: Berlin/Heidelberg, Germany, 2008.
6. Zhang, C.; Li, X.; Gao, S.; Lin, T.; Wang, L. Performance Analysis of Global Navigation Satellite System Signal Acquisition Aided by Different Grade Inertial Navigation System under Highly Dynamic Conditions. Sensors 2017, 17, 980.
7. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
8. Ou, S.; Li, J.; Sun, J.; Zeng, D.; Li, J.; Yan, Y. A GNSS Signal Acquisition Scheme Based on Compressed Sensing. In Proceedings of the ION 2015 Pacific PNT Meeting, Honolulu, HI, USA, 20–23 April 2015; pp. 618–628.
9. Elango, G.A.; Sudha, G.F. Weak GPS acquisition via compressed differential detection using structured measurement matrix. Int. J. Smart Sens. Intell. Syst. 2016, 9, 1877.
10. Albu-Rghaif, A.; Lami, I.A. Novel dictionary decomposition to acquire GPS signals using compressed sensing. In Proceedings of the 2014 World Congress on Computer Applications and Information Systems (WCCAIS); IEEE: New York, NY, USA, 2014; pp. 1–5.
11. Zhou, F.; Zhao, L.; Jiang, X.; Li, L.; Yu, J.; Liang, G. GNSS Signal Compression Acquisition Algorithm Based on Sensing Matrix Optimization. Appl. Sci. 2022, 12, 5866.
12. Deng, L.; Zhou, F.; Zhao, L.; Liang, G.; Yu, J. Compressed sensing GNSS signal acquisition algorithm based on singular value decomposition. J. Univ. Chin. Acad. Sci. 2023, 40, 128–134.
13. Zhang, W.; Chen, J.; Guo, Y.; Zhao, Y. GNSS Signal Acquisition Based on Compressive Sensing and Improved Measurement Matrix. In Proceedings of the 2024 6th International Conference on Electronic Engineering and Informatics (EEI), Chongqing, China, 28–30 June 2024; pp. 1762–1765.
14. Yang, F.; Zhou, F.; Pan, L.; Lin, J. Parallel GPS Signal Acquisition Algorithm Based on Alternating Direction Method of Multipliers. J. Univ. Electron. Sci. Technol. China 2020, 49, 187–193.
15. Ma, Z.; Deng, M.; Huang, H.; Wang, X.; Liu, Q. Non-Iterative Shrinkage-Thresholding-Reconstructed Compressive Acquisition Algorithm for High-Dynamic GNSS Signals. Aerospace 2025, 12, 958.
16. Cai, Y.; Tang, X.; Zhang, Y.; Gao, H.; Sun, X.; Pan, S. Photonic compressive sensing system based on 1-bit quantization for broadband signal sampling. J. Light. Technol. 2025, 43, 9442–9449.
17. Eldar, Y.C.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012.
18. Tropp, J.A. A mathematical introduction to compressive sensing book review. Bull. Am. Math. Soc. 2017, 54, 151–165.
19. Cui, H.; Li, Z.; Dou, Z. Fast Acquisition Method of GPS Signal Based on FFT Cyclic Correlation. Int. J. Commun. Netw. Syst. Sci. 2017, 10, 246–254.
20. Huang, H.S. Research on fast acquisition of GPS signal using radix-2 FFT and radix-4 FFT algorithm. In Proceedings of the 2016 IEEE 6th International Conference on Advanced Computing (IACC), Bhimavaram, India, 27–28 February 2017.
21. Nezhadshahbodaghi, M.; Mosavi, M.R.; Rahemi, N. Improved Semi-Bit Differential Acquisition Method for Navigation Bit Sign Transition and Code Doppler Compensation in Weak Signal Environment. J. Navig. 2020, 73, 892–911.
22. Hao, F.; Yu, B.; Gan, X.; Jia, R.; Zhang, H.; Huang, L.; Wang, B. Unambiguous Acquisition/Tracking Technique Based on Sub-Correlation Functions for GNSS Sine-BOC Signals. Sensors 2020, 20, 485.
23. Nie, G.; Wang, X.; Shen, L.; Cai, Y. A fast method for the acquisition of weak long-code signal. GPS Solut. 2020, 24, 104.
24. Pennec, X. Manifold-valued image processing with SPD matrices. In Riemannian Geometric Statistics in Medical Image Analysis; Elsevier: Amsterdam, The Netherlands, 2020; pp. 75–134.
25. Krebs, J.; Rademacher, D.; von Sachs, R. Statistical inference for intrinsic wavelet estimators of SPD matrices in a log-Euclidean manifold. arXiv 2022, arXiv:2202.07010.
26. Chu, L.; Wu, X.J. Dimensionality reduction on the symmetric positive definite manifold with application to image set classification. J. Electron. Imaging 2020, 29, 043015.
27. Cheng, A.; Weber, M. Structured Regularization for Constrained Optimization on the SPD Manifold. arXiv 2024, arXiv:2410.09660.
28. Jun, H. A New Technology for GNSS Signal Fast Acquisition within Three Seconds, Applicable to Current GNSS Receivers. In Proceedings of the 2006 National Technical Meeting of the Institute of Navigation, Monterey, CA, USA, 18–20 January 2006.
29. Kaplan, E.D.; Hegarty, C. Understanding GPS/GNSS: Principles and Applications; Artech House: New York, NY, USA, 2017.
30. Xu, L.; Chen, K.; Ying, R.; Liu, P.; Yu, W. Parallel Acquisition of GNSS Signal Based on Combined Code. In Proceedings of the 26th International Technical Meeting of the Satellite Division of the Institute of Navigation, Nashville, TN, USA, 16–20 September 2013.
31. Zhang, Y.; Wang, M.; Li, Y. Low Computational Signal Acquisition for GNSS Receivers Using a Resampling Strategy and Variable Circular Correlation Time. Sensors 2018, 18, 678.
32. Ta, T.H.; Shivaramaiah, N.C.; Dempster, A.G.; Presti, L.L. Significance of Cell-Correlation Phenomenon in GNSS Matched Filter Acquisition Engines. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 1264–1286.
33. Kahouli, K.; Ripken, W.; Gugler, S.; Unke, O.T.; Müller, K.R.; Nakajima, S. Disentangling Total-Variance and Signal-to-Noise-Ratio Improves Diffusion Models. arXiv 2025, arXiv:2502.08598.
Figure 1. The CGM optimization process of the measurement matrix: (a) coherence evolution in Gram space; (b) Gram-error and gradient-norm trajectories; (c) adaptive threshold and step-size scheduling.
Figure 2. CGM improvement over the Gaussian-random matrix.
Figure 3. Inter-column coherence of the CGM-optimized matrix compared with the other measurement matrices: (a) Gaussian random; (b) CGM optimized; (c) Elad optimized; (d) sparse random; (e) Bernoulli; (f) Toeplitz.
Figure 4. Coherence distribution statistics of all measurement matrices: (a) Gaussian random; (b) CGM optimized; (c) Elad optimized; (d) sparse random; (e) Bernoulli; (f) Toeplitz.
Figure 5. Acquisition probability vs. SNR under the different measurement matrices ($N = 1024$, $M/N = 0.75$): (a) acquisition probability versus SNR; (b) coherence of the corresponding measurement matrices.
Figure 6. Impact of signal length $N$: (a) CGM optimized; (b) Gaussian random; (c) Elad optimized.
Figure 7. Impact of measurement length $M$ on acquisition performance: (a) Gaussian random; (b) CGM optimized; (c) Elad optimized.
Figure 8. GPS signal acquisition algorithm performance comparison (six methods).
Figure 9. Impact of measurement length $M$ for the six acquisition methods: (a) Gaussian random; (b) Elad optimized; (c) CCM optimized; (d) CGM optimized; (e) PMF-FFT-SVD; (f) SVD-CS.
Table 1. Configuration of the testbed.
Hardware | Parameters
CPU | AMD Ryzen 9 9950X @ 4.3 GHz
RAM | KINGBANK DDR5 6400 MHz, 64 GB
Hard disk | PREDATOR SSD, 4 TB
Graphics card | NVIDIA RTX 5080
Table 2. Online complexity and CPU time of different acquisition algorithms.
Acquisition Algorithm | Complexity | CPU Time
CCM-based OMP CS acquisition (CCM) [11] | $O(MN + KMN)$ | 0.169814
SVD-based OMP CS acquisition (SVD) [12] | $O(MN + KMN)$ | 0.170010
PMF-FFT-SVD-based OMP CS acquisition (PMF-FFT-SVD) [13] | $O(MN + NM\log M)$ | 0.640200
CGM-based OMP CS acquisition (CGM) | $O(MN + KMN)$ | 0.171491
The proposed acquisition (CGM-Optimized) | $O(MN + N\log N)$ | 0.051094
Gaussian-Random-based direct-projection CS acquisition (GR-Optimized) | $O(MN + N\log N)$ | 0.050771
ELAD-Optimized-based direct-projection CS acquisition (ELAD-Optimized) | $O(MN + N\log N)$ | 0.050892
Note: GR-Optimized, ELAD-Optimized, and CGM-Optimized are all implemented using our proposed compressive-domain direct-projection acquisition framework; the differences among them lie only in the sensing-matrix design.

Share and Cite

MDPI and ACS Style

Zhou, F.; Wang, W.; Xiao, Y.; Zhou, C. Compressive-Sensing-Based Fast Acquisition Algorithm Using Gram-Matrix Optimization via Direct Projection. Electronics 2026, 15, 171. https://doi.org/10.3390/electronics15010171

