Article

A Wavenumber Domain Consistent Imaging Method Based on High-Order Fourier Series Fitting Compensation for Optical/SAR Co-Aperture System

1  The National Key Laboratory of Microwave Imaging, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2  The School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
*  Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 315; https://doi.org/10.3390/rs18020315
Submission received: 27 November 2025 / Revised: 7 January 2026 / Accepted: 13 January 2026 / Published: 16 January 2026

Highlights

What are the main findings?
  • We establish a new paradigm that shifts optical/SAR co-registration from a separated, post-processing task to a unified, automated workflow within the wavenumber domain, effectively eliminating the bottleneck of complex alignment.
  • By embedding a high-order Fourier series model directly into the ω-k imaging chain, we compensate for pixel deviations at their source, ensuring sub-pixel co-registration in both range and azimuth as an inherent part of the image formation process.
What are the implications of the main findings?
  • The work establishes a new paradigm for multi-source remote sensing by integrating an airborne co-aperture system design with a signal-level processing framework, effectively solving the fundamental challenge of synchronous acquisition and real-time registration of SAR and optical data.
  • The proposed processing framework provides a scalable and efficient solution for real-time, high-precision applications. It opens up new possibilities for demanding civilian and defense tasks, such as dynamic disaster monitoring and time-sensitive target detection, where rapid and accurate image registration and fusion is critical.

Abstract

Optical and SAR image registration and fusion are pivotal in the remote sensing field, as they leverage the complementary advantages of both modalities. However, achieving this with high accuracy and efficiency remains challenging. This challenge arises because traditional methods are confined to the image domain, applied after independent image formation. They attempt to correct geometric mismatches that are rooted in fundamental physical differences, an approach that inherently struggles to achieve both precision and speed. Therefore, this paper introduces a co-designed system and algorithm framework to overcome the fundamental challenges. At the system level, we pioneer an innovative airborne co-aperture system to ensure synchronous data acquisition. At the algorithmic level, we derive a theoretical model within the wavenumber domain imaging process, attributing optical/SAR pixel deviations to the deterministic phase errors introduced by its core Stolt interpolation operation. This model enables a signal-domain compensation technique, which employs high-order Fourier series fitting to correct these errors during the SAR image formation itself. This co-design yields a unified processing pipeline that achieves direct, sub-pixel co-registration, thereby establishing a foundational paradigm for real-time multi-source data processing. The experimental results on both multi-point and structural targets confirm that our method achieves sub-pixel registration accuracy across diverse scenarios, accompanied by a marked gain in computational efficiency over the time-domain approach.

1. Introduction

Propelled by rapid technological development, remote sensing imaging has entered an era defined by multi-platform, multi-payload, and multi-modal observation [1]. In this context, high-precision multi-source image registration has emerged as a fundamental prerequisite for advanced image processing tasks such as fusion, forming the critical foundation for all subsequent analysis [2]. As two pivotal Earth observation modalities, optical cameras provide high-resolution imagery, while Synthetic Aperture Radar (SAR) sensors enable all-weather, all-day imaging. However, optical cameras are easily affected by cloud cover, which reduces observation efficiency, while SAR imaging suffers from significant speckle noise, resulting in poor visual quality [3,4]. The images obtained by a single optical or SAR remote sensing system are therefore often insufficient to meet the diverse needs of practical applications, and complementary imaging systems that integrate optical and SAR sensors have emerged as a research focus in remote sensing [5]. Such systems leverage the synergistic advantages of both modalities, significantly enhancing observation accuracy, data acquisition capability, and robustness against individual sensor limitations. Several successful implementations worldwide have demonstrated the feasibility of optical/SAR joint observation. For instance, the ESA Sentinel-1 (SAR) and Sentinel-2 (optical) constellation enables synergistic data fusion when temporal and spatial resolutions are properly matched [6], significantly enhancing the completeness, accuracy, and timeliness of land surface monitoring. Similarly, China's GF series satellites (the optical GF-1 and GF-2 and the high-resolution SAR satellite GF-3) can accurately characterize complex surface environments, supporting natural disaster, agricultural, and forestry monitoring [7]. However, most current implementations rely on non-co-aperture architectures [8]. In these systems, multi-source imagery acquired from separate platforms at different times introduces fundamental registration challenges due to temporal disparities, atmospheric variations, and differing geometries [9], thereby complicating the registration process.
Beyond these systemic issues, a further complication arises from the fundamental differences in how optical and SAR images are acquired, leading to divergent imaging mechanisms and radiation characteristics [10]. Consequently, traditional image-domain registration techniques based on region matching, feature matching, and deep learning [11] are fundamentally challenged, compromising their accuracy and hindering real-time processing. Region-based registration methods typically optimize a similarity measure between images; as the search range expands, the computational load grows, reducing registration speed. Wang [12] proposed a fast registration method based on block matching with grayscale normalized mutual information, which improved the registration success rate; however, it still requires customized segmentation strategies for different image pairs, and its accuracy needs further improvement. SIFT and SURF are common feature-based registration methods, but each presents distinct limitations. SIFT suffers from high computational cost and a propensity for false matches, reducing its efficiency. Although SURF accelerates registration while maintaining SIFT's invariance, its sparser feature points intensify the difficulties in cross-modal scenarios, especially between optical and SAR data, where gradient characteristics fundamentally differ. Guo [13] proposed a fast automatic registration method using angle matching of Edge Point Features (EPFs) to enhance both registration accuracy and efficiency; however, its performance on optical/SAR registration remains unvalidated and is fundamentally limited by the method's underlying assumptions. Moreover, deep learning-based registration methods often operate under the constraint of forcing cross-modal images to appear similar, which inevitably discards valuable image details and increases the risk of mismatches. Dou [14] proposed a matching method that combines deep features with wavelet information; this fusion enhances the discriminative power of the features, improving matching performance. Nevertheless, a fundamental challenge with these data-driven methods is their strong dependence on the training dataset and the inherent difficulty of training them effectively.
Therefore, it is necessary to address the fundamental issues of multi-source image registration from both the remote sensing imaging system and registration approaches. Although co-aperture systems exist for guidance (microwave/infrared) [15] and communication (microwave/laser) [16], optical/SAR integrated imaging is an emerging field, with most efforts still in the stages of system design and experimental validation. For instance, the U.S. ORS agency’s HIGHRISE project (2007) conducted a preliminary design and simulation analysis for a multi-band system [17]. More recently, Wu et al. (2020) designed an airborne infrared/SAR system featuring shared partial structures [18]. Li et al. (2021) developed an optical/SAR system capable of dual-band operation in the visible, near-infrared and Ka bands [19]. Beyond hardware integration, the concept of signal-domain alignment, performing registration during image formation, represents a revolutionary paradigm shift. This unified approach to imaging and registration is a nascent but critically important research direction.
Aiming at the bottlenecks in the development of co-aperture imaging systems and multi-source image registration methods, this paper proposes a novel signal-domain consistent imaging method implemented on a newly designed airborne co-aperture system. The system employs spectral and frequency division technology to ensure pointing consistency, establishing a hardware foundation for acquiring spatially and temporally consistent optical/SAR data. The core of our method, a high-order Fourier series fitting consistent imaging method, operates in the wavenumber domain. It unifies optical and SAR imaging results into a shared pixel coordinate system, achieving sub-pixel registration accuracy by integrating real-time compensation into the SAR imaging process. In contrast to our prior time-domain approach [20], this method significantly enhances computational efficiency and imaging speed while preserving alignment accuracy. Simulations involving multi-point and structural targets within the overlapping field of view (FOV) validate the feasibility and universality of the proposed wavenumber domain method for real-time deviation compensation.
The main contributions of this work are summarized as follows. (1) We establish a new signal-domain paradigm for optical/SAR co-registration by shifting it from a post-processing task to an integrated part of the wavenumber-domain SAR imaging process, thereby enabling a unified and real-time-capable workflow. (2) To realize this paradigm, we propose a co-designed system-algorithm framework that includes a novel airborne co-aperture system for synchronous data acquisition and a unified geometric model linking SAR imaging deviations to cross-modal pixel errors. (3) At the algorithmic core, we develop and implement a compensation method that models Stolt interpolation errors as a high-order Fourier series and embeds the derived correction function directly into the ω-k imaging chain, achieving source-level, sub-pixel co-registration. (4) We provide comprehensive experimental validation through simulations under both ideal and noisy conditions, demonstrating that the method maintains sub-pixel accuracy while achieving a computational speedup exceeding 24 times compared to the time-domain algorithm, confirming its accuracy, efficiency, and practical robustness.
The remainder of this paper is organized as follows. Section 2 details the optical/SAR airborne co-aperture system and the associated imaging models. Section 3 presents the proposed wavenumber domain consistent imaging compensation method. Experimental results and analysis are provided in Section 4, followed by a discussion in Section 5. Finally, Section 6 concludes the paper.

2. Optical/SAR Airborne Co-Aperture System and Its Imaging Model

2.1. Optical/SAR Airborne Co-Aperture System

Figure 1 illustrates the proposed optical/SAR airborne co-aperture system, a design that resolves two critical issues in traditional systems: SAR azimuth resolution limited by antenna beamwidth, and the difficulty of payload integration [21]. Both are addressed by sharing a large-aperture, high-precision main reflector in conjunction with an efficient spectral and frequency division technique. The system achieves a beam-splitting efficiency exceeding 90%, a prerequisite for efficient synchronous data acquisition and reduced system power consumption.
The realization of this integrated architecture required co-design of the optical and SAR payloads, which differ significantly in their requirements for reflector precision, structural materials, component distribution, machining accuracy, and assembly tolerances [22]. Our methodology began with establishing a physical model to govern the spectral and frequency division of transmitted microwaves and reflected optical waves. To translate this model into a functional system, a topological optimization algorithm was employed to achieve a broadband, highly efficient design for microwave transmission and optical-wave reflection. A critical implementation challenge involved the multispectral reflective films, which was addressed by embedding frequency-selective surface (FSS) technology into the glass surfaces. Furthermore, we developed specialized functional materials with low resistivity and long lifespan through optimized lithography-coating processes, ensuring the system's performance and durability.
As diagrammed in Figure 1, the proposed co-aperture system utilizes a shared aperture. The optical and microwave signals simultaneously pass through the spectral and frequency splitter. The microwave signal is transmitted and received via the radar feed for subsequent SAR image processing, while the optical signal is directed to a beam splitter that separates visible and infrared light, channeling them to their respective imaging systems and detectors. This co-aperture architecture enables the acquisition of multi-source remote sensing data with identical temporal, spatial, and angular characteristics. Thereby, it fundamentally resolves the data inconsistency issues encountered in non-co-aperture systems. This novel cross-optical and electronic remote sensing system simplifies the multi-source image registration process and establishes a solid foundation for the subsequent image fusion.

2.2. Optical Camera and Strip-Map SAR Imaging Models

2.2.1. Optical Camera Imaging Model

The area-array aerial camera enables rapid-response optical imaging through global shutter exposure. Per the principle of optical central projection imaging [23], a strict collinearity condition holds among the image point, the camera optical center, and the object point. Thus, for any flat ground target P(X, Y, 0) in the North East Down (NED) coordinate system and its image point p(u, v) in the pixel coordinate system, the following conversion relationship exists:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \frac{1}{z_c}\, M_{IN}\, M_{EX} \left( \begin{bmatrix} X \\ Y \\ 0 \end{bmatrix} - T_R \right) \tag{1}$$
Here, T_R = [x_1, y_1, H]^T denotes the camera optical center position at the moment of capture. Assuming negligible errors, the aircraft altitude H is taken as the instantaneous Z-coordinate of the optical center in the NED coordinate system. The camera extrinsic parameter matrix M_EX is defined by the rotation angles ψ (yaw, Z-axis), φ (pitch, Y-axis), and θ (roll, X-axis). The intrinsic parameter matrix M_IN is determined by the focal length f, pixel sizes d_x, d_y, and principal point (u_0, v_0). The parameter z_c is the Z-coordinate of the object point in the camera coordinate system. All these parameters are known during actual flight, enabling 2D-3D coordinate conversion. This collinearity is also mathematically expressed by the collinearity equations.
$$\begin{aligned} u &= u_0 + \frac{f}{d_x} \cdot \frac{R_{11}(X - x_1) + R_{12}(Y - y_1) - R_{13}H}{R_{31}(X - x_1) + R_{32}(Y - y_1) - R_{33}H} \\ v &= v_0 + \frac{f}{d_y} \cdot \frac{R_{21}(X - x_1) + R_{22}(Y - y_1) - R_{23}H}{R_{31}(X - x_1) + R_{32}(Y - y_1) - R_{33}H} \end{aligned} \tag{2}$$
$$M_{EX} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\varphi & 0 & -\sin\varphi \\ 0 & 1 & 0 \\ \sin\varphi & 0 & \cos\varphi \end{bmatrix} \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} = R_x(\theta)\, R_y(\varphi)\, R_z(\psi) = \begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} \tag{3}$$
$$M_{IN} = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$
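To make the projection chain concrete, the following Python sketch assembles Equations (1)-(4) for a single ground point. It is a minimal illustration under the sign conventions reconstructed above; all numeric values (focal length, pixel size, attitude, altitude) are hypothetical placeholders rather than parameters of the actual system.

```python
import numpy as np

def rot_x(t):
    """Passive rotation about the X-axis (roll), as in Eq. (3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

def rot_y(p):
    """Passive rotation about the Y-axis (pitch)."""
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rot_z(y):
    """Passive rotation about the Z-axis (yaw)."""
    c, s = np.cos(y), np.sin(y)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def project(P, T_R, theta, phi, psi, f, dx, dy, u0, v0):
    """Map a ground point P = (X, Y, 0) to pixel coordinates (u, v) via Eq. (1)."""
    M_EX = rot_x(theta) @ rot_y(phi) @ rot_z(psi)          # Eq. (3)
    M_IN = np.array([[f / dx, 0.0, u0],
                     [0.0, f / dy, v0],
                     [0.0, 0.0, 1.0]])                     # Eq. (4)
    P_cam = M_EX @ (np.asarray(P, float) - np.asarray(T_R, float))
    z_c = P_cam[2]                                         # depth in the camera frame
    u, v, _ = M_IN @ (P_cam / z_c)
    return u, v

# Hypothetical example: 50 mm focal length, 10 um pixels, principal point (512, 512),
# 2 km altitude, 10 deg roll, zero pitch and yaw.
u, v = project(P=(100.0, 250.0, 0.0), T_R=(0.0, 200.0, 2000.0),
               theta=np.deg2rad(10.0), phi=0.0, psi=0.0,
               f=0.05, dx=1e-5, dy=1e-5, u0=512.0, v0=512.0)
print(f"pixel coordinates: u = {u:.1f}, v = {v:.1f}")
```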
This geometric model provides a precise reference for the spatial location of each pixel. It is crucial to note, however, that optical imaging geometry is fundamentally different from the operating principle of SAR.

2.2.2. Strip-Map SAR Imaging Model

SAR is an active, coherent system that forms imagery by precisely measuring and coherently integrating radar echoes over a synthetic aperture. Within the co-aperture system, SAR operates in strip-map mode, continuously transmitting and receiving signals to facilitate rapid, large-scale terrain mapping. This capability makes it suitable for geological exploration, environmental monitoring [24], and disaster assessment [25].
As the foundation for high-quality imaging, precise modeling of the instantaneous distance history between a target and the sensor is paramount. This requirement places stringent demands on the imaging algorithm, necessitating one that can handle the non-approximated form of the instantaneous range without compromising efficiency. To meet these dual needs of precision and speed, this paper employs the wavenumber domain (ω-k) imaging algorithm for subsequent consistent imaging studies. After quadrature demodulation, the echo from a point target P can be represented as follows:
$$s_0(\tau, \eta) = A_0\, \omega_r\!\left(\tau - \frac{2R(\eta)}{c}\right) \omega_a(\eta - \eta_c) \cdot \exp\!\left(-j \frac{4\pi f_0 R(\eta)}{c}\right) \cdot \exp\!\left[\, j\pi K_r \left(\tau - \frac{2R(\eta)}{c}\right)^{2}\right] \tag{5}$$
The instantaneous slant range R(η) from the point target P to the radar trajectory is given by:
$$R(\eta) = \sqrt{R_0^2 + v_r^2 \eta^2} \tag{6}$$
where c is the speed of light, f_0 is the carrier frequency, A_0 is a complex constant, K_r is the range chirp rate, τ is the range time, η is the azimuth time, η_c is the beam center time, and ω_r and ω_a are the range and azimuth envelopes. R_0 is the closest slant range, and v_r is the aircraft velocity.
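As an illustration of Equations (5) and (6), the sketch below synthesizes the demodulated echo of a single point target over its hyperbolic range history. The radar parameters are hypothetical stand-ins, not the system values of Table 1.

```python
import numpy as np

c = 3.0e8                      # speed of light (m/s)
f0 = 9.6e9                     # carrier frequency (Hz), assumed X-band
Kr = 1.0e13                    # range chirp rate (Hz/s), assumed
Tp = 5.0e-6                    # pulse duration (s), assumed
R0, vr = 1.0e4, 120.0          # closest slant range (m), platform speed (m/s)

tau = 2 * R0 / c + np.linspace(-1e-5, 1e-5, 2048)   # fast (range) time
eta = np.linspace(-1.0, 1.0, 512)                   # slow (azimuth) time
TAU, ETA = np.meshgrid(tau, eta)

R = np.sqrt(R0**2 + (vr * ETA)**2)                  # Eq. (6): hyperbolic range history
dt = TAU - 2.0 * R / c
w_r = (np.abs(dt) <= Tp / 2).astype(float)          # rectangular range envelope
w_a = np.ones_like(ETA)                             # idealized azimuth envelope

# Eq. (5): azimuth phase history times the transmitted chirp, with A0 = 1.
s0 = w_r * w_a * np.exp(-1j * 4 * np.pi * f0 * R / c) * np.exp(1j * np.pi * Kr * dt**2)
```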

2.3. A Consistent Geometric Imaging Framework: Principles and Workflow

Building upon the individual imaging models, this section integrates them into a unified geometric framework to address the core registration challenges at the source, embedding a geometric compensation model directly into the SAR imaging pipeline rather than correcting misalignments afterward.

2.3.1. System Architecture and Geometric Configuration

The geometric configuration of the airborne optical/SAR consistent imaging system is illustrated in Figure 2. SAR operates in side-looking strip-map mode, while the frame-array camera performs forward push-broom imaging in a tilted configuration. The aircraft maintains a constant velocity along the flight trajectory Y_p, and the optical camera captures equally spaced images of the target area within the NED coordinate system (O-X_p Y_p Z_p). Its foundational feature is the theoretical co-axial alignment of the optical center, aircraft center of mass, and SAR antenna phase center. This configuration eliminates coordinate origin deviations between the two sensors, thereby establishing a unified geometric baseline and fulfilling the physical prerequisite for consistent imaging.

2.3.2. Ensuring Spatio-Temporal Consistency

The feasibility of consistent imaging hinges on precise spatio-temporal correspondence. The system clocks of the optical camera and SAR sensor are synchronized at the initial flight moment, with both sending pulse signals synchronously to the Position and Orientation System (POS). From the optical camera's shutter speed and the SAR synthetic aperture time, the number of optical photos acquired per aperture is calculated. The central optical photo within this interval serves as the consistent imaging reference, and its corresponding POS data contain the precise flight attitude and positional parameters used to determine the camera extrinsic parameter matrix M_EX and the translation offset T_R.
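A minimal sketch of this reference-photo selection logic is given below; the timing values are assumptions for illustration, since the frame period and aperture time are not listed here.

```python
import math

synthetic_aperture_time = 2.0   # SAR synthetic aperture time (s), assumed
frame_interval = 0.25           # optical frame period (s), assumed

# Number of optical photos acquired during one synthetic aperture.
n_photos = math.floor(synthetic_aperture_time / frame_interval)   # 8 here

# The central photo of the interval is the consistent-imaging reference;
# its POS record supplies the attitude (M_EX) and position (T_R) parameters.
reference_index = n_photos // 2
print(f"{n_photos} photos per aperture; reference photo index = {reference_index}")
```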

2.3.3. Core Concept: Compensation During Imaging

The principle of optical/SAR consistent imaging is to compensate for the pixel offsets between the two sensors during the SAR image formation. The objective is to directly generate SAR images that are registered with optical imagery in a unified coordinate system. This study utilizes higher-resolution and instantaneous-exposure optical images as the reference benchmark, assuming that they have undergone precise image processing. By establishing the quantitative relationship between SAR imaging deviations and multi-source payload pixel deviations, consistent compensation is implemented during the SAR imaging process to automatically align the SAR image with the optical reference.

2.3.4. Workflow and Geometric Interpretation

Figure 3 details the processing chain that implements this consistent imaging compensation for pixel deviations between heterogeneous payloads.
  • Transformation: Optical imaging is modeled as a central projection from object points to image points. SAR data is projected into the optical pixel coordinate system through a series of coordinate transformations (from the NED coordinate system to the sensor coordinate system, then to the aircraft coordinate system and the camera coordinate system, finally to the pixel coordinate system).
  • Modeling: This transformation chain establishes the theoretical compensation model, which formulates a direct, quantitative relationship between SAR imaging position deviations and the resulting optical/SAR pixel offsets.
  • Embedded Compensation: Automatic registration is achieved by directly applying the high-order Fourier series compensation factor to the relevant SAR data segment. This in-process correction fundamentally bypasses the traditional, computationally complex post-processing pipeline.
The following section presents the complete mathematical derivation that formalizes this compensation model.

3. Wavenumber Domain Consistent Imaging Compensation Method Based on High-Order Fourier Series Fitting

The wavenumber domain (ω-k) imaging algorithm is selected as the cornerstone of our consistent imaging framework due to its superior computational efficiency over the back-projection (BP) algorithm [26], a decisive factor for enabling real-time processing. However, this efficiency comes with an inherent trade-off: both theoretical analysis and simulation results demonstrate that a core operation within this process, the Stolt interpolation, inherently introduces systematic residual geometric deviations. This inherent flaw fundamentally limits the co-registration accuracy between SAR and optical data. Thus, this section presents a compensation method embedded directly into the ω-k process, which effectively corrects the errors at their source while preserving its computational advantage.
We first outline the key steps of the ω-k algorithm, which serves as the foundation for our subsequent compensation framework. The echo signal is transformed into the 2D frequency domain and expressed as
$$S_{2df}(f_\tau, f_\eta) = A_1 W_r(f_\tau) W_a(f_\eta - f_{\eta c}) \exp\!\left[\, j \theta_a(f_\tau, f_\eta) \right] \tag{7}$$
$$\theta_a(f_\tau, f_\eta) = -\frac{4\pi R_0 (f_0 + f_\tau)}{c} \sqrt{1 - \frac{c^2 f_\eta^2}{4 v_r^2 (f_0 + f_\tau)^2}} - \frac{\pi f_\tau^2}{K_r} \tag{8}$$
where A_1 is the complex constant in the 2D frequency domain, f_τ is the range frequency, f_η is the azimuth frequency, W_r is the envelope of the range spectrum, W_a(f_η − f_ηc) is the envelope of the azimuth spectrum centered on the Doppler centroid frequency f_ηc, and θ_a(f_τ, f_η) is the phase term in the 2D frequency domain.
The key steps of ω-k imaging are reference function multiplication (RFM) and Stolt interpolation. The purpose of RFM is to remove the phase at the reference range, generally set to the closest slant range R_0, thereby achieving full focusing at that range. The phase of the reference signal is
$$\theta_{ref} = \frac{4\pi R_{ref}}{c} \sqrt{(f_0 + f_\tau)^2 - \frac{c^2 f_\eta^2}{4 v_r^2}} + \frac{\pi f_\tau^2}{K_r} \tag{9}$$
Following RFM and range compression, Equation (10) expands the nonlinear residual phase function θ_RFM to the quadratic term, dividing it into three parts. The first term is the nonlinear residual azimuth modulation, which is approximated as a quadratic function of f_η. The second term is the residual range migration, a linear function of f_τ whose coefficient increases approximately with the square of f_η. The third term, the range-azimuth coupling term, can be ignored in the pure side-looking SAR imaging geometry [27].
$$\theta_{RFM} \approx -\frac{4\pi (R_0 - R_{ref})}{c} \sqrt{(f_0 + f_\tau)^2 - \frac{c^2 f_\eta^2}{4 v_r^2}} \approx -\frac{4\pi (R_0 - R_{ref})}{c} \left( f_0 D(f_\eta, v_r) + \frac{f_\tau}{D(f_\eta, v_r)} - \frac{f_\tau^2}{2 f_0 D^3(f_\eta, v_r)} \cdot \frac{c^2 f_\eta^2}{4 v_r^2 f_0^2} \right) \tag{10}$$
where $D(f_\eta, v_r) = \sqrt{1 - \dfrac{c^2 f_\eta^2}{4 v_r^2 f_0^2}}$ is the migration parameter.
The pivotal step in the ω-k algorithm is the Stolt interpolation, as defined in Equation (11). Its role is to remap the range frequency from a nonlinear to a linear scale, effectively completing the SAR focusing process by compensating for the residual range and azimuth compression terms. This critical transformation forms the primary bottleneck in balancing accuracy and efficiency. The interpolation process itself is an inherent source of phase error, where the choice of interpolation algorithm and the effects of non-uniform sampling directly dictate the final image quality and introduce geometric deviations, sidelobe artifacts, and focus degradation. This geometric inaccuracy is particularly critical, as it is the chief contributor to the consistent imaging bias that violates sub-pixel co-registration requirements. The resulting phase term after Stolt interpolation is given by Equation (12).
$$\sqrt{(f_0 + f_\tau)^2 - \frac{c^2 f_\eta^2}{4 v_r^2}} = f_0 + f_\tau' \tag{11}$$
$$\theta_{Stolt} \approx -\frac{4\pi (R_0 - R_{ref})}{c} (f_0 + f_\tau') \tag{12}$$
where f_τ' is the remapped (uniformly sampled) range frequency variable.
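The sketch below illustrates the remapping of Equation (11) applied row by row in the 2D frequency domain. For brevity it uses linear interpolation, whereas a practical implementation uses a windowed sinc kernel, which is precisely the error source analyzed next; the constants and array shapes are hypothetical.

```python
import numpy as np

c, f0, vr = 3.0e8, 9.6e9, 120.0   # hypothetical constants

def stolt_interpolate(S_rfm, f_tau, f_eta):
    """Remap each azimuth-frequency row of the RFM output onto a uniform
    range-frequency grid, implementing Eq. (11):
    sqrt((f0 + f_tau)^2 - c^2 f_eta^2 / (4 vr^2)) = f0 + f_tau'."""
    S_out = np.zeros_like(S_rfm)
    for i, fe in enumerate(f_eta):
        # f_tau' value at which each uniform f_tau input sample actually lies.
        f_tau_p = np.sqrt((f0 + f_tau)**2 - (c * fe)**2 / (4.0 * vr**2)) - f0
        # Resample the row back onto the uniform grid (the mapping is monotonic).
        row = S_rfm[i]
        S_out[i] = (np.interp(f_tau, f_tau_p, row.real)
                    + 1j * np.interp(f_tau, f_tau_p, row.imag))
    return S_out

# Toy usage: 64 azimuth bins x 256 range bins of RFM output.
f_tau = np.linspace(-1.0e8, 1.0e8, 256)       # range frequency axis (Hz)
f_eta = np.linspace(-1.0e3, 1.0e3, 64)        # azimuth frequency axis (Hz)
S_rfm = np.ones((64, 256), dtype=complex)     # placeholder spectrum
S_stolt = stolt_interpolate(S_rfm, f_tau, f_eta)
```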
In practice, the Stolt interpolation relies on a sinc kernel, which inherently introduces the Gibbs phenomenon. This manifests as periodic time-domain oscillations that resist complete suppression by windowing, with unsampled areas remaining particularly vulnerable. Consequently, the interpolation process imprints a distinct, oscillatory signature as residual high-order phase error, while the windowing operation itself further degrades interpolation accuracy. Additionally, geometric distortion also arises from the evolving imaging geometry, as the radar’s viewing angle changes continuously throughout the synthetic aperture.
Collectively, these inherent limitations explain the ω-k algorithm's inferior geolocation accuracy. The deterministic, oscillatory nature of the residual high-order phase errors, as revealed by this analysis, therefore establishes them as the definitive target for compensation. The practical impact of these inaccuracies becomes critically evident during optical/SAR cross-modal registration. The residual deviations from Stolt interpolation exceed the sub-pixel registration tolerance of high-resolution optical imagery. When the SAR image is projected into the optical pixel coordinate system, these otherwise minor deviations collectively cause noticeable registration discrepancies.
Building upon this error analysis, we propose a compensation framework that corrects these deterministic, oscillatory geometric deviations at their source within the imaging chain. The core idea is to model the Stolt-induced azimuth position error as a function of the azimuth coordinate, convert this error model into a compensatory phase in the wavenumber domain, and embed this phase factor directly after the Stolt interpolation step. Driven by the need for real-time performance and by the difficulties of image-domain post-processing, the framework operates directly during SAR image formation; given the oscillatory nature of the dominant Stolt-induced errors, a high-order Fourier series is their natural mathematical representation. The framework is implemented in three steps. First, establish the quantitative relationship between SAR imaging position errors and optical/SAR pixel deviations based on the unified geometric projection. Second, quantify the allowable bounds of SAR geometric deviations corresponding to optical/SAR sub-pixel co-registration, and acquire azimuth position error samples by simulating ideal targets, measuring their pixel deviations after uncompensated ω-k imaging, and converting them via the established relationship. Finally, fit these sampled errors with a high-order Fourier series to derive the compensation coefficients, which are then embedded into the ω-k imaging chain.
The foundation of this method is to project the SAR imaging results directly into the optical pixel coordinate system, based on the object-point to image-point transformation defined in Equation (1). To facilitate the derivation and simulation analysis, we introduce a set of ideal geometric assumptions. We assume that after the aircraft achieves stable flight, all sensor NED coordinate systems are parallel at all capture moments. As illustrated in Figure 2, the origin of the NED coordinate system (point O) is defined as the ground point at the same along-track distance as the starting moment of the flight. The translation vector between this point and the origin of the camera coordinate system at a given shooting moment is T_R = [0, A_z0, H]^T, where A_z0 represents the flight-direction distance. Under these ideal circumstances, the transformation from the NED to the camera coordinate system consists of a rotation of ψ around the Z-axis, followed by a rotation of θ around the X-axis.
By substituting Equations (1)–(3), we can approximate the image point positions in the pixel coordinate system for both an ideal target point P_ideal(X, Y, 0) and a target point P_dev(X + x, Y + y, 0) that is subject to SAR imaging position deviations, as given in Equations (13) and (14). The resulting pixel deviations between the two are shown in Equations (15) and (16). This derivation yields the mathematical model that forms the basis for our real-time compensation framework.
$$\begin{bmatrix} u_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d_x} \cdot \dfrac{Y - A_{z0}}{\sin\theta \cdot X - \cos\theta \cdot H} + u_0 \\[2ex] \dfrac{f}{d_y} \cdot \dfrac{\cos\theta \cdot X - \sin\theta \cdot H}{\sin\theta \cdot X - \cos\theta \cdot H} + v_0 \end{bmatrix} \tag{13}$$
$$\begin{bmatrix} u_2 \\ v_2 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d_x} \cdot \dfrac{Y + y - A_{z0}}{\sin\theta \cdot X - \cos\theta \cdot H} + u_0 \\[2ex] \dfrac{f}{d_y} \cdot \dfrac{\cos\theta \cdot (X + x) - \sin\theta \cdot H}{\sin\theta \cdot X - \cos\theta \cdot H} + v_0 \end{bmatrix} \tag{14}$$
$$\Delta u = u_2 - u_1 = \frac{f}{d_x} \cdot \frac{y}{\sin\theta \cdot X - \cos\theta \cdot H} \tag{15}$$
$$\Delta v = v_2 - v_1 = \frac{f}{d_y} \cdot \frac{\cos\theta \cdot x}{\sin\theta \cdot X - \cos\theta \cdot H} \tag{16}$$
From Equations (15) and (16), it is evident that the pixel deviations in range and azimuth are proportional to the SAR imaging position deviations in their respective directions. In the range direction, the scaling coefficient cos θ in Equation (16) provides a relatively large tolerance margin: multiplying the range position deviation x by this coefficient keeps the pixel deviation at the sub-pixel level for x within [−0.2784, 0.2784] m. Quantitative analysis based on Equation (16) and Table 1 demonstrates that for a range resolution of 0.33 m or finer, the resulting deviation remains within the sub-pixel tolerance. The azimuth direction, however, presents a much more challenging scenario. Quantitative analysis via Equation (15) shows that the SAR azimuth position deviations allowed by the optical/SAR sub-pixel azimuth tolerance fall within the very narrow interval [−0.0557, 0.0557] m. This tight bound makes direct azimuth sub-pixel registration extremely challenging for most SAR imaging results.
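The bounds above can be reproduced schematically by inverting Equations (15) and (16) for a chosen pixel tolerance, as in the sketch below. Every parameter value here is a hypothetical placeholder; the quoted intervals in the text derive from the actual system parameters of Table 1.

```python
import numpy as np

# Hypothetical camera/geometry parameters (NOT the values of Table 1).
f, dx, dy = 0.5, 1.0e-5, 1.0e-5      # focal length (m), pixel sizes (m)
theta = np.deg2rad(20.0)             # roll angle
X, H = 500.0, 3000.0                 # ground-range coordinate and altitude (m)
tol = 0.5                            # tolerance taken as half a pixel

D = np.sin(theta) * X - np.cos(theta) * H      # shared denominator, Eqs. (13)-(16)

# Invert Eq. (15): |delta_u| <= tol  ->  |y| <= tol * |D| * dx / f
y_max = tol * abs(D) * dx / f
# Invert Eq. (16): |delta_v| <= tol  ->  |x| <= tol * |D| * dy / (f * cos(theta))
x_max = tol * abs(D) * dy / (f * np.cos(theta))

print(f"azimuth bound |y| <= {y_max:.4f} m, range bound |x| <= {x_max:.4f} m")
```

As expected from the cos θ factor, the range bound is the looser of the two for any roll angle.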
This quantitative finding precisely defines the precision gap: the inherent azimuth deviations of the ω-k algorithm systematically exceed the narrow tolerance dictated by the optical imagery. To bridge this gap, a consistent imaging deviation compensation framework is essential. The core of our approach is to establish and leverage the joint relationship between optical/SAR pixel deviations and SAR imaging position deviations to solve for the specific azimuth imaging position deviation requiring compensation. This key compensation term is modeled as a high-order Fourier series of the azimuth position in Equation (17), since this mathematical form inherently captures the periodic, oscillatory errors that originate from the Stolt interpolation process.
$$f(x) = A_0 + \sum_{n=1}^{N} \left[ a_n \cos(n \omega x) + b_n \sin(n \omega x) \right] \tag{17}$$
where N is the order of the Fourier series; A_0, a_n, b_n, and ω are the coefficients to be determined; and x is the azimuth position within the SAR imaging interval.
The corresponding phase factor for the azimuth imaging position deviation to be compensated in the frequency domain is expressed as follows:
$$\theta_{com} = \frac{2\pi f_\eta \cdot f(x)}{v_r} \tag{18}$$
By the same rationale, the optical/SAR azimuth pixel deviation Δu must also be modeled as a high-order Fourier series, given the proportional relationship between Δu and y in Equation (15) and the nonlinear nature of the azimuth residual phase. The azimuth-consistent imaging compensation factor θ′_com under the co-aperture system is derived accordingly and takes the same form as the azimuth deviation phase factor θ_com. The scaling coefficient between the pixel deviations and the imaging position deviations is denoted k_1, which can be regarded as a constant within the optical/SAR overlapping FOV.
$$\theta'_{com} = \frac{2\pi f_\eta \cdot \Delta u \cdot k_1}{v_r} = \theta_{com} \tag{19}$$
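Computationally, Equations (17)–(19) amount to a linear least-squares fit of the sampled azimuth pixel deviations (for a fixed fundamental frequency ω) followed by construction of a compensating phase factor. The sketch below illustrates both pieces; the deviation samples, the series order N, and the sign of the exponent are illustrative assumptions.

```python
import numpy as np

def fit_fourier_series(x, dev, N, w):
    """Least-squares estimate of A0, a_n, b_n in Eq. (17) for a fixed w."""
    cols = [np.ones_like(x)]
    for n in range(1, N + 1):
        cols += [np.cos(n * w * x), np.sin(n * w * x)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, dev, rcond=None)
    return coef, A @ coef          # coefficients and fitted deviation curve

def compensation_factor(f_eta, delta_u, k1, vr):
    """Phase factor built from Eq. (19); the sign of the exponent must
    follow the FFT convention of the surrounding imaging chain."""
    return np.exp(-1j * 2.0 * np.pi * f_eta * delta_u * k1 / vr)

# Hypothetical deviation samples (in pixels) across a 110 m azimuth interval.
x = np.linspace(0.0, 110.0, 221)
dev = 0.8 * np.sin(2 * np.pi * x / 37.0) + 0.2 * np.cos(2 * np.pi * x / 11.0)

coef, dev_fit = fit_fourier_series(x, dev, N=8, w=2 * np.pi / 110.0)
rms = np.sqrt(np.mean((dev - dev_fit)**2))
print(f"fit RMS residual: {rms:.3f} pixels")
```

In this toy fit, the shorter-period component deliberately falls beyond the chosen order N, mirroring the few residual outliers reported in Section 4.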
The complete flowchart of the proposed consistent imaging method is presented in Figure 4, illustrating the overall workflow from raw data to co-registered output. This framework achieves source-level compensation by embedding the derived phase factor directly into the wavenumber domain processing chain, with the "Embedded Compensation" step serving as the real-time registration operator. It thereby not only closes the systematic precision gap but also establishes a foundation for pixel-level co-registration at the time of SAR image formation.

4. Experiments and Analysis

This section presents proof-of-concept experiments to validate the proposed wavenumber domain consistent imaging algorithm. The experiments are designed around two primary comparisons: (1) against the time-domain BP algorithm, to benchmark performance in the critical trade-off between geometric accuracy and computational speed; and (2) the ω-k algorithm with and without the proposed Fourier-series compensation, to directly demonstrate the necessity and effectiveness of the core compensation module. These include tests on multi-point targets and a complex structural target derived from an optical photograph, as detailed in subsequent subsections. The key system parameters of the airborne co-aperture system, optimized for long-range SAR imaging with high bandwidth, penetration, and anti-interference capabilities, are summarized in Table 1. The overlapping imaging area, calculated from these parameters, measures 550 m in range by 110 m in azimuth, defining the scope of all simulations. Furthermore, to approximate realistic conditions, both system thermal noise and speckle noise were incorporated into the SAR imaging simulation.

4.1. Analysis of Optical/SAR Consistent Imaging for Multi-Point Targets

This subsection verifies the core functionality of the proposed consistent imaging algorithm using multi-point targets. For comparison, systematic simulations employed the time-domain BP imaging as a geometric accuracy benchmark, selected for its pixel-by-pixel processing that closely mimics the optical central projection principle [28].
Figure 5a shows the simulated “CAS” targets within the optical pixel coordinate system. By applying the optical imaging constraints to the time-domain BP imaging workflow, we obtained the consistent imaging results. Figure 5b and Figure 5c present the BP imaging results under noise-free and noisy conditions, respectively. As shown in Figure 5d, all deviations remain within the sub-pixel tolerance in the noise-free case. However, introducing noise induces positional offsets in several targets, pushing their pixel deviations beyond the sub-pixel threshold and degrading image quality. This sensitivity to noise, combined with a substantial computational burden of 1727 s, underscores the practical limitations of the BP consistent imaging method.
In contrast, the ω-k algorithm demonstrates superior robustness, which stems from the fundamental noise-handling characteristics of its frequency-domain processing chain. Unlike the pixel-wise coherent summation in BP, which amplifies the visual impact of uncorrelated additive noise, the FFT-based operations in ω-k inherently distribute such noise more evenly across the image, preserving the perceptible sharpness and positional stability of point targets. A comparison between its noise-free and noisy imaging results shows virtually unchanged target positions, preserving image quality and positional accuracy with markedly higher computational efficiency (322 s). The ω-k imaging result under noisy conditions (Figure 5e) is displayed directly in the optical pixel coordinate system using the transformation in Equation (13). A comparison with the actual image point positions from Equation (14) shows close agreement, as visualized in Figure 5f. This validates the feasibility of directly generating SAR imagery within the optical pixel coordinate system through the object-to-image point transformation. Furthermore, under these realistic conditions that include system and speckle noise, the range deviations consistently remain within the sub-pixel level, providing critical assurance of the method's robustness in practical scenarios. Azimuth pixel deviations, however, exhibit nonlinear low-amplitude oscillations that predominantly exceed the sub-pixel tolerance, necessitating a dedicated compensation method for these deviations. To validate the imaging system, the achievable resolution was measured using the −3 dB width of the impulse response of an isolated point target, as shown in Figure 5g. The measured range and azimuth resolutions were approximately 0.239 m and 0.315 m, respectively, which align well with the theoretical values specified in Table 1, thereby confirming the fidelity of our simulation and imaging chain. Furthermore, to provide a comprehensive assessment of the focusing quality, we calculated the Peak Side-Lobe Ratio (PSLR) and Integrated Side-Lobe Ratio (ISLR) for the same point target. The measured PSLR values are −14.89 dB (range) and −13.73 dB (azimuth), and the ISLR values are −12.15 dB (range) and −11.81 dB (azimuth). These results meet the typical requirements for high-quality SAR imaging, confirming that the ω-k algorithm provides excellent sidelobe suppression.
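For reference, the sketch below shows how such impulse-response metrics can be measured from a 1D cut through a focused point target. The profile is a synthetic sinc stand-in and the peak search is sample-level, where a practical measurement would zero-pad and interpolate.

```python
import numpy as np

def irw_and_pslr(profile, spacing):
    """Estimate the -3 dB impulse response width (IRW) and PSLR of a 1D cut."""
    p = np.abs(np.asarray(profile)).astype(float)
    k = int(np.argmax(p))
    half = p[k] / np.sqrt(2.0)                    # -3 dB level in amplitude
    left, right = k, k
    while left > 0 and p[left] > half:            # left half-power crossing
        left -= 1
    while right < len(p) - 1 and p[right] > half: # right half-power crossing
        right += 1
    irw = (right - left) * spacing                # coarse, sample-level width
    ln, rn = left, right
    while ln > 0 and p[ln - 1] < p[ln]:           # walk down to the first null
        ln -= 1
    while rn < len(p) - 1 and p[rn + 1] < p[rn]:
        rn += 1
    side = max(p[:ln].max(initial=0.0), p[rn + 1:].max(initial=0.0))
    return irw, 20.0 * np.log10(side / p[k] + 1e-30)

# Synthetic stand-in profile: ideal sinc response sampled every 0.05 m,
# scaled so its -3 dB width is a notional 0.3 m.
x = np.arange(-20.0, 20.0, 0.05)
profile = np.sinc(0.886 * x / 0.3)
irw, pslr = irw_and_pslr(profile, spacing=0.05)
print(f"IRW = {irw:.3f} m, PSLR = {pslr:.2f} dB")   # ~0.3 m, ~-13.3 dB
```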
Comprehensive experiments were performed using arbitrary multi-point targets at 0.5 m intervals across the whole 110 m azimuth swath on flat terrain. The corresponding optical image and the uncompensated ω-k imaging result in the pixel coordinate system are shown in Figure 6a and Figure 6b, respectively. Figure 6c demonstrates that the nonlinear azimuth pixel deviations can be effectively modeled by a high-order Fourier series, validating the theoretical derivations in Equations (17) and (19). Figure 6d presents the ω-k imaging result after compensation, showing well-preserved image quality. As illustrated in Figure 6e, most azimuth deviations are corrected to sub-pixel accuracy. This result confirms the feasibility of the Fourier series representation and validates the proportional relationship in Equation (15). It should be noted, however, that the fitted deviation model does not completely cover all multi-point targets, leaving azimuth deviations above the sub-pixel level at a few points. Despite these minor residuals, the overall compensation effectiveness is significantly improved: all range pixel deviations remain within the sub-pixel level, and the number of azimuth points exceeding the sub-pixel threshold is reduced from 115 to 9.
The universality of the method was further tested on the “CAS” targets. The compensated ω-k imaging result is presented in Figure 7a, while Figure 7b quantifies the improvement, with targets exceeding the azimuth sub-pixel threshold reduced from 24 points to only 3. The few residual outliers are consistent with the expected behavior of a global parametric fit and do not invalidate the overall sub-pixel accuracy achievement. This compensation process added an overhead of 48 s to the base ω-k imaging time (56 s), resulting in a total processing time of 104 s for the “CAS” targets. Even with this overhead, the proposed method achieves significant computational savings compared to the 1727 s of time-domain BP imaging (a speedup of over 16×), while maintaining compensation accuracy.
These results validate two key findings. First, the theoretical model establishes a proportional relationship between optical/SAR pixel deviations and SAR azimuth imaging position deviations. Second, the proposed high-order Fourier series method effectively compensates for the nonlinear azimuth deviations. Furthermore, sub-pixel accuracy in range is achieved directly through the object-to-image coordinate transformation. The overall method demonstrates universal applicability and exceptional computational efficiency, requiring minimal overhead to enable near real-time optical/SAR consistent imaging and thus offering a practical solution for time-sensitive remote sensing applications.

4.2. Analysis of Optical/SAR Consistent Imaging for Structural Targets

To further validate the universal applicability of the proposed high-order Fourier series method for consistent imaging deviation compensation, we conducted a simulation experiment using a complex semi-physical area target derived from a 65 × 50 optical photograph (Figure 8). Using this image as a reference, SAR echo data were generated (system parameters in Table 1) and processed through both the time-domain BP and the proposed wavenumber-domain ω-k imaging algorithms, with all results projected into the optical pixel coordinate system.
The BP algorithm establishes a performance baseline. As shown in Figure 9a, it produces high-quality imagery under noise-free conditions, accurately rendering the target’s shape and size with sub-pixel co-registration accuracy (Figure 9b). However, its computational cost of 1787.68 s precludes real-time application. Furthermore, its performance degrades under noisy conditions (Figure 9c), introducing significant positional deviations (Figure 9d).
In contrast, the efficacy and efficiency of our proposed method are summarized in Figure 10. A comparison of ω-k imaging results before (Figure 10a) and after (Figure 10c) compensation demonstrates its corrective capability. While a slight quality trade-off exists compared to BP, our method achieves a substantial gain in computational efficiency, completing the entire compensation process in merely 72.08 s. Critically, as evidenced by the Fourier series fitting (Figure 10b) and the final deviation analysis (Figure 10d), the proposed method successfully confines all azimuth pixel deviations within sub-pixel tolerances, with range registration also achieving sub-pixel accuracy in the optical pixel coordinate system, thereby robustly validating its universal applicability for consistent imaging.
The experimental results validate the proposed wavenumber domain consistent imaging method as a viable and superior alternative to the BP algorithm. While the native ω-k accuracy is limited by Stolt interpolation errors, our method overcomes the critical limitations of high computational cost and pronounced noise sensitivity that hinder the BP algorithm's real-time application. By modeling the residual high-order imaging errors as a high-order Fourier series of the azimuth positions and embedding the compensation directly into the ω-k imaging workflow, we reliably achieve automatic, sub-pixel co-registration in both range and azimuth, establishing a robust solution.

5. Discussion

This work establishes a new paradigm for optical/SAR consistent imaging by performing geometric registration directly within the wavenumber domain imaging process itself, which overcomes the traditional reliance on complex and inefficient post-processing alignment. The experimental results robustly confirm the feasibility and computational superiority of the proposed wavenumber domain method, which completed the entire imaging and registration process in merely 72.08 s, over 24 times faster than the time-domain BP algorithm (1787.68 s).
Practical Robustness and Extendable Framework. The current validation demonstrates the core paradigm under an ideal co-aperture geometry with flat terrain. To address its operational robustness, it is crucial to recognize that the proposed method is fundamentally an extendable parametric compensation framework, not a fixed solution. The high-order Fourier series models systematic geometric biases. In practice, deviations such as residual pointing errors or low-frequency platform motion can be incorporated by constructing an updated geometric model, deriving the range and azimuth bias functions, and embedding them into the compensation series. This approach, conceptually aligned with bias compensation in interferometric SAR, allows the method to handle a class of deterministic, modelable errors, and sub-pixel registration is then maintained under these modeled conditions. The framework is also compatible with standard motion compensation preprocessing for handling high-frequency phase errors. The compensation step itself adds negligible computational overhead, preserving the real-time capability intrinsic to the ω-k algorithm. Thus, the method provides a foundation for robust, calibrated operation beyond the ideal case.
Limitations and Pathways to Operational Robustness. While the proposed method achieves high accuracy under the controlled conditions of this study, its transition to operational use requires addressing inherent limitations and environmental complexities. First, the few residual azimuth outliers observed even in flat-terrain simulations stem from the fundamental trade-off in parametric fitting: a finite-order Fourier series minimizes the overall error across the aperture but does not guarantee zero error at every discrete point. Local signal properties, such as sidelobe interactions, can also contribute to these minor deviations. For real-world deployment involving non-flat terrain, noise, and potential calibration residuals, the core signal-domain compensation should be integrated into a hierarchical robustness framework: (1) Enhanced Geometric Modeling by incorporating a digital elevation model (DEM) to derive terrain-aware compensation coefficients; (2) Detection and Local Refinement through lightweight quality-assurance and targeted local alignment for flagged outliers; (3) System-Level Integration with confidence-aware fusion algorithms in downstream applications. This envisioned progression charts a clear pathway for evolving the method from a laboratory demonstration into a robust system component.
Building upon the extendable framework and the robustness pathways outlined above, future work will center on evolving this paradigm from an ideal-case solution to a robust tool. This includes exploring the adaptation of the proposed signal-domain compensation principle to other high-efficiency SAR imaging algorithms, such as the Chirp Scaling algorithm, to broaden the framework’s applicability. We also plan to develop a dynamic error model that incorporates real-time navigation data and high-resolution terrain information. The trial deployment of our system will be instrumental in this endeavor, providing the crucial dataset needed to probe the method’s limitations and advance its capabilities. The inherent flexibility of the high-order Fourier series compensation suggests a strong potential for adaptation, positioning it as a cornerstone for future high-fidelity, consistent imaging in topographically complex and dynamic environments.

6. Conclusions

This paper has introduced an innovative signal-domain methodology that unifies SAR imaging and optical co-registration into a single, automated workflow within the wavenumber domain. Supported by an airborne co-aperture system for synchronous acquisition, our approach is rooted in the recognition that pixel deviations are proportional to imaging position errors and that Stolt residuals are oscillatory. Consequently, we embed a high-order Fourier series model directly into the SAR wavenumber domain imaging chain, achieving compensation at the source. Comprehensive experiments on both point and structural targets have conclusively validated the method’s feasibility and sub-pixel accuracy. Most notably, it achieves a dramatic increase in computational efficiency, typically exceeding an order of magnitude speedup over the prior time-domain BP algorithm. This breakthrough in efficiency, combined with its automated registration workflow, establishes a new paradigm for multi-source remote sensing data processing that moves beyond slow, complex post-imaging alignment. While the current framework is validated under idealized conditions, future work will extend it to handle complex topography and motion errors. The scheduled trial deployment of our system will provide the critical real-world data to guide this evolution. With its proven accuracy and superior processing speed, this research opens new avenues for real-time, high-precision multi-source remote sensing in both defense and civilian sectors.

Author Contributions

Conceptualization, K.W. and B.W.; methodology, K.W.; software, K.W. and Y.W.; validation, K.W., Y.W. and B.W.; formal analysis, K.W., L.T. and X.W.; investigation, K.W., L.T. and X.W.; resources, B.W.; data curation, K.W. and Y.W.; writing—original draft preparation, K.W.; writing—review and editing, C.S., B.W. and K.W.; visualization, K.W.; supervision, M.X.; project administration, Y.W.; funding acquisition, Y.W. and B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China under Grant 2022YFB3902300.

Data Availability Statement

The data used to support the study are available upon request to the author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R.; et al. Multisource and Multitemporal Data Fusion in Remote Sensing: A Comprehensive Review of the State of the Art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39. [Google Scholar] [CrossRef]
  2. Zhu, B.; Zhang, J.; Tang, T.; Ye, Y. SFOC: A Novel Multi-Directional and Multi-Scale Structural Descriptor for Multimodal Remote Sensing Image Matching. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2022, 43, 113–120. [Google Scholar] [CrossRef]
  3. Sun, Y.; Jiang, W.; Yang, J.; Li, W. SAR Target Recognition Using cGAN-Based SAR-to-Optical Image Translation. Remote Sens. 2022, 14, 1793. [Google Scholar] [CrossRef]
  4. Mansaray, L.R.; Yang, L.; Kabba, V.T.; Kanu, A.S.; Huang, J.; Wang, F. Optimising rice mapping in cloud-prone environments by combining quad-source optical with Sentinel-1A microwave satellite imagery. GISci. Remote Sens. 2019, 56, 1333–1354. [Google Scholar] [CrossRef]
  5. Kulkarni, S.C.; Rege, P.P. Pixel level fusion techniques for SAR and optical images: A review. Inf. Fusion 2020, 59, 13–29. [Google Scholar] [CrossRef]
  6. Ye, Y.; Yang, C.; Zhu, B.; Zhou, L.; He, Y.; Jia, H. Improving Co-Registration for Sentinel-1 SAR and Sentinel-2 Optical Images. Remote Sens. 2021, 13, 928. [Google Scholar] [CrossRef]
  7. Wei, C.; Zheng, Q.; Shang, Y.; Zhang, X.; Yin, J.; Shen, Z. Black and Odorous Water Monitoring by Using GF Series Remote Sensing Data. In Proceedings of the 2021 9th International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Shenzhen, China, 26–29 July 2021. [Google Scholar]
  8. Zhou, L. Preliminary process of airborne multidimensional space joint observation SAR system. J. Electron. Inf. Technol. 2023, 45, 1243–1253. [Google Scholar] [CrossRef]
  9. Zhu, B.; Zhou, L.; Pu, S.; Fan, J.; Ye, Y. Advances and Challenges in Multimodal Remote Sensing Image Registration. IEEE J. Miniaturization Air Space Syst. 2023, 4, 165–174. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the optical/SAR airborne co-aperture system. The system employs a co-aperture design with a shared primary reflector. The incident composite wavefront (containing both optical and microwave signals) is collected by the primary and secondary mirrors and then separated by a spectral-frequency splitter. The optical signal is directed to the subsequent optical imaging branch (split into visible and infrared channels), while the microwave signal is received by the radar feed and sent to the SAR processing chain. This architecture ensures spatiotemporal and viewing consistency for the multi-source data.
Figure 2. Geometric configuration of the optical/SAR consistent imaging.
Figure 3. Data processing chain for consistent imaging.
Figure 4. Flowchart of the wavenumber domain imaging with embedded high-order Fourier series compensation for pixel-level co-registration.
Figure 5. Comparative analysis of consistent imaging: BP algorithm vs. ω-k algorithm in terms of robustness and efficiency. (a) Simulated “CAS” targets in the optical coordinate system; (b) BP imaging result under noise-free conditions; (c) BP imaging result under noisy conditions; (d) Pixel deviation analysis of the BP algorithm; (e) Robust ω-k imaging result under noisy conditions; (f) Pixel deviations between the ω-k imaging result and the theoretical reference; (g) Impulse response width measurement for resolution validation.
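The impulse response width (IRW) check in Figure 5g amounts to locating the −3 dB crossings of a 1-D cut through a point-target response. Below is a minimal sketch of such a measurement; the crossing search with linear interpolation is a generic approach assumed here, not the paper's exact procedure.

```python
import numpy as np

def irw_3db(mag, spacing):
    """3 dB impulse response width of a 1-D cut through a point target.

    mag     : magnitude samples, assumed oversampled around the peak
    spacing : sample spacing in metres
    """
    mag = np.asarray(mag, dtype=float) / np.max(mag)
    thr = 10 ** (-3.0 / 20.0)                 # -3 dB amplitude (~0.708)
    peak = int(np.argmax(mag))

    def crossing(indices):
        # First sample below threshold, refined by linear interpolation
        # toward its neighbour on the peak side.
        for i in indices:
            if mag[i] < thr:
                j = i + 1 if i < peak else i - 1
                frac = (thr - mag[i]) / (mag[j] - mag[i])
                return i + frac * (j - i)
        raise ValueError("no 3 dB crossing found")

    left  = crossing(range(peak, -1, -1))
    right = crossing(range(peak, len(mag)))
    return (right - left) * spacing

# Idealized check: an unweighted sinc with 0.33 m nominal resolution has a
# 3 dB width of about 0.886 * 0.33 ≈ 0.29 m.
x = np.arange(-200, 201) * 0.01               # metres
print(f"IRW ≈ {irw_3db(np.abs(np.sinc(x / 0.33)), 0.01):.3f} m")
```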
Figure 6. Modeling, compensation, and validation of azimuth pixel deviations using high-order Fourier series on multi-point targets. (a) Simulated optical image; (b) Uncompensated ω-k imaging result; (c) High-order Fourier series fitting of the nonlinear azimuth pixel deviations; (d) Compensated ω-k imaging result; (e) Quantitative comparison of azimuth deviations before and after compensation.
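The before/after comparison in Figure 6e reduces to measuring, for each point target, the offset between its response in the SAR image and its reference position in the optical image. One way to obtain such sub-pixel offsets is an intensity-weighted centroid inside a local window, sketched below; the helper and its parameters are hypothetical, not the paper's estimator.

```python
import numpy as np

def peak_deviations(sar_mag, ref_rc, win=8):
    """Sub-pixel deviation of each point-target response from its optical
    reference position, via an intensity-weighted centroid.

    sar_mag : 2-D SAR magnitude image on the shared optical/SAR grid
    ref_rc  : (N, 2) reference (row, col) positions from the optical image,
              assumed at least `win` pixels away from the image borders
    win     : half-width of the analysis window around each reference pixel
    """
    devs = []
    for r0, c0 in np.round(np.asarray(ref_rc)).astype(int):
        patch = sar_mag[r0 - win:r0 + win + 1, c0 - win:c0 + win + 1].astype(float)
        w = patch / patch.sum()
        rr, cc = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        # Centroid offset from the window centre = (range, azimuth) deviation.
        devs.append(((w * rr).sum() - win, (w * cc).sum() - win))
    return np.asarray(devs)

# RMS deviation in (range, azimuth) pixels, before vs. after compensation:
# np.sqrt(np.mean(peak_deviations(img, refs)**2, axis=0)); values below 1
# correspond to the sub-pixel consistency reported in Figure 6e.
```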
Figure 7. Demonstration of universal applicability and robust high-resolution performance. (a) Compensated ω-k image of the “CAS” targets; (b) Quantitative analysis of azimuth deviations for the “CAS” targets.
Figure 8. Optical image used in the semi-physical simulation.
Figure 9. Performance benchmark and limitations of the BP algorithm. (a) BP imaging result under noise-free conditions; (b) Analysis of range and azimuth pixel deviations (noise-free); (c) BP imaging result under noisy conditions; (d) Corresponding pixel deviations under noisy conditions.
Figure 10. Demonstration of the efficient and accurate ω-k compensation method. (a) Uncompensated ω-k imaging result; (b) Azimuth deviation modeling via third-order Fourier series fitting; (c) Compensated ω-k imaging result; (d) Sub-pixel azimuth accuracy achieved after compensation.
Table 1. Parameters of Airborne Co-aperture System.
Parameter                     Value
Aircraft altitude             4 km
Optical camera resolution     0.055 m
Optical camera FOV            0.32°
SAR carrier frequency         35 GHz
SAR range resolution          0.33 m
SAR azimuth resolution        0.35 m
SAR beam width
Center slant range            20 km
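As a quick plausibility check, a few quantities implied by Table 1 follow from textbook relations; the flat-earth grazing-angle model and the c/2B range-resolution rule used below are standard assumptions, not values taken from the paper.

```python
import math

c  = 299_792_458.0   # speed of light, m/s

h  = 4_000.0         # aircraft altitude (Table 1), m
R0 = 20_000.0        # center slant range (Table 1), m
f0 = 35e9            # SAR carrier frequency (Table 1), Hz
dR = 0.33            # SAR range resolution (Table 1), m

wavelength = c / f0                           # ~8.6 mm at Ka band
grazing    = math.degrees(math.asin(h / R0))  # ~11.5°, flat-earth model
bandwidth  = c / (2.0 * dR)                   # ~454 MHz for 0.33 m resolution

print(f"wavelength ≈ {wavelength * 1e3:.2f} mm")
print(f"grazing angle ≈ {grazing:.1f}°")
print(f"required chirp bandwidth ≈ {bandwidth / 1e6:.0f} MHz")
```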