Article

TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video

1 Electronic Information School, Wuhan University, Wuhan 430079, China
2 Key Laboratory of Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
3 Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422
Submission received: 29 May 2025 / Revised: 3 July 2025 / Accepted: 9 July 2025 / Published: 12 July 2025

Abstract

Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatially variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant ($C_n^2$) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS's superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances.

1. Introduction

Satellite video imaging has revolutionized Earth observation by enabling continuous spatiotemporal monitoring of terrestrial dynamics at sub-second temporal resolution and sub-meter spatial fidelity [1]. This capability supports mission-critical applications such as real-time traffic flow analysis [2], tactical reconnaissance [3], and cross-platform object tracking [4]. A fundamental challenge arises from atmospheric turbulence in the tropospheric boundary layer, which introduces spatiotemporally correlated distortions during image acquisition. As depicted in Figure 1a, the degradation originates from wavefront phase perturbations induced by atmospheric turbulence, driven by vertically stratified $C_n^2$ profiles exhibiting horizontal anisotropy along the line-of-sight atmospheric path. Propagating through this medium, these perturbations simultaneously cause intensity distribution dispersion, peak intensity reduction, and focal plane displacement. Consequently, satellite videos exhibit temporal jitter, random blurring, and geometric distortion. Figure 1b demonstrates these effects in an aerial airport scene: the temporal profile reveals high-frequency glitch-like fluctuations, while spatially, locally magnified regions show distorted runway geometries with position-dependent blur intensity. Such visual degradation compromises target clarity and obscures morphological details, thereby degrading the performance of downstream applications. Effective turbulence mitigation is therefore a prerequisite for operational satellite video analytics.
Compared with adaptive optics systems [5,6] that require sophisticated wavefront correctors and real-time controllers, turbulence mitigation (TM) algorithms offer a computationally efficient solution to this ill-posed inverse problem. Conventional TM methodologies predominantly employ lucky imaging frameworks, which statistically select and fuse minimally distorted frames. Law et al. [7] pioneered this paradigm in ground-based visible light imaging by developing quality-aware frame selection metrics. Subsequent innovations include Joshi et al.’s [8] locally adaptive lucky region fusion to suppress blurring artifacts and Anantrasirichai’s [9] dual-tree complex wavelet transform approach for spatial distortion correction. Mao et al. [10] further advanced the field through spatiotemporal non-local fusion, achieving hybrid photometric–geometric compensation. However, these methods suffer from limited robustness under strong turbulence and dynamic conditions, often yielding suboptimal restoration results due to mismatches between real-world turbulent imaging environments and idealized assumptions. Furthermore, their computationally intensive optimization frameworks frequently lead to performance degradation in practical deployments.
Recent breakthroughs in deep learning, particularly its demonstrated efficacy in low-level vision tasks, have established data-driven TM as the state-of-the-art approach for restoring degraded imagery [11,12,13]. Supervised learning frameworks utilizing paired turbulence-distorted/clean datasets currently dominate TM research, benefiting from their capacity to model deterministic degradation processes. Transformer architectures have emerged as powerful tools for single-frame TM tasks. Mao et al. [14] pioneered a spatial-adaptive processing network that captures long-range dependencies to address turbulence-induced distortions. Complementary work by Li et al. [15] demonstrated that U-Net structures with global feature aggregation can effectively mitigate atmospheric distortions in remote sensing applications. However, such single-frame methods inherently fail to resolve geometric distortions, as temporal information loss in isolated frames prevents complete turbulence parameter estimation. To overcome this limitation, multi-frame TM frameworks have been developed to exploit spatiotemporal correlations across sequential observations. By aggregating high-frequency details from multiple degraded frames, these methods achieve superior reconstruction fidelity. Terrestrial imaging systems have particularly benefited from such approaches, including a turbulence-specific Wasserstein GAN framework [16] that inverts degradation processes via adversarial learning. Zou et al. [17] integrated deformable 3D convolutions with Swin transformers to handle motion artifacts in long-range videos. Zhang et al. [18] combined deformable attention with temporal-channel mechanisms, enabling pixel-level registration through lucky imaging principles. Despite these terrestrial successes, existing multi-frame TM methods remain limited in satellite scenarios owing to altitude-varying turbulence, orbital platform dynamics, and the absence of integrated physics priors or physics-based regularization for unseen turbulence regimes.
Physics-based methods provide a compelling alternative for TM tasks by explicitly modeling atmospheric degradation mechanisms. The systematic integration of turbulence-governed physical processes, particularly phase distortion modeling [19], spatiotemporal point spread function (PSF) variations [10], and physical parameters of the turbulence field [20], has demonstrated substantial benefits. These approaches not only yield more accurate and reliable results but also improve generalization capability across varying turbulence conditions. The $C_n^2$ parameter, a fundamental metric for atmospheric turbulence quantification, has been successfully integrated into collaborative learning frameworks for joint artifact suppression and turbulence estimation in thermal infrared imagery [20]. However, satellite-based TM implementation faces two unresolved challenges. First, the requirement for synchronized two-dimensional turbulence field measurements, typically acquired through ground-based scintillometer networks, conflicts with orbital platforms' inherent lack of in situ sensing capabilities. Second, while recent studies [21,22,23] propose video-based estimation methods leveraging temporal intensity correlations, their functional integration with TM tasks remains experimentally unverified under orbital imaging conditions characterized by rapid platform motion and variable observation geometries.
The classical model [21,22] for estimating 2D $C_n^2$ fields from distorted video distinguishes two variable categories: video-intrinsic features (image gradients, temporal intensity variance) and imaging-system parameters (pixel field-of-view, imaging distance, optical aperture diameter). Satellite TM tasks exhibit unique advantages through homologous video sequences sharing identical imaging-system parameters, allowing these parameters to be treated as constants. Under this constraint, we define the turbulence signature—a physics-informed metric calculated from gradient-temporal analysis—to quantify scaled $C_n^2$ spatial distributions. As demonstrated in Figure 2, turbulence signatures provide three critical functionalities: (1) enhanced distortion feature visualization in raw video frames, (2) sparse representation enabling efficient large-scale inference, and (3) strong positive correlation with distortion severity. This discovery motivates our turbulence signature-guided modeling strategy, which resolves the limitation of conventional TM methods, namely their incapacity to incorporate turbulence physics as inductive bias, via explicit feature encoding.
Specifically, this study proposes a physics-based Turbulence Mitigation network guided by Turbulence Signature (TMTS) for satellite video. The turbulence signature models spatiotemporal distortions through direct extraction of image gradients and local temporal intensity variance from the source video, thereby eliminating external data dependencies. Maintaining model interpretability through the three-stage framework of feature alignment, reference reconstruction, and temporal fusion, the architecture incorporates three novel modules: Turbulence Signature-Guided Alignment (TSGA), Convolutional Gated Reference Feature Update (CGRFU), and Turbulence Spatial-Temporal Self-Attention (TSTSA). By integrating turbulence signatures, these modules concentrate on effective distortion information, substantially constraining solution space complexity. The TSGA module leverages deformable attention on downsampled deep features to activate global geometric distortion representations, replacing conventional optical flow methods with physics-aware compensation. The CGRFU module addresses misalignment propagation via convolutional gating mechanisms that model long-range temporal dependencies in reference features. The TSTSA module enhances complementary feature aggregation in low-distortion regions via adaptive attention mechanisms.
In brief, our contributions are listed as follows. (1) Physics-Informed Turbulence Signature: A spatiotemporal distortion prior constructed directly from raw video provides explicit physical constraints without requiring external instrumentation. This approach improves generalizability across varying atmospheric conditions. (2) Task-Specific Network Modules: Three turbulence signature-integrated components target critical aspects. The TSGA module enables global geometric compensation for turbulence-distorted features, while the CGRFU module ensures dynamically reliable reference construction. The TSTSA module further accomplishes adaptive fusion of spatiotemporally complementary information. (3) Synergistic Framework: The framework tightly integrates physical priors with deep learning through turbulence signature-guided three-stage TM processing, enhancing the interpretability of network models with minimal complexity growth. (4) Cross-Platform Validation: Comprehensive evaluations across five satellite systems (Jilin-1, Carbonite-2, UrtheCast, Skysat-1, and Luojia3-01) demonstrate state-of-the-art performance.

2. Materials and Methods

2.1. Turbulence Signature

The proposed turbulence signature is derived from estimating the refractive index structure constant $C_n^2$ using consecutive image sequences. To provide a clearer understanding of this process, we first briefly review the classical approach to estimating $C_n^2$. Higher $C_n^2$ values indicate stronger turbulence, causing stronger wavefront distortions. The retrieval of $C_n^2$ relies on statistical analysis of angle-of-arrival (AOA) variance [21]. When video-based remote sensing systems operate in the inertial subrange ($D > l_0$), the single-axis AOA variance $\sigma_\alpha^2$ for unbounded plane waves is expressed as follows:
$\sigma_\alpha^2 = 2.914\, D^{-1/3} \int_0^L C_n^2(z)\, dz, \qquad \sqrt{\lambda L} < D < L_0,$ (1)
where $\alpha$ denotes the single-axis angular component (in rad) of the AOA fluctuations, $D$ is the aperture diameter, $\lambda$ represents the optical wavelength, $L$ corresponds to the propagation path length, while $l_0$ and $L_0$ characterize the inner scale and outer scale of atmospheric turbulence, respectively. A wavefront angular shift of $\alpha$ radians at the optical aperture induces a lateral image displacement of $\alpha \times PFOV^{-1}$ pixels, where $PFOV$ is the pixel field of view. Consequently, the single-axis variance $\sigma_{img}^2$ of the image displacement caused by AOA fluctuations can be mathematically expressed as follows:
$\sigma_{img}^2 = \sigma_\alpha^2 \times PFOV^{-2}.$ (2)
This conversion bridges wavefront-level turbulence ($\sigma_\alpha^2$) to pixel-level displacement ($\sigma_{img}^2$), enabling video-based quantification. Note that since the AOA variance is equal along both axes, it is appropriate to specify the image offset variance along each axis, X and Y, represented by $\sigma_{img}^2 = \sigma_X^2 = \sigma_Y^2$. Turbulence disrupts temporal image sampling, creating nonuniform spatial discretization. To quantify this turbulence-induced discretization, we derive the temporal intensity variance $\sigma_I^2$ as follows:
$\sigma_I^2(m,n) = \left[ I_X^2(m,n) + I_Y^2(m,n) \right] \sigma_{img}^2,$ (3)
where $I_X^2(m,n)$ and $I_Y^2(m,n)$ are the squared vertical and horizontal derivatives of the ideal image at point $(m,n)$, respectively. Given that the local temporal intensity variance $\sigma_I^2(m,n)$ and the derivatives $I_X^2(m,n)$ and $I_Y^2(m,n)$ can be calculated from the consecutive image sequence, the explicit formulation of the $C_n^2$ retrieval may be systematically derived as follows:
$C_n^2(m,n) = \dfrac{PFOV^{2} \times D^{1/3}}{2.914 \times L} \times \dfrac{\sigma_I^2(m,n)}{I_X^2(m,n) + I_Y^2(m,n)}.$ (4)
Equation (4) reveals two key insights: (1) the satellite parameters $PFOV$, $D$, and $L$ remain constant; (2) $C_n^2$ depends on the spatiotemporal feature $S = \sigma_I^2(m,n) / \left[ I_X^2(m,n) + I_Y^2(m,n) \right]$. This mathematical framework enables quantitative identification of turbulence strength at arbitrary pixel coordinates across temporal frames. The magnitude of $S$ directly correlates with local turbulence intensity, serving as a probabilistic indicator of turbulence-induced distortion likelihood. Formally, we define this spatiotemporally resolved metric as the turbulence signature (TS). This prior is characterized as physics-informed for three interconnected reasons: (1) the TS originates from the physical parameter $C_n^2$, (2) its estimation formula is derived from physical principles, and (3) it serves as a physically scaled representation of atmospheric effects. As demonstrated in Figure 2, the TS maps extracted from the Jilin-1 and Carbonite-2 satellite videos reveal significant correlations between spatiotemporal degradation patterns and optical turbulence effects. These degradation patterns, which manifest through space-variant blur and geometric distortion, collectively facilitate the spatiotemporal mapping of atmospheric refractive index fluctuations. This empirical evidence motivates us to develop an enhanced restoration framework for turbulence-degraded satellite video by integrating a physically interpretable turbulence signature into the reconstruction process.
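For concreteness, the sketch below computes this per-pixel signature from a short frame window. The function name, the use of the temporal mean frame as a stand-in for the ideal image, and the stabilizing constant are our own illustrative choices rather than the paper's implementation.

```python
# A minimal sketch of the turbulence-signature (TS) computation implied by
# Equation (4): S = sigma_I^2 / (I_X^2 + I_Y^2), evaluated per pixel from a
# short frame window. Names and the eps constant are illustrative assumptions.
import torch

def turbulence_signature(frames: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """frames: (T, H, W) grayscale sequence in [0, 1]. Returns an (H, W) signature map."""
    # Local temporal intensity variance sigma_I^2(m, n) over the window.
    sigma_I2 = frames.var(dim=0, unbiased=False)

    # Squared spatial derivatives of a (pseudo-)ideal image, approximated here by
    # the temporal mean frame; central differences approximate I_X and I_Y.
    mean_frame = frames.mean(dim=0)
    grad_y, grad_x = torch.gradient(mean_frame)
    denom = grad_x ** 2 + grad_y ** 2 + eps

    # Scaled C_n^2 distribution: the constant factor PFOV^2 * D^{1/3} / (2.914 * L)
    # is identical for a homologous video and is therefore omitted.
    return sigma_I2 / denom
```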

2.2. TMTS Network

2.2.1. Overview

Take a continuous sequence of $2N+1$ frames from a satellite video degraded by atmospheric turbulence, denoted as $I = \{ I_{t-N}, \ldots, I_t, \ldots, I_{t+N} \}$, where $I_t$ represents the target frame for reconstruction, while the remaining frames are its adjacent counterparts. The goal of our network is to reconstruct satellite video frames that mitigate turbulence effects, ensuring that the reconstruction results closely approximate $I_{true}$. Figure 3 illustrates the overall structure of the proposed method, which adopts the DATUM recursive network structure [18]. This structure comprises three stages: feature alignment, lucky fusion, and post-processing.
Feature Alignment Stage: For each input frame $I_t$ at time $t$, residual dense blocks (RDBs) perform downsampling operations. These blocks extract three levels of features while concurrently capturing the turbulence signature $S_t$. We propose an alignment module informed by the turbulence signature, TSGA, to align deep features with the preceding hidden state $r_{t-1}$. The CGRFU module effectively models spatiotemporal features of turbulent degradation, incorporating long-range dependencies and recursively updating the hidden reference feature $r_t$.
Lucky Fusion Stage: The aligned features $f_t$ and hidden state $r_{t-1}$ are integrated using a series of RDBs to produce the forward embedding $e_t^{fw}$. This embedding is presumed to remain unaffected by turbulence, allowing for the update of the hidden reference feature $r$. Following the bidirectional recursive process, the novel self-attention module, TSTSA, is introduced for the effective fusion of feature layers. This module processes the forward embedding $e_t^{fw}$, the backward embedding $e_t^{bw}$, and the bidirectional embeddings of adjacent frames, along with fusion features derived from the TS.
Post-Processing Stage: Features obtained from lucky fusion are decoded into turbulence-free satellite video frames via a dual-decoder architecture. The first decoder employs transposed convolution for upsampling and channel attention blocks (CABs) to generate a geometric distortion compensation field. This field compensates for turbulent warping in the intermediate image, and the geometric restoration loss is computed against the ground truth $I_{GT}$. The second decoder utilizes an identical upsampling-and-attention structure to reconstruct the final turbulence-corrected video frame $I_{out}$.
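To make the recurrent dataflow concrete, the following minimal sketch wires the three stages together for a single forward pass. Every module here is a single-convolution placeholder standing in for the encoder, TSGA, CGRFU, TSTSA, and the decoders described above, and the backward recursion is omitted for brevity; none of these names or channel sizes come from the paper's code.

```python
# Schematic sketch of the three-stage TMTS dataflow (alignment -> lucky fusion ->
# post-processing). All sub-networks are placeholders; shapes are illustrative.
import torch
import torch.nn as nn

class TMTSSketch(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.ch = ch
        self.encoder = nn.Conv2d(1, ch, 3, padding=1)           # RDB encoder placeholder
        self.tsga = nn.Conv2d(2 * ch + 1, ch, 3, padding=1)      # TSGA alignment placeholder
        self.cgrfu = nn.Conv2d(2 * ch + 1, ch, 3, padding=1)     # CGRFU update placeholder
        self.fusion = nn.Conv2d(2 * ch, ch, 3, padding=1)        # TSTSA lucky-fusion placeholder
        self.decoder = nn.Conv2d(ch, 1, 3, padding=1)            # dual-decoder placeholder

    def forward(self, frames: torch.Tensor, ts: torch.Tensor) -> torch.Tensor:
        # frames, ts: (T, 1, H, W) degraded frames and their turbulence signatures.
        T, _, H, W = frames.shape
        r = frames.new_zeros(1, self.ch, H, W)                   # hidden reference feature
        embeddings = []
        for t in range(T):                                       # forward recursion
            f = self.encoder(frames[t:t + 1])
            aligned = self.tsga(torch.cat([f, r, ts[t:t + 1]], dim=1))
            r = self.cgrfu(torch.cat([aligned, r, ts[t:t + 1]], dim=1))
            embeddings.append(aligned)
        # Lucky fusion of the target-frame embedding with the reference feature,
        # then decoding into the restored target frame (backward pass omitted).
        mid = T // 2
        fused = self.fusion(torch.cat([embeddings[mid], r], dim=1))
        return self.decoder(fused)
```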

2.2.2. TSGA Module for Feature Alignment

In contrast to natural videos and turbulence-free satellite videos, turbulence-degraded satellite videos exhibit not only moving subjects of varying scales but also significant random geometric distortions induced by turbulence [1,24]. Capturing motion information and compensating for these chaotic geometric distortions is therefore more complex. Conventional deformable convolution-based techniques [25,26] cannot learn such intricate, locally entangled turbulent geometric-distortion motion because of their restricted receptive field. Consequently, we introduce a TSGA module that utilizes the turbulence signature as a prior to guide the deep feature alignment of the image. The turbulence signature encodes the spatial location and relative intensity of the turbulence-induced geometric distortions in each frame of the satellite video, as implicitly described by the input frame sequence. This prior can incorporate global spatiotemporal information for distortion compensation in each individual frame, rather than relying solely on local temporal information between adjacent frames, such as optical flow.
The objective of our TSGA module is to align the feature $f_t^3$ with prospective reference features so as to better exploit the spatiotemporal information carried by the turbulence signature. The architecture of the TSGA module is shown in Figure 4. We use SpyNet [27] to estimate optical flow and derive a coarse deformation field, with which $f_t^3$ is warped for initial feature alignment. The turbulence signature is fed into a convolutional layer and activated by a sigmoid function. This activated signature is then concatenated with the aligned features, hidden reference features, and deformation field, while displacement offsets are predicted by a residual dense block. The guided deformable attention (GDA) module $f_{GDA}$ applies several iterations of multi-group deformable attention with these offsets to align the input features $f_t^3$ with the hidden features $r_t$. This procedure is expressed as follows:
$\left( \hat{f}_t^{\,3}, O_t^{f \to r} \right) = f_{GDA}\!\left( \left[ \mathcal{W}\!\left( f_t^3, O_{t-1}^{f \to r} \right), \sigma\!\left( \mathrm{Conv}(S_t) \right), O_{t-1}^{f \to r}, r_{t-1} \right] \right),$ (5)
Here, $[\cdot]$ denotes the concatenation operation, $\mathcal{W}$ denotes the warping operation, and $\sigma$ is the sigmoid function. $S_t$ represents the turbulence signature, $\hat{f}_t^{\,3}$ denotes the aligned feature, and $O^{f \to r}$ signifies the updated deformation field from feature to reference.
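The following simplified sketch illustrates the signature-guided alignment idea behind Equation (5) using plain offset refinement and bilinear warping. It does not reproduce the full multi-group guided deformable attention; the class, function, and channel choices are illustrative assumptions.

```python
# Simplified illustration of signature-guided alignment: the sigmoid-activated
# turbulence signature is concatenated with the pre-warped feature, the previous
# offset field, and the hidden reference, and a small head refines the per-pixel
# offset used to resample f_t^3. Stands in for guided deformable attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(x: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample x (B, C, H, W) at positions displaced by flow (B, 2, H, W), in pixels."""
    B, _, H, W = x.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(x)        # (2, H, W), channel 0 = x, 1 = y
    coords = base.unsqueeze(0) + flow                        # absolute sampling positions
    # Normalise to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                     # (B, H, W, 2)
    return F.grid_sample(x, grid, align_corners=True)

class SimpleTSGA(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.offset_head = nn.Conv2d(2 * ch + 3, 2, 3, padding=1)   # -> (dx, dy)

    def forward(self, feat, ref, prev_offset, ts):
        # feat, ref: (B, C, H, W); prev_offset: (B, 2, H, W); ts: (B, 1, H, W)
        warped = flow_warp(feat, prev_offset)                 # coarse, flow-based alignment
        guide = torch.sigmoid(ts)                             # activated turbulence signature
        offset = prev_offset + self.offset_head(
            torch.cat([warped, guide, prev_offset, ref], dim=1))
        return flow_warp(feat, offset), offset                # aligned feature, updated field
```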

2.2.3. CGRFU Module for Reference Feature Update

Feature alignment requires the participation of reference features, as previously discussed [28,29]. However, the presence of atmospheric turbulence complicates the construction of reference features from local temporal characteristics due to the randomness of geometric deformations in satellite videos. To address this, we introduce the CGRFU module, as illustrated in Figure 5. This module effectively captures long-range temporal dependencies and constructs more precise reference features by considering the global temporal information conveyed by the TS and the Gaussian-like, multi-frame jitter characteristics [30] of turbulence-degraded satellite videos. The core idea is to integrate convolution operations and the turbulence signature into the Gated Recurrent Unit (GRU). This allows for the concurrent collection of spatial characteristics and the temporal dependence of reference features. By carrying hidden information from the preceding moment to the present, together with a gated mechanism, the module adaptively acquires the aspects that are significant for constructing reference features.
Structurally, the CGRFU module closely mirrors the GRU, incorporating update gates, reset gates, and hidden states. The pivotal distinction lies in the module's tripartite input: the turbulence signature, the hidden state from the preceding time step, and the forward embedding. The turbulence signature, which encapsulates the spatial and temporal characteristics of atmospheric turbulence, is initially processed through a convolutional layer followed by a sigmoid activation function. This processed signature is then concatenated with the forward embedding residual, forming an augmented feature tensor. Subsequently, this tensor is partitioned into reset and update gates via a gating mechanism. The reset gate and update gate regulate the extent to which the current hidden state retains or discards information from the preceding hidden state. This adaptive gating process enables the CGRFU module to dynamically capture long-range temporal dependencies and construct more precise reference features, thereby enhancing the robustness of feature alignment in the presence of turbulent distortions. The update gate $u_t$ and reset gate $g_t$ determine the extent to which the hidden state $r$ from the preceding time step is retained or discarded at the current time step:
$(u_t, g_t) = P\!\left( \sigma\!\left( \mathrm{Conv}\left[ Z, r \right] \right) \right),$ (6)
where $P$ denotes the tensor partitioning operation, $r$ indicates the hidden state from the previous time step, and $Z$ refers to the residual connection between the sigmoid-activated turbulence signature and the forward embedding:
$Z = e \odot \sigma(S) + e,$ (7)
where $\odot$ denotes element-wise multiplication. To enhance the capture of temporal dependencies within the sequence, the turbulence signature $S$, the forward embedding $e$, and the prior hidden state $r$ modulated by the reset gate $g_t$ are integrated to produce a new candidate hidden state. This integration is achieved through nonlinear activation using the $\tanh$ function:
$\tilde{H}_t = \tanh\!\left( \left[ Z, r \odot g_t \right] \right).$ (8)
Subsequently, the candidate hidden state $\tilde{H}_t$ is weighted and fused with the hidden state from the previous time step via the update gate $u_t$, resulting in the hidden state $H_t$ for the current time step:
$H_t = (1 - u_t) \odot \tilde{H}_t + u_t \odot r.$ (9)
The update gate $u_t$ balances the influence of the prior hidden state and the candidate hidden state on the current hidden state. The CGRFU module performs recursive updates to the hidden feature $H_t$, which serves as the reference for correcting turbulent geometric distortions.
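A minimal ConvGRU-style cell, sketched below under our own naming and channel assumptions, shows how Equations (6)–(9) fit together: the sigmoid-activated signature modulates the forward embedding, a shared convolution produces the two gates, and the hidden reference feature is updated recursively.

```python
# Minimal ConvGRU-style sketch of the CGRFU update; names and channels are assumptions.
import torch
import torch.nn as nn

class SimpleCGRFU(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.ts_conv = nn.Conv2d(1, ch, 3, padding=1)             # lifts the 1-channel TS
        self.gates = nn.Conv2d(2 * ch, 2 * ch, 3, padding=1)       # -> (update, reset) gates
        self.candidate = nn.Conv2d(2 * ch, ch, 3, padding=1)       # candidate hidden state

    def forward(self, e, r, ts):
        # e: forward embedding (B, C, H, W); r: previous hidden reference; ts: (B, 1, H, W)
        z = e * torch.sigmoid(self.ts_conv(ts)) + e                # Eq. (7): signature-modulated residual
        u, g = torch.sigmoid(self.gates(torch.cat([z, r], dim=1))).chunk(2, dim=1)  # Eq. (6)
        h_cand = torch.tanh(self.candidate(torch.cat([z, r * g], dim=1)))           # Eq. (8)
        return (1 - u) * h_cand + u * r                            # Eq. (9): recursive update
```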

2.2.4. TSTSA for Lucky Fusion

Although the deep features of adjacent consecutive frames have been aligned, it remains crucial to integrate their complementary information. To address this, we design the TSTSA module, as illustrated in Figure 6. This module processes the bidirectional embedding and TS from the current frame and its adjacent frames as input. It generates a feature tensor that encapsulates the complementary characteristics of consecutive frames. Bidirectional embedding ensures uniform restoration quality across various frames. The participation of TS aims to enhance the lucky fusion process by directing focus towards spatial regions with more pronounced turbulence signatures. This approach facilitates the model’s ability to learn key features more efficiently and enhances its generalization capabilities. These insights stem from a fundamental analysis indicating that the blurriness associated with turbulence is typically observed in areas characterized by higher turbulence signature values. While aligned bidirectional embedding features present challenges in assessing the extent of blur distortion, TS offers a practical and empirical benchmark.
The TSTSA module begins by concatenating the channels from multiple frames and then reduces the channel dimension using 1 × 1 convolution. Separable convolution is employed to construct spatially varying queries, keys, and values across both temporal and channel dimensions. The turbulence signature, generated via convolution and a sigmoid function, is used to compute the attention score through dot multiplication with the query-key product. The attention score is normalized using the Softmax function to produce attention weights. These weights are applied to aggregate the value vectors, forming the attention map.
$\mathrm{TSTSA}(Q, K, V, S_a) = \mathrm{SoftMax}\!\left( Q K^{\top} / C \odot S_a \right) V$ (10)
Here, $\mathrm{SoftMax}$ denotes the softmax operation applied along the row direction. $Q$, $K$, and $V$ represent the query, key, and value matrices derived from the concatenated multi-temporal channel features; these features pass through linear transformations via a GELU activation layer and a convolutional layer, respectively. $C$ represents the scaling factor, and $S_a$ denotes the turbulence signature after sigmoid activation.
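The sketch below shows the core of Equation (10) on flattened spatial tokens: the sigmoid-activated signature rescales the query-key scores before the softmax. The separable convolutions, multi-frame channel concatenation, and GELU projections of the full module are omitted, and the function signature is an illustrative assumption.

```python
# Illustrative signature-modulated attention in the spirit of Equation (10).
import torch

def tstsa_attention(q, k, v, ts, scale):
    # q, k, v: (B, N, C) token features; ts: (B, N, 1) sigmoid-activated signature; scale: scalar C.
    scores = q @ k.transpose(-2, -1) / scale       # (B, N, N) query-key product
    scores = scores * ts                           # modulate rows by the turbulence signature
    attn = torch.softmax(scores, dim=-1)           # row-wise normalisation
    return attn @ v                                # aggregate the value vectors
```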

2.2.5. Loss Function

A composite loss function is developed that integrates a pixel-level fidelity loss $\mathrm{Loss}_G$ and a geometric distortion loss $\mathrm{Loss}_T$, expressed by the following formula:
$\mathrm{Loss} = w_1 \mathrm{Loss}_G + w_2 \mathrm{Loss}_T,$ (11)
where we empirically set $w_1 = 0.8$ and $w_2 = 0.2$. $\mathrm{Loss}_G$ is engineered to achieve high fidelity at the pixel level. To quantify the pixel-level differences between the model output and the reference image, we utilize the Charbonnier loss function [31]. This function provides a balance between robustness to outliers and the preservation of high-frequency detail information in the reconstruction results:
$\mathrm{Loss}_G = \sqrt{ \left( I_G - I_R \right)^2 + \varepsilon^2 },$ (12)
where $I_R$ denotes the restored image, $I_G$ signifies the target reference image, and $\varepsilon$ is a small constant. $\mathrm{Loss}_T$ is implemented to eliminate geometric distortions induced by turbulence during reconstruction. One of the dual decoders is utilized to estimate the inverse tilt field; the image containing only tilt is then warped and its difference from the reference image is assessed:
$\mathrm{Loss}_T = \sqrt{ \left( I_G - \mathcal{W}\!\left( I_{tilt}, T \right) \right)^2 + \varepsilon^2 },$ (13)
where $T$ represents the inverse tilt field estimated by the decoder, and $I_{tilt}$ denotes the image that contains only tilt without blurring, which can be generated when synthesizing turbulence-degraded satellite video sequences.
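A compact sketch of the composite objective of Equations (11)–(13) follows, assuming per-pixel averaging and reusing the bilinear `flow_warp` helper from the alignment sketch above; the epsilon value and function names are our own illustrative choices.

```python
# Composite loss sketch: Charbonnier fidelity term plus a geometric term computed
# after warping the tilt-only image with the estimated inverse tilt field.
import torch

def charbonnier(x, y, eps=1e-3):
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def tmts_loss(restored, gt, tilt_only, inv_tilt_field, w1=0.8, w2=0.2):
    loss_g = charbonnier(restored, gt)                              # Eq. (12): pixel fidelity
    loss_t = charbonnier(flow_warp(tilt_only, inv_tilt_field), gt)  # Eq. (13): geometric term
    return w1 * loss_g + w2 * loss_t                                # Eq. (11): weighted sum
```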

2.3. The Implementation of TMTS

2.3.1. Satellite Video Data Source

Our turbulence mitigation framework was evaluated using video data from five prominent video satellite platforms: Jilin-1 (primary data source), Carbonite-2, UrtheCast, Skysat-1, and Luojia3-01. The training dataset comprised 189 non-overlapping 512 × 512 patches curated from Jilin-1 sequences, following the methodology of [1]. For benchmark evaluation, we stratified the test data across the platforms: five scenes from two distinct Jilin-1 videos, four scenes each from Carbonite-2 and UrtheCast, three from Skysat-1, and two from Luojia3-01. Each 100-frame temporal sequence preserves the spatiotemporal turbulence characteristics. The final dataset contains 189 training clips and 18 test scenes across five satellites, with detailed specifications given in Table 1.

2.3.2. Paired Turbulence Data Synthesis

Our framework employs an autoregressive model [32] as an efficient atmospheric turbulence simulator, synthesizing paired turbulence data through Taylor’s frozen flow hypothesis and temporal phase screen theory. Given that atmospheric turbulence within a 20 km altitude predominantly affects Earth observation satellites, we model light propagation from ground targets as spherical wavefronts traversing conical turbulence volumes prior to spaceborne sensor acquisition.
As shown in Figure 7, the simulation adopts a multi-layer turbulence integration approach [33], stratifying atmospheric disturbances into three vertically distributed phase screens (1 km, 4 km, and 20 km altitudes) with respective weighting coefficients of 0.6, 0.3, and 0.1. Wind-driven temporal dynamics are controlled via velocity parameters (≤5 m/s) governing turbulent structure advection, thereby generating spatiotemporally correlated distortions. Satellite imaging geometry is constrained to 400–600 km orbital ranges, with atmospheric coherence lengths calibrated between 3 and 10 cm using Hufnagel–Valley profile models [34]. Post-simulation processing introduces Gaussian noise ($\sigma = 0.015$–$0.025$) to emulate sensor noise characteristics.
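The parameter ranges above can be gathered into a single configuration for reproducibility; the dictionary and sampling helper below are our own illustrative summary and do not reproduce the autoregressive phase-screen simulator of [32].

```python
# Illustrative summary of the simulation parameter ranges reported in the text.
import random

SIM_CONFIG = {
    "phase_screen_altitudes_km": [1, 4, 20],
    "layer_weights": [0.6, 0.3, 0.1],
    "max_wind_speed_m_s": 5.0,
    "orbital_range_km": (400, 600),
    "coherence_length_cm": (3, 10),        # Hufnagel-Valley-calibrated range
    "gaussian_noise_sigma": (0.015, 0.025),
}

def sample_simulation_params(cfg=SIM_CONFIG):
    """Draw one random parameter set within the reported ranges (uniform sampling assumed)."""
    return {
        "wind_speed": random.uniform(0.0, cfg["max_wind_speed_m_s"]),
        "path_length_km": random.uniform(*cfg["orbital_range_km"]),
        "coherence_length_cm": random.uniform(*cfg["coherence_length_cm"]),
        "noise_sigma": random.uniform(*cfg["gaussian_noise_sigma"]),
    }
```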

2.3.3. Metrics

The quantitative evaluation of atmospheric turbulence mitigation efficacy in our framework employs both full-reference and no-reference metrics. For synthetic dataset validation, established image quality benchmarks including the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are utilized to quantify reconstruction fidelity relative to ground truth references. In real-world applications where reference images are unavailable, three no-reference metrics were employed: (1) the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [35] analyzes statistical features of locally normalized luminance coefficients to detect distortions in natural scene statistics; (2) Contrast Enhancement-based Image Quality (CEIQ) [36] evaluates degradation by quantifying pixel intensity distribution changes during contrast enhancement processes; (3) the Natural Image Quality Evaluator (NIQE) [37] measures the distance between statistical features of restored images and a multivariate Gaussian model derived from pristine natural scenes. Notably, lower BRISQUE and NIQE scores indicate superior perceptual quality aligned with human visual assessment criteria.

2.3.4. Implementation Details

The computational framework was implemented in PyTorch 2.2.0 on a workstation with an AMD EPYC 9754 CPU and an NVIDIA RTX 4090 GPU. Our architecture employs a three-layer convolutional encoder (16 kernels per layer) preceding the TSGA module, which incorporates 8 parallel deformable attention groups. The training configuration utilized 240 × 240 pixel inputs with a batch size of 2 over 1000 epochs, optimized using the Adam optimizer [38] ($\beta_1 = 0.9$, $\beta_2 = 0.99$) and cosine annealing scheduling ($2 \times 10^{-4} \to 1 \times 10^{-6}$). All network parameters were jointly optimized through backpropagation.
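For reference, the optimizer and scheduler configuration described above maps onto standard PyTorch calls as sketched below; the placeholder model and the bare training loop are illustrative only.

```python
# Training-setup sketch matching the reported hyperparameters (Adam with betas
# (0.9, 0.99), cosine annealing from 2e-4 to 1e-6 over 1000 epochs).
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)    # placeholder for the TMTS network
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-6)

for epoch in range(1000):
    # ... iterate over 240x240 crops with batch size 2, compute the composite loss,
    #     and backpropagate before stepping ...
    optimizer.step()
    scheduler.step()
```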

3. Results and Discussion

3.1. Performance on Synthetic Datasets

3.1.1. Quantitative Evaluation

We compared the proposed TMTS with six state-of-the-art models specifically designed for turbulence mitigation tasks on synthetic data: TSRWGAN [16], NDIR [39], TMT [40], DeturNet [15], TurbNet [14], and DATUM [18]. Additionally, four state-of-the-art generic video restoration models were included in the comparison: VRT [41], RVRT [29], X-Restormer [42], and Shift-Net [43]. To enable impartial evaluation, all models were retrained on our synthetic datasets. PSNR and SSIM were adopted as objective metrics to evaluate the fidelity of the restoration results. It should be noted that both PSNR and SSIM calculations were performed in the YCbCr color space to align with human visual perception characteristics.
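As an illustration of this evaluation protocol, the snippet below converts RGB outputs to the BT.601 luma (Y) channel of YCbCr and computes PSNR there; SSIM would be computed on the same channel with a standard implementation. The helper names and the [0, 1] intensity convention are our own assumptions.

```python
# Sketch of YCbCr-based fidelity evaluation (luma channel only).
import numpy as np

def rgb_to_y(img):
    """img: float array in [0, 1], shape (H, W, 3). Returns the BT.601 luma channel."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.257 * r + 0.504 * g + 0.098 * b + 16.0 / 255.0

def psnr_y(restored, reference):
    mse = np.mean((rgb_to_y(restored) - rgb_to_y(reference)) ** 2)
    return 10.0 * np.log10(1.0 / max(mse, 1e-12))   # guard against a zero MSE
```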
Table 1, Table 2, Table 3 and Table 4 provide quantitative comparisons in terms of PSNR and SSIM across five satellite video datasets, demonstrating the performance of benchmark models in 18 distinct operational scenarios. Our empirical observations reveal that generic video restoration models yield only marginal improvements compared with specialized turbulence mitigation architectures. This performance discrepancy primarily originates from the inherent spatiotemporal distortions characteristic of turbulence-degraded videos—particularly atmospheric-induced non-uniform geometric warping and temporally varying blur patterns—which fundamentally challenge the generalizability of conventional restoration paradigms to satellite-based turbulence mitigation tasks.
Notably, the proposed TMTS framework demonstrates superior efficacy over competing methods, achieving significant performance gains across all satellite video benchmarks. This consistent superiority suggests that TMTS maintains robust reconstruction capabilities under varying imaging configurations and diverse remote sensing scenarios. Specifically, compared with the state-of-the-art DATUM model, our TMTS exhibits a 0.27 dB average PSNR improvement. This enhancement quantitatively validates our method’s capacity to exploit turbulence-specific spatiotemporal signatures for deriving relative atmospheric turbulence intensity metrics, thereby enabling physics-aware restoration of remote sensing video sequences.

3.1.2. Qualitative Results

The subjective restoration results for seven turbulence-degraded synthetic satellite video sequences are illustrated in Figure 8, Figure 9 and Figure 10. Our method demonstrates superior visual clarity and structural fidelity, with restored geometric features closely aligned with the ground truth while effectively mitigating turbulence-induced distortions. In the magnified patches, our model achieves enhanced reconstruction of fine details, including alphanumeric characters, architectural components, and regional boundaries. Specifically, in Scene 2 of Jilin-1, the runway numbers reconstructed by VRT, RVRT, and X-Restormer exhibit excessive blurring, rendering them indistinguishable. This highlights the limitations of generic video restoration models in preserving temporal details under turbulence degradation. In Scene 7 of Carbonite-2, TurbNet, TSRWGAN, and TMT retain spatially varying blur artifacts in the terminal roof region, particularly distorting the circular truss structures. In Scene 11 of UrtheCast, while TMTS successfully maintains the shape and edges of circular markers on buildings, competing methods lacking iterative reference frame reconstruction (e.g., VRT, RVRT, X-Restormer) fail to recover severely warped structural edges. TurbNet, TSRWGAN, and TMT further introduce edge blurring and artifacts. Though DATUM achieves relative accuracy, its results remain less sharp compared to TMTS.
As shown in Figure 8, TMTS reconstructs crisp linear features of buildings and partially restores spiral textures on surfaces. All competing methods except DATUM and TMTS yielded distorted edges despite recovering coarse structures. Figure 9 presents magnified details from two Luojia3-01 satellite scenes, where TMTS demonstrates superior high-frequency detail restoration compared to other approaches. The visual comparisons confirm that our physics-based module, which incorporates temporal averaging information and adaptively responds to localized turbulence-induced deformations, achieves structurally faithful reference frame construction. This mechanism enables targeted compensation of high-frequency features into degraded frames through adaptive fusion, thereby achieving significant perceptual improvements over state-of-the-art baselines.

3.2. Performance on Real Data

To validate the effectiveness of the proposed TMTS in addressing real-world degradation in satellite video, experiments were conducted on the SatSOT dataset [44]. Notably, this dataset contains no simulated degradations, with testing exclusively performed on image sequences exhibiting visually confirmed significant atmospheric turbulence degradation.
Table 5 presents the turbulence mitigation performance of various methods in terms of BRISQUE, CEIQ, and NIQE metrics. The results demonstrate that TMTS achieves the best performance in BRISQUE and NIQE while ranking second in CEIQ, indicating its strong robustness and competitiveness in addressing turbulence mitigation challenges for real degraded remote sensing images. Significantly, the optimal BRISQUE and NIQE scores highlight TMTS’s superiority in restoring human perception-aligned authentic results. Furthermore, the high CEIQ score confirms the capability of our TMTS to effectively reconstruct high-frequency textures in complex real-world scenarios affected by turbulence degradation.
Figure 11 provides a visual comparison of turbulence mitigation in real-world remote sensing videos. Observations reveal that our TMTS effectively corrects localized geometric distortions and generates visually appealing textures with rich high-frequency details. Specifically, the red and yellow regions of interest (ROIs) illustrate that TMTS produces sharper and more distinct edges, thereby delivering visually superior outcomes. These findings validate the efficacy of TMTS in addressing turbulence mitigation tasks for satellite videos, demonstrating its practical utility in real-world applications.
To systematically evaluate the temporal coherence and consistency of turbulence mitigation methods, a row of pixels was recorded and temporally stacked to generate corresponding y–t temporal profiles. As illustrated in Figure 12, our TMTS produces more precise and smoother long-term temporal details, highlighting the effectiveness of the turbulence signature in suppressing turbulence-induced temporal jitter.
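For clarity, such a profile can be produced as sketched below: a fixed row is sampled from every frame and the rows are stacked along the time axis, so residual jitter appears as high-frequency ripples. The helper is illustrative, not the paper's code.

```python
# Sketch of the y-t temporal-profile visualisation used for temporal-consistency checks.
import numpy as np

def temporal_profile(frames: np.ndarray, row: int) -> np.ndarray:
    """frames: (T, H, W) grayscale video. Returns a (T, W) y-t profile image."""
    return np.stack([frame[row, :] for frame in frames], axis=0)
```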

3.3. Ablation Studies

Our ablation framework systematically evaluates critical architectural determinants that introduce effective inductive biases into the model architecture, including turbulence signature modeling, multi-stage architectural integration of turbulence signature operators, temporal sequence length optimization, and window-based feature aggregation mechanisms. The experimental protocol further extends to encompass quantitative assessments of computational budget and operational efficiency.

3.3.1. Effect of Turbulence Signature

To advance computational efficiency and improve interpretability in deep learning frameworks for satellite video turbulence mitigation, we propose turbulence signature encoding to explicitly model multi-scale distortion characteristics inherent in atmospheric turbulence degradation. Validation experiments employing benchmark evaluations under identical training protocols demonstrate that architectures integrating turbulence signature operators outperform their ablated counterparts by statistically significant margins. The qualitative comparisons in Figure 13 visually confirm the signature’s capacity to spatially resolve turbulence-induced artifacts, effectively distinguishing between space-variant blur patterns and geometric deformation regions. This discriminative capability enables deep networks to adaptively prioritize distortion correction in critical areas, thus achieving enhanced turbulence suppression fidelity while maintaining computational tractability.

3.3.2. Effect of TSGA, CGRFU, and TSTSA

This study systematically validates the functional efficacy of individual architectural components within the TMTS framework through a multi-stage ablation protocol. The foundational architecture employs a TSTSA module that strategically aggregates spatiotemporal embeddings from adjacent frames via window-based feature fusion. All comparative analyses were conducted under strictly controlled experimental conditions, maintaining identical training configurations and evaluation metrics. The component-level ablation investigation focuses on three core modules: TSGA, CGRFU, and TSTSA. Through component-wise deactivation experiments, we progressively removed the turbulence signature operators (denoted as w/o configurations) from each module to quantify their performance contributions. The experimental results (Table 6) quantitatively establish the differential contributions of the TSGA, CGRFU, and TSTSA modules to turbulence suppression efficacy, revealing that integrated operation of all components achieves optimal distortion rectification through complementary feature refinement mechanisms.
The TSGA module addresses the registration challenges of turbulence-induced non-rigid geometric distortions by leveraging turbulence signature encoding as a physics-aware prior for deep feature alignment. As quantified in Table 6, integration of TSGA enhances reconstruction fidelity compared to baseline architectures, achieving PSNR/SSIM improvements of +1.64 dB and +0.0318, respectively. This performance gap highlights the criticality of explicit spatiotemporal distortion modeling for atmospheric turbulence degradation.
The CGRFU module introduces convolutional operations and turbulence signature encoding into gated recurrent architectures, enabling synergistic integration of spatial characteristics and temporal dependencies in reference feature refinement. This novel architectural innovation establishes turbulence-resilient reference features through adaptive spatiotemporal filtering, effectively suppressing atmospheric distortion propagation across sequential frames. Experimental comparisons demonstrate that incorporating CGRFU yields measurable performance enhancements, with quantitative metrics improving by +0.69 dB PSNR and +0.0095 SSIM. This empirically demonstrates the critical role of dynamic reference feature updating in achieving robust temporal alignment for satellite video sequences under turbulent conditions.
Notably, when the TS operator is removed from the TSGA module, the module yields only minimal performance improvements, which highlights the critical importance of the proposed encoding mechanism for effective turbulence mitigation. This observed performance discrepancy primarily stems from the distinctive capability of turbulence signatures to resolve distortion patterns specifically induced by atmospheric turbulence through the integration of refractive index-aware spatial constraints. The framework achieves enhanced distortion characterization by embedding physics-derived calculations of the atmospheric refractive-index structure parameter ($C_n^2$), which facilitates precise identification of turbulence-induced geometric deformation patterns. Specifically, this approach enables comprehensive spatial mapping of distortion concentration regions and quantitative analysis of localized intensity variations, consequently reducing feature misregistration errors during geometric transformation processes through optimized deformation field estimation.

3.3.3. Influence of Input Frame Number

In turbulence mitigation tasks, the frame quantity employed during both the training and inference phases critically determines reconstruction outcomes. Given that turbulence-induced degradation originates from zero-mean random phase distortions, enhanced frame perception enables the network to theoretically achieve more precise estimation of turbulence-free states through spatiotemporal analysis. This principle remains valid for quasi-static remote sensing scenarios characterized by limited moving targets, where pixel-level turbulence statistics demonstrate enhanced temporal tractability for systematic tracking and analytical processing.
Three models were developed using 10-, 20-, and 30-frame input sequences, with their comparative performance metrics across varying inference-phase frame conditions systematically analyzed in Figure 14. Experimental results demonstrate a statistically significant positive correlation between turbulence mitigation efficacy and input sequence length under temporal acquisition constraints inherent to remote sensing video systems. Specifically, incremental increases in frame quantity generate performance improvements exceeding 1 dB in objective metrics. These findings confirm that effective turbulence mitigation in remote sensing video processing requires temporal integration of scene information across prolonged frame sequences, a computational principle directly analogous to the multi-frame fusion strategies used in remote sensing video super-resolution tasks.

3.3.4. Model Efficiency and Computation Budget

To rigorously evaluate model efficiency, we conducted a comprehensive analysis of the correlations among FLOPs, parameter counts, and PSNR metrics, as visualized in Figure 15a. The reported PSNR values represent averages calculated across 18 distinct scenes from 5 satellite video sequences. Our model demonstrates superior computational efficiency by occupying the optimal upper-left quadrant in the parametric space analysis, indicating enhanced restoration performance (higher PSNR values on the y-axis) coupled with reduced computational complexity (lower FLOPs on the x-axis) and compact model size.
Further computational analysis, presented in Figure 15b, compares inference time budgets across competing architectures under standardized testing conditions using a single NVIDIA 4090 GPU. Our framework achieves state-of-the-art processing efficiency with a turbulence mitigation latency of approximately 0.19 seconds per image. While this performance currently falls short of real-time processing requirements, it significantly outperforms existing benchmark methods in terms of inference speed while maintaining competitive restoration quality.
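The latency figures reported here could be reproduced with a standard GPU timing routine such as the sketch below, which uses CUDA events and warm-up iterations; the model and input clip are placeholders, and a CUDA device is assumed.

```python
# Sketch of per-forward-pass latency measurement on a single GPU.
import torch

def measure_latency(model, clip, n_warmup=5, n_runs=20):
    model.eval().cuda()
    clip = clip.cuda()
    with torch.no_grad():
        for _ in range(n_warmup):
            model(clip)                                   # warm-up to stabilise clocks/caches
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(n_runs):
            model(clip)
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / n_runs / 1000.0      # seconds per forward pass
```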

3.4. Limitations and Future Works

While the proposed TMTS model demonstrates superior performance across multiple aspects of turbulence mitigation in remote sensing video processing, several limitations warrant further investigation:
(1)
Constructing Ground-Truth Turbulence Datasets
The staring imaging technology employed by remote sensing satellites enables sustained observation of designated target areas, capturing dynamic variations and generating continuous image sequences. Post-acquisition orbital adjustments facilitate revisit capabilities for re-imaging identical geographical regions, creating opportunities to acquire temporally aligned video pairs containing turbulence-degraded and turbulence-free observations of the same scenes. Such datasets provide objective quantitative benchmarks for evaluating turbulence mitigation algorithms, enabling precise modeling of degradation–restoration mappings under supervised learning frameworks. This approach effectively constrains model optimization trajectories while enhancing the physical consistency and visual fidelity of restoration outcomes.
(2)
Exploring TS Potential
The turbulence signature mechanism, inspired by $C_n^2$ estimation methodologies, leverages gradient features and global variance statistics from the current frames. This approach demonstrates operational validity in staring imaging scenarios where static backgrounds dominate remote sensing observations. However, the global per-pixel variance computation exhibits inherent limitations when handling multi-scale moving targets and satellite attitude variations. For example, in agile imaging modes where video satellites track fast-moving aircraft at sub-meter spatial resolutions, dynamic objects and orbit coverage changes introduce motion artifacts distinct from turbulent distortion. Consequently, the turbulence signatures in these regions become entangled with coupled multi-dimensional disturbances. The current methodology insufficiently addresses the interplay between moving targets, attitude dynamics, and turbulence signatures, potentially compromising reconstruction fidelity. To address these constraints, future investigations should prioritize dedicated feature extraction protocols for moving targets, integrating average optical flow estimation with unsupervised motion segmentation [45] to decouple dynamic and static scene components prior to turbulence correction. In parallel, explicit mathematical relationships should be established between turbulence signatures and the phase distortion mechanisms inherent to atmospheric turbulence, so as to enhance the physical interpretability of restoration frameworks.

4. Conclusions

This study proposes a physics-based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) for satellite video processing, designed to overcome critical limitations inherent in existing purely deep learning-based approaches. Drawing inspiration from image sequence-derived $C_n^2$ estimation methodologies, we develop a turbulence signature extraction mechanism that directly derives spatiotemporal distortion priors from raw video gradients and localized temporal intensity variance, thereby eliminating external dataset dependencies for physics-guided networks. The turbulence signature is systematically integrated into three computational modules under a lucky imaging paradigm: the TSGA module employs signature-driven priors to achieve robust geometric compensation through deformation-aware alignment of deep features; the CGRFU module leverages turbulence signatures carrying global temporal statistics to model long-range dependencies of quasi-Gaussian jitter patterns across multi-frame satellite videos, enabling precise reference feature construction; and the TSTSA module enhances inter-frame interactions by adaptively fusing complementary features in weakly distorted regions through signature-informed attention mechanisms.
To evaluate TMTS's efficacy, comparative experiments were conducted on five satellite video datasets (Jilin-1, Carbonite-2, UrtheCast, Skysat-1, and Luojia3-01), with turbulence-degraded sequences synthesized using temporal phase screens and autoregressive models. Our model demonstrated superior performance against six specialized turbulence mitigation models and four general video restoration frameworks, achieving average improvements of 0.27 dB (PSNR) and 0.0015 (SSIM) on synthetic data while reducing the no-reference metrics NIQE and BRISQUE by 4.05% and 4.46%, respectively, on real-world data. Subjective evaluations and temporal profile analyses confirm TMTS's generalization capability in complex remote sensing scenarios. Systematic ablation studies validate the necessity of physics-informed inductive biases, particularly the turbulence signature's role in enhancing spatiotemporal distortion characterization.
Compared with existing deep learning methods, our approach uniquely integrates physics-informed turbulence priors while deploying a domain-specific architecture for satellite video restoration. Against these baselines, this physics-domain synergy delivers interpretable mechanisms alongside superior cross-platform performance. The framework establishes a critical bridge between turbulence physics and deep learning, demonstrating that physics-driven inductive biases significantly improve model generalizability.

Author Contributions

Conceptualization, J.Y. and T.S.; methodology, J.Y. and G.Z.; software, J.Y. and X.Z.; validation, J.Y., J.H. and X.Z.; formal analysis, J.Y. and X.Z.; investigation, J.Y. and T.S.; resources, J.Y. and X.W.; data curation, J.Y., T.S., G.Z. and X.Z.; writing—original draft preparation, J.Y.; writing—review and editing, J.Y. and T.S.; visualization, J.Y., X.Z. and T.S.; supervision, J.Y.; project administration, J.Y.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Project of China (No. 2020YFF0304902) and Hubei Key Research and Development (No. 2021BAA201).

Data Availability Statement

Our dataset is available at https://github.com/whuluojia (accessed on 1 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Xiao, Y.; Yuan, Q.; Jiang, K.; Jin, X.; He, J.; Zhang, L.; Lin, C.W. Local-Global Temporal Difference Learning for Satellite Video Super-Resolution. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 2789–2802. [Google Scholar] [CrossRef]
  2. Zhao, B.; Han, P.; Li, X. Vehicle Perception From Satellite. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2545–2554. [Google Scholar] [CrossRef] [PubMed]
  3. Han, W.; Chen, J.; Wang, L.; Feng, R.; Li, F.; Wu, L.; Tian, T.; Yan, J. Methods for Small, Weak Object Detection in Optical High-Resolution Remote Sensing Images: A survey of advances and challenges. IEEE Geosci. Remote Sens. Mag. 2021, 9, 8–34. [Google Scholar] [CrossRef]
  4. Zhang, Z.; Wang, C.; Song, J.; Xu, Y. Object Tracking Based on Satellite Videos: A Literature Review. Remote Sens. 2022, 14, 3674. [Google Scholar] [CrossRef]
  5. Rigaut, F.; Neichel, B. Multiconjugate Adaptive Optics for Astronomy. Annu. Rev. Astron. Astrophys. 2018, 56, 277–314. [Google Scholar] [CrossRef]
  6. Liang, J.; Williams, D.; Miller, D. Supernormal vision and high-resolution retinal imaging through adaptive optics. J. Opt. Soc. Am. A 1997, 14, 2884–2892. [Google Scholar] [CrossRef]
  7. Law, N.; Mackay, C.; Baldwin, J. Lucky imaging: High angular resolution imaging in the visible from the ground. Astron. Astrophys. 2006, 446, 739–745. [Google Scholar] [CrossRef]
  8. Joshi, N.; Cohen, M. Seeing Mt. Rainier: Lucky Imaging for Multi-Image Denoising, Sharpening, and Haze Removal. In Proceedings of the 2010 IEEE International Conference on Computational Photography (ICCP 2010), Cambridge, MA, USA, 29–30 March 2010; IEEE: Piscataway, NJ, USA, 2010; p. 8. [Google Scholar]
  9. Anantrasirichai, N.; Achim, A.; Kingsbury, N.G.; Bull, D.R. Atmospheric Turbulence Mitigation Using Complex Wavelet-Based Fusion. IEEE Trans. Image Process. 2013, 22, 2398–2408. [Google Scholar] [CrossRef]
  10. Mao, Z.; Chimitt, N.; Chan, S.H. Image Reconstruction of Static and Dynamic Scenes Through Anisoplanatic Turbulence. IEEE Trans. Comput. Imaging 2020, 6, 1415–1428. [Google Scholar] [CrossRef]
  11. Cheng, J.; Zhu, W.; Li, J.; Xu, G.; Chen, X.; Yao, C. Restoration of Atmospheric Turbulence-Degraded Short-Exposure Image Based on Convolution Neural Network. Photonics 2023, 10, 666. [Google Scholar] [CrossRef]
  12. Ettedgui, B.; Yitzhaky, Y. Atmospheric Turbulence Degraded Video Restoration with Recurrent GAN (ATVR-GAN). Sensors 2023, 23, 8815. [Google Scholar] [CrossRef] [PubMed]
  13. Wu, Y.; Cheng, K.; Cao, T.; Zhao, D.; Li, J. Semi-supervised correction model for turbulence-distorted images. Opt. Express 2024, 32, 21160–21174. [Google Scholar] [CrossRef] [PubMed]
  14. Mao, Z.; Jaiswal, A.; Wang, Z.; Chan, S.H. Single Frame Atmospheric Turbulence Mitigation: A Benchmark Study and a New Physics-Inspired Transformer Model. In Proceedings of the 17th European Conference on Computer Vision (ECCV), ECCV 2022, PT XIX, Tel Aviv, Israel, 23–27 October 2022; Avidan, S., Brostow, G., Cisse, M., Farinella, G., Hassner, T., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2022; Volume 13679, pp. 430–446. [Google Scholar] [CrossRef]
  15. Li, X.; Liu, X.; Wei, W.; Zhong, X.; Ma, H.; Chu, J. A DeturNet-Based Method for Recovering Images Degraded by Atmospheric Turbulence. Remote Sens. 2023, 15, 5071. [Google Scholar] [CrossRef]
  16. Jin, D.; Chen, Y.; Lu, Y.; Chen, J.; Wang, P.; Liu, Z.; Guo, S.; Bai, X. Neutralizing the impact of atmospheric turbulence on complex scene imaging via deep learning. Nat. Mach. Intell. 2021, 3, 876–884. [Google Scholar] [CrossRef]
  17. Zou, Z.; Anantrasirichai, N. DeTurb: Atmospheric Turbulence Mitigation with Deformable 3D Convolutions and 3D Swin Transformers. In Proceedings of the Computer Vision-ACCV 2024: 17th Asian Conference on Computer Vision, Hanoi, Vietnam, 8–12 December 2024; Cho, M., Laptev, I., Tran, D., Yao, A., Zha, H., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2025; Volume 15475, pp. 20–37. [Google Scholar] [CrossRef]
  18. Zhang, X.; Chimitt, N.; Chi, Y.; Mao, Z.; Chan, S.H. Spatio-Temporal Turbulence Mitigation: A Translational Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), CVPR 2024, Seattle, WA, USA, 16–22 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 2889–2899. [Google Scholar] [CrossRef]
  19. Wang, J.; Markey, J. Modal Compensation of Atmospheric-Turbulence Phase-Distortion. J. Opt. Soc. Am. 1978, 68, 78–87. [Google Scholar] [CrossRef]
  20. Wang, Y.; Jin, D.; Chen, J.; Bai, X. Revelation of hidden 2D atmospheric turbulence strength fields from turbulence effects in infrared imaging. Nat. Comput. Sci. 2023, 3, 687–699. [Google Scholar] [CrossRef]
  21. Zamek, S.; Yitzhaky, Y. Turbulence strength estimation from an arbitrary set of atmospherically degraded images. J. Opt. Soc. Am. A 2006, 23, 3106–3113. [Google Scholar] [CrossRef]
  22. Saha, R.K.; Salcin, E.; Kim, J.; Smith, J.; Jayasuriya, S. Turbulence strength Cn2 estimation from video using physics-based deep learning. Opt. Express 2022, 30, 40854–40870. [Google Scholar] [CrossRef]
  23. Beason, M.; Potvin, G.; Sprung, D.; McCrae, J.; Gladysz, S. Comparative analysis of Cn2 estimation methods for sonic anemometer data. Appl. Opt. 2024, 63, E94–E106. [Google Scholar] [CrossRef]
  24. Zeng, T.; Shen, Q.; Cao, Y.; Guan, J.Y.; Lian, M.Z.; Han, J.J.; Hou, L.; Lu, J.; Peng, X.X.; Li, M.; et al. Measurement of atmospheric non-reciprocity effects for satellite-based two-way time-frequency transfer. Photon. Res. 2024, 12, 1274–1282. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Sun, Y.; Liu, S. Deformable and residual convolutional network for image super-resolution. Appl. Intell. 2022, 52, 295–304. [Google Scholar] [CrossRef]
  26. Luo, G.; Qu, J.; Zhang, L.; Fang, X.; Zhang, Y.; Man, S. Variational Learning of Convolutional Neural Networks with Stochastic Deformable Kernels. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; pp. 1026–1031. [Google Scholar] [CrossRef]
  27. Ranjan, A.; Black, M.J. Optical Flow Estimation using a Spatial Pyramid Network. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2720–2729. [Google Scholar] [CrossRef]
  28. Rucci, M.A.; Hardie, R.C.; Martin, R.K.; Gladysz, S. Atmospheric optical turbulence mitigation using iterative image registration and least squares lucky look fusion. Appl. Opt. 2022, 61, 8233–8247. [Google Scholar] [CrossRef]
  29. Liang, J.; Fan, Y.; Xiang, X.; Ranjan, R.; Ilg, E.; Green, S.; Cao, J.; Zhang, K.; Timofte, R.; Van Gool, L. Recurrent Video Restoration Transformer with Guided Deformable Attention. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 28 November–9 December 2022; Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., Oh, A., Eds.; Advances in Neural Information Processing Systems 35. [Google Scholar] [CrossRef]
  30. Lau, C.P.; Lai, Y.H.; Lui, L.M. Restoration of atmospheric turbulence-distorted images via RPCA and quasiconformal maps. Inverse Probl. 2019, 35, 074002. [Google Scholar] [CrossRef]
  31. Barron, J. A General and Adaptive Robust Loss Function. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4326–4334. [Google Scholar] [CrossRef]
  32. Srinath, S.; Poyneer, L.A.; Rudy, A.R.; Ammons, S.M. Computationally efficient autoregressive method for generating phase screens with frozen flow and turbulence in optical simulations. Opt. Express 2015, 23, 33335–33349. [Google Scholar] [CrossRef] [PubMed]
  33. Chimitt, N.; Chan, S.H. Simulating Anisoplanatic Turbulence by Sampling Correlated Zernike Coefficients. In Proceedings of the 2020 IEEE International Conference on Computational Photography (ICCP), Saint Louis, MO, USA, 24–26 April 2020; pp. 1–12. [Google Scholar] [CrossRef]
  34. Wu, X.Q.; Yang, Q.K.; Huang, H.H.; Qing, C.; Hu, X.D.; Wang, Y.J. Study of $C_n^2$ profile model by atmospheric optical turbulence model. Acta Phys. Sin. 2023, 72, 069201. [Google Scholar] [CrossRef]
  35. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef]
  36. Fang, Y.; Ma, K.; Wang, Z.; Lin, W.; Fang, Z.; Zhai, G. No-Reference Quality Assessment of Contrast-Distorted Images Based on Natural Scene Statistics. IEEE Signal Process. Lett. 2015, 22, 838–842. [Google Scholar] [CrossRef]
  37. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
  38. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar] [CrossRef]
  39. Li, N.; Thapa, S.; Whyte, C.; Reed, A.; Jayasuriya, S.; Ye, J. Unsupervised Non-Rigid Image Distortion Removal via Grid Deformation. In Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV 2021), Montreal, QC, Canada, 11–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 2502–2512. [Google Scholar] [CrossRef]
  40. Zhang, X.; Mao, Z.; Chimitt, N.; Chan, S.H. Imaging Through the Atmosphere Using Turbulence Mitigation Transformer. IEEE Trans. Comput. Imaging 2024, 10, 115–128. [Google Scholar] [CrossRef]
  41. Liang, J.; Cao, J.; Fan, Y.; Zhang, K.; Ranjan, R.; Li, Y.; Timofte, R.; Van Gool, L. VRT: A Video Restoration Transformer. IEEE Trans. Image Process. 2024, 33, 2171–2182. [Google Scholar] [CrossRef]
  42. Chen, X.; Li, Z.; Pu, Y.; Liu, Y.; Zhou, J.; Qiao, Y.; Dong, C. A Comparative Study of Image Restoration Networks for General Backbone Network Design. In Proceedings of the 18th European Conference on Computer Vision (ECCV 2024), PT LXXI, Milan, Italy, 29 September–4 October 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2025; Volume 15129, pp. 74–91. [Google Scholar] [CrossRef]
  43. Li, D.; Shi, X.; Zhang, Y.; Cheung, K.C.; See, S.; Wang, X.; Qin, H.; Li, H. A Simple Baseline for Video Restoration with Grouped Spatial-temporal Shift. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 9822–9832. [Google Scholar] [CrossRef]
  44. Li, S.; Zhou, Z.; Zhao, M.; Yang, J.; Guo, W.; Lv, Y.; Kou, L.; Wang, H.; Gu, Y. A Multitask Benchmark Dataset for Satellite Video: Object Detection, Tracking, and Segmentation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5611021. [Google Scholar] [CrossRef]
  45. Saha, R.K.; Qin, D.; Li, N.; Ye, J.; Jayasuriya, S. Turb-Seg-Res: A Segment-then-Restore Pipeline for Dynamic Videos with Atmospheric Turbulence. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 25286–25296. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of distortion in satellite video caused by atmospheric turbulence. (a) Satellite video imaging geometry highlighting tropospheric turbulence-induced optical wavefront distortions. (b) Degraded video frames exhibiting geometric displacement, space-variant blur, and temporal jitter.
Figure 2. Turbulence signature observed in satellite video frames. The turbulence signature is produced through physics-based $C_n^2$ estimation techniques [21,22], revealing the relative turbulence intensity across spatial positions and temporal instances in satellite videos.
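The physics-based estimators of [21,22] are considerably more involved than can be summarized here. Purely to illustrate the idea of a spatiotemporal turbulence-intensity map, the sketch below approximates a signature by normalized per-pixel temporal variance of a grayscale clip; this proxy is an assumption of the sketch, not the method used in the paper.

```python
import numpy as np

def turbulence_signature(frames: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Illustrative turbulence-signature map from a grayscale video stack.

    frames: array of shape (T, H, W) with values in [0, 1].
    Returns an (H, W) map in [0, 1]; larger values indicate stronger temporal
    fluctuation, used here as a rough proxy for relative turbulence strength.
    """
    temporal_var = frames.var(axis=0)                      # per-pixel temporal variance
    return temporal_var / (temporal_var.max() + eps)       # normalize to [0, 1]

if __name__ == "__main__":
    video = np.random.rand(30, 128, 128)                   # stand-in for a degraded clip
    ts = turbulence_signature(video)
    print(ts.shape, float(ts.min()), float(ts.max()))
```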
Figure 3. Overall architecture of the proposed TMTS network. The framework comprises three processing stages analogous to traditional lucky imaging: feature alignment, lucky fusion, and post-processing, marked with different background colors in the figure. Core functional modules include: (i) TSGA for turbulence-robust feature alignment, (ii) CGRFU for reference feature updating (utilizing 15 stacked RDBs), and (iii) TSTSA for multi-channel spatiotemporal feature aggregation. Dashed lines indicate information propagated from other directions and frames.
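The Figure 3 caption notes that CGRFU uses 15 stacked residual dense blocks (RDBs). For orientation only, the sketch below shows an RDB in its commonly used form (densely connected 3×3 convolutions, 1×1 local feature fusion, and a local residual connection); the channel width, growth rate, and layer count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """Residual dense block in the commonly used sense; the exact block
    inside CGRFU may differ from this illustrative sketch."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True))
            )
            in_ch += growth
        self.fusion = nn.Conv2d(in_ch, channels, kernel_size=1)  # local feature fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))   # dense connections
        return x + self.fusion(torch.cat(features, dim=1))       # local residual learning

# e.g., a stack of 15 RDBs, as mentioned in the Figure 3 caption
rdb_stack = nn.Sequential(*[RDB() for _ in range(15)])
out = rdb_stack(torch.randn(1, 64, 32, 32))
```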
Figure 4. Architecture of TSGA. The diagram depicts the forward temporal processing of the t-th frame. The network takes as inputs the current frame feature $f_t^3$, the two adjacent frames $I_t$ and $I_{t-1}$, and the turbulence signature $S$ corresponding to the current frame; it outputs the updated feature $\hat{f}_t^3$ and the alignment flow $O_t^{fr}$.
Figure 5. Network details of the proposed CGRFU module.
Figure 6. Network details of the proposed TSTSA module.
Figure 7. Stratified turbulence simulation for satellite imaging. (a) Altitude-dependent phase perturbations; (b) Original vs. simulated turbulence-degraded images.
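The simulation illustrated in Figure 7 relies on altitude-dependent phase perturbations (see [32,33,34]). For orientation only, the sketch below generates a single-layer FFT-based phase screen with a Kolmogorov spectrum; the grid size, pixel scale, Fried parameter r0, and amplitude-scaling convention are assumptions of this sketch, not the stratified, $C_n^2$-profile-driven simulator used in the paper.

```python
import numpy as np

def kolmogorov_phase_screen(n=256, pixel_scale=0.01, r0=0.1, rng=None):
    """Single-layer FFT-based phase screen with a Kolmogorov spectrum.

    n: grid size (pixels); pixel_scale: meters per pixel; r0: Fried parameter (m).
    Returns an (n, n) phase screen in radians (scaling conventions vary between
    references, so treat the absolute variance as approximate).
    """
    rng = rng or np.random.default_rng()
    df = 1.0 / (n * pixel_scale)                           # frequency-grid spacing (1/m)
    fx = np.fft.fftfreq(n, d=pixel_scale)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx**2 + fyy**2)
    f[0, 0] = np.inf                                       # zero out the undefined DC term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)  # Kolmogorov phase PSD
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    spectrum = noise * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(spectrum)) * n * n         # undo numpy's 1/n^2 IFFT scaling

screen = kolmogorov_phase_screen()
print(screen.shape, float(screen.std()))
```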
Figure 8. Qualitative comparisons of Scene 2 of Jilin-1, Scene 7 of Carbonite-2, and Scene 11 from UrtheCast. Zoom in for better visualization.
Figure 9. Qualitative comparisons of Scene 14 and Scene 15 of SkySat-1. Zoom in for better visualization.
Figure 10. Qualitative comparisons of Scene 17 and Scene 18 of Luojia3-01. Zoom in for better visualization.
Figure 11. Visualization comparison of real-world turbulence-degraded satellite videos. The original video sequences are sourced from the SatSOT dataset. Boxes of different colors mark the regions of interest, and the aqua-colored pixel line indicates the position used for temporal–spatial profile analysis. Zoom in for better visualization.
Figure 12. Temporal–spatial (y–t) profiles of the “SatSOT_car_42” scene after turbulence mitigation, generated by tracking the aqua-colored pixel line across sequential frames and stacking it over time. The blue circle shows the magnified result at the center position.
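A y–t profile of the kind shown in Figure 12 can be formed by extracting the same pixel column from every frame and stacking the columns along the time axis; residual jitter then appears as wavy structures. The minimal sketch below assumes a grayscale (T, H, W) clip and a user-chosen column index.

```python
import numpy as np

def yt_profile(frames: np.ndarray, x: int) -> np.ndarray:
    """Temporal-spatial (y-t) profile: the pixel column at horizontal position x
    is taken from every frame and the columns are stacked along time.

    frames: (T, H, W) grayscale video; returns an (H, T) image whose
    horizontal axis is time.
    """
    return np.stack([frame[:, x] for frame in frames], axis=1)
```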
Figure 13. Effectiveness of turbulence signature modeling. (a) Turbulence-distorted input. (b) Visualization of the intermediate turbulence signature. (c,d) Restoration results without and with the turbulence signature, respectively.
Figure 14. Performance comparison of 10-, 20-, and 30-frame input models across multi-scale inference conditions. (a) displays the relationship between PSNR and the number of input frames. (b) shows the relationship between SSIM and the number of input frames.
Figure 15. Computational complexity–performance trade-off and real-time inference analysis: (a) FLOPs–parameters–PSNR correlation in multi-scene evaluation; (b) Inference speed improvement over baseline architectures.
Table 1. Properties of the five video satellites used in this paper, including Jilin-1, Luojia3-01, SkySat-1, UrtheCast, and Carbonite-2.
| Train/Test | Video Satellite | Region | Captured Date | Duration | FPS | Frame Size |
|---|---|---|---|---|---|---|
| Train | Jilin-1 | San Francisco | 24 April 2017 | 20 s | 25 | 3840 × 2160 |
| | | Valencia, Spain | 20 May 2017 | 30 s | 25 | 4096 × 2160 |
| | | Derna, Libya | 20 May 2017 | 30 s | 25 | 4096 × 2160 |
| | | Adana-02, Turkey | 20 May 2017 | 30 s | 25 | 4096 × 2160 |
| | | Tunisia | 20 May 2017 | 30 s | 25 | 4096 × 2160 |
| | | Minneapolis-01 | 2 June 2017 | 30 s | 25 | 4096 × 2160 |
| | | Minneapolis-02 | 2 June 2017 | 30 s | 25 | 4096 × 2160 |
| | | Muharag, Bahrain | 4 June 2017 | 30 s | 25 | 4096 × 2160 |
| Test | Jilin-1 | San Diego | 22 May 2017 | 30 s | 25 | 4096 × 2160 |
| | | Adana-01, Turkey | 25 May 2017 | 30 s | 25 | 4096 × 2160 |
| | Carbonite-2 | Buenos Aires | 16 April 2018 | 17 s | 10 | 2560 × 1440 |
| | | Mumbai, India | 16 April 2018 | 59 s | 6 | 2560 × 1440 |
| | | Puerto Antofagasta | 16 April 2018 | 18 s | 10 | 2560 × 1440 |
| | UrtheCast | Boston, USA | - | 20 s | 30 | 1920 × 1080 |
| | | Barcelona, Spain | - | 16 s | 30 | 1920 × 1080 |
| | SkySat-1 | Las Vegas, USA | 25 March 2014 | 60 s | 30 | 1920 × 1080 |
| | | Burj Khalifa, Dubai | 9 April 2014 | 30 s | 30 | 1920 × 1080 |
| | Luojia3-01 | Geneva, Switzerland | 11 October 2023 | 27 s | 25 | 1920 × 1080 |
| | | LanZhou, China | 23 February 2023 | 15 s | 24 | 640 × 384 |
Table 2. Quantitative comparisons on Jilin-T. PSNR/SSIM values are calculated on the luminance (Y) channel. The best and second-best performances are highlighted in red and blue, respectively.
| Method | Scene 1 | Scene 2 | Scene 3 | Scene 4 | Scene 5 | Average |
|---|---|---|---|---|---|---|
| Turbulence | 24.96/0.7644 | 25.15/0.7743 | 22.97/0.7207 | 24.21/0.7670 | 24.35/0.7559 | 24.33/0.7564 |
| NDIR [39] | 25.45/0.8206 | 25.63/0.8308 | 24.33/0.7906 | 26.04/0.8262 | 25.12/0.8466 | 25.31/0.8230 |
| TSRWGAN [16] | 27.55/0.9122 | 28.75/0.9236 | 26.85/0.8918 | 26.63/0.8989 | 26.97/0.9071 | 27.35/0.9067 |
| RVRT [29] | 28.65/0.8817 | 26.44/0.8276 | 26.27/0.8762 | 27.33/0.8830 | 26.95/0.9044 | 27.13/0.8746 |
| TurbNet [14] | 27.03/0.8548 | 28.31/0.8663 | 25.28/0.8049 | 25.15/0.8263 | 26.35/0.8590 | 26.42/0.8423 |
| DeturNet [15] | 25.61/0.8305 | 26.08/0.8327 | 24.27/0.7855 | 25.81/0.8409 | 25.74/0.8501 | 25.50/0.8280 |
| Shift-Net [43] | 29.31/0.9275 | 28.93/0.8757 | 27.62/0.8905 | 31.48/0.9357 | 30.34/0.9297 | 29.54/0.9118 |
| TMT [40] | 33.25/0.9331 | 32.67/0.9359 | 28.91/0.8989 | 33.27/0.9284 | 31.22/0.9267 | 31.11/0.9242 |
| VRT [41] | 28.49/0.8834 | 26.39/0.8232 | 26.30/0.8757 | 27.39/0.8875 | 26.92/0.9041 | 27.10/0.8748 |
| X-Restormer [42] | 30.16/0.8960 | 29.15/0.8601 | 28.54/0.8973 | 30.68/0.9034 | 30.27/0.9102 | 29.76/0.8934 |
| DATUM [18] | 33.15/0.9329 | 33.92/0.9517 | 29.66/0.9042 | 32.47/0.9571 | 31.97/0.9438 | 32.23/0.9379 |
| TMTS | 33.46/0.9461 | 34.17/0.9458 | 29.75/0.9046 | 32.53/0.9609 | 32.13/0.9390 | 32.41/0.9393 |
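As noted in the Table 2 caption, PSNR/SSIM are computed on the luminance (Y) channel. The sketch below shows one way such an evaluation can be done, assuming ITU-R BT.601 luminance weights and scikit-image's metric implementations; the exact conversion and metric settings behind the reported numbers may differ.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 luminance from an RGB image with values in [0, 1]."""
    return img @ np.array([0.299, 0.587, 0.114])

def psnr_ssim_y(restored: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM computed on the Y channel of two (H, W, 3) images."""
    y_res, y_ref = rgb_to_y(restored), rgb_to_y(reference)
    psnr = peak_signal_noise_ratio(y_ref, y_res, data_range=1.0)
    ssim = structural_similarity(y_ref, y_res, data_range=1.0)
    return psnr, ssim
```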
Table 3. Quantitative comparisons on Carbonite-2 and UrtheCast. The best and second-best performances are highlighted in red and blue, respectively.
| Satellite | Method | Scene 6 | Scene 7 | Scene 8 | Scene 9 | Average |
|---|---|---|---|---|---|---|
| Carbonite-2 | Turbulence | 25.86/0.7955 | 25.20/0.7806 | 23.57/0.8161 | 22.85/0.7747 | 24.37/0.7917 |
| | NDIR [39] | 26.42/0.8230 | 27.56/0.8090 | 26.97/0.8311 | 26.34/0.8309 | 26.82/0.8235 |
| | TSRWGAN [16] | 28.35/0.8649 | 29.04/0.8931 | 28.46/0.8742 | 27.11/0.8635 | 28.24/0.8739 |
| | RVRT [29] | 29.03/0.8833 | 29.70/0.9048 | 27.58/0.8859 | 28.82/0.8601 | 28.78/0.8835 |
| | TurbNet [14] | 30.56/0.9056 | 29.32/0.9285 | 27.37/0.9156 | 28.21/0.8738 | 28.87/0.9059 |
| | DeturNet [15] | 27.29/0.8696 | 28.14/0.8972 | 26.26/0.8714 | 27.43/0.8575 | 27.28/0.8739 |
| | Shift-Net [43] | 28.09/0.9098 | 28.52/0.8863 | 27.63/0.9141 | 27.82/0.8815 | 28.02/0.8979 |
| | TMT [40] | 31.48/0.9397 | 31.35/0.9220 | 29.62/0.9372 | 31.27/0.9029 | 30.93/0.9255 |
| | VRT [41] | 28.19/0.8991 | 28.77/0.8535 | 27.18/0.8674 | 29.51/0.8630 | 28.41/0.8708 |
| | X-Restormer [42] | 29.41/0.9135 | 29.36/0.9050 | 28.95/0.8983 | 30.55/0.8706 | 29.57/0.8969 |
| | DATUM [18] | 32.38/0.9405 | 32.23/0.9259 | 30.91/0.9316 | 31.68/0.9055 | 31.80/0.9259 |
| | TMTS | 32.69/0.9422 | 32.44/0.9328 | 30.75/0.9385 | 32.24/0.9018 | 32.03/0.9288 |

| Satellite | Method | Scene 10 | Scene 11 | Scene 12 | Scene 13 | Average |
|---|---|---|---|---|---|---|
| UrtheCast | Turbulence | 25.46/0.8548 | 27.28/0.8610 | 24.56/0.8426 | 25.10/0.8415 | 25.60/0.8500 |
| | NDIR [39] | 26.42/0.8230 | 27.56/0.8090 | 26.97/0.8311 | 26.34/0.8309 | 26.82/0.8235 |
| | TSRWGAN [16] | 29.69/0.9018 | 29.43/0.9293 | 28.18/0.9121 | 28.92/0.9079 | 29.06/0.9128 |
| | RVRT [29] | 29.04/0.8633 | 28.35/0.8915 | 27.62/0.8740 | 28.40/0.8963 | 28.35/0.8813 |
| | TurbNet [14] | 27.76/0.9005 | 28.72/0.8843 | 28.45/0.8764 | 27.34/0.8972 | 28.07/0.8896 |
| | DeturNet [15] | 27.29/0.8696 | 28.14/0.8972 | 26.26/0.8714 | 27.43/0.8575 | 27.28/0.8739 |
| | Shift-Net [43] | 28.27/0.9136 | 30.25/0.9305 | 31.24/0.9199 | 30.41/0.9268 | 30.04/0.9227 |
| | TMT [40] | 31.23/0.9267 | 30.36/0.9359 | 32.18/0.9424 | 30.64/0.9170 | 31.10/0.9305 |
| | VRT [41] | 29.96/0.9033 | 28.38/0.8966 | 28.53/0.9070 | 29.68/0.9052 | 29.14/0.9030 |
| | X-Restormer [42] | 31.42/0.9228 | 30.75/0.9312 | 31.66/0.9305 | 30.42/0.9103 | 31.06/0.9237 |
| | DATUM [18] | 32.56/0.9409 | 33.37/0.9452 | 32.16/0.9396 | 30.43/0.9355 | 32.13/0.9403 |
| | TMTS | 32.97/0.9368 | 33.82/0.9540 | 33.24/0.9437 | 31.05/0.9362 | 32.77/0.9427 |
Table 4. Quantitative comparisons on SkySat-1 and Luojia3-01. The best and second-best performances are highlighted in red and blue, respectively.
| Satellite | Scene | TurbNet [14] | VRT [41] | TMT [40] | X-Restormer [42] | DATUM [18] | TMTS |
|---|---|---|---|---|---|---|---|
| SkySat-1 | Scene 14 | 31.10/0.9462 | 33.61/0.9493 | 30.49/0.9258 | 32.05/0.9296 | 33.86/0.9424 | 33.93/0.9430 |
| | Scene 15 | 31.61/0.9299 | 33.08/0.9518 | 30.85/0.9145 | 32.33/0.9310 | 34.02/0.9558 | 33.86/0.9572 |
| | Scene 16 | 32.42/0.9233 | 32.52/0.9304 | 31.24/0.9167 | 30.68/0.9242 | 32.17/0.9575 | 32.77/0.9510 |
| Luojia3-01 | Scene 17 | 33.18/0.9306 | 33.43/0.9418 | 30.06/0.9035 | 31.55/0.9304 | 33.21/0.9450 | 33.59/0.9453 |
| | Scene 18 | 31.07/0.9273 | 32.31/0.9356 | 29.48/0.9085 | 31.60/0.9124 | 32.84/0.9389 | 33.29/0.9405 |
| Average | | 31.87/0.9315 | 32.99/0.9418 | 30.42/0.9138 | 33.39/0.9164 | 31.64/0.9255 | 33.49/0.9474 |
Table 5. No-reference (blind) quality assessment results on real-world satellite video. The best results are highlighted in bold.
| Metric | VRT [41] | TurbNet [14] | TSRWGAN [16] | TMT [40] | DATUM [18] | TMTS (Ours) |
|---|---|---|---|---|---|---|
| BRISQUE (↓) | 48.8979 | 46.7041 | 46.2586 | 45.8577 | 44.0835 | 42.2954 |
| CEIQ (↑) | 2.9326 | 3.0831 | 3.1102 | 3.1793 | 3.3512 | 3.3458 |
| NIQE (↓) | 4.6137 | 4.4832 | 4.3419 | 4.1135 | 3.9943 | 3.8161 |
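BRISQUE, NIQE, and CEIQ are no-reference measures computed directly on the restored frames. The sketch below illustrates how the first two can be evaluated, assuming the third-party pyiqa (IQA-PyTorch) package is available; it is not the evaluation code used for Table 5, and CEIQ is omitted from the sketch.

```python
# Minimal no-reference evaluation sketch, assuming pyiqa is installed.
# Inputs are (N, 3, H, W) tensors scaled to [0, 1].
import torch
import pyiqa

brisque = pyiqa.create_metric('brisque')   # lower is better
niqe = pyiqa.create_metric('niqe')         # lower is better

frames = torch.rand(4, 3, 256, 256)        # stand-in for restored video frames
print('BRISQUE:', brisque(frames).mean().item())
print('NIQE:', niqe(frames).mean().item())
```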
Table 6. Ablation experiments to determine the effectiveness of each component.
| Metric | Baseline (TSTSA), w/o TS | Baseline (TSTSA), w/ TS | + TSGA, w/o TS | + TSGA, w/ TS | + CGRFU, w/o TS | + CGRFU, w/ TS |
|---|---|---|---|---|---|---|
| PSNR (↑) | 30.15 | 30.41 | 31.67 | 32.05 | 32.52 | 32.74 |
| SSIM (↑) | 0.8765 | 0.8804 | 0.9113 | 0.9122 | 0.9206 | 0.9217 |
| #Param. (M) | 4.782 | 4.768 | 5.739 | 5.724 | 6.27 | 6.24 |
| FLOPs (G) | 306.5 | 304.2 | 352.8 | 349.7 | 381.4 | 377.6 |
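Parameter counts and FLOPs of the kind reported in Table 6 are typically measured on a fixed dummy input. The sketch below assumes the third-party thop profiler and a placeholder model and input size, not the actual TMTS configuration; note that thop reports multiply-accumulate counts, which are commonly quoted as FLOPs.

```python
# How #Param./FLOPs figures like those in Table 6 can be obtained for a PyTorch model.
import torch
import torch.nn as nn
from thop import profile

# Placeholder model and input resolution (illustrative only).
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 3, 3, padding=1))
dummy = torch.randn(1, 3, 256, 256)

flops, params = profile(model, inputs=(dummy,))
print(f"FLOPs: {flops / 1e9:.1f} G, Params: {params / 1e6:.3f} M")
```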
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
