Article

Video Super-Resolution Combining Dual Motion Compensation and Multi-Scale Structure–Texture Prior

by Xiaolei Liu 1, Jiawei Shi 1,*, Jiayi Xu 2,3, Pengfei Song 1, Hongxia Gao 1, Fuhai Wang 1, Meining Ji 1, Chen Chen 1 and Xianghao Kong 1

1 China Academy of Space Technology, Beijing 100094, China
2 College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
3 Key Laboratory of Space Photoelectric Detection and Sensing of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(2), 631; https://doi.org/10.3390/app16020631
Submission received: 4 December 2025 / Revised: 4 January 2026 / Accepted: 5 January 2026 / Published: 7 January 2026
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Video super-resolution methods based on convolutional kernels or optical flow often face challenges such as limited utilization of multi-frame detail information or strong reliance on accurate optical flow estimation. To address these issues, this paper proposes a novel super-resolution reconstruction network named Dual Motion Compensation and Multi-scale Structure–Texture Prior (DCST-Net). Dual motion compensation performs direct and progressive motion mapping in parallel, effectively mitigating estimation bias in motion modeling. A multi-scale structure–texture prior is introduced to enhance high-frequency details through feature fusion, alleviating over-smoothing caused by warping and fusion processes. The proposed DCST-Net method is validated on datasets containing both large and small targets, demonstrating its effectiveness and robustness.

1. Introduction

With the rapid development of digital imaging and information processing, video has become a primary vehicle for information acquisition and dissemination. Nevertheless, constrained by sensor performance, imaging distance, and transmission bandwidth, captured videos, especially in infrared scenarios including short-wave infrared (SWIR), inevitably suffer from degradations such as insufficient resolution, blurred details, and severe noise. Computational imaging techniques are therefore urgently required to enhance visual quality. Super-resolution reconstruction, which recovers high-resolution structural details from low-resolution observations, has demonstrated critical value in medical imaging [1,2], remote-sensing surveillance [3,4], and infrared object perception [5,6].
Compared with Single Image Super-Resolution (SISR), Video Super-Resolution (VSR) faces the high-complexity challenge of aggregating information from multiple unaligned yet highly correlated frames. Wang et al. proposed using multi-scale deformable alignment and attention mechanisms to achieve high-precision inter-frame alignment and fusion, significantly improving reconstruction performance [7]. However, this method did not incorporate an explicit motion estimation mechanism, limiting its alignment accuracy in scenes with intense motion. Haris et al. progressively modeled inter-frame differences through multiple residual projections, enhancing the utilization of temporal information [8]. However, the sequential projection process may introduce cumulative errors, affecting the final reconstruction result. Chan et al. proposed BasicVSR, which divides the VSR task into four modules—propagation, alignment, fusion, and upsampling—simplifying model design while improving reconstruction effectiveness [9]. Nevertheless, this method still faces significant alignment errors when handling large-scale or non-rigid motion and is particularly prone to artifacts or structural distortions in complex backgrounds or occluded regions.
It should be emphasized that infrared videos differ markedly from their visible-light counterparts in imaging mechanism, texture representation, and noise characteristics; directly transferring VSR pipelines designed for RGB scenes therefore frequently fails to simultaneously guarantee motion-alignment stability and high-quality detail recovery. Consequently, how to enhance motion modeling robustness under complex motion and occlusion while effectively reconstructing the limited yet critical structural and textural cues present in infrared videos remains a pivotal and unresolved challenge in current video super-resolution research.
To address these issues in infrared videos, such as blurred small-target edges, sparse texture, and severe occlusion, this paper proposes a video super-resolution network combining Dual Motion Compensation and Multi-scale Structure–Texture Prior (DCST-Net). This method approaches the improvement of video reconstruction quality and structural fidelity from two key perspectives: motion alignment and high-frequency detail modeling.
Specifically, to enhance the modeling capability for moving objects, this paper designs a Dual Motion Compensation Module (DMCM) that performs direct warping and progressive warping compensation strategies in parallel. This combines the sensitivity of direct warping to local fine motion with the stability of progressive warping in handling large-scale displacement, thereby mitigating the errors that easily arise in intense motion scenarios. Combined with an occlusion mask that eliminates invalid regions during motion compensation, it enhances the stability and robustness of alignment. To avoid ambiguity, we explicitly distinguish the proposed method from existing bidirectional propagation or recurrent alignment frameworks. Although these approaches also perform multi-frame alignment, their core idea is to propagate features through recurrent structures, where alignment merely facilitates temporal propagation. In contrast, our method does not rely on feature propagation. Instead, for each input frame, we explicitly build two complementary motion-compensated results: single-flow direct warping and step-by-step progressive warping. The former performs one-shot global alignment, while the latter refines alignment locally and progressively, forming dual motion hypotheses that supply more robust cues for subsequent fusion. We stress that progressive warping is not used for recurrent feature propagation; rather, it serves as a complementary alignment to direct warping, operates in parallel with it, and jointly participates in the subsequent fusion stage. Thanks to this explicit dual-hypothesis modeling, the proposed network delivers more stable texture alignment under occlusion and complex displacement, yielding consistently lower LPIPS and higher perceptual quality.
In terms of detail restoration, this paper further proposes a Multi-scale Structure–Texture Prior Module (MSTPM). It introduces a pre-trained network to extract multi-scale feature information, guiding the model to perceive structural and textural details at different levels. This module effectively compensates for high-frequency details by upsampling, concatenating, and compressively fusing multi-scale feature maps, thereby alleviating structural blurring and textural degradation caused by alignment errors during feature fusion. DCST-Net demonstrates superior reconstruction performance on multiple datasets, effectively restoring object contours while recovering rich textural details, with overall visual quality and geometric consistency surpassing existing methods.

2. Methodology

2.1. Network Architecture

The overall architecture of DCST-Net is shown in Figure 1, including an optical flow estimation network and an upsampling module, as well as DMCM and MSTPM.
Input data is read using a sliding window approach. The input video frame sequence $\{I_t^{LR}\}_{t-n}^{t+n}$ contains $2n+1$ low-resolution images, and the corresponding output is a single super-resolution image $I_t^{SR}$. A pre-trained SpyNet [10] is introduced to compute optical flow, combined with the dual motion compensation module to obtain warped frames. A multi-scale feature extraction module is employed to extract structural and textural information from high-resolution frames, which is concatenated with the warped frames and then fed into the upsampling module. The upsampling module uses a U-Net [11] to reshape features. The encoder uses common convolutional blocks, each consisting of two convolutional layers and a PReLU [12] activation. The decoder performs upsampling via deconvolution and combines with the corresponding encoder output using skip connections. PixelShuffle [13] is used after the U-Net for upsampling to obtain the reconstructed frame.
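As a concrete illustration of the sub-pixel upsampling tail, the sketch below stacks two ×2 PixelShuffle stages to reach the ×4 output; the channel widths and the single-channel output are illustrative assumptions, since the paper does not list exact layer sizes.

```python
import torch
import torch.nn as nn

class PixelShuffleUpsampler(nn.Module):
    """Sub-pixel upsampling tail: two x2 PixelShuffle stages give a x4 output.

    Channel count (64) and the 1-channel output are illustrative assumptions;
    the paper does not state exact layer widths.
    """
    def __init__(self, in_channels=64, out_channels=1):
        super().__init__()
        self.up = nn.Sequential(
            nn.Conv2d(in_channels, in_channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                 # x2
            nn.PReLU(),
            nn.Conv2d(in_channels, in_channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                 # x4 total
            nn.PReLU(),
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.up(x)

if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 64)          # stand-in for U-Net output features
    print(PixelShuffleUpsampler()(feats).shape)  # -> torch.Size([1, 1, 256, 256])
```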

2.2. Dual Motion Compensation Module

The implementation process of DMCM is shown in Figure 2, and it uses two different forms of motion compensation operations to model motion: direct warping and progressive warping. This approach integrates the sensitivity of direct warping to local fine motion and the stability of progressive warping in handling large-scale displacement. Thus, during inter-frame alignment, it accommodates motion characteristics at different scales, achieving more accurate and robust compensation.
Direct warping, which is shown in Figure 2a, calculates the motion field $F_{i \to t}$ between the input frame $I_i$ and the target frame $I_t$ using the optical flow estimation module and then uses this motion field to directly warp the input frame to the position of the target frame. Specifically, the compensated frame $I_{i \to t}^{\mathrm{direct}}$ generated by direct warping can be expressed as
$I_{i \to t}^{\mathrm{direct}} = W(I_i, F_{i \to t})$
where $W(\cdot, F)$ denotes the optical-flow-based backward warping operator. Direct warping remains stable under continuous displacement, yet it is sensitive to motion estimation error. When alignment accuracy is insufficient, artifacts and structural distortion readily appear in the compensated frame.
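For concreteness, the backward warping operator $W(\cdot, F)$ can be sketched in PyTorch as follows; this is a minimal illustration assuming bilinear sampling with zero padding, since the paper does not specify the exact sampling settings.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Backward-warp `img` (B,C,H,W) with optical flow `flow` (B,2,H,W).

    flow[:, 0] holds the horizontal (x) displacement and flow[:, 1] the
    vertical (y) displacement, in pixels, from the target frame to `img`.
    """
    b, _, h, w = img.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]            # (B,H,W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalise to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid_x / (w - 1) - 1.0
    grid_y = 2.0 * grid_y / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)     # (B,H,W,2), (x, y) order
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)
```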
To mitigate the sensitivity of direct warping to one-shot motion estimation, we adopt the progressive warping strategy shown in Figure 2b. This approach splits the alignment from the reference frame to the target frame into a sequence of small-step mappings, reducing single-step error and stabilizing motion compensation. Specifically, the compensated frame $I_{i \to t}^{\mathrm{progressive}}$ generated by one-side progressive warping is expressed as
$\tilde{I}_i^{(i)} = I_i$
$F_{j \to j+1} = \mathcal{F}(\tilde{I}_i^{(j)}, I_{j+1}), \quad j = i, i+1, \ldots, t-1$
$\tilde{I}_i^{(j+1)} = W(\tilde{I}_i^{(j)}, F_{j \to j+1})$
$I_{i \to t}^{\mathrm{progressive}} = \tilde{I}_i^{(t)}$
where $\mathcal{F}(\cdot, \cdot)$ denotes the optical flow network. Progressive warping reduces the impact of one-shot motion error through step-wise alignment, lowering artifact risk. Yet it may accumulate error across steps and hurt the final accuracy, so it is not used alone.
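A minimal sketch of this one-side chain is given below, reusing the backward_warp operator from the previous sketch; flow_net stands in for the SpyNet-based estimator $\mathcal{F}(\cdot, \cdot)$ and its input/output convention is an assumption.

```python
def progressive_warp(frames, flow_net, i, t):
    """Chain small-step warps from frame i towards frame t (one-side, i < t).

    `frames` is a list of (B,C,H,W) tensors; `flow_net(a, b)` is assumed to
    return the flow that warps `a` towards `b` (e.g. a SpyNet wrapper).
    Reuses `backward_warp` from the earlier sketch.
    """
    warped = frames[i]
    for j in range(i, t):
        flow = flow_net(warped, frames[j + 1])   # small-step flow F_{j -> j+1}
        warped = backward_warp(warped, flow)     # advance one step
    return warped                                # I_{i -> t}^{progressive}
```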
Dual motion compensation combines the advantages of the above methods to generate more accurate compensation results. Specifically, the two compensation results are concatenated along the channel axis and fed into the reconstruction network, where convolution kernels learn spatially varying fusion weights for each path. The final dual motion compensation result $I_{i \to t}^{\mathrm{dual}}$ can be expressed as
$I_{i \to t}^{\mathrm{dual}} = \mathrm{Concat}(I_{i \to t}^{\mathrm{direct}}, I_{i \to t}^{\mathrm{progressive}})$
where $\mathrm{Concat}(\cdot)$ denotes feature concatenation. This design avoids the instability of hand-crafted weights and boosts adaptability to complex motion and occlusion.
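For illustration only, the snippet below shows how a convolution over the concatenated channels can realise such learned fusion; in DCST-Net the fusion weights are learned inside the reconstruction network rather than by a standalone module, and the channel counts here are assumptions.

```python
import torch
import torch.nn as nn

class DualFusion(nn.Module):
    """Concatenate the two motion-compensated frames along the channel axis
    and let a convolution learn spatially varying fusion weights."""
    def __init__(self, channels=1):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, direct, progressive):
        return self.fuse(torch.cat((direct, progressive), dim=1))
```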
To further suppress artifacts, the dual motion compensation module introduces an occlusion mask. By multiplying the compensated frame with the mask, invalid regions are eliminated, enhancing the accuracy and robustness of the compensation result. Given the optical flow $F_{i \to t}$, an all-ones tensor matching the input frame $I_i$ is first built, and backward warping is performed with the same sampling strategy used for image remapping to yield an initial mask $M_i$:
$M_i = W(\mathbf{1}, F_{i \to t})$
Pixels successfully mapped into the reference frame receive mask values close to 1, whereas those in occluded or out-of-range regions obtain values clearly below 1.
To further separate valid and invalid regions, we binarize the initial mask $M_i$ by thresholding, yielding the final occlusion mask $\hat{M}_i$:
$\hat{M}_i(x) = \begin{cases} 1, & M_i(x) \geq \tau \\ 0, & \text{otherwise} \end{cases}$
With the threshold set to $\tau = 0.9999$, invalid pixels caused by interpolation or out-of-bound sampling are effectively removed.
After the occlusion mask is obtained, we apply it directly to the remapped result to suppress interference from invalid regions in later feature fusion.
$\hat{I}_{i \to t} = W(I_i, F_{i \to t}) \odot \hat{M}_i$
where $\odot$ denotes pixel-wise multiplication. This step keeps only the pixels that hold valid correspondences in the reference frame, so the motion-compensated result stays robust at occlusion boundaries and under large displacement. Note that the mask requires no extra network; it emerges naturally from optical flow consistency. The scheme is lightweight, fast, and fully aligned with the motion compensation pipeline, avoiding extra parameters for explicit occlusion modeling.
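A minimal sketch of this flow-consistency mask follows, reusing the backward_warp operator from the earlier sketch; the all-ones tensor, the 0.9999 threshold, and the element-wise masking mirror the description above, while the exact tensor shapes are assumptions.

```python
import torch

def occlusion_mask(flow, threshold=0.9999):
    """Warp an all-ones tensor with the flow and binarize it.

    With zero padding in `backward_warp`, pixels sampled from outside the
    frame or heavily interpolated fall below the threshold and are zeroed.
    """
    b, _, h, w = flow.shape
    ones = torch.ones(b, 1, h, w, device=flow.device, dtype=flow.dtype)
    initial = backward_warp(ones, flow)          # soft validity map M_i
    return (initial >= threshold).to(flow.dtype)  # binary mask

# Usage: suppress invalid regions in a compensated frame.
# mask = occlusion_mask(flow_i_to_t)
# compensated = backward_warp(frame_i, flow_i_to_t) * mask
```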

2.3. Multi-Scale Structure–Texture Prior Module

MSTPM employs a pre-trained VGG16 [14] network to perform multi-scale feature extraction on the input data, generating a set of feature maps $\{F_1, F_2, \ldots, F_n\}$ at different resolutions, where $F_i$ denotes the feature map at the i-th scale. We use the convolutional part of VGG16 and divide it into five stages, covering shallow to deep multi-scale features. Stages 1–5 take the outputs of conv1_2, conv2_2, conv3_3, conv4_3, and conv5_3, with downsampling between adjacent stages to change scale. These feature maps perceive the global structural contours and the local texture variations of the image, respectively. The extracted VGG feature maps are upsampled to a common spatial resolution and concatenated along the channel dimension, after which a 1 × 1 convolution reduces the channel number. The result is then concatenated with the compensated frames $I_{t \pm n \to t}^{\mathrm{dual}}$ obtained in the previous section and sent for further processing. This process can be expressed as
$F_{\mathrm{input}} = \mathrm{Concat}(F_{\mathrm{VGG}}, F_{\mathrm{current}})$
where $F_{\mathrm{VGG}}$ represents the multi-scale features extracted by the VGG model, $F_{\mathrm{current}}$ represents the features of the current frame, and $\mathrm{Concat}(\cdot)$ denotes the feature concatenation operation. Through this fusion strategy, the reconstruction module can fully integrate multi-scale structural and textural prior information. While enhancing the expression of high-frequency details, it alleviates the over-smoothing phenomenon caused by alignment errors and feature warping, thereby improving the accuracy of detail restoration and visual quality.
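The sketch below illustrates one way to gather the five VGG16 taps, upsample them to a common resolution, concatenate them along the channel axis, and compress them with a 1 × 1 convolution; the compressed width of 64 channels and the three-channel input interface are assumptions, as the paper does not state them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class MultiScaleVGGPrior(nn.Module):
    """Tap conv1_2 / conv2_2 / conv3_3 / conv4_3 / conv5_3 activations,
    upsample them to the input resolution, concatenate and compress."""
    # Indices just after the ReLU of each tapped conv layer in vgg16().features.
    TAPS = (3, 8, 15, 22, 29)

    def __init__(self, out_channels=64):
        super().__init__()
        layers = list(vgg16(weights="DEFAULT").features.children())  # torchvision >= 0.13
        self.slices = nn.ModuleList()
        prev = 0
        for t in self.TAPS:
            self.slices.append(nn.Sequential(*layers[prev:t + 1]))
            prev = t + 1
        for p in self.parameters():
            p.requires_grad_(False)          # frozen prior network
        # 64 + 128 + 256 + 512 + 512 channels after concatenation.
        self.compress = nn.Conv2d(64 + 128 + 256 + 512 + 512, out_channels, 1)

    def forward(self, x):
        # x: (B,3,H,W); single-channel infrared frames would be replicated to 3 channels.
        h, w = x.shape[-2:]
        maps, feat = [], x
        for stage in self.slices:
            feat = stage(feat)
            maps.append(F.interpolate(feat, size=(h, w),
                                      mode="bilinear", align_corners=False))
        return self.compress(torch.cat(maps, dim=1))
```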
Note that our multi-scale structure–texture block is not a strict statistical prior but a perception-guided constraint. By injecting multi-scale features, it encourages the network to preserve pixel fidelity while remaining attentive to layout and texture consistency, so that visual quality improves under motion and occlusion. Unlike adding a perceptual term to the loss alone, we feed the structure–texture features directly into the fusion and reconstruction stages, providing steady guidance for representation learning.
Although deeper networks such as ResNet [15] or ConvNeXt [16] achieve higher classification accuracy, we retain VGG16 in MSTPM. The main reason is that we are not pursuing optimal classification accuracy; rather, we value its stable perceptual features and consistent texture modeling. Specifically, VGG16 has a simple, purely convolutional layout, and its shallow and middle layers preserve clear spatial detail and local texture, making the features easy to interpret. Such feature maps are widely used in perceptual losses, style transfer, and video restoration. By comparison, more recent networks emphasize high-level semantics through aggressive downsampling and feature compression, so fine texture details may be lost.
Our goal is to improve perceptual quality and maintain temporal texture consistency in infrared motion scenes, not merely to reduce pixel-wise loss, so we keep VGG16 as the MSTPM backbone.

3. Experimental Results and Analysis

To validate the effectiveness of the proposed method, experiments and result analysis were conducted. The superiority of the proposed method was demonstrated through qualitative and quantitative comparisons with various baseline methods. The effectiveness of each module was verified through ablation studies and visual analysis.

3.1. Datasets

Since available infrared video data are limited in scale and variety, we pre-train on RGB data and fine-tune on infrared clips. The public dataset Vimeo-90K [17] was used to train the network, and a short-wave infrared dataset (SWIRD) was used for fine-tuning and testing. Note that we pre-train on Vimeo-90K not to learn RGB appearance, but to let the network fully learn complex motion and temporal alignment. Optical flow and motion compensation rely on temporal structure and displacement patterns, not on the spectral band. Large-scale RGB clips teach the model to handle large displacements and non-rigid movement; a short infrared fine-tuning stage then adapts the model to the radiant signal and noise characteristics, enabling effective infrared super-resolution. In addition, to guarantee an impartial comparison, we conduct evaluations on the public benchmarks REDS4 [18] and Vid4 [19] to verify the superiority of the proposed method. Detailed information about the datasets is shown in Table 1.
The Vimeo-90K dataset contains 91,701 video clips, covering various scenes and motion types. Each clip contains seven consecutive frames with a resolution of 448 × 256, suitable for video denoising and super-resolution tasks. The REDS4 subset is from the full REDS, comprising four 100-frame sequences (000, 011, 015, 020) captured in diverse scenarios at a resolution of 1280 × 720. Vid4, a classical benchmark adopted for video enhancement tasks, consists of four clips ranging from 40 to 50 frames—namely calendar, city, foliage, and walk—whose spatial resolutions span 480 × 720 to 576 × 720.
The short-wave infrared dataset contains six video sequences, covering various scenes and aircraft targets in different poses at a resolution of 640 × 512. In our experimental setup, we adopt a staged training scheme. First, the model is pre-trained on the Vimeo-90K dataset to acquire generic spatio-temporal reconstruction and motion modeling capacity. Next, the longest sequence in SWIRD is divided into 987 short subsequences of seven frames each; these subsequences are used for fine-tuning so that the network can adapt to infrared imaging characteristics. Each of the remaining five long sequences, labeled IR1–IR5, contains 200 frames and is reserved exclusively for testing. They never participate in training or fine-tuning, ensuring an impartial and objective evaluation.
During the data preprocessing stage, all video frames were downsampled by a factor of 4 using bicubic interpolation to generate low-resolution input data. For an original high-resolution frame $I^{HR}$, the corresponding low-resolution frame $I^{LR}$ can be expressed as
$I^{LR} = \mathrm{BicubicDownsample}(I^{HR}, s = 4)$
where $\mathrm{BicubicDownsample}(\cdot)$ denotes the bicubic downsampling operation and $s = 4$ is the downsampling factor. The low-resolution data generated in this way effectively simulate the degradation process in real-world scenarios, providing high-quality input–output pairs for the video super-resolution task.
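A minimal sketch of this degradation step, assuming PyTorch's bicubic interpolation; the paper does not specify anti-aliasing settings.

```python
import torch
import torch.nn.functional as F

def bicubic_downsample(hr, scale=4):
    """x4 bicubic downsampling of a high-resolution frame batch (B,C,H,W).

    `antialias=True` could also be passed to approximate MATLAB-style
    resizing; the paper does not state which variant was used.
    """
    return F.interpolate(hr, scale_factor=1.0 / scale,
                         mode="bicubic", align_corners=False)

# Example: a 256x256 HR frame becomes a 64x64 LR input.
lr = bicubic_downsample(torch.rand(1, 1, 256, 256))
print(lr.shape)  # -> torch.Size([1, 1, 64, 64])
```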

3.2. Implementation Details

The hardware and software configurations used in the experiments are shown in Table 2.
The following optimization strategies and parameter settings were adopted for training. Data were read using a sliding-window approach, with seven consecutive video frames input each time as a training sample. To enhance the model's local feature extraction capability, each frame was cropped into local patches of size 64 × 64, which were used as the network's input. All tests use a fixed random seed to keep results reproducible: before training, we set the Python, NumPy, and PyTorch seeds to 42 and enabled deterministic computing mode. The Adam optimizer [20] was used for model training, with momentum parameters set to $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The batch size was set to 1, the initial learning rate was set to $1 \times 10^{-4}$, and a cosine annealing strategy was employed to dynamically adjust the learning rate; the learning rate decays every 5 epochs, down to $1 \times 10^{-7}$, enabling finer parameter optimization in the later stages of training. To constrain the training process and enhance reconstruction quality, a composite loss function comprising a Charbonnier loss and a perceptual loss was used. The Charbonnier loss [21] minimizes the difference between the reconstructed frame $I^{SR}$ and the real high-resolution frame $I^{HR}$, while the perceptual loss [22], with its weight set to 0.1, supervises the retention of high-frequency detail. During training, a pre-trained SpyNet model was used to initialize the optical flow estimation module, while the other modules were trained from scratch. To keep training stable and prevent early optical flow estimation errors from harming the reconstruction network, we freeze SpyNet for the first two epochs; after the model reaches a stable state, we unfreeze it and train end-to-end with the remaining modules. This strategy keeps alignment accurate and improves overall quality. The model was trained for a total of 30 epochs, under which settings it converged quickly and achieved optimal performance.
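The snippet below sketches the main training ingredients described above (fixed seed 42 with deterministic mode, Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$, cosine annealing towards $1 \times 10^{-7}$, and a Charbonnier term). The tiny stand-in model, the restart period of the scheduler, and the omitted perceptual term are assumptions for illustration only.

```python
import random
import numpy as np
import torch
import torch.nn as nn

def set_seed(seed=42):
    """Fix Python, NumPy and PyTorch seeds and request deterministic kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True, warn_only=True)

def charbonnier_loss(sr, hr, eps=1e-6):
    """Charbonnier penalty, a differentiable L1-like reconstruction loss."""
    return torch.sqrt((sr - hr) ** 2 + eps ** 2).mean()

set_seed(42)
model = nn.Conv2d(1, 1, 3, padding=1)           # stand-in for DCST-Net
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Cosine annealing towards 1e-7; whether the schedule restarts every 5 epochs
# or decays over the full 30 epochs is an assumption here.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=5, eta_min=1e-7)

lr_patch, hr_patch = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
# Composite objective; `perceptual_vgg` (a VGG-feature distance, weight 0.1)
# is assumed to be defined elsewhere and is omitted from this sketch.
loss = charbonnier_loss(model(lr_patch), hr_patch)  # + 0.1 * perceptual_vgg(...)
loss.backward()
optimizer.step()
scheduler.step()
```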

3.3. Evaluation Metrics

Network performance was primarily evaluated using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity [23] (LPIPS). For the results of each image sequence, the PSNR, SSIM, and LPIPS values for each frame in the sequence were calculated. The average values of these calculations were then taken as the evaluation metrics for the reconstruction result of that sequence.
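As an illustration of this per-frame-then-average protocol, the sketch below computes PSNR over a sequence; SSIM and LPIPS are averaged in the same way using their standard implementations, which are omitted here.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak Signal-to-Noise Ratio between two frames of identical shape."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def sequence_score(ref_frames, sr_frames, metric=psnr):
    """Per-sequence score: evaluate each frame pair, then average the results."""
    return float(np.mean([metric(r, s) for r, s in zip(ref_frames, sr_frames)]))
```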
It should be emphasized that PSNR and SSIM primarily quantify the pixel-level reconstruction error, which often fails to faithfully reflect visual perceptual quality in some scenes, such as large-scale motion, occlusion, and non-rigid deformation. In contrast, LPIPS concentrates on high-level semantic and textural perceptual consistency, offering a more effective assessment of the visual realism and naturalness of video reconstruction results. Consequently, we adopt LPIPS as a key metric for perceptual quality and focus on analyzing the performance of different methods under this indicator.

3.4. Quantitative Results

Details of the compared models are shown in Table 3, from which it can be seen that DCST-Net has the smallest parameter scale among the compared methods.
All methods were trained on the same dataset and evaluated under identical conditions. Comparative results of different methods on the test set are shown in Table 4, Table 5 and Table 6; red and blue indicate the best and second-best results in Table 6. It should be noted that the best and second-best results exclude the without-fine-tuning comparison. It can be seen that the PSNR, SSIM, and LPIPS values of DCST-Net's reconstruction results on the five test sequences are significantly better than those of EDVR [7] and DUF [24]. From the perspective of perceptual quality, DCST-Net achieves PSNR and SSIM values comparable to BasicVSR [9], IART [25], and ST-AVSR [26], but with lower LPIPS values, indicating superior perceptual similarity while maintaining reconstruction accuracy. From Table 3, Table 4, Table 5 and Table 6, it can be seen that DCST-Net has a parameter count comparable to DUF but significantly outperforms DUF in all metrics. Compared with ST-AVSR, its parameter count is only about one-twentieth, yet the reconstruction PSNR and SSIM values are comparable and LPIPS is lower, demonstrating the superior performance of DCST-Net. It is worth pointing out that the proposed method demonstrates stronger stability in scenes characterized by complex motion patterns or pronounced occlusion, whereas its advantage diminishes relative to reconstruction approaches centered on long-term feature propagation when displacement variation is small or the texture is highly repetitive.
Although the proposed method does not consistently achieve the highest PSNR or SSIM scores across all test sequences, it delivers more consistent perceptual quality and exhibits superior robustness against motion-alignment uncertainty and occlusion. For sequences with gentle motion and negligible deformation, recurrent propagation-based approaches such as BasicVSR may retain an advantage in pixel-level fidelity. This indicates the complementarity of different motion modeling strategies in video reconstruction tasks and further clarifies the applicable scope of the present method.
To further investigate the generalization capability of the model, Table 4, Table 5 and Table 6 present the test results obtained by directly deploying the Vimeo-90K pre-trained weights on the infrared dataset without any infrared fine-tuning. The experiments demonstrate that the proposed method still maintains a perceptual advantage in terms of LPIPS, yet suffers from noticeable degradation in PSNR and SSIM, further corroborating the necessity of domain-adaptive fine-tuning for infrared images.
The comparative result curves of all methods on the test set are illustrated in Figure 3. It can be seen that DCST-Net delivers PSNR and SSIM values comparable to BasicVSR, IART, and ST-AVSR across all test sequences, while markedly outperforming EDVR and DUF. Its LPIPS scores surpass those of every comparison method.
We additionally quantify the performance distribution across the test sequences and report the corresponding means and standard deviations in Table 7. The results reveal that the proposed method exhibits stable performance variation among different infrared video sequences, corroborating its cross-sequence consistency and stability.
Beyond the aforementioned comparisons, Table 8 further reports the quantitative results obtained on the public test benchmarks to ensure an impartial evaluation.
As evidenced in Table 8, DCST-Net achieves the lowest LPIPS on both public benchmarks, corroborating its superiority in perceptual quality enhancement. On REDS4, although PSNR and SSIM do not attain the best or second-best ranks, the performance gap is marginal. On Vid4, DCST-Net obtains the second-best PSNR. This gain is attributed to the fact that Vid4 contains more intensive motion and richer textural details, thereby fully exploiting the motion modeling and texture restoration capacities of the proposed network.

3.5. Qualitative Results

Qualitative comparison results of different methods on the test set are shown in Figure 4.
The reconstruction results for large-scale targets (ten-thousand-pixel level) are shown in Figure 4a. As seen in Figure 4a, EDVR, DUF, and BasicVSR perform poorly in restoring textured regions, with significant loss of detail information. IART and ST-AVSR can reconstruct some details but still show considerable differences from the ground truth image. In contrast, DCST-Net performs better in detail restoration, accurately recovering high-frequency textures such as the white stripes on the aircraft, demonstrating stronger detail modeling capability. The reconstruction results for small-scale targets (hundred-pixel level) are shown in Figure 4b. As seen in Figure 4b, for small-scale targets with less texture, EDVR produces artifacts in the reconstruction result, causing contour blurring. DCST-Net can reconstruct clear contour boundaries in this scenario, presenting superior visual quality.
We further present qualitative comparisons on the REDS4 and Vid4 datasets.
As illustrated in Figure 5, DCST-Net still delivers visually good results within repetitive-texture regions. Regular structures such as window grids are faithfully reconstructed without observable distortion or artifacts. Nevertheless, on this particular sample the visual gap between the proposed method and several state-of-the-art competitors remains relatively limited, implying that under scenes dominated by regular textures and mild motion the performance of DCST-Net and the compared approaches converges to a comparable level.
As illustrated in Figure 6, the city sequence is dedicated to evaluating the capability of algorithms to model large-scale repetitive structures and high-frequency textures. Methods with inferior reconstruction quality tend to produce aliasing between low- and high-frequency components, such as conspicuous stripe artifacts. In contrast, DCST-Net successfully preserves the structural consistency of repetitive patterns such that the grid and edge of the windows remain sharp and arrangements regular—without any observable stripe or structural distortion, underscoring its superiority in high-frequency texture restoration.

3.6. Ablation Studies

To verify the effectiveness of the Dual Motion Compensation Module and the Multi-scale Structure–Texture Prior Module, ablation experiments were conducted on each module without changing the training and test sets. The experimental results are shown in Table 9, Table 10 and Table 11, and red and blue indicate the best and second-best results.
w/o Motion denotes the model equipped only with direct warping motion compensation and the texture prior. w/o Texture denotes the model equipped only with dual motion compensation. From Table 9, Table 10 and Table 11, it can be seen that the w/o Motion model achieves LPIPS values on the test set significantly better than the w/o Texture model and close to the full model’s performance, indicating that the multi-scale structure–texture prior plays a significant role in enhancing perceptual quality. The w/o Texture model performs better than w/o Motion in PSNR and SSIM, indicating that the dual motion compensation module improves the accuracy of inter-frame alignment, thereby enhancing overall reconstruction performance.
The result curves of ablation studies on the test set are shown in Figure 7. It can be seen that removing either DMCM or MSTPM consistently degrades model performance, corroborating the effectiveness of both the Dual Motion Compensation Module and the Multi-scale Structure–Texture Prior Module.
The ablation study results for large-scale targets (ten-thousand-pixel level) are shown in Figure 8a, from which it can be seen that the reconstruction result of the w/o Motion model exhibits obvious artifacts, with blurred wing contours, and the w/o Texture model alleviates the artifact problem to some extent but leads to the loss of high-frequency information.
The ablation study results for small-scale targets (hundred-pixel level) are shown in Figure 8b, from which it can be seen that the reconstruction result of the w/o Motion model shows blurred edges on the aircraft body contour, and the w/o Texture model yields a clearer contour but suffers from significant detail loss, such as the inability to effectively reconstruct stripes and highlight regions on the aircraft’s body.
Overall, removing either the Dual Motion Compensation Module or the Multi-scale Structure–Texture Prior Module leads to a performance drop, validating the effectiveness of DCST-Net.
To justify the adoption of VGG16 in MSTPM, we conduct additional ablation studies on the feature extractor. Concretely, while keeping the overall architecture, training protocol, and loss functions intact, we replace the VGG16 backbone with ResNet-50 and ConvNeXt-Tiny, respectively, and retrain under the same dataset and evaluation protocol. The quantitative comparisons are summarized in Table 12, Table 13 and Table 14.
The experimental results reveal that, although PSNR and SSIM exhibit marginal variations across different backbones, the VGG16-equipped MSTPM achieves the lowest LPIPS, indicating a clear advantage in encoding perceptually relevant features and maintaining textural consistency. Further inspection shows that the intermediate activations of VGG16 furnish more stable multi-scale texture representations and local structure alignment, effectively alleviating texture mismatch caused by occlusion and complex displacement during motion compensation. This characteristic aligns precisely with our core objective of boosting perceptual quality. Consequently, considering overall performance, stability, and perceptual reconstruction fidelity, VGG16 is selected as the feature extractor in MSTPM.

4. Conclusions

This paper proposed an infrared video super-resolution reconstruction method combining dual motion compensation and multi-scale structure–texture priors. The devised dual motion compensation improves resilience to motion alignment uncertainties by introducing complementary alignment pathways, while an occlusion mask effectively suppresses disturbances from misaligned regions. Concurrently, the multi-scale structure–texture prior enhances structural edges and high-frequency texture information, alleviating image over-smoothing caused by alignment errors and fusion distortions. Experimental results show that DCST-Net achieved 4× super-resolution reconstruction on multiple test sequences, attaining high PSNR and SSIM values together with lower LPIPS. Compared with existing methods, the proposed approach more accurately reconstructs clear contour boundaries and effectively restores the structure and texture details of targets, while markedly suppressing artifacts.

Author Contributions

Conceptualization, J.S.; Methodology, J.X.; Software, J.X.; Validation, X.L.; Formal analysis, P.S.; Investigation, X.L.; Writing—original draft, X.L.; Writing—review & editing, J.S.; Visualization, P.S.; Supervision, H.G., F.W., M.J., C.C. and X.K.; Project administration, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, and further inquiries can be directed to the corresponding author.

Acknowledgments

We are grateful to the other researchers who contributed to this study. Special thanks go to Jiapeng Wu and Xuguang Liu for their assistance in preparing the manuscript.

Conflicts of Interest

Author Xiaolei Liu was employed by China Academy of Space Technology. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

References

  1. Puttagunta, M.; Subban, R. Swinir transformer applied for medical image super-resolution. Procedia Comput. Sci. 2022, 204, 907. [Google Scholar] [CrossRef]
  2. Ma, Y.Z.; Zhou, W.T.; Ma, R.; Wang, E.; Yang, S.; Tang, Y.; Zhang, X.; Guan, X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med. Image Anal. 2024, 94, 103106. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, H.P.; Wang, P.R.; Jiang, J.G. Nonpairwise-Trained Cycle Convolutional Neural Network for Single Remote Sensing Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4250. [Google Scholar] [CrossRef]
  4. Fernandez-Beltran, R.; Latorre-Carmona, P.; Pla, F. Single-frame super-resolution in remote sensing: A practical overview. Int. J. Remote Sens. 2017, 38, 314. [Google Scholar] [CrossRef]
  5. Ren, S.B.; Meng, Q.; Wu, Z. Flow-based super-resolution reconstruction of remote sensing images. Chin. Space Sci. Technol. 2022, 42, 99. [Google Scholar] [CrossRef]
  6. Sun, R.; Zhang, H.; Cheng, Z.K.; Zhang, X.D. Super-resolution reconstruction of infrared image based on channel attention and transfer learning. Opto-Electron. Eng. 2021, 48, 200045. [Google Scholar] [CrossRef]
  7. Wang, X.; Chan, K.C.K.; Yu, K.; Dong, C.; Loy, C.C. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; p. 1954. [Google Scholar] [CrossRef]
  8. Haris, M.; Shakhnarovich, G.; Ukita, N. Recurrent back-projection network for video super-resolution. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 15–20 June 2019; p. 3897. [Google Scholar] [CrossRef]
  9. Chan, K.C.K.; Wang, X.; Yu, K.; Dong, C.; Loy, C.C. BasicVSR: The search for essential components in video super-resolution and beyond. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; p. 4945. [Google Scholar] [CrossRef]
  10. Ranjan, A.; Black, M.J. Optical flow estimation using a spatial pyramid network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; p. 4164. [Google Scholar]
  11. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; p. 234. [Google Scholar] [CrossRef]
  12. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; p. 1026. [Google Scholar] [CrossRef]
  13. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; p. 1874. [Google Scholar] [CrossRef]
  14. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; p. 1. [Google Scholar]
  15. He, K.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; p. 770. [Google Scholar] [CrossRef]
  16. Liu, Z.; Mao, H.Z.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S.N. A convnet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; p. 11976. [Google Scholar]
  17. Xue, T.F.; Chen, B.A.; Wu, J.J.; Wei, D.; Freeman, W.T. Video enhancement with task-oriented flow. Int. J. Comput. Vis. 2019, 127, 1106. [Google Scholar] [CrossRef]
  18. Tao, X.; Gao, H.Y.; Liao, R.J.; Wang, J.; Jia, J. Detail-revealing Deep Video Super-resolution. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; p. 4472. [Google Scholar] [CrossRef]
  19. Liu, C.; Sun, D.Q. On Bayesian Adaptive Video Super Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 346. [Google Scholar] [CrossRef] [PubMed]
  20. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; p. 1. [Google Scholar]
  21. Charbonnier, P.; Blanc-Feraud, L.; Aubert, G.; Barlaud, M. Two deterministic half quadratic regularization algorithms for computed imaging. Proc. IEEE Int. Conf. Image Process. 1994, 2, 168. [Google Scholar] [CrossRef]
  22. Johnson, J.; Alahi, A.; Li, F.F. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; p. 694. [Google Scholar] [CrossRef]
  23. Zhang, R.; Isola, P.; Efros, A.A. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; p. 586. [Google Scholar] [CrossRef]
  24. Jo, Y.; Oh, S.W.; Kang, J.; Kim, S.J. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; p. 3224. [Google Scholar] [CrossRef]
  25. Xu, K.; Yu, Z.W.; Wang, X.; Mi, M.B.; Yao, A. Enhancing Video Super-Resolution via Implicit Resampling-based Alignment. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; p. 2546. [Google Scholar] [CrossRef]
  26. Shang, W.; Ren, D.W.; Zhang, W.Y.; Fang, Y.; Zuo, W.; Ma, K. Arbitrary-Scale Video Super-Resolution with Structural and Textural Priors. In Proceedings of the 18th European Conference, Milan, Italy, 29 September–4 October 2024; p. 73. [Google Scholar] [CrossRef]
Figure 1. Architecture of Dual Motion Compensation and Multi-scale Structure–Texture Prior Network.
Figure 2. Dual motion compensation. (a) Direct warping. (b) Progressive warping.
Figure 3. Comparison of different methods on SWIR datasets.
Figure 4. Qualitative comparison of large-scale and small-scale targets.
Figure 5. Qualitative comparison on the 011 video sequence from REDS4.
Figure 6. Qualitative comparison on the city video sequence from Vid4.
Figure 7. Comparison of ablation studies.
Figure 8. Ablation studies of large-scale and small-scale targets.
Table 1. Details of video datasets.

Dataset     Image Size (Pixel)   Object               Frame Number
Vimeo-90K   448 × 256            Multiple objects     7 × 91,701
SWIRD       640 × 512            Airplane             7 × 987
SWIRD       640 × 512            Airplane & objects   200 × 5
REDS4       1280 × 720           Multiple objects     100 × 4
Vid4        720 × (480–576)      Multiple objects     (40–50) × 4
Table 2. Configuration of the experimental environment.

Name        Information
CPU         13th Gen Intel(R) Core(TM) i7-13700F
GPU         NVIDIA GeForce RTX 4060 Ti
RAM         32.0 GB
Framework   PyTorch 2.1.1
CUDA        12.1
Table 3. The details of models *.

Model    EDVR     DUF     BasicVSR   IART     ST-AVSR   Ours
Params   20.6 M   5.8 M   6.3 M      13.4 M   106.5 M   5.4 M

* Red and blue highlighting represent the fewest and second-fewest parameters, respectively.
Table 4. Quantitative comparison of different methods on SWIR datasets (PSNR).

Model \ Seq              IR1     IR2     IR3     IR4     IR5
EDVR                     36.65   40.62   42.35   46.08   39.14
DUF                      25.73   26.82   25.72   27.65   27.70
BasicVSR                 47.06   54.41   50.44   54.84   57.20
IART                     47.61   54.41   50.76   55.16   56.59
ST-AVSR                  44.02   50.87   48.38   52.33   57.89
Ours (w/o fine-tuning)   43.37   49.12   48.39   44.39   46.79
Ours                     44.18   49.36   50.03   46.28   48.03
Table 5. Quantitative comparison of different methods on SWIR datasets (SSIM).

Model \ Seq              IR1      IR2      IR3      IR4      IR5
EDVR                     0.9625   0.9885   0.9834   0.9762   0.9011
DUF                      0.9242   0.9637   0.9523   0.9381   0.9456
BasicVSR                 0.9946   0.9989   0.9942   0.9980   0.9981
IART                     0.9949   0.9989   0.9943   0.9977   0.9979
ST-AVSR                  0.9929   0.9988   0.9938   0.9978   0.9985
Ours (w/o fine-tuning)   0.9903   0.9962   0.9916   0.9901   0.9933
Ours                     0.9914   0.9977   0.9938   0.9935   0.9954
Table 6. Quantitative comparison of different methods on SWIR datasets (LPIPS) *.

Model \ Seq              IR1      IR2      IR3      IR4      IR5
EDVR                     0.4061   0.1866   0.3373   0.1804   0.3471
DUF                      0.1909   0.1509   0.2648   0.3705   0.3648
BasicVSR                 0.0445   0.0306   0.1345   0.0304   0.0385
IART                     0.0460   0.0319   0.1289   0.0293   0.0385
ST-AVSR                  0.0438   0.0281   0.1492   0.0289   0.0251
Ours (w/o fine-tuning)   0.0348   0.0254   0.0732   0.0213   0.0249
Ours                     0.0306   0.0202   0.0591   0.0194   0.0173
* Red and blue highlighting represent the best and second-best performances, respectively.
Table 7. Performance distribution of different methods across various test sequences.

Model                    PSNR           SSIM              LPIPS
EDVR                     40.97 ± 3.17   0.9623 ± 0.0318   0.2915 ± 0.0913
DUF                      26.72 ± 0.87   0.9448 ± 0.0133   0.2684 ± 0.0889
BasicVSR                 52.79 ± 3.60   0.9968 ± 0.0020   0.0557 ± 0.0398
IART                     52.91 ± 3.27   0.9967 ± 0.0018   0.0549 ± 0.0374
ST-AVSR                  50.70 ± 4.57   0.9964 ± 0.0025   0.0550 ± 0.0475
Ours (w/o fine-tuning)   46.41 ± 2.22   0.9923 ± 0.0023   0.0359 ± 0.0192
Ours                     47.58 ± 2.13   0.9944 ± 0.0021   0.0293 ± 0.0156
Table 8. Quantitative comparison of different methods on REDS4 and Vid4 (PSNR/SSIM/LPIPS) *.

Model                    REDS4                 Vid4
EDVR                     31.05/0.8793/0.2097   27.35/0.8264/-
DUF                      28.63/0.8251/-        27.34/0.8327/-
BasicVSR                 31.42/0.8909/0.2023   27.24/0.8251/0.2812
IART                     32.89/0.9138/0.1629   28.26/0.8517/0.2501
ST-AVSR                  31.03/0.8970/-        26.16/0.8520/-
Ours (w/o fine-tuning)   31.32/0.8936/0.1473   27.54/0.8283/0.2115
* Red and blue highlighting represent the best and second-best performances, respectively.
Table 9. Ablation studies of two proposed key innovative modules on SWIR datasets (PSNR) *.

Model \ Seq   IR1     IR2     IR3     IR4     IR5
w/o Motion    43.86   48.80   46.16   43.13   43.49
w/o Texture   43.89   47.81   49.42   45.23   45.14
Ours          44.18   49.36   50.03   46.28   48.03
* Red and blue highlighting represent the best and second-best performances, respectively.
Table 10. Ablation studies of two proposed key innovative modules on SWIR datasets (SSIM) *.

Model \ Seq   IR1      IR2      IR3      IR4      IR5
w/o Motion    0.9910   0.9959   0.9908   0.9917   0.9937
w/o Texture   0.9914   0.9969   0.9945   0.9961   0.9980
Ours          0.9914   0.9977   0.9938   0.9935   0.9954
* Red and blue highlighting represent the best and second-best performances, respectively.
Table 11. Ablation studies of two proposed key innovative modules on SWIR datasets (LPIPS) *.

Model \ Seq   IR1      IR2      IR3      IR4      IR5
w/o Motion    0.0302   0.0213   0.1143   0.0248   0.0232
w/o Texture   0.0521   0.0410   0.1064   0.0340   0.0318
Ours          0.0306   0.0202   0.0591   0.0194   0.0173
* Red and blue highlighting represent the best and second-best performances, respectively.
Table 12. Ablation studies of feature extraction on SWIR datasets (PSNR).

Model \ Seq     IR1     IR2     IR3     IR4     IR5
ResNet-50       44.42   49.93   49.65   46.75   47.95
ConvNeXt-Tiny   44.11   49.49   50.12   46.26   48.46
VGG16 (Ours)    44.18   49.36   50.03   46.28   48.03
Table 13. Ablation studies of feature extraction on SWIR datasets (SSIM).

Model \ Seq     IR1      IR2      IR3      IR4      IR5
ResNet-50       0.9917   0.9960   0.9904   0.9938   0.9968
ConvNeXt-Tiny   0.9911   0.9965   0.9944   0.9956   0.9977
VGG16 (Ours)    0.9914   0.9977   0.9938   0.9935   0.9954
Table 14. Ablation studies of feature extraction on SWIR datasets (LPIPS) *.

Model \ Seq     IR1      IR2      IR3      IR4      IR5
ResNet-50       0.0433   0.0326   0.1058   0.0363   0.0343
ConvNeXt-Tiny   0.0360   0.0317   0.0742   0.0245   0.0256
VGG16 (Ours)    0.0306   0.0202   0.0591   0.0194   0.0173
* Red and blue highlighting represent the best and second-best performances, respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
