Article

LSTMConvSR: Joint Long–Short-Range Modeling via LSTM-First–CNN-Next Architecture for Remote Sensing Image Super-Resolution

by Qiwei Zhu, Guojing Zhang, Xiaoying Wang and Jianqiang Huang
1 School of Computer Technology and Application, Qinghai University, Xining 810016, China
2 Intelligent Computing and Application Laboratory of Qinghai Province, Qinghai University, Xining 810016, China
3 School of Computer and Information Science, Qinghai Institute of Technology, Xining 810018, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(15), 2745; https://doi.org/10.3390/rs17152745
Submission received: 11 July 2025 / Revised: 5 August 2025 / Accepted: 6 August 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)

Abstract

The inability of existing super-resolution methods to jointly model short-range and long-range spatial dependencies in remote sensing imagery limits reconstruction quality. To address this, we propose LSTMConvSR, a novel framework inspired by top-down neural attention mechanisms. Our approach pioneers an LSTM-first–CNN-next architecture. First, an LSTM-based global modeling stage efficiently captures long-range dependencies via downsampling and spatial attention, achieving 80.3% lower FLOPs and an 11× speedup. Second, a CNN-based local refinement stage, guided by the LSTM’s attention maps, enhances details in critical regions. Third, a top-down fusion stage dynamically integrates global context and local features to generate the output. Extensive experiments on the Potsdam, UAVid, and RSSCN7 benchmarks demonstrate state-of-the-art performance, achieving 33.94 dB PSNR on Potsdam with 2.4× faster inference than MambaIRv2.
Keywords: Remote Sensing; Super-Resolution; Top-down neural attention; LSTM-first–CNN-next
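
The three-stage pipeline described in the abstract can be made concrete with a small sketch. The following minimal PyTorch illustration shows the LSTM-first–CNN-next idea only, assuming a bidirectional LSTM over a downsampled feature grid, a sigmoid spatial attention map, and a pixel-shuffle upsampler; all module names, channel widths, and the fusion rule are illustrative assumptions and do not reproduce the authors’ actual design.

# Minimal sketch of an LSTM-first–CNN-next super-resolution pipeline (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LSTMGlobalStage(nn.Module):
    """Stage 1 (assumed): downsample, run a bidirectional LSTM over the flattened
    grid to capture long-range dependencies, and emit a spatial attention map."""

    def __init__(self, channels: int, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.lstm = nn.LSTM(channels, channels // 2, batch_first=True, bidirectional=True)
        self.to_attn = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat: torch.Tensor):
        b, c, h, w = feat.shape
        # Downsample so the LSTM sees a short sequence (keeps global modeling cheap).
        small = F.adaptive_avg_pool2d(feat, (h // self.scale, w // self.scale))
        hs, ws = small.shape[-2:]
        seq = small.flatten(2).transpose(1, 2)           # (B, hs*ws, C)
        ctx, _ = self.lstm(seq)                          # long-range context
        ctx = ctx.transpose(1, 2).reshape(b, c, hs, ws)
        ctx = F.interpolate(ctx, size=(h, w), mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.to_attn(ctx))          # spatial attention map
        return ctx, attn


class CNNLocalStage(nn.Module):
    """Stage 2 (assumed): local refinement with small convolutions, weighted by the
    LSTM attention so detail enhancement focuses on critical regions."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, attn: torch.Tensor):
        return self.body(feat) * attn


class LSTMConvSRSketch(nn.Module):
    """Stage 3 (assumed): fuse global context and attention-guided local features,
    then upsample with pixel shuffle."""

    def __init__(self, channels: int = 64, up: int = 4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.global_stage = LSTMGlobalStage(channels)
        self.local_stage = CNNLocalStage(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * up * up, 3, padding=1), nn.PixelShuffle(up)
        )

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.head(lr)
        ctx, attn = self.global_stage(feat)              # long-range modeling first
        local = self.local_stage(feat, attn)             # CNN refinement next
        fused = self.fuse(torch.cat([ctx, local], dim=1))
        return self.tail(fused)


if __name__ == "__main__":
    model = LSTMConvSRSketch()
    sr = model(torch.randn(1, 3, 64, 64))
    print(sr.shape)  # torch.Size([1, 3, 256, 256])

In this sketch, running a 64×64 input yields a 256×256 output; the downsampling applied before the LSTM is what keeps the global-modeling stage inexpensive, mirroring the FLOPs reduction the abstract highlights.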
