Hyper-ISTA-GHD: An Adaptive Hyperparameter Selection Framework for Highly Squinted Mode Sparse SAR Imaging
Highlights
- A novel training-free framework, Hyper-ISTA-GHD, is proposed for sparse SAR imaging, which automatically and adaptively optimizes both the regularization parameter and step size during reconstruction.
- The integration of a high-order phase compensation into the observation operator successfully extends the applicability of the sparse imaging method to challenging highly squinted SAR configurations.
- The proposed method achieves high-precision, rapid imaging for highly squinted SAR, significantly enhancing image quality and robustness against noise without manual parameter tuning or reliance on training datasets.
- This framework offers a highly generalizable and efficient imaging solution applicable to diverse SAR modes and scenes, overcoming critical limitations of existing deep unfolding networks and conventional sparse methods.
Abstract
1. Introduction
- Pioneering application of Hyper-ISTA-GHD to fast sparse SAR imaging reconstruction: This study presents the first integration of Hyper-ISTA-GHD into sparse SAR imaging methods, proposing a novel framework that incorporates an approximate observation operator. This advancement enables automatic regularization parameter tuning and significantly accelerates convergence speed.
- First-time adoption of hypergradient descent for adaptive Hyper-ISTA-GHD step size optimization: The introduction of hypergradient descent to dynamically adjust Hyper-ISTA-GHD step sizes represents an innovation that substantially enhances algorithm performance under low-SNR conditions in practical SAR echo scenarios, addressing critical stability challenges.
- Innovative incorporation of high-order compensation phase into sparse SAR observation operators: By embedding high-order phase compensation into the approximate observation model, this work extends the applicability of sparse SAR imaging methodologies to configurations with larger squint angles, overcoming critical limitations in existing approaches.
2. Signal Model and Related Method
2.1. Highly Squinted SAR Signal Model
2.2. Sparse SAR Imaging Method
2.3. Approximate Observation (AO)
| Algorithm 1 ISTA for sparse highly squinted SAR imaging |
|---|
| Input and initialization; iterate (gradient step on the data-fidelity term followed by soft-thresholding) while the stopping criteria hold; output the reconstructed scene. |
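Algorithm 1 follows the standard ISTA iteration (gradient step plus soft-thresholding). As a minimal, hedged sketch, the SAR approximate-observation operator is abstracted here as a plain matrix `A`; the step size is the usual reciprocal Lipschitz constant:

```python
import numpy as np

def soft_threshold(x, tau):
    """Complex soft-thresholding: shrink the magnitude, keep the phase."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * x, 0)

def ista(A, y, lam, n_iter=200, tol=1e-6):
    """Generic ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    `A` stands in for the SAR observation operator (illustrative only)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)    # gradient of the data-fidelity term
        x_new = soft_threshold(x - grad / L, lam / L)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1e-12):
            x = x_new
            break
        x = x_new
    return x
```

In the paper's setting the matrix product is replaced by the approximate-observation operator and its inverse, which is what makes the method fast for full-size SAR scenes.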
2.4. Barzilai–Borwein (BB) Method
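The BB method chooses a step size from two consecutive iterates and gradients. A hedged sketch of the standard two-point (BB1) formula, which is what this section builds on (the degenerate-case fallback value is illustrative):

```python
import numpy as np

def bb_step(x, x_prev, g, g_prev):
    """Barzilai–Borwein (BB1) step size: alpha = <s, s> / <s, y>,
    with s = x - x_prev and y = g - g_prev (standard two-point formula)."""
    s = x - x_prev
    y = g - g_prev
    denom = float(np.real(np.vdot(s, y)))
    if abs(denom) < 1e-15:   # degenerate case: fall back to a default step
        return 1.0
    return float(np.real(np.vdot(s, s))) / denom
```

For a quadratic objective the BB step approximates an inverse eigenvalue of the Hessian, which is why it often accelerates plain gradient descent markedly.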
3. Sparse Imaging Method of Highly Squinted SAR Based on Hyper-ISTA-GHD
3.1. Hyper-ISTA-AO and Hyper-ISTA-BB
3.1.1. Momentum Term
3.1.2. Soft–Hard (SH) Threshold
3.1.3. Adaptive HPO
3.2. Generalized Hypergradient Descent (GHD)
3.3. Overall Framework of Hyper-ISTA-GHD
| Algorithm 2 Hyper-ISTA-GHD |
|---|
| Input and initialization; iterate while the stopping criteria hold: update via Equation (21), Equation (22), Equation (23), Equation (31), and Equation (17), with convergence checks between updates; output the reconstructed scene. |
3.4. Complexity and Convergence
4. Simulation and Experiment
4.1. Experiment Settings
4.1.1. Simulation Scene
4.1.2. Evaluation Indicators
- Average Performance: The values for PSNR, NMSE, and time represent the average performance calculated over the entire test dataset (10 held-out images). This aggregation ensures that the reported metrics reflect the generalizability of the method rather than the result of a specific best-case scenario.
- Runtime Definition: The “Time (s)” column indicates the average wall-clock inference time per image.
  - For deep learning methods (including the proposed framework), this refers to the forward-pass duration for a single image on the GPU, excluding data-loading overhead.
  - For traditional algorithms (implemented in MATLAB), this refers to the execution time measured via standard timing functions (tic/toc) for reconstructing a single image on the CPU.
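The PSNR and NMSE indicators reported in the result tables follow their standard definitions; a minimal sketch under the assumption that the peak is taken from the reference image (the paper's exact normalization convention is not confirmed here):

```python
import numpy as np

def nmse(x_hat, x_ref):
    """Normalized mean squared error: ||x_hat - x_ref||^2 / ||x_ref||^2."""
    return np.sum(np.abs(x_hat - x_ref) ** 2) / np.sum(np.abs(x_ref) ** 2)

def psnr(x_hat, x_ref):
    """Peak signal-to-noise ratio in dB, peak taken from the reference image."""
    mse = np.mean(np.abs(x_hat - x_ref) ** 2)
    peak = np.max(np.abs(x_ref))
    return 10 * np.log10(peak ** 2 / mse)
```

Both metrics accept complex-valued SAR images, since the magnitudes are taken before squaring.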
4.2. Simulation Under Different Squint Angles
4.3. Comparison Simulation of HPO Methods
4.3.1. Variable SNR
4.3.2. Variable PRF
4.4. Analysis
- Sensitivity of the gradient-scaling coefficient: This coefficient acts as an empirical scaling factor that aligns the magnitude of the gradient-based estimate with the optimal regularization parameter derived from the L-curve. Our experiments show that the model maintains stable performance within a reasonable range around the chosen default value.
- Sensitivity of the acceleration factor: This coefficient controls the acceleration of convergence during the early stages of the unfolding network. Variations in it primarily affect the convergence speed rather than the final reconstruction accuracy.
- Insensitivity of the remaining coefficient: Extensive experiments indicate that the model is highly insensitive to variations in this coefficient over a wide range. Consequently, to reduce model complexity without compromising performance, we fixed its value in our final implementation.
5. Real Data Experiments
5.1. Data and Training Instructions
- Step 1: Zero-Padding. The original broadside SLC image is first zero-padded in the frequency domain. This step is crucial to prevent aliasing artifacts during the subsequent domain transformations.
- Step 2: Inverse Operator Application. We configure the inverse observation operator with the target squint angle. This operator is applied to the padded broadside image to transform it from the image domain back into the raw-echo domain.
- Step 3: Cropping. The resulting simulated echo data is then cropped to the original dimensions to emulate the collected raw data size.
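The three steps above can be sketched as a small pipeline. This is a hedged illustration: `inverse_op` is a user-supplied callable standing in for the paper's inverse observation operator, and the padding factor and centering choices are assumptions:

```python
import numpy as np

def simulate_squint_echo(slc, inverse_op, pad_factor=2):
    """Sketch of the three-step echo simulation: pad, invert, crop.
    `inverse_op` is a hypothetical callable implementing the inverse
    observation operator configured for the target squint angle."""
    na, nr = slc.shape
    # Step 1: zero-pad in the 2-D frequency domain to avoid aliasing
    spec = np.fft.fftshift(np.fft.fft2(slc))
    padded = np.zeros((na * pad_factor, nr * pad_factor), dtype=complex)
    a0 = (na * pad_factor - na) // 2
    r0 = (nr * pad_factor - nr) // 2
    padded[a0:a0 + na, r0:r0 + nr] = spec
    img = np.fft.ifft2(np.fft.ifftshift(padded))
    # Step 2: image domain -> raw-echo domain via the inverse operator
    echo = inverse_op(img)
    # Step 3: crop back to the original data size
    ea, er = echo.shape
    ca = (ea - na) // 2
    cr = (er - nr) // 2
    return echo[ca:ca + na, cr:cr + nr]
```

With an identity operator in place of the real inverse model, the pipeline simply round-trips the image, which is a convenient sanity check of the padding and cropping bookkeeping.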
- Dense Sampling: For each training sample, we perform a dense grid search over a wide range of candidate regularization parameters.
- Optimal Selection: We calculate the curvature of the L-curve at each sampled value; the value corresponding to the maximum-curvature point is identified as the optimal parameter.
- Reconstruction: Using this optimal parameter, we execute standard ISTA until convergence to obtain the high-quality sparse reconstruction result.
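The maximum-curvature selection above can be sketched as follows. Here `solve` is a hypothetical callable wrapping one ISTA run per candidate parameter, returning the residual norm and solution norm; the finite-difference curvature of the log-log L-curve is then maximized (the sign convention may need flipping depending on curve orientation):

```python
import numpy as np

def lcurve_select(lams, solve):
    """Pick the regularization parameter at the L-curve's maximum-curvature
    point. `solve(lam)` returns (residual_norm, solution_norm) for each
    candidate lam (hypothetical interface standing in for ISTA runs)."""
    lams = np.asarray(sorted(lams), dtype=float)
    vals = np.array([solve(l) for l in lams])      # (residual, solution) pairs
    rho = np.log(vals[:, 0])                       # log residual norms
    eta = np.log(vals[:, 1])                       # log solution norms
    t = np.log(lams)                               # parametrize by log(lam)
    # First and second derivatives by finite differences along t
    drho, deta = np.gradient(rho, t), np.gradient(eta, t)
    d2rho, d2eta = np.gradient(drho, t), np.gradient(deta, t)
    # Signed curvature of the parametric curve (rho(t), eta(t))
    kappa = (drho * d2eta - deta * d2rho) / (drho**2 + deta**2) ** 1.5
    return lams[np.argmax(kappa)]
```

On a synthetic L-curve with a sharp corner, the selector recovers the corner parameter, mirroring Hansen's classical criterion.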
- Data Split and Independence: The FAIR-CSAR-V1.0 dataset used in our experiments consists of 160 samples. We adopted a split of 150 samples for training and 10 samples for testing. We explicitly confirm that the testing samples were strictly held out and unseen during the training phase; no hyperparameter tuning was performed on the test scenes. Given the model-driven nature of our network, which embeds physical priors, this dataset size is sufficient for robust convergence.
- Model Complexity: A key advantage of the proposed deep unfolding framework is its extremely lightweight design. The network contains only 30 trainable parameters in total (3 parameters per stage × 10 stages). This low complexity significantly reduces the risk of overfitting compared with large-scale black-box CNNs, thereby lessening the dependency on massive training datasets or extensive data augmentation.
- Optimizer and Training Strategy: The network was trained using the Adam optimizer with a learning rate of and default beta parameters (). To ensure optimal convergence and prevent overfitting, we implemented an early stopping mechanism monitoring the Normalized Mean Squared Error (NMSE) on the validation set, with a patience of 20 epochs.
- Initialization: Unlike conventional deep learning models that require random weight initialization (necessitating specific random seeds for reproducibility), our method utilizes deterministic initialization based on the physical interpretation of the ISTA algorithm. For instance, the step size parameters are initialized using the theoretical Lipschitz constant. This deterministic approach ensures consistent starting points without the need for random seeds.
- Ocean Ship Scenario (High Sparsity Limit): This dataset represents the upper bound of sparsity, featuring isolated strong targets against a vast, empty background.
- Complex Land Scenario (Low Sparsity Limit): This dataset represents the lower bound, characterized by dense structures and rich texture information.
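The early-stopping strategy described in the training settings above can be sketched framework-agnostically. Here `step` and `validate` are hypothetical callables (one training epoch, and the validation NMSE, respectively); the patience value matches the text:

```python
def train_with_early_stopping(step, validate, max_epochs=500, patience=20):
    """Generic early-stopping loop: `step()` runs one training epoch and
    `validate()` returns the validation NMSE (hypothetical callables).
    Stops after `patience` epochs without improvement."""
    best, best_epoch, wait = float("inf"), -1, 0
    for epoch in range(max_epochs):
        step()
        nmse = validate()
        if nmse < best:
            best, best_epoch, wait = nmse, epoch, 0
        else:
            wait += 1
            if wait >= patience:   # no improvement for `patience` epochs
                break
    return best, best_epoch
```

Because the unfolded network has only 30 parameters and deterministic initialization, this simple criterion is typically enough to halt training before any overfitting sets in.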
5.2. Results
6. Conclusions
7. Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Parameter | Symbol | Value |
|---|---|---|
| Carrier frequency | | 9.8 GHz |
| Signal bandwidth | | 150 MHz |
| Sampling rate | | 180 MHz |
| Antenna size in azimuth | | 2 m |
| Squint angle | | [20–50]° |
| Platform height | H | 500 km |
| Equivalent speed | | 7340 m/s |
| Pulse repetition frequency | PRF | [25–100%] * |
| Range resolution ** | | 1 m |
| Azimuth resolution ** | | ≃1 m |
| Method | IRW (m) | PSLR (dB) | ISLR (dB) |
|---|---|---|---|
| ENLCS | 1.290 | −13.263 | −10.248 |
| SR-ENLCS | 0.712 | | |
| ENLCS | 1.290 | −13.256 | −10.223 |
| SR-ENLCS | 0.712 | | |
| ENLCS | 1.462 | −13.237 | −10.154 |
| SR-ENLCS | 0.806 | | |
| ENLCS | 1.720 | −13.040 | −9.955 |
| SR-ENLCS | 0.949 | | |
| SNR | Method | λ | PSNR (dB) | NMSE | Time (s) |
|---|---|---|---|---|---|
| 30 dB | Matched filter | - | 31.529 | 13.125 | 0.023 |
| | Too small | 0.011 | 35.056 | 5.211 | 2.118 |
| | Too large | 0.174 | 47.220 | 0.235 | 2.336 |
| | L-curve | 0.044 | 51.467 | 0.125 | 14.11 |
| | Hyper-ISTA-AO | 0.042 | 51.249 | 0.133 | 2.801 |
| | Hyper-ISTA-BB | 0.060 | 53.439 | 0.074 | 3.762 |
| | Hyper-ISTA-GHD | 0.060 | 54.253 | 0.066 | 2.956 |
| 25 dB | Matched filter | - | 24.701 | 54.040 | 0.020 |
| | Too small | 0.045 | 33.123 | 7.219 | 2.187 |
| | Too large | 0.179 | 47.108 | 0.224 | 2.147 |
| | L-curve | 0.090 | 45.531 | 0.447 | 26.71 |
| | Hyper-ISTA-AO | 0.088 | 46.083 | 0.439 | 2.814 |
| | Hyper-ISTA-BB | 0.107 | 49.508 | 0.191 | 3.171 |
| | Hyper-ISTA-GHD | 0.109 | 49.767 | 0.193 | 2.941 |
| 20 dB | Matched filter | - | 19.576 | 178.294 | 0.019 |
| | Too small | 0.106 | 29.243 | 12.312 | 2.181 |
| | Too large | 0.238 | 44.324 | 0.396 | 2.231 |
| | L-curve | 0.159 | 39.027 | 1.837 | 24.45 |
| | Hyper-ISTA-AO | 0.159 | 39.452 | 1.824 | 2.912 |
| | Hyper-ISTA-BB | 0.191 | 44.732 | 0.397 | 3.223 |
| | Hyper-ISTA-GHD | 0.192 | 47.471 | 0.371 | 2.902 |
| PRF/Ba | Method | λ | PSNR (dB) | NMSE | Time (s) |
|---|---|---|---|---|---|
| 75% | Matched filter | - | 28.875 | 13.316 | 0.023 |
| | Too small | 0.010 | 31.971 | 6.540 | 2.165 |
| | Too large | 0.153 | 42.446 | 0.368 | 2.247 |
| | L-curve | 0.038 | 43.680 | 0.393 | 23.01 |
| | Hyper-ISTA-AO | 0.037 | 43.511 | 0.421 | 3.027 |
| | Hyper-ISTA-BB | 0.052 | 48.655 | 0.135 | 3.301 |
| | Hyper-ISTA-GHD | 0.052 | 48.861 | 0.135 | 2.835 |
| 50% | Matched filter | - | 26.770 | 9.099 | 0.021 |
| | Too small | 0.008 | 30.117 | 4.670 | 2.152 |
| | Too large | 0.120 | 36.954 | 0.557 | 2.164 |
| | L-curve | 0.030 | 38.212 | 0.563 | 13.38 |
| | Hyper-ISTA-AO | 0.030 | 39.138 | 0.518 | 2.899 |
| | Hyper-ISTA-BB | 0.042 | 40.869 | 0.338 | 3.275 |
| | Hyper-ISTA-GHD | 0.043 | 41.582 | 0.334 | 2.828 |
| 25% | Matched filter | - | 23.638 | 4.827 | 0.020 |
| | Too small | 0.011 | 28.735 | 1.370 | 2.153 |
| | Too large | 0.044 | 31.054 | 0.716 | 2.170 |
| | L-curve | 0.022 | 30.894 | 0.714 | 12.95 |
| | Hyper-ISTA-AO | 0.022 | 31.147 | 0.692 | 2.886 |
| | Hyper-ISTA-BB | 0.031 | 31.767 | 0.635 | 3.291 |
| | Hyper-ISTA-GHD | 0.030 | 32.481 | 0.653 | 2.863 |
| Scene | Method | PSNR (dB) | NMSE | Time (s) |
|---|---|---|---|---|
| Ocean | Matched filter | 16.005 | 1.851 | - |
| | Ocean—LISTA | 21.635 | 0.506 | 3.370 |
| | Land—LISTA | 16.819 | 1.535 | 3.322 |
| | Mixed—LISTA | 17.805 | 1.223 | 3.386 |
| | Hyper-ISTA-GHD | 70.322 | 6.89 | 9.937 |
| Land | Matched filter | 24.512 | 0.053 | - |
| | Ocean—LISTA | 14.216 | 0.563 | 3.582 |
| | Land—LISTA | 26.591 | 0.0326 | 3.618 |
| | Mixed—LISTA | 29.150 | 0.018 | 3.321 |
| | Hyper-ISTA-GHD | 77.357 | 2.73 | 10.084 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Chen, T.; Ding, B.; Gao, H.; Liu, L.; Zhang, B.; Wu, Y. Hyper-ISTA-GHD: An Adaptive Hyperparameter Selection Framework for Highly Squinted Mode Sparse SAR Imaging. Remote Sens. 2026, 18, 369. https://doi.org/10.3390/rs18020369