Search Results (150)

Search Parameters: Keywords = low characteristic scenes

18 pages, 11340 KiB  
Article
CLSANet: Cognitive Learning-Based Self-Adaptive Feature Fusion for Multimodal Visual Object Detection
by Han Peng, Qionglin Liu, Riqing Ruan, Shuaiqi Yuan and Qin Li
Electronics 2025, 14(15), 3082; https://doi.org/10.3390/electronics14153082 - 1 Aug 2025
Viewed by 201
Abstract
Multimodal object detection leverages the complementary characteristics of visible (RGB) and infrared (IR) imagery, making it well-suited for challenging scenarios such as low illumination, occlusion, and complex backgrounds. However, most existing fusion-based methods rely on static or heuristic strategies, limiting their adaptability to dynamic environments. To address this limitation, we propose CLSANet, a cognitive learning-based self-adaptive network that enhances detection performance by dynamically selecting and integrating modality-specific features. CLSANet consists of three key modules: (1) a Dominant Modality Identification Module that selects the most informative modality based on global scene analysis; (2) a Modality Enhancement Module that disentangles and strengthens shared and modality-specific representations; and (3) a Self-Adaptive Fusion Module that adjusts fusion weights spatially according to local scene complexity. Compared to existing methods, CLSANet achieves state-of-the-art detection performance with significantly fewer parameters and lower computational cost. Ablation studies further demonstrate the individual effectiveness of each module under different environmental conditions, particularly in low-light and occluded scenes. CLSANet offers a compact, interpretable, and practical solution for multimodal object detection in resource-constrained settings.
(This article belongs to the Special Issue Digital Intelligence Technology and Applications)
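As an editorial illustration of the self-adaptive fusion idea, the sketch below blends RGB and IR feature maps with a per-pixel weight map predicted by a small gating head. The module name, shapes, and gating design are assumptions for illustration, not CLSANet's implementation.

```python
import torch
import torch.nn as nn

class SelfAdaptiveFusion(nn.Module):
    """Illustrative per-pixel gated fusion of RGB and IR feature maps.

    A small convolutional gating head predicts a spatial weight map w in
    [0, 1]; the fused feature is w * rgb + (1 - w) * ir. This mirrors the
    idea of adjusting fusion weights by local scene content, not CLSANet.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel weight in [0, 1]
        )

    def forward(self, feat_rgb: torch.Tensor, feat_ir: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([feat_rgb, feat_ir], dim=1))  # (B, 1, H, W)
        return w * feat_rgb + (1.0 - w) * feat_ir

# Toy usage: fuse 64-channel feature maps from the two modalities.
fusion = SelfAdaptiveFusion(channels=64)
rgb = torch.randn(2, 64, 80, 80)
ir = torch.randn(2, 64, 80, 80)
print(fusion(rgb, ir).shape)  # torch.Size([2, 64, 80, 80])
```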

23 pages, 10392 KiB  
Article
Dual-Branch Luminance–Chrominance Attention Network for Hydraulic Concrete Image Enhancement
by Zhangjun Peng, Li Li, Chuanhao Chang, Rong Tang, Guoqiang Zheng, Mingfei Wan, Juanping Jiang, Shuai Zhou, Zhenggang Tian and Zhigui Liu
Appl. Sci. 2025, 15(14), 7762; https://doi.org/10.3390/app15147762 - 10 Jul 2025
Viewed by 257
Abstract
Hydraulic concrete is a critical infrastructure material, with its surface condition playing a vital role in quality assessments for water conservancy and hydropower projects. However, images taken in complex hydraulic environments often suffer from degraded quality due to low lighting, shadows, and noise, making it difficult to distinguish defects from the background and thereby hindering accurate defect detection and damage evaluation. In this study, following systematic analyses of hydraulic concrete color space characteristics, we propose a Dual-Branch Luminance–Chrominance Attention Network (DBLCANet-HCIE) specifically designed for low-light hydraulic concrete image enhancement. Inspired by human visual perception, the network simultaneously improves global contrast and preserves fine-grained defect textures, which are essential for structural analysis. The proposed architecture consists of a Luminance Adjustment Branch (LAB) and a Chroma Restoration Branch (CRB). The LAB incorporates a Luminance-Aware Hybrid Attention Block (LAHAB) to capture both the global luminance distribution and local texture details, enabling adaptive illumination correction through comprehensive scene understanding. The CRB integrates a Channel Denoiser Block (CDB) for channel-specific noise suppression and a Frequency-Domain Detail Enhancement Block (FDDEB) to refine chrominance information and enhance subtle defect textures. A feature fusion block is designed to fuse and learn the features of the outputs from the two branches, resulting in images with enhanced luminance, reduced noise, and preserved surface anomalies. To validate the proposed approach, we construct a dedicated low-light hydraulic concrete image dataset (LLHCID). Extensive experiments conducted on both LOLv1 and LLHCID benchmarks demonstrate that the proposed method significantly enhances the visual interpretability of hydraulic concrete surfaces while effectively addressing low-light degradation challenges.
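The dual-branch design hinges on processing luminance and chrominance separately. Below is a minimal sketch of that split, using the standard BT.601 RGB-to-YCbCr transform rather than the paper's learned branches:

```python
import numpy as np

def rgb_to_ycbcr(img: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an RGB image (H, W, 3, float in [0, 1]) into luminance and
    chrominance planes via the BT.601 transform. In a dual-branch enhancer,
    Y would feed the luminance branch and (Cb, Cr) the chroma branch,
    with the two outputs fused afterwards."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, np.stack([cb, cr], axis=-1)

# Toy usage: a simple global luminance lift stands in for the learned branch.
img = np.random.rand(64, 64, 3)
y, cbcr = rgb_to_ycbcr(img)
y_enhanced = np.power(y, 0.6)  # stand-in for the luminance branch
print(y_enhanced.shape, cbcr.shape)  # (64, 64) (64, 64, 2)
```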

24 pages, 4442 KiB  
Article
Time-Series Correlation Optimization for Forest Fire Tracking
by Dongmei Yang, Guohao Nie, Xiaoyuan Xu, Debin Zhang and Xingmei Wang
Forests 2025, 16(7), 1101; https://doi.org/10.3390/f16071101 - 3 Jul 2025
Viewed by 308
Abstract
Accurate real-time tracking of forest fires using UAV platforms is crucial for timely early warning, reliable spread prediction, and effective autonomous suppression. Existing detection-based multi-object tracking methods face challenges in accurately associating targets and maintaining smooth tracking trajectories in complex forest environments. These difficulties stem from the highly nonlinear movement of flames relative to the observing UAV and the lack of robust fire-specific feature modeling. To address these challenges, we introduce AO-OCSORT, an association-optimized observation-centric tracking framework designed to enhance robustness in dynamic fire scenarios. AO-OCSORT builds on the YOLOX detector. To associate detection results across frames and form smooth trajectories, we propose a temporal–physical similarity metric that utilizes temporal information from the short-term motion of targets and incorporates physical flame characteristics derived from optical flow and contours. Subsequently, scene classification and low-score filtering are employed to develop a hierarchical association strategy, reducing the impact of false detections and interfering objects. Additionally, a virtual trajectory generation module is proposed, employing a kinematic model to maintain trajectory continuity during flame occlusion. Evaluated locally on the 1080P-resolution FireMOT UAV wildfire dataset, AO-OCSORT achieves a 5.4% improvement in MOTA over advanced baselines at 28.1 FPS, meeting real-time requirements. This improvement enhances the reliability of fire-front localization, which is crucial for forest fire management. Furthermore, AO-OCSORT demonstrates strong generalization, achieving 41.4% MOTA on VisDrone, 80.9% on MOT17, and 92.2% on DanceTrack.
(This article belongs to the Special Issue Advanced Technologies for Forest Fire Detection and Monitoring)
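To make the association step concrete: a cost matrix can combine geometric overlap with a physical-feature similarity and be solved with the Hungarian algorithm, as in the hedged sketch below. The cosine similarity and the weight alpha are stand-ins, not the paper's temporal–physical metric.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, track_feats, det_feats, alpha=0.7):
    """Blend geometric overlap with a physical-feature similarity
    (e.g., flame contour/flow descriptors) into one cost matrix, then
    solve the assignment with the Hungarian algorithm."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, (t_box, t_f) in enumerate(zip(tracks, track_feats)):
        for j, (d_box, d_f) in enumerate(zip(detections, det_feats)):
            geo = iou(t_box, d_box)
            phys = np.dot(t_f, d_f) / (np.linalg.norm(t_f) * np.linalg.norm(d_f) + 1e-9)
            cost[i, j] = -(alpha * geo + (1 - alpha) * phys)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

# Toy usage with two tracks and two detections.
tracks = [np.array([0, 0, 10, 10.0]), np.array([20, 20, 30, 30.0])]
dets = [np.array([21, 19, 31, 29.0]), np.array([1, 1, 11, 11.0])]
feats_t = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
feats_d = [np.array([0.1, 0.9]), np.array([0.9, 0.1])]
print(associate(tracks, dets, feats_t, feats_d))  # [(0, 1), (1, 0)]
```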

24 pages, 28521 KiB  
Article
Four-Channel Emitting Laser Fuze Structure Based on 3D Particle Hybrid Collision Scattering Under Smoke Characteristic Variation
by Zhe Guo, Bing Yang and Zhonghua Huang
Appl. Sci. 2025, 15(13), 7292; https://doi.org/10.3390/app15137292 - 28 Jun 2025
Viewed by 236
Abstract
Our work presents a laser fuze detector structure with a four-channel, center-symmetrical emitting laser that accounts for the three-dimensional (3D) spatial properties of smoke clouds, designed to improve the laser fuze's anti-smoke interference ability as well as its target detection performance. A laser echo signal model under multiple frequency-modulated continuous-wave (FMCW) lasers was constructed by investigating the hybrid collision scattering process of photons and smoke particles. Using a virtual particle system implemented in Unity3D, the laser target characteristics were studied under conditions of varying smoke particle characteristics. The simulation results showed that false alarms in low-visibility smoke scenes and missed alarms in high-visibility smoke scenes could be effectively avoided with four emitting lasers. With this laser fuze prototype structure, the smoke echo signal and the target echo signal could be separated, and the average amplitude growth rate of the target echo signal was improved. These conclusions are supported by experimental results. This study therefore not only reveals laser target properties under the 3D spatial characteristics of smoke particles, but also provides design guidance for optimizing FMCW laser fuze multi-channel emission structures in combination with particle collision types and target characteristics.

24 pages, 6594 KiB  
Article
GAT-Enhanced YOLOv8_L with Dilated Encoder for Multi-Scale Space Object Detection
by Haifeng Zhang, Han Ai, Donglin Xue, Zeyu He, Haoran Zhu, Delian Liu, Jianzhong Cao and Chao Mei
Remote Sens. 2025, 17(13), 2119; https://doi.org/10.3390/rs17132119 - 20 Jun 2025
Viewed by 476
Abstract
The problem of inadequate object detection accuracy in complex remote sensing scenarios has been identified as a primary concern. Traditional YOLO-series algorithms encounter challenges such as poor robustness in small object detection and significant interference from complex backgrounds. In this paper, a multi-scale feature fusion framework based on an improved version of YOLOv8_L is proposed. The combination of a graph attention network (GAT) and a Dilated Encoder network significantly improves detection and recognition performance for space remote sensing objects. The framework discards the original Feature Pyramid Network (FPN) structure and reconstructs it around an adaptive fusion strategy over multi-level backbone features, enhancing the representation of multi-scale objects through upsampling and feature stacking. The local features extracted by convolutional neural networks are mapped to graph-structured data, and the node attention mechanism of the GAT is used to capture the global topological associations of space objects, compensating for the convolution operation's limited ability to allocate weights globally. The Dilated Encoder network is introduced to cover targets of different scales through differentiated receptive fields, and feature weight allocation is optimized by combining it with a Convolutional Block Attention Module (CBAM). Reflecting the characteristics of space missions, an annotated dataset containing 8000 satellite and space station images is constructed, covering a variety of lighting, attitude, and scale scenes and providing benchmark support for model training and verification. Experimental results on the space object dataset reveal that the enhanced algorithm achieves a mean average precision (mAP) of 97.2%, a 2.1% improvement over the original YOLOv8_L. Comparative experiments with six other models demonstrate that the proposed algorithm outperforms its counterparts. Ablation studies further validate the synergistic effect between the GAT and the Dilated Encoder. The results indicate that the model maintains high detection accuracy under challenging conditions, including strong light interference, multi-scale variations, and low-light environments.
(This article belongs to the Special Issue Remote Sensing Image Thorough Analysis by Advanced Machine Learning)
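The feature-map-to-graph mapping can be illustrated with a minimal single-head graph attention layer over flattened CNN cells. The layer below follows the generic GAT formulation (Velickovic et al., 2018); the adjacency and dimensions are arbitrary assumptions, not the paper's integration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniGATLayer(nn.Module):
    """A single-head graph attention layer applied to CNN feature-map
    cells treated as graph nodes: illustrative of re-weighting local CNN
    features by global topological attention, not the paper's design."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency.
        h = self.w(x)                                        # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1), 0.2)  # (N, N) scores
        e = e.masked_fill(adj == 0, float('-inf'))           # neighbors only
        return torch.softmax(e, dim=-1) @ h

# Toy usage: an 8x8 feature map becomes 64 nodes, fully connected here.
feat_map = torch.randn(64, 256)   # flattened (H*W, C) features
adj = torch.ones(64, 64)          # stand-in global connectivity
print(MiniGATLayer(256, 128)(feat_map, adj).shape)  # torch.Size([64, 128])
```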

20 pages, 14779 KiB  
Article
Automation of Multi-Class Microscopy Image Classification Based on the Microorganisms Taxonomic Features Extraction
by Aleksei Samarin, Alexander Savelev, Aleksei Toropov, Aleksandra Dozortseva, Egor Kotenko, Artem Nazarenko, Alexander Motyko, Galiya Narova, Elena Mikhailova and Valentin Malykh
J. Imaging 2025, 11(6), 201; https://doi.org/10.3390/jimaging11060201 - 18 Jun 2025
Viewed by 572
Abstract
This study presents a unified low-parameter approach to multi-class classification of microorganisms (micrococci, diplococci, streptococci, and bacilli) based on automated machine learning. The method is designed to produce interpretable taxonomic descriptors through analysis of the external geometric characteristics of microorganisms, including cell shape, colony organization, and dynamic behavior in unfixed microscopic scenes. A key advantage of the proposed approach is its lightweight nature: the resulting models have significantly fewer parameters than deep learning-based alternatives, enabling fast inference even on standard CPU hardware. An annotated dataset containing images of four bacterial types obtained under conditions simulating real clinical trials has been developed and published to validate the method. The results (Precision = 0.910, Recall = 0.901, and F1-score = 0.905) confirm the effectiveness of the proposed method for biomedical diagnostic tasks, especially in settings with limited computational resources and a need for feature interpretability. Our approach demonstrates performance comparable to state-of-the-art methods while offering superior efficiency and lightweight design due to its significantly reduced number of parameters.
(This article belongs to the Section Medical Imaging)
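A hedged sketch of the interpretable-descriptor idea: geometric shape features per segmented cell feed a small classifier. The specific descriptors and the random-forest model are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def shape_descriptors(binary_mask: np.ndarray) -> np.ndarray:
    """Extract interpretable geometric features per segmented cell:
    area, eccentricity, solidity, and axis ratio. These stand in for
    taxonomic descriptors such as cell shape and colony organization."""
    feats = []
    for region in regionprops(label(binary_mask)):
        axis_ratio = region.minor_axis_length / (region.major_axis_length + 1e-9)
        feats.append([region.area, region.eccentricity, region.solidity, axis_ratio])
    return np.asarray(feats)

# Toy usage: two synthetic blobs, then a low-parameter classifier.
mask = np.zeros((64, 64), dtype=bool)
mask[5:15, 5:15] = True          # roughly round "coccus"
mask[30:34, 10:40] = True        # elongated "bacillus"
X = shape_descriptors(mask)
y = np.array([0, 1])             # class labels for the toy blobs
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
print(clf.predict(X))            # [0 1] expected on this toy data
```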

21 pages, 10091 KiB  
Article
Scalable Hyperspectral Enhancement via Patch-Wise Sparse Residual Learning: Insights from Super-Resolved EnMAP Data
by Parth Naik, Rupsa Chakraborty, Sam Thiele and Richard Gloaguen
Remote Sens. 2025, 17(11), 1878; https://doi.org/10.3390/rs17111878 - 28 May 2025
Viewed by 725
Abstract
A majority of hyperspectral super-resolution methods aim to enhance the spatial resolution of hyperspectral imaging data (HSI) by integrating high-resolution multispectral imaging data (MSI), leveraging rich spectral information for various geospatial applications. Key challenges include spectral distortions from high-frequency spatial data, high computational complexity, and limited training data, particularly for new-generation sensors with unique noise patterns. In this contribution, we propose a novel parallel patch-wise sparse residual learning (P2SR) algorithm for resolution enhancement based on fusion of HSI and MSI. The proposed method uses multi-decomposition techniques (i.e., Independent component analysis, Non-negative matrix factorization, and 3D wavelet transforms) to extract spatial and spectral features to form a sparse dictionary. The spectral and spatial characteristics of the scene encoded in the dictionary enable reconstruction through a first-order optimization algorithm to ensure an efficient sparse representation. The final spatially enhanced HSI is reconstructed by combining the learned features from low-resolution HSI and applying an MSI-regulated guided filter to enhance spatial fidelity while minimizing artifacts. P2SR is deployable on a high-performance computing (HPC) system with parallel processing, ensuring scalability and computational efficiency for large HSI datasets. Extensive evaluations on three diverse study sites demonstrate that P2SR consistently outperforms traditional and state-of-the-art (SOA) methods in both quantitative metrics and qualitative spatial assessments. Specifically, P2SR achieved the best average PSNR (25.2100) and SAM (12.4542) scores, indicating superior spatio-spectral reconstruction contributing to sharper spatial features, reduced mixed pixels, and enhanced geological features. P2SR also achieved the best average ERGAS (8.9295) and Q2n (0.5156), which suggests better overall fidelity across all bands and perceptual accuracy with the least spectral distortions. Importantly, we show that P2SR preserves critical spectral signatures, such as Fe2+ absorption, and improves the detection of fine-scale environmental and geological structures. P2SR's ability to maintain spectral fidelity while enhancing spatial detail makes it a powerful tool for high-precision remote sensing applications, including mineral mapping, land-use analysis, and environmental monitoring.
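One building block of the method, patch-wise sparse coding against a learned dictionary, can be sketched as follows. NMF stands in for the paper's multi-decomposition dictionary (ICA, NMF, 3D wavelets), and the lasso encoder for its first-order optimizer; both are illustrative substitutions.

```python
import numpy as np
from sklearn.decomposition import NMF, sparse_encode

# Illustrative patch-wise sparse coding against a learned dictionary.
# A low-resolution HSI is cut into patches, an NMF basis acts as a
# simplified dictionary, and each patch is reconstructed from a sparse
# code -- one building block of the fusion idea, not the full P2SR pipeline.
rng = np.random.default_rng(0)
patches = np.abs(rng.normal(size=(200, 64)))   # 200 patches, 64 values each

# Learn a small nonnegative dictionary from the patches.
nmf = NMF(n_components=16, init='nndsvda', max_iter=500, random_state=0)
nmf.fit(patches)
dictionary = nmf.components_                   # (16, 64) atoms

# Re-encode patches sparsely (lasso) against the same dictionary.
sparse_codes = sparse_encode(patches, dictionary, algorithm='lasso_lars', alpha=0.1)
recon = sparse_codes @ dictionary
err = np.linalg.norm(patches - recon) / np.linalg.norm(patches)
print(f"relative reconstruction error: {err:.3f}")
print(f"mean nonzeros per patch: {np.mean(np.count_nonzero(sparse_codes, axis=1)):.1f}")
```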

20 pages, 7529 KiB  
Article
A Fast and Efficient Denoising and Surface Reflectance Retrieval Method for ZY1-02D Hyperspectral Data
by Qiongqiong Lan, Yaqing He, Qijin Han, Yongguang Zhao, Wan Li, Lu Xu and Dongping Ming
Remote Sens. 2025, 17(11), 1844; https://doi.org/10.3390/rs17111844 - 25 May 2025
Viewed by 465
Abstract
Hyperspectral remote sensing is crucial due to its continuous spectral information, especially in the quantitative remote sensing (QRS) field. Surface reflectance (SR), a fundamental product in QRS, plays a pivotal role in application accuracy and serves as a key indicator of sensor performance. However, the distinctive spectral characteristics of a hyperspectral image (HSI) make it particularly susceptible to noise during imaging, which inevitably degrades data quality and reduces SR accuracy. Moreover, the validation of hyperspectral SR faces challenges due to the scarcity of reliable validation data. To address these issues, aiming at fast and efficient processing of Chinese domestic ZY1-02D hyperspectral level-1 data, this study proposes a comprehensive processing framework: (1) To address the low efficiency of traditional bad line detection by visual examination, an automatic bad line detection method based on a pixel grayscale gradient threshold algorithm is proposed; (2) A spectral correlation-based interpolation method is developed to overcome the poor performance of adjacent-column averaging in repairing wide bad lines; (3) A reliable validation method is established, based on the spectral band adjustment factors method, to compare hyperspectral SR with multispectral SR and in-situ ground measurements. The results and analysis demonstrate that the proposed method improves the accuracy of ZY1-02D SR while maintaining high processing efficiency, requiring only 5 min per scene of ZY1-02D HSI. This study provides a technical foundation for the application of ZY1-02D HSIs and offers valuable insights for the development and enhancement of next-generation hyperspectral sensors.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
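The gradient-threshold idea behind step (1) can be sketched directly: flag detector columns whose mean cross-track gradient is a statistical outlier. The robust MAD threshold and the factor k below are assumptions, not the paper's exact statistic.

```python
import numpy as np

def detect_bad_columns(band: np.ndarray, k: float = 6.0) -> np.ndarray:
    """Flag columns whose mean cross-track gradient is an outlier.

    A dead or striped column produces a large jump against both
    neighbors, so we threshold the column-mean absolute gradient at
    k robust sigmas (median + k * 1.4826 * MAD). Illustrates the
    gradient-threshold idea only, not the ZY1-02D pipeline."""
    grad = np.abs(np.diff(band.astype(np.float64), axis=1))  # (rows, cols-1)
    col_score = grad.mean(axis=0)
    med = np.median(col_score)
    mad = np.median(np.abs(col_score - med)) + 1e-9
    bad = np.where(col_score > med + k * 1.4826 * mad)[0]
    # Map flagged gradient indices to the adjacent column candidates.
    return np.unique(np.concatenate([bad, bad + 1]))

# Toy usage: inject a dead column into a smooth band and recover it.
band = np.tile(np.linspace(100, 200, 256), (128, 1))
band[:, 77] = 0.0
print(detect_bad_columns(band))  # [76 77 78] -- candidates around column 77
```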

27 pages, 10202 KiB  
Article
WIGformer: Wavelet-Based Illumination-Guided Transformer
by Wensheng Cao, Tianyu Yan, Zhile Li and Jiongyao Ye
Symmetry 2025, 17(5), 798; https://doi.org/10.3390/sym17050798 - 20 May 2025
Viewed by 435
Abstract
Low-light image enhancement remains a challenging task in computer vision due to the complex interplay of noise, asymmetrical artifacts, illumination non-uniformity, and detail preservation. Existing methods such as traditional histogram equalization, gamma correction, and Retinex-based approaches often struggle to balance contrast improvement and naturalness preservation. Deep learning methods such as CNNs and transformers have shown promise, but face limitations in modeling multi-scale illumination and long-range dependencies. To address these issues, we propose WIGformer, a novel wavelet-based illumination-guided transformer framework for low-light image enhancement. The proposed method extends the single-stage Retinex theory to explicitly model noise in both reflectance and illumination components. It introduces a wavelet illumination estimator with a Wavelet Feature Enhancement Convolution (WFEConv) module to capture multi-scale illumination features and an illumination feature-guided corruption restorer with an Illumination-Guided Enhanced Multihead Self-Attention (IGEMSA) mechanism. WIGformer leverages the symmetry properties of wavelet transforms to achieve multi-scale illumination estimation, ensuring balanced feature extraction across different frequency bands. The IGEMSA mechanism integrates adaptive feature refinement and illumination guidance to suppress noise and artifacts while preserving fine details. The same mechanism allows us to further exploit symmetrical dependencies between illumination and reflectance components, enabling robust and natural enhancement of low-light images. Extensive experiments on the LOL-V1, LOL-V2-Real, and LOL-V2-Synthetic datasets demonstrate that WIGformer achieves state-of-the-art performance and outperforms existing methods, with PSNR improvements of up to 26.12 dB and an SSIM score of 0.935. The qualitative results demonstrate WIGformer's superior capability to not only restore natural illumination but also maintain structural symmetry in challenging conditions, preserving balanced luminance distributions and geometric regularities that are characteristic of properly exposed natural scenes.
(This article belongs to the Section Computer)
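A minimal sketch of illumination-guided attention: a per-token illumination estimate modulates the value path of a single-head self-attention. The shapes and the gating point are assumptions for illustration, not the IGEMSA design.

```python
import torch
import torch.nn as nn

class IlluminationGuidedAttention(nn.Module):
    """Toy single-head self-attention whose values are modulated by an
    illumination map, echoing the idea of letting estimated illumination
    guide restoration. Not the paper's IGEMSA mechanism."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, tokens: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, C); illum: (B, N, 1) per-token illumination estimate.
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        v = v * illum                                   # illumination-guided values
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)

# Toy usage on an 8x8 patch grid with 32-dim tokens.
tokens = torch.randn(1, 64, 32)
illum = torch.sigmoid(torch.randn(1, 64, 1))  # stand-in illumination features
print(IlluminationGuidedAttention(32)(tokens, illum).shape)  # torch.Size([1, 64, 32])
```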

24 pages, 4587 KiB  
Article
Structured Bayesian Super-Resolution Forward-Looking Imaging for Maneuvering Platforms Based on Enhanced Sparsity Model
by Yiheng Guo, Yujie Liang, Yi Liang and Xiangwei Sun
Remote Sens. 2025, 17(5), 775; https://doi.org/10.3390/rs17050775 - 23 Feb 2025
Cited by 1 | Viewed by 521
Abstract
Sparse reconstruction-based imaging techniques can be utilized to solve forward-looking imaging problems with limited azimuth resolution. However, these methods perform well only under the traditional model for platforms with low speed, and their performance deteriorates for maneuvering trajectories. In this paper, a structured Bayesian super-resolution forward-looking imaging algorithm for maneuvering platforms under an enhanced sparsity model is proposed. An enhanced sparsity model for maneuvering platforms is established to address the reconstruction problem, and a hierarchical Student-t (ST) prior is designed to model the distribution characteristics of the sparse imaging scene. To further leverage prior information about the structural characteristics of the scatterers, coupled patterns among neighboring pixels are incorporated to construct a structured sparse prior. Finally, the forward-looking imaging parameters are estimated using expectation/maximization-based variational Bayesian inference. Numerical simulations validate the effectiveness of the proposed algorithm and its superiority over conventional methods based on pixel-wise sparsity assumptions in forward-looking scenes for maneuvering platforms.

25 pages, 12602 KiB  
Article
Concept, Framework, and Data Model for Geographical Soundscapes
by Xiu Lu, Guannan Li, Xiaoqing Song, Liangchen Zhou and Guonian Lv
ISPRS Int. J. Geo-Inf. 2025, 14(1), 36; https://doi.org/10.3390/ijgi14010036 - 18 Jan 2025
Viewed by 1014
Abstract
Existing concepts and frameworks of soundscapes focus on the analysis and description of the sound source but do not explore geographical environment parameters or receiver characteristics within the geographical scene. Existing soundscape data models likewise ignore geographical environment and receiver information, which limits the comprehensive understanding and expression of soundscapes: they can neither relate the elements associated with a sound source to one another nor explore the interaction mechanism between sound and the geographical environment. From a geographical perspective, this study extends the soundscape to the geographical soundscape (geo-soundscape), defines the geo-soundscape through cognition of the geographical scene, expresses its conceptual framework through a hierarchical content structure, and expands it with receiver characteristics, geographical environment parameters, derived geographical scene elements, and the dimensions along which scene elements are described. Based on the MPEG-7 data model, this study develops a geographical MPEG-7 (Geo-MPEG-7) data model consisting of low-, medium-, and high-level feature classes. Using soundscape data collected in a real geographical environment on a university road in Nanjing, Jiangsu Province, the proposed concept, framework, and data model architecture are demonstrated and described to validate the model's completeness and feasibility. The results show that the basic geo-soundscape framework is well adapted to the Geo-MPEG-7 data model, which can store, organize, and describe the full soundscape information, including all elements and inter-element relationships, so that the soundscape in the real environment is fully expressed and described. This study provides a new research direction for soundscapes from a geographical perspective and offers guidance for urban planning and landscape design.
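A hedged sketch of what such a data model adds beyond the sound source: explicit receiver and environment records alongside the sources. The field names below are illustrative, not the Geo-MPEG-7 schema.

```python
from dataclasses import dataclass, field

# Minimal sketch of the three element groups the framework describes:
# sound sources, receivers, and the geographical environment.

@dataclass
class SoundSource:
    label: str                 # e.g., "traffic", "birdsong"
    position: tuple            # (lon, lat, height)
    spectrum_file: str         # path to recorded audio / descriptor

@dataclass
class Receiver:
    kind: str                  # "human" or "microphone"
    position: tuple
    hearing_profile: str = "normal"

@dataclass
class GeoEnvironment:
    temperature_c: float
    humidity_pct: float
    land_cover: str            # e.g., "campus road", "forest"

@dataclass
class GeoSoundscape:
    sources: list[SoundSource] = field(default_factory=list)
    receivers: list[Receiver] = field(default_factory=list)
    environment: GeoEnvironment | None = None

# Toy usage loosely mirroring the campus-road example (coordinates invented).
scene = GeoSoundscape(
    sources=[SoundSource("traffic", (118.78, 32.06, 1.5), "clip_001.wav")],
    receivers=[Receiver("microphone", (118.78, 32.06, 1.2))],
    environment=GeoEnvironment(22.0, 65.0, "campus road"),
)
print(len(scene.sources), scene.environment.land_cover)
```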

22 pages, 10897 KiB  
Article
Array Three-Dimensional SAR Imaging via Composite Low-Rank and Sparse Prior
by Zhiliang Yang, Yangyang Wang, Chudi Zhang, Xu Zhan, Guohao Sun, Yuxuan Liu and Yuru Mao
Remote Sens. 2025, 17(2), 321; https://doi.org/10.3390/rs17020321 - 17 Jan 2025
Cited by 4 | Viewed by 821
Abstract
Array three-dimensional (3D) synthetic aperture radar (SAR) imaging has been used for 3D modeling of urban buildings and diagnosis of target scattering characteristics, and represents one of the significant directions in SAR development in recent years. However, sparse-driven 3D imaging methods usually capture only the sparse features of the imaging scene, which can result in the loss of the structural information of the target and cause bias effects, affecting imaging quality. To address this issue, we propose a novel array 3D SAR imaging method based on a composite sparse and low-rank prior (SLRP), which can achieve high-quality imaging even with limited observation data. Firstly, an imaging optimization model based on the composite SLRP is established, which captures both sparse and low-rank features simultaneously by combining non-convex regularization functions and an improved nuclear norm (INN), reducing bias effects during the imaging process and improving imaging accuracy. Then, a framework that integrates variable splitting and alternating minimization (VSAM) is presented to solve the imaging optimization problem, which is suitable for high-dimensional imaging scenes. Finally, the performance of the method is validated through extensive simulation and real-data experiments. The results indicate that the proposed method can significantly improve imaging quality with limited observational data.
(This article belongs to the Special Issue SAR Images Processing and Analysis (2nd Edition))
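The composite prior can be illustrated with a classic low-rank-plus-sparse splitting: singular-value thresholding for the low-rank part and soft thresholding for the sparse part. This convex RPCA-style scheme is a stand-in for the paper's VSAM solver with non-convex regularizers and improved nuclear norm.

```python
import numpy as np

def svt(x: np.ndarray, tau: float) -> np.ndarray:
    """Singular value thresholding: proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vt

def soft(x: np.ndarray, tau: float) -> np.ndarray:
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(y: np.ndarray, lam: float = 0.1, n_iter: int = 50):
    """Alternate proximal steps to split an observed scene Y into a
    low-rank component L and a sparse component S, i.e., block-coordinate
    minimization of 0.5*||Y - L - S||_F^2 + ||L||_* + lam*||S||_1."""
    low, sparse = np.zeros_like(y), np.zeros_like(y)
    for _ in range(n_iter):
        low = svt(y - sparse, tau=1.0)
        sparse = soft(y - low, tau=lam)
    return low, sparse

# Toy usage: rank-2 background plus a few strong point scatterers.
rng = np.random.default_rng(0)
background = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 40))
scatterers = np.zeros((40, 40))
scatterers[10, 10] = scatterers[25, 30] = 8.0
low, sparse = low_rank_plus_sparse(background + scatterers, lam=0.3)
print(np.linalg.matrix_rank(np.round(low, 6)), np.count_nonzero(sparse))
```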

30 pages, 17007 KiB  
Article
Analysis and Selection Method for Radar Echo Features in Challenging Scenarios
by Yunlong Dong, Xiao Luo, Hao Ding, Ningbo Liu and Zheng Cao
Remote Sens. 2025, 17(1), 129; https://doi.org/10.3390/rs17010129 - 2 Jan 2025
Viewed by 715
Abstract
In addressing the issue of weak target detection at sea, most existing feature detection methods are designed for scenarios with low sea states and small grazing angles. Under high sea states and large grazing angles, variations in scattering mechanisms lead to changes in feature characteristics, resulting in performance degradation when these methods are applied directly due to scene mismatch. To address this, this paper employs four quantitative metrics—mean feature value, coefficient of variation, Bhattacharyya distance, and Spearman correlation coefficient—to analyze the centrality, variability, separability, and correlations of nine features in the time, frequency, and time-frequency domains under varying sea states and grazing angles. The study reveals that, with increasing sea state and changing grazing angles, the separability of time-frequency features, especially the time-frequency ridge accumulation, declines more gradually than other features, and feature correlations generally weaken. These findings provide a reference for joint feature detection in complex scenarios. To optimize feature application, the Spearman correlation coefficient matrix was transformed into a generalized distance matrix, and spectral clustering was used to group features with strong correlations. Feature selection was then performed from the clusters based on mean feature value, coefficient of variation, and Bhattacharyya distance, yielding an optimal feature set for the current scenario. Validation on the SDRDSP dataset under sea states 4–5 showed that the proposed method achieved an average detection probability 10.64% higher than existing methods. Further validation on the Yantai angle airborne test dataset, with grazing angles ranging from 62° to 82°, showed an average detection probability increase of 10.07% over existing methods.
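The selection procedure maps directly onto standard tools: build a Spearman correlation matrix, cluster features on its magnitude, and keep the best-scoring feature per cluster. A hedged sketch, with a generic separability score standing in for the paper's Bhattacharyya-based ranking:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import SpectralClustering

def select_features(feature_matrix: np.ndarray, scores: np.ndarray, n_groups: int = 3):
    """Group correlated features by spectral clustering on a Spearman-derived
    affinity, then keep the best-scoring feature per group. `scores` stands
    in for a separability metric such as the Bhattacharyya distance."""
    rho, _ = spearmanr(feature_matrix)          # (F, F) rank correlations
    affinity = np.abs(rho)                      # strong |rho| -> same cluster
    labels = SpectralClustering(n_clusters=n_groups, affinity='precomputed',
                                random_state=0).fit_predict(affinity)
    selected = []
    for g in range(n_groups):
        members = np.flatnonzero(labels == g)
        selected.append(int(members[np.argmax(scores[members])]))
    return sorted(selected)

# Toy usage: nine features forming three mutually correlated triplets.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 3))
X = np.column_stack([base[:, i // 3] + 0.1 * rng.normal(size=500) for i in range(9)])
bhatt = rng.uniform(0.2, 1.0, size=9)          # stand-in separability scores
print(select_features(X, bhatt))               # one feature index per cluster
```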

30 pages, 7887 KiB  
Article
A High-Resolution Spotlight Imaging Algorithm via Modified Second-Order Space-Variant Wavefront Curvature Correction for MEO/HM-BiSAR
by Hang Ren, Zheng Lu, Gaopeng Li, Yun Zhang, Xueying Yang, Yalin Guo, Long Li, Xin Qi, Qinglong Hua, Chang Ding, Huilin Mu and Yong Du
Remote Sens. 2024, 16(24), 4768; https://doi.org/10.3390/rs16244768 - 20 Dec 2024
Viewed by 768
Abstract
A bistatic synthetic aperture radar (BiSAR) system with a Medium-Earth-Orbit (MEO) SAR transmitter and high-maneuvering receiver (MEO/HM-BiSAR) can achieve a wide swath and high resolution. However, due to the complex orbit characteristics and the nonlinear trajectory of the receiver, MEO/HM-BiSAR high-resolution imaging faces two major challenges. First, the complex geometric configuration of the BiSAR platforms is difficult to model accurately, and ‘non-stop-go’ effects must also be considered. Second, non-negligible wavefront curvature caused by the nonlinear trajectories introduces residual phase errors. Existing spaceborne BiSAR imaging algorithms often suffer from image defocusing when applied to MEO/HM-BiSAR. To address these problems, a novel high-resolution imaging algorithm named MSSWCC (Modified Second-Order Space-Variant Wavefront Curvature Correction) is proposed. First, a high-precision range model is established based on an analysis of MEO SAR's orbital characteristics and the receiver's curved trajectory. Based on the echo model, the wavefront curvature error is then expanded in a two-dimensional Taylor series to obtain analytical expressions for the high-order phase errors. By analyzing these phase errors in the wavenumber domain, compensation functions can be designed. The MSSWCC algorithm not only corrects geometric distortion through reverse projection but also compensates for the second-order residual space-variant phase errors via the analytical expressions for the two-dimensional phase errors, achieving high-resolution imaging in large scenes with a low computational load. Simulations and real experiments validate the high-resolution imaging capabilities of the proposed MSSWCC algorithm in MEO/HM-BiSAR.
(This article belongs to the Special Issue Advanced HRWS Spaceborne SAR: System Design and Signal Processing)

24 pages, 46652 KiB  
Article
Hyperspectral Reconstruction Method Based on Global Gradient Information and Local Low-Rank Priors
by Chipeng Cao, Jie Li, Pan Wang, Weiqiang Jin, Runrun Zou and Chun Qi
Remote Sens. 2024, 16(24), 4759; https://doi.org/10.3390/rs16244759 - 20 Dec 2024
Cited by 1 | Viewed by 1242
Abstract
Hyperspectral compressed imaging is a novel imaging detection technology based on compressed sensing theory that can quickly acquire spectral information of terrestrial objects in a single exposure. It combines reconstruction algorithms to recover hyperspectral data from low-dimensional measurement images. However, hyperspectral images from different scenes often exhibit high-frequency data sparsity, and existing deep reconstruction algorithms struggle to establish accurate mapping models, leading to issues with detail loss in the reconstruction results. To address this issue, we propose a hyperspectral reconstruction method based on global gradient information and local low-rank priors. First, to improve the prior model's efficiency in utilizing information of different frequencies, we design a gradient sampling strategy and training framework based on decision trees, leveraging changes in the loss function gradient information to enhance the model's predictive capability for data of varying frequencies. Second, utilizing the local low-rank prior characteristics of the representative coefficient matrix, we develop a sparse sensing denoising module to effectively improve the local smoothness of point predictions. Finally, by establishing a regularization term for the reconstruction process based on the semantic similarity between the denoised results and prior spectral data, we ensure spatial consistency and spectral fidelity in the reconstruction results. Experimental results indicate that the proposed method achieves better detail recovery across different scenes, demonstrates improved generalization performance for reconstructing information of various frequencies, and yields higher reconstruction quality.
